What are Scripted Shaders?

Shaders are an amazingly powerful tool for manipulating the visual appearance of objects in games. There are many different kinds of shaders, but I chose to focus on vertex and pixel shaders. Vertex shaders allow developers to move and shape objects, creating effects such as plants swaying in the wind or pulsating organic materials. Pixel shaders change the color of individual pixels, which can be used to make things look shiny, smoldering hot or illuminated in certain ways. The possibilities for effects that can be created with shaders really do feel endless.


One difficulty, however, is that shaders are written in a programming language called HLSL. If you're not already somewhat familiar with coding this can be a roadblock that keeps artists from implementing their ideas. I wanted to see if I could make it easier for other disciplines to create shaders by merging them with a node-based script system. And maybe I could also speed up the process of making them by bypassing the need to write HLSL code by hand.


This is what I decided to do for my Specialization course, which is the final course (apart from the group projects) at TGA. It was done over five weeks at half-time.

Work process and challenges

Before starting out, I asked one of my team's technical artists to show me how they worked with shaders in Maya. This gave me a basic idea and some inspiration as to what scripted shaders might look like.


The first, and one of the most difficult, challenges was to figure out how the script system should communicate with the GPU. I started by considering only the vertex shaders and had a look at our current default vertex shader.

First I considered the returnValue.myPosition variable, which is the final position on the screen that the vertex is transformed to. The variable is set to vertexProjectionPos, which further up in the code is calculated as the camera's projection matrix multiplied with the variable vertexViewPos. That variable is in turn the result of another matrix-vector multiplication, involving the camera's world transform. And so it goes on, all in all transforming the input vertex's position to projection space via the matrices toWorld, toCamera and toProjection. These matrices were already being sent from the CPU, meaning I knew I had access to them on the C++ side. So I figured it should be possible to access these matrices via nodes, merge them into one matrix and send that matrix to the GPU, resulting in the shader having something like


                  returnValue.myPosition = M * input.myPosition.xyzw,


where M is the matrix made up of the different transforms decided by the one writing the script, giving them control over which matrices go into the final result. Having figured this out, I also thought that the same logic should apply to the other return variables: normals, tangents and bi-normals all use a worldRotation matrix, the UV gets an identity matrix and the worldPosition uses the toWorld matrix. My new vertex shader and the script nodes working with it turned out like this:

The default vertex shader

The nodes for the vertex shader:

The c++ for the vertex shader node:

Sidenote:

It was a bit tricky to add a new input/output type to the node system, so in order to save time I decided to only work with 4x4 matrices. Rows 7-25 mainly cast these matrices to their proper dimensions; the important parts are the ones marked in red. The variables other than the position variable are handled in essentially the same way, just with some casting in between. Using the right dimensions from the beginning would have meant that I could send less data to the shaders (especially the 4x4 uv-matrix, which is cast down to a 2x2 matrix). I could have cast it after getting it from the nodes and before sending it to the shader, but for consistency's sake I decided to keep it as a 4x4 matrix and didn't get around to changing it.
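To make the flow a bit more concrete, here is a rough C++ sketch of the two pieces described above: first composing the final matrix M from the chosen transforms, then the kind of per-instance block that might be handed to the vertex shader. The Matrix4 stand-in and all the names (Multiply, BuildVertexTransform, VSScriptData and its members) are made up for illustration and are not the engine's actual code.

    #include <cstddef>

    // Minimal stand-in for the engine's 4x4 matrix type (hypothetical).
    struct Matrix4
    {
        float m[4][4];
    };

    Matrix4 Multiply(const Matrix4& a, const Matrix4& b)
    {
        Matrix4 result{};
        for (std::size_t row = 0; row < 4; ++row)
            for (std::size_t col = 0; col < 4; ++col)
                for (std::size_t k = 0; k < 4; ++k)
                    result.m[row][col] += a.m[row][k] * b.m[k][col];
        return result;
    }

    // The node graph decides which transforms end up in M; by default it simply
    // recreates the usual toWorld -> toCamera -> toProjection chain, so the
    // shader can do returnValue.myPosition = M * input.myPosition.
    Matrix4 BuildVertexTransform(const Matrix4& toWorld,
                                 const Matrix4& toCamera,
                                 const Matrix4& toProjection)
    {
        // Order assumes column-vector style multiplication; it would be flipped
        // if the engine multiplies row vectors from the left.
        return Multiply(toProjection, Multiply(toCamera, toWorld));
    }

    // Hypothetical sketch of the per-instance data handed to the vertex shader.
    // Everything is kept as 4x4 for consistency, even the uv transform, which is
    // cast down to 2x2 on the GPU side.
    struct VSScriptData
    {
        Matrix4 positionTransform; // M, built from the node graph
        Matrix4 rotationTransform; // used for normals, tangents and bi-normals
        Matrix4 uvTransform;       // only the upper-left 2x2 is actually needed
        Matrix4 worldTransform;    // used for the worldPosition output
    };

The fixed toWorld -> toCamera -> toProjection chain above just recreates the default behaviour; in the node graph the script writer is free to swap in other transforms.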

The new data sent to the GPU:

Sidenote on the scripting system:

During our course in scripting we were given a finished node-based scripting system made with ImGui. Two programmers in our project group merged it with our game engine, and we then proceeded to make new node types and new input/output types and fiddled somewhat with the underlying structure and implementation.

The pixel shaders were a bit trickier to translate to nodes than the vertex shaders. But as with the vertex shaders, I started by looking at our default pixel shader. I figured that most of what's going on is that we take samples from our three default textures: the albedo, normal and material textures. I spent some time thinking about how I could sample textures outside the GPU, but ultimately I guessed that would be quite difficult, since the input for every sample is every vertex's uv-coordinates. That is data that would be very expensive to gather on the CPU and send to the GPU every frame. So I decided that the actual sampling had to take place on the GPU. The trouble was that I still thought a "Sample Image" node would be neat.


With this in mind, I recalled that the node for my scripted vertex shader was at first called "Set Model Instance VSScript Transforms", before one of my teachers advised me to simply call it "Vertex Shader". This got me thinking that when it comes to tools, nothing really has to do exactly what it sounds like it's doing, as long as it has the expected result. So after a while I thought that maybe I could have a "Sample Image" node after all, but what's really going on is that the input (the texture to sample) is the index of the texture and the output is an enum deciding whether we want the rgba, r, g, b or a value of the sample. I could then gather this data and send it to the shaders, where the shader checks, for each data package, which texture should be sampled and what data from the sample is of interest.
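A minimal sketch of the idea, with names I've made up for illustration (this is not the project's actual node code): the "Sample Image" node never touches the texture itself, it just packs which texture to sample and which channel to keep into a Vector2 that the pixel shader later interprets.

    // Hypothetical sketch of what the "Sample Image" node actually outputs.
    enum class SampleChannel { RGBA = 0, R = 1, G = 2, B = 3, A = 4 };

    struct Vector2
    {
        float x;
        float y;
    };

    // Instead of a colour, the node outputs "sample texture #index, keep this channel".
    Vector2 MakeSampleInstruction(int textureIndex, SampleChannel channel)
    {
        return Vector2{ static_cast<float>(textureIndex),
                        static_cast<float>(channel) };
    }

On the GPU side the pixel shader then does the real work: it reads the texture index and channel from the instruction, samples the corresponding texture with the uv-coordinates and keeps the requested part of the sample.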


After I got this working, I also added a variable in the data being sent to the shaders that decided whether we should sample at all or just set the variable to a raw value. For instance, sometimes you might just want to set the albedo to be completely red, regardless of what textures are available. If you're interested in how all the data for the pixel shaders was handled and what the flow from node to HLSL looked like, click here.
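Continuing the sketch above (again with hypothetical names, and reusing the Vector2 type from the previous snippet, not the project's actual layout), the per-output package could carry that extra switch alongside the sample instruction:

    struct Vector4
    {
        float x, y, z, w;
    };

    // Hypothetical per-output package: either sample a texture or use a constant.
    struct PixelOutputSource
    {
        bool    useRawValue;       // true: ignore the textures, use rawValue as-is
        Vector4 rawValue;          // e.g. a completely red albedo: (1, 0, 0, 1)
        Vector2 sampleInstruction; // texture index + channel, as sketched above
    };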

The default pixel shader

What one of the sampling functions looks like. The others are similar, only they pick different texture channels according to our texture packing.

Limitations and Improvements - Reflections on Tools

The Vertex Shaders

Once I had all the basic functionality I set out to implement, I started to play around with these scripted shaders (see the results here!). That's when I realised most of the limitations of what I had created. For instance, I had an idea that I wanted to make the model bubble in a weird way. I thought that should be possible by moving each vertex along its normal's direction with a random timing. But then I realised that with what I had created, I could only manipulate every vertex in the same way, since I only send in one big matrix for deciding the vertex position and I don't have access to the individual vertices in the node system. This meant that I could make the model grow, shrink or stretch, but it would not be possible to create the "boiling" effect I had in mind.
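To make the limitation concrete: the bubbling effect needs per-vertex data (the vertex's normal and some per-vertex timing), so it cannot be expressed as one constant matrix applied to every vertex. A hypothetical illustration of what it would need, written as plain C++ for readability rather than HLSL:

    #include <cmath>

    struct Vector3
    {
        float x, y, z;
    };

    // Each vertex moves along its own normal with its own timing offset, so the
    // displacement differs per vertex and cannot be baked into a single 4x4
    // matrix sent from the CPU.
    Vector3 Bubble(const Vector3& position, const Vector3& normal,
                   float perVertexSeed, float time)
    {
        const float amount = std::sin(time * 3.0f + perVertexSeed * 6.2832f) * 0.05f;
        return Vector3{ position.x + normal.x * amount,
                        position.y + normal.y * amount,
                        position.z + normal.z * amount };
    }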


Another thing I have been thinking about afterwards is the transformation of vertices' world positions to post-projection space. I don't know how often the user actually would like to change the toCamera -> toProjection part. There are probably some very cool effects that can be done by meddling with the projection that I'm not thinking of right now. But to make the tool easier to use, I might want to replace the part with the "Three Matrix Mul" node with an "Object Properties" node where the user can set scale, rotation and so on. I would then hide all the matrix mathematics in the node's C++ code, because I'm not sure that it is of interest to the user. It might be; I guess at this point I would just have to give the tool to someone else and get their feedback. It might also be a very individual thing: some artists probably want more control over the matrices and some don't.



The Pixel Shaders

Regarding the scripted pixel shaders, I realised after some testing that my original thought, that nodes don't have to do exactly what they appear to do as long as they give the expected result, didn't really hold up. I think the idea is still good, but when it comes to my "Sample Image" node the user will probably expect that after that node you have the value of the sampled texture, and that you should be able to change that value. For instance, just looking at the nodes, it appears that you should be able to draw a link from "Sample Image" to a Divide node and divide the sampled value by two to get the sampled value but "dimmed down". This is not possible with these nodes, since the value passed from the sample node is actually a Vector2 holding information about which texture to sample from and what data to gather, not the actual result of the sample.


Speaking of the Vector2 being passed from the Sample Image node to the Pixel Shader node, I have also been thinking about hiding the input type of the Pixel Shader node. When the user creates that node, before the sampled input pins have a link connected, a default Vector2 input appears. This can be confusing (why should the Albedo Sampled pin take a Vector2?). Essentially, what this all means is that the complexity and shortcuts I tried to hide from the user are leaking through the visual interface. Those problems are probably what I would have liked to solve first if I had more time. Once the basic functionality is in place, making the tool easy and intuitive to use should be the first priority.


The fact that the user can't play with the value from the Sample node bugs me a little bit. What the sample part of the scripted pixel shader became is really a big "pick the option you want" selection, instead of letting you create your own input. And I'm unsure how useful it is to have the option to, for instance, make the metalness sample the albedo texture's g-channel. Again, maybe there are cool effects that can be done with this, but regarding the sampling I would have liked to open up the creativity of the tool more.



Other thoughts

Finally, something else I would have looked into with more time is code generation. When I started I guessed that code generation might be outside this project's scope, and I also wanted to try out my idea of one vertex/pixel shader handling all script inputs. But with more time, code generation might have been the answer to both the problem of not being able to manipulate each vertex differently and the sampling problem. If I could somehow design shaders with nodes and then generate HLSL code based on that, I feel like it would take these scripted shaders to a new level, much like what Unreal or Maya do with their shader systems.

Summary

  • Created support for Matrix4 in the node editor and implemented some new nodes to support Matrix4-mathematics
  • Set up structure for CPU to GPU communication of vertex/pixel shader data 
  • Made shader related nodes (Vertex Shader, Pixel Shader, Get Textures, Sample Image)
  • Figured out how the data should be stored and packed on the CPU via model instances
  • Wrote the new, more general, vertex/pixel shaders that handle the node data