I recently used a few extra days off to explore new tools, specifically Substance Designer (4). In this post, I’ll talk about my first-time experiences with it and contrast them directly with my experiences with the Quixel Suite. My end target was Marmoset Toolbag 2, if only for general testing.
For the purposes of this test, I authored a simple near-future RPG round. It’s not an overly complex (or, according to Nirrti, common-sense-driven) model, but it does hit on a few points that will inform me about the toolset’s strengths and weaknesses, specifically those of the built-in baker.
- Custom cages – the inner spiraling on the low poly is a complex concave shape that is more or less the worst-case scenario for overlapping from simple push cages. I had to make a custom cage to get any bake to work. I also did this to see how well auto-cages would handle nasty geo like this (hint: not very well).
- Traditional high poly – I did the high poly in Max and exported it to ZBrush for some sculpting, but nothing dramatic enough to force a retopology. The damaged variant I sculpted will be executed entirely via texture work. I’m not an expert at ZBrush, but it’ll provide a nice alternate texture.
- Floating bake-only highpoly geo – the front face has some floating inlays that need special consideration in bakes to avoid improper shadow and height map data.
- Multiple Subobjects – the fins are a separate object on the same UV space, and a proper bake must be exploded to avoid errant overlaps and bad AO shadows, but ideally combined to the same bitmap.
- Low poly cylinder/wave testing – I considered redoing my geo a bit to increase the quality of the bake along the surface of the cylinder by avoiding waviness, but decided to keep it low to see how SD’s baker would handle it.
- Custom decals – since being able to work with text and shapes that can’t be easily generated is rather important, I created some decals to work with.
With the model done and ready, I made a ‘baseline’ to compare to. Since I’ve got about 8 months of the Quixel Suite under my belt, I decided to push through the entire pipeline using that toolset. I used xNormal as the baking tool, and made short work of the model’s texture. I grabbed a few material presets and did some custom dynamask editing to clean up seam overflow and tweak the wear to look less ‘dDo-y’, and imported my custom decals from the PSD I was using to store/composite my bakes. It took me an hour or two in one sitting, with most of the lost time coming from a mid-workflow crash that ate about 30 minutes of unsaved work. Such is life with Quixel Suite (1.8).
As an aside, I duplicated the project and re-imported my bakes with the ZBrush damage model, tweaked my existing settings for more wear and tear, and re-exported the maps. That took less than half an hour, and produced some really nice results with little extra work.
With that completed, I set out to replicate my work with Substance Designer.
Substance Designer Pass
After following the introductory tutorials and analyzing a few sample projects, I set out to texture my RPG. Please keep in mind that this is very much a first-impressions discussion. I’m sure it does not reflect the total power of the tool in the hands of someone with experience; it’s more of a ‘this is what I was able to do on a first try’ breakdown. There are things I likely did wrong, and I’m sure that in a couple of months I’ll look back at this and roll my eyes at how obvious a simpler path would have been.
One of the appeals of SD is that it has integrated bakers for AO, normals, gradients, etc., so in theory you can sidestep baking in the modeling package or xNormal. The bakers are fast, but I found the generated AO a little lacking for my taste. It didn’t handle the concave section of the RPG very well; the AO was too subdued for such a deep cavity. Since AO is an important component of procedural textures, I was hoping it would yield better results. I also found that it had a harder time negotiating smoothing errors than xNormal does, and the default settings didn’t leave as much padding as I’m comfortable with. Another caveat was that because the body and fins were baked separately, they got separate outputs. I couldn’t find any setting that would combine all sub-objects into the same output, so I ended up creating a cheap subobject mask in Photoshop and combining my maps in the node phase of the texture process.
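That manual composite boils down to a masked lerp between the two bake outputs. A minimal numpy sketch of the idea (the function name and toy arrays are mine, not anything from SD):

```python
import numpy as np

def combine_bakes(body_map, fin_map, fin_mask):
    """Composite two separately baked maps into one texture.

    fin_mask is 1.0 where the fin's UV islands live and 0.0 elsewhere,
    standing in for the hand-made subobject mask described above.
    """
    return fin_map * fin_mask + body_map * (1.0 - fin_mask)

# toy 2x2 example: body bake is mid-grey, fin bake is white,
# and the mask covers only the top-left texel
body = np.full((2, 2), 0.5)
fin = np.ones((2, 2))
mask = np.array([[1.0, 0.0], [0.0, 0.0]])
combined = combine_bakes(body, fin, mask)
```

The same masked blend works for any pair of maps (AO, curvature, position) as long as the mask lines up with the UV islands.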
I couldn’t find a solid way to incorporate floating geo (the front panel indents) in the SD baker. There are a lot of settings and checkboxes to go over, and I have no doubt there is a way to do it, but I wasn’t able to find it offhand. I’m still partial to 3ds Max as an AO baker only because I really know how to get a good bake, even if it does take a good couple of hours for an excellent result. If it’s not worth an evening of baking, though, xNormal still has better results and more finely grained options to tweak than the SD AO baker. All that said, it fared quite well overall, and I imagine a good mesh with a good clean cage is not going to have a single problem using SD as its sole bake source. I also really like SD’s position (xyz gradient) map, since it’s the first time I’ve had that as a ‘one click’ bake option.
I decided to start from scratch for the actual substances. There were reference substances in the example files that I could have copied, but I found it very rewarding and more informative to create my own.
The tutorial series guided me in the direction of separating out subobject materials into separate substance files and combining them in the last pass, using my ID map as a multi-switch masking layer. This is very similar to the dDo workflow, where smart materials are applied to the entire texture and are masked off after all is said and done.
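The ID-map masking step amounts to a per-texel switch: each texel looks up its ID and takes that material’s output. A rough numpy sketch, assuming an integer ID bake (the function name and toy data are mine, not SD’s):

```python
import numpy as np

def multi_switch(id_map, materials):
    """Mask each material's output by its ID index and layer the results,
    roughly what an ID-map-driven multi-switch combine does in the last pass."""
    out = np.zeros_like(materials[0])
    for idx, mat in enumerate(materials):
        out = np.where(id_map == idx, mat, out)
    return out

# toy example: two 'materials' keyed by a 2x2 ID bake
ids = np.array([[0, 1], [1, 0]])
paint = np.full((2, 2), 0.2)   # stand-in for the paint substance's output
metal = np.full((2, 2), 0.9)   # stand-in for the metal substance's output
final = multi_switch(ids, [paint, metal])
```

In practice the ID bake is a color map rather than integer indices, but the selection logic is the same once each color is matched to an index.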
One thing worth mentioning about the above shot – I was still getting the hang of the I/O system, so it is much messier than it needs to be. Ideally, you should be able to make one connection, but if your sub-substance is missing or has improperly linked nodes, you have to drag some of the links to the inputs manually. Now that I know how to do it right, I shouldn’t have this problem in the future. That’s kind of a theme with SD; you can brute-force a bad workflow because it lets you, but that almost defeats the point of using the tool in the first place.
I used the PBR workflow, and with a realtime previewer I was able to get nice-looking material definition by pulling a few sliders. If I were going for accuracy, there are tons of PBR reference swatch sheets for known surface types floating around.
My end nodegraphs were really messy and probably very inefficient. In the above image, you can see my materials for paint, plastic, metal, copper, gold, and glow. Beyond getting the very basics from a reference, these were mostly created with a ‘drop, link, and look’ workflow: drop in a node, link it to something, see what it did to the pipeline, and tweak things until it looks okay. It was all pretty fun, and definitely the meat of the program. Even as I went from one substance to the next, I found myself getting a little better and smarter about usage each time. I can easily see myself building a whole library of base substances, both hand-made and downloaded, thus cutting down on the time it takes to make these things.
I’m pretty familiar with nodes, but I don’t think I’ve really used them for pure image manipulation before. It’s not an impossible hurdle to jump, but it does require a little rethinking of how to approach the path from A to B. The closest Photoshop analog I can think of is using adjustment layers. Just imagine that all the functions Photoshop has – like invert, noise, blur, and such – were available as adjustment layers that you never collapse. That’s kind of similar to the SD texturing experience. Most of the ‘texture’ comes from either a bitmap (normalmaps, baked AO, decals, etc.) or a noise layer of some sort. If you look at the above graphs, you can see that I started with the bitmaps on the left, SD-generated maps at the top center, and the end results on the right, with everything in between being the process of going from A to B. (Also, the furthest-left bitmap, the white one with the black hole at top center, is the masking map for combining the fin and body bakes. That’s why there are two of each of my bitmaps.)
The basic path for all of the edge wear layers is something like this: curvature from normal + blur + gradient adjustment to brighten/harden edges + noise + gradient to adjust strength = mask for dirt or edge wear. This is essentially what the dynamask editor in dDo does behind the scenes, but instead of using (and/or tweaking) Quixel tested and approved results, you’re essentially making your own, for better or for worse.
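In numpy terms, that chain might look something like the sketch below. The helper names, blur kernel, and threshold values are my own stand-ins for SD’s nodes, not anything Quixel or Allegorithmic ships:

```python
import numpy as np

def remap(img, lo, hi):
    """Gradient/levels-style adjustment: stretch [lo, hi] to [0, 1] and clamp."""
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def box_blur(img):
    """Crude 3x3 box blur with edge clamping, standing in for SD's blur node."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def edge_wear_mask(curvature, noise, strength=1.0):
    """curvature -> blur -> harden edges -> break up with noise -> set strength."""
    edges = remap(box_blur(curvature), 0.6, 0.9)  # brighten/harden the convex edges
    worn = edges * (0.5 + 0.5 * noise)            # noise breaks up the uniform wear
    return np.clip(worn * strength, 0.0, 1.0)     # final gradient/strength adjustment

# uniform high curvature + uniform noise -> uniform half-strength wear
wear = edge_wear_mask(np.ones((4, 4)), np.ones((4, 4)), strength=0.5)
```

The output is a greyscale mask you would feed into a blend node to drive where the bare-metal or dirt layer shows through.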
SD node work isn’t without its gripes.
- The biggest annoyance was having to be conscious of whether my last node was black/white or color. It’s not like SD cares too much, and it will give you converter nodes when it can, but it creates a lot of mess if you don’t at least make an effort to keep things clean. Some nodes, like blur, require a specific input of color or B/W, and you have to swap out the node if you grab the wrong one. You can see from the above graphs that it’s something I grappled with and ended up flip-flopping on when I wasn’t paying attention.
- The normalmap blend node is very sensitive; even the slightest variations in the heightmap input can become craters on top of your normalmap. It has an intensity slider that goes from 0–10, and I found myself sticking to 0.1–0.9; I can’t imagine what ten times the strength would do, or more accurately, when it would be needed. It’s fine for small things like noise and dirt overlay, but down the road, if I intended to do heavy normalmap adjustments, I would definitely end up using nDo or even CrazyBump to get better results. (Provided the texture is using a bitmap workflow.)
- Some of the adjustment nodes at times felt a little limiting. The default “Blend” node only has some adjustments (multiply, linear add, masked blend from alpha, etc.); other functions like overlay, soft light, and hard light have their own nodes that you have to grab from the toolbox for some reason.
- I found myself relying heavily on gradient nodes for most of my adjustments, since I found them a little more powerful and easier to use than the adjustment node, especially if you want to posterize, harden, or invert your edges. This could be due to my preference for gradient maps in PS over the Levels editor.
- Setting inheritance got a little messy for me near the end. SD allows you to change the output resolution (and other things) of the maps on the fly, and will recompute them as needed at any point in the pipeline. If you know what you’re doing, you can probably really cut down on compute time by making size sacrifices at various points in the pipeline, and it’s nice that you can output the same texture at anything from 256×256 to 4096×4096 by just pulling a slider, but one of my submaterials got confused about its intended size at one point and I ended up on a wild goose chase to find out where it happened.
- The toolbox has a lot of complex nodes that I could spend hours in documentation learning how to use. I imagine I could have saved myself some time and gotten some edge wear masks prebuilt from a single node, but that’ll take more research and time.
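To illustrate the normalmap blend sensitivity mentioned above: a heightmap becomes a normalmap via its slopes, so any height wiggle is scaled straight through by the intensity multiplier before it tilts the normal. A rough finite-difference sketch of that relationship (my own approximation, not SD’s actual algorithm):

```python
import numpy as np

def height_to_normal(height, intensity=0.5):
    """Derive a tangent-space normal map from a heightmap via finite differences.

    The slope (and thus the normal tilt) scales directly with `intensity`,
    which is why tiny heightmap noise can turn into craters at high values.
    """
    dx = np.gradient(height, axis=1) * intensity
    dy = np.gradient(height, axis=0) * intensity
    length = np.sqrt(dx**2 + dy**2 + 1.0)   # normalize (-dx, -dy, 1)
    n = np.stack([-dx / length, -dy / length, 1.0 / length], axis=-1)
    return n * 0.5 + 0.5                     # pack [-1, 1] into [0, 1] texture range

# a perfectly flat heightmap packs to the familiar (0.5, 0.5, 1.0) normalmap blue
flat = height_to_normal(np.zeros((3, 3)))
```

With a slider that runs to 10, a one-texel height step produces a slope ten times steeper than the geometry implies, which matches the crater effect described in the bullet above.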
When all was said and done, my .sbsar file was huge. I’m not sure if it was grabbing all of the baked maps and compiling all of the generated ones or what, but it came to 200MB+ and took a solid 5 minutes to generate. On top of that, it crashed Marmoset when I tried to link it. I’m pretty sure this was just me not knowing how to do it right. Shame on me for thinking I could just do it by clicking a simple button! I’m going to hold off on making any snap judgments on the export process, since one of the greatest advertised strengths of the package is integration with realtime engines. It’s something I hope to try, but it appears to require more SD knowledge to leverage than what I currently have. I took the more traditional bitmap route this time:
I did get a few anomalies when I loaded up my maps in Marmoset – the roughness seemed a little off when compared to the SD builtin viewer, particularly on the dull grey metal. I checked sRGB and linear color space settings, but those weren’t the problem. I’m hoping it’s just a checkbox I missed or the difference between IBL skyboxes – it kind of worries me that the interactivity of live updating on the previewer is undermined if it’s not going to match the final result.
I didn’t get a chance to do everything I wanted to do, but I do have plans for what I want to try next time I use the program, for sure.
- I want to give my material subnodes inputs for things like normals, base metalness/roughness and color, turning them into a black box ‘drop and get good results’ type process, exposing the base values as parameters down the line.
- Importing my damaged variant and adjusting values to see how quickly I can come up with nice results. If I don’t do what I just suggested by abstracting my bitmaps from my materials first, that could take ages.
- Figuring out the proper export procedure and getting the sbsar + plugin to generate the textures.
- Testing UE4 integration.
- Trying out Substance Painter. I think that warrants its own blog post though.
- Make procedural bricks or a tiling floor pattern. Everybody seems to do that with this program at some point. When I first saw SD stuff, I thought that was all it did.
Substance Designer is a complex tool. The difficulty of your SD experience directly relates to how well you know how to use it; the learning curve is quite dramatic. This is a program that requires you to watch tutorials and look at examples before you start; the pool is just too deep to jump into unprepared. Contrast this with dDo, where (if you know Photoshop) it’s 0 to attractively textured in a few minutes, even on the first try. dDo’s simplicity is its greatest strength and its heaviest weakness – it comes with a great library of prebuilt, high-quality materials, but you need to dig into the nuts and bolts of the application to make your texture look unique. That ranges from building your own materials to tweaking and painting over custom dynamasks to create a texture that others don’t immediately recognize as auto-generated. It’s at this point that dDo goes from intuitive and easy to clunky and at times frustrating to use.
SD, on the other hand, assumes you are going to be doing the nitty-gritty nuts-and-bolts work in the first place, and doesn’t really do anything for you. It has a few examples to play with, but you really have to build your own materials and learn how it works to get the most out of it. Substance Designer isn’t just a texture suite; it’s a heavy automation tool that takes time to master, but it is worth the journey. Seeing what others have done with it and looking at their example products, I can see that it has much deeper potential than dDo, but that comes at the price of time to learn. Simply using SD for your texture doesn’t guarantee it’s going to look good when you’re done.
I was a little disappointed that I was unable to implement the plugin workflow, as I believe that is where it starts to break away from other packages. The idea of passing low-level variables that can make dramatic changes to the texture on the fly in the end package has the potential to be very powerful. Instead of having to export bitmaps and use, for example, UE4’s powerful node editor to write a shader system that adds variability on top of those bitmaps, you can eliminate the disk space of storing the intermediate steps altogether and let SD take the wheel (with your guidance) from raw model to end result. That’s something I really want to try, but it is obviously going to take more than the 10 or so hours I’ve been able to scrape together in my fleeting spare time.
What I think SD represents is a new approach to texturing: leveraging a procedural workflow to automate the more tedious portions of hand-drawing edge wear, in conjunction with randomization and noise to generate a base for various materials instead of relying on photos or hand-drawn bitmaps. Add seed control and the ability to apply the same process to different bases, and you effectively have the ability to generate complex variation that can be reused effortlessly. What’s really cool is that it looks like companies are starting to use it in this capacity. If you have an additional hour and are interested in the subject, I highly recommend this video; it’s from a guy far more talented than I, talking about using both programs in modern game development pipelines:
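The seed-control idea is easy to sketch: run the same generator with the same seed and you reproduce a variation exactly, while a new seed gives a fresh one for free. This is purely illustrative of the concept, not SD’s internals:

```python
import random

def grunge_pattern(seed, size=4):
    """Generate a toy noise grid. Same procedure + same seed -> identical
    output; a different seed -> a new variation with zero extra authoring.
    A real substance would expose the seed as a tweakable parameter."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(size)] for _ in range(size)]

variant_a = grunge_pattern(7)
variant_b = grunge_pattern(8)
```

Ship the recipe plus a seed instead of a bitmap, and every asset instance can pull its own variation at runtime.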
In short, I like Substance Designer. I need to learn how to use it better before I could consider replacing dDo, but I can see it slipping into my workflow alongside it. Getting the most out of SD means mastering part of an asset creation pipeline, like shader editing or modeling. It’s not a ‘shortcut’ tool like dDo is designed to be. The two tools can fill the same general pipeline slot and approach the same problem of texturing in the same general way, but the target audience and product scope are different. dDo is targeted at people who don’t really mind handing the reins to a program to get quick, good texture results without investing a ton of time. Substance Designer is for people who want to go deeper, who want total control of the look of a texture combined with on-the-fly modularity and even programmability in their textures, at the expense of time to set up and learn.
It’s been a nice change of pace from the work I’ve been doing on Due Process, where everything fits into a 128×128 box, lighting is hand painted on the texture, and every pixel counts. I imagine I’ll be writing about that soon, but right now I’ve got too much work to do to talk about workflow and put together beauty shots. Until then, thanks for reading!