After finishing and releasing my first three props, I decided to write up the things I learnt while working on these pieces. There’s a lot to digest here, but I tried to trim it down a notch to keep it interesting and avoid overly long passages.

Reference gathering
As the title suggests, in this stage I gather all the necessary or potentially useful reference material: pictures, blueprints, dimensions. This applies not only to the shape of the prop but also to its textures, meaning any grunge, detail, scratching or other surface features. Everything needs to be well documented in order to be replicated faithfully in 3D. And lastly, the same thinking applies to the final renders: I keep ideas for final lighting scenarios, such as product advertising or commercials, where you can reverse engineer the lights and their positions.
Block out
The next step after reference gathering is a block out, which serves a higher purpose than it might seem. I add basic shapes to the scene that act as a literal block out of the shapes, dimensions and general scale of the object. This way proportions are handled at the very beginning, and I don’t have to worry about fixing oddly sized parts of the model later on. I learnt to keep quality at maximum here: it’s always easier to spend ten minutes checking now than to find a mistake later in the pipeline and spend five times as long fixing everything because of one little bug. I don’t worry much about shading or optimization in the block out, as that comes later in the process. At this stage, however, I always lay the groundwork for the next stage, the high poly, by looking for areas where support loops will run, where to cut them, and so on.
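That ten-minute proportion check can be as simple as comparing block-out dimensions against the real-world measurements from the reference. A minimal sketch of that idea; the phone measurements and the tolerance here are made-up examples, not values from any actual prop:

```python
# Sketch: verify block-out proportions against reference dimensions.
# Reference values and tolerance are hypothetical examples.

def proportions_ok(blockout_dims, reference_dims, tolerance=0.05):
    """Return True if every block-out dimension is within `tolerance`
    (as a fraction) of the real-world reference dimension."""
    return all(
        abs(b - r) / r <= tolerance
        for b, r in zip(blockout_dims, reference_dims)
    )

# e.g. a phone block-out: (width, height, depth) in metres
print(proportions_ok((0.071, 0.146, 0.008), (0.0715, 0.1465, 0.0078)))  # True
print(proportions_ok((0.100, 0.146, 0.008), (0.0715, 0.1465, 0.0078)))  # False
```

Ten minutes with a ruler and a script like this is far cheaper than re-modelling a whole subdivided high poly later.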
High poly
Once the block out is done I can finally dive into the high poly stage, which, as the name suggests, involves a huge number of polygons: the highest quality version of the model, not at all optimized, but great looking. With that in mind, I add all the loops that will help hold the shapes of every corner of the prop, then use a subdivision modifier to subdivide the mesh and bring more detail to it, at the cost of performance. This is where I keep an eye out for pinching, which happens when multiple support loops sit too close together or when the edge flow just isn’t right. I try multiple scenarios and find out which works best, to avoid any visual artefacts. Shading is exactly what I keep in mind throughout this stage, because the next stages of the pipeline include baking, and any high poly artefacts may end up visible in the bake, which would ruin the final prop. On the other hand, there are ways to use pinching to my advantage, as I did with my phone prop, whose rubber part needed a subtle bump on top. There I stacked multiple edges very close together to create a pinching effect resembling the rubber bump. To help spot shading issues I always set up a simple check material on high poly models through Blender’s viewport display settings: base color set to white, roughness lowered to around 0.4 and metallic set to 1, which makes problematic areas much more visible. If necessary I also rotate the default viewport lighting to make sure the model is spotless. Lastly, I assign strong, distinct colors to the different materials on the high poly, to create an ID map later on when baking. This makes texturing a lot easier and smoother, as it allows me to group materials separately, but more on this topic later.
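The pinching trick above, stacking edges close together to tighten the surface under subdivision, can be illustrated in one dimension with a simple corner-cutting scheme (Chaikin’s subdivision here stands in for Catmull-Clark on a polyline; the coordinates are invented for the demo):

```python
import math

def chaikin(points, iterations=4):
    """Corner-cutting subdivision of an open polyline: each segment
    is replaced by points at 1/4 and 3/4 along its length."""
    for _ in range(iterations):
        new = [points[0]]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            new.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            new.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        new.append(points[-1])
        points = new
    return points

def dist_to_corner(points, corner=(1.0, 0.0)):
    """How close the subdivided curve gets to the original corner."""
    return min(math.hypot(x - corner[0], y - corner[1]) for x, y in points)

loose = [(0, 0), (1, 0), (1, 1)]                      # single corner vertex
tight = [(0, 0), (0.9, 0), (1, 0), (1, 0.1), (1, 1)]  # extra "support loops"

# With points stacked near the corner, the subdivided curve hugs it
# far more tightly, which is exactly the pinching effect:
print(dist_to_corner(chaikin(loose)) > dist_to_corner(chaikin(tight)))  # True
```

The same intuition carries over to surfaces: tightly packed support loops pull the subdivided mesh toward the cage, which reads as a crease, a pinch, or in the phone’s case, a subtle rubber bump.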

Low poly
When I wrap up the high poly model, the next stage is naturally the low poly, which is the actual shippable model that goes into the game engine. This stage starts with removing the support loops, which strips the model of its subdivided look and makes it truly low poly. Afterwards I keep checking which areas of the model don’t contribute to the silhouette of the high poly and can therefore be deleted to make the low poly mesh more optimized. One key thing to keep in mind is the player’s actual viewing distance from the model: a small bolt doesn’t need to be a 32-sided cylinder with a 3-step bevel on its edges to look believable from 3 metres away. I also try to avoid overly stretched geometry, like long thin triangles, which make the model less optimized and put more pressure on the graphics card in general. Once the model is simplified enough, the next step is to add smoothing groups, or as some refer to them, hard edges, together with seams. Both of these are preparation for the next step of the pipeline: baking. I use hard edges on areas with a steep slope or angle change, around 80° or 90°. The reason is to avoid stress on the normal map while baking; otherwise we’d see a lot of tension on the map, which shows up as strong gradients in the yellow range. Each of these hard edges also becomes a seam, as otherwise I’d get ugly artefacts in the bake, which would again lower the believability of the model. I keep in mind, however, that more seams may be needed later, during the UV stage, depending on how the model unwraps. Lastly, I check and rename all the meshes from the high and low poly, so that when I import them into Marmoset they’re automatically labelled and grouped together, making the whole process smoother.
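The hard-edge rule above boils down to comparing the angle between the normals of the two faces that share an edge. A rough sketch of that decision, assuming unit-length normals; the 80° threshold is the one mentioned above, though it is ultimately a per-model judgment call:

```python
import math

def normal_angle_deg(n1, n2):
    """Angle in degrees between two unit face normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def is_hard_edge(n1, n2, threshold_deg=80.0):
    """Mark the shared edge hard (and, per the rule above, also a UV
    seam) when the two faces meet at a steep enough angle."""
    return normal_angle_deg(n1, n2) >= threshold_deg

# Two faces of a box meet at 90 degrees: hard edge.
print(is_hard_edge((0, 0, 1), (1, 0, 0)))  # True
# A gently curved surface, about 20 degrees between normals: keep it soft.
soft = (math.sin(math.radians(20)), 0, math.cos(math.radians(20)))
print(is_hard_edge((0, 0, 1), soft))       # False
```

Keeping soft edges on shallow angles lets the normal map do its job with gentle gradients, while the steep corners get a clean split that the baker can handle without stress.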
UVs
With all the former preparation done, I start looking over the model and trying to find the most optimal way of splitting it into UV islands. My general rule is to make as few islands as possible, and as large as possible. All of this is relative, however, as each model has different UVs and will require different care. In this stage I try to maximize the UV space used by the islands, in order to squeeze out as much of the available texture space as I can; that grants better texture resolution and higher quality textures. When splitting a low poly into islands, I try to avoid uneven or non-straight edges, as those become a significant issue after baking and texturing. To avoid this ‘stair stepping’ or aliasing effect, I tend to straighten islands, which also makes better use of UV space: thanks to their straight edges, I can fit more islands into a smaller space. Another issue that needs attention is the orientation of islands; when two identical islands are rotated differently, it can hurt the texturing of those areas, for example wood grain running horizontally on one and vertically on the other. Hence I always try to group such islands together and keep them in the same orientation. I also check for stretching by applying a checker texture: if the checker pattern is too far from square, I need to fix my UVs, as this indicates the final texture would be distorted. Once all the islands are unwrapped, cut and straightened, I pack everything using an auto packer, which heuristically finds a near-optimal pack and saves a lot of time. Lastly, I apply a triangulate modifier, which makes life easier if I have to go back to the low poly mesh for adjustments.
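The checker test is a visual version of a simple measurement: comparing edge lengths in 3D against edge lengths in UV space. A toy sketch of that check, with invented coordinates and an arbitrary tolerance; the function names are illustrative, not from any tool:

```python
import math

def edge_stretch(p3d_a, p3d_b, uv_a, uv_b, texel_scale=1.0):
    """Ratio of UV edge length to 3D edge length (times a global scale)."""
    return (math.dist(uv_a, uv_b) * texel_scale) / math.dist(p3d_a, p3d_b)

def triangle_is_stretched(tri3d, tri_uv, tolerance=1.25):
    """Flag a triangle whose per-edge ratios differ too much: this is
    exactly where the checker squares stop looking square."""
    ratios = [
        edge_stretch(tri3d[i], tri3d[(i + 1) % 3],
                     tri_uv[i], tri_uv[(i + 1) % 3])
        for i in range(3)
    ]
    return max(ratios) / min(ratios) > tolerance

tri3d = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
# An even mapping: checker stays square, no stretching.
print(triangle_is_stretched(tri3d, [(0, 0), (1, 0), (0, 1)]))    # False
# The same triangle squashed in V: checker distorts, flag it.
print(triangle_is_stretched(tri3d, [(0, 0), (1, 0), (0, 0.3)]))  # True
```

In practice the checker texture does this for every face at a glance, but thinking of it as a per-edge ratio makes it clear why a distorted checker guarantees a distorted final texture.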
Baking
With the model triangulated, I import it into Marmoset, which I personally use for baking and rendering the final mesh. I always bake at high settings, to get maximum quality and surface any issues (if you can see them at higher quality, you can definitely see them at lower quality too). Then I hit bake, go through the mesh to spot any issues and write down a list of what I find. More often than not I need to adjust the cage size and skew to get better results. Then I go back to my 3D package to address these issues and rebake until the model is in perfect condition. When the bakes look good, I move on to the next stage: texturing.
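The naming convention mentioned at the end of the low poly stage is what makes this import painless: high and low meshes share a base name and differ only by suffix, so the baker can pair them up. A small sketch of that grouping logic, assuming the common `_high` / `_low` suffix convention:

```python
from collections import defaultdict

def group_bake_meshes(names):
    """Group mesh names into bake pairs by stripping a trailing
    _high / _low suffix, mirroring how suffix-based quick loading
    pairs meshes up automatically."""
    groups = defaultdict(dict)
    for name in names:
        for suffix in ("_high", "_low"):
            if name.endswith(suffix):
                groups[name[: -len(suffix)]][suffix[1:]] = name
                break
    return dict(groups)

meshes = ["body_high", "body_low", "screen_high", "screen_low"]
print(group_bake_meshes(meshes))
# {'body': {'high': 'body_high', 'low': 'body_low'},
#  'screen': {'high': 'screen_high', 'low': 'screen_low'}}
```

Renaming everything before export means the pairing happens for free, instead of dragging meshes into bake groups by hand.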
Texturing
For this stage of the process, which I find the most challenging, I take my time, analyze what’s in front of me thoroughly and make sure I have a good idea of what I’m getting into. I keep looking over all the major areas of interest: where are the scratches, where, if anywhere, is the damage? Questions like these help me form an informed plan of the things I’ll need to carry out to do the task properly. I primarily texture in Substance Painter and plan on expanding into Designer as well, which would give me better control over the texture creation process. I go through the layer stack and make sure all of the channels are filled with information and no checker pattern is visible anywhere. My main workflow has generators doing the heavy lifting, with the last 15–20% done by hand to add that next level of believability and to break up the repeated patterns the human eye can pick up. I keep any height channel edits very subtle, as this channel is very sensitive to editing. My main focus, however, is the roughness map, as it can bring a lot of believability when used with care. For other channels, like base color, I try to replicate what I see in the actual reference, and the metallic map is simply 0 for non-metallic surfaces and 1 for metallic ones. For the phone I experimented a little with custom channels, which allowed me to make changes like recoloring the base dynamically; that way I created 6 color iterations while using only one exported texture set, which was blank white by default. When I feel the textures are done, I move on to the next stage: lighting.
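The colour-iteration trick works because tinting is just a per-channel multiply: a blank-white base takes on any tint exactly, so one exported texture set can drive many variants. A toy sketch of the idea; the variant names and colour values here are invented:

```python
def tint(base_rgb, tint_rgb):
    """Multiply a base colour by a tint, per channel (values in 0..1).
    A white base (1, 1, 1) takes on the tint exactly, which is why a
    single blank-white texture set can drive many colour variants."""
    return tuple(b * t for b, t in zip(base_rgb, tint_rgb))

white_base = (1.0, 1.0, 1.0)

# Hypothetical colourways for the phone: one texture set, many looks.
variants = {
    name: tint(white_base, rgb)
    for name, rgb in {
        "midnight": (0.05, 0.07, 0.12),
        "coral": (0.95, 0.35, 0.30),
        "mint": (0.55, 0.90, 0.70),
    }.items()
}
print(variants["coral"])  # (0.95, 0.35, 0.3)
```

The same multiply can be masked to a single material via the ID map, so only the phone’s shell changes colour while screen and metal trim stay untouched.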
Lighting
First things first, I set up a nice scene in the renderer of choice, mainly Marmoset in my case, though for the phone I stuck with Blender Cycles, which did an outstanding job. I always build a three-point light setup, starting from the basics and trying to match the light reference to replicate the lighting of the image. It also helps bring chosen details forward to the viewer’s attention. The choice of background is simple as well: I use a default mesh I got from a dear friend of mine, basically an L-shaped beveled plane. The color of the plane, however, depends on the circumstances and the mesh. What fits well? What’s gonna make the mesh stand out more? All of these little details contribute to the final image, so I keep trying different scenarios and seeing which works better. It’s no exact science after all, merely trial and error. There are other things I like to do in post processing in Affinity Photo (which made me dump Photoshop for a one-time fee), like balancing brightness and contrast, or adding overlays as I did with the phone, which was a great idea from one of the clever artists over at 3DFT (much love and support to these guys). When all the renders are done I make a final render in 1:1 aspect ratio to serve as the thumbnail. I usually stick to something that pops even at a very small scale, as thumbnails are. For the thumbnail’s angle or area of the prop, I either go with the full prop fitted to the image, or some interesting detail that might attract a potential viewer on ArtStation to click.
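Reverse engineering a light reference usually comes down to guessing each light’s azimuth and elevation around the subject. A small sketch of placing a three-point setup that way, assuming a Z-up world centred on the prop; the angles below are typical starting points I’d tweak against the reference, not fixed rules:

```python
import math

def light_position(azimuth_deg, elevation_deg, distance=3.0,
                   target=(0.0, 0.0, 0.0)):
    """Place a light on a sphere around the target (Z-up convention)."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (
        target[0] + distance * math.cos(el) * math.cos(az),
        target[1] + distance * math.cos(el) * math.sin(az),
        target[2] + distance * math.sin(el),
    )

# Hypothetical starting angles for key / fill / rim:
key = light_position(azimuth_deg=-35, elevation_deg=30)
fill = light_position(azimuth_deg=40, elevation_deg=10, distance=4.0)
rim = light_position(azimuth_deg=160, elevation_deg=45)

for name, pos in (("key", key), ("fill", fill), ("rim", rim)):
    print(name, tuple(round(c, 2) for c in pos))
```

Dialling in two angles and a distance per light, then nudging them until the highlights match the reference image, is the whole trial-and-error loop described above, just written down.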

Conclusion
With all of these points in mind I try to make the best art I possibly can. There is always something I could have done better, and something that went well, but at the end of the day the main motivation is to be as good as I possibly can and never stop improving.