Recently published articles by Dave G:
So, after tiring of Mari’s new project panel and its non-native open dialog, I decided to make a script (download here) that sends the selected mesh from Maya to Mari for painting. It has some nice options that I think will be welcome:
The features are shown in the video here:
If you choose not to send an existing texture, it will create a 50% grey “color” channel set to the resolution chosen on the slider, which snaps to Mari’s supported resolutions.
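The snapping itself is simple; here’s an illustrative sketch (not the actual script’s code, and the exact list of resolutions Mari supports is my assumption):

```python
# Minimal sketch: snap an arbitrary slider value to the nearest
# Mari-supported square resolution. The list below (powers of two
# from 256 to 8K) is an assumption, not taken from the real script.
MARI_RESOLUTIONS = [256, 512, 1024, 2048, 4096, 8192]

def snap_resolution(value):
    """Return the supported resolution closest to the slider value."""
    return min(MARI_RESOLUTIONS, key=lambda r: abs(r - value))

print(snap_resolution(1500))  # -> 1024 (closer than 2048)
```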
I added the ability to send a tessellated and displaced mesh. I know Mari does its own displacement previews, but those don’t use the actual values from your existing Maya model, so this script creates a mesh that is closer to what will render. It’s not the best solution, since the geometry can be really jaggy, but it’s better than nothing. Just make sure you only run this displacement script on cage-type geometry: it smooths the mesh and then tessellates it, so it will be extremely slow on an already high-ish res mesh.
As you can see from the video, I originally made this script to send Python to the clipboard, since I didn’t see any documentation for the command port that Nuke uses to talk to Mari. But an example script showed that the port receives straight Python, so my script does all the gross code escaping, wraps the Python commands into a single line, and passes it to Mari.
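The escape-and-wrap idea can be sketched like this (the function names and details here are mine, not from the actual script): flatten a multi-line snippet into one exec() call, then push that single line down the socket.

```python
import socket

def wrap_single_line(source):
    """Collapse a multi-line Python snippet into a single exec() call
    so it survives being sent as one line over the command port."""
    escaped = (source.replace('\\', '\\\\')
                     .replace("'", "\\'")
                     .replace('\n', '\\n'))
    return "exec('%s')" % escaped

def send_to_mari(command, host='localhost', port=6100):
    """Send one line of Python to Mari's command port (default 6100)."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall((command + '\n').encode('utf-8'))

one_liner = wrap_single_line("import mari\nprint(mari.projects.list())")
# send_to_mari(one_liner)  # needs Mari running with its port open
```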
To get the script to speak to Mari, make sure Mari’s command port is open on the default port of 6100. This setting is in the Scripts section of the Mari preferences.
The script has a few limitations right now. It needs a relatively simple material applied to the selected mesh – it will simply fail if you have a V-Ray Blend Mat applied, for example – a fix for this will come soonish. You also have to have a mesh selected when you run it. Textures are currently sent only as RGB, with no scalar option; that will come later, and I’d also like to auto-detect the applied texture’s resolution for the slider. Tiled materials are not supported – obviously a dealbreaker for many, but hopefully I can get that working some time in the future. I haven’t tested it on Windows or Linux, but it should work fine. The exported meshes are stored in a project_path/GoZ/ folder because I didn’t want to create a separate folder from the OBJs I use with my ZBrush meshes.
One of the tricks I learned as a photo retoucher back in my university days was that, if you wanted to tile a texture or extend a background, the fastest way is to duplicate the layer and then flip it. When you align the edges, there will be a match at the edges, which is usually easier to clean up than a messy clone job. Well, Maya’s texture nodes let you mirror your U or V texture so that this is done for you without having to head to Photoshop. For something like a plywood reference texture, it works really well:
Granted, that seam is going to be a little obvious, but I am using this as a painting reference, so I don’t need to clean it up. Other textures may show the mirroring less.
If you tweak cage meshes in Maya that will later be subdivided and displaced, you end up worrying about a couple of things: what will happen to your seams, and what will the displacement do to the tweaked area? As I mentioned in an earlier post, you can do a polygonal displacement preview, but sometimes it is enough to preview the displacement map itself on your cage mesh in Maya. So I wrote a script, which will be part of the next version of V-Ray Tuner, that makes a temporary surface shader and applies the selected mesh’s displacement map as the diffuse colour. Run it again and it switches back to the original shader and deletes the temp surface shader – no crap left around.
One nice feature of this script is that it reuses your material’s file node, so even if you relink it to a new file you might have touched up, it will update the original shader’s displacement file node even while you are previewing the surface shader. I hope you find it useful.
Grab the script here. It works with all renderers.
So I am working on something that needs some tricky UV and vertex tweaks, and if you’re a V-Ray user, you probably know that Viewport 2.0 is fast but V-Ray’s default baked previews in Maya are really low res – I don’t even think 64x64 pixels qualifies as “low res,” it’s so bad. Luckily, after getting in touch with Chaos Group about it, I learned there is a workaround to get high res previews: jack up Viewport 2.0’s “Bake Resolution for Unsupported Texture Types” setting, save and reload the scene, and you’ll have a high res preview:
Thanks again to Chaos Group for the exceptional support. After years of struggling with all the problems and dearth of support for mental ray, it’s so refreshing to have such good support in V-Ray. They never cease to impress me.
One of the big things you need to get used to with Mari, if you’re coming from something like Mudbox, is that a lot of tools only work in the paint buffer. Where you can smudge any applied texture in Mudbox, you can only do a slerp (Mari’s smudge) on the active paint buffer in Mari. There is a workaround to get baked textures into the paint buffer, though: use the clone tool and clone right on top of itself, with zero offset between your source and your clone. This lifts the cloned area straight into the paint buffer so you can use tools like the slerp. Check it out:
Updated: now supports vertical UDIM tiling and textures from Mudbox.
I’ve been avoiding multiple UV tiles for a long time, since the workflow can be pretty annoying and there is a lot of room for human error. But with the coming of Mari on the Mac, I have finally been using them, and I developed a script to handle the creation of multi-UDIM tiled textures in Maya:
Grab the script here. The hooking up of the textures is based on this post on CGSociety, which shows how you can avoid the layered-texture workflow entirely by daisy-chaining the out colour of one texture into the default colour slot of the next. My script then reads the filename of each texture and guesses its frame offset (U/horizontal only for now) from the number before the file extension – if it’s colordust.1003.tif, it’s assumed to be offset by two U tiles. It also disables UV wrapping on the 2D texture nodes. When the connections are done, the top-level texture (the one that needs to be connected to your shader) is selected. This workflow works with any renderer. I’ve also updated it to support vertical UDIMs:
Important: the script relies on these two standard naming conventions: filename.1031.tif (name, dot, number, dot, extension) for Mari, and Material1_Flattened_Diffuse_u2_v1 (name with the tile indices appended) for Mudbox. These are the defaults, so you shouldn’t have any problems, but the script won’t work if you change the dots in the Mari name or the underscores in the Mudbox one.
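As a sketch of what that parsing amounts to (my own illustrative code, not the script itself), both conventions reduce to a pair of zero-based U/V tile offsets:

```python
import re

def udim_offsets(udim):
    """UDIM 1001 is tile (0, 0); each V row holds 10 U tiles."""
    index = udim - 1001
    return index % 10, index // 10

def tile_offsets(filename):
    """Handle name.1031.tif (Mari) and name_u2_v1 (Mudbox, 1-based)."""
    mari = re.search(r'\.(\d{4})\.\w+$', filename)
    if mari:
        return udim_offsets(int(mari.group(1)))
    mudbox = re.search(r'_u(\d+)_v(\d+)', filename)
    if mudbox:
        return int(mudbox.group(1)) - 1, int(mudbox.group(2)) - 1
    return None  # filename doesn't match either convention

print(tile_offsets('colordust.1003.tif'))  # (2, 0): two tiles over in U
```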
Just a note that renderers like V-Ray and Arnold support UDIM and Mudbox-style tiles natively – see the V-Ray docs here – but those tiles only appear at render time. You don’t get a preview in Maya, since it doesn’t recognize the filename.UDIM.tif format. Thanks to Roman Lapaev for pointing this out.
Stacks of paper and money are one of those things that would be slow, total overkill to create as individual planes with SSS. Unless you need them to flap at the edges, a stack of bills can be easily faked with a series of lines along the edges, used first as a mask for a stretched-edge fake in Photoshop and later as a separate bump map:
The render (notice the light catching the bump on the stack at the bottom):
The one complication to this workflow is that Photoshop will usually detect that you’re trying to edit a bank note image and not let you edit it:
That’s why I had to do the initial layering in the excellent and cheap photo editor Pixelmator and the banknote image wasn’t detected in the layered image used above. If you want to do a dollar stack, you might need to use The GIMP if you’re looking for a free Photoshop alternative.
You can spend all day making low-compression UVs, but if you have seams, there is only one way to keep them from showing up in displacements and normal maps (without using Ptex): they must meet parallel in UV space. There’s no way to have pixels rotated along the two sides of a seam and have them meet nicely. Notice my meteorite’s UVs and how the displacement is perfect along the seam:
A clearer picture with no texture:
The dead-simple UV layout might surprise you:
Watch what happens if I use a set of UVs that have lower compression but form an organic shape with no hard edges:
The displacement render shows a seam:
The good news is that this makes UV layouts pretty easy. All you do is square the border edges, pin them, and relax/unfold the interior. Sure, you get a wonky proportion of object space to UV space along the edge, but that’s a lot better than a seam. Here’s the Maya workflow for making squared-edge UV shells:
Headus UV Layout has a better method for preserving straight edges while unfolding and relaxing: you just hit the i key over the edges and it will straighten them when you do the f key unwrap. If you don’t already have UV Layout but deal with UVs regularly, it’s an amazing app and highly recommended.
Where Ptex fails
The reason Ptex works so well with quad faces is that there is no shearing of pixels and every texture edge meets its neighbour parallel. Add a triangle and Ptex suddenly doesn’t work so well. Look at a default Mudbox sphere near the poles:
So, if you’re going to use Ptex, make sure you are using a nice quadrangle topology or keep any triangles well tucked away.
UVs are, more often than not, something you cope with rather than welcome into your workflow. But they aren’t always the spawn of Satan. UV overlapping can save you a lot of time texturing repetitive areas like the windows of a building. If these areas all share the same topology (number and layout of faces), you can save yourself a lot of work. Here is a quick video showing how to extract areas of similar topology into distinct units that can then share one set of unwrapped UVs:
The video shows a quick-and-dirty unwrap so don’t think I’m that sloppy in general. I’m using my Facer MEL script palette for the UV transfer portion. Once you’re done, combine the separated mesh and then you can just drop one window graphic into the texture and have them all share that:
If you are working on a mobile game, this kind of high-efficiency texturing is needed to reduce load times and bandwidth.
If you need more variety for your windows, or want to later bake a separate occlusion dirt map that uses unique UVs for the windows, take a look at my tutorial here. This way, you can combine the quick work of a tiled texture with the variety of a hand-tweaked dirt map by using two UV sets and a multiply/divide node.
Sorry about the low post count lately. I’ve been very busy finishing a magazine, writing articles for Ars Technica, and finishing an e-book that I’ll be announcing and publishing soon.
After using procedural texture workflows for a while, you get used to having access to individual components of a shader. And if you’ve ever used something like UDK, where vertex weights can rapidly mix shader components, going back to a big, static, memory-hog texture for something like terrain is just not appealing. If you’re looking for a way to get the blending benefits of a paint workflow with the control of multiple layers with individual UVs, there is a simple way to do it. Create a Blend Colors utility, plug your two textures into its two colour slots, and plug its output into a Maya Phong, Lambert or Blinn material (the V-Ray or mental ray shaders come later) so you can see a good preview of the material. Then duplicate your object and assign a new dummy material that will be your paint surface. Once you assign a 3D Paint Tool texture to it, you can use the resulting file as the input for your blend (you only need to plug the red, green or blue channel of the texture into the blender slot, since it needs a luminance value, not an RGB texture):
The benefit of this slightly odd workflow is that you get a realtime preview of the blending and can change UV parameters for each material. If you lock the layer that the paint object is on and don’t offset the duplicate, it stays intuitive – no eyeballing two different objects. Here’s an example with a slightly more complex shader network, showing that you can blend the bumps (or anything else) with that Paint Tool file:
At that stage, it’s a bit overkill for previewing, though. Once you have a good idea of the blend, plug the textures into V-Ray materials and then plug those into a V-Ray Blend Material, using the 3D Paint Tool texture as the blend amount. Make sure you have Save Texture On Stroke enabled for this, since V-Ray relies on the file on disk, not just what’s in memory. If you were really adventurous, you could paint separately into the R, G and B channels and use each as the input for a different blend or Fresnel amount, etc., much like a vertex painting operation in UDK (I made a sample file with this setup if you want it). If you want to keep your preview material on your object but render the V-Ray Blend Material, that’s easily done. Just link up your two materials:
And have Maya toggle the V-Ray shader at render time using this method. That way, you get the best of both worlds: flexible materials, good visualization, and a proper V-Ray mat for renders.
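Stepping back, the per-pixel math that both the Blend Colors preview and the V-Ray Blend Material compute is the same linear interpolation, with the painted mask channel as the weight. A plain-Python sketch (not Maya code; the texture values are made up for illustration):

```python
# Per-pixel math of a Blend Colors node:
# output = blender * color1 + (1 - blender) * color2,
# where blender is the scalar lifted from one channel of the mask.
def blend_colors(color1, color2, blender):
    return tuple(blender * a + (1.0 - blender) * b
                 for a, b in zip(color1, color2))

grass = (0.2, 0.5, 0.1)   # hypothetical terrain texture samples
rock = (0.4, 0.35, 0.3)
print(blend_colors(grass, rock, 1.0))  # fully painted -> pure grass
print(blend_colors(grass, rock, 0.5))  # half-painted -> even mix
```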
If you would rather not worry about UVs and things like that, you can always use a Ptex texture painted in Mudbox or 3D-Coat and plug it into the blend slot of a V-Ray Blend Material. But that doesn’t give you a real-time preview like the method above does (at least not yet in Maya 2012 - hopefully we’ll see this in the future). This is really just a technique for preserving UV materials as layers and quickly visualizing things all in Maya.