Recently published articles by Dave G:
If you tweak cage meshes in Maya that are to be subdivided and displaced, you end up worried about a couple of things: what will happen to your seams when you make the tweak, and what will the displacement do once it's applied on top of it? As I mentioned in an earlier post, you can do a polygonal displacement preview, but sometimes it's sufficient to just preview the displacement on your cage mesh in Maya. So I wrote a script, which will be part of the next version of V-Ray Tuner, that makes a temporary surface shader and applies the displacement map of the currently selected mesh as the diffuse colour. If you run it again, it switches back to the original shader and deletes the temp surface shader – no crap left around.
One nice feature of this script is that it uses the same file node as your material so even if you change the link to a new file you might have touched up, it will update the original shader’s displacement file node even though you are previewing the surface shader. I hope you find it useful.
Grab the script here. It works with all renderers.
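The script itself is linked above; just to illustrate the idea, here is a rough one-way MEL sketch. The node name "dispPreview" and the assumption that the file node feeds the shading group's displacement slot are mine, not taken from the actual script:

```mel
// Sketch only: find the selected mesh's displacement file node and show it
// on a temporary surface shader. The real script also switches back to the
// original shader on a second run.
string $sel[] = `ls -sl -o`;
string $shapes[] = `listRelatives -shapes $sel[0]`;
string $sgs[] = `listConnections -type shadingEngine $shapes[0]`;
string $disp[] = `listConnections ($sgs[0] + ".displacementShader")`;
string $file[] = `listConnections -type file ($disp[0] + ".displacement")`;
// "dispPreview" is a made-up node name
string $prev = `shadingNode -asShader -name "dispPreview" surfaceShader`;
connectAttr -f ($file[0] + ".outColor") ($prev + ".outColor");
hyperShade -assign $prev;  // restore later with: sets -e -forceElement $sgs[0];
```

Because the preview reads the same file node the material uses, repointing that file node updates both the preview and the original shader's displacement.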
When you start tweaking a cage mesh that has a displacement texture in Maya, it gets really tricky to predict what will happen when the displacement kicks in after you've edited the base mesh. Luckily Maya has a Displacement to Polygons filter in the Modify > Convert menu that facilitates these edits. Duplicate and smooth your base mesh, then apply the filter, and you'll get an idea of what your tessellated and displaced mesh will look like at render time. It isn't perfect, since it doesn't smooth UVs, but it gives a pretty accurate rendition:
I’m using a simple script that I’ll be adding to V-Ray Tuner but it works with any renderer. Here it is:
// duplicate the selected cage mesh, offset the copy, then smooth it
string $meshy[] = `duplicate -rr`;
move -r 0 0 -10;
polySmooth -mth 0 -dv 1 -c 1 -kb 1 -ksb 1 -khe 0 -kt 1 -kmb 1 -suv 1 -sl 1 -dpe 1 -ps 0.1 -ro 1;
You’ll probably notice that my displacement is not following the 1 gain / -0.5 offset alpha settings. I have tweaked it manually so these settings have changed.
So I am working on something that needs some tricky UV and vertex tweaks and, if you're a V-Ray user, you probably know that Viewport 2.0 is fast but V-Ray's default baked texture previews in Maya are really low res – I don't even think 64x64 pixels qualifies as "low res," it's that bad. Luckily, after getting in touch with Chaos Group about it, they let me know that there is a workaround to get high res previews: jack up the resolution in Viewport 2.0's "Bake Resolution for Unsupported Texture Types" setting, then save and reload the scene and you'll have a high res preview:
Thanks again to Chaos Group for the exceptional support. After years of struggling with all the problems and dearth of support for mental ray, it’s so refreshing to have such good support in V-Ray. They never cease to impress me.
The latest version of V-Ray Tuner is out now. 3.5 includes some
If you need to script the V-Ray Framebuffer, there are a series of options that are shown when you enter “vray vfbControl;” in the MEL command line.
While there is no way to query current settings, you can turn them off with something like:
vray vfbControl -exposure 0;
vray vfbControl -curve 0;
It's good to turn off the exposure and curve settings when doing things like wireframe batch renders, which can be messed up by contrast and exposure controls.
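Since there are no query flags, a batch-render wrapper has to restore the settings itself; a minimal sketch, assuming 1 simply toggles each control back on:

```mel
// Disable colour corrections before a wireframe batch render, then
// restore them afterwards. The "1" re-enable values are an assumption,
// since the current state can't be queried.
vray vfbControl -exposure 0;
vray vfbControl -curve 0;
// ... run the batch render here ...
vray vfbControl -exposure 1;
vray vfbControl -curve 1;
```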
I just recently noticed that render output from V-Ray for Maya doesn't go straight to the Console, so I had to ask Chaos Group where the render log is on OS X. On both OS X and Linux it is called vray4maya_log.txt and stored in /tmp. Open it in Console.app on OS X, or tail -f it on Linux, and it will refresh automatically as messages are added:
The log location on Windows is in C:\Users\you\AppData\Local\Temp.
Sometimes in Maya, you duplicate certain things like V-Ray lights with attached HDR images and only the light itself is duplicated, forcing you to reattach the image connections. In scenarios like this, you should use the Duplicate Shading Network command in the Hypershade:
That will duplicate the light along with its connections, so that you get the linked sub-nodes. Those sub-nodes are also duplicates, so you can change the colour of an image connection without affecting the colour of the original network's nodes. Since I use this often, I have this command bound to a hotkey in the Hotkey Editor:
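If you'd rather script it, one general-purpose MEL equivalent that can be bound to a hotkey (not necessarily the exact command the Hypershade menu item runs) is:

```mel
// Duplicate the selected light together with all of its upstream nodes
// (file textures, place2dTexture, etc.), so the copies stay connected
// to each other rather than to the originals.
duplicate -upstreamNodes;
```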
I've been playing with a lot of different hair workflows lately and found that ZBrush's Fibermesh is good for making shaped, long hair, which can be difficult to do with plug-ins like Shave and a Haircut or Yeti. Many people use hand-drawn guide curves with a Maya hair system, and there is a good tutorial for that here. But I am using straight OBJ output from ZBrush's Fibermesh and converting it to renderable curves with this Python script for Maya. You want to do this because faceted polygons face in one direction and give you an unrealistic reflection, while a renderable curve more accurately imitates a round hair. So, here is a quick rundown of how you get your polygonal Fibermesh into Maya for use with V-Ray:
The nice thing about that script is that it lets you pick a threshold, so if you use a ton of hairs in ZBrush, you can set the conversion threshold to 0.5 and only make half the curves. If you have too few curves even at 1.0, try the linked workflow above instead, using the hair system approach to fill in the spaces between the guide curves. You'll want a much lower threshold for that, though, since it is really slow to generate with a lot of curves. I had to do exactly that workflow in my example above. Here is the 1:1 Fibermesh conversion:
It’s too sparse (and my hair is too thick).
And here is a 0.1 ratio curve conversion from the Fibermesh, with a hair system applied and 28 hairs per clump assigned in the hairSystemShape1 settings:
Still needs work, but obviously it’s a lot closer already. Thanks to Roman Lapaev for the script link.
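The linked Python script does the real work; just to illustrate the thresholded-conversion idea, here is a rough MEL sketch. It assumes each selected transform is one fiber whose vertex order follows the strand (true for curve-style OBJ export, not for faceted fibers):

```mel
// Keep a random subset of fibers and rebuild each kept fiber as a
// degree-1 curve through its vertex positions.
float $threshold = 0.5;  // 0.5 keeps roughly half the fibers
string $fibers[] = `ls -sl -type transform`;
for ($f in $fibers) {
    if (rand(1.0) > $threshold) continue;
    int $nv[] = `polyEvaluate -v $f`;
    string $cmd = "curve -d 1";
    for ($i = 0; $i < $nv[0]; $i++) {
        float $p[] = `pointPosition ($f + ".vtx[" + $i + "]")`;
        $cmd += (" -p " + $p[0] + " " + $p[1] + " " + $p[2]);
    }
    eval $cmd;
}
```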
So I'm working on this piece with a guy floating in the water and thought I'd share the workflow I used to make the simple water ripple around his body. Within ZBrush, have your water plane and object meeting, then take a screenshot from above with perspective off. Open the screenshot in Photoshop, crop it to square – since you'll be making an alpha map for your brush – and then use the figure's outline as a selection. Offset the selection, stroke it, and blur it, and you now have an alpha map for ZBrush:
My water cube is too small but it’s just for a still, so I can clone out that wonky bit at the top. Also, water has an index of refraction of 1.333 but I used 1.6 to increase the refractive effect since I’m not going for realism. As you can see in the video, I like to use Smart Objects for my Photoshop layers since the blur filter can always be changed later if I want less or more of the filter and I can scale the stroked layers non-destructively.