I found a bug in Mudbox 2012 where 16-bit-per-channel PNGs load as 8-bit, but I wanted to preserve the edits I’d made to this texture. So I copied the edited map onto the 16-bit original, used Difference mode to show where the changes were, and loaded that as a mask:
This is a good way of checking if two images that look similar are actually different.
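The same check can be scripted. A minimal numpy sketch of the Difference-mode-to-mask idea (the function name and threshold parameter are mine, not a Photoshop API):

```python
import numpy as np

def change_mask(original, edited, threshold=0):
    """Return a mask that is white wherever the two images differ.

    original, edited: integer arrays of identical shape (H, W[, C]).
    Mimics Photoshop's Difference blend mode: abs(a - b); any per-pixel
    difference above `threshold` becomes part of the mask.
    """
    diff = np.abs(original.astype(np.int32) - edited.astype(np.int32))
    if diff.ndim == 3:                 # colour image: take the max channel diff
        diff = diff.max(axis=-1)
    return (diff > threshold).astype(np.uint8) * 255

# Two 8-bit "textures" that differ only in one region
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1:3, 1:3] = 40                       # the "edit"

mask = change_mask(a, b)
```

With a small `threshold`, compression noise won’t light up the mask, only real edits will.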
When you work on a magazine that does a lot of stories about real people – not celebs who have nice photos – you often have terrible photos that you are forced to deal with. The current magazine contract I’m on has one of these cases: a story about a man who ran a marathon to raise money for cancer research. All the photos are pretty rough looking, and we can’t reshoot him because he has sadly passed away. The best (meaning the least terrible) option was one shot that still had big problems: it was an over-compressed JPEG, and it wasn’t clear who the subject was, since everything was in focus.
After jacking up the saturation to give the photo more life, the solution to the subject and quality problems was to hand-paint a Z-depth pass to simulate depth, then apply a depth-of-field blur (I’m using the amazing Frischluft Lenscare). This served two purposes: you get to decide what the subject is by focusing on it, and the high-quality bokeh blurring makes the photo sleeker and more film-like. Our terrible photo, while still no Cartier-Bresson, has graduated to publishable. The Z-pass:
The final comp:
Probably easier to see the focus if you look at it animated in Nuke:
If you look closely, you can see there are some edge halos that require touch-ups but that was pretty straightforward.
If you don’t have the money for Frischluft, Photoshop’s Lens Blur has similar features. It’s not as nice as Frischluft and requires more touch-ups, but the same technique can be used: paste the Z-depth into an alpha channel and it will be picked up automatically as the depth source when you open the plug-in:
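Conceptually, this kind of depth-of-field effect is just a blend between a sharp and a blurred copy, weighted by each pixel’s distance from the focal depth. A rough numpy sketch – a crude box blur stands in for Lenscare’s bokeh, and all names here are illustrative:

```python
import numpy as np

def box_blur(img, radius=1):
    """Very small box blur (edge pixels clamped)."""
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    k = 2 * radius + 1
    for dy in range(k):                        # sum the k*k window offsets
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def depth_of_field(img, zdepth, focus, radius=1):
    """Blend sharp and blurred copies by distance from the focal depth.

    zdepth and focus are in 0..1; pixels at `focus` stay sharp, pixels
    far from it get the full blur.
    """
    blurred = box_blur(img.astype(np.float64), radius)
    weight = np.clip(np.abs(zdepth - focus), 0.0, 1.0)
    return img * (1.0 - weight) + blurred * weight

# A sharp edge, with a depth ramp putting the left side in focus
img = np.zeros((5, 8))
img[:, 4:] = 255.0
z = np.tile(np.linspace(0.0, 1.0, 8), (5, 1))
out = depth_of_field(img, z, focus=0.0, radius=1)
```

Pixels where the depth matches the focus value are untouched; the further the hand-painted Z value drifts from it, the more of the blurred copy shows through.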
Often the source material you have for a texture isn’t ideal, forcing you to do some finicky Photoshop work to remove distortion. This has always been most complicated when you have a curved surface and need to build a straight edge out of it for texture painting. Photoshop’s Edit/Transform/Warp can help with these, but it’s never perfect, so you end up layering and mixing edits, which is slow. Photoshop CS5’s Puppet Warp is the ideal way to fix these things:
You can see how there’s minimal blurring going on there. If you have something more complex as a source shape, you might need to do it in parts:
I just picked that one off of Google Images as an example – you can tell the blurry parts aren’t really great for texturing. Puppet Warp is great but it doesn’t save you from the “crap in, crap out” rule.
Nuke is one of those programs that doesn’t currently have support for ICC colour profiles, so it doesn’t automatically compensate for your display the way that Photoshop or a browser like Safari or Firefox does. You may see “sRGB” in the Viewer menu, but without this compensation it’s never going to match the actual sRGB rendering that Photoshop and others do correctly. This is especially problematic for wide-gamut displays and, although I have two sRGB NEC 2490WUXi Spectraview screens that are hardware calibrated with the Eye-One Display 2, I’m not immune to these colour shifts either. At times it was tolerable and I would make final tweaks in Photoshop but, as I’m using Nuke increasingly in colour-intensive print work, it became obvious I needed to sort this out to work comfortably. Reds are especially affected by this lack of calibration:
Nuke on the left and Photoshop on the right. Everything’s displayed in sRGB on my calibrated sRGB-ish monitor, but there is still oversaturation in the reds within Nuke.
So here is the quick process used to generate a 100% accurate 3D LUT monitor profile (not a generic “wide gamut” profile) for Nuke to use as the destination rendering intent for your comps. This workflow requires a hardware calibrator, because that’s the only way to get the accurate ICC monitor profile that’s used in the process within Photoshop. The best way to explain this is with a video and some comments:
My deer in Nuke and Photoshop after applying the calibrated 3D LUT:
Any differences you think you might see are due to the scaling within the Nuke screenshot. I’ve sampled the output and it’s the same between them. If you missed it in the video, it’s key that you don’t apply the LUT node to your actual output, just your Viewer. So make sure that Write nodes come before the Vectorfield node.
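For the curious, what the Vectorfield node does with that 3D LUT is per-pixel trilinear interpolation into a colour cube. A numpy sketch of the idea, assuming the LUT file has already been parsed into an (N, N, N, 3) array – this is an illustration of the technique, not Nuke’s actual code:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Apply a 3D LUT with trilinear interpolation.

    rgb: (..., 3) array of values in 0..1.
    lut: (N, N, N, 3) array, indexed [r, g, b].
    """
    n = lut.shape[0]
    scaled = np.clip(rgb, 0.0, 1.0) * (n - 1)
    lo = np.floor(scaled).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = scaled - lo                            # fractional position in the cell

    out = np.zeros_like(rgb, dtype=np.float64)
    for corner in range(8):                    # 8 corners of the LUT cell
        # bit i of `corner` picks the low or high index along axis i
        pick = [(hi if corner >> i & 1 else lo)[..., i] for i in range(3)]
        w = np.ones(rgb.shape[:-1])
        for i in range(3):
            w = w * (f[..., i] if corner >> i & 1 else 1.0 - f[..., i])
        out += w[..., None] * lut[pick[0], pick[1], pick[2]]
    return out

# Sanity check with an identity LUT: output should equal input
n = 17
grid = np.linspace(0.0, 1.0, n)
r, g, b = np.meshgrid(grid, grid, grid, indexing="ij")
identity = np.stack([r, g, b], axis=-1)

pix = np.array([[0.25, 0.5, 0.8]])
out = apply_3d_lut(pix, identity)
```

A calibrated LUT is just an `identity` cube that has been warped so that each input colour maps to the value your particular monitor needs to display it correctly.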
With this one monitor sorted out, this probably has a lot of you wondering “what if you’re using two or more displays?” Well, that’s why I bought these two NEC screens with the Spectraview software/hardware capabilities. It lets you calibrate one monitor and use the white point from that one as the target for the second, so calibration is perfectly synced. That way, you don’t have to worry about involving another Nuke profile when you drag the Viewer over to another screen. I highly recommend the NEC Spectraviews if you need accurate colour across multiple displays. If you’re looking for further reading on colour management, Practical Color Management: Eddie Tapp on Digital Photography is very good.
Update: If you are getting banding in the calibrated display, change the colorspace in/out to sRGB.
I’ve been playing with the Topogun 2 beta and it’s amazing at generating low-topology models quickly and baking out maps. I occasionally get blips in the maps though, due to samples around the source mesh’s UVs, so here’s a technique I use with noisy renders to clean them up. Draw a path along the problem samples, make a copy of the layer, run a Dust & Scratches filter on the copy until the blips are gone, and mask out everything but a thin brush stroke along the path:
Load it up and you’ve got a nice clean low-res model map:
This is basically the same technique I mentioned before to remove dust from scans but 3D people probably avoided that demo, thinking “this is a retouching thing.”
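If you’d rather script the cleanup, the same idea – a median filter confined to a thin masked stroke – can be sketched in numpy (the function and variable names are mine):

```python
import numpy as np

def despeckle_masked(img, mask, radius=1):
    """Median-filter only the pixels under `mask` (nonzero = filter).

    A numpy stand-in for Dust & Scratches on a duplicate layer that has
    been masked down to a thin stroke along the problem samples.
    """
    pad = np.pad(img, radius, mode="edge")
    k = 2 * radius + 1
    # Stack every offset of the k*k window, then take the per-pixel median
    stack = np.stack([pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                      for dy in range(k) for dx in range(k)])
    medianed = np.median(stack, axis=0)
    return np.where(mask > 0, medianed, img)

# A flat 100-grey map with a single hot "blip"
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
mask = np.zeros((5, 5))
mask[2, :] = 1          # thin stroke running over the blip

clean = despeckle_masked(img, mask)
```

Everything outside the stroke is returned untouched, which is exactly what the layer mask buys you in Photoshop.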
Few things in digital imagery, whether 2D or 3D, are as confusing as colour management can be. The idea is simple; what makes it complex is similar to what makes a giant audio mixing desk – with its myriad inputs, outputs and attached gear – brutal to wrap your head around. More often than not, when you first start thinking about colour calibration, linear workflows (in 3D), and all the software and OSes involved, you suffer a massive brain fart and just give up. The other confusing part when dealing with gamma, colour profiles and curves is that it’s not usually clear what is a rule and what is really up to the individual to adjust for creativity. The problem is that curves and gamma play too big a part in the final result of creative imagery to be ignored. This is especially true for CMYK conversion in Photoshop.
Without understanding the curve that handles RGB to CMYK conversion in Photoshop, you are left with the default setting, which can often produce good results, but it is an average for all scenarios, and a one-size-fits-all approach is not always the right way to handle things where quality and colour are concerned. How do you know if you need to tweak the defaults? It’s pretty easy: if you convert an image that has a lot of saturation in the darks, and find the colour gets sapped badly, then you should probably consider a custom CMYK black generation curve. This is accessible from the Color Settings dialog under the Edit menu:
The part that interests us today is the CMYK setting under Working Spaces:
The default setting is Medium, and a literal translation of the curve reads like this: “Everything in your channels, up until 25% saturation, will be done in CMY; once the saturation starts to go above that, black kicks in and slowly takes over the proportion until you reach 100% darkness.” (Yes, I know K stands for “key” – thanks, design students.) This is a good scenario for images that are concentrated in the mid-tones, but it’s not good for images that are heavily balanced towards the blacks because, once you get over the 80% saturation threshold, black makes up a large proportion of the mix, sapping saturation. Most print people know that a 100% red is made with 100% magenta and 100% yellow – but notice how little yellow and magenta there is at the full-saturation end of the Medium black generation curve. Here’s an example of how that would affect a photo with a saturated red-to-black transition:
That’s the RGB original.
That’s the default Medium CMYK conversion. Notice how washed out it is and how the colour is not reaching the darker portion. The hue shift is just the colour being brought in-gamut for CMYK.
For this reason, I keep my black generation settings on Light:
The difference is immediately apparent just by looking at the curve: “Use CMY all the way up to almost 50% saturation and then start to use black.” You can see that the ink total also increases even though we haven’t touched the ink limit. Here’s how it affects the conversion above:
It is noticeably better. There is no distinct edge to the colour falloff. But read on to see how I make a better curve for this.
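To make the tradeoff concrete, here’s a toy numpy model of black generation – a simplified UCR-style separation, not Photoshop’s actual engine – showing how a later black onset (Light) leaves more CMY, and therefore more saturation, in a dark red:

```python
import numpy as np

def rgb_to_cmyk(rgb, k_start=0.25):
    """Toy RGB -> CMYK separation with a tunable black-generation onset.

    k_start is roughly where black "kicks in" on the darkness axis:
    0.25 behaves like the Medium curve described above, 0.5 like Light.
    This is a simplified model for illustration only.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    darkness = 1.0 - rgb.max()                 # 0 = white, 1 = black
    # Black ramps from 0 at k_start up to full at darkness = 1
    k = max(0.0, (darkness - k_start) / (1.0 - k_start))
    cmy = 1.0 - rgb
    # CMY carries whatever black doesn't
    c, m, y = np.clip((cmy - k) / (1.0 - k + 1e-9), 0.0, 1.0)
    return c, m, y, k

# A dark, saturated red through both curves
medium = rgb_to_cmyk([0.4, 0.0, 0.0], k_start=0.25)
light = rgb_to_cmyk([0.4, 0.0, 0.0], k_start=0.5)
```

Run it and the Light separation carries noticeably less black and more cyan than Medium for the same dark red – which is the saturation you see coming back in the converted image.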
You’re probably wondering “why not just make the total ink 400% (all CMYK plates at 100%) and never have to worry about this balance again?” This is the first thing print newbs do and it’s a problem for press printing. Paper on a web press breaks if you add too much ink, and too much ink also increases drying time, so it will smudge. Don’t mess with the total ink unless you have received a total value from your press. The number goes higher as the paper weight goes up; it’s lower for web presses, higher for sheet-fed presses, and higher still for cover stock (typically I set the cover stock ink limit at 320%). It’s a good idea to get this from the press you’re dealing with because a higher number means you have more colour to play with, meaning more saturation.

Pro technique: Mixing CMYK conversions for optimal final images.
By now, you’re probably itching to see some real-world examples of how this affects images and I have an “Extreeeeeeeeeeeeeeeeeeme to the Max!” example for when a mix of two conversions is actually best.
Yes, I know the image is noisy – it’s not a final processed shot, just a pic my girlfriend took of me in Mexico that I thought fit the bill nicely for this post. I should have used a Light conversion (instead of Medium) as my baseline CMYK top layer, but I wanted to show how the default settings completely wash out details in the eyes due to the muddy, even mix of the four plates. If I were doing a final retouched image, I’d be mixing and blurring these two layers more and brushing in a bit of faked colour where needed. But this is a solid start.

Black and White images: when black-heavy generation is needed.
When I am dealing with black and white fashion photography or something with greys, I set it to Heavy, since having too much CMY in your greyscale conversions is problematic to manage on press. If you’ve ever been to a press check, you know how much of a nightmare it can be to get the colours to match the colour proof, so letting black handle more of the greyscale balance means you have less potential for shifting colour values when the press man invariably screws everything up until halfway through the print run. This is why it’s also essential to send an approved colour proof (not any old inkjet) to your press to use as a match if you get something printed. Without this in front of them, all the expensive colour calibrators and 30-bit monitors in your office won’t make any difference – you’ll get colours guessed by the people on press and no recourse when you get 50,000 books that look terrible. It’s $50 well spent.
Grab the plug-in here and be rid of this annoying behaviour forever.
GoP update now works with object names so it’s easier to use with named renders like my baked V-Ray Tuner output passes:
The V-Ray Tuner is small but significant. I made a batch bake section and a “DaveBake420” command that takes all your selected objects and renders out baked textures for all active render elements. It even names the rendered image according to the object name. It still needs a few things, but it’s already saved me a lot of time. Check it out:
Same links as before:
Now that I’ve covered a lot of 3D stuff, let’s talk about some printing stuff, for those who don’t know too much about print. My background is as a magazine art director, and I’ve dealt with some tricky CMYK scenarios through the years. Today was one of those.
This morning I was awoken at 5 a.m. by a call from a press check. It was from a rep who was concerned about graphic lines occasionally thinning in a template graphic I was using. When you’re given a magazine template, you’re invariably given these types of elements, so you don’t think much of them – especially something as simple as some grey lines. Well, it seems these grey lines were “special” in the euphemistic sense of “not too bright.” Whoever made them had designed these very thin lines at exactly 45° and then screened them at 50% black:
When you have lines this thin, it’s also a bad idea to try to mix three colours – it’s just not going to line up on a web press that shifts more than you’d like it to. But the real problem here was the angle of the lines combined with screening. I’ll explain why.
For those of you who don’t know how press printing works, colour images are made up of halftoned plates (cyan, magenta, yellow, black) that vary the size of their dots to create a full gamut of saturation and luminance:
Conventional halftoning is done with these screen angles (if they were all the same, you’d only see one colour): cyan 15°, magenta 75°, yellow 0°, black 45°. Notice the angle of black? It’s exactly the same as the angle of the template graphic above. What happens when you have a thin line and halftoning is that, when the plates are made for the press, the screen-door effect of the halftone alters these lines, thinning and thickening them depending on the position of the screen. Watch what happens as the halftone screen moves over a 50% grey:
If the lines were 50% cyan and drawn at 15°, we’d be seeing the same effect there. This moiré effect should show up in proofs, but the publisher opted for inkjet Sherpa proofs, where the line screen isn’t simulated. So keep this in mind when making your graphics. AND STOP WAKING ME UP.
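You can simulate the print-or-vanish behaviour with a toy one-dimensional model: a thin 50% line either lands on the halftone dots or between them, depending entirely on the screen’s phase relative to the line. All numbers here are illustrative:

```python
import numpy as np

def screened_line(line_pos, line_width, period=8, coverage=0.5, phase=0):
    """1-D sketch: what fraction of a thin 50%-grey line survives halftoning.

    The screen prints pixels whose position falls inside the first
    `coverage` fraction of each halftone cell.  Shifting `phase` moves
    the screen relative to the line, flipping the print/no-print
    decision for the whole line.
    """
    x = np.arange(line_pos, line_pos + line_width)
    in_dot = ((x + phase) % period) < coverage * period
    return in_dot.mean()      # fraction of the line that actually prints

# The same 2-pixel line under two screen phases
solid = screened_line(line_pos=1, line_width=2, phase=0)  # lands on a dot
gone = screened_line(line_pos=1, line_width=2, phase=4)   # between dots
```

At one phase the line prints solid; shift the screen half a cell and it disappears entirely – which is exactly the thinning-and-thickening you see when a 45° line meets the 45° black screen.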
More esoteric CMYK nuggets coming soon.
Maya 2011 switched to Qt as a development platform, and Qt tends to lag a bit for Mac-specific features like application bundles or aliases. One of the casualties in Maya 2011 was the ability to set Photoshop as your image editor. It simply can’t be done without typing the path out:
With Mudbox, you can at least dig into the application bundle and set the Mudbox binary as your app, but this doesn’t work with Photoshop. The workaround is easy: set your image handler preferences in the Finder, per file type:
Make sure to click “Change All.” Do this for all image formats and, from then on, Maya will open your images in Photoshop. Although this takes a bit longer, it has the added benefit of letting you control which application handles which image type. You could have IFF files handled by FCheck, PSD files handled by Photoshop and TGA files handled by Graphic Converter. Plus, you’ll never have to worry about setting it in another version of Maya again.
If you’re looking for a fix for aliases not working with Qt apps like Maya or Nuke, use UNIX symlinks and my Finder Service for OS X. To install it, just put it in your ~/Library/Services/ folder (make one if it doesn’t exist). From then on, Nuke and Maya’s Qt dialogs will show your symlinks in their dialogs, where they just ignore aliases:
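For reference, a symlink is created with `ln -s target link` in Terminal; the same thing from Python looks like this (sketched in a temp directory so it’s safe to run anywhere):

```python
import os
import tempfile

# Qt file dialogs resolve POSIX symlinks but ignore Finder aliases,
# so make a symlink to the folder you want Maya/Nuke to see.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "textures")       # the real folder
link = os.path.join(workdir, "textures_link")    # the symlink Qt will follow

os.makedirs(target)
os.symlink(target, link)

print(os.path.islink(link))                            # True
print(os.path.realpath(link) == os.path.realpath(target))  # True
```

Unlike an alias, the symlink is resolved at the filesystem level, so every application – Qt or not – follows it.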