Nuke is one of those programs that doesn’t currently support ICC colour profiles, so it doesn’t automatically compensate for your display the way Photoshop or a browser like Safari or Firefox does. You may see “sRGB” in the Viewer menu, but without that compensation it will never match the actual sRGB rendering that Photoshop and others do correctly. This is especially problematic for wide-gamut displays and, although I have two sRGB NEC 2490WUXi Spectraview screens hardware-calibrated with the Eye1 Display 2, I’m not immune to these colour shifts either. At times it was tolerable and I would make final tweaks in Photoshop but, as I use Nuke more and more in colour-intensive print work, it became obvious I needed to sort this out to work comfortably. Reds are especially affected by this lack of calibration:
Nuke on the left and Photoshop on the right. Everything is displayed in sRGB on my calibrated sRGB-ish monitor, but the reds are still oversaturated in Nuke.
So here is the quick process used to generate a 100% accurate 3D LUT monitor profile (not a generic “wide gamut” profile) for Nuke to use as the destination rendering intent for your comps. This workflow requires a hardware calibrator because that’s the only way to get an accurate ICC monitor profile, which the process uses within Photoshop. The best way to explain this is with a video and some comments:
My deer in Nuke and Photoshop after applying the calibrated 3D LUT:
Any differences you think you see are due to the scaling within the Nuke screenshot; I’ve sampled the output and it’s identical between them. If you missed it in the video, it’s key that you don’t apply the LUT to your actual output, just your Viewer, so make sure your Write nodes come before the Vectorfield node.
With this one monitor sorted out, a lot of you are probably wondering “what if you’re using two or more displays?” Well, that’s why I bought these two NEC screens with the Spectraview software/hardware capabilities. They let you calibrate one monitor and use its white point as the target for the second, so calibration is perfectly synced. That way, you don’t have to worry about involving another Nuke profile when you drag the Viewer over to another screen. I highly recommend the NEC Spectraviews if you need accurate colour across multiple displays. If you’re looking for further reading on colour management, Practical Color Management: Eddie Tapp on Digital Photography is very good.
Update: If you are getting banding in the calibrated display, change the colorspace in/out to sRGB.
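That banding fix makes more sense when you look at the sRGB transfer curve itself: it isn’t a pure 2.2 gamma but a piecewise function with a linear segment near black, and near-black is exactly where banding shows up when the two are mixed. A quick sketch of the curve in Python (my own illustration, not anything shipped with Nuke):

```python
def srgb_to_linear(v):
    """sRGB decoding: a linear toe below 0.04045, then a 2.4 power curve."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """sRGB encoding, the exact inverse of the above."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
```

The linear toe is the region where a pure-gamma approximation diverges from real sRGB, so matching the in/out colorspace avoids quantizing those near-black values twice.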
I’ve been playing with the Topogun 2 beta and it’s amazing at quickly generating low-poly models and baking out maps. I occasionally get blips in the maps, though, due to samples around the source mesh’s UVs, so I clean them up with a technique I use on noisy renders. Draw a path along the problem samples, make a copy of the layer, run a Dust and Scratches filter on the copy until the blips are gone, and mask out everything but a thin brush stroke along the path:
Load it up and you’ve got a nice clean low-res model map:
This is basically the same technique I mentioned before to remove dust from scans but 3D people probably avoided that demo, thinking “this is a retouching thing.”
Few things in digital imagery, whether 2D or 3D, are as confusing as colour management. The idea is simple, but what makes it complex is the same thing that makes a giant audio mixing desk – with its myriad inputs, outputs and attached gear – brutal to wrap your head around. More often than not, when you first start thinking about colour calibration, linear workflows (in 3D), and all the software and OSes involved, you suffer a massive brain fart and just give up. The other confusing part of dealing with gamma, colour profiles and curves is that it’s not usually clear what is a rule and what is really up to the individual to adjust for creativity. The problem is that curves and gamma play too big a part in the final result of creative imagery to be ignored. This is especially true for CMYK conversion in Photoshop.
Without understanding the curve that handles RGB to CMYK conversion in Photoshop, you are left with the default setting, which can often produce good results, but it is an average for all scenarios, and a one-size-fits-all approach is not always the right way to handle things where quality and colour are concerned. How do you know if you need to tweak the defaults? It’s pretty easy: if you convert an image that has a lot of saturation in the darks, and find the colour gets sapped badly, then you should probably consider a custom CMYK black generation curve. This is accessible from the Color Settings dialog under the Edit menu:
The part that interests us today is the CMYK setting under Working Spaces:
The default setting is Medium and a literal translation of the curve reads like this: “Everything in your channels, up until 25% saturation will be done in CMY, and once the saturation starts to go above that, black kicks in and slowly takes over the proportion until you reach 100% darkness.” (Yes, I know K stands for “key” – thanks, design students.) This is a good scenario for images that are concentrated in the mid-tones, but it’s not good for images that are heavily balanced towards the blacks because, once you get over the 80% saturation threshold, black is making up a large proportion of the mix, sapping saturation. Most print people know that a 100% red is made with 100% magenta and 100% yellow – but notice how little yellow and magenta are at the full saturation end of the Medium black generation curve. Here’s an example of how that would affect a photo with a saturated red-to-black transition:
That’s the RGB original.
That’s the default Medium CMYK conversion. Notice how washed out it is and how the colour is not reaching the darker portion. The hue shift is just the colour being brought in-gamut for CMYK.
For this reason, I keep my black generation settings on Light:
The difference is immediately apparent just by looking at the curve: “Use CMY all the way up to almost 50% saturation and then start to use black.” You can see that the ink total also increases even though we haven’t touched the ink limit. Here’s how it affects the conversion above:
It is noticeably better. There is no distinct edge to the colour falloff. But read on to see how I make a better curve for this.
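The effect of moving the black start point can be sketched with a toy separation. To be clear, this is my own naive model for illustration, not Photoshop’s actual curve math; the black_start parameter loosely stands in for the Medium (~25%) versus Light (~50%) presets, and the helper totals the ink coverage mentioned above:

```python
def rgb_to_cmyk(r, g, b, black_start=0.5):
    """Toy black generation: no K until the neutral component passes
    black_start, then K ramps in and is subtracted from CMY."""
    c, m, y = 1 - r, 1 - g, 1 - b
    grey = min(c, m, y)  # the neutral portion that black could replace
    if grey <= black_start:
        k = 0.0
    else:
        k = (grey - black_start) / (1 - black_start) * grey
    return c - k, m - k, y - k, k

def total_ink(c, m, y, k):
    """Total coverage as a percentage (400 = all four plates solid)."""
    return 100 * (c + m + y + k)
```

A dark red like (0.3, 0, 0) keeps noticeably more magenta and yellow with black_start=0.5 than with 0.25, which mirrors why the Light preset holds saturation in the darks while the ink total climbs.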
You’re probably wondering “why not just make the total ink 400% (all CMYK plates at 100%) and never have to worry about this balance again?” This is the first thing print newbs do and it’s a problem for press printing. Paper on a web press breaks if you add too much ink, and heavy coverage also increases drying time, so it will smudge. Don’t mess with the total ink unless you have received a total ink value from your press. The number goes up as the paper weight goes up; it’s lower for web presses, higher for sheet-fed presses, and higher still for cover stock (I typically set the cover stock ink limit at 320%). It’s a good idea to get this number from the press you’re dealing with, because a higher limit means you have more colour to play with, meaning more saturation.

Pro technique: Mixing CMYK conversions for optimal final images.
By now, you’re probably itching to see some real-world examples of how this affects images and I have an “Extreeeeeeeeeeeeeeeeeeme to the Max!” example for when a mix of two conversions is actually best.
Yes, I know the image is noisy – it’s not a final processed shot, just a pic my girlfriend took of me in Mexico that I thought fit the bill nicely for this post. I should have used a Light conversion (instead of Medium) as my baseline CMYK top layer, but I wanted to show how the default settings completely wash out the detail in the eyes due to the muddy, even mix of the four plates. If I were doing a final retouched image, I’d be mixing and blurring these two layers more and brushing in a bit of faked colour where needed. But this is a solid start.

Black and white images: when black-heavy generation is needed.
When I am dealing with black and white fashion photography or something with greys, I set it to Heavy, since having too much CMY in your greyscale conversions is hard to manage on press. If you’ve ever been to a press check, you know what a nightmare it can be to get the colours to match the colour proof, so letting black handle more of the greyscale balance means less potential for shifting colour values when the press man invariably screws everything up until halfway through the print run. This is also why it’s essential to send an approved colour proof (not any old inkjet) to your press to use as a match whenever you get something printed. Without it in front of them, all the expensive colour calibrators and 30-bit monitors in your office won’t make any difference – you’ll get colours guessed by the people on press and no recourse when you get 50,000 books that look terrible. It’s $50 well spent.
Grab the plug-in here and be rid of this annoying behaviour forever.
GoP update now works with object names so it’s easier to use with named renders like my baked V-Ray Tuner output passes:
The V-Ray Tuner is small but significant. I made a batch bake section and a “DaveBake420” command that takes all your selected objects and renders out baked textures for all active render elements. It even names each rendered image according to the object name. It still needs a few things, but it’s already saved me a lot of time. Check it out:
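For anyone scripting something similar, the per-object naming boils down to sanitizing the Maya node name and tacking on the render element. A hypothetical sketch (the function name and pattern are my own, not the actual Tuner code):

```python
def bake_filename(obj_name, element, ext="exr"):
    """Build a per-object, per-element bake filename from a Maya node name.

    Maya DAG paths use '|' and namespaces use ':', neither of which
    belongs in a filename, so both get flattened to underscores.
    """
    safe = obj_name.replace("|", "_").replace(":", "_")
    return f"{safe}_{element}.{ext}"
```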
Same links as before:
Now that I’ve covered a lot of 3D stuff, let’s talk about some printing stuff, for those who don’t know too much about print. My background is as a magazine art director, and I’ve dealt with some tricky CMYK scenarios through the years. Today was one of those.
This morning I was woken at 5 a.m. by a call from a press check. It was from a rep who was concerned about some graphic lines occasionally thinning in a template graphic I was using. When you’re given a magazine template you’re invariably handed these kinds of elements, so you don’t think much of them, especially something as simple as some grey lines. Well, it seems these grey lines were “special” in the euphemistic sense of “not too bright.” Whoever made them had designed these very thin lines at exactly 45° and then screened them at 50% black:
When you have lines this thin, it’s also a bad idea to try and mix three colours. It’s just not going to line up on a web press that shifts more than you’d like it to. But the real problem here was the angle of the lines combined with screening. I’ll explain why.
For those of you who don’t know how press printing works, colour images are made up of halftoned plates (cyan, magenta, yellow, black) that vary the size of their dots to create a full gamut of saturation and luminance:
Conventional halftoning is done with these angles (if they were all the same, you’d only see one colour): cyan 15°, magenta 75°, yellow 0°, black 45°. Notice the angle of black? It’s exactly the same as the angle of the template graphic above. When the plates are made for the press, the screen-door effect of the halftone alters thin lines, thinning and thickening them depending on the position of the screen. Watch what happens as the halftone screen moves over a 50% grey:
If the lines were 50% cyan and drawn at 15°, we’d be seeing the same effect. This moiré effect should show up in proofs, but the publisher opted for inkjet Sherpa proofs, where the line screen isn’t simulated. So keep this in mind when making your graphics. AND STOP WAKING ME UP.
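The angle relationship can actually be quantified. For two periodic patterns of the same pitch, the beat (moiré) period grows as the angle between them shrinks, and at 0° difference it becomes infinite: a line drawn at exactly the screen angle never drifts back into phase with the dot grid, so it stays thinned or thickened along its whole length. A rough back-of-envelope, assuming equal pitches:

```python
import math

def moire_period(pitch, angle_deg):
    """Beat period between two equal-pitch screens rotated by angle_deg.

    As angle_deg approaches 0 the period blows up, meaning the
    interference stops repeating: a line at the screen angle sits on
    (or off) the dot grid indefinitely.
    """
    if angle_deg == 0:
        return math.inf
    return pitch / (2 * math.sin(math.radians(angle_deg) / 2))
```

This is why the conventional screen angles are spread 15° or more apart: the resulting beat pattern is too fine to notice, whereas a 0° clash, like the template lines against the black plate, never averages out.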
More esoteric CMYK nuggets coming soon.
Maya 2011 switched to Qt as a development platform, and Qt tends to lag a bit for Mac-specific features like application bundles or aliases. One of the casualties in Maya 2011 was the ability to set Photoshop as your image editor. It simply can’t be done without typing the path out:
With Mudbox, you can at least dig into the application bundle and set the Mudbox binary as your app, but this doesn’t work with Photoshop. The workaround is easy: set your image handler preferences in the Finder, per file type:
Make sure to click “Change All.” Do this for all your image formats and, from then on, Maya will open your images in Photoshop. Although this takes a bit longer, it has the added benefit of letting you control which image type is handled by which application. You could have IFF files handled by FCheck, PSD files handled by Photoshop and TGA files handled by Graphic Converter. Plus, you’ll never have to worry about setting it in another version of Maya again.
If you’re looking for a fix for aliases not working with Qt apps like Maya or Nuke, use UNIX symlinks and my Finder Service for OS X. To install it, just put it in your ~/Library/Services/ folder (make one if it doesn’t exist). From then on, Nuke’s and Maya’s Qt file dialogs, which just ignore aliases, will show your symlinks:
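If you’d rather script the symlinks yourself than use the Service, it’s a one-liner on any UNIX system. A minimal sketch in Python (the paths in the usage line are just examples):

```python
import os

def make_symlink(target, link_path):
    """Create a UNIX symlink; Qt file dialogs resolve these,
    unlike Finder aliases, which they treat as opaque files."""
    if os.path.lexists(link_path):  # replace an existing link or file
        os.remove(link_path)
    os.symlink(os.path.abspath(target), link_path)
```

For example, make_symlink("/Volumes/Projects/renders", os.path.expanduser("~/renders")) would give Nuke’s file dialog a ~/renders entry it can actually follow.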
I’m working on a job for Bell Canada and had to insert a USB internet key they sell into a fake laptop on a cover shot. The laptop is a cross between baked textures and a detailed model, which I created in Maya. It’s not going to appear very large, so I can get away with this. Anyway, I just wanted to share some video of the Photoshop Pattern Stamp Tool workflow I use for cleaning up diffuse textures or cloning in 3D (similar to Mudbox).
Since my UVs are very clean (thank you, Headus UV Layout), the latter isn’t strictly needed; I can just plop the image into the texture itself, using Photoshop’s very handy Create UV Overlay feature from the 3D menu to align the screen. This is the best method for things like graphics on a screen or graphic buttons, because you can make the textures into non-destructive Smart Object layers and scale them as much as you like without degrading the quality with every edit.
After some quick shader work in Maya, my fake keyboard and grill/ports are looking pretty good:
Now to fix the specularity on that flat black portion and drop it into our cover mockup:
For those wondering how I got my model into Photoshop, I’m sending it from Maya with my GoP MEL script available here. I have Mudbox but I find that Photoshop is better suited to texturing these finicky hard-surface models, where you’re jumping frequently between 3D and 2D texturing.
People will likely notice that I’ve begun using YouTube for my videos. After an incredibly slow response to Vimeo’s accidental removal of most of our Ars Technica videos (my 3D on the Mac series has been crippled for the past month and a half because of it), I’m not bothering with Vimeo Plus anymore. YouTube also works with Tumblr on the iDevices, which isn’t yet the case with Vimeo. I’m not pleased about it, but it’s been a fiasco trying to get decent support from Vimeo. Money poorly spent.
Yesterday, I picked up a pack of Dosch models to make things a little easier for this visualization job. The problem was that they didn’t come with a PDF or rendered previews, so I had to wait for Adobe Bridge to render previews, which was slow since these are the kind of models that use 300,000 polygons for a coat hanger. Anyway, I knew that Photoshop CS3 and above can open 3D models (I made a texturing workflow for PS), so I figured it could make previews as well. So I made an action (action file download – works on Mac and Windows) and set the Batch settings (these are crucial to having it work):
The Override Action “Save As” forces PS to use the image output format and settings used in the action and it will use the stem name from the 3D file. If you’re on OS X and want to avoid the Action Batch and this stuff, I made a Mac-compatible droplet – grab it here. Here’s the droplet in, uh, action:
After running the batch on all the 3DS files (PS can open 3DS, OBJ and DAE files) and letting it work a little while in the background, my previews were done:
The only thing to be careful of with this script is that, when you use File/Automate/Batch, it will process all the content in the folder, so any JPEGs in there will be overwritten with my settings. It’s best to move the 3D files to another folder and process them from there.
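If you want to automate that staging step, you can copy just the model files into a scratch folder before pointing Batch at it. A quick sketch (the folder layout and function name are my own):

```python
import os
import shutil

MODEL_EXTS = {".3ds", ".obj", ".dae"}  # the 3D formats Photoshop can open

def stage_models(src_dir, dst_dir):
    """Copy only 3D model files into dst_dir so a Photoshop batch
    run on dst_dir can't overwrite JPEGs or other images."""
    os.makedirs(dst_dir, exist_ok=True)
    staged = []
    for name in os.listdir(src_dir):
        if os.path.splitext(name)[1].lower() in MODEL_EXTS:
            shutil.copy2(os.path.join(src_dir, name),
                         os.path.join(dst_dir, name))
            staged.append(name)
    return sorted(staged)
```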
Now, back to work on this ridiculous deadline…
I’m posting this tutorial and Photoshop CS5 action in response to this thread, where someone needed a good method for extending pixel edges of things like foliage for game art. This also works well for creating UV cleanup borders. Run the action on a duplicate of your layer and put it below your existing one; it alters a 1px radius at the edge while extending, so you probably don’t want that on your original layer. Grab the Photoshop CS5 action I made here.
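For anyone who wants the same edge-extension effect procedurally rather than via the action, here is a minimal stand-in I wrote: it floods each opaque texel’s colour outward into transparent neighbours, pass by pass, which is essentially what UV padding does. (A pure-Python toy on nested lists, not the action itself.)

```python
from collections import deque

def extend_edges(pixels, alpha, passes=8):
    """Flood opaque colours outward into transparent texels (UV padding).

    pixels: 2D list of colour values; alpha: 2D list of 0/1 coverage flags.
    Returns a new colour grid; the input and its opaque texels are untouched.
    """
    h, w = len(pixels), len(pixels[0])
    out = [row[:] for row in pixels]
    filled = [row[:] for row in alpha]
    frontier = deque((y, x) for y in range(h) for x in range(w) if alpha[y][x])
    for _ in range(passes):
        next_frontier = deque()
        for y, x in frontier:
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not filled[ny][nx]:
                    out[ny][nx] = out[y][x]  # borrow the nearest opaque colour
                    filled[ny][nx] = 1
                    next_frontier.append((ny, nx))
        frontier = next_frontier
    return out
```

Each pass grows the border by one pixel, so passes controls how far the bleed extends, much like the dilation radius baked into the action.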