Last night, Ricco Saenz added a great post to his blog, Second Sighting, entitled How to actually capture depth of field on your SL photos. In it, he succinctly describes a challenge facing photographers: when using depth of field, we are purposefully blurring a particular part of the image, but only by saving the image at the current screen resolution do we obtain the "correct" result, because saving at a higher resolution alters the depth of field in ways we can't easily anticipate or see in advance. I'd just like to expand on one aspect of this (as Honour McMillan has also done here). (Edit: Ricco just gave me a heads up that Nalates Urriah has also posted, from a more technical perspective, on her always excellent blog here.)
Let's say I take a photo that's set to "current window" (meaning what I'm seeing on the screen right in front of my nose), which in my case is usually around 2550 pixels wide. The top image (2556 pixels wide), taken at bonne chance, shows some telephone poles with wires strung along them, and, if you look closely, you can see that some of the wires are breaking up. (You can click on these images to enlarge them.) Then I took the same image at a custom setting, in this case 4000 pixels wide (the second image), and you can see that the wires look better.
In the third and fourth images, I've zoomed in on the wires and enlarged them. The quality of the higher-resolution image is far superior, and significantly better than what I see on my screen, actually. Circling back to Ricco's blog post: when you're working with depth of field and high resolution, the viewer is, in a sense, going to do battle with itself: the depth of field settings are meant to blur, while the high resolution is intended to provide better detail. So, as Ricco points out, one sometimes has to shoot, then view the resulting image in Photoshop (or whatever your preference is), then shoot again, and so on, until the right balance of detail and depth of field is achieved.