November 16th, 2012, 10:48 AM
Hey, just chiming in a little late, but here's some info that may be of help:
8 vs 10 bit. This actually isn't a measure of the size of your gamut. That is, a higher bit depth does not necessarily mean a larger container for carrying whites and blacks (16 and 32 bit float being another topic). Think of it more like the resolution, or steppiness, of your footage. 8-bit has three channels that can each record a number from 0 to 255. 10-bit has three channels that record from 0 to 1023. So when a pixel on your camera's chip records, say, the red color of a wall, the codec recording the footage has to assign that a number; it could be 233, 20, 15 (R, G, B) or something like that in 8-bit, or roughly 935, 80, 60 in 10-bit. So 10-bit gives you smaller incremental "steps" to choose from. 8-bit, on the other hand, has to make some fairly averaged assumptions about your color and either step up or down to find a good place for it.
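If it helps, here's that mapping as a quick Python sketch (a toy illustration of the code values, not any codec's actual quantizer):

```python
def to_code_value(normalized, bits):
    """Map a normalized 0.0-1.0 sensor value to an integer code value."""
    levels = 2 ** bits - 1            # 255 for 8-bit, 1023 for 10-bit
    return round(normalized * levels)

red = 233 / 255                       # the 8-bit red example above, as a fraction of full scale
print(to_code_value(red, 8))          # 233
print(to_code_value(red, 10))         # 935 -- same color, but finer steps to choose from around it
```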
The result visually is typically that 8-bit footage gets more banding in gradients - like a magic hour sky for example. Because you only have 256 steps, as you're blending through all those colors, you literally hit points where the numbers have to change in large chunks, and that's where you see a banding line.
10-bit lessens this problem. Then of course, you can go even further up from there, usually on the post side, with 16-bit integer and 16- or 32-bit float. All even more precise, but very data intensive, so rarely used on the production side.
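To make the banding point concrete, here's a rough numpy sketch that quantizes a subtle gradient at different bit depths and counts how many distinct values survive (toy numbers, not a real camera pipeline):

```python
import numpy as np

# A subtle left-to-right gradient, like a patch of magic-hour sky, in floating point.
gradient = np.linspace(0.40, 0.45, 1920)   # only a 5% brightness change across the frame

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

for bits in (8, 10, 16):
    distinct = len(np.unique(quantize(gradient, bits)))
    print(bits, distinct)   # ~14 steps at 8-bit (visible bands), ~52 at 10-bit, thousands at 16-bit
```

Those ~14 big steps across the sky are exactly the banding lines you see.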
As for 444 vs 422. This has to do with your X and Y grid of pixels in an image. The first number represents the sampling of your luma; the next two numbers are your two chroma channels. So for every 4 pixels of luma, you have 4 pixels of both channels of chroma in 444. In 422 you have half as many pixels of chroma as luma. The reason this exists is data compression. The human eye sees black and white much better than color, so smart scientists figured out that they can save bandwidth by lessening the color info in an image. Of course, compositing programs and computers are not the human eye, so for them, every drop of resolution is important. With 422 the results can be steppy keys and bad edges. Though 422 is way better than 4:1:1 or 4:2:0, which you'll find on a 5D or GoPro or the like. Most TV shows still finish 422; it's a cost consideration, the data is literally smaller, saving money on storage and transcode/transfer times. But as a VFX Supervisor, I often push for 444 footage for heavy green screen sequences and such.
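If you want to see what that does to an edge, here's a crude numpy sketch of 4:2:2 on one scanline of a chroma channel (just dropping samples, where a real codec would filter, but the idea is the same):

```python
import numpy as np

# One scanline of the Cb (blue-difference) chroma channel with a hard color edge,
# say a foreground subject against a green screen. Luma is left untouched.
cb_444 = np.array([0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9])

cb_422_stored = cb_444[::2]                    # 4:2:2 keeps half the chroma samples horizontally
cb_422_rebuilt = np.repeat(cb_422_stored, 2)   # playback/comps have to interpolate them back up

print(cb_444)           # color edge falls between pixels 2 and 3
print(cb_422_rebuilt)   # edge now falls between pixels 3 and 4 -- shifted and softened, hence chewy keys
```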
Colorspace is really where you will see your whites and blacks preserved. Regardless of the bit depth, different manufacturers have created different log-based algorithms (all based off the filmic log-to-lin concept, a way to store film's massive gamut in digital files) to store what your camera shoots in its smaller container intelligently, allowing you to reverse the algorithm later on in post, where you have a larger container (remember, 16 or 32 bit), preserving lots of highlights and darks and such. Of course, even your average computer monitor is only 8-bit, so your viewing device becomes a bottleneck; you have to rack exposure to even see those extra details when you are working.
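Here's a toy version of that round trip in Python. The constants are made up for illustration; every manufacturer's curve (Cineon, S-Log, LogC, C-Log...) uses its own math:

```python
import numpy as np

# A made-up log curve that squeezes scene-linear values up to 8.0
# (three stops over a 1.0 "white") into a 0..1 signal.
K = np.log2(8.0 * 16.0 + 1.0)

def lin_to_log(lin):
    return np.log2(lin * 16.0 + 1.0) / K

def log_to_lin(signal):
    return (np.power(2.0, signal * K) - 1.0) / 16.0

scene = np.array([0.02, 0.18, 1.0, 4.0])   # shadows, mid grey, white, a bright highlight
encoded = lin_to_log(scene)                 # everything fits inside 0..1 -- the 4.0 highlight isn't clipped
recovered = log_to_lin(encoded)             # reversing the curve in post gets the linear values back
print(np.round(encoded, 3))                 # roughly [0.057, 0.279, 0.583, 0.859]
print(recovered)                            # matches the original scene values (in float)
```

The whole trick is that the highlight sits safely below the top of the container until you undo the curve in a bigger (16/32-bit float) working space.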
You can absolutely compress 8-bit footage in a log format. The 5D has the Technicolor CineStyle curve, which does a good job. Of course, it's not as good as 10-bit, because you still have fewer code values to deal with, so the data is less specific; there's a bigger margin of error in your recording of colors. But it is better for VFX to shoot in a log space, regardless of bit depth. You'll just preserve more detail and get rid of any half-baked camera "look" (usually designed for the consumer to get an immediately gratifying image). Save the "look" process for color correction.
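To put a rough number on that margin of error, here's the same made-up curve from the sketch above, recorded at 8-bit vs 10-bit (again, just an illustration, not the actual CineStyle math):

```python
import numpy as np

# Same toy curve as before: encode to log, quantize to a bit depth, decode back,
# and see how far the recovered linear values drift from the originals.
K = np.log2(8.0 * 16.0 + 1.0)

def lin_to_log(lin):
    return np.log2(lin * 16.0 + 1.0) / K

def log_to_lin(signal):
    return (np.power(2.0, signal * K) - 1.0) / 16.0

def quantize(signal, bits):
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

scene = np.linspace(0.01, 4.0, 10000)          # a sweep of scene-linear values
for bits in (8, 10):
    decoded = log_to_lin(quantize(lin_to_log(scene), bits))
    worst = np.max(np.abs(decoded - scene))
    print(bits, worst)    # the 10-bit recording's worst-case error is roughly 4x smaller
```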
Anyway, I hope this helps.
VFX Creative Director // Encore Hollywood