Now, after some very helpful comments, I'm still not sure of the answer. I'm not confident I can infer the physical geometry of the chip all that well, and it seems the chip can capture more resolution than the camcorder actually records. I think I'll end up using 4:3 for now, and when I get my new G5 with iLife 5 I'll study the frame data size using both methods. The pixel count may be my best guide to which format makes the best use of the sensor. Despite the complexity of the 16:9 format (it can't play back on a non-widescreen TV without some iDVD trickery), it does compensate slightly for the dismal wide angle lenses in today's cruddy camcorder market.
If you want to read more, here's the essay ...
Groan. I thought the still-camera digital vs. 35mm film aspect ratio wars were bad, but this is worse. I highly recommend this site as a resource on this topic; it's also a uniquely great guide to buying a camcorder. I'll start with old-fashioned TV and move on from there ...
Older TVs display images in a 4:3 (width x height) ratio - 1.33. Not coincidentally, this is the ratio of a 640x480 VGA display and of most consumer digital cameras. The TV came first, all else followed. Sort of like the wheelbase of Roman carts allegedly determining road widths in some way.
Except that in the film world 35mm was already around, which is roughly 3:2 - 1.5. So printing consumer digital camera images to "standard" 4x6 prints is a pain, but the images display well on older 4:3 monitors (though not on newer widescreen LCD displays!).
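If you want to see the arithmetic behind that "pain", here's a quick Python sketch -- my own back-of-the-envelope numbers, nothing official -- of how much of a frame gets cropped away when one aspect ratio has to fill another:

def crop_fraction(image_ratio, target_ratio):
    # Fraction of the image lost when it is cropped to completely fill
    # a target with a different width:height ratio.
    if image_ratio >= target_ratio:
        return 1 - target_ratio / image_ratio  # image is wider: trim the sides
    return 1 - image_ratio / target_ratio      # image is taller: trim top and bottom

print(crop_fraction(4 / 3, 3 / 2))  # 4:3 camera image filling a 4x6 print: ~11% lost
print(crop_fraction(3 / 2, 4 / 3))  # 3:2 (35mm) frame filling a 4:3 screen: ~11% lost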
In the movie world things have varied over the past century, but even there 35 mm (1 3/8" wide) film has ruled:
In May 1889 Thomas Edison had ordered a Kodak camera from the Eastman Company and was apparently fascinated by the 70mm roll of film used. Thereupon W.K.L. Dickson of his laboratory ordered a roll of film of 1 3/8" (ca. 35 mm) width from Eastman. This was half the film size used in Eastman Kodak cameras. It was to be used in a new type of Kinetoscope for moving images on a strip of celluloid film, which could be viewed by one person at a time.

But, you say, if movie film is 3:2 (1.5), why are modern movies shown as 16:9 (1.78) in theaters? All I can guess is they are filmed at 1.5 but are then somehow masked to 16:9. If my brother were around he'd explain it to me.
Now to a reasonably modern mid-market camera -- the Optura 50. The sensor on the Optura 50 is a 1/3.4. That, apparently, is a measurement in square inches -- so the sensor is about 0.3 square inches. That's in the "higher range" of single-sensor cameras (which is why the Optura is less "zoomy" than the Elura -- if lens size is fixed, a bigger sensor means less magnification). Tragically the Optura still has crappy wide angle coverage.
But what's the aspect ratio of the sensor? My main clue is the recording pixel count for the still camera, which maxes out at 1632 x 1224 (1.33, or 4:3) [1]. That's our old TV 4:3 aspect ratio, so the sensor is shaped like an old-fashioned TV. Given that aspect ratio and a surface area of 0.3 sq in, the working sensor is about 0.63 x 0.47 inches (sorry for all the US units!). (This assumes the still camera uses the entire sensor -- which is a reach, and in retrospect an unreliable assumption.)
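Here's that calculation in a few lines of Python, just so the shaky assumptions are explicit -- namely my guess that "1/3.4" means about 0.3 square inches, and that the still camera uses the whole chip:

import math

area = 0.3               # assumed sensor area in square inches (my guess, above)
w_units, h_units = 4, 3  # 4:3 aspect ratio implied by the 1632 x 1224 still size

scale = math.sqrt(area / (w_units * h_units))
width, height = w_units * scale, h_units * scale
print(f"{width:.2f} x {height:.2f} inches")  # about 0.63 x 0.47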
So how does the camera manage to produce 16:9 letterbox output -- with all its associated editing and display problems (need iMovie 5.01, need to define the project type, lots of hassles everywhere) -- when the physical aspect ratio is 4:3?! (I don't really know, but if you assume the sensor is physically .63 x .47 inches then ...)
Ahh, that's the bad news. The camera must be using only part of the sensor and adjusting the display appropriately. You can get a good idea of how much is being thrown away by looking at the size of the image on the swing-out LCD viewfinder -- in 16:9 widescreen mode there are black bands above and below the display. If my math is right, the camera uses the full width of the sensor but only 75% (0.35/0.47) of the height. So almost 25% of the sensor data is being discarded in order to produce that spiffy letterbox effect.
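The nice thing is that the 75% figure doesn't depend on my shaky square-inch guess at all -- it's just the ratio of the two aspect ratios. A tiny Python sketch:

sensor_ratio = 4 / 3       # shape of the chip, per the still-image size
widescreen_ratio = 16 / 9  # shape of the letterboxed output

height_used = sensor_ratio / widescreen_ratio  # full width kept, height cropped
print(f"{height_used:.0%} of the sensor height used")  # 75%
print(f"{1 - height_used:.0%} thrown away")            # 25%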
Now if a camera had a true 16x9 sensor that would be a different story!
Update 3/20/05: And NOW, for a contrary opinion!! I'm now leaning to the conclusion that the real constraint is iMovie. So I'll stay with 4:3 until I'm ready to try iMovie 5, then switch.
---- Footnote -- on resolution ---------
[1] The still images are 2 megapixels - idiotic toy camera. The spec sheet says the movie mode uses 1,230,000 pixels, or roughly 1280x960 -- noticeably less than the still camera. That's apparently typical for a mid-range consumer sensor; the image stabilizer uses up some of the pixels that are available for still camera use.
Now, consider how TVs work (this site is also useful). With 525-scan-line NTSC video the image usually displayed is only about 480 lines high, of which 240 are drawn at any one time (alternating fields of 240 lines at 60/sec, so a full 480-line frame at 30/sec). In data terms this is roughly equivalent to a 640x480 display, or about 0.3 megapixels.
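To put the three pixel counts side by side (still mode, movie mode per the spec sheet, and a rough NTSC-equivalent frame), here's a quick Python comparison:

modes = {
    "still, 1632 x 1224": 1632 * 1224,  # about 2.0 megapixels
    "movie, 1280 x 960": 1280 * 960,    # about 1.23 megapixels
    "NTSC-ish, 640 x 480": 640 * 480,   # about 0.31 megapixels
}
for name, pixels in modes.items():
    print(f"{name}: {pixels / 1e6:.2f} megapixels")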
Obviously the sensor can capture a higher resolution than NTSC can represent. So where does all the extra resolution go? Seems like a waste.
PS. This data was kindly posted in response to my query in an Apple support forum; it's based on SONY data:
Maximum playback resolution for different camcorder video sources:
8MM - Up To 240 Lines of Resolution
8MM XR - Up To 280 Lines of Resolution
Hi-8 - Up To 400 Lines of Resolution
Hi-8 XR - Up To 440 Lines of Resolution
----- the NTSC signal standard maxes out above here -------------
D8 (Digital 8) - Up To 500 Lines of Resolution
Mini DV - Up To 530 Lines of Resolution
High Definition - Up To 1080 Interlaced Lines of Resolution
----------
Updated 3/19: trying to correct my most egregious errors.