PHYSICS, PSYCHOPHYSICS, and VISION for ADVANCED MOTION IMAGING
Presented by Charles Poynton & John Watkinson
In the development of motion imaging systems beyond 1080p HD, various proponents have proposed (and in some cases deployed) three schemes alleged to improve quality: (i) bit depths beyond the 10-bit colour components that are standard for studio HD, (ii) pixel counts (“4K” and “8K”) higher than the 1920×1080 of HD, and (iii) frame rates higher than 60 Hz. However, the tradeoffs among these options are not well understood; it is not immediately clear into which of these three arenas additional bits should be placed in order to achieve the maximum improvement in perceptual quality. Also, traditional image quality criteria for moving images have mainly been carried over from still imaging; few criteria are available to assess motion portrayal. Finally, motion portrayal is often described by terms such as “strobing” that have no consistent definition.
In this course, we address the fundamental physics constraints of motion image cameras and displays, and we reach into psychophysics to understand how the visual system interacts with the physics. We describe the properties of vision that relate to the choice of bit depth, and their dependence upon visual adaptation and absolute luminance (of the display and the viewing environment). The fundamental concept of eye tracking by the viewer is described. The key concept that links physics to vision is the optic-flow axis; that concept clarifies the mechanisms that cause loss of resolution in the presence of motion. Such losses are characterized by dynamic resolution.
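The link between eye tracking and dynamic resolution can be illustrated with a simple back-of-envelope calculation (a minimal sketch, not part of the course materials; the function name and parameters are illustrative): while the viewer's eye tracks a moving object, a sample-and-hold display keeps the image fixed for the frame's hold time, so the image smears across the retina.

```python
# Hypothetical sketch: retinal smear for an eye-tracked object on a
# sample-and-hold display. The eye moves continuously; the displayed
# image stays put for the hold time, producing blur along the motion.

def tracking_blur_px(speed_px_per_s: float,
                     frame_rate_hz: float,
                     hold_fraction: float = 1.0) -> float:
    """Retinal smear, in pixels, for an eye-tracked object.

    speed_px_per_s : object motion across the screen
    frame_rate_hz  : display frame rate
    hold_fraction  : fraction of the frame period the image is lit
                     (1.0 = full sample-and-hold; smaller = shorter flash)
    """
    return speed_px_per_s * hold_fraction / frame_rate_hz

# A pan of one HD screen width per second (1920 px/s):
blur_60 = tracking_blur_px(1920, 60)    # 32 px of smear at 60 Hz
blur_120 = tracking_blur_px(1920, 120)  # 16 px at 120 Hz
```

Under these assumptions, doubling the frame rate (or halving the hold fraction) halves the smear, which is one way to see why the time axis competes directly with pixel count for perceptual benefit.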
Upon completing the course, attendees will:
- Understand the advantages of spatial oversampling in real sensors and displays
- Understand the principles of eye tracking in image portrayal
- Understand why dynamic resolution is an important metric for moving image analysis
- Correctly define and distinguish flicker, judder, and strobing
- Understand the optic flow axis and its key role in imaging
- Understand how frame-repeat or multiple flashing damages resolution
- Be aware of the effects that impair resolution in the presence of motion in sensors, processes and displays
- Be able to determine the optimum frame rate for a given moving image portrayal system
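The frame-repeat objective above can also be made concrete with arithmetic (a hypothetical sketch, not from the course; names and figures are illustrative): when each frame is flashed several times, a tracking eye keeps moving between flashes while the image does not, so the flashes land at different retinal positions and a single object is seen as multiple displaced images.

```python
# Hypothetical sketch: retinal positions of the multiple images seen by
# a tracking eye when each frame is flashed more than once (e.g. 24 fps
# film triple-flashed at 72 Hz by a projector shutter).

def ghost_positions_px(speed_px_per_s: float,
                       frame_rate_hz: float,
                       flashes_per_frame: int) -> list:
    """Retinal offsets, in pixels, of each flash within one frame.

    Between flashes the eye advances by speed * flash_interval while
    the image stays fixed, so each flash is displaced on the retina.
    """
    flash_interval = 1.0 / (frame_rate_hz * flashes_per_frame)
    return [speed_px_per_s * flash_interval * k
            for k in range(flashes_per_frame)]

# 720 px/s motion, 24 fps triple-flashed:
ghost_positions_px(720, 24, 3)  # [0.0, 10.0, 20.0] — three images 10 px apart
```

The same total spread that a single long hold would smear into blur is here broken into discrete displaced copies, which is why frame repeat damages dynamic resolution rather than preserving it.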
Charles Poynton specializes in the physics, mathematics, and engineering of digital colour imaging systems, including HD and digital cinema (D-cinema). He is the author of Digital Video and HD: Algorithms and Interfaces, recently published in its second edition. He is a Fellow of the Society of Motion Picture and Television Engineers (SMPTE). Twenty years ago, he chose the number 1080 (as in 1080p and 1920×1080) for HD and digital cinema standards, thereby establishing “square pixels” for HD; for this accomplishment, he was awarded SMPTE’s David Sarnoff Gold Medal. In 1998, he was responsible for the creation of the Adobe RGB (1998) colourspace. He ordinarily keeps his passport up to date, but in any event is highly available by Skype.
John Watkinson is a Chartered Engineer, a Fellow of the AES, and a world authority on digital audio and digital imaging. He has written some 25 books on associated subjects, including data storage and compression. He presents seminars on the technologies of audio, television, and cinematography. He has developed simple yet accurate models that describe the portrayal of moving pictures, and has proposed dynamic resolution as a meaningful metric of motion picture performance. He has shown that the time axis, hitherto largely ignored in moving images, is enormously important, and that no progress will be made unless it is given appropriate consideration.