Augmented Perception of High Dynamic Range Images via Optimal Tone Mapping
Abstract
The human visual system (HVS) has quite limited power to discriminate between different light intensity levels. According to DICOM, the medical imaging standard, under ideal viewing conditions radiologists can distinguish only up to 465 levels of gray. On the other hand, modern image acquisition devices can generate digital images whose intensity resolution easily exceeds 10 bits (1024 levels), and the intensity resolution of modern optoelectronic displays can reach 10 bits as well. Therefore, there exists a substantial gap between the achievable precision of devices and the discrimination capability of the HVS in pixel amplitude. This gap presents a serious obstacle in applications where images of high dynamic range (HDR) are scrutinized and acted upon ultimately by human viewers; for instance, in medical imaging it is the radiologist, not a computer, who makes diagnostic and clinical decisions, relying primarily on his/her visual examination of the input HDR image.
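The scale of this gap can be made concrete with a small back-of-the-envelope calculation. The numbers below are taken directly from the abstract; the snippet is only an illustration of the pigeonhole argument, not part of the speaker's method.

```python
# Illustration of the precision gap described above, using the figures
# quoted in the abstract: a 10-bit device (1024 levels) versus roughly
# 465 gray levels distinguishable by a radiologist (per DICOM).
device_levels = 2 ** 10        # intensity levels produced by the device
perceivable_levels = 465       # gray levels the HVS can tell apart

# By the pigeonhole principle, any direct display of the image must merge
# at least this many device levels into visually identical shades:
merged = device_levels - perceivable_levels
print(f"At least {merged} of {device_levels} levels become indistinguishable.")
```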
Tone mapping is required to display HDR images. Its objective is to augment human vision, so that viewers can see details that are otherwise indiscernible to the naked eye. As tone mapping is a process of compressing the intensity dynamic range, it inevitably causes loss and/or distortion of information. The problem exists even for professional-grade HDR displays, for the simple reason that the amplitude quantization precision of pixels exceeds the raw discrimination power of the HVS. Therefore, the challenge of HDR tone mapping is how to make as much wanted information conspicuous as possible while preventing or minimizing information loss and/or visual artifacts. We cast the problem into a framework of constrained optimization and propose a linear programming algorithm to solve it. Our approach represents a fundamental departure from existing ad hoc HDR tone mapping techniques; it improves the perceptual quality of displayed HDR images without the artifacts of HDR compression (e.g., double edging, halos, embossment, etc.).
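To make the constrained-optimization framing more tangible, here is a minimal sketch of one plausible way to pose a global tone-mapping curve as a linear program: display range is allocated to input levels in proportion to a histogram-weighted contrast objective, subject to the display's total dynamic range and a per-level slope cap. The abstract does not specify the speaker's exact objective or constraints, so this formulation, and the function `lp_tone_curve` below, are illustrative assumptions rather than the algorithm presented in the talk.

```python
import numpy as np
from scipy.optimize import linprog

def lp_tone_curve(hist, display_levels, max_slope=4.0):
    """Sketch: allocate display range to input levels by solving an LP.

    hist            -- histogram (frequency) of the HDR input levels
    display_levels  -- number of output levels, e.g. 256 for an 8-bit display
    max_slope       -- cap on output levels spent per input level
    """
    p = hist / hist.sum()                    # probability of each input level
    # Maximize sum_k p[k] * s[k] (expected contrast gain), i.e. minimize -p.s
    c = -p
    # The total allocated output range must fit within the display range.
    A_ub = np.ones((1, len(p)))
    b_ub = [display_levels]
    bounds = [(0.0, max_slope)] * len(p)     # non-negative, bounded slopes
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    # The cumulative sum of non-negative slopes is a monotone transfer curve.
    return np.clip(np.cumsum(res.x), 0, display_levels - 1)

# Example: compress the levels of a synthetic 12-bit image into 8 bits.
rng = np.random.default_rng(0)
hist = np.bincount(rng.integers(0, 4096, 100_000), minlength=4096)
curve = lp_tone_curve(hist, display_levels=256)
```

A lower bound on the slopes could be added in the same way to preserve tonal continuity in sparsely populated intensity ranges; the sketch keeps only the two constraints needed to show the LP structure.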
Speaker: Prof. Xiaolin Wu
Date & Time: 3 Sep 2014 (Wednesday) 15:00 - 16:00
Venue: E11-4045
Organized by: Department of Computer and Information Science
Biography
Xiaolin Wu, B.Sc., Wuhan University, 1982; Ph.D., University of Calgary, Canada, 1988. Dr. Wu started his academic career in 1988, and has since been on the faculty of the University of Western Ontario, the Polytechnic Institute of New York University (NYU-Poly), and currently McMaster University, where he is a professor in the Department of Electrical & Computer Engineering. His research interests include image processing, multimedia signal coding and communication, joint source-channel coding, multiple description coding, and network-aware visual communication. He has published over two hundred research papers and holds three patents in these fields. Dr. Wu is an IEEE Fellow, a past associate editor of the IEEE Transactions on Image Processing and the IEEE Transactions on Multimedia, a member of the IEEE Industrial DSP standing committee, and a recipient of numerous international awards and honors, including an NSERC Senior Industrial Research Chair, the Velux Fellowship, the Nokia Research Fellowship, the Monsteds Fellowship, the McMaster Distinguished Engineering Professorship, the UWO Distinguished Research Professorship, and the 2008 VCIP Best Paper Award.