- High dynamic range creation

Natural scenes commonly present a wide dynamic range, and the human visual system is able to capture subtle details in both dark and bright areas. This is not the case for standard digital cameras, which are limited in the dynamic range they can represent: a standard camera captures different intervals of the luminance range at different exposure times, with bright areas captured at short exposures and dark areas at longer ones. High dynamic range imaging aims at fusing several camera outputs into a new image with information in both the dark and bright areas of the scene.
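
As a rough illustration, here is a minimal single-scale exposure fusion sketch (numpy assumed), weighting each exposure per pixel by its well-exposedness in the spirit of Mertens et al.; real pipelines add contrast and saturation weights and multiscale blending, and [GVB 2015] analyzes the intrinsic error of this kind of fusion.

```python
import numpy as np

def exposure_fusion(stack, sigma=0.2):
    """Single-scale exposure fusion of a list of aligned exposures.

    stack: list of float RGB images in [0, 1], each of shape (H, W, 3).
    Weights favour well-exposed pixels (values near mid-gray).
    """
    weights = []
    for img in stack:
        # Well-exposedness: Gaussian bump around 0.5, multiplied
        # over the three channels as in Mertens et al.
        w = np.exp(-0.5 * ((img - 0.5) / sigma) ** 2).prod(axis=2)
        weights.append(w + 1e-12)          # avoid division by zero
    weights = np.stack(weights)            # (N, H, W)
    weights /= weights.sum(axis=0)         # normalize per pixel
    return sum(w[..., None] * img for w, img in zip(weights, stack))
```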

Related Publications:

[GVB 2015] -  The Intrinsic Error of Exposure Fusion for HDR Imaging, and a Way to Reduce it



- Image dehazing

Images obtained under adverse weather conditions, such as haze or fog, typically exhibit low contrast and faded colors, which may severely limit the visibility within the scene. Unveiling the image structure under the haze layer and recovering vivid colors from a single image remains a challenging task, since the degradation is depth-dependent and conventional contrast-enhancement methods cannot compensate for it.
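
For context, the sketch below implements the classic dark-channel-prior baseline (He et al.) for the haze model I = J·t + A·(1 − t); it is not the fusion-based variational method of the publications below, and the patch size and omega values are only illustrative defaults.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Toy single-image dehazing via the dark-channel prior.

    img: float RGB image in [0, 1], shape (H, W, 3).
    """
    # Dark channel: minimum over channels and over a local patch.
    dark = minimum_filter(img.min(axis=2), size=patch)
    # Atmospheric light A: mean colour of the haziest 0.1% of pixels.
    flat = img.reshape(-1, 3)
    idx = np.argsort(dark.ravel())[-max(1, dark.size // 1000):]
    A = flat[idx].mean(axis=0)
    # Transmission estimate from the dark channel of I / A.
    norm_dark = minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(1.0 - omega * norm_dark, t0, 1.0)
    # Invert the haze model to recover the scene radiance J.
    J = (img - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```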

Related Publications:

[GVP 2016] -  Fusion-based variational image dehazing
[GVP 2015] -  Enhanced variational image dehazing

- Blind gamma estimation

Blind gamma estimation is the problem of estimating the gamma function that has been applied to a linear image, both for perceptual reasons and to compensate for the non-linear behaviour of displays. Gamma values change both inter- and intra-camera; in the latter case, the change comes from the use of different scene settings.
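
As a toy illustration only: the sketch below grid-searches a gamma under the hypothetical assumption that the underlying linear image has a known mean key (0.18, the photographic mid-gray). This is not the method of [VaB 2014], which estimates gammas blindly and simultaneously from images of the same scene.

```python
import numpy as np

def estimate_gamma(img, target_mean=0.18, gammas=np.linspace(1.0, 3.0, 201)):
    """Crude blind gamma estimation by grid search.

    img: gamma-encoded grayscale image with values in (0, 1].
    Assumes (hypothetically) that the linearized image should have
    a mean of target_mean; picks the candidate gamma closest to it.
    """
    y = np.clip(img, 1e-6, 1.0)
    errors = [abs(np.mean(y ** g) - target_mean) for g in gammas]
    return gammas[int(np.argmin(errors))]
```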

Related Publications:
[VaB 2014] -  Simultaneous blind gamma estimation



- Color stabilization

We expect two pictures of the same scene, taken under the same illumination, to be consistent in terms of color. However, even with a single camera, automatic white balance (AWB) and/or automatic exposure (AE) correction commonly make objects in the scene appear with different colors in the two shots, and the problem is aggravated when different cameras are used. The goal of color stabilization is to convert one of the images so that it matches the other exactly in terms of color.
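
A minimal sketch of the linear part of the problem, assuming colour correspondences are available (e.g. from registered pixels or matched keypoints); the cited method additionally estimates the unknown camera nonlinearities (gammas) before solving for the transform.

```python
import numpy as np

def stabilization_matrix(src, ref):
    """Least-squares 3x3 colour transform mapping src colours to ref.

    src, ref: (N, 3) arrays of corresponding RGB values from the
    two shots. A linear sketch only.
    """
    M, _, _, _ = np.linalg.lstsq(src, ref, rcond=None)
    return M

# Usage: corrected = (img.reshape(-1, 3) @ M).reshape(img.shape)
```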

Related Publications:
[VaB 2014] -  Color stabilization across time and along shots of the same scene for one or several cameras of unknown specifications



- Color characterization

Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. We have proposed a method that aims at minimizing the perceptual error of the characterization.
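
For illustration, the standard least-squares baseline is sketched below, together with a CIE76 measurement of its perceptual error on the chart (numpy assumed, D65 white point); the cited work minimizes a perceptual error directly instead of fitting in RGB/XYZ.

```python
import numpy as np

def characterize(rgb_chart, xyz_chart):
    """Baseline characterization: 3x3 camera-RGB-to-XYZ matrix
    fitted by least squares on (N, 3) colour-chart measurements."""
    M, _, _, _ = np.linalg.lstsq(rgb_chart, xyz_chart, rcond=None)
    return M

def xyz_to_lab(xyz, white=(0.9505, 1.0, 1.0888)):
    """CIE 1976 L*a*b* from XYZ (D65 white assumed)."""
    t = xyz / np.asarray(white)
    d = 6.0 / 29.0
    f = np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4.0 / 29.0)
    fx, fy, fz = f[..., 0], f[..., 1], f[..., 2]
    return np.stack([116*fy - 16, 500*(fx - fy), 200*(fy - fz)], axis=-1)

# Perceptual error of the fit: mean CIE76 colour difference.
# M = characterize(rgb_chart, xyz_chart)
# dE = np.linalg.norm(xyz_to_lab(rgb_chart @ M) - xyz_to_lab(xyz_chart), axis=1)
```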

Related Publications:
[VCB 2014] - Perceptual color characterization of cameras


- Gamut Mapping

Gamut mapping transforms the colors of an input image to the gamut of a target device, so as to exploit the full color potential of the rendering device. This problem is highly relevant in industry, as new displays with large gamut capabilities are reaching the market. We have proposed different solutions based on applying iterative schemes to a perceptually-inspired variational method.
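
As a purely geometric toy (not the variational scheme of the publications below), gamut reduction can be illustrated by compressing each colour toward its achromatic axis until it fits a target RGB cube:

```python
import numpy as np

def compress_to_gamut(img):
    """Toy gamut reduction: move each colour toward its per-pixel
    gray until it fits inside [0, 1]^3.

    img: float RGB image, possibly with out-of-gamut values.
    Global desaturation only; the cited methods instead use an
    iterative, perceptually-inspired variational scheme.
    """
    gray = img.mean(axis=2, keepdims=True)
    d = img - gray
    # Largest per-pixel scale s in [0, 1] with gray + s*d in [0, 1].
    with np.errstate(divide="ignore", invalid="ignore"):
        hi = np.where(d > 0, (1.0 - gray) / d, np.inf)
        lo = np.where(d < 0, -gray / d, np.inf)
    s = np.clip(np.minimum(hi.min(axis=2), lo.min(axis=2)), 0.0, 1.0)
    return gray + s[..., None] * d
```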

Related Publications:

[ZVB 2017] -  Gamut extension for cinema

[ZVB 2014] -  Gamut mapping in cinematography through Perceptual-based contrast modification


- Image denoising

Noise is present in images due to the inherent physical and technological limitations of cameras, and its presence degrades the quality of the captured images. Therefore, image denoising is a must-have step in the digital image processing pipeline. However, very little attention has been paid to how color information can be used for this goal. We have proposed a color decomposition framework to pre-process an image before applying a typical denoising method.
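
A minimal sketch of the idea, with a Gaussian filter standing in for a real denoiser and the exact transform of [VaB 2018] replaced by a simple norm/direction split of each RGB vector:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def angular_preprocess(img, sigma=1.0):
    """Sketch of a colour decomposition before denoising: split each
    RGB vector into its norm (brightness) and unit direction
    (chromaticity), smooth the direction, and recombine.

    img: float RGB image, shape (H, W, 3). Treat this as a toy:
    the smoothing here is a placeholder for a standard denoiser.
    """
    norm = np.linalg.norm(img, axis=2, keepdims=True)
    direction = img / np.maximum(norm, 1e-12)
    smooth = np.stack([gaussian_filter(direction[..., c], sigma)
                       for c in range(3)], axis=2)
    # Renormalize the smoothed directions back to unit length.
    smooth /= np.maximum(np.linalg.norm(smooth, axis=2, keepdims=True), 1e-12)
    return norm * smooth   # brightness untouched, chromaticity denoised
```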

Related Publications:

[VaB 2018] -  Angular-based preprocessing for image denoising


- Computational Colour Constancy

Colour constancy is the ability of the human visual system to perceive a stable representation of colour despite illumination changes. Computational colour constancy tries to emulate this ability, that is, to recover the illuminant of a scene from an acquired image.
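
The classic grey-world baseline below illustrates the computational problem (estimate the illuminant, then correct via von Kries scaling); it is not the category-correlation method of [VVB 2012]:

```python
import numpy as np

def gray_world(img):
    """Grey-world illuminant estimate: assume the average scene
    reflectance is achromatic, so the mean image colour is the light.

    img: float RGB image, shape (H, W, 3).
    """
    illum = img.reshape(-1, 3).mean(axis=0)
    illum /= np.linalg.norm(illum)            # illuminant direction
    corrected = img / (illum * np.sqrt(3))    # von Kries correction
    return illum, np.clip(corrected, 0.0, 1.0)
```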

Related Publications:

[VVB 2012] -  Color Constancy by Category Correlation
[VPV 2009] -  Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset

 
- Colour processing in the brain

Simulating the processing of colour in the brain will help in developing new imaging techniques. In this line, we defined a new set of sensors able to fit a wide range of psychophysical data.

Related Publications:
[VOV 2012] -  A new spectrally sharpened sensor basis to predict color naming, unique hues, and hue cancellation

 
- Sensor sharpening

Sensor sharpening models new sensors, built as combinations of the original ones, under which the von Kries (diagonal) model of illuminant change holds more accurately. It is usually used as a preprocessing step for computational colour constancy. In particular, we devised a method called Spherical Sampling that allowed us to effectively search for the optimal RGB sensor combination to perform color constancy.
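
A rough sketch of the spherical-sampling idea: score unit vectors on the sphere (here sampled randomly) by how well the single sensor they define obeys the diagonal model between two illuminants; [FVS 2012] samples the sphere systematically and optimizes full sensor triplets.

```python
import numpy as np

def spherical_sampling(resp_A, resp_B, n_samples=10000, seed=0):
    """Toy spherical-sampling search for a sharpened sensor.

    resp_A, resp_B: (N, 3) camera responses of the same N surfaces
    under two illuminants. A unit vector v combines R, G, B into a
    new sensor; under the von Kries (diagonal) model its responses
    change by a single scale factor, so resp_A @ v and resp_B @ v
    should be parallel.
    """
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    a, b = resp_A @ v.T, resp_B @ v.T                 # (N, n_samples)
    cos = (a * b).sum(axis=0) / (
        np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0))
    return v[int(np.argmax(np.abs(cos)))]             # most von Kries-like
```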

Related Publications:

[VaB 2014] - Spectral sharpening of color sensors: Diagonal Color Constancy and Beyond
[FVS 2012] - Spectral Sharpening by Spherical Sampling

