Learning to See in the Dark
Published at the Conference on Computer Vision and Pattern Recognition (CVPR 2018)
Presented by: Nafiseh Vahabi
Challenges of Fast Imaging in Low Light
Low photon count and low SNR
Short-exposure images suffer from noise
Long exposure can induce blur and is often impractical
These challenges are addressed by:
Introducing a dataset of raw short-exposure low-light images with corresponding long-exposure reference images
Developing a pipeline for processing low-light images, based on end-to-end training of a fully convolutional network
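The low-SNR problem follows from photon statistics: photon arrivals are Poisson-distributed, so the SNR of a pixel grows only as the square root of the photon count. A minimal simulation (illustrative only, not from the paper) makes this concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_snr(mean_photons, trials=100_000):
    """Simulate photon shot noise: counts are Poisson-distributed,
    so SNR = mean / std is approximately sqrt(mean_photons)."""
    counts = rng.poisson(mean_photons, size=trials)
    return counts.mean() / counts.std()

# Short exposure (few photons) vs. long exposure (many photons)
snr_short = simulate_snr(10)      # about sqrt(10), i.e. 3.2
snr_long = simulate_snr(10_000)   # about sqrt(10000), i.e. 100
```

A 1000x longer exposure thus buys only about a 30x improvement in SNR, which is why short-exposure low-light frames are so noisy.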
Camera Setting
The network operates directly on raw sensor data
Camera setting (two cameras: Sony α7S II and Fujifilm X-T2):
Illumination: < 0.1 lux at the camera
Exposure time: 1/30 s
Aperture: f/5.6
SID (See-in-the-Dark) Dataset
The dataset contains 5094 raw short-exposure images, each with a corresponding long-exposure reference image.
Example of Dataset
Outdoor illumination: between 0.2 and 5 lux
Existing Methods
ConvNet Structure
Two structures were evaluated:
Multi-scale context aggregation network
U-Net (default)
Memory consumption matters: the structure must allow processing a full-resolution image in GPU memory
The amplification ratio determines the brightness of the output and is set externally
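Before the network sees the raw data, the paper packs the Bayer mosaic into four half-resolution channels, subtracts the black level, and multiplies by the externally set amplification ratio. A sketch of that preprocessing (the black/white levels are typical 14-bit sensor values, assumed here for illustration):

```python
import numpy as np

def pack_bayer(raw, black_level=512, white_level=16383, ratio=100):
    """Pack a Bayer raw image of shape (H, W) into four half-resolution
    channels, subtract the black level, normalize, and apply the
    amplification ratio. Levels are assumed typical 14-bit values."""
    norm = np.clip((raw.astype(np.float32) - black_level)
                   / (white_level - black_level), 0, 1)
    packed = np.stack([norm[0::2, 0::2],   # R
                       norm[0::2, 1::2],   # G
                       norm[1::2, 0::2],   # G
                       norm[1::2, 1::2]],  # B
                      axis=-1)
    # Amplification ratio brightens the input; it is chosen externally
    return np.clip(packed * ratio, 0, 1)

x = pack_bayer(np.full((4, 4), 600, dtype=np.uint16))
# x has shape (2, 2, 4): half the spatial resolution, 4 channels
```

Packing halves the spatial resolution, which is one reason a full-resolution raw frame fits in GPU memory during inference.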
Result of Two Structures
Training
Trained from scratch using the L1 loss and the Adam optimizer
Input: raw short-exposure image data
Ground truth: the corresponding long-exposure image in sRGB
A separate network is trained for each camera
The amplification ratio is set to the exposure-time difference between the input and reference images
Learning rate: 0.0001
Training runs for 4000 epochs
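The training recipe (L1 loss minimized with Adam) can be sketched on a toy problem. Here a single scale parameter stands in for the network weights; the learning rate is raised from the paper's 1e-4 so the toy converges quickly, and everything except the loss/optimizer choice is a made-up illustration:

```python
import numpy as np

def adam_l1_fit(x, y, lr=1e-2, steps=2000,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """Minimize mean-absolute-error (L1) loss with Adam on a toy
    one-parameter model y = a * x, standing in for the network."""
    a, m, v = 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        pred = a * x
        # Subgradient of mean |pred - y| with respect to a
        g = np.mean(np.sign(pred - y) * x)
        m = beta1 * m + (1 - beta1) * g        # first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        a -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return a

xs = np.linspace(0.1, 1.0, 50)
a_fit = adam_l1_fit(xs, 0.5 * xs)  # recovers a close to 0.5
```

The L1 loss is a common choice for image restoration because, unlike L2, it does not over-penalize occasional large residuals and tends to produce less blurry outputs.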
Evaluation of the Proposed Method
Conclusion
Successful noise suppression
Correct color transformation
Limitations:
No humans or dynamic objects in the dataset
The results are imperfect and leave room for improvement
The amplification ratio must be set externally
Runtime optimisation remains to be improved