Trends in CMOS Image Sensor Technology and Design Abbas El Gamal Department of Electrical Engineering Stanford University
CCD Image Sensors: High QE and low dark current. Serial readout: slow readout, complex clocking and supply requirements, high power consumption. Cannot integrate circuitry on chip.
CMOS Image Sensors: Memory-like readout enables high speed operation, low power consumption, region-of-interest readout, and integration. Enables new applications: embedded imaging, high dynamic range, biometrics, 3D imaging. [Block diagram: pixel array with row decoder driving word lines, bit lines feeding column amplifiers/caps and column ADC/mux.]
Image Sensor Market: [Chart: projected unit shipments (thousands of units) per year, 2001 to 2006, CMOS vs. CCD.] Source: In-Stat/MDR, 8/02
CMOS Image Sensors Today: Most sensors are application-specific (e.g., optical mouse) or low end (PC cameras, toys). Fabricated in old (0.6 to 0.35 µm) processes, so integration is limited. Lower performance than CCDs, so not used in digital cameras, with some exceptions (Canon D30/D60). Most CMOS sensors shipped today fall short of realizing the full potential of CMOS integration.
Technology and Design Trends Recent developments in: Silicon processing Color Filter Array and Microlens Miniaturized packaging Pixel design Camera-on-chip Promise to broaden CMOS image sensor applicability and enhance their performance
This Talk: Silicon processing: sub-micron CMOS process modifications. Applications of modified processes: triple-well photodetector, integrated color pixel, multi-mega pixel sensors. Camera-on-chip integration: pixel-level ADC (Digital Pixel Sensor), high frame rate sensors and applications, high dynamic range.
Scaling: CMOS image sensors have benefited from scaling: smaller pixels, higher fill factor, greater pixel functionality (PPS to APS). Need a 0.18 µm or finer process for camera-on-chip integration.
Problems with Standard CMOS: Low photoresponsivity -- shallow junctions, high doping. High junction leakage -- STI, salicide. High transistor leakage -- off-current, thin gate oxide. Poor analog circuit performance. Wong, IEDM '96
Improving Photoresponsivity: Deeper, non-silicided, lightly doped diode junctions (NW/PSUB, Ndiff/PSUB). High-transmittance SiON materials. Micro-lens and CFA integration. [Figure: pixel cross-section showing microlens, microlens spacer, microlens overcoat, color filter, and planarization layer; SEM photograph of a 3.3 µm pixel. Courtesy of TSMC]
QE of 0.18 µm CMOS Photodiode: [Chart: quantum efficiency vs. wavelength (nm). Courtesy of TSMC/Pixim]
Reducing Leakage: Junction leakage reduction: non-silicided double-diffused source/drain implants, hydrogen annealing, pinned diode. Transistor leakage reduction: thick gate oxide transistors, thresholds adjusted to increase voltage swing. Leakage below 1 nA/cm2 achieved. Wuu, IEDM 2001
Drawbacks of Color Filter Array: Loss of resolution and aliasing. Color cross-talk. Increased microlens-to-photodetector distance. Added manufacturing steps and cost. CFA colored photoresists are subject to far more process limitations.
Triple-Well Photodetector (Foveon): [Schematic: pixel cross-section with stacked junctions (N-LDD, P-well, N-well, P-substrate) giving separate blue, green, and red column outputs; per-color reset and row-select devices with bias voltages Vn, Vp, Vcc.]
Triple-well: Advantages: no loss of resolution, elimination of photon loss due to the CFA, elimination of color cross-talk. Challenges: larger pixel size (fewer pixels than standard sensors for the same area), high spectral overlap between the three color channels, fabrication and circuit operation.
Spectral Response: [Charts: relative response vs. wavelength (400 to 700 nm) for the red, green, and blue channels; triple-well (courtesy Foveon, Dick Lyon) vs. CFA (courtesy TSMC).]
Integrated Color Pixel: Light filters built from patterned metal layers; color selectivity is integrated in the pixel, with wavelength selectivity controlled by the metal lines. Make the silicon thicker. Catrysse, IEDM 2001
Enabling CMOS Technology: Why hasn't this been done before for image sensors? The smallest pattern period in a given technology sets the shortest wavelength that can be selected: 0.25 µm: 320 nm x 2 x 1.5 = 960 nm; 0.18 µm: 230 nm x 2 x 1.5 = 690 nm; 0.13 µm: 170 nm x 2 x 1.5 = 510 nm. Only at 0.13 µm does the minimum reach into the visible range (roughly 400 to 700 nm).
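The slide's arithmetic can be restated as one line per node. The sketch below is my reading of those numbers: the smallest pattern period is twice the minimum metal linewidth, and the factor of 1.5 (assumed here to be the refractive index of the oxide above the photodiode) converts that period into the shortest free-space wavelength the pattern can usefully select.

    # Sketch of the slide's arithmetic (my reading; the 1.5 factor is assumed
    # to be the refractive index of the oxide stack above the photodiode).
    OXIDE_INDEX = 1.5

    # Minimum metal linewidth (nm) per technology node, from the slide.
    min_linewidth_nm = {"0.25 um": 320, "0.18 um": 230, "0.13 um": 170}

    for node, width in min_linewidth_nm.items():
        period = 2 * width                      # smallest line + gap period
        min_wavelength = period * OXIDE_INDEX   # shortest selectable free-space wavelength
        print(f"{node}: period {period} nm -> min wavelength ~{min_wavelength:.0f} nm")

    # 0.25 um -> ~960 nm, 0.18 um -> ~690 nm, 0.13 um -> ~510 nm: only 0.13 um
    # reaches comfortably into the visible band (~400-700 nm).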
Integrated Color Pixel: Using metal patterns above each photodetector, wavelength selectivity can be controlled. Needs a 0.13 µm process, or multiple layers in 0.18 µm, for good selectivity in the visible range.
1D ICPs Under Imaging Conditions: [Chart: transmittance vs. wavelength (400 to 900 nm) for pixels with fixed metal width (approx. 260 nm) and increasing gap width.] Selectivity is present in the visible range, but not ideal. Keeping the metal width fixed and increasing the period by increasing the duty cycle is a good design approach. Spectral tuning under polarized, collimated illumination is not sufficient for imaging: under imaging conditions light is unpolarized and wide-angle, simulated here with an integrating sphere (an extreme case of wide-angle illumination, -180 to 180 degrees).
Multiple Layers in 0.18 µm CMOS: [Chart: transmittance vs. wavelength (400 to 800 nm), one metal layer vs. two layers.]
Layer Alignment in 0.18 µm CMOS: [Chart: transmittance vs. wavelength (400 to 800 nm), two layers aligned vs. two layers offset.]
Scaling to 0.13 µm CMOS: [Charts: transmittance vs. wavelength (400 to 800 nm), 0.18 µm vs. 0.13 µm.]
Multi-Mega Pixel Sensors: Memory-like readout gives CMOS image sensors an advantage over CCDs (Kozlowski et al., IEDM 1999). Recent examples: Kodak DCS Pro 14n (13.7 megapixels), Canon 1Ds (11 megapixels), Foveon 10X (10-megapixel triple-well).
Camera-on-Chip Integration: [Diagram: today, a PC board carries an image sensor, analog processing and ADC, an ASIC, and memory; in the future, a single camera-on-chip integrates the image sensor, ADC, ASIC functions, and memory.]
Camera-on-chip Applications
Digital Pixel Sensor (DPS): Developed at Stanford (under the PDC program). An ADC and memory per pixel; all ADCs operate in parallel. Advantages: better technology scaling (integration) than APS, very high speed digital readout, no column read noise or fixed pattern noise.
DPS Block Diagram: [Diagram: pixel block array with row address decoder and sense amplifiers and latches.]
High Speed DPS Chip: 0.18 µm CMOS. 352 x 288 pixels (CIF). 9.4 µm x 9.4 µm pixels. 37 transistors/pixel; 3.8 million transistors. 8-bit single-slope ADC and memory per pixel. 64-bit-wide digital output bus at 167 MHz. Kleinfelder, ISSCC 2001
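A quick sanity check (not from the slides, using only the chip specs above) shows that the 64-bit, 167 MHz output bus has enough bandwidth for full 8-bit CIF readout at the 10,000 frames/s demonstrated later:

    cols, rows, bits = 352, 288, 8        # CIF resolution, 8-bit per-pixel ADC
    fps = 10_000                          # frame rate of the high speed demo
    bus_width, bus_clock_hz = 64, 167e6   # 64-bit digital output bus at 167 MHz

    required_bps = cols * rows * bits * fps    # bits per second to move every frame
    available_bps = bus_width * bus_clock_hz   # bits per second the bus can carry
    print(f"required {required_bps/1e9:.1f} Gb/s, available {available_bps/1e9:.1f} Gb/s")
    # -> required 8.1 Gb/s, available 10.7 Gb/s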
Pixel Schematic: [Schematic: photogate sensor, comparator, and 8-bit memory per pixel; control signals include Reset, VReset, PG, Tx, RAMP, and Read; thick-oxide devices; data I/O.]
ADC Operation: A global analog ramp (RAMP) drives one comparator input in every pixel while a global 8-bit Gray-code counter is broadcast to the pixel memories. While the comparator output is high, the memory keeps loading the counter value; when the ramp crosses the pixel voltage, the comparator trips and the memory latches the current code. [Timing diagram: RAMP input, comparator output, memory loading, memory latched.]
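A minimal behavioral sketch of that single-slope conversion (illustrative only, not the chip's circuit): a shared ramp and Gray-code counter sweep once, and each pixel freezes the broadcast code when the ramp passes its voltage. Gray code is attractive here because only one bit changes per step, so an asynchronously latching memory cannot capture an inconsistent word.

    def gray(n: int) -> int:
        """Binary-reflected Gray code of n."""
        return n ^ (n >> 1)

    def single_slope_convert(pixel_voltages, v_max=1.0, bits=8):
        """Latch a Gray code for each pixel as a shared ramp sweeps 0..v_max."""
        steps = 1 << bits
        latched = [None] * len(pixel_voltages)
        for step in range(steps):
            ramp = (step + 1) / steps * v_max   # global analog ramp value
            code = gray(step)                   # global Gray-code counter value
            for i, v in enumerate(pixel_voltages):
                # The pixel memory keeps loading the counter until the comparator
                # trips (ramp crosses the pixel voltage), then holds that code.
                if latched[i] is None and ramp >= v:
                    latched[i] = code
        return latched

    print(single_slope_convert([0.12, 0.50, 0.93]))   # three example pixel voltages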
Video Sequence at 10,000 FPS
Video Sequence at 700 FPS
High Frame Rate Applications High frame rate enables new still and video imaging applications: Dynamic range extension Motion blur prevention Optical flow estimation Motion estimation Tracking Super-resolution
Multiple-Capture Single-Image: Operate the sensor at a high frame rate; process the high frame rate data on-chip (DSP); output data at standard rates. Integration of the sensor with embedded DRAM and DSPs enables low-cost implementation (Lim '01).
HDR via Multiple Capture
HDR Image Using the Last-Sample-Before-Saturation Algorithm
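A minimal sketch of the last-sample-before-saturation (LSBS) idea, in my simplified form rather than the exact implementation behind these images: the pixel is read non-destructively several times during one exposure, and each pixel keeps its last unsaturated sample, normalized by that sample's integration time.

    SAT = 255   # saturation code of the 8-bit ADC (illustrative)

    def lsbs(samples, times):
        """samples[k][i]: code of pixel i at capture k; times[k]: integration time.
        Returns per-pixel photocurrent estimates in codes per unit time."""
        estimates = [0.0] * len(samples[0])
        for i in range(len(samples[0])):
            for capture, t in zip(samples, times):
                if capture[i] >= SAT:
                    break                      # saturated: keep the previous value
                estimates[i] = capture[i] / t  # last good sample, exposure-normalized
        return estimates

    # A dim pixel that never saturates and a bright pixel that saturates after 10 ms.
    times = [10, 20, 30, 40, 50]                                       # ms
    samples = [[12, 200], [24, 255], [36, 255], [48, 255], [60, 255]]
    print(lsbs(samples, times))                                        # -> [1.2, 20.0]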
HDR Example Two captures of same high dynamic range scene Courtesy of Pixim
DPS HDR Comparison: [Images: CCD1, CCD2, DPS.] Courtesy of Pixim
HDR Image via Multiple Capture
Extending DR at Low Illumination: For a given exposure time, LSBS only extends DR at high illumination -- read noise is not reduced. Increasing the exposure time is limited by motion blur. Liu (ICASSP 2001) describes an algorithm for extending DR at low illumination while preventing motion blur. [Images: input, short exposure, long exposure.]
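The sketch below is only an illustration of the idea, not the algorithm in Liu, ICASSP 2001: with several non-destructive captures available, a pixel can keep extending its effective exposure only while its samples stay consistent with a constant photocurrent, and stop as soon as they do not (the scene changed), which limits motion blur while still allowing long effective exposures for dim, static pixels.

    def blur_free_samples(samples, times, tol=0.2):
        """Keep the leading run of (sample, time) pairs consistent with a constant
        photocurrent; stop at the first sample that deviates by more than tol."""
        good_s, good_t = [samples[0]], [times[0]]
        slope = samples[0] / times[0]
        for value, t in zip(samples[1:], times[1:]):
            if abs(value / t - slope) > tol * slope:
                break                                # scene changed: stop here
            good_s.append(value)
            good_t.append(t)
            slope = sum(good_s) / sum(good_t)        # refine the running estimate
        return good_s, good_t

    times = [10, 20, 30, 40, 50]                     # ms
    pixel = [12, 24, 37, 90, 140]                    # something moves in at ~40 ms
    print(blur_free_samples(pixel, times))           # keeps the first three samples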
SNR and DR Enhancement: [Chart: SNR (dB) vs. photocurrent iph (fA), 0.1 fA to 1000 fA, for single capture (DR = 47 dB), last sample before saturation (DR = 77 dB), and weighted averaging (DR = 85 dB).]
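The weighted-averaging curve combines all unsaturated samples rather than keeping only the last one, which averages down read noise and therefore also extends DR at the low-illumination end. As a stand-in (not necessarily the weighting used for this plot), a least-squares slope fit through one pixel's unsaturated samples illustrates the idea:

    SAT = 255   # saturation code (illustrative)

    def weighted_slope(samples, times):
        """Least-squares slope through the origin over one pixel's unsaturated
        samples; an illustrative stand-in for the weighted-averaging estimator."""
        num = den = 0.0
        for value, t in zip(samples, times):
            if value >= SAT:
                break
            num += value * t
            den += t * t
        return num / den if den else float("nan")

    times = [10, 20, 30, 40, 50]            # ms
    dim_pixel = [11, 25, 35, 47, 61]        # noisy, never saturates
    print(weighted_slope(dim_pixel, times)) # ~1.2, averaging over all five samples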
Image Capture Example: captures at 0 ms, 10 ms, 20 ms, 30 ms, 40 ms, 50 ms
High Dynamic Range Image: [Images: LSBS vs. estimation with motion blur prevention.]
Integration Beyond Camera-on-Chip: [Diagram: DPS imaging array integrated with frame memory and a SIMD processor.] Lim, SPIE 2001
Summary
Transistors Per Pixel: [Chart: number of transistors per pixel (1 to 512, log scale) vs. technology node (0.35 µm down to 0.05 µm) for a 5 µm pixel with 30% fill factor, based on the ITRS Roadmap.]
Motivation: PPS/APS do not scale well with technology: analog scaling problems, sensitivity to digital noise coupling. Modified 0.18 µm CMOS enables camera-on-chip: low cost and power consumption. Digital Pixel Sensor: scales well, is less sensitive to digital noise, and can operate at high frame rate. Integration plus high frame rate can be used to enhance sensor performance beyond CCDs.
Current Pixel Architectures: Passive Pixel (PPS): small pixel, large fill factor; slow readout, low SNR; readout is destructive. Active Pixel (APS): larger pixel, lower fill factor; faster readout, higher SNR; most popular architecture.
Micro-lens and CFA Integration: [Figure: pixel cross-section showing microlens, microlens spacer, microlens overcoat, color filter, and planarization layer; SEM photograph of a 3.3 µm pixel. Courtesy of TSMC]