
Image Sensor Quantum Efficiency Versus Wavelength Optimization

Mon, 12/12/2011 - 1:31pm
Gareth Powell, Pierre Fereyre, e2v semiconductors SAS, France, www.e2v.com

The electro-optical quality of an image sensor's pixels depends strongly on their ability to convert light (photons) efficiently into an exploitable electrical signal. The overall tendency of the imaging industry has been to reduce the pixel aperture in order to increase resolution. The consumer imaging segment is a spectacular example of this accelerated race, shrinking pixels to the point where their width approaches the wavelength of visible light.

However, pixel shrink is significantly slower in industrial and other machine vision segments, because of factors such as the multi-transistor pixel architectures needed for global (snapshot) shuttering, or the very large full well capacity needed to achieve the highest possible linear dynamic range and signal-to-noise ratio. More fundamentally, successful real-time image processing in a machine vision system relies on unprocessed raw data with a high signal-to-noise ratio even at high acquisition speeds, so pixel sensitivity can seldom be compromised. As processors become faster, the expected gain in system performance invariably comes from handling higher-resolution image data, and, as ever, at very limited additional cost. The need to reduce pixel size while retaining the optical format (and usually expensive optics) from one generation to the next is therefore also, to some degree, driving the more performance-conscious machine vision market.

Technology advances applied to improving the performance of traditional front-illuminated CMOS imagers are yielding surprising results.

Wavelength Sensitivity of Human Vision

In the human eye, night vision (termed scotopic) is governed mainly by the response of the rods, which are able to detect single photons but carry no color information. Day vision, conversely, is photopic, governed mainly by the response of the cones, which are sensitive to color. Figure 1 shows the wavelength response of typical human vision in photopic (daylight) and scotopic (night) conditions, illustrating that our vision becomes progressively more monochromatic at low light levels.

Wavelength Sensitivity of Electronic Imaging

While the lens plays a pivotal role in supplying light to the electronic image sensor behind it, the imperfections of converting photons into electrical signals become the major limiting factor. To achieve imaging in the lowest light, it is essential to minimize the sensor's noise and maximize its sensitivity. Our preoccupation therefore becomes the signal-to-noise ratio (SNR).

Much like the lens aperture, the aperture of each individual pixel, that is its surface area, has a direct influence on sensitivity. Simply put, the smaller the pixel, the fewer photons it collects during its integration period, so low-light performance can impose the lower limit on pixel size. Conversely, system cost, optical formats and image resolution requirements combine to force an upper limit on pixel size. The CMOS process and the pixel architecture itself influence the photon-to-electron conversion efficiency, termed quantum efficiency, and the effective area of this conversion zone has to be considered in two dimensions for light from the UV to the NIR (near infrared).
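
As a rough sketch of this scaling (all numbers below are assumptions chosen for illustration, not measured sensor data), the photon count collected per integration period falls with the square of the pixel pitch:

    # Illustrative estimate of photons collected by one pixel per integration period.
    # All values here are assumptions for the sake of the example, not sensor data.
    PLANCK_H = 6.626e-34   # Planck constant, J*s
    LIGHT_C = 2.998e8      # speed of light, m/s

    def photons_collected(irradiance_w_m2, pixel_pitch_um, t_int_s, wavelength_nm):
        """Photons arriving on a square pixel aperture during one integration period."""
        photon_energy = PLANCK_H * LIGHT_C / (wavelength_nm * 1e-9)   # J per photon
        pixel_area = (pixel_pitch_um * 1e-6) ** 2                     # m^2
        return irradiance_w_m2 * pixel_area * t_int_s / photon_energy

    # Halving the pixel pitch quarters the photons collected from the same scene:
    for pitch_um in (5.3, 2.65):
        n_photons = photons_collected(1e-3, pitch_um, 10e-3, 550)
        n_electrons = 0.6 * n_photons   # assumed 60 percent quantum efficiency
        print(f"{pitch_um} um pixel: ~{n_photons:.0f} photons, ~{n_electrons:.0f} electrons")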

e2v has been particularly successful in achieving very high quantum efficiency (QE) on its innovative megapixel EV76C560 CMOS sensor and on new derived products to be released this year. These derived products (EV76C660) have been further enhanced for improved QE, both in the visible range and in the NIR. A peak QE greater than 80 percent for a five-transistor global shutter pixel of 5.3 µm x 5.3 µm pitch represents a considerable improvement over current state-of-the-art CMOS sensors (see Figure 2a).

[Figure 2a: Quantum efficiency versus wavelength]

Wavelength Optimization of QE

Because of the physical properties of silicon, where longer wavelengths penetrate deeper before being absorbed, thick epitaxial material is often used to increase the photodiode diffusion depth and so improve QE at upper red and NIR wavelengths below 1100 nm. However, thick material usually degrades MTF through increased photonic crosstalk. Image quality is a combination of MTF and QE (the so-called detective quantum efficiency), and since both the space domain and the frequency domain must be considered, MTF becomes the second most important parameter. Deep depletion photodiodes (Figure 2b), together with proprietary silicon doping methods, are innovations used to recover MTF at long wavelengths in the upper, non-visible part of the spectrum.
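
For illustration, detective quantum efficiency is conventionally defined as DQE = SNR_out^2 / SNR_in^2, and for a shot-noise-limited sensor with spatially white noise it is often approximated at spatial frequency f by DQE(f) ≈ QE x MTF(f)^2, which is why a high QE gained at the expense of MTF does not necessarily improve image quality.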

[Figure 2b: Deep depletion photodiode]

The N Part of SNR

Noise sources in the image sensor, its readout and its interfacing electronics are multiple and varied, and frequently depend directly on temperature. One example is dark current, which becomes dominant at long integration times. Another temporal noise results from kT/C effects, which leave random residual voltages on the sensing and storage nodes of the pixel during the clocking and reset periods that necessarily precede integration.
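
As a rough sketch of the magnitudes involved (the sense node capacitance and temperature below are assumed purely for illustration):

    import math

    K_B = 1.381e-23   # Boltzmann constant, J/K
    Q_E = 1.602e-19   # elementary charge, C

    def ktc_noise(capacitance_farads, temperature_kelvin=300.0):
        """RMS reset (kT/C) noise on a sense node, in volts and in electrons."""
        v_rms = math.sqrt(K_B * temperature_kelvin / capacitance_farads)
        e_rms = math.sqrt(K_B * temperature_kelvin * capacitance_farads) / Q_E
        return v_rms, e_rms

    # Assumed 5 fF sense node at room temperature:
    v_rms, e_rms = ktc_noise(5e-15)
    print(f"~{v_rms * 1e3:.2f} mV rms, ~{e_rms:.0f} electrons rms")  # about 0.91 mV, 28 e-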

Since it embeds its own amplifier, the pixel is a primary source of temporal noise. Applying high gain or limiting the bandwidth can remove part of this noise, but the consequence is a reduction in dynamic range, since the charge handling becomes limited. Photons are quanta of energy converted into electrons within the photodiode, whereas the noise floor is by nature a voltage. Each electron of noise at this level defines the detection limit of the image sensor, since the minimum usable SNR is 1, the point at which the converted photon signal equals the noise. The magic factor is the CVF (charge-to-voltage conversion factor), which defines how that voltage-domain noise translates back into electrons. Clearly, the higher the CVF, the better.
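
Expressed numerically (with values assumed purely for illustration): the input-referred noise in electrons is the voltage noise divided by the CVF, so with a CVF of 100 µV per electron, 0.3 mV rms of voltage noise at the sense node corresponds to 3 electrons rms, and a signal of roughly 3 photo-generated electrons is then needed to reach SNR = 1.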

Back Illumination

Different ways of coupling light more directly into the pixel, maximizing both fill factor and quantum efficiency, have been developed for low-light imaging. One technique currently gaining traction is Back Side Illumination (BSI). It involves flipping the imager wafer over, thinning the bulk silicon down to the thickness of the photodiode, and then continuing post-processing (color filters, micro lenses, etc.) on the thinned side of the wafer. As a result, the fill factor of the pixel is greatly increased, since the photodiode now resides almost at the surface of the pixel. In the conventional front side process, the photodiode is buried beneath the metal layers that separate the active photon collection zone from the light entering at the surface of the pixel (see Figure 3). Because of the relationship between the horizontal pixel aperture (x, y) and the height of the optical stack (z) above the photodiode, the fill factor gain is larger for small pixels than for large ones. Even for larger pixels, however, BSI still improves quantum efficiency, although the fill factor improvement is smaller; this is attributable to the elimination of photon scattering and optical shielding, two other side effects of the metal layers. Further improvements in spectral response can be achieved by using broadband or band-pass optimized anti-reflective coatings on the surface of the sensor array.

Although the mass-production CMOS imager industry is claiming BSI as a 'revolutionary' technology, it is interesting to note that the technique has been used in high-performing CCD imagers for more than 20 years - an example that sources of innovation are often found in the past!

Wavelength-Dependent Industrial Imaging Applications

Outdoor smart cameras can be fixed, fixed with mechanical or electronic pan/tilt/zoom, or mobile, mounted on manned or unmanned vehicles or aircraft. Many applications, from intelligent transport systems to surveillance cameras for border control or airport perimeter security, are expected to provide their essential service by night.

Figure 4 puts into perspective the magnitude of the day and night light scale in lux, a photopic unit of illuminance related to the characteristics of the human eye.

Cameras used for day and night vision behave much like the human vision response described earlier, in that color processing is attenuated at night. Exposure-synchronized NIR diode illuminators, and a mechanism that removes the IR-cut filter from the lens stack, are also employed. A QE of 50 percent at 900 nm (EV76C660) means a much reduced cost of NIR illumination. The same cameras must also work in the diverse, wide dynamic conditions of daylight, which present their own challenges to electronic imaging systems. Extracting exploitable images from shaded parts of a scene when the majority of the image is floodlit by natural or artificial sources (imagine license plate detection on cars at night, for example) demands wide dynamic range image sensing. This can also be accomplished at pixel level on well-adapted CMOS imagers. Excellent low-light sensitivity also means that there are no 'grey areas' between day and night.
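
A rough way to see the illumination saving (illustrative figures only): in a shot-noise-limited regime the photon flux, and hence roughly the illuminator power, needed for a given signal level scales inversely with QE, so raising the QE at 900 nm from, say, 25 percent to 50 percent roughly halves the LED power required to reach the same number of signal electrons per pixel.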

Very high video frame rate imaging, even in relatively well-lit environments, paradoxically requires characteristics closer to those of low-light imaging, because extremely short integration times demand sufficiently high sensitivity combined with low noise.

Alarm systems that protect machine operators in heavy industry, and intrusion detection in secure areas, often employ 'invisible' NIR lighting. High sensitivity at these wavelengths, with good MTF response, enables reliable alarm systems to be designed with lower illumination power.

Barcode identification and scanning applications predominantly use red illumination. The growing number of 2D barcode readers that use area image sensors benefits greatly from enhanced sensor performance in the upper red and NIR. The improved SNR at these wavelengths translates into less lighting being needed to achieve the minimum contrast required to automatically identify and decode printed barcodes. Handheld scanners therefore benefit from longer battery life, while fixed scanners distinctly benefit from the ability to move to longer illumination wavelengths that are less annoying for checkout staff, for example.

The future of low-light and wavelength-optimized imaging is looking brighter than ever!
