Recent growth in the automotive and security industries has increased the number of cameras designed for viewing both Near Infrared (NIR) and visible wavelengths of light. NIR illumination is invisible to the human eye and can light a dark scene without being visible or annoying. Because silicon sensors are sensitive out to 1100 nm — well beyond NIR wavelengths of interest — existing consumer systems require only a small modification, the removal of the infrared filter, to allow them to image NIR. This makes dual-band security cameras cost-effective and attractive to the industry.
The Response of a Silicon Photodiode
Care is required when dual-band cameras are tested for image quality. In this post, we will outline some best practices, including a selection of light sources and charts for measuring sharpness of cameras in the visible and NIR bands.
Note: We chose to measure sharpness because a dramatic change between bands could greatly reduce the system’s effectiveness. A critical function in many automotive and security systems is object detection and identification. A security camera that cannot identify an intruder because the image is not sharp enough is not doing its job, so sharpness is one of the most important factors to test.
The camera we will test is the Raspberry Pi Camera V2 NoIR version. It is ideal because its IR filter has been removed, yet it otherwise functions identically to the standard Raspberry Pi camera module, which leads us to believe the two share the same image processing.
As with any other test, the choice of lighting is crucial. In our setup, we will be testing under two specific lighting conditions: 5100K visible LED light and 850nm NIR LED light.
The first test is with the 5100K LED source.
The plot above shows the spectral output of the 5100K source. There is a strong peak near 450 nm, but the spectrum is fairly constant across the visual range (400 – 700 nm). For more information about this light source, see our ITI LED lightbox.
The second source is a lightbox engineered with NIR LEDs with the spectral output centered around 850 nm. The plot below shows this distribution.
Contact firstname.lastname@example.org to learn more about our NIR lighting options.
It is important that the spectral outputs of the two sources do not overlap, because overlap would confound the results. We need to characterize how the system performs in each band separately; if the sources overlap, we cannot attribute a change in quality to one band or the other.
Chart & Material
We use an eSFR ISO 12233 chart on black and white LVT film to measure sharpness. There are a number of ways to measure sharpness and Modulation Transfer Function (MTF), such as wedges and Siemens stars, but for our purpose we will use the slanted edges on the eSFR chart. For a better understanding of the advantages and drawbacks of each feature when measuring MTF, see this blog post.
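As a rough illustration of the slanted-edge method’s core computation (differentiate the edge profile to get a line-spread function, then take its Fourier transform), here is a minimal sketch. A real implementation such as Imatest’s also handles edge-angle estimation, 4× oversampling by binning image rows, and noise, all of which are omitted here:

```python
import numpy as np

def slanted_edge_mtf(esf):
    """Compute MTF from an oversampled edge-spread function (ESF).

    esf: 1-D array sampled across a slanted edge (dark -> light).
    Returns (frequencies in cycles/sample, MTF normalized to 1 at DC).
    """
    lsf = np.diff(esf)                   # line-spread function
    lsf = lsf * np.hamming(len(lsf))     # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                   # normalize to 1 at zero frequency
    freqs = np.fft.rfftfreq(len(lsf))
    return freqs, mtf

# Example: a smooth synthetic edge (hyperbolic-tangent profile)
x = np.linspace(-4, 4, 256)
esf = 0.5 * (1 + np.tanh(x))
freqs, mtf = slanted_edge_mtf(esf)
```

A sharper edge produces a narrower line-spread function and therefore an MTF that stays higher out to larger spatial frequencies.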
The chart, and the material on which it is printed or developed, are two of the most overlooked aspects of testing at NIR wavelengths: not all reflective and transmissive charts perform the same in NIR as they do in visible light. For example, there are two types of film charts: color, and black and white (B&W). Both work well at visible wavelengths, with only subtle differences, but color film is nearly transparent at longer wavelengths. This is why our preferred method for measuring NIR sharpness is the slanted edges of the eSFR ISO (ISO 12233:2014) chart developed on B&W LVT film, which has excellent NIR response (extending into shortwave IR), while color LVT film charts cannot be used. This is demonstrated below.
We can further understand the properties of these charts by looking at the spectral transmittance for each material (illustrated below).
The above plot shows that the color film’s transmission increases dramatically at NIR wavelengths. In contrast, the B&W film maintains a similar transmission throughout visible and NIR, making it ideal for this test.
The table below compares the NIR suitability of several chart materials.
|Material||NIR suitability|
|Black and White LVT film||Suitable|
|Color LVT film||Not suitable (nearly transparent in NIR)|
|Chrome on Glass||Suitable|
|Inkjet*||Suitable with a special print process|
*For inkjet, a special print process is required. All inkjet charts are suitable for visible testing, but not all are suitable for NIR. More information is available on request by emailing email@example.com.
We compare the sharpness of the same camera with visible and NIR illumination using a black and white film eSFR ISO chart. Images are analyzed using Imatest Master 5.1, with MTF50 (the spatial frequency where MTF drops to half its low-frequency value) as the image sharpness metric. Note that this is one of several summary metrics that could have been chosen. For more information, please visit our page, Sharpness, what is it and how is it measured?
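Given a sampled MTF curve, MTF50 is found by locating where the curve crosses 0.5. A minimal sketch, assuming the curve starts at its low-frequency value of 1.0 and is sampled densely enough for linear interpolation:

```python
import numpy as np

def mtf50(freqs, mtf):
    """Return the spatial frequency where MTF first drops to 0.5,
    using linear interpolation between samples.

    Assumes mtf[0] >= 0.5 (curve normalized to 1 at low frequency).
    """
    mtf = np.asarray(mtf, dtype=float)
    below = np.nonzero(mtf < 0.5)[0]
    if len(below) == 0:
        return None                      # curve never drops below 0.5
    i = below[0]
    # interpolate between sample i-1 (>= 0.5) and sample i (< 0.5)
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)

freqs = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
mtf   = np.array([1.0, 0.9, 0.6, 0.4, 0.2])
print(f"{mtf50(freqs, mtf):.2f}")   # 0.25
```

Imatest’s own computation includes additional refinements (oversampling, noise handling), but the crossing-point idea is the same.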
We capture images of the same chart with the two different light sources (see below).
eSFR film with 5100K (visible) light source
eSFR film with the 850nm (NIR) light source
Even before running an analysis of these images, we inspect for qualitative differences. The immediate observation is that the visible image is much sharper.
(left) visible, (right) NIR
The above images show crops of the center square of both images. The visible image on the left has much sharper edges and a clearer focus aid feature in the center. The difference is important to quantify, so we run these images through the Rescharts module in Imatest Master for proper analysis.
Below are the Multi-ROI summary results output by Imatest.
Multi-ROI Summary for 5100K image
Multi-ROI Summary for 850 nm image
We choose a large number of ROIs to increase the number of data points per image and obtain more representative results. These plots can show a variety of metrics for each edge (MTF20, chromatic aberration, etc.), but here we show the primary result, the weighted mean of MTF50 (summarized in the graph below). For more information about understanding and interpreting these plots, read our documentation.
The plot shows a 52% drop in mean MTF50 between the two light sources. The difference is significant and correlates well with our visual observations. There are several contributing factors: NIR light focuses at a different distance than visible light; its longer wavelength imposes an intrinsically lower diffraction limit; and NIR photons diffuse farther inside the silicon sensor before being absorbed. The difference may be reduced with better optics, better focusing, and improved sensors, but in general we should expect NIR images to be less sharp than visible-light images.
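The wavelength effect alone can be illustrated with the diffraction-limited MTF of an ideal lens, whose cutoff frequency is 1/(λN). The f-number below is purely illustrative (not the Pi camera’s actual optics), and real systems lose additional sharpness to focus shift and in-sensor diffusion:

```python
import math

def diffraction_mtf(x):
    """Diffraction-limited MTF of an ideal lens; x = f / f_cutoff."""
    if x >= 1:
        return 0.0
    return (2 / math.pi) * (math.acos(x) - x * math.sqrt(1 - x * x))

def mtf50_freq(wavelength_mm, f_number):
    """Frequency (cycles/mm) where the diffraction-limited MTF = 0.5."""
    cutoff = 1.0 / (wavelength_mm * f_number)   # optical cutoff frequency
    lo, hi = 0.0, 1.0
    for _ in range(60):                         # bisection on x = f/cutoff
        mid = (lo + hi) / 2
        if diffraction_mtf(mid) > 0.5:
            lo = mid
        else:
            hi = mid
    return lo * cutoff

f_num = 2.0  # illustrative f-number, an assumption for this sketch
vis = mtf50_freq(550e-6, f_num)   # 550 nm expressed in mm
nir = mtf50_freq(850e-6, f_num)   # 850 nm expressed in mm
print(f"NIR/visible MTF50 ratio: {nir / vis:.2f}")   # 0.65
```

Because the cutoff scales as 1/λ, the diffraction-limited MTF50 at 850 nm is 550/850 ≈ 65% of its value at 550 nm, before any focus or sensor effects are considered.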
Before you test your cameras for NIR, there are a couple things you need to consider:
- The quality of the image can vary per band (visible & NIR)
- Not all chart and material types are usable in NIR; however, certain charts and materials, when printed or developed for NIR, work for both visible and NIR testing
If you have additional questions about testing NIR, contact us at firstname.lastname@example.org.
Our engineers can help you set up your lab to test NIR. Submit a solution request and we will fit you with the right equipment.
Learn more about the development of standards for automotive camera systems.
The IEEE-SA P2020 is a working group for automotive imaging standards. Its goal is to define a set of standards that resolve the current ambiguity in measuring the image quality of automotive imaging systems. Today’s image evaluation approaches generally do not address the distinct needs of human-vision and computer-vision automotive applications. IEEE-SA P2020 is therefore working with people in the field to understand the gaps in current standards and to create a coherent set of key performance indicators by which automotive camera systems can be consistently evaluated.
Since the working group began, Imatest’s engineers have been heavily involved in the IEEE P2020 standard for automotive camera systems. Our two areas of focus are Contrast Detection Probability and Flicker Measurement. We are actively working with major image sensor manufacturers to assess Contrast Detection Probability as a practical measurement of the contrast information present at various points in the imaging chain. We are also principal contributors to the Flicker Measurement subgroup and are preparing a publication describing the effect and why it is an essential consideration for ADAS and automated driving systems.
There is much to come as IEEE P2020 leads the creation of standards for automotive camera system testing. As automobiles evolve, autonomous vehicles mature, and the car becomes an artificial-intelligence platform connected to everyday life, it will be crucial to define proper evaluation metrics for the multi-sensor, multi-camera, multi-application systems on an automobile. The working group brings together leading camera and sensor manufacturers, automobile manufacturers, and imaging science engineers, and will leverage their varied backgrounds and experience to create a standard that automobile and parts manufacturers can adopt. Download the most recent IEEE P2020 publication below.
If you have any questions for us on how this standard can help your projects, please email Ian Longton at email@example.com.
All rules about how the material may be used (copyrights, legal caveats, etc.) are contained within the document. It must only be shared in full so that these details remain clearly indicated.
By Robert Sumner
With contributions from Ranga Burada, Henry Koren, Brienna Rogers and Norman Koren
Consistency is a fundamental aspect of successful image quality testing. Each component in your system may contribute to variation in test results. For tasks such as pass/fail testing, the primary goal is to identify the variation due to the component and ignore the variation due to noise. Being able to accurately replicate test results with variability limited to 1-5% will give you a more accurate description of how your product will perform.
Since Imatest makes measurements directly from the image pixels, any source that adds noise to the image can affect measurements. A primary source of noise in images is electronic sensor noise. Photon shot noise also contributes significantly in low-light situations. Other systemic sources of measurement variability, such as autofocus hysteresis, will not be addressed in this post.
Here are five tips to limit noise in your test results.
It is important to test your camera system in environments which reproduce lighting conditions similar to where you intend to use the camera in the real world. Failure to test a camera under low light conditions may lead to overstating the camera’s performance.
Image sensors collect light (signal) in pixel wells, then convert the resulting analog voltage levels into digital numbers. Dark current consists of electrons released by thermal activity, which are indistinguishable from electrons released by photoresponse; in uncooled image sensors it produces dark noise. At low light levels, exposures are lengthened to gather more light, which also gives dark-current electrons more time to accumulate. Dark noise and read noise then make up a larger portion of the overall response, reducing the signal-to-noise ratio (SNR). Low-light conditions are most challenging for higher-resolution sensors with small pixel pitches, where photon shot noise can seriously impact the performance of your camera.
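The interaction of these noise sources can be sketched with the usual simplified model, SNR = S / sqrt(S + D·t + r²), where S is signal electrons, D the dark current, t the exposure time, and r the read noise. The parameter values below are illustrative, not taken from any real sensor datasheet:

```python
import math

def snr_db(photons, exposure_s, dark_e_per_s, read_noise_e, qe=0.8):
    """Approximate single-pixel SNR (dB) for a simple sensor noise model.

    signal electrons S = qe * photons
    total noise = sqrt(S + dark*t + read^2)  (shot, dark-shot, read noise)
    All values used below are illustrative assumptions.
    """
    s = qe * photons
    noise = math.sqrt(s + dark_e_per_s * exposure_s + read_noise_e ** 2)
    return 20 * math.log10(s / noise)

# Bright scene, short exposure: shot-noise dominated
bright = snr_db(10000, 0.01, 10, 3)
# Low light with a long exposure: dark current and read noise matter
dim = snr_db(50, 1.0, 10, 3)
print(f"bright: {bright:.1f} dB, dim: {dim:.1f} dB")
```

The dim-scene case shows the effect described above: a longer exposure accumulates dark-current electrons, so the fixed noise terms claim a much larger share of the response.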
The definition of “low light” depends on the application. For mobile devices (compact camera modules) low light is defined by the IEEE CPIQ standard as 25 lux, which resembles a dimly lit indoor space representative of the worst cases where people expect their phones to function properly. As mobile devices get better in low light scenarios, customer expectations may shift. For security or automotive industries, the levels are much lower based on how dark the outdoor environment may get:
|Condition||Illuminance (lux)|
|Very Dark Day||107|
|Twilight||10.8|
|Deep Twilight||1.08|
|Full Moon||0.108|
|Quarter Moon||0.0108|
|Starlight||0.0011|
|Overcast Night||0.0001|
Most lab lighting setups with fluorescent or LED sources cannot be dimmed directly to levels matching the darkest scenes, so very dim light levels are instead achieved by the methods shown in the following table. Ultra-low light levels can be achieved by combining these approaches:
|Method||Attenuation factor||Drawbacks|
|Increasing distance (d)||Between 1/d and 1/d²||Difficult to move large lighting setups; labs have limited available space|
|Reflecting off Munsell N5 painted walls||0.18||Most labs are painted with walls like these, but rotating lights back and forth is difficult to do repeatably without a mechanical motion stage|
|Reflecting off black painted walls||0.05|| |
|Fresnel reflection from a beam splitter||≈0.04 (typical uncoated glass)||Apparatus could impinge on the FoV and would also require repeatable rotation of the lights|
|Imatest Low-light filter (mask + neutral density filter)||0.0125||Manually attaches to these LED lights|
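Because the attenuation factors multiply, the combined effect of stacked methods is easy to estimate. A small sketch using factors from the table above (the source level and distance ratio are illustrative):

```python
def attenuated_lux(source_lux, factors=(), distance_ratio=1.0):
    """Estimate illuminance after stacking attenuation methods.

    factors: multiplicative attenuations (e.g. 0.18 for an N5 wall
             bounce, 0.0125 for a low-light filter).
    distance_ratio: new distance / reference distance, assuming
             worst-case inverse-square falloff.
    """
    lux = source_lux / distance_ratio ** 2
    for f in factors:
        lux *= f
    return lux

# Illustrative: a 1000 lux source, doubled distance, N5 bounce + ND filter
level = attenuated_lux(1000, factors=(0.18, 0.0125), distance_ratio=2.0)
print(f"{level:.4f} lux")   # 0.5625 lux
```

Working backwards this way lets you plan which combination of methods reaches a target level (say, moonlight at ~0.1 lux) before rearranging the lab.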
Eliminating additional sources of light within your lab is also important: put felt around doors to block light from other rooms, cover exit signs, cover indicator LEDs and power strips, and install baffling and light traps to block unwanted reflections.
Note: Verifying the low light levels can be challenging as many illuminance meters do not go to extreme low light levels.
Once you can reproduce low-light levels in your lab, you can perform tests to answer a number of questions about your system performance:
- Does the radiometric calibration for black level successfully reduce the effects of dark current without also eliminating useful signal that might impact the dynamic range of your system?
- Does the black level compensation work across the range of nominal operating temperatures that your sensor will experience?
- Does your ISP react appropriately to dark scenes by increasing the sensitivity (gain)?
- How consistently does your autofocus system (if you have one) perform under low light?
- What happens to moving objects captured under low light conditions?
- For security cameras with near-infrared ‘night-vision’ support: does the infrared cutoff filter engage and disengage at the appropriate light level?
- Do the applied gamma curve(s) and tone mapping help to boost the perceived quality of the dark scene or assist the observer’s ability to recognize objects in the dark?
- Does the noise reduction strike the right balance of reducing visual noise without also blurring the appearance of relevant textures (skin and foliage) within the image?
A comprehensive tuning and testing regimen will involve performing a full battery of objective tests in a wide range of lighting conditions. By increasing the range of light levels you can reproduce in your lab, you can provide the most challenging condition in which you can validate the performance of your camera system.
Google Pixel 2 XL
In this post, we will be using the Contrast Resolution chart and Imatest Master to measure the dynamic range of a Google Pixel 2 XL. The dynamic range of a camera is the reproducible tonal range of the imaging system. Put simply, it is the range between the darkest black and the brightest white the system can capture, and it is typically measured in decibels (dB). It is an important image quality factor in many applications, from machine vision to mobile cameras and, more recently, automotive camera systems. In this use case, we will evaluate both the HDR+ (default) and HDR-off modes of the Pixel 2 XL; however, the procedure can be used to test any camera system’s dynamic range. You can read more details about Imatest’s dynamic range test solutions.
Testing a camera’s dynamic range can be difficult and requires careful setup. In this section, we will review all of the relevant components of performing a sufficient dynamic range test.
Contrast Resolution Chart
One of the major components of a dynamic range test, and the first consideration, is the chart. A variety of dynamic range test charts are available; for this test we will use the Contrast Resolution chart. Several unique features make it ideal for testing a system like the Pixel 2 XL. The first is the radial arrangement of the patches, which helps control for lens falloff. The second is the dynamic range of the target itself: the chart must have a higher dynamic range than the system, or else you end up measuring the chart rather than the camera. The Contrast Resolution chart has a dynamic range of ~95 dB, which is sufficient for almost all systems with a lens. The third and most important reason for choosing this chart is the Contrast Resolution dynamic range calculation: the inner light and dark gray patches allow a more robust calculation of dynamic range for tone-mapped images, whose tone mapping can adversely affect other dynamic range calculations.
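Dynamic range in decibels relates to a linear contrast ratio by DR(dB) = 20·log10(ratio), so the chart’s ~95 dB corresponds to a contrast of roughly 56,000:1. A quick conversion sketch:

```python
import math

def db_to_contrast(db):
    """Convert a dynamic range in dB to a linear contrast ratio."""
    return 10 ** (db / 20)

def contrast_to_db(ratio):
    """Convert a linear contrast ratio to dB."""
    return 20 * math.log10(ratio)

# The chart's ~95 dB span as a contrast ratio:
print(f"{db_to_contrast(95):,.0f}:1")      # 56,234:1
# A 10,000:1 scene expressed in dB:
print(f"{contrast_to_db(10000):.0f} dB")   # 80 dB
```

This is why the chart comfortably out-ranges most lensed systems: veiling glare alone usually keeps a real camera well below the chart’s span.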
ITI Lightbox with Contrast Resolution Chart
The next component to consider is the light source. For our test, we will use the 10,000 lux ITI Lightbox at 5100K. This lightbox is a good choice for our purposes because of its uniformity (>95%). The analysis of the chart assumes a constant amount of light transmitted across and through the chart, and the contrast resolution dynamic range calculation works because the chart has known density steps between the patches. If the light source has low uniformity, that assumption does not hold and the results can be affected.
The third consideration is the environment you will test in. It is important to test dynamic range in a sufficiently dark environment limiting the amount of stray light that might enter the camera system as much as possible. The extra light entering the system can increase the amount of flare light (sometimes called veiling glare). Flare is the stray light that fogs images and is caused by reflections between surfaces of lens components and the inside barrel of the lens. Stray light can cause reduced contrast in dark parts of the image which in turn can make for erroneous dynamic range measurements. See our recent study: “Measuring the impact of flare light on dynamic range” for more information. Our testing environment will have only one controlled light source to illuminate the chart and no other sources of light (e.g., overhead lighting, LEDs on electronics, etc.).
A common source of erroneous measurements due to the testing environment is reflection off the chart itself. Because the darkest patches of the target transmit so little light, a non-negligible amount of stray light can easily reflect off the front surface of the chart. This extra light makes it appear as though more light is present in the patch area than is actually transmitted through the chart, which adversely influences the analysis. To mitigate reflections, we use a custom shroud to block surfaces that could reflect light from the lightbox. Unfortunately, the camera itself is the most likely source of unwanted reflections, and the Pixel 2 XL is no exception: the particular model we are using has a white back that reflects a great deal of light toward the chart’s front surface. Knowing this, we took extra care to shield all of the phone except the lens so that no unwanted light bounces back at the chart.
Example of a development board reflection off the front of the chart
More on the shroud
Depending on the system, and the environment of the test, a shroud may not be necessary. However, it provides a consistent lighting environment for every test and should be considered for benchmarking and comparing different systems. For more information on reflections and shroud materials, see this post. For information on custom testing environments, please contact firstname.lastname@example.org .
Contrast Resolution chart viewed through the native camera app
Ten images were captured with the Pixel 2 XL in the above setup, with the phone set to HDR+ mode. For each image, the exposure was set on the device by tapping the screen where the upper gray background of the chart was present (directly to the left of the second row); this ensured similar exposure across all images. The acrylic shroud with a matte black interior was placed around the lightbox, with the phone as close as possible to the shroud’s opening. For this test, the lightbox was set to the 5100K color temperature setting and minimum brightness (~30 lux). We chose these settings because they represent a realistic low-light environment in which a camera phone may be used; note that appropriate light settings can vary greatly by application. Once enough images were taken, we loaded them into our software, Imatest Master, for analysis.
The following analysis of these images was done in Imatest Master 5.0.
Each image was analyzed in the Multitest module. Opening the Multitest Setup, we made sure to check that “3. Contrast Resolution” was selected for the chart type.
The plot we were most interested in was the Noise/SNR plot. Under this section in the setup dialog, we selected “3. SNR dB vs. Inp Density (RGBY)” and “3. Exposure in DB (-20*target density) DR in dB” from the drop downs. We also checked “Image plot” so we could visualize the results on the images themselves. See here for more information analyzing the Contrast Resolution chart.
The data was saved in JSON format, and the relevant metrics were then parsed and averaged. The data we are interested in is saved in the field “Contrast_Resolution_Dyn_Rng_dB”, which contains four values: the Contrast Resolution dynamic range in decibels at SNR = 0, 6, 12, and 20 dB.
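The parsing-and-averaging step might look like the following sketch. The field name comes from the text above, but the surrounding JSON layout is an assumption here; Imatest’s actual output structure may differ by version, so adjust the lookup accordingly:

```python
import json
import math
from pathlib import Path

def average_dynamic_range(json_paths):
    """Average the Contrast Resolution dynamic range over several captures.

    Assumes each JSON file exposes a field "Contrast_Resolution_Dyn_Rng_dB"
    holding 4 values (for SNR thresholds of 0, 6, 12 and 20 dB).
    Returns (means, standard errors), one pair of lists with 4 entries each.
    """
    runs = []
    for p in json_paths:
        data = json.loads(Path(p).read_text())
        runs.append(data["Contrast_Resolution_Dyn_Rng_dB"])
    n = len(runs)
    means, sems = [], []
    for vals in zip(*runs):                  # one group per SNR threshold
        m = sum(vals) / n
        var = sum((v - m) ** 2 for v in vals) / (n - 1)
        means.append(m)
        sems.append(math.sqrt(var / n))      # standard error of the mean
    return means, sems
```

Averaging over the ten captures, with the standard error alongside, is what allows the HDR-off vs. HDR+ difference to be called significant rather than run-to-run noise.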
The plot below shows the Contrast Resolution Dynamic Range for the two modes averaged over ten captures. The standard error is included. We aggregated this data from several runs of the Multitest module in Imatest Master. As you can see in the plot below, there was a significant increase in dynamic range when using HDR+ mode versus HDR off.
Chart comparison of the HDR-off vs. HDR+ results
NOTE: If you are using Imatest software, this information can be viewed for individual images using the Noise Plot in the Multicharts module.
Since we checked “Image Plot”, we can also visualize the results on the image of the Contrast Resolution chart. The two plots below show an exaggerated image plot so that the noise in each patch can be visually examined.
HDR Off Image Plot (Constant xyY)
HDR+ Image Plot (Constant xyY)
A notable difference in the plots can be seen in patch 11 (isolated images below). We can see that in the HDR+ plot (right), the details of the inner gray patches are still visible where they are not in the HDR off plot (left).
Patch 11 visual comparison (Left: HDR Off, Right: HDR+)
By following this procedure, using the Contrast Resolution chart and Imatest Master, we were able to analyze the dynamic range of our camera effectively. As we can see in the above diagrams, the summary plot results indicate that the HDR+ mode increased the dynamic range in the images by 5 dB on average. We are also able to visually see the increased range by looking at patches in the Image Plots view.
Ideally, we want to tailor our test setup as much as possible to the application being tested. In this particular case, the subject of our testing is the different HDR processing of the Pixel 2 XL. We have limited information on the specific algorithms being used when HDR is turned off versus HDR+ and their impact on image quality under different lighting situations and with different target subjects, so we are limited in the conclusions we can draw. Thus, the results reported above do not represent all use cases. A more exhaustive study would need to be done to produce results that would reflect the broad range of possible use cases.
For more information, or to learn more about how to properly set up your testing environment, talk to a solution expert.
The obsolete ISO 12233:2000 standard defines a resolution test target with a high contrast ratio. These targets are typically produced at the maximum dynamic range of a printer, which can be anywhere from 40:1 to 80:1. The high contrast can clip the signal, which leads to overstated, invalid MTF values.
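Contrast ratio maps to chart density difference as D = log10(ratio), which makes clear how much more tonal range an 80:1 edge spans than the low-contrast edges (roughly 4:1, per the newer ISO 12233:2014 e-SFR charts) that keep the signal within the sensor’s linear range:

```python
import math

def density_range(contrast_ratio):
    """Optical density difference corresponding to a contrast ratio."""
    return math.log10(contrast_ratio)

for ratio in (80, 40, 4):
    print(f"{ratio}:1 edge -> {density_range(ratio):.2f} density units")
```

An 80:1 edge spans about 1.90 density units, which is easy to clip against a tone curve, while a 4:1 edge spans only about 0.60.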
Some camera manufacturers who want better MTF results may take advantage of this anomaly to overstate the quality of the cameras they produce. This is why it is critical to validate cameras with a proper measurement system that includes a low-contrast target.
The dynamic range of recent HDR image sensors, defined as the range of exposure between saturation and 0 dB SNR, can be extremely high: 120 dB or more. But the dynamic range of real imaging systems is limited by veiling glare (flare light), arising from reflections inside the lens, and hence rarely approaches this level. Veiling glare measurements, such as ISO 18844, made with black cavities on white fields, result in large numbers that are difficult to relate to dynamic range. Camera dynamic range is typically measured from grayscale charts, where veiling glare depends on the design and layout of the chart, leading to inconsistent results.
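Under a simplified uniform-glare model, stray light equal to a fraction g of the brightest level puts a floor under the darkest measurable patch, capping the system’s dynamic range at 20·log10(1/g) regardless of the sensor. A sketch of that cap (the glare fraction below is illustrative):

```python
import math

def glare_limited_dr_db(sensor_dr_db, glare_fraction):
    """Dynamic range after veiling glare adds a uniform pedestal.

    glare_fraction: stray light in the darkest patch as a fraction of
    the brightest level (simplified uniform-glare model).
    """
    glare_limit = 20 * math.log10(1 / glare_fraction)
    return min(sensor_dr_db, glare_limit)

# A 120 dB HDR sensor behind a lens with 0.1% veiling glare:
print(f"{glare_limited_dr_db(120, 0.001):.0f} dB")  # 60 dB
```

This is why a 120 dB sensor rarely delivers 120 dB through a real lens, and why chart layout (which changes the effective glare) changes the measured number.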
As of Imatest 5.0, Imatest Master now features image acquisition capabilities. Previously, image acquisition capabilities were supported by Imatest IS, which has been discontinued as a separate product. This provides all of our customers with access to the acquisition library. The library supports direct acquisition from a wide range of frame grabbers and cameras, as well as industry standard interfaces. Direct image acquisition cuts out several steps in the image quality testing process and allows for in-the-loop testing with Imatest.
Learn more about Imatest’s image acquisition capabilities.
- Sensor Evaluation Boards
- Graphin EasyLab
- Camera Link
- GigE Vision
- DCAM IEEE 1394b
- Blackmagic Design
- Allied Vision Technologies
- Matrox Imaging
- Microsoft Kinect
- National Instruments
- Toshiba Teli
- Matrix Vision
- Teledyne DALSA
As predicted by astronomers years in advance, a rare cosmic event will occur on the morning of August 21st. Passing directly in front of the sun, the moon will cast a shadow racing across North America at supersonic speed. From Salem, Oregon to Charleston, South Carolina, the 70-mile-wide shadow will darken everything in its path. Outside the path of totality, all of North America will still be able to observe a partial solar eclipse!
If you can manage it, making your way into the path of totality will be an amazing experience. A little preparation goes a long way! The talented developers at Vox have created this interactive map to illustrate the magnitude of the eclipse from your location. It will inform you of the shortest distance to the path of totality!
If you’d like to capture an image of the solar corona like the one below from the National Parks Service, you must be in the path of totality. We would recommend following this shooting guide from the American Astronomical Society. Some general tips include: using a telephoto lens, setting focus manually, and capturing with optimal exposure settings. This exposure calculator by Xavier M. Jubier is a good place to start. Solar filters should be used for partial-eclipse stages, and the sun offers nearly 12 hours a day for you to practice finding good camera settings! This image is a composite of several exposures and involves hours of post-processing on a computer.
Important: If you are observing the sun on ANY day, protect your eyes by wearing a pair of ISO 12312-2 compliant glasses. Viewing the sun through non-compliant glasses can cause significant eye damage.