Stray Light (Flare) Documentation

Stray Light Normalization

Stray light (flare) documentation pages

Introduction: Intro to stray light testing and normalized stray light | Outputs from Imatest stray light analysis | History

Background: Examples of stray light | Root Causes | Test overview | Test factors | Test Considerations | Glossary

Calculations: Metric image | Normalization methods | Light source mask methods | Summary Metrics | Analysis Channels | Saturation

Instructions: High-level Imatest analysis instructions (Master and IT) | Computing normalized stray light with Imatest | Motorized Gimbal instructions

Settings: Settings list and INI keys/values | Standards and Recommendations | Configuration file input

This page provides a description of Imatest’s Normalization Methods.

Normalization “factors out” the level of the light source used in testing. It allows for easier comparison between tests performed with different hardware.
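As a rough, purely illustrative sketch (not Imatest’s implementation), normalization amounts to dividing the linear test-image data by a scalar normalization factor; the function name below is hypothetical:

```python
import numpy as np

def normalize(test_image: np.ndarray, normalization_factor: float) -> np.ndarray:
    """Divide linear test-image data (in DN) by a scalar normalization factor."""
    return test_image.astype(np.float64) / normalization_factor
```

The methods described on this page differ only in how the normalization factor is obtained.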

Normalization Methods

The Imatest stray light analysis provides several methods for normalizing the images under test.

None

Description

With the None normalization method, the test images are not normalized.

This is the simplest of the normalization methods (don’t normalize). It has three main benefits:

    • It is easy to get started (no extra information is needed).
    • The stray light metric is reported in units of digital number (DN), so intuition similar to that used for noise levels applies.
    • The stray light can be analyzed directly, as it manifests in the images after any image signal processing.

Inputs

    • None

Normalization Factor

The normalization factor for this method is always 1.

Metric Range

Ranges of result values for None normalization

    Calculation      Best Possible Measurement    Worst Possible Measurement
    Transmission     0                            Maximum digital number in an image
    Attenuation      +∞                           One divided by the maximum digital number in an image

Level

Description

With the Level normalization method, the user enters the level in units of digital number that is used to normalize the images under test.

A normalization level can be computed using the methods described in the Normalization Compensation section. An example level calculation is provided here.

Inputs

    • Normalization Level in Digital Numbers [DN]

Normalization Factor

The normalization factor for this method is the normalization level entered by the user. 
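A minimal, self-contained sketch of the Level method, using hypothetical values (a 12-bit capture and a user-entered level of 3200 DN):

```python
import numpy as np

# Hypothetical linear 12-bit test image and user-entered normalization level.
test_image = np.random.default_rng(0).integers(0, 4096, size=(1080, 1920))
level_dn = 3200.0

# Transmission-style metric values; nominally 0-1 if the level is chosen appropriately.
metric_image = test_image / level_dn
```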

Metric Range

Ranges of result values for Level normalization

    Calculation      Best Possible Measurement    Worst Possible Measurement
    Transmission     0                            1
    Attenuation      +∞                           1

Note: the ranges above assume that the user-provided level produces transmission values in the 0-1 range. It is possible to choose a level that breaks this assumption.

Direct Reference Image

Description

With the Direct Reference Image normalization method, a reference image is used for computing the point source rejection ratio (PSRR) or extended source rejection ratio (ESRR) metric [1]. The level of the direct image of the source within the reference image is used to normalize the images under test. This reference image may require a lower light level (e.g., with the use of ND filters) and/or a shorter camera exposure time than was used for the test images, so that the direct image of the source is not saturated. The normalization can be compensated to produce a “test image-equivalent” normalization factor by using the normalization compensation settings (ND filter settings and ratio settings).

Note: this method assumes that the light source is collimated.

Note: the “direct” part of the name refers to the fact that no optics that change the divergence of the beam (e.g., lenses, diffusers) are used for the reference image.

See the Normalization Compensation section for background information about the reference image and compensation settings. 

Inputs

    • Reference image filename (fully qualified path to image file)
    • Aggregation method: how to aggregate the “direct image pixels” into a single number
    • Reference image compensation factors: Used to compensate the base normalization factor derived from the reference image  
      • Camera reference image compensation factors  
        • Integration time ratio: The ratio of the reference integration time to the analysis integration time (i.e., the value for the reference image divided by the value for the analysis image)
        • Gain ratio: The ratio of the reference gain to the analysis gain (i.e., the value for the reference image divided by the value for the analysis image)
      • Light source/setup compensation factors
        • ND measurement type (none, density, or transmission)
        • ND measurement value (density or transmission value of the ND filter, if used for the reference image)
        • Light level ratio: The ratio of the reference light level to the analysis light level (i.e., value for reference image divided by the value for analysis image)
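Purely as an illustration of how these inputs might be gathered in one place, the structure below mirrors the list above; the key names and values are hypothetical and are not the actual INI keys (see the Settings list and INI keys/values page for those):

```python
# Hypothetical grouping of the Direct Reference Image inputs; names and values are illustrative only.
direct_reference_inputs = {
    "reference_image_filename": "/path/to/reference_image.tiff",
    "aggregation_method": "mean",            # or "median"
    "camera_compensation": {
        "integration_time_ratio": 0.1,       # reference integration time / analysis integration time
        "gain_ratio": 1.0,                   # reference gain / analysis gain
    },
    "light_source_compensation": {
        "nd_measurement_type": "density",    # "none", "density", or "transmission"
        "nd_measurement_value": 2.0,         # density (or transmission) of the ND filter, if used
        "light_level_ratio": 1.0,            # reference light level / analysis light level
    },
}
```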

Assumptions

    • The direct image of the source is not saturated in the reference image
    • The correct compensation settings are used to compute a “test image-equivalent” normalization factor (e.g., factoring in the difference in light level used for the reference image)

Normalization Factor

The normalization factor is derived from the aggregated level (mean or median) of the direct image of the source within the reference image. This level is then compensated using the input compensation factors (ND filter settings and ratio settings); see the Normalization Compensation Background section below for details. The procedure is as follows, with a code sketch after the steps:

    1. Capture an image of the source.
      1. Note that the image of the source should not be saturated.
      2. If it is, adjust the level of the source (directly or with ND filters) and/or the exposure settings of the camera until the image of the source is not saturated.
      3. Record the parameters used to generate the reference image.
    2. Create the mask of the source to identify the pixels in the image that correspond to the direct image of the light source.
    3. Apply the aggregation method (mean, median) to the reference image pixels inside the mask to compute the base normalization factor.
    4. Compensate the base normalization factor using the Normalization Compensation settings (ND filter settings and ratio settings) to compute a compensated normalization factor. See the Normalization Compensation Background section below for more details. This is the final normalization factor in units of digital number (DN). Note that if compensation settings were used, this number would likely be above the saturation level of the image.  
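A minimal sketch of steps 2-4 above, assuming a boolean source mask is already available (mask creation is covered on the Light source mask methods page). The function name, the synthetic data, and the compensation value are illustrative only, not Imatest’s API:

```python
import numpy as np

def base_normalization_factor(reference_image: np.ndarray,
                              source_mask: np.ndarray,
                              aggregation: str = "mean") -> float:
    """Aggregate the unsaturated direct-image pixels inside the source mask (step 3)."""
    pixels = reference_image[source_mask]
    if aggregation == "mean":
        return float(np.mean(pixels))
    if aggregation == "median":
        return float(np.median(pixels))
    raise ValueError(f"unknown aggregation method: {aggregation}")

# Synthetic stand-ins for a real reference capture and its source mask (step 2).
rng = np.random.default_rng(0)
reference_image = rng.integers(0, 4000, size=(600, 800)).astype(np.float64)
source_mask = np.zeros(reference_image.shape, dtype=bool)
source_mask[280:320, 380:420] = True   # pixels covering the direct image of the source

n_base = base_normalization_factor(reference_image, source_mask, "mean")

# Step 4: apply a compensation factor C (see Normalization Compensation below); 1000 is illustrative.
n_compensated = n_base * 1000.0
```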

Metric Range

Ranges of result values for Direct Reference Image normalization

    Calculation      Best Possible Measurement    Worst Possible Measurement
    Transmission     0                            1
    Attenuation      +∞                           1

Note that due to numerical and algorithmic issues values may appear outside of the bounds above.

Lambertian Reference Image

Description

With the Lambertian Reference Image normalization method, a reference image of a Lambertian diffuser placed between the source and the camera is used [2]. This reference image may require a lower light level (e.g., with the use of ND filters) and/or a shorter camera exposure time than was used for the test images, so that the direct image of the source is not saturated. The normalization can be compensated to produce a “test image-equivalent” normalization factor by using the normalization compensation settings (ND filter settings and ratio settings). The Lambertian Reference Image normalization is used to compute the IEEE-P2020 Pre-Release Flare Attenuation metric.

Inputs

    • Reference image filename (fully qualified path to image file)
    • Aggregation method: how to aggregate the “direct image pixels” into a single number
    • Camera reference image compensation factors: Used to compensate the base normalization factor derived from the reference image  
      • Integration time ratio: The ratio of the reference integration time to the analysis integration time (i.e., the value for the reference image divided by the value for the analysis image)
      • Gain ratio: The ratio of the reference gain to the analysis gain (i.e., the value for the reference image divided by the value for the analysis image)
    • Light level measurements:
      • The measured irradiance (or illuminance) of the collimated beam for the test image(s): \(E_{source}\)
      • The measured radiance (or luminance) of the Lambertian diffuser in the reference image: \( L_{ref}\)

Assumptions

    • The direct image of the source is not saturated in the reference image
    • The correct compensation settings are used to compute a “test image-equivalent” normalization factor (e.g., factoring in the difference in light level used for the reference image)
    • The diffuser is Lambertian

Normalization Factor

The normalization factor is derived from the aggregated level (mean or median) of the direct image of the source within the reference image, together with the measured light levels \(E_{source}\) and \(L_{ref}\). This level is then compensated using the input compensation factors (ND filter settings and ratio settings); see the Normalization Compensation Background section below for details. The procedure is as follows, with a code sketch after the steps:

    1. Capture an image of the source.
      1. Note that the image of the source should not be saturated.
      2. If it is, adjust the level of the source (directly or with ND filters) and/or the exposure settings of the camera until the image of the source is not saturated.
      3. Record the parameters used to generate the reference image.
    2. Create the mask of the source to identify the pixels in the image that correspond to the direct image of the light source.
    3. Apply the aggregation method (mean, median) to the reference image pixels inside the mask to compute the image-level reference: \(R_{ref}\).
    4. Compute the base factor: \(\text{Base Factor}=\frac{\pi\cdot L_{ref}}{E_{source}\cdot R_{ref}}\)
    5. Compensate the base normalization factor using the Normalization Compensation settings (ND filter settings and ratio settings) to compute a compensated normalization factor. See the Normalization Compensation Background section below for more details. This is the final normalization factor in units of digital number (DN). Note that if compensation settings were used, this number would likely be above the saturation level of the image. 
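A minimal sketch of steps 3-5 above, with the measured quantities, aggregated image level, and compensation factor all set to hypothetical values:

```python
import numpy as np

# Hypothetical measurements for the Lambertian reference setup.
E_source = 120.0    # irradiance (or illuminance) of the collimated beam for the test image(s)
L_ref = 25.0        # radiance (or luminance) measured off the Lambertian diffuser
R_ref = 2800.0      # aggregated DN inside the source mask of the reference image (step 3)

# Step 4: base factor from the formula above.
base_factor = (np.pi * L_ref) / (E_source * R_ref)

# Step 5: apply the compensation factor C (see Normalization Compensation below); 10.0 is illustrative.
compensated_factor = base_factor * 10.0
```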

Metric Range

Ranges of result values for Lambertian Reference Image normalization

    Calculation      Best Possible Measurement    Worst Possible Measurement
    Transmission     0                            1
    Attenuation      +∞                           1

Note that due to numerical and algorithmic issues values may appear outside of the bounds above.

Reference Image Compensation

A key assumption of the test is that the level used to normalize the test image data (i.e., the normalization factor) is below saturation. Saturation, or the dynamic range of the camera under test, inherently limits the dynamic range of the test itself. Normalizing by the saturation level provides relatively meaningless results because the level of saturated data is ambiguous: the resulting metric image would have values ranging from 0 to 1, where 1 corresponds to the saturation level.

For many cameras, the direct image of the source may need to be saturated in order to induce stray light in the image. In this case, we can use the information from a separate, well-exposed reference image in which the light level has been attenuated such that the direct image of the source is not saturated. Depending on the controls available on the camera, different techniques can be used to produce an unsaturated image of the source and then a “test image-equivalent” (compensated) normalization factor, such as the following [3]:

  • Adjust the exposure time \(t\)
  • Adjust the system gain \(\rho\)
  • Adjust the source light level \(L\)
  • Use neutral density (ND) filters for the reference image \(f_{ND}\)

These techniques can be used individually or in combination to form a compensation factor \(C\) that serves as a multiplier for the base normalization factor \(N_{base}\), producing a compensated normalization factor \(N_{compensated}\) (which is then used to normalize the test image data). In this case, the base normalization factor \(N_{base}\) is the image level (digital number or pixel value) of the direct image of the source in the reference image [3]. The overall process can be described with a few simple equations:

\(N_{compensated} = N_{base}\cdot C\)

where the compensation factor is the product of the light source/setup and camera compensation factors described below:

\(C = C_{source}\cdot C_{camera}\)

Note that these techniques assume that the camera data is linear or linearizable and that the reciprocity law holds [3]. The following section provides an example of normalized stray light calculation with the use of a compensation factor.

Figure 2: An on-axis direct reference image showing an unsaturated direct image of the light source. The image was captured using a shorter camera exposure time than the test images and with an ND 2.5 filter in front of the light source. In the separate test images (not shown), the direct image of the source is saturated and significantly blooming to reveal stray light.

Light Source/Setup Compensation

Light Source/Setup Compensation refers to adjustments to the light source/setup to change the level of the light reaching the camera for the reference image compared to the test image. This may be accomplished via:

  • Adjusting the light level (\(L\)) of the source
  • Inserting a neutral density (ND) filter between the source and camera

The light source compensation factor is given by:

\(C_{source} = \frac{L_{test}}{L_{reference}}\cdot\frac{1}{f_{ND}} \)

The ND factor, \(f_{ND}\), is:

  • 1, if no ND filter is used.
  • \(\tau\), if ND filters are used and the transmission \(\tau\) of the filter is used for the measurement. \(\tau\) is in the range 0-1.
  • \(10^{-D}\), if ND filters are used and the total density \(D\) of the filter is used for the measurement.
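A small sketch of the light source/setup compensation factor, covering the three ND-measurement cases above; the function names are illustrative only:

```python
def nd_factor(measurement_type: str, value: float = 0.0) -> float:
    """Return f_ND for 'none', 'transmission' (tau in 0-1), or 'density' (D) measurements."""
    if measurement_type == "none":
        return 1.0
    if measurement_type == "transmission":
        return value              # tau, in the range 0-1
    if measurement_type == "density":
        return 10.0 ** (-value)   # total density D of the filter(s)
    raise ValueError(f"unknown ND measurement type: {measurement_type}")

def source_compensation(light_level_test: float, light_level_reference: float,
                        nd_type: str = "none", nd_value: float = 0.0) -> float:
    """C_source = (L_test / L_reference) * (1 / f_ND)."""
    return (light_level_test / light_level_reference) / nd_factor(nd_type, nd_value)

# Example: same light level for both captures, reference taken through an ND filter of density 2.0.
c_source = source_compensation(1.0, 1.0, "density", 2.0)   # -> 100.0
```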

Camera Compensation

Camera Compensation refers to adjustments to the camera configuration to change the measured digital numbers for the reference image compared to the test image. This may be accomplished via:

  • Adjusting the exposure time \(t\) of the camera
  • Adjusting the system gain (ISO speed) \(\rho\) of the camera

The camera compensation factor is given by:

\(C_{camera} = \frac{t_{test}}{t_{reference}}\cdot\frac{\rho_{test}}{\rho_{reference}}\)
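A matching sketch of the camera compensation factor, combined with a light source/setup factor into a compensated normalization factor. All numbers are hypothetical; the 100.0 corresponds to the example in the previous subsection’s sketch:

```python
def camera_compensation(t_test: float, t_reference: float,
                        gain_test: float, gain_reference: float) -> float:
    """C_camera = (t_test / t_reference) * (rho_test / rho_reference)."""
    return (t_test / t_reference) * (gain_test / gain_reference)

# Hypothetical example: the reference exposure was 10x shorter than the test exposure, same gain.
c_camera = camera_compensation(t_test=0.010, t_reference=0.001,
                               gain_test=1.0, gain_reference=1.0)   # -> 10.0

# Combine with a light source/setup factor (e.g., 100.0) and a base normalization factor of 3000 DN.
c_total = c_camera * 100.0            # C = C_camera * C_source = 1000.0
n_compensated = 3000.0 * c_total      # -> 3,000,000 DN (well above the image's saturation level)
```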

 

References

[1] Bruce Bouce et al., “GUERAP II – User’s Guide,” Perkin-Elmer Corporation, 1974. AD-784 874.

[2] Elodie Souksava, Thomas Corbier, Yiqi Li, François-Xavier Thomas, Laurent Chanas, and Frédéric Guichard, “Evaluation of the Lens Flare,” in Proc. IS&T Int’l. Symp. on Electronic Imaging: Image Quality and System Performance XVIII, 2021, pp. 215-1 – 215-7. https://doi.org/10.2352/ISSN.2470-1173.2021.9.IQSP-215

[3] Jackson S. Knappen, “Comprehensive stray light (flare) testing: Lessons learned,” in Electronic Imaging, 2023, pp. 127-1 – 127-7. https://doi.org/10.2352/EI.2023.35.16.AVM-127