Imaging Tech – imatest
Image Quality Testing Software & Test Charts
http://www.imatest.com

Challenges in Automotive Image Quality Testing
http://www.imatest.com/2017/06/challenges-automotive-image-quality-testing/
Mon, 19 Jun 2017

The post Challenges in Automotive Image Quality Testing appeared first on imatest.

Imatest’s Norman Koren presents his vision for challenges in automotive image quality testing.

  • Review the challenges for human observers and/or machine vision algorithms to resolve low-contrast objects over a wide range of background brightness
  • How to distinguish low contrast patches over the full dynamic range of a test chart
  • Comparing the use of hyperbolic wedges in ISO 16505 vs. slanted edges to measure MTF10 in automotive applications
  • Misunderstandings about low contrast slanted-edges

This video was previously recorded at Autosens Detroit 2017, the world’s leading vehicle perception conference. See more at www.auto-sens.com.

Related Content

For more information on Imatest’s solutions for testing image quality in the automotive industry, please visit our solutions page. 

Image Quality Testing for the Automotive Industry [Webinar]

Three Companies Changing the Autonomous Driving Landscape [Article]

Closing the Loop: Distortion Correction
http://www.imatest.com/2017/04/distortion-correction/
Tue, 25 Apr 2017

The post Closing the Loop: Distortion Correction appeared first on imatest.

Imatest’s charts and software allow you to measure the characteristics and parameters of imaging systems. Quite often these measurements simply indicate the limits of system performance and expected image quality.

But some Imatest results let you improve image quality (and subsequent images taken by the same system) by correcting measured aberrations. No new components need to be purchased; no judgment calls need to be made. All it takes is some math and computation, which you can apply in your own external program or with an Imatest module. This is an aspect of image processing pipeline tuning, which is usually done by a dedicated Image Signal Processing (ISP) chip in a device to transform raw sensor data into an appropriate image.

At Imatest, we informally call this “closing the loop”, because it completes the cycle from the camera under test to measurement and back to the camera (in the form of an adjustment).

Today, we’re going to illustrate how to use radial distortion measurements from Imatest to correct for optical distortion (without buying a new lens).

 

Radial Geometric Distortion

Geometric distortion, for the purposes of this post, is roughly defined as the warping of shapes in an image compared to how those shapes would look if the camera truly followed a simple pinhole camera model. (In particular, we are not talking here about perspective distortion.) The most obvious effect of this is that straight lines in the scene become curved lines in the image.

Geometric distortion is not always a bad thing: sometimes curvilinear lenses are chosen on purpose for artistic effect, or a wide-angle lens is used and the distortion is ignored because that’s what viewers have come to expect from such situations. However, subjective user studies have shown that the average viewer of everyday images has limits on the amount of distortion they are willing to accept before it reduces their perception of image quality.

Characterizing (and correcting for) distortion is also necessary for more technical applications which require precise calibration, such as localization of a point in 3-D space in computer vision or for stitching multiple images together for panoramic or immersive VR applications. 

This geometric distortion is almost always due to lens design, and because of that (and how lenses are constructed), it is typically modeled as being (1) purely radial and (2) radially symmetric. 

Purely radial distortion means that no matter where in the image field we consider a point, the only relevant aspect of that point for determining the distortion it has undergone is how far it is from the center of the image. (For the sake of simplicity, we will assume here that the center of the image is the optical center of the system, though in general this needs to be measured in conjunction with, or prior to, radial distortion.) Assuming geometric distortion is radial is extremely helpful in reducing the complexity of the problem of characterizing it, because instead of a 2-dimensional vector field over two dimensions (x- and y-displacement at each pixel location) we only have to determine a 1-dimensional function over one dimension (radial displacement at each radius).
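To make the “purely radial” idea concrete, here is a small Python sketch (not Imatest code): under a radial model, distorting a point changes only its distance from center, never its angle. The cubic coefficients below are hypothetical, chosen only to give mild distortion.

```python
import math

def distort_point(x, y, forward_coeffs):
    """Apply a purely radial distortion to a point (x, y), given in
    coordinates centered on the optical center.

    forward_coeffs are polynomial coefficients (highest power first) of
    r_d = f(r_u); the angle theta is untouched -- only the radius
    changes, which is what makes the model effectively 1-dimensional.
    """
    r_u = math.hypot(x, y)
    theta = math.atan2(y, x)
    # Evaluate the polynomial r_d = f(r_u) with Horner's method.
    r_d = 0.0
    for c in forward_coeffs:
        r_d = r_d * r_u + c
    return (r_d * math.cos(theta), r_d * math.sin(theta))

# Hypothetical cubic with mild distortion: r_d = -0.1*r_u^3 + r_u
x_d, y_d = distort_point(0.3, 0.4, [-0.1, 0.0, 1.0, 0.0])
```

Note that the output point lies on the same ray from center as the input; only its radius has changed.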

By using the SFRplus, Checkerboard, or Dot Pattern modules, Imatest can measure radial distortion in a camera system from an image of the appropriate test chart.

 

Distortion Coefficients in Imatest

Imatest can return functional descriptions of two different types of radial distortion. Both are described by polynomial approximations of the distortion function, but the two polynomials represent different things. In many cases, they are functionally equivalent and one can convert from one functional form to the other. (For simplicity, we ignore here the tan/arctan approximation Imatest can provide and note that when it comes to distortion correction it can be applied in the same way with a change only to the forward mapping step.)

In the rest of this post, we will use the following conventions:

  • \(r_d\) is the distorted radius of a point, i.e. its distance from center in the observed (distorted) image
  • \(r_u\) is the undistorted radius of the point, i.e. the distance from center at which it would have appeared in an undistorted image
  • The function \(r_d = f(r_u)\) is called a forward transformation because it takes an undistorted radius value and converts it to a distorted radius. That is, it applies the distortion of the lens to the point. 
  • The function \(r_u = f^{-1}(r_d)\) is called an inverse transformation because, in contrast to a forward transformation, it undoes the distortion introduced by the lens.
  • \(P(\cdot)\) indicates a polynomial function 

The SFRplus and Checkerboard modules return the polynomial coefficients that describe the inverse transformation which corrects the distortion, \(r_u = f^{-1}(r_d)\), highlighted in Rescharts below.

 

The Dot Pattern module returns the polynomial coefficients for a different parameterization of radial distortion, known as Local Geometric Distortion (LGD), or sometimes as optical distortion. This is the description of radial distortion used by the ISO 17850 and CPIQ standards documents.

LGD is defined as the radial error relative to the true (undistorted) radius, expressed as a percentage (i.e., multiplied by 100):

\[LGD = 100 \cdot \frac{r_d - r_u}{r_u}\]

By considering LGD to be a polynomial function of radius in the distorted image, \(P(r_d)\), we can rearrange this equation into a more useful form: a rational-polynomial version of a distortion-correcting inverse transformation. Thus the dot pattern results can be used in the same way as the SFRplus/Checkerboard results (though we will be directly replacing this rational polynomial with a regular polynomial fit approximation in the code example).

\[r_u = \frac{r_d}{P(r_d)/100 + 1} = f^{-1}(r_d)\]
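To make the conversion concrete, here is a small Python sketch (not Imatest code) that applies this rearrangement to an array of distorted radii. The LGD polynomial coefficients used here are hypothetical.

```python
import numpy as np

def lgd_to_undistorted(r_d, lgd_coeffs):
    """Convert distorted radii to undistorted radii from LGD polynomial
    coefficients (highest power first), using
    r_u = r_d / (P(r_d)/100 + 1)."""
    lgd = np.polyval(lgd_coeffs, r_d)   # LGD, in percent, at each radius
    return r_d / (lgd / 100.0 + 1.0)

# Hypothetical LGD polynomial: P(r_d) = 5*r_d^2 percent (5% at r_d = 1)
r_d = np.array([0.0, 0.5, 1.0])
r_u = lgd_to_undistorted(r_d, [5.0, 0.0, 0.0])
```

At \(r_d = 1\), a 5% LGD shrinks the radius to \(1/1.05 \approx 0.952\), as expected from the formula.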

 

Distortion Correction by Re-Sampling

The pixel array of an image sensor essentially takes a grid of regularly-spaced samples of the light falling on it. However, the pattern of light falling on the sensor has already been distorted by the lens, so while the sensor samples this light regularly, these are effectively not regular samples of the light as it appeared before entering the lens. Our computational solution for remedying this can be described as follows:

We create a new undistorted, regularly spaced grid (a new array of pixels). At each of those “virtual sensor” pixel locations we re-sample image data from the observed image at the location where the sensor pixel would have projected in the absence of distortion. So the distorted image is re-sampled by a grid which undergoes the same distortion, but the sampled results are then presented regularly spaced again, effectively undoing the distortion. This is illustrated below.

Each of the intersection points of the upper grid lines represents a pixel location in our generated, undistorted image (the pixel locations in our “virtual sensor”). Obviously, we have reduced the number of “pixels” here to increase legibility. The lower part of the image represents the distorted image with the sampling grid overlaid on it after the grid has been distorted the same way. The regularly spaced array locations above will be populated with data sampled irregularly from the distorted image below, as indicated by the distorted grid intersection locations.

As a further visual aid, the red arrows descend from the grid intersections in the upper image to the corresponding grid intersections in the lower one. These can be contrasted with the ending locations of the blue arrows, which indicate where the pixel samples would be if undistorted. (Obviously, if the pixel sample locations were not distorted, i.e. the blue arrow locations were used, then the output image would be sampled regularly from the distorted image, and would itself be distorted.)

 

 

An Example

The following example of how to do this re-sampling is provided in MATLAB code, below. You can also download the code and example images at the bottom of this post. The code is merely a particular implementation, though; the concepts can be extracted and applied in any programming language.

Note that below, we use the convention of suffixes ‘_d’ and ‘_u’ to identify variables related to the distorted and undistorted images/coordinates, respectively, and use capitalized variables, such as RHO, to identify matrices of the same size as the test and output images (a property that will be used implicitly below).

(0) Load the image of an SFRplus chart into Imatest and analyze it to determine the inverse transformation coefficients (shown here measured in the Rescharts interactive module). (Alternatively, load an image into the Dot Pattern module, retrieve the LGD coefficients from there, convert them into inverse transformation coefficients, and then follow along with the remaining steps.) Load these into MATLAB.

inverseCoeffs = [0.2259 0 1 0]; % distortion coefficients reported by SFRPlus
im_d = double(imread('sfrplus_distortion.jpg'));
width = size(im_d, 2);
height = size(im_d, 1);
channels = size(im_d, 3);

(1) Define the spatial coordinates of each of the pixel locations of this observed (distorted) image, relative to the center of the image. For example, since this test image is 4288×2872 pixels, the upper left pixel coordinate is (-2143.5, -1435.5).

xs = ((1:width) - (width+1)/2);
ys = ((1:height) - (height+1)/2);
[X, Y] = meshgrid(xs,ys);

(2) Convert these coordinates to polar form so we can manipulate only the radial components (called RHO_d here). We also normalize and then scale the radial coordinates so that the center-to-corner distance of the undistorted image will ultimately be normalized to 1.

[THETA, RHO_d] = cart2pol(X, Y);
normFactor = RHO_d(1, 1); % normalize to corner distance 1 in distorted image
scaleFactor = polyval(inverseCoeffs, 1); % scale so corner distance will be 1 after distortion correction
RHO_d = RHO_d/normFactor*scaleFactor;

(3) NOTE: As a subtle point, the pair of variables THETA and RHO_d actually define spatial coordinates two ways: explicitly and implicitly. They define explicit coordinates in their values, i.e. in that (THETA(1,1), RHO_d(1,1)) defines the angular and radial coordinate of the upper left corner pixel of the image. However, they also implicitly define a set of coordinates simply by being 2-D arrays, which have a natural ordering and structure. Even if we change the value of the (1,1) entry of these two arrays, they are both still the upper left corner entry of each array. The explicit coordinate of the point has changed, but the implicit one has remained the same.

We now apply the measured distortion to the radial coordinates, so that the explicit radial distance matches the radial distance of that point in the observed image. As pointed out above, this distorted location in the observed image is now tied to the undistorted location in the image array via the implicit location in the array. We are using the implicit array element locations as the true coordinates of the undistorted image, and the explicit array values as a map to the point in the distorted image to pull the samples from. 

Note that we don’t actually have the forward transformation polynomial yet, we have the inverse polynomial as returned by Imatest. This can be inverted by fitting a new (inverse of the inverse) polynomial, as in the provided invert_distortion_poly.m file. 

forwardCoeffs = invert_distortion_poly(inverseCoeffs); 
RHO_u = polyval(forwardCoeffs, RHO_d); 
% Convert back to cartesian coordinates to get the (x,y) distorted sample points in image space 
[X_d, Y_d] = pol2cart(THETA, RHO_u*normFactor); 

(4) We now have X_d, Y_d arrays whose implicit coordinates are those of the undistorted image and whose explicit values indicate the sampling points in the observed image associated with them. We can use these directly as query (sampling) points in the interp2() function.

% Re-sample the image at the corrected points using the interp2 function. Apply to each color
% channel independently, since interp2 only works on 2-d data (hence the name). 
im_u = zeros(height,width,channels); % pre-allocate space in memory
for c = 1:channels
   im_u(:,:,c) = interp2(X, Y, im_d(:,:,c), X_d, Y_d);
end

That’s it! Now we can view the undistorted fruits of our labor. Notice the straightened lines on top and bottom, in particular. Also note the black areas around the edges of this undistorted image; of course, there was no information in the original image to meaningfully fill them in.
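If MATLAB isn’t available, the same steps translate almost line-for-line to Python with NumPy. The sketch below is a dependency-light adaptation, not Imatest code: the hand-rolled bilinear sampler stands in for interp2, invert_distortion_poly mirrors the swap-and-refit idea of the invert_distortion_poly.m helper, and a single-channel image and the same coefficient conventions as above are assumed.

```python
import numpy as np

def invert_distortion_poly(inverse_coeffs, degree=3, n_samples=1000):
    # Sample the inverse polynomial r_u = f^-1(r_d) over the normalized
    # radius range, then fit r_d as a polynomial in r_u (the same
    # swap-and-refit idea as the MATLAB helper).
    r_d = np.linspace(0.0, 1.0, n_samples)
    r_u = np.polyval(inverse_coeffs, r_d)
    return np.polyfit(r_u, r_d, degree)

def bilinear_sample(im, rows, cols):
    # Bilinear interpolation at fractional (row, col) positions; samples
    # that fall outside the image become 0 (the black border visible
    # around the corrected example images).
    h, w = im.shape
    eps = 1e-6
    valid = (rows >= -eps) & (rows <= h - 1 + eps) & \
            (cols >= -eps) & (cols <= w - 1 + eps)
    r = np.clip(rows, 0, h - 1)
    c = np.clip(cols, 0, w - 1)
    r0 = np.floor(r).astype(int)
    c0 = np.floor(c).astype(int)
    r1 = np.minimum(r0 + 1, h - 1)
    c1 = np.minimum(c0 + 1, w - 1)
    fr, fc = r - r0, c - c0
    top = im[r0, c0] * (1 - fc) + im[r0, c1] * fc
    bot = im[r1, c0] * (1 - fc) + im[r1, c1] * fc
    return np.where(valid, top * (1 - fr) + bot * fr, 0.0)

def undistort(im_d, inverse_coeffs):
    # Steps (1)-(4) of the MATLAB example, for a single-channel image.
    height, width = im_d.shape
    xs = np.arange(1, width + 1) - (width + 1) / 2.0   # centered coords
    ys = np.arange(1, height + 1) - (height + 1) / 2.0
    X, Y = np.meshgrid(xs, ys)
    RHO_d = np.hypot(X, Y)
    THETA = np.arctan2(Y, X)
    norm_factor = RHO_d[0, 0]                       # corner distance -> 1
    scale_factor = np.polyval(inverse_coeffs, 1.0)  # corner -> 1 after correction
    RHO_dn = RHO_d / norm_factor * scale_factor
    forward_coeffs = invert_distortion_poly(inverse_coeffs)
    RHO_u = np.polyval(forward_coeffs, RHO_dn)
    X_d = RHO_u * norm_factor * np.cos(THETA)
    Y_d = RHO_u * norm_factor * np.sin(THETA)
    # Convert centered coordinates back to 0-based array indices.
    rows = Y_d + (height + 1) / 2.0 - 1.0
    cols = X_d + (width + 1) / 2.0 - 1.0
    return bilinear_sample(im_d.astype(float), rows, cols)
```

With identity coefficients ([1, 0], i.e. no distortion) the output reproduces the input, which is a quick sanity check before applying measured coefficients like the [0.2259 0 1 0] used above.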

 

Of course, we can now undistort scenes besides just test chart images. Now that we have used the test chart and Imatest to characterize the distortion caused by the camera system itself, we can undo that distortion in any other image it takes. Since the supposed-to-be-straight lines of architecture are a very common source of noticeable distortion, we demonstrate this on a photo of our office building in Boulder, CO on a day with diffuse lighting (i.e., gloom). 

 

These example images and a more verbose version of the MATLAB code are available here: distortion_correction_example.zip (5.4 MB)

You can measure the distortion in the images yourself in Imatest, or use the supplied values in the distortion_correct_ex.m file. We hope that this post has helped illustrate how this Imatest measurement can be immediately useful for incorporating into your pipeline to improve your images.

The Imatest Radial Geometry module (added to Imatest 5.0, August 2017)

If you don’t want to code your own, you can add or correct distortion using the Radial Geometry module, which works on single images or batches of images (using settings from the most recent single image run). Full details, including a description of the settings, are in the Radial Geometry instructions.

Here is the input window after reading a distorted image of an SFRplus test chart.

Radial Geometry opening window.
The image is from a dcraw-converted image (with no distortion-correction).

Parameters may be obtained from Imatest runs: SFRplus is shown in the following example.

SFRplus Setup results for above image, showing distortion calculations

Here is the corrected result.

Corrected image using parameters from SFRplus Setup

The Effects of misregistration on the dead leaves cross-correlation texture blur analysis
http://www.imatest.com/2017/01/the-effects-of-misregistration-on-the-dead-leaves-cross-correlation-texture-blur-analysis/
Thu, 19 Jan 2017

The post The Effects of misregistration on the dead leaves cross-correlation texture blur analysis appeared first on imatest.

This paper was given as part of the Electronic Imaging 2017 Image Quality and System Performance XIV and Digital Photography and Mobile Imaging XIII sessions.

When: Tuesday, January 31, 2017, at 12:10 pm

By: Robert Sumner with support from Ranga Burada, Noah Kram

Abstract: The dead leaves image model is often used for measurement of the spatial frequency response (SFR) of digital cameras, where response to fine texture is of interest. It has a power spectral density (PSD) similar to natural images and image features of varying sizes, making it useful for measuring the texture-blurring effects of non-linear noise reduction which may not be well analyzed by traditional methods. The standard approach for analyzing images of this model is to compare observed PSDs to the analytically known one. However, recent works have proposed a cross-correlation based approach which promises more robust measurements via full-reference comparison with the known true pattern. A major assumption of this method is that the observed image and reference image can be aligned (registered) with sub-pixel accuracy.

Read Full Paper:

Effects of misregistration on the dead leaves cross-correlation texture blur analysis

Measuring MTF with wedges: Pitfalls and best practices
http://www.imatest.com/2017/01/measuring-mtf-with-wedges-pitfalls-and-best-practices/
Thu, 19 Jan 2017

The post Measuring MTF with wedges: Pitfalls and best practices appeared first on imatest.

This paper was given as part of the Electronic Imaging 2017 Autonomous Vehicles and Machines session.

When: Monday, January 30, 2017, at 10:10 am

By: Norman Koren with support from Henry Koren, Robert Sumner

Abstract: The ISO 16505 standard for automotive Camera Monitor Systems uses high contrast hyperbolic wedges instead of slanted-edges to measure system resolution, defined as MTF10 (the spatial frequency where MTF = 10% of its low frequency value). Wedges were chosen based on the claim that slanted-edges are sensitive to signal processing. While this is indeed the case, we have found that wedges are also highly sensitive and present a number of measurement challenges: sub-pixel location variations cause unavoidable inconsistencies; wedge saturation makes results more stable at the expense of accuracy; MTF10 can be boosted by sharpening, noise, and other artifacts, and may never be reached. Poor quality images can exhibit high MTF10. We show that the onset of aliasing is a more stable performance indicator, and we discuss methods of getting the most accurate results from wedges as well as misunderstandings about low contrast slanted-edges, which correlate better with system performance and are more representative of objects of interest in automotive and security imaging.

Full Text: Measuring MTF with Wedges: Pitfalls and best practices

Slides: Wedge_measurements_N_Koren_2017

Imatest upgrades based on the paper

The recommended metric to replace MTF10 (min(MTF10, onset of aliasing, Nyquist frequency)) is displayed in the Wedge MTF plot as well as the (new in Imatest 5.0) Multi-Wedge plot, shown below.

Multi-Wedge plot for eSFR ISO, including recommended metric.
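In pseudocode terms, the recommended summary metric is simply a minimum over the available frequency limits. A small Python sketch (not Imatest code; the cycles/pixel values are hypothetical):

```python
def wedge_resolution_metric(mtf10, aliasing_onset, nyquist):
    """Summary resolution metric recommended in the paper:
    min(MTF10, onset of aliasing, Nyquist frequency).  Either of the
    first two may be None if it was never reached in the measured
    range; Nyquist is always defined by the sensor."""
    candidates = [f for f in (mtf10, aliasing_onset) if f is not None]
    candidates.append(nyquist)
    return min(candidates)

# Hypothetical measurements, in cycles/pixel:
metric = wedge_resolution_metric(0.42, 0.35, 0.5)
```

Here an early onset of aliasing (0.35 cycles/pixel) caps the reported resolution even though MTF10 alone would have suggested 0.42.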

Testing a macro lens using Checkerboard and Micro Multi-slide
http://www.imatest.com/2016/12/testing-macro-lens-using-checkerboard-micro-multi-slide/
Tue, 13 Dec 2016

The post Testing a macro lens using Checkerboard and Micro Multi-slide appeared first on imatest.

Testing a 1-5x Macro Canon MP‑E 65mm Lens

Imatest’s Checkerboard module is our new flagship module for automated analysis of sharpness, distortion and chromatic aberration from a checkerboard (AKA chessboard) pattern. The big benefit of using the checkerboard is that there are looser framing requirements than with other kinds of test targets. While checkerboard lacks the color and tone analysis provided by SFRplus and eSFR ISO, these features are not available on the high precision chrome on glass substrate, so the checkerboard is the optimal pattern for this test.

The Imatest Micro Multi-Slide contains high precision checkerboard patterns with many different scales of frequencies. This makes a single target capable of effectively testing a wide range of magnifications.

We obtained the cycles per object mm below by dividing the LW/PH obtained from the image by twice the image height in mm.

Magnification | Image height | Center MTF50             | Best Aperture
1x            | 23.71 mm     | 29.61 cycles / object mm | f/5.6
2x            | 11.86 mm     | 57.04 cycles / object mm | f/5.6
3x            | 7.90 mm      | 74.55 cycles / object mm | f/5.6
4x            | 6.07 mm      | 81.52 cycles / object mm | f/4
5x            | 4.74 mm      | 87.61 cycles / object mm | f/4
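The conversion works out to a one-liner; a Python sketch follows. The LW/PH value here is hypothetical, chosen only to be consistent with the 1x row of the table (one cycle = two line widths, and the picture height maps onto the image height on the slide).

```python
def cycles_per_object_mm(lw_ph, image_height_mm):
    """Convert sharpness in line widths per picture height (LW/PH) to
    cycles per mm on the object: one cycle = two line widths, and the
    picture height spans image_height_mm on the object."""
    return lw_ph / (2.0 * image_height_mm)

# Hypothetical LW/PH; at 1x the image height on the slide is 23.71 mm
mtf50_cpmm = cycles_per_object_mm(1404.0, 23.71)  # ~29.61 cycles/object mm
```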

 

Thanks to this study, we now know to select the f/4 aperture at 5x magnification to maximize the resolving power of this lens. Detailed results and the capture and analysis procedures are available below.


1x Magnification

This report is shown in line widths per picture height (LW/PH):

1x

1x-3d


2x Magnification

2x

2x-3d


3x Magnification

3x

3x-3d


4x Magnification

4x

4x-3d


5x Magnification

At the limit of this lens’ magnification, we obtain the most spatial detail on the target at f/4.

5x

5x-3d


Capture Procedure

We used this camera setup to take high-precision photographs of the test targets on an LED lightbox:

micromulti_setup_scaled

We mounted the camera on an adjustable arm and used a Manfrotto 454 Micro Positioning plate for adjusting distance and focus. We masked off the extra parts of the lightbox using opaque material to prevent additional stray light from increasing flare.

We used the following framing for the various magnifications:

micromulti_site_blog_customized

To get good sharpness measurements, we selected an area of the slide that had under 10 vertical squares in the image. This yields a reasonably large SFR region for the most reliable calculations and prevents an excessive number of regions.

We centered the chart on the 3-dot mark in the center of the frequency zone. For optimal slanted-edge analysis, we rotated the chart by about 5° according to the ISO 12233 standard. Our current checkerboard routines can automatically detect the checkerboard, but they look for complete rows and columns before including a set of regions, which means that rotation can make framing a little more difficult: if a corner of a complete row/column of squares gets clipped off, the automatic region detection will skip some regions you may have wanted to test, producing less-than-optimal region availability around the periphery of your image. We will be improving these selection routines in future releases of our software.

If we wanted more detail about lateral chromatic aberration and distortion (which are very low for this lens) we would have analyzed the dot pattern regions of the chart.

We used 5-megapixel downsampled JPGs from the camera to perform this analysis, which gives us the following table of image sizes:

Magnification | Image height | Image pixel size
1x            | 23.71 mm     | 13 µm
2x            | 11.86 mm     | 6.5 µm
3x            | 7.90 mm      | 4.33 µm
3.9x          | 6.07 mm     | 3.33 µm
5x            | 4.74 mm      | 2.6 µm

Analysis Procedure

We performed our Checkerboard Setup and selected all regions:

setup

After initially determining our range of expected MTF values, we disabled auto-scaling on our 3D plots and set our range to the total:

micromultiscaling

We used Imatest Batchview to coalesce the large volume of tests in order to produce the above bar graphs. We also used ImageMagick to assemble the nifty animated GIFs from the collections of 3D plots:

convert -delay 100 mag1/Results/*3D.png 1x-3D.gif

We hope that you find this write-up to be helpful in testing your own equipment. You can purchase the Micro Multi-Slide on our store or contact charts@imatest.com about customizing a test target that fits your unique requirements. See our Macro Solutions for other close-range testing items.

Using Sharpness to Measure Your Autofocus Consistency
http://www.imatest.com/2016/11/using-sharpness-to-measure-your-autofocus-consistency/
Tue, 15 Nov 2016

The post Using Sharpness to Measure Your Autofocus Consistency appeared first on imatest.

By Ranga Burada

Autofocus plays a major role in many camera system applications with variable focus, including consumer electronic devices. Camera systems must be able to focus at a variety of distances. Optical systems on cameras only allow a certain range of distances from the camera to be in focus at once (this range is often known as the depth of field, or depth of focus). The distance from the camera where objects will be most in focus, effectively the center of this range, is the focus distance; the role of the autofocus system in a camera is to set this point accurately every time.

We refer to autofocus consistency as the ability of a camera to focus on a given point correctly, repeatedly. To determine if a point is in focus, we measure the sharpness of an object (specifically, a test chart) at that distance. By taking many images of the chart, letting the autofocus system reset and attempt to focus on the chart anew each time, we can tell whether the camera system is focusing consistently. By examining the MTF50 values calculated from these images (a common objective image quality metric which correlates well with perceived sharpness), we can tell if sharpness varied between captures, and thus if focus accuracy on the chart varied.
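As a back-of-the-envelope illustration (not the module’s actual computation), the spread of MTF50 values across repeated captures at one distance can be summarized in a few lines of Python; the MTF50 values below are hypothetical:

```python
import statistics

def autofocus_consistency(mtf50_values):
    """Summarize the spread of MTF50 values from repeated autofocus
    captures at a single chart distance.  A smaller coefficient of
    variation (CV) means more consistent focus."""
    mean = statistics.mean(mtf50_values)
    stdev = statistics.stdev(mtf50_values)
    return {"mean": mean, "stdev": stdev, "cv_percent": 100.0 * stdev / mean}

# Hypothetical MTF50 values (cycles/pixel) from ten autofocus captures:
stats = autofocus_consistency(
    [0.25, 0.24, 0.26, 0.25, 0.23, 0.25, 0.26, 0.24, 0.25, 0.25])
```

Repeating this at each test distance, as the module does, shows whether consistency itself depends on distance.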

Imatest Autofocus Consistency Module

The Imatest Autofocus Consistency module analyzes the sharpness (specifically MTF50) results from a set of images captured at a fixed distance from an Imatest sharpness chart, such as the SFRplus chart, eSFR ISO chart, or the new AutoFocus chart. The user generates MTF50 values from these images using the SFR, SFRplus, or eSFR ISO modules in Imatest. The Autofocus Consistency module is a post-processor that runs on the outputs of these analyses and consolidates them into a more useful form. You can find a more detailed description of the test procedure here.

 

Plot: MTF50 values at each chart-to-camera test distance

 

In the above plot, each x-axis position indicates the distance from chart to camera. The colored data marks spread vertically at each position indicate the MTF50 values calculated from the images captured at that chart distance. The consistency of the autofocus system at a given distance is indicated by the tightness of the spread of MTF50 values for images taken at that distance. The narrower this spread, the more consistent the autofocus system is. In order to determine if the system’s consistency depends on distance (perhaps it has an easy time focusing on nearby points, but tends to fail for faraway ones), this analysis is repeated at many test distances, as in the plot above.

To learn more about maintaining consistency while measuring sharpness with MTF values, visit Increasing the Repeatability of Your Sharpness Tests.

Increasing the Repeatability of Your Sharpness Tests
http://www.imatest.com/2016/08/increasing-repeatability-of-sharpness-tests/
Thu, 18 Aug 2016

The post Increasing the Repeatability of Your Sharpness Tests appeared first on imatest.

By Robert Sumner
With contributions from Ranga Burada, Henry Koren, Brienna Rogers and Norman Koren

Consistency is a fundamental aspect of successful image quality testing. Each component in your system may contribute to variation in test results. For tasks such as pass/fail testing, the primary goal is to identify the variation due to the component and ignore the variation due to noise. Being able to accurately replicate test results with variability limited to 1-5% will give you a more accurate description of how your product will perform.

Since Imatest makes measurements directly from the image pixels, any source that adds noise to the image can affect measurements. A primary source of noise in images is electronic sensor noise. Photon shot noise also contributes significantly in low-light situations. Other systemic sources of measurement variability, such as autofocus hysteresis, will not be addressed in this post.  

In order to reduce variation in your sharpness results and increase test repeatability, you should take steps to decrease the amount of noise in your image.

Here are 5 tips to limit noise in your test results:

Maximize your samples

Since most sources of noise are independent across both different exposures and pixel locations within an exposure, their influence can be effectively canceled by averaging multiple samples.

In order to exploit the temporal aspect of noise, you can combine multiple images of the same scene using the “Combine files for signal averaging” option when selecting multiple image files for analysis using Imatest’s Fixed Modules. This trick works for all analyses in Imatest.
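The noise reduction from averaging follows the familiar 1/√N law, which a short simulation makes concrete. This is a sketch using an assumed flat scene and Gaussian sensor noise, not Imatest's implementation of signal averaging:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 128.0)     # flat "scene" (hypothetical)
sigma = 8.0                          # per-frame noise standard deviation

# Simulate 16 exposures of the same scene with independent noise,
# then average them (conceptually what signal averaging does).
frames = clean + rng.normal(0.0, sigma, size=(16, 64, 64))
averaged = frames.mean(axis=0)

noise_single = frames[0].std()       # roughly sigma
noise_avg = averaged.std()           # roughly sigma / sqrt(16)
```

With 16 averaged frames, the residual noise is about one quarter of the single-frame noise, as the 1/√N law predicts.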

Similarly, for analyses based on scale-invariant Regions of Interest (ROIs), such as the preferred slanted-edge MTF technique, increasing the area of the ROI around the slanted edge increases the number of independent random samples used. In general, you want to select as large a region around the slanted edge as the chart allows, while staying within the desired portion of the image field (since MTF generally varies across the field). This trick does not work for tests that have a fixed feature size in the image, such as Siemens stars and hyperbolic wedges.

Ensure adequate signal level via chart contrast (but not too much)

For sharpness measurements, the signal that you want to measure is related to the amount of contrast in the image. The more contrast you have, the higher the signal-to-noise ratio (SNR) and the less effect noise will have on results. There can be too much of a good thing, however: as the pixels in your sensor reach saturation, clipping, or non-linear response regions, unrealistic increases in measured sharpness occur. The ISO 12233:2014 standard specifies a test chart edge printed at 4:1 contrast in order to prevent this saturation for most systems.

The slanted edge signal level is intimately tied to the contrast of the light and dark sides of the edge. There is an optimal range of contrast for slanted edges that you should try to achieve in order to obtain reliable results. This partially involves choosing a chart appropriate for your test setup.
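As a rough illustration, a capture script could sanity-check an edge crop for clipping and for an approximately 4:1 light/dark ratio before analysis. The function and thresholds below are hypothetical, not an Imatest or ISO requirement:

```python
import numpy as np

def edge_contrast_ok(img, low_frac=0.02, high_frac=0.98, target=4.0, tol=1.5):
    """Rough sanity check of an 8-bit slanted-edge crop: no clipping at
    either end of the pixel range, and a light/dark ratio near the target.
    The thresholds here are illustrative, not a specification."""
    if img.min() <= 0 or img.max() >= 255:          # clipped shadows/highlights
        return False
    dark = np.quantile(img, low_frac)               # robust dark-side level
    light = np.quantile(img, high_frac)             # robust light-side level
    return abs(light / dark - target) <= tol

# Synthetic 4:1 edge: dark side ~50, light side ~200, inside [0, 255].
edge = np.concatenate([np.full((32, 16), 50.0), np.full((32, 16), 200.0)], axis=1)
ok = edge_contrast_ok(edge)
clipped_ok = edge_contrast_ok(edge + 60.0)          # light side pushed past 255
```

The quantile-based levels make the check tolerant of a few noisy pixels, while the min/max test catches outright saturation.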

Increasing the contrast in the outer regions of a test chart, which are most impacted by shading, is one technique for increasing signal level. When ordering charts from Imatest, you can request customizations such as this as appropriate.

Be aware of the effects of processing an image

Some devices tested with Imatest can produce raw images that have not been processed by software after capture. In such cases, Imatest can provide accurate measurements of the combined lens-and-sensor system. Whenever a camera device processes an image prior to input to Imatest, the effects of that processing can be observed, studied, and understood, but cannot be ignored.

When an image is converted to an 8-bit (24-bit color) JPEG from a higher bit-depth sensor, noise increases slightly due to quantization. The noise increase can be worse (“banding” can appear) if extensive image manipulation (dodging and burning) is required. It is often best to convert to 16-bit (48-bit color) files. Processing also often includes sharpening, which can increase the relative power of noise at higher frequencies.
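The quantization noise added by an 8-bit conversion can be estimated directly. The sketch below assumes a simple round-to-8-bit model; the theoretical standard deviation of the rounding error is step/√12, with step = 1/255 for values normalized to [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(1)

# A smooth mid-gray signal in [0, 1] with a little simulated sensor noise
# (enough to decorrelate the rounding error from the signal).
signal = np.clip(0.5 + rng.normal(0, 0.01, size=100_000), 0, 1)

# Quantize to 8 bits and back; the rounding adds quantization noise
# with theoretical std ~ (1/255) / sqrt(12) ~ 0.0011.
as8bit = np.round(signal * 255) / 255
quant_error = as8bit - signal
q_std = quant_error.std()
```

This added noise is small relative to typical sensor noise, which is why the increase from 8-bit conversion is only slight unless heavy manipulation follows.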

A final caveat on processed images is that many consumer cameras (especially mobile device cameras) use non-linear noise reduction (such as bilateral filtering), which may smooth out image noise on slanted-edge targets but also reduce texture detail. (Side note: averaging multiple images as suggested above will not work when non-linear processing such as this is involved.) In such a case, slanted-edge measurements may not tell the whole story of sharpness, and a random (texture analysis) chart may be more appropriate.

Ensure a better exposure

Lower light environments often necessitate higher ISO speeds in order to get a good exposure, which leads to increased sensor noise and variation. Ensuring a good photographic exposure can reduce both photon shot noise and the relative effect of sensor noise in an image. The two primary ways of increasing exposure values (though be careful to keep the light areas below the saturation level of your sensor, as mentioned above!) are:

  1. Increase the amount of light reflected by the chart by increasing the brightness of your light source
  2. Increase the exposure time to gather more light, as long as the camera and target are both stationary
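The benefit of a longer exposure on photon shot noise can be illustrated with a Poisson model: shot-noise-limited SNR grows as the square root of the collected light. The photon rate below is an arbitrary assumption for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
photon_rate = 1000.0                 # assumed mean photons per pixel per unit time

def snr_at_exposure(t, n=50_000):
    """Simulate photon shot noise (Poisson counts) for exposure time t
    and return the resulting SNR = mean / std. Illustrative model only."""
    counts = rng.poisson(photon_rate * t, size=n)
    return counts.mean() / counts.std()

snr_1x = snr_at_exposure(1.0)
snr_4x = snr_at_exposure(4.0)        # 4x the light: SNR should roughly double
```

Quadrupling the exposure roughly doubles the shot-noise-limited SNR, which is why a good exposure matters as much as a quiet sensor.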

Select a repeatable measurement

The shape of an MTF curve gets perturbed in the presence of noise. It is often impractical to compare two full MTF curves or to include one in a report, so engineers typically reduce the information in the curve to one or two summary metrics. These are meant to convey the most important information about the curve in a single number. Common examples are:

  • MTF10, MTF30, and MTF50: the frequency values at which the MTF curve falls to 10%, 30%, and 50% of its normalized (DC) value, respectively
  • MTF50P: the frequency value at which the MTF curve falls to 50% of its maximum value (which can be greater than the value of 1 found at DC if sharpening is present)
  • MTF at ¼ and ½ Nyquist: the MTF value at one quarter and one half the Nyquist sampling rate (0.125 and 0.25 cycles/pixel, respectively)
  • MTF Area: the area under the MTF curve from DC to 0.5 cycles/pixel, usually with the curve normalized to peak at a value of 1. (Less common.)

These are illustrated below on a synthetic, noise-free MTF curve example. The MTF Area value is the integral of the light red region under the curve.

MTF Curve
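Given a sampled MTF curve, these summary metrics reduce to a few lines of array code. The curve shape below is a synthetic Gaussian falloff chosen purely for illustration:

```python
import numpy as np

# Synthetic, noise-free MTF curve sampled from DC to Nyquist.
f = np.linspace(0, 0.5, 501)            # spatial frequency, cycles/pixel
mtf = np.exp(-(f / 0.22) ** 2)          # illustrative shape; mtf[0] == 1

def freq_at(mtf, f, level):
    """First frequency where the curve drops to `level` of its DC value."""
    idx = np.argmax(mtf <= level * mtf[0])
    return f[idx]

mtf50 = freq_at(mtf, f, 0.50)
mtf50p = f[np.argmax(mtf <= 0.5 * mtf.max())]   # 50% of peak, not of DC
mtf_half_nyq = mtf[np.searchsorted(f, 0.25)]    # MTF at 0.25 cycles/pixel
# Trapezoidal area under the peak-normalized curve, DC to 0.5 c/p.
mtf_area = float(np.sum((mtf[:-1] + mtf[1:]) / 2 * np.diff(f)))
```

With no sharpening the curve peaks at DC, so MTF50 and MTF50P coincide; they separate only when sharpening pushes the peak above 1.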

The value of each of these metrics will change slightly for each different realization of noise (i.e., each photograph you take of the slanted edge), but some metric values tend to be less stable (have more variance) in the presence of noise. It is important to make sure you are using a metric that embodies the MTF characteristic you care about but is also repeatable considering the amount of noise you might expect to encounter.

Shown below is a set of 10 different MTF curves calculated (using Imatest’s SFR module) from a set of simulated slanted-edge images. Our simulation process involved generating a slanted edge at 5 degrees (bilinear interpolation), applying a Gaussian blur kernel, adding white Gaussian pixel-wise noise (a different instance per curve below), and applying sharpening using an unsharp masking technique. Overlaid on the family of MTF curves are boxplots (handy plots that succinctly represent the important statistics of an entire population; in this case, 100 simulations using the above process) corresponding to the different summary metrics.

repeatability
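A simulation along the lines described above can be sketched as follows. The kernel size, noise level, and sharpening gain are illustrative assumptions, not the values used to generate the plots:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_edge(h=64, w=64, angle_deg=5.0, blur_sigma=1.2, noise_std=4.0):
    """One noisy slanted-edge realization, loosely following the steps in
    the text: ideal 5-degree edge, separable Gaussian blur, additive white
    Gaussian noise, then unsharp-mask sharpening. Parameters illustrative."""
    y, x = np.mgrid[0:h, 0:w]
    edge = np.where(x - w / 2 > (y - h / 2) * np.tan(np.radians(angle_deg)),
                    200.0, 50.0)                      # 4:1 contrast edge
    k = np.exp(-np.arange(-4, 5) ** 2 / (2 * blur_sigma ** 2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, edge)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, "same"), 0, blurred)
    noisy = blurred + rng.normal(0, noise_std, size=(h, w))
    smooth = np.apply_along_axis(lambda r: np.convolve(r, k, "same"), 1, noisy)
    return noisy + 0.5 * (noisy - smooth)             # unsharp masking

img = simulate_edge()
```

Running this repeatedly with fresh noise instances and feeding each result to an SFR analysis is what produces a population of MTF curves like the ones plotted above.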

The most important aspect of these boxes for this discussion is their length, which represents how much variance each metric shows over the population of noisy edge images. (Lengths of vertically oriented metrics have been compensated for different axis scales in this image to allow visual comparison with horizontally plotted ones.) Note that MTF50 and MTF50P have smaller amounts of variance than the similarly common MTF30 and MTF10. MTF at ½ and ¼ Nyquist vary on a different scale, since they have different units than the previously mentioned metrics, with the latter significantly more affected by noise. MTF Area has the least variance, though it is also on a different scale and has a different relationship to sharpness. (We will study the suitability of MTF Area, a very promising though not commonly used metric, in a future post.)

The plots below further show how the different MTF metrics vary at different levels of sharpening and noise. The standard deviation of each metric, σmetric, is calculated at each noise level over 100 random instances. The original simulated slanted-edge test image had pixel values in the range [0, 255] and 4:1 contrast.

no sharpening line graph

moderate sharpening line graph

strong sharpening line graph
The above figures display the expected general trends of increasing variability for all metrics, both with increasing noise and with increasing levels of sharpening. Interestingly, the ordering of the metrics by variability is essentially constant across all levels of noise and sharpening. Another interesting note is that MTF10 and MTF at ½ Nyquist are especially sensitive to sharpening: their variability jumps the most when sharpening is applied. These two metrics are also generally the most variable overall, while MTF Area is the most consistent.

When choosing a metric to report the sharpness of an imaging system, it is important to keep in mind how susceptible the reported values are to variation due to random noise. By using a more stable summary metric, you can ensure repeatability of your results in future tests.

The post Increasing the Repeatability of Your Sharpness Tests appeared first on imatest.

]]>
http://www.imatest.com/2016/08/increasing-repeatability-of-sharpness-tests/feed/ 0
Imatest Support for CPIQ Metrics http://www.imatest.com/2016/03/imatest-support-for-cpiq-metrics/ http://www.imatest.com/2016/03/imatest-support-for-cpiq-metrics/#comments Thu, 17 Mar 2016 00:03:01 +0000 http://www.imatest.com/?p=15239 What is CPIQ? IEEE-SA working group P1858 created the CPIQ standard. CPIQ seeks to standardize image quality test metrics and methodologies across the mobile device industry, correlate objective test results with human perception, and combine this data into a meaningful consumer rating system. CPIQ serves as a way to assess and communicate image quality to the vast […]

The post Imatest Support for CPIQ Metrics appeared first on imatest.

]]>
What is CPIQ?

IEEE-SA working group P1858 created the CPIQ standard. CPIQ seeks to standardize image quality test metrics and methodologies across the mobile device industry, correlate objective test results with human perception, and combine this data into a meaningful consumer rating system.

CPIQ serves as a way to assess and communicate image quality to the vast majority of consumers who are unsure how to judge and compare device image quality.

See the full CPIQ Overview.

Imatest support for CPIQ

Imatest 4.4 supports all CPIQ v1 measurements, including:

Future revisions of the standard will cover white balance, autofocus, video quality, dynamic range and many more relevant image quality factors.

Download Imatest 4.4 and see our Change Log for more details.

The post Imatest Support for CPIQ Metrics appeared first on imatest.

]]>
http://www.imatest.com/2016/03/imatest-support-for-cpiq-metrics/feed/ 1
Color difference ellipses http://www.imatest.com/2015/09/color-difference-ellipses/ http://www.imatest.com/2015/09/color-difference-ellipses/#respond Tue, 22 Sep 2015 22:23:51 +0000 http://www.imatest.com/?p=13479 Starting with Imatest 4.2, Imatest's two-dimensional color displays— CIELAB a*b*, CIE 1931 xy chromaticity, etc.— in Multicharts, Multitest, Colorcheck, SFRplus, and eSFR ISO can display ellipses that assist in visualizing perceptual color differences. You can select between MacAdam ellipses (of historical interest), or ellipses for ΔCab (familiar but not accurate), ΔC94, and ΔC00 (recommended).

The post Color difference ellipses appeared first on imatest.

]]>
Imatest has several two-dimensional displays for comparing test chart reference (ideal) colors with measured (camera) colors, where reference colors are represented by squares and measured values are represented by circles. The two most familiar representations— CIELAB a*b* and CIE 1931 xy chromaticity— are shown below. They are for the Colorchecker image, also shown below, where the upper-left of each patch is the reference color and the lower-right is the camera color.

ab_plot_no_ellipses xy_plot_no_ellipses
How different are the reference and camera colors in the Colorchecker image on the right, represented in these diagrams?

split_colors_forellipses

Color differences and MacAdam ellipses

When these representations are viewed, the question naturally arises: how different are the reference and camera colors? Color differences can be quantified by several measurements: ΔEab, ΔE94, and ΔE00 (where 00 is short for 2000). The ΔE measurements include both chroma (color) and luminance (brightness). If brightness (L*) is omitted, these measurements are called ΔCab, ΔC94, and ΔC00, where C stands for chroma (which includes a color's hue and saturation). In this post we discuss chroma differences. ΔCab = (Δa*² + Δb*²)^(1/2) (sometimes called ΔC) is the simple Euclidean (geometric) distance on the a*b* plane. It is familiar but not accurate.

CIExy1931_MacAdam_488W

Starting with Imatest 4.2, Imatest’s two-dimensional chroma displays— CIELAB a*b*, CIE 1931 xy chromaticity, CIE u’v’ chromaticity, Vectorscope, and CbCr— can display ellipses that assist in visualizing perceptual color differences.

These ellipses were developed from the MacAdam ellipses, shown on the right.

MacAdam ellipses shown in the CIE 1931 chromaticity diagram, magnified 10X (from Wikipedia)

The MacAdam ellipses were developed from a set of experiments performed at the University of Rochester in 1942, in which an observer tried to match pairs of colors, one fixed and one variable. The ellipse parameters are based on statistical variations in the matching, which are closely related to Just Noticeable Differences (JND). Twenty-five colors (whose xy values are shown as • in the illustration) were used.

Almost all images of MacAdam ellipses show the ellipses for the original 25 colors. In Imatest we use a sophisticated interpolation routine to determine the ellipse parameters for color test charts. This is quite reliable, since the gamut of the original color set extends well beyond the gamut of the widely-used sRGB color space (the standard of Windows and the internet), as well as most other color spaces used in imaging. At least 10 of the original colors are outside the sRGB gamut. sRGB is used for all examples in this post.

The ellipses are a visual indicator of the magnitude of perceived color (chroma) difference. The longer the ellipse axis, the greater the distance on the a*b* plane for a given color difference.

Here are the MacAdam ellipses for the X-Rite Colorchecker, in xy (from xyY) and u’v’ (from Lu’v’) representations, displayed in Multicharts. u’v’ is supposed to be more perceptually uniform than xy, but that is not evident inside the sRGB gamut. This may be one reason why u’v’ hasn’t gained much traction in the imaging industry.

xy_macadam_colorchecker
xy Colorchecker MacAdam ellipses, magnified 10X
uv_macadam_colorchecker
u’v’ Colorchecker MacAdam ellipses, magnified 10X

The MacAdam ellipses are not widely used in imaging (though they seem to have traction in LED lighting). Instead, color differences based on the CIELAB (L*a*b*) color space, which was developed in 1976 with the intent of being much more perceptually uniform than xyY, are used. Color differences are usually presented as ΔEab, ΔE94, or ΔE00. In all ΔE (color difference) equations, the luminance (L*) term can be easily removed, leaving chroma differences ΔC, which can be displayed in two dimensions.

Color difference ellipses in Imatest

All Imatest modules that can produce two-dimensional color difference plots can display color difference ellipses. Colorcheck, SFRplus, and eSFR ISO display them in the a*b* plot. Multicharts and Multitest display them in a*b*, xy, u’v’, Vectorscope, and CbCr plots. In addition to MacAdam ellipses, which are not generally used for color difference measurements (but are interesting for comparison), the following color difference metrics are presented. The illustrations below are for a Colorchecker (shown above) analyzed in Multicharts.

ΔCab (plain ΔC – 1976)

ΔCab is the simple geometric distance in the a*b* plane of CIELAB (L*a*b*) color space. When CIELAB was designed, the intent was that ΔC = 1 would correspond to one Just Noticeable Difference (JND). This may hold for colors with very low chroma ((a*² + b*²)^(1/2)), but it fails badly as chroma increases, which is why ΔE94 and ΔE00 were developed. ΔEab and ΔCab are familiar and often mentioned in the imaging literature, but they should never be used for important measurements.
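As a worked example, ΔCab for a single patch is just the Euclidean distance between the reference and measured a*b* coordinates. The patch colors below are made up for illustration:

```python
import numpy as np

def delta_c_ab(lab_ref, lab_meas):
    """Plain 1976 chroma difference: Euclidean distance on the a*b* plane.
    Inputs are (L*, a*, b*) triples; L* is ignored for a chroma-only metric."""
    da = lab_meas[1] - lab_ref[1]
    db = lab_meas[2] - lab_ref[2]
    return np.hypot(da, db)

# Illustrative reference vs. measured patch colors (not real chart data):
# Δa* = 3, Δb* = -4, so ΔCab = sqrt(3² + 4²) = 5.
dc = delta_c_ab((52.0, 20.0, -30.0), (50.0, 23.0, -34.0))
```

Note that the lightness values differ but do not affect the result, which is exactly what distinguishes ΔC from ΔE.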

ab_DeltaCab_colorchecker
a*b* Colorchecker ΔCab ellipses (circles!), magnified 4X
xy_DeltaCab_colorchecker
xy Colorchecker ΔCab ellipses, magnified 4X, zoomed in

ΔC ellipses are always shown magnified 4X (10X would be too large for clear display). We show the xy plot zoomed in so more detail is visible.

ΔC94 (ΔC 1994)

ΔE94 was developed to compensate for the deficiencies of ΔEab. Circles become ellipses with their major axes aligned with the radius from the origin (a* = b* = 0). It is much more accurate than ΔEab, especially for strongly chromatic colors.
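For comparison, here is a sketch of the chroma-only CIE 1994 distance, with the lightness term dropped and graphic-arts weights (kC = kH = 1) assumed. This follows the published formula, not Imatest's internal code, and the patch colors are made up:

```python
import numpy as np

def delta_c94(lab_ref, lab_meas):
    """CIE 1994 color difference with the lightness term dropped, leaving a
    chroma-only distance. Graphic-arts weights (kC = kH = 1) are assumed;
    this is a sketch of the standard formula, not Imatest's implementation."""
    _, a1, b1 = lab_ref
    _, a2, b2 = lab_meas
    c1, c2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dc = c1 - c2
    # Hue-difference term, clamped against tiny negative rounding errors.
    dh_sq = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dc ** 2, 0.0)
    sc, sh = 1 + 0.045 * c1, 1 + 0.015 * c1          # chroma-dependent weights
    return np.sqrt((dc / sc) ** 2 + dh_sq / sh ** 2)

# Pure chroma shift (measured color is 1.1x the reference a*b* vector),
# so the hue term vanishes and the chroma term is down-weighted by SC.
d94 = delta_c94((50.0, 40.0, 30.0), (50.0, 44.0, 33.0))
```

The SC and SH divisors grow with reference chroma, which is what turns the ΔCab circles into radially aligned ellipses.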

ab_DeltaC94_colorchecker
a*b* Colorchecker ΔC94 ellipses, magnified 4X
xy_DeltaC94_colorchecker
xy Colorchecker ΔC94 ellipses, magnified 4X, zoomed in

ΔC00 (ΔC 2000)

ΔE00 was developed as a further refinement of ΔE94. It has an extremely complex equation. Though not perfect, it is the best of the current color difference formulas, and it is recommended when color differences need to be quantified. It is quite close to ΔE94, except for some blue colors. Until we added these ellipses to Imatest, there was no convenient way to visually compare the different ΔE measurements!

ab_DeltaC00_colorchecker
a*b* Colorchecker ΔC00 ellipses, magnified 4X
xy_DeltaC00_colorchecker
xy Colorchecker ΔC00 ellipses, magnified 4X, zoomed in

Displaying the ellipses in Imatest

In Imatest 4.2 the ellipses are not displayed by default. We may change this in a later version. The table below shows how to control ellipse display.

Multicharts and Multitest: These modules share ini file settings. Ellipse settings for both are set in the Color ellipses dropdown menu in Multicharts. Standard ellipse magnification (10X for MacAdam ellipses, 4X for Delta-C ellipses) is recommended.

multicharts_ellipse_selection

Colorcheck: The selection is in the middle-left of the Colorcheck settings window.

colorcheck_ellipse_selection

SFRplus and eSFR ISO: In (Rescharts) SFRplus and eSFR ISO, ellipse settings are located in the Ellipses dropdown menu, displayed to the right of the bottom of the a*b* plot when it is selected.

sfrplus_ellipse_selection

The post Color difference ellipses appeared first on imatest.

]]>
http://www.imatest.com/2015/09/color-difference-ellipses/feed/ 0
Measuring Multiburst pattern MTF with Stepchart http://www.imatest.com/2015/04/multiburst-stepchart/ http://www.imatest.com/2015/04/multiburst-stepchart/#respond Wed, 01 Apr 2015 22:05:15 +0000 http://www.imatest.com/?p=11138 Measuring MTF is not a typical application for Stepchart— certainly not its primary function— but it can be useful with multiburst patterns, which are a legacy from analog imaging that occasionally appear in the digital world. The multiburst pattern is not one of Imatest’s preferred methods for measuring MTF: see the MTF Measurement Matrix for […]

The post Measuring Multiburst pattern MTF with Stepchart appeared first on imatest.

]]>
Measuring MTF is not a typical application for Stepchart— certainly not its primary function— but it can be useful with multiburst patterns, which are a legacy from analog imaging that occasionally appear in the digital world. The multiburst pattern is not one of Imatest’s preferred methods for measuring MTF: see the MTF Measurement Matrix for a concise list. But sometimes customers need to analyze them. This feature is available starting with Imatest 4.1.3 (March 2015).

Here is a crop of a multiburst pattern, generated by a Tektronix device:

multiburst_crop
Crop of Multiburst pattern: click on image for full-size pattern that you can download.

Running Stepchart with the multiburst pattern

Open Stepchart, then read the multiburst image. Make a rough region selection for the multiburst pattern— it will be refined as shown below. Then press Yes (not Express mode) to bring up the Stepchart settings window.

stepchart_multiburst_settingsStepchart settings for Multiburst MTF measurement

The key settings are circled in red. Automatic (Zone detection) should be unchecked and 6 patches should be selected using the slider (for this pattern; some multiburst patterns have as few as 5 zones). Under Results (Fig. 1, Lower plot), select "4. Noise uncorrected for patch nonuniformity, norml. to 1 in patch 1". Only "Plot 1. Pixels, noise" is relevant to the multiburst image. When settings are complete, click OK. If this is the first run for this size of image, the fine ROI adjustment window will appear.

multiburst_stepchart_fineadjFine ROI adjustment window for Multiburst pattern in Stepchart. May be enlarged.

Results

The standard deviation of the pixels in each patch, which is normally used to measure noise, is proportional to the MTF of the pattern in the patch (as long as it is above the noise). To obtain an accurate result, the patch nonuniformity correction is turned off (by setting Results (Fig. 1, Lower plot) to "4. Noise uncorrected…", as indicated above).
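The std-to-MTF relationship can be demonstrated on simulated burst patches; for a sine of amplitude A over whole cycles, the pixel standard deviation is A/√2, so normalizing by the first patch recovers relative MTF. Amplitudes, frequencies, and noise level below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 256, endpoint=False)   # one patch row, whole cycles

# Simulate multiburst patches: sine bursts whose amplitude (i.e. relative
# MTF) falls off with frequency, plus a little noise.
true_mtf = [1.0, 0.8, 0.5, 0.2]
patches = [128 + 100 * m * np.sin(2 * np.pi * (8 + 8 * i) * t)
           + rng.normal(0, 1.0, t.size)
           for i, m in enumerate(true_mtf)]

# The std of each patch is proportional to its sine amplitude, so
# normalizing by the first (lowest-frequency) patch estimates the MTF.
stds = np.array([p.std() for p in patches])
est_mtf = stds / stds[0]
```

The estimate tracks the true amplitudes closely as long as the sine amplitude stays well above the noise floor, which is exactly the caveat noted in the text.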

The key results are in the first Stepchart figure, most importantly in the lower plot, which contains the MTF for the patches.

stepchart_multiburst_resultsStepchart Multiburst results: MTF is in lower plot

There are several ways we could enhance this calculation, but we will only do so if there is sufficient interest.

  • The standard deviation method is very tolerant of image misalignment, but it is quite sensitive to noise. There are more optimal techniques, but they require that the pattern be specified as either vertically or horizontally oriented and that the image be carefully aligned.
  • The spatial frequency of the sine pattern in the patches is not yet calculated, because it is usually specified by the instrumentation used to generate the multiburst pattern. It would require some effort to add it.

The post Measuring Multiburst pattern MTF with Stepchart appeared first on imatest.

]]>
http://www.imatest.com/2015/04/multiburst-stepchart/feed/ 0