Correcting Misleading Image Quality Measurements

We discuss several common image quality measurements that are often misinterpreted, so that bad images are falsely interpreted as good, and we describe how to obtain valid measurements.

Sharpness, which is measured by MTF (Modulation Transfer Function) curves, is frequently summarized by MTF50 (the spatial frequency where MTF falls to half its low frequency value).


Describing and Sampling the LED Flicker Signal

High-frequency flickering light sources such as pulse-width modulated LEDs can cause image sensors to record incorrect levels. We describe a model with a loose set of assumptions (encompassing multi-exposure HDR schemes) which can be used to define the Flicker Signal, a continuous function of time based on the phase relationship between the light source and exposure window.


Validation Methods for Geometric Camera Calibration

Camera-based advanced driver-assistance systems (ADAS) require the mapping from image coordinates into world coordinates to be known. The process of computing that mapping is geometric calibration. This paper provides a series of tests that may be used to assess the quality of the geometric calibration.


Measuring camera Shannon information capacity with a Siemens star image

Shannon information capacity, which can be expressed as bits per pixel or megabits per image, is an excellent figure of merit for predicting camera performance for a variety of machine vision applications, including medical and automotive imaging systems.


Verification of Long-Range MTF Testing Through Intermediary Optics

Measuring the MTF of an imaging system at its operational working distance is useful for understanding the system’s use case performance.


Reducing the cross-lab variation of image quality metrics

Abstract

As imaging test labs seek to obtain objective performance scores of camera systems, many factors can skew the results.

Automating CPIQ Analysis Using Imatest IT and Python

The Python interface to Imatest IT provides a simple means of invoking Imatest’s tests. This post shows how Imatest runs can be automated and how the results of those tests can be collected and processed. For this example, we run five Imatest modules across three light levels, then extract CPIQ quality-loss metrics.

Development Setup
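
The code snippets in this post share a small amount of setup: the Imatest IT Python library plus a handful of module-level variables (images_dir, root_dir, ini_file, tempINI, op_mode) that the processing functions reference as globals. A minimal sketch of that setup is shown below; the import path and operating-mode constant follow Imatest's published IT examples but may differ with your installation, and the paths are placeholders you should adjust to your own environment.

import os
import json

# Imatest IT Python library; this import path follows Imatest's published
# examples but may vary with your installation.
from imatest.it import ImatestLibrary

# Paths and settings referenced as globals by the functions below.
# These values are placeholders -- adjust them to your environment.
images_dir = r'C:\CPIQ_images'                 # one subfolder per device (e.g. DeviceX)
root_dir   = images_dir                        # where Imatest writes its Results folders
ini_file   = os.path.join(images_dir, 'imatest-v2.ini')   # master INI with analysis settings
tempINI    = os.path.join(images_dir, 'temp.ini')         # scratch INI for on-the-fly ROI changes
op_mode    = ImatestLibrary.OP_MODE_SEPARATE   # operating mode passed to the *_json calls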

Image Collection

For each device, we capture the following set of images of different test charts at different light levels (note: full details will be available with the publication of the standard in March of 2017); a small helper for verifying that a capture set is complete is sketched after the list:

  • DeviceX_eSFR_5000KLED_1000lux.JPG
  • DeviceX_eSFR_TL84_100lux.JPG
  • DeviceX_eSFR_Tung_10lux.JPG
  • DeviceX_Dot_5000KLED_1000lux.JPG
  • DeviceX_Dot_TL84_100lux.JPG
  • DeviceX_Dot_Tung_10lux.JPG
  • DeviceX_SG_5000KLED_1000lux.JPG
  • DeviceX_SG_TL84_100lux.JPG
  • DeviceX_SG_Tung_10lux.JPG
  • DeviceX_Coins_5000KLED_1000lux.JPG
  • DeviceX_Coins_TL84_100lux.JPG
  • DeviceX_Coins_Tung_10lux.JPG
  • DeviceX_Unif_100lux.JPG
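
Because every capture follows the same <Device>_<Chart>_<LightCondition>.JPG naming pattern, it is easy to verify a device folder before starting a long batch run. The helper below is a hypothetical convenience, not part of the original script; the chart and light-level names match the list above.

def expected_images(device,
                    charts=('eSFR', 'Dot', 'SG', 'Coins'),
                    lights=('5000KLED_1000lux', 'TL84_100lux', 'Tung_10lux')):
    # Build the list of filenames the batch run expects for one device.
    names = [device + '_' + chart + '_' + light + '.JPG'
             for chart in charts for light in lights]
    names.append(device + '_Unif_100lux.JPG')   # single uniformity capture
    return names

def check_capture_set(device):
    # Report any expected image that is missing from the device folder.
    folder = os.path.join(images_dir, device)
    missing = [name for name in expected_images(device)
               if not os.path.exists(os.path.join(folder, name))]
    if missing:
        print('Missing captures for ' + device + ': ' + ', '.join(missing))
    return not missing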

Running All The Tests

Once we have these images collected in a folder named “DeviceX”, we define our lighting conditions, initialize the Imatest IT library and run our processing function on this and a number of other devices:

led5k = "5000KLED_1000lux"               # Light Levels
tl84 = "TL84_100lux"
tung = "Tung_10lux"
light_conditions = [led5k, tl84, tung]
imatest = ImatestLibrary()
phones = ['DeviceX',                     # Just for example...
          'DeviceY',
          'DeviceZ']
for phone in phones:
    calc_phone(phone)

imatest.terminate_library()

The main processing script invokes the five modules, collects their data outputs and saves them to a JSON file:

def calc_phone(phone):
    print('Calculating Metrics for '+phone+ ' *****************************')
    uniformity_results = uniformity(phone) # Uniformity
    multitest_results  = multitest(phone) # Multitest
    dotpattern_results = dotpattern(phone) # Dot Pattern
    esfriso_results    = esfriso(phone) # eSFR ISO
    random_results     = random(phone) # Random

    # Output data to disk
    file_name = os.path.join(images_dir, phone, phone + '.json')
    with open(file_name, 'w') as outfile:
        full_data = {'multitest'  : multitest_results,
                     'dotpattern' : dotpattern_results,
                     'esfriso' : esfriso_results,
                     'random' : random_results,
                     'uniformity' : uniformity_results}
        json.dump(full_data, outfile, indent=4, separators=(',',': '))

    return full_data

We call the Imatest eSFR ISO module for each specified lighting condition:

# eSFR ISO (ISO 12233:2014) is used to test Visual Noise (VN) 
# and Spatial Frequency Response (SFR)
def esfriso(base):
    global images_dir, ini_file, light_conditions, op_mode
    output = {}
    for light_source in light_conditions:
        print('Testing ' + base + ' eSFR ISO at ' + light_source)
        input_file = os.path.join(images_dir, base, base+'_eSFR_'+light_source+'.jpg')
        if not os.path.exists(input_file):
            raise Exception('File ' + input_file + ' not found!')
        print('opening ' + input_file)
        result = imatest.esfriso_json(input_file=input_file, root_dir=root_dir,
                                      op_mode=op_mode, ini_file=ini_file)
        data = json.loads(result)
        output[light_source] = data['esfrisoResults']

    return output

On-the-fly INI file changes

The most difficult part of this processing is manually updating region selections for the calls to Multitest, which does not yet have automatic region detection for the Colorchecker SG target (coming soon…).

In this case, we are replacing the Colorchecker SG section’s roi, nwid_save, and nht_save keys with the values stored in this multicharts-rois.json file:

{
  "DeviceX_SG_5000KLED_1000lux.JPG": "468 417 2122 1578 468 1588 2102 397",
  "DeviceX_SG_TL84_100lux.JPG": "482 376 2142 1571 482 1561 2157 376",
  "DeviceX_SG_Tung_10lux.JPG": "386 336 2016 1492 401 1497 2021 321",
  "width": "2592",
  "height": "1936"
}

We use this script to modify the INI file on the fly, inserting different settings for each image:

import os
import logging
import ConfigParser    # Python 2; on Python 3 this module is named configparser

log = logging.getLogger(__name__)

def get_config(path): # Used to make ConfigParser case-sensitive
    config = ConfigParser.ConfigParser()
    config.optionxform = str
    try:
        config.read(os.path.expanduser(path))
        return config
    except Exception, e:
        log.error(e)

def setROI(roi, width, height):
    global ini_file
    imatestini = get_config(ini_file)

    imatestini.set('ccsg', 'roi', roi)
    imatestini.set('ccsg', 'nwid_save', width)
    imatestini.set('ccsg', 'nht_save', height)
    with open(tempINI, 'w') as outfile:
        imatestini.write(outfile)
    return

These functions are then called using the values loaded from the JSON file inside our multitest function:

    roi_overrides = {}
    roi_overrides_lower = {}
    roi_override_file = os.path.join(images_dir, base, 'multicharts-rois.json')
    if os.path.exists(roi_override_file):
        with open(roi_override_file, 'r') as fh: # read ROI override file
            roi_overrides = json.load(fh)
        for override, rois in roi_overrides.iteritems(): # convert keys to lowercase
            roi_overrides_lower[override.lower()] = rois
    else:
        raise Exception('ROI override file ' + roi_override_file + ' not found.')

    if image_file.lower() in roi_overrides_lower:
        setROI(roi_overrides_lower[image_file.lower()], roi_overrides['width'], roi_overrides['height'])
        selectedINI = tempINI
    else:
        raise Exception('Missing ROI override for ' + image_file)
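
With selectedINI pointing at the temporary INI that now carries the correct SG region, the remainder of the function proceeds just like the eSFR ISO call above. A condensed sketch is shown below; the multitest_json entry point and the 'multitestResults' key are assumed here by analogy with esfriso_json, so check the Imatest IT documentation for the exact names in your version.

    # Condensed sketch of the rest of the per-image processing: run Multitest
    # against the temporary INI selected above and keep the parsed results.
    # multitest_json and 'multitestResults' are assumed by analogy with
    # esfriso_json; verify the exact entry point and result key in your docs.
    result = imatest.multitest_json(input_file=os.path.join(images_dir, base, image_file),
                                    root_dir=root_dir, op_mode=op_mode, ini_file=selectedINI)
    data = json.loads(result)
    output[light_source] = data['multitestResults']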

Extracting Scores

Finally, we extract the Key Performance Indicators (KPIs), which are all expressed as perceptual quality loss (QL) in units of just-noticeable differences (JND). That script is a bit long for this post, so view the source on GitHub if you are interested.
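
As a flavor of what that script does, here is a hypothetical fragment that reads one metric back out of the per-device JSON written by calc_phone. The 'visual_noise_ql' key is a placeholder rather than a real Imatest field name; the actual result keys inside each module's output depend on the Imatest version, so consult the saved JSON (or the GitHub source) for the real paths.

# Illustrative only: load the results written by calc_phone() and pull one
# quality-loss value per light level. 'visual_noise_ql' is a placeholder key;
# inspect the saved JSON for the actual field names in your Imatest version.
def extract_metric(phone, module='esfriso', key='visual_noise_ql'):
    file_name = os.path.join(images_dir, phone, phone + '.json')
    with open(file_name, 'r') as fh:
        full_data = json.load(fh)
    metric = {}
    for light_source, results in full_data[module].items():
        metric[light_source] = results.get(key)   # None if the key is absent
    return metric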

Here is the resulting set of calculated metrics in JSON format:

{
    "DeviceX": {
        "CL_QL":  {"5000KLED_1000lux": 0.9142,
                   "TL84_100lux": 0.1803,
                   "Tung_10lux": 2.565},
        "CU_QL":  0.07498,
        "LCD_QL": 0,
        "LGD_QL": 0,
        "SFR_QL": {"5000KLED_1000lux": 0.0,
                   "TL84_100lux": 0.1969,
                   "Tung_10lux": 0.3946},
        "TB_QL":  {"5000KLED_1000lux": 1.2,
                   "TL84_100lux": 6.25,
                   "Tung_10lux": 9.1},
        "VN_QL":  {"5000KLED_1000lux": 1.113,
                   "TL84_100lux": 2.409,
                   "Tung_10lux": 8.531},
        "Combined_QL": 11.0245
    }
}

Once test procedures and combined score calculations are finalized by the IEEE Conformity Assessment Program, these outputs will be combined into a simple camera phone rating score.

The full Python source code for this example is posted on GitHub.

To add these automation capabilities to your existing Imatest Master license, upgrade to Imatest Ultimate. You can also request a free 30-day trial of Imatest IT to see how this works for you before you purchase.

 

Related Webinars

Join us March 1 for Automating Lab and Manufacturing Processes: Defining image quality parameters on automated test equipment. Register now
