Machine Vision Systems for All Applications


Machine Vision Technology


Principle of Image Processing

Pattern Search

Pattern search is a tool that scans the incoming image for a pattern previously stored in the system (the reference pattern). The XY position, angle, and correlation value (% match) of the detected pattern are obtained and output.

This section explains the Pattern Search algorithm used in the CV Series.

Sub-pixel processing

Conventional image processing operates in units of 1 pixel, whereas the sub-pixel processing method detects position in units as fine as 0.001 pixels. This enables high-accuracy position detection, expanding the application range to precise part location and dimension measurement.
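One common way to achieve sub-pixel resolution (the CV Series' exact method is not published, so this is an illustrative sketch) is to fit a parabola through the integer-pixel correlation peak and its two neighbors, then take the parabola's vertex as the refined position:

```python
import numpy as np

def subpixel_peak(scores):
    """Refine the integer peak of a 1-D correlation curve to sub-pixel
    accuracy by fitting a parabola through the peak and its neighbors.
    (Illustrative technique, not necessarily the CV Series algorithm.)"""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak at the boundary: no neighbors to fit
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)  # flat top: keep the integer position
    # Vertex of the parabola through the three samples
    return i + 0.5 * (y0 - y2) / denom

# Correlation values whose true peak lies between indices 2 and 3
scores = np.array([0.1, 0.4, 0.9, 0.8, 0.3])
pos = subpixel_peak(scores)  # fractional position between 2 and 3
```

The same one-dimensional refinement is applied independently along X and Y around the best integer match.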



Normalized correlation method

Accurate pattern matching without being affected by changes in brightness.

The gray-scale pattern matching method represents each pixel of the reference image pattern as one of 256 levels of gray and compares this data with the image on the screen to detect the position. With this method, however, accurate position detection is sometimes difficult because the absolute gray-scale values are easily affected by variations in ambient light.

The normalized correlation method allows stable pattern matching that is unaffected by ambient light. As the following pictures show, the average brightness of the whole image is subtracted from the brightness (gray-scale value) of each pixel, for both the reference image and the input image. This step, called normalization, eliminates the overall brightness difference between the two images. The target is then located at the position where the reference and input patterns match best (i.e., highest correlation), so the position of the target pattern in the image is detected accurately.
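The normalization step above can be sketched in a few lines: subtract each image's mean brightness, then compute the correlation of the zero-mean data. Because the mean is removed, adding a constant brightness offset (e.g., brighter ambient light) does not change the score:

```python
import numpy as np

def normalized_correlation(template, patch):
    """Zero-mean normalized correlation between two equally sized
    gray-scale arrays; returns a score in [-1, 1], where 1 means a
    perfect pattern match regardless of overall brightness."""
    t = template.astype(float) - template.mean()  # normalization
    p = patch.astype(float) - patch.mean()        # normalization
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    if denom == 0:
        return 0.0  # a completely flat image has no pattern to match
    return float((t * p).sum() / denom)

template = np.array([[10, 200],
                     [200, 10]])
brighter = template + 50  # same pattern under stronger ambient light
score = normalized_correlation(template, brighter)  # still a perfect match
```

In a full pattern search, this score is evaluated at each candidate position in the input image, and the position with the highest correlation is reported as the match.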



Edge detection

By setting the edge detection window on the image screen, you can locate the section where the brightness changes within the image and recognize it as an edge. This method is effective for detecting the absolute coordinate of an edge or for dimensional inspection of workpieces.



Principle of edge detection

An "edge" is essentially the boundary between bright and dark areas in the image. Within an image, an edge is detected wherever the change in contrast exceeds the limits previously set by the user. The following three-step process is used to detect an edge:

Apply Projection Processing

Apply projection processing to the image within the measurement area. Projection processing scans the image in the direction perpendicular to the predetermined direction of detection and obtains the average intensity along each projection line. The waveform formed from these average intensities is called a projection waveform.
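For a horizontal detection direction, the projection lines are the image columns, so the projection waveform is simply the column-wise average intensity. A minimal sketch:

```python
import numpy as np

def projection_waveform(image):
    """Average intensity of each column (projection line) for a
    horizontal detection direction; returns one value per column."""
    return image.astype(float).mean(axis=0)

# 4x6 test image: left half black (0), right half white (255)
img = np.zeros((4, 6))
img[:, 3:] = 255
wave = projection_waveform(img)  # one average per column
```

Averaging along each projection line suppresses per-pixel noise, which is why the later differentiation step operates on this waveform rather than on raw pixel values.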



Perform Differentiation

Differentiation is performed on the projection waveform. The portions of the waveform with the greatest changes in intensity have the largest differential value.

What is differentiation?

Differentiation determines the amount of change in intensity according to the 0-255 gray scale. This allows the edge to be detected based on the relative change instead of the absolute value of the intensity.

Example:
The differentiation result for a portion of the image with no change in intensity is 0. The result for a transition from white (255) to black (0) is -255.
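On a sampled waveform, this differentiation is just the difference between adjacent values. The example above can be reproduced directly:

```python
import numpy as np

# Projection waveform of a white-to-black transition
wave = np.array([255.0, 255.0, 255.0, 0.0, 0.0])

# First difference as a discrete derivative: flat regions give 0,
# the white-to-black step gives -255
diff = np.diff(wave)
```

A bright-to-dark edge therefore appears as a large negative spike in the differential waveform, and a dark-to-bright edge as a large positive one.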

Correct the Differentiation Waveform

To stabilize the edge detection for actual production lines, correct the differentiation waveform so that the maximum absolute value of the differentiation is always 100%. Then, points on the differentiation waveform that exceed the specified "edge sensitivity (%)" are considered detected edges. Since the edges are detected based on relative change in intensity rather than the absolute value of intensity, edge detection is possible on actual production lines where illumination changes frequently.
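The correction and thresholding described above can be sketched as follows (the function and parameter names are illustrative, not the CV Series API): scale the differential waveform so its largest magnitude becomes 100%, then report every position whose corrected value exceeds the edge sensitivity.

```python
import numpy as np

def detect_edges(diff, sensitivity=50.0):
    """Scale the differential waveform so its peak magnitude is 100 %,
    then return the indices whose corrected magnitude meets the edge
    sensitivity (%). Illustrative sketch of the described correction."""
    peak = np.abs(diff).max()
    if peak == 0:
        return np.array([], dtype=int)  # flat waveform: no edges
    corrected = 100.0 * diff / peak     # peak is now exactly 100 %
    return np.where(np.abs(corrected) >= sensitivity)[0]

# Differential waveform with one strong and one moderate transition
diff = np.array([0.0, 5.0, -120.0, 10.0, 60.0])
edges = detect_edges(diff, sensitivity=50.0)
```

Because the threshold is applied to the corrected (relative) waveform, the same sensitivity setting keeps working when overall illumination, and hence the absolute differential values, drift on the production line.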

Labeling processing

Labeling is the operation of converting a captured image into a binary image and then recognizing adjoining pixels of the same color as one cluster. After labeling, each pixel cluster is called a "label".
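A minimal sketch of labeling with 4-connectivity (pixels adjoining up, down, left, or right form one cluster), using a breadth-first flood fill over a binary image:

```python
from collections import deque

def label_image(binary):
    """Assign a label number to each 4-connected cluster of foreground
    pixels in a binary image (list of 0/1 rows). Returns the label map
    and the number of labels found."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]  # 0 = background / unlabeled
    count = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not labels[sy][sx]:
                count += 1                      # start a new label
                labels[sy][sx] = count
                queue = deque([(sy, sx)])
                while queue:                    # flood-fill the cluster
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

# Two separate foreground clusters in a binarized image
img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1]]
labels, count = label_image(img)
```

Once each cluster has its own label, per-label features such as area, centroid, or bounding box can be measured independently for counting and sorting applications.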



Produced by KEYENCE