Machine Vision Technology
Principle of Image Processing
Pattern search is a tool that scans the incoming image for a pattern stored in the system (the reference pattern). The XY position, angle, and correlation value (% match) of the detected pattern are obtained and output.
Conventional image processing is performed in units of 1 pixel, while the sub-pixel processing method performs position detection in units as fine as 0.001 pixels. This enables high-accuracy position detection, expanding the application range to precise part location and dimension measurement.
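One common way to reach sub-pixel resolution is to fit a parabola through the best integer-pixel correlation score and its two neighbours; the vertex of the parabola estimates the true peak position between pixels. This is a minimal sketch of that idea, not the system's actual algorithm, and the score values below are made-up illustration data.

```python
def subpixel_peak(scores):
    """Refine the index of the highest score to sub-pixel precision
    by parabolic interpolation over the peak and its two neighbours."""
    i = scores.index(max(scores))          # best integer position
    if i == 0 or i == len(scores) - 1:
        return float(i)                    # no neighbours to interpolate
    y0, y1, y2 = scores[i - 1], scores[i], scores[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return float(i)                    # flat top: keep integer position
    offset = 0.5 * (y0 - y2) / denom       # vertex of the fitted parabola
    return i + offset

# Correlation scores at integer offsets (illustration values only):
# the true peak lies between offsets 2 and 3.
print(subpixel_peak([0.10, 0.40, 0.90, 0.95, 0.30]))
```

Because the interpolated offset is a fraction of a pixel, a search that steps in whole pixels can still report the match position with sub-pixel precision.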
Normalized correlation method
Accurate pattern matching without being affected by changes in brightness.
The gray scale pattern matching method recognizes each pixel of the reference image pattern as one of 256 levels of gray, and it compares this data with the information of the image on the screen to detect the position. However, accurate position detection with this method is sometimes difficult, because the absolute gray scale values are easily affected by variations in ambient light.
The normalized correlation method allows for stable pattern matching without being affected by ambient light. The average brightness of the whole image is subtracted from the brightness (gray scale data) of each pixel, for both the reference image and the input image. This step is called normalization, and it eliminates the difference in overall brightness between the two images. The position where the normalized reference and input patterns best match (i.e., the highest correlation) is then found, accurately locating the target pattern in the image.
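The normalization and matching steps above can be sketched in a few lines. For readability this example works on 1-D gray-scale data rather than a 2-D image (the idea is identical), and the template and scene values are made-up illustration data. Note that the scene contains the template pattern shifted 10 brightness counts higher, yet the score at the match position is still 1.0, because subtracting each window's mean cancels the brightness offset.

```python
import math

def ncc(a, b):
    """Normalized correlation of two equal-length pixel lists (-1..1)."""
    ma = sum(a) / len(a)                   # mean brightness of a
    mb = sum(b) / len(b)                   # mean brightness of b
    da = [x - ma for x in a]               # normalization: subtract the mean
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

template = [50, 200, 50]
scene    = [10, 10, 60, 210, 60, 10]       # same pattern, 10 counts brighter

# Slide the template across the scene; the highest score marks the match.
scores = [ncc(template, scene[i:i + len(template)])
          for i in range(len(scene) - len(template) + 1)]
best = scores.index(max(scores))
print(best, round(max(scores), 3))         # → 2 1.0
```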
Principle of edge detection
By setting the edge detection window on the image screen, you can locate the section where the brightness changes within the image and recognize it as an edge. This method is effective for detecting the absolute coordinates of an edge or for dimensional inspection of workpieces.
An "edge" is essentially the boundary between the bright and dark areas appearing in the image. Within an image, an edge will be placed over any area where the change in contrast exceeds the contrast limits previously set by the user. The following three-step process is used to detect an edge:
Apply Projection Processing
Apply projection processing to the image within the measurement area. Projection processing means scanning the image in the direction perpendicular to the predetermined direction of detection, and then obtaining the average intensity of each projection line. The waveform that is formed from the average intensity of the projection lines is called a projection waveform.
Differentiate the Projection Waveform
Differentiation is performed on the projection waveform. The portions of the waveform with the greatest changes in intensity have the largest differential values.
Correct the Differentiation Waveform
To stabilize the edge detection for actual production lines, correct the differentiation waveform so that the maximum absolute value of the differentiation is always 100%. Then, points on the differentiation waveform that exceed the specified "edge sensitivity (%)" are considered detected edges. Since the edges are detected based on relative change in intensity rather than the absolute value of intensity, edge detection is possible on actual production lines where illumination changes frequently.
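The three steps above can be sketched on a small synthetic image (a list of pixel rows with a bright region on the left and a dark region on the right). The image values and the 50% edge sensitivity threshold are assumed example figures, not values from any real system.

```python
image = [
    [200, 200, 198, 60, 58, 60],
    [202, 199, 200, 61, 60, 59],
    [201, 200, 201, 59, 60, 61],
]

# 1. Projection: average each column, i.e. scan perpendicular to the
#    detection direction and take the mean intensity per projection line.
rows = len(image)
projection = [sum(row[x] for row in image) / rows
              for x in range(len(image[0]))]

# 2. Differentiation: change in intensity between neighbouring columns.
diff = [projection[x + 1] - projection[x]
        for x in range(len(projection) - 1)]

# 3. Correction: scale so the maximum absolute value is 100%, then keep
#    the positions whose relative change exceeds the edge sensitivity.
peak = max(abs(d) for d in diff)
percent = [100 * d / peak for d in diff]
sensitivity = 50                           # assumed example threshold (%)
edges = [x for x, p in enumerate(percent) if abs(p) >= sensitivity]
print(edges)                               # → [2]
```

Because step 3 rescales the waveform before thresholding, the same edge is found even if the whole image gets brighter or darker, which is why the method tolerates changing illumination on production lines.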
Labeling is the operation of converting a captured image into a binary image and then recognizing adjoined pixels of the same color as a cluster. After labeling, each pixel cluster is called a "label".
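A minimal sketch of labeling, assuming 4-connectivity (pixels adjoined above, below, left, and right) and a simple flood fill: binarize the image, then give every connected cluster of white pixels its own number. The image values and the binarization threshold of 128 are example figures.

```python
image = [
    [200, 200,  10,  10],
    [ 10, 200,  10, 200],
    [ 10,  10,  10, 200],
]
threshold = 128                            # assumed example threshold
binary = [[1 if v >= threshold else 0 for v in row] for row in image]

h, w = len(binary), len(binary[0])
labels = [[0] * w for _ in range(h)]       # 0 means "not labeled yet"
next_label = 0

for sy in range(h):
    for sx in range(w):
        if binary[sy][sx] == 1 and labels[sy][sx] == 0:
            next_label += 1                # new cluster found
            stack = [(sy, sx)]
            while stack:                   # flood fill the whole cluster
                y, x = stack.pop()
                if 0 <= y < h and 0 <= x < w \
                        and binary[y][x] == 1 and labels[y][x] == 0:
                    labels[y][x] = next_label
                    stack += [(y - 1, x), (y + 1, x),
                              (y, x - 1), (y, x + 1)]

print(next_label)                          # → 2 (two white clusters)
```

Once every cluster carries a number, per-label features such as area or center of gravity can be computed by grouping pixels with the same label.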