THE MACHINE VISION GLOSSARY
0-9
1D Vision System
2D Vision System
2D (or 2-D) Vision Systems are the most common vision systems on the market. 2D cameras are commonly referred to as "area scan" cameras. These cameras produce a standard image, such as one taken by a cell phone or a webcam.
2D vision systems are used for most inspections. They are commonly found in industrial automation and autonomous guidance applications.
2D Profiler
3D Vision System
A 3D Vision System can perceive three dimensions. In addition to capturing a 2D image like an area scan camera, a 3D vision system can also perceive depth. 3D vision systems usually rely on stereo vision (multiple cameras) or structured light.
A
Aberration
Aperture
Aspect Ratio
B
Bandpass Filters
Bandpass Filters are used to selectively transmit a portion of the spectrum while rejecting all other wavelengths. Bandpass Filters are ideal for a variety of applications, such as fluorescence microscopy, spectroscopy, clinical chemistry, or imaging. These filters are typically used in the life science, industrial, or R&D industries.
Barcode
A barcode (also bar code) is a machine-readable representation of information in a visual format on a surface.
Bayer Pattern
A Bayer pattern, also called a Bayer filter, is a regular square grid of red, green and blue (RGB) color filters used on the image sensors of vision cameras and consumer cameras. The pattern is named after its inventor, Bryce E. Bayer of Eastman Kodak. Each pixel sits behind a single color filter; the values of the surrounding pixels are used to interpolate a full RGB value for that pixel. Decoding the Bayer pattern is called de-Bayering (or demosaicing).
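As a minimal sketch of de-Bayering with OpenCV, assuming a synthetic raw mosaic; the correct conversion constant depends on the sensor's actual pattern alignment:

```python
import cv2
import numpy as np

# Hypothetical 8-bit raw frame carrying a Bayer mosaic, e.g. as
# delivered by an industrial camera in raw mode (synthetic here).
raw = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# De-Bayering (demosaicing): interpolate a full RGB value for every
# pixel from its neighbours. The constant must match the sensor's
# mosaic alignment (BayerBG/BayerGB/BayerRG/BayerGR variants exist).
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)

print(rgb.shape)  # (480, 640, 3)
```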
Blob Discovery (Blob Tool)
Inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently serve as optical targets for machining, robotic capture, or manufacturing-failure detection.
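A minimal blob-discovery sketch using OpenCV's connected-components labelling; the file name and threshold value are illustrative, not prescribed by any particular tool:

```python
import cv2

# Hypothetical part image: dark holes in a grey object. Threshold so
# that the blobs of interest become white (255) on black (0).
img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative
_, binary = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

# Label connected pixel regions and collect per-blob statistics.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    print(f"blob {i}: area={area}px, centroid={centroids[i]}")
```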
Bit Depth
Bit depth indicates the number of bits used to code the color or gray value of a pixel. The higher the bit depth, the more values can be coded. A monochrome image in theory needs a bit depth of only 1 bit, but it can then only represent black and white, with no grey values. A typical monochrome image therefore uses 8 bits and can represent 256 different shades. A color image needs between 8 and 24 bits, depending on the quality of the image. At higher bit depths the image information is often stored directly as RGB (red, green and blue) values, so the individual colors do not need to be stored in a separate color table.
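A small illustration of how the number of codable values grows with bit depth, and of rescaling a synthetic 8-bit image to 16 bits:

```python
import numpy as np

# The number of codable values doubles with every extra bit.
for bits in (1, 8, 12, 16):
    print(bits, "bit ->", 2 ** bits, "values")

# Scaling an 8-bit image (0-255) to the full 16-bit range (0-65535);
# the frame here is synthetic, purely for illustration.
img8 = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
img16 = img8.astype(np.uint16) * 257  # 255 * 257 == 65535
```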
Bits
A pixel contains information about color and brightness. The amount of information that a pixel contains varies and is expressed in bits. A 1-bit pixel is the minimum, with only two options: on and off. This results in a monochrome image. An 8-bit pixel allows 256 different shades, a 16-bit pixel 65,536 shades, and a 24-bit pixel 16.7 million shades.
Bitmap
A raster graphics image, digital image, or bitmap, is a data file or structure representing a generally rectangular grid of pixels, or points of color, on a computer monitor, paper, or other display device.
C
Camera
A camera is a device used to take pictures, either singly or in sequence. A camera that takes pictures singly is sometimes called a photo camera to distinguish it from a video camera.
Camera Link
Camera Link is a serial communication protocol designed for computer vision applications based on the National Semiconductor interface Channel-link. It was designed for the purpose of standardizing scientific and industrial video products including cameras, cables and frame grabbers. The standard is maintained and administered by the Automated Imaging Association, or AIA, the global machine vision industry's trade group.
Charge Coupled Device (CCD)
A charge-coupled device (CCD) is a sensor for recording images, consisting of an integrated circuit containing an array of linked, or coupled, capacitors. CCD sensors and cameras tend to be more sensitive, less noisy, and more expensive than CMOS sensors and cameras.
CMOS
CMOS ("see-moss")stands for complementary metal-oxide semiconductor, is a major class of integrated circuits. CMOS imaging sensors for machine vision are cheaper than CCD sensors but more noisy.
CoaXPress
CoaXPress (CXP) is an asymmetric high speed serial communication standard over coaxial cable. CoaXPress combines high speed image data, low speed camera control and power over a single coaxial cable. The standard is maintained by JIIA, the Japan Industrial Imaging Association.
Color
The perception of the frequency (or wavelength) of light; it can be compared to how pitch (or a musical note) is the perception of the frequency or wavelength of sound.
Color Temperature
"White light" is commonly described by its color temperature. A traditional incandescent light source's color temperature is determined by comparing its hue with a theoretical, heated black-body radiator. The lamp's color temperature is the temperature in kelvins at which the heated black-body radiator matches the hue of the lamp.
Computer Vision
The study and application of methods which allow computers to "understand" image content.
Contrast
In visual perception, contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background.
C-Mount
Standardized adapter for optical lenses on CCD cameras. C-Mount lenses have a back focal distance of 17.5 mm, vs. 12.5 mm for CS-Mount lenses. A C-Mount lens can be used on a CS-Mount camera through the use of a 5 mm extension adapter. C-Mount is a 1" diameter, 32 threads-per-inch mounting thread (1"-32UN-2A).
CS-Mount
Same as C-Mount, but the back focal distance is 5 mm shorter. A CS-Mount lens will not work on a C-Mount camera. CS-Mount is a 1" diameter, 32 threads-per-inch mounting thread.
D
Data Matrix
Depth Of Field
Depth Perception
Diaphragm
Dynamic Range
E
Edge Detection
Edge detection marks the points in a digital image at which the luminous intensity changes sharply, including the points of intensity change along an object or spatial-taxon silhouette.
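A minimal sketch using OpenCV's Canny detector, one common edge detector; the file name and thresholds are illustrative and would normally be tuned per application:

```python
import cv2

# Detect sharp intensity changes with the Canny algorithm.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # path is illustrative
edges = cv2.Canny(img, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)  # white pixels mark detected edges
```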
Electromagnetic Interference (EMI)
Electromagnetic interference (EMI), also called radio frequency interference (RFI), is electromagnetic radiation emitted by electrical circuits carrying rapidly changing signals, as a by-product of their normal operation, which causes unwanted signals (interference or noise) to be induced in other circuits.
F
Firewire
Fill Factor
The fill factor of an image sensor is the ratio of a pixel's light-sensitive area to its total area. For pixels without micro lenses, the fill factor is the ratio of the photodiode area to the total pixel surface. The use of micro lenses increases the effective fill factor, often to nearly 100%, by converging light from the whole pixel area onto the photodiode.
Field Of View (FOV)
The field of view (FOV) is the area that the machine vision system can see at one moment. The field of view depends on the lens of the system and on the working distance between object and camera.
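Under a thin-lens approximation, the field of view can be estimated from the sensor size, working distance and focal length; the numbers below are illustrative only:

```python
# Thin-lens approximation of the horizontal field of view:
#   FOV ≈ sensor_width * working_distance / focal_length
sensor_width_mm = 8.8        # e.g. a 2/3" sensor (illustrative)
working_distance_mm = 500.0  # object-to-lens distance (illustrative)
focal_length_mm = 25.0       # lens focal length (illustrative)

fov_mm = sensor_width_mm * working_distance_mm / focal_length_mm
print(f"horizontal FOV ≈ {fov_mm:.1f} mm")  # ≈ 176.0 mm
```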
Flat-field Correction
Flat-Field Correction is a technique used to improve quality in digital imaging. It cancels the effects of image artifacts caused by variations in the pixel-to-pixel sensitivity of the detector and by distortions in the optical path. It is a standard calibration procedure in everything from personal digital cameras to large telescopes.
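A minimal sketch of the standard flat-field formula, assuming a dark frame (shutter closed) and a flat frame (uniformly lit, featureless target) have been captured:

```python
import numpy as np

# Standard flat-field correction:
#   corrected = (raw - dark) * mean(flat - dark) / (flat - dark)
# raw:  the image to correct
# dark: dark frame capturing sensor offset and dark current
# flat: image of a uniformly illuminated, featureless target
def flat_field_correct(raw, dark, flat):
    gain = flat.astype(np.float64) - dark
    gain[gain == 0] = 1  # guard against division by zero on dead pixels
    return (raw.astype(np.float64) - dark) * gain.mean() / gain
```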
Focus
An image, or image point or region, is said to be in focus if light from object points is converged about as well as possible in the image; conversely, it is out of focus if light is not well converged. The border between these conditions is sometimes defined via a circle of confusion criterion.
Frame Grabber
Frame Rate
G
Gamma Correction
Gamut
Grayscale
A grayscale digital image is an image in which the value of each pixel is a single sample. Displayed images of this sort are typically composed of shades of gray, varying from black at the weakest intensity to white at the strongest, though in principle the samples could be displayed as shades of any color, or even coded with various colors for different intensities.
GUI
Global Shutter
Gigabit Ethernet (GigE) Camera
H
Histogram
Histogram (Color)
In computer graphics and photography, a color histogram is a representation of the distribution of colors in an image, derived by counting the number of pixels in each of a given set of color ranges, in a typically two-dimensional (2D) or three-dimensional (3D) color space. A histogram is a standard statistical description of a distribution in terms of occurrence frequencies of different event classes; for color, the event classes are regions in color space.
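A minimal sketch of computing a 2D hue/saturation histogram with OpenCV; the file name and bin counts are illustrative:

```python
import cv2

img = cv2.imread("image.png")  # path is illustrative; OpenCV loads BGR

# 2D histogram over hue (30 bins) and saturation (32 bins) in HSV space.
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
print(hist.shape)  # (30, 32): pixel counts per hue/saturation bin
```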
HSV Color Space
I
Image Sensor
An image sensor is the general term for an electronic component which incorporates multiple light-sensitive elements and is able to capture images electronically. It converts an optical image into an electronic signal. The most used image sensors are CCD and CMOS chips. Image sensors are applied in a variety of cameras, both for video and digital photography.
Image Sensor Sensitivity
The amount of incident light a camera can convert into electrons determines the image sensor's sensitivity. This depends on the pixel size and on the technology used to make the sensor. Traditionally, CCD sensors were more sensitive to light than CMOS sensors, but in recent years this has reversed: modern CMOS sensors such as the Sony Pregius line (e.g. the IMX265) are highly light-sensitive.
Image Sensor Format
Before purchasing a vision camera, it is important to know which image sensor sizes are available, since the image sensor has a great influence on image quality. The image sensor format is indicated in inches, but the inch designation cannot be converted directly to the real sensor dimensions: it descends from the outer diameter of old television camera tubes. The sensor format is needed to calculate the required lens. The most used sensor formats in machine vision are 1/4 inch, 1/3 inch, 1/2 inch, 2/3 inch and 1 inch. For C-Mount cameras the sensor format varies from 1/4 inch up to 1.1 inch.
Interface
An interface is a means by which two systems communicate with each other: it converts information from one system into a form the other system can recognize. A display, for example, is an interface between user and computer; it converts digital information from the computer into textual or graphical form. For machine vision cameras, the interface is the type of connection between the camera and the PC, such as Gigabit Ethernet, USB 2.0 or USB 3.0.
Interlacing
Interlacing, or interlaced scanning, is a technique for capturing and displaying moving images in which image quality is improved without using more bandwidth. With interlaced scanning, an image is divided into two fields: one field consists of all the even lines (scanlines), the other of all the odd lines. The two fields are refreshed alternately, halving the amount of image information per refresh. When the whole image is instead drawn in one pass, this is called progressive scanning. Because the two fields of an interlaced frame are captured 1/50th of a second apart, they are two different snapshots of the same scene, and combining them can produce a "comb" effect on moving objects. A display has to compensate for this, which is called deinterlacing.
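A minimal line-doubling deinterlacer as a sketch; real deinterlacers interpolate between lines or use motion compensation instead of simply repeating one field:

```python
import numpy as np

# Keep one field (the even scanlines) and repeat each line to
# restore full frame height. Crude, but avoids the comb effect.
def deinterlace_line_double(frame):
    even_field = frame[0::2]             # every second row
    return np.repeat(even_field, 2, axis=0)

frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # synthetic
print(deinterlace_line_double(frame).shape)  # (480, 640)
```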
I/O
I/O stands for input/output. Signals received are input; signals sent are output. A vision camera has multiple I/O ports for communication, each carrying a high or low signal. The camera's output signal can, for example, trigger a light source, send a trigger signal to another vision camera to synchronize the two, or signal a PLC. The input ports are used, for example, to trigger the camera to capture an image, or to read the status of a button connected to the port.
J
JPEG
K
Kell Factor
L
Laser
Lens
A lens is a device that causes light to either converge and concentrate or to diverge, usually formed from a piece of shaped glass. Lenses may be combined to form more complex optical systems, such as a normal lens or a telephoto lens.
Lens Controller
Lighting
Lighting refers to either artificial light sources such as lamps or to natural illumination.
Line Scan Camera
Line scan imaging uses a single line of sensor pixels (effectively one-dimensional) to build up a two-dimensional image. The second dimension results from the motion of the object being imaged. Two-dimensional images are acquired line by line by successive single-line scans while the object moves (perpendicularly) past the line of pixels in the image sensor.
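A sketch of how a 2D image is assembled from successive line scans; grab_line() below is a hypothetical stand-in for a real camera driver call:

```python
import numpy as np

# Hypothetical acquisition call returning one line of sensor pixels;
# in practice this would come from the camera vendor's SDK.
def grab_line(width=2048):
    return np.random.randint(0, 256, (width,), dtype=np.uint8)

# Each line is captured while the object moves past the sensor,
# typically triggered by an encoder; stacking the lines yields 2D.
lines = [grab_line() for _ in range(1000)]
image = np.stack(lines, axis=0)  # shape (1000, 2048)
```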
M
Metrology
Machine Vision
Machine vision is the ability of a computer to see. It is comparable to computer vision, but in an industrial or practical application. A computer requires a machine vision camera to see; the camera collects data by capturing images of a product, process, etc. The data to be collected is specified beforehand in the software running on the vision system. After the data collection phase, the data is sent to a robot controller or computer, which then executes a certain function.
Motion Blur
Motion Perception
N
Neural Network
Normal Lens
In machine vision, a normal or entocentric lens is a lens that generates images generally held to have a "natural" perspective compared with lenses of longer or shorter focal lengths. Lenses of shorter focal length are called wide-angle lenses, while longer focal length lenses are called telephoto lenses.
O
OpenCV
Optical Character Recognition (OCR)
Optical Resolution
Optical Transfer Function
P
Pattern Recognition
Pixel
A pixel is one of the many tiny dots that make up the representation of a picture in a computer's memory or screen.
Pixel Binning
In image sensors, pixel binning refers to combining the electric charge of neighbouring pixels into one super-pixel, reducing the number of pixels and increasing the signal-to-noise ratio (SNR). There are three kinds of pixel binning: horizontal, vertical and full binning. Pixel binning usually combines 4 pixels (2x2) at a time, though some image sensors can combine up to 16 pixels (4x4). In that case the sensor increases the signal-to-noise ratio by a factor of 4 while reducing the sample density (and therefore the linear resolution) by a factor of 4 in each dimension.
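A sketch of 2x2 binning done in software on a synthetic frame; on-sensor binning combines charge in the analog domain instead, but the arithmetic is the same:

```python
import numpy as np

# Sum each 2x2 block of pixels into one super-pixel.
def bin2x2(img):
    h, w = img.shape
    img = img[:h - h % 2, :w - w % 2]  # trim odd rows/columns
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

img = np.random.randint(0, 256, (480, 640), dtype=np.uint16)  # synthetic
print(bin2x2(img).shape)  # (240, 320): half the resolution per axis
```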
Pixelation
Progressive Scanning
Q
Quantum Efficiency
R
Region of Interest (ROI)
Resolution
Resolution, in digital image processing, describes the number of pixels in an image: the more pixels, the higher the resolution. Resolution is expressed as the number of pixels horizontally and vertically, or as the total pixel count of a sensor in megapixels. An image with a resolution of 1280 x 1024 pixels (1,310,720 pixels in total) can also be described as having a 1.3 megapixel resolution.
RGB
The RGB color model utilizes the additive model in which red, green, and blue light are combined in various ways to create other colors.
Rolling Shutter
A rolling shutter sensor captures images differently from a global shutter sensor: it exposes different lines at different times as they are read out. The lines are exposed in sequence, and each line is fully read out before the next one starts. A rolling shutter pixel needs only two transistors to transport an electron, reducing heat and noise, and its structure is simpler, and therefore cheaper, than that of a global shutter sensor. The downside of the rolling shutter is that the lines are not exposed simultaneously, which causes distortion when capturing moving objects.
S
S-Video
Signal to Noise Ratio (SNR)
The signal-to-noise ratio (SNR) measures the quality of a signal in the presence of disturbing noise: the power of the desired signal relative to the power of the noise. The higher this value, the larger the difference between signal and noise, which makes it easier to retrieve weak signals. As a result, a sensor with a high SNR is better able to capture images in low-light situations.
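A sketch of estimating SNR from a stack of frames of a static scene, using synthetic data; the per-pixel standard deviation over time serves as the noise estimate:

```python
import numpy as np

# Synthetic stack: 50 frames with mean level 100 and noise sigma 5.
frames = np.random.normal(100, 5, (50, 480, 640))

signal = frames.mean(axis=0).mean()   # average signal level
noise = frames.std(axis=0).mean()     # average temporal noise
snr_db = 20 * np.log10(signal / noise)
print(f"SNR ≈ {snr_db:.1f} dB")       # ≈ 26 dB for these parameters
```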
Shading Correction
Shading correction, or flat-field correction, is used to correct for vignetting of the lens or dust particles on the image sensor. Vignetting is a darkening of the image corners compared to the centre of the image. Shading correction requires the same optical setup as was used when the calibration images were captured: the same lens, diaphragm, filter, positioning and focus.
Shutter
A shutter is a device that allows light to pass for a determined period of time, for the purpose of exposing the image sensor to the right amount of light to create a permanent image of a view.
Shutter Speed
Smart Camera
A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor, which can extract information from images without need for an external processing unit, and interface devices used to make results available to other devices.
Structured Light 3D Imaging
The process of projecting a known pattern of illumination (often grids or horizontal bars) on to a scene. The way that these patterns appear to deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene.
SVGA
Super Video Graphics Array, almost always abbreviated to Super VGA or just SVGA, is a broad term that covers a wide range of computer display standards.
SWIR
Short-wave infrared (SWIR) light is typically defined as light in the 0.9 – 1.7μm wavelength range, but can also be classified from 0.7 – 2.5μm. Since silicon sensors have an upper limit of approximately 1.0μm, SWIR imaging requires unique optical and electronic components capable of performing in the specific SWIR range.
SWIR is similar to visible light in that photons are reflected or absorbed by an object, providing the strong contrast needed for high resolution imaging. Ambient star light and background radiance (nightglow) are natural emitters of SWIR and provide excellent illumination for outdoor, nighttime imaging.
T
Telecentric Lens
Telephoto Lens
Lens whose focal length is significantly longer than the focal length of a normal lens.
Thermal Imaging
TIFF
U
USB
V
VESA
VGA
Video Graphics Array (VGA) is a computer display standard first marketed in 1987 by IBM.
W
Wide Angle Lens
In photography and cinematography, a wide-angle lens is a lens whose focal length is shorter than the focal length of a normal lens.