THE MACHINE VISION GLOSSARY

The Machine Vision Glossary defines the most commonly used terms in machine vision. Machine vision (MV) is the technology and methods used to provide imaging-based automatic inspection and analysis for applications such as automatic inspection, process control, and robot guidance, usually in industry. Remember, if you ever need help specifying the proper machine vision components (lighting, lensing, filters, cameras, etc.), do not hesitate to contact Machine Vision Direct's application engineers at (331) 684-7466 or email Support@MachineVisionDirect.com.

Jump to Glossary Section by Letter

0-9 | A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z

0-9

1D Vision System
A 1D (or 1-D) Vision System is generally built around a camera that reads a one-dimensional array of pixels. 1D vision systems are more commonly referred to as line scan systems.

1D vision systems are commonly used where there is a continuous material flow, such as web inspection or extrusion inspection.

2D Vision System

2D (or 2-D) Vision Systems are the most common vision systems on the market. 2D cameras are commonly referred to as "area scan" cameras. These cameras produce a standard image, such as one taken by your cell phone or a webcam.

2D vision systems are used for most inspections. They are commonly found in industrial automation and autonomous guidance applications.

2D Profiler
A 2D Profiler is a type of system that can identify depth changes across a one-dimensional line. 2D profilers generally consist of a laser line generator and a CMOS sensor, which together measure the profile of an object using laser triangulation.

Laser triangulation is a process in which a single line of light is projected onto a target and the reflected light is received by a 2D CMOS sensor to evaluate the height of the object at numerous points across the line.
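
As an illustration of the triangulation geometry, the sketch below assumes a simplified setup in which the camera looks straight down at the surface and the laser line is projected at an angle theta from the camera's optical axis; the angle and shift values are made up for the example.

    import math

    # Simplified laser triangulation: camera perpendicular to the surface,
    # laser projected at angle theta from the camera's optical axis.
    # A height change dz shifts the imaged line sideways by dz * tan(theta).
    def height_from_shift(shift_mm: float, theta_deg: float) -> float:
        return shift_mm / math.tan(math.radians(theta_deg))

    # Example: the line appears shifted 0.7 mm with a 30-degree laser angle.
    print(height_from_shift(0.7, 30.0))  # ~1.21 mm height change
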
3D Vision System

A 3D Vision System can perceive three dimensions. In addition to capturing a 2D image like an area scan camera, a 3D vision system can also perceive depth. 3D vision systems usually utilize stereo vision (multiple cameras) or structured light.

A

Aberration
In optics, an aberration is a departure of an optical system's performance from ideal image formation; aberrations cause the image formed by a lens to be blurred or distorted.
Aperture
The diameter of the aperture stop of a photographic lens. The aperture stop can be adjusted to control the amount of light reaching the film or image sensor.
Aspect Ratio
The aspect ratio of an image is its displayed width divided by its height (usually expressed as "x:y").

B

Bandpass Filters

Bandpass Filters are used to selectively transmit a portion of the spectrum while rejecting all other wavelengths. Bandpass Filters are ideal for a variety of applications, such as fluorescence microscopy, spectroscopy, clinical chemistry, or imaging. These filters are typically used in the life science, industrial, or R&D industries.

Barcode

A barcode (also bar code) is a machine-readable representation of information in a visual format on a surface.

Bayer Pattern

A Bayer pattern, also called a Bayer filter, is a regular square raster of red, green, and blue (RGB) color filters used on digital image sensors in vision cameras and consumer cameras. The pattern is named after its inventor, Bryce E. Bayer of Eastman Kodak. Each pixel sits behind a single color filter; the values of the surrounding pixels are used to generate the full RGB value of each pixel. Decoding the Bayer pattern is called de-Bayering (or demosaicing).

Blob Discovery (Blob Tool)

Inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. These blobs frequently represent optical targets for machining, robotic capture, or manufacturing failure detection.

Bit Depth

Bit depth indicates the number of bits used to code the color or gray value of a pixel. The higher the bit depth, the more values can be coded. A monochrome image in theory needs a bit depth of only 1 bit, but it can then only represent black and white, with no grey values. A normal monochrome picture therefore typically uses 8 bits and can represent 256 different shades. A color picture needs between 8 and 24 bits, depending on the quality of the image. At higher bit depths the image information is often stored directly as RGB (red, green, and blue) values, so the colors no longer need to be stored in a separate table.

Bits

A pixel contains information about color and brightness. The amount of information a pixel contains varies and is expressed in bits. A 1-bit pixel is the minimum, with only two options: on and off, resulting in a monochrome image. An 8-bit pixel allows 256 different shades, a 16-bit pixel 65,536 shades, and a 24-bit pixel 16.7 million colors.
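
Since each added bit doubles the number of representable values, these counts follow directly from 2 raised to the bit depth, as this small Python check shows:

    # Number of representable values per pixel = 2 ** bit_depth
    for bits in (1, 8, 16, 24):
        print(f"{bits}-bit pixel: {2 ** bits:,} values")
    # 1-bit pixel: 2 values
    # 8-bit pixel: 256 values
    # 16-bit pixel: 65,536 values
    # 24-bit pixel: 16,777,216 values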

Bitmap

A raster graphics image, digital image, or bitmap, is a data file or structure representing a generally rectangular grid of pixels, or points of color, on a computer monitor, paper, or other display device.

C

Camera

A camera is a device used to take pictures, either singly or in sequence. A camera that takes pictures singly is sometimes called a photo camera to distinguish it from a video camera.

Camera Link

Camera Link is a serial communication protocol designed for computer vision applications based on the National Semiconductor interface Channel-link. It was designed for the purpose of standardizing scientific and industrial video products including cameras, cables and frame grabbers. The standard is maintained and administered by the Automated Imaging Association, or AIA, the global machine vision industry's trade group.

Charge Coupled Device (CCD)

A charge-coupled device (CCD) is a sensor for recording images, consisting of an integrated circuit containing an array of linked, or coupled, capacitors. CCD sensors and cameras tend to be more sensitive, less noisy, and more expensive than CMOS sensors and cameras.

CMOS

CMOS ("see-moss")stands for complementary metal-oxide semiconductor, is a major class of integrated circuits. CMOS imaging sensors for machine vision are cheaper than CCD sensors but more noisy.

CoaXPress

CoaXPress (CXP) is an asymmetric high speed serial communication standard over coaxial cable. CoaXPress combines high speed image data, low speed camera control and power over a single coaxial cable. The standard is maintained by JIIA, the Japan Industrial Imaging Association.

Color

Color is the perception of the frequency (or wavelength) of light, and can be compared to how pitch (or a musical note) is the perception of the frequency or wavelength of sound.

Color Temperature

"White light" is commonly described by its color temperature. A traditional incandescent light source's color temperature is determined by comparing its hue with a theoretical, heated black-body radiator. The lamp's color temperature is the temperature in kelvins at which the heated black-body radiator matches the hue of the lamp.

Computer Vision

The study and application of methods which allow computers to "understand" image content.

Contrast

In visual perception, contrast is the difference in visual properties that makes an object (or its representation in an image) distinguishable from other objects and the background.

C-Mount

Standardized adapter for optical lenses on CCD cameras. C-Mount lenses have a back focal distance of 17.5 mm vs. 12.5 mm for CS-Mount lenses. A C-Mount lens can be used on a CS-Mount camera through the use of a 5 mm extension adapter. C-Mount is a 1" diameter, 32 threads-per-inch mounting thread (1"-32 UN-2A).

CS-Mount

Same as C-Mount, but the back focal distance is 5 mm shorter. A CS-Mount lens will not work on a C-Mount camera. CS-Mount is also a 1" diameter, 32 threads-per-inch mounting thread.

D

Data Matrix
A two-dimensional barcode.
Defocus
Optically, defocus refers to a translation along the optical axis away from the plane or surface of best focus. In general, defocus reduces the sharpness and contrast of the image: what should be sharp, high-contrast edges in a scene become gradual transitions.
Depth Of Field
In optics, particularly photography and machine vision, the depth of field (DOF) is the distance in front of and behind the subject which appears to be in focus.
Depth Perception
The visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object.
Diaphragm
In optics, a diaphragm is a thin opaque structure with an opening (aperture) at its centre. The role of the diaphragm is to stop the passage of light, except for the light passing through the aperture.
Dynamic Range
The dynamic range is the ratio of the brightest light to the weakest light that can be perceived by a camera, measured in dB. The higher the value in dB, the larger the possible difference between the brightest and weakest light. Dynamic range is important when you want to capture an image of a high-contrast object, where both the dark and the bright parts must be captured well.
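
The dB value is computed as 20 times the base-10 logarithm of the brightest-to-weakest ratio. The sketch below uses made-up sensor numbers purely to show the arithmetic:

    import math

    # Dynamic range in dB from the ratio of the brightest to the weakest
    # detectable signal (the electron counts here are illustrative).
    saturation = 30000   # e.g. full-well capacity in electrons
    noise_floor = 5      # e.g. read noise in electrons
    dr_db = 20 * math.log10(saturation / noise_floor)
    print(f"{dr_db:.1f} dB")  # ~75.6 dB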

E

Edge Detection

Edge detection marks the points in a digital image at which the luminous intensity changes sharply, for example along the silhouette of an object or region.

Electromagnetic Interference (EMI)

Electromagnetic interference (EMI), also called radio frequency interference (RFI), is electromagnetic radiation emitted by electrical circuits carrying rapidly changing signals, as a by-product of their normal operation, which causes unwanted signals (interference or noise) to be induced in other circuits.

F

Firewire
FireWire (also known as i.LINK or IEEE 1394) is a personal computer (and digital audio/video) serial bus interface standard offering high-speed communications. It is often used as an interface for industrial cameras.
Fill Factor

The fill factor of an image sensor is the ratio of a pixel's light-sensitive area to its total area. For pixels without micro lenses, the fill factor is the ratio of the photodiode area to the total pixel surface. The use of micro lenses increases the effective fill factor, often to nearly 100%, by converging light from the whole pixel area onto the photodiode. Conversely, additional on-pixel circuitry reduces the fill factor.

Field Of View (FOV)

The field of view (FOV) is the part of the scene that the machine vision system can see at one moment. The field of view depends on the lens of the system and on the working distance between object and camera.
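
For a standard (non-telecentric) lens, a common thin-lens approximation relates the field of view to sensor size, focal length, and working distance. The numbers below are illustrative assumptions:

    # Thin-lens approximation: FOV = sensor size * working distance / focal length.
    def fov_mm(sensor_mm: float, focal_mm: float, working_distance_mm: float) -> float:
        return sensor_mm * working_distance_mm / focal_mm

    # Example: 8.8 mm wide sensor, 25 mm lens, 500 mm working distance.
    print(fov_mm(8.8, 25.0, 500.0))  # 176.0 mm horizontal field of view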

Flat-field Correction

Flat-Field Correction is a technique used to improve quality in digital imaging. It cancels the effects of image artifacts caused by variations in the pixel-to-pixel sensitivity of the detector and by distortions in the optical path. It is a standard calibration procedure in everything from personal digital cameras to large telescopes.

Focus

An image, or image point or region, is said to be in focus if light from object points is converged about as well as possible in the image; conversely, it is out of focus if light is not well converged. The border between these conditions is sometimes defined via a circle of confusion criterion.

Frame Grabber
An electronic device that captures individual, digital still frames from an analog video signal or a digital video stream.
Frame Rate
The frame rate is a measurement of how fast a device captures or processes consecutive frames. The term is often used for films, computer graphics, cameras, and displays. Frame rate is expressed in frames per second (fps) or in hertz (Hz).

G

Gamma Correction
Gamma correction is a non-linear operation used to correct an image's light intensity, illumination, or brightness. Gamma correction changes not only the brightness but also the RGB ratio. Gamma describes the relationship of proportionality between the brightness of the display and the video signal.
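
In software, gamma correction is typically applied by normalizing pixel values to the range 0 to 1 and raising them to the power 1/gamma. A minimal sketch (the gamma value 2.2 is a common but illustrative choice):

    import numpy as np

    # Gamma correction on an 8-bit image: out = in ** (1 / gamma),
    # computed on values normalized to [0, 1].
    def gamma_correct(image: np.ndarray, gamma: float = 2.2) -> np.ndarray:
        normalized = image.astype(np.float32) / 255.0
        corrected = np.power(normalized, 1.0 / gamma)
        return (corrected * 255.0).astype(np.uint8)

    # Example: mid-grey 128 maps to about 186 with gamma 2.2.
    print(gamma_correct(np.array([128], dtype=np.uint8)))
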
Gamut
In color reproduction, including computer graphics and photography, the gamut, or color gamut /ˈɡæmət/, is a certain complete subset of colors.
Grayscale

A grayscale digital image is an image in which the value of each pixel is a single sample. Displayed images of this sort are typically composed of shades of gray, varying from black at the weakest intensity to white at the strongest, though in principle the samples could be displayed as shades of any color, or even coded with various colors for different intensities.

GUI
A graphical user interface (or GUI, sometimes pronounced "gooey") is a method of interacting with a computer through a metaphor of direct manipulation of graphical images and widgets in addition to text.
Global Shutter
With a global shutter image sensor, every pixel starts and ends its exposure at the same time. This requires a large amount of on-sensor memory: because the complete image is stored in memory when the exposure ends, the data can then be read out gradually. Manufacturing global shutter sensors is a complex process, making them more expensive than rolling shutter sensors. The main benefit of global shutter sensors is that they can capture high-speed moving objects and products without distortion, so they can be used in a wider range of applications.
Gigabit Ethernet (GigE) Camera
The GigE interface for use in industrial image processing is defined by the GigE Vision standard, which was officially adopted in mid-2006. Among the greatest benefits of GigE cameras are their fast data throughput rates (up to 120 MB/s) and long maximum cable lengths of up to 100 meters. These factors make Gigabit Ethernet a universally deployable digital interface. Unlike several older interface technologies, GigE does not require a frame grabber, which significantly reduces costs. Another benefit of this interface is the ability to combine multiple cameras easily: multi-camera applications are simple to set up.
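
The quoted 120 MB/s throughput also gives a quick upper bound on achievable frame rate. This back-of-the-envelope sketch ignores protocol overhead, so real cameras achieve somewhat less:

    # Rough frame-rate ceiling for a GigE camera (illustrative numbers).
    throughput_bytes = 120e6               # ~120 MB/s usable bandwidth
    frame_bytes = 1280 * 1024 * 1          # 1.3 MP monochrome, 8 bits per pixel
    print(throughput_bytes / frame_bytes)  # ~91.6 frames per second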

H

Histogram
In statistics, a histogram is a graphical display of tabulated frequencies. A histogram is the graphical version of a table which shows what proportion of cases fall into each of several or many specified categories. The histogram differs from a bar chart in that it is the area of the bar that denotes the value, not the height, a crucial distinction when the categories are not of uniform width (Lancaster, 1974). The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent.
Histogram (Color)

In computer graphics and photography, a color histogram is a representation of the distribution of colors in an image, derived by counting the number of pixels in each of a given set of color ranges in a typically two-dimensional (2D) or three-dimensional (3D) color space. A histogram is a standard statistical description of a distribution in terms of occurrence frequencies of different event classes; for color, the event classes are regions in color space.

HSV Color Space
The HSV (Hue, Saturation, Value) model, also called HSB (Hue, Saturation, Brightness), defines a color space in terms of three constituent components:
 - Hue, the color type (such as red, blue, or yellow)
 - Saturation, the "vibrancy" of the color and colorimetric purity
 - Value, the brightness of the color
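
Python's standard library includes an RGB-to-HSV conversion operating on values normalized to the range 0 to 1, which makes the three components easy to inspect:

    import colorsys

    # Pure red: hue 0.0, fully saturated, full brightness.
    print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)
    # A pale blue: blue hue (~0.667), half saturation, full brightness.
    h, s, v = colorsys.rgb_to_hsv(0.5, 0.5, 1.0)
    print(round(h, 3), s, v)                     # 0.667 0.5 1.0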

I

Image Sensor

An image sensor is the general term for an electronic component that incorporates multiple light-sensitive elements and is able to capture images electronically. It converts an optical image into an electronic signal. The most used image sensors are CCD chips and CMOS chips. Image sensors are applied in a variety of cameras, both for video and for digital photography. The sensor converts incoming light into a digital image.

Image Sensor Sensitivity

The image sensor sensitivity is determined by the amount of incident light a camera can convert into electrons. This depends on the pixel size and on the technology used to make the image sensor. Traditionally, CCD sensors were more sensitive to light than CMOS sensors; over recent years, however, this has turned around. The Sony IMX sensors are very light sensitive, and we can highly recommend the Sony Pregius image sensors, such as the IMX265.

Image Sensor Format

Before purchasing a vision camera, it is important to know which sizes of image sensors are available, because the image sensor has great influence on the quality of your images. The image sensor format is indicated in inches, but it has to be noted that these inches cannot be converted to the real image sensor dimensions: the designation descends from the outer diameter of old television camera tubes. The image sensor format is needed to calculate the required lens. The most used sensor formats in machine vision are 1/4 inch, 1/3 inch, 1/2 inch, 2/3 inch, and 1 inch. For C-mount cameras the sensor format varies from 1/4 inch up to 1.1 inch.

Interface

An interface is a method whereby two systems can communicate with each other. An interface converts information from one system into information that is understandable and recognizable to another system. A display, for example, is an interface between user and computer: it converts digital information from the computer into textual or graphical form. For machine vision cameras, the interface is the type of connection between the camera and the PC, such as Gigabit Ethernet, USB2, or USB3.

Interlacing

Interlacing or interlaced scanning is a technique for capturing and displaying moving images in which image quality is improved without using more bandwidth. With interlaced scanning, an image is divided into two fields: one field consists of all even lines (scanlines) and the other of all odd lines. The two fields are refreshed alternately, halving the amount of image information per refresh. When the whole image is instead drawn in one pass, it is called progressive scanning. Because the two fields of an interlaced frame are captured 1/50th of a second apart, they are two different snapshots of the same scene, and compiling them into one frame can produce a "comb effect" on moving objects. A display needs to compensate for this, which is called deinterlacing.

I/O

I/O stands for input/output. Signals being received are considered input; signals being sent are output. A vision camera has multiple I/O ports for communication, with each signal being either high or low. The output signal of a vision camera can, for example, be used to trigger a light source, send a trigger signal to another vision camera to synchronize both cameras, or send a signal to a PLC. The input ports are used, for example, to trigger the camera to capture an image or to read the status of a button connected to the input port.

J

JPEG
JPEG (pronounced jay-peg) is the most commonly used standard method of lossy compression for photographic images.

K

Kell Factor
The Kell factor is a parameter used to determine the effective resolution of a discrete display device.

L

Laser
In physics, a laser is a device that emits light through a specific mechanism for which the term laser is an acronym: light amplification by stimulated emission of radiation.
Lens

A lens is a device that causes light either to converge or to diverge, usually formed from a piece of shaped glass. Lenses may be combined to form more complex optical systems, such as a normal lens or a telephoto lens.

Lens Controller
A lens controller is a device used to control a motorized zoom/focus/iris (ZFI) lens. Lens controllers may be internal to a camera, a set of manually operated switches, or a sophisticated device that allows control of a lens with a computer.
Lighting

Lighting refers to either artificial light sources such as lamps or to natural illumination.

Line Scan Camera

Line scan imaging uses a single line of sensor pixels (effectively one-dimensional) to build up a two-dimensional image. The second dimension results from the motion of the object being imaged. Two-dimensional images are acquired line by line by successive single-line scans while the object moves (perpendicularly) past the line of pixels in the image sensor.

M

Metrology
Metrology is the science of measurement. There are many applications for machine vision in metrology.
Machine Vision

Machine vision is the ability of a computer to see. Machine vision is comparable to computer vision, but applied in an industrial or practical setting. A computer requires a machine vision camera to see; this camera collects data by capturing images of a certain product, process, etc. The data to be collected is specified beforehand in the software running on the vision system. After the data collection phase, the results are sent to a robot controller or computer, which then executes a certain function.

Motion Blur
Motion blur is the phenomenon whereby objects in a photo or video image appear blurry as a result of movement of the object and/or the camera. Motion blur often occurs when the shutter time used is too long: the projected image of the object on the image sensor should not move more than half a pixel during the exposure time. As an example of calculating the maximum exposure time, suppose the field of view is 1000 x 600 mm and the machine vision camera has a resolution of 1000 x 600 pixels, giving 1 pixel per mm. An object moving at 1 m/second travels 1000 mm/second. Motion blur will be noticed if the object moves more than half a pixel, which is 0.5 x 1 mm = 0.5 mm. The maximum exposure time is therefore (max object movement 0.5 mm) / (object speed 1000 mm/s) = 0.0005 seconds = 0.5 ms = 500 µs.
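
The same half-pixel rule can be written as a short calculation; the numbers below are the ones from the example above:

    # Maximum exposure time before motion blur becomes visible,
    # using the half-pixel rule from the example above.
    fov_width_mm = 1000.0
    resolution_px = 1000.0
    object_speed_mm_s = 1000.0                    # 1 m/s

    mm_per_px = fov_width_mm / resolution_px      # 1.0 mm per pixel
    max_blur_mm = 0.5 * mm_per_px                 # half a pixel
    max_exposure_s = max_blur_mm / object_speed_mm_s
    print(max_exposure_s * 1e6, "microseconds")   # 500.0 microseconds
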
Motion Perception
Motion perception is the process of inferring the speed and direction of objects and surfaces that move in a visual scene given some visual input.

N

Neural Network
A neural network is an interconnected group of artificial neurons that uses a mathematical or computational model for information processing, based on a connectionist approach to computation. In most cases an artificial neural network (ANN) is an adaptive system that changes its structure based on external or internal information that flows through the network.
Normal Lens


In machine vision, a normal or entocentric lens is a lens that generates images generally held to have a "natural" perspective compared with lenses of longer or shorter focal lengths. Lenses of shorter focal length are called wide-angle lenses, while longer focal length lenses are called telephoto lenses.

O

OpenCV
OpenCV is a library of programming functions mainly aimed at real-time computer vision. Originally developed by Intel, it was later supported by Willow Garage and then Itseez. The library is cross-platform and free for use under the open-source Apache 2 License.
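
A minimal example of the library in use, loading an image, converting it to grayscale, and thresholding it ("part.png" is a placeholder file name):

    import cv2

    image = cv2.imread("part.png")                  # load a color image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)  # convert to grayscale
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    cv2.imwrite("part_binary.png", binary)          # save the binary result
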
Optical Character Recognition (OCR)
Optical character recognition (usually abbreviated OCR) involves computer software designed to translate images of typewritten text (usually captured by a scanner) into machine-editable text, or to translate pictures of characters into a standard encoding scheme representing them (e.g. ASCII or Unicode).
Optical Resolution
Describes the ability of a system to distinguish, detect, and/or record physical details by electromagnetic means. The system may be imaging (e.g., a camera) or non-imaging (e.g., a quad-cell laser detector).  
Optical Transfer Function
The optical transfer function (OTF) of an optical system such as a camera, microscope, human eye, or projector specifies how different spatial frequencies are handled by the system. It is used by optical engineers to describe how the optics project light from the object or scene onto a photographic film, detector array, retina, screen, or simply the next item in the optical transmission chain. A variant, the modulation transfer function (MTF), neglects phase effects, but is equivalent to the OTF in many situations.

P

Pattern Recognition
Pattern recognition is a field within the area of machine learning. Alternatively, it can be defined as the act of taking in raw data and taking an action based on the category of the data. It is a collection of methods for supervised learning. Match tools in machine vision software are usually pattern recognition tools.
Pixel

A pixel is one of the many tiny dots that make up the representation of a picture in a computer's memory or screen.

Pixel Binning

In image sensors, pixel binning refers to combining the electric charge of neighbouring pixels into one super-pixel, thereby reducing the number of pixels. This increases the signal-to-noise ratio (SNR). There are three kinds of pixel binning: horizontal, vertical, and full binning. Pixel binning is often done with 4 pixels (2x2) at a time, which quadruples the signal collected per super-pixel while reducing the sample density (and therefore resolution) by a factor of four. Some image sensors can combine up to 16 (4x4) pixels at a time.
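
On-sensor binning combines charge before readout, but the arithmetic can be mimicked in software. A sketch of 2x2 binning with NumPy, summing each 2x2 block into one super-pixel:

    import numpy as np

    # Software 2x2 binning of a monochrome frame: trim to even dimensions,
    # then sum each 2x2 block into one super-pixel.
    def bin_2x2(image: np.ndarray) -> np.ndarray:
        h, w = image.shape
        trimmed = image[:h - h % 2, :w - w % 2]
        return trimmed.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

    frame = np.random.randint(0, 256, (480, 640), dtype=np.uint16)
    print(bin_2x2(frame).shape)  # (240, 320)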

Pixelation
In computer graphics, pixelation is an effect caused by displaying a bitmap or a section of a bitmap at such a large size that individual pixels, small single-colored square display elements that comprise the bitmap, are visible.
Progressive Scanning
Progressive scanning is a technique for displaying, storing, or transmitting moving images in which a frame does not consist of multiple fields; instead, all rows are refreshed in order. This is the opposite of the interlaced scanning method used by older CCD sensors. Progressive scanning is used on all CCD sensors in our machine vision cameras.

Q

Quantum Efficiency
Quantum efficiency refers to the incident-photon-to-converted-electron (IPCE) ratio of a photosensitive device such as the image sensor of a machine vision camera.
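
In its simplest form, quantum efficiency can be expressed as QE (%) = (converted electrons / incident photons) x 100; a sensor with 60% QE, for example, converts on average 60 of every 100 incident photons into electrons.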

R

Region of Interest (ROI)
The region of interest (ROI) of a machine vision camera is the area or part of the image sensor that is read out. For example, suppose a vision camera has an image sensor with a resolution of 1280 x 1024 pixels but you are only interested in the center part of the image. You can set an ROI of 640 x 480 pixels inside the camera, so that only that part of the image sensor captures light and transmits data. Setting an ROI in the vision camera increases the frame rate: because only part of the image sensor is read out, there is less data to transmit per captured image, allowing the camera to make more images per second.
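
A software crop is the closest everyday analogue, although only a true on-sensor ROI raises the frame rate. A sketch of cropping the central 640 x 480 region out of a 1280 x 1024 frame with NumPy (values illustrative):

    import numpy as np

    frame = np.zeros((1024, 1280), dtype=np.uint8)  # rows x columns
    y0 = (1024 - 480) // 2                          # top edge of the ROI
    x0 = (1280 - 640) // 2                          # left edge of the ROI
    roi = frame[y0:y0 + 480, x0:x0 + 640]
    print(roi.shape)                                # (480, 640)
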
Resolution

Resolution, in digital image processing, describes the number of pixels in an image. The higher the number of pixels, the higher the resolution. Resolution is expressed as the number of pixels horizontally and vertically, or as the total number of pixels of a sensor in megapixels. An image with a resolution of 1280 x 1024 pixels (1,310,720 pixels in total) is also described as a 1.3-megapixel image.

RGB

The RGB color model utilizes the additive model in which red, green, and blue light are combined in various ways to create other colors.

Rolling Shutter

A rolling shutter sensor captures images differently from a global shutter sensor: it exposes different lines at different times as they are read out. The lines are exposed in sequence, and each line is fully read out before the next line. A rolling shutter pixel needs only two transistors to transport an electron, reducing the amount of heat and noise. Compared with a global shutter sensor, the structure of a rolling shutter sensor is simpler and therefore cheaper. The downside of the rolling shutter, however, is that not every line is exposed simultaneously, which causes distortion when capturing moving objects.

S

S-Video
Separate video, abbreviated S-Video and also known as Y/C (or erroneously, S-VHS and "super video") is an analog video signal that carries the video data as two separate signals (brightness and color), unlike composite video which carries the entire set of signals in one signal line. S-Video, as most commonly implemented, carries high-bandwidth 480i or 576i resolution video, i.e. standard definition video. It does not carry audio on the same cable.
Signal to Noise Ratio (SNR)

The signal-to-noise ratio (SNR) is a measure of the quality of a signal in which disturbing noise is present: it compares the power of the desired signal to the power of the noise. The higher this value, the larger the difference between signal and noise, making it possible to better retrieve weak signals. As a result, a sensor with a high SNR is better able to capture images in low-light situations.

Shading Correction

Shading correction, or flat-field correction, is used to correct vignetting of the lens or dust particles on the image sensor. Vignetting is a darkening of the image corners compared to the centre of the image. Using shading correction requires the same optical setup with which the calibration images were originally captured: the same lens, diaphragm, filter, positioning, and focus.

Shutter

A shutter is a device that allows light to pass for a determined period of time, for the purpose of exposing the image sensor to the right amount of light to create a permanent image of a view.

Shutter Speed
In machine vision, the shutter speed is the time for which the shutter is held open while taking an image, to allow light to reach the imaging sensor. In combination with the lens aperture, this regulates how much light the imaging sensor in a digital camera receives.
Smart Camera

A smart camera is an integrated machine vision system which, in addition to image capture circuitry, includes a processor, which can extract information from images without need for an external processing unit, and interface devices used to make results available to other devices.

Structured Light 3D Imaging

The process of projecting a known pattern of illumination (often grids or horizontal bars) on to a scene. The way that these patterns appear to deform when striking surfaces allows vision systems to calculate the depth and surface information of the objects in the scene.

SVGA

Super Video Graphics Array, almost always abbreviated to Super VGA or just SVGA is a broad term that covers a wide range of computer display standards.

SWIR

Short-wave infrared (SWIR) light is typically defined as light in the 0.9–1.7 µm wavelength range, but can also be classified as 0.7–2.5 µm. Since silicon sensors have an upper limit of approximately 1.0 µm, SWIR imaging requires unique optical and electronic components capable of performing in the specific SWIR range.

SWIR is similar to visible light in that photons are reflected or absorbed by an object, providing the strong contrast needed for high resolution imaging. Ambient star light and background radiance (nightglow) are natural emitters of SWIR and provide excellent illumination for outdoor, nighttime imaging.

T

Telecentric Lens
Compound lens with an unusual property concerning its geometry of image-forming rays. In machine vision systems telecentric lenses are usually employed in order to achieve dimensional and geometric invariance of images within a range of different distances from the lens and across the whole field of view.
Telephoto Lens

Lens whose focal length is significantly longer than the focal length of a normal lens.

Thermal Imaging
Thermal imaging is a type of infrared imaging in which a heat map is generated based on temperatures rather than visible wavelengths.
TIFF
Tagged Image File Format (abbreviated TIFF) is a file format for mainly storing images, including photographs and line art.

U

USB
Universal Serial Bus (USB) provides a serial bus standard for connecting devices, usually to computers such as PCs, but is also becoming commonplace on cameras.

V

VESA
The Video Electronics Standards Association (VESA) is an international body, founded in the late 1980s by NEC Home Electronics and eight other video display adapter manufacturers. The initial goal was to produce a standard for 800×600 SVGA resolution video displays. Since then VESA has issued a number of standards, mostly relating to the function of video peripherals in IBM PC compatible computers.
VGA

Video Graphics Array (VGA) is a computer display standard first marketed in 1987 by IBM.

Vision Processing Unit (VPU)
A class of microprocessors aimed at accelerating machine vision tasks.

W

Wide Angle Lens
In photography and cinematography, a wide-angle lens is a lens whose focal length is shorter than the focal length of a normal lens.

X

X-Rays
A form of electromagnetic radiation with a wavelength in the range of 10 to 0.01 nanometers, corresponding to frequencies in the range 30 to 3000 PHz (1 PHz = 10^15 hertz). X-rays are primarily used for diagnostic medical and industrial imaging as well as crystallography. X-rays are a form of ionizing radiation and as such can be dangerous.

Y

Y-Cable
A Y-cable or Y cable is an electrical cable containing three ends of which one is a common end that in turn leads to a split into the remaining two ends, resembling the letter "Y". Y-cables are typically, but not necessarily, short (less than 12 inches), and often the ends connect to other cables. Uses may be as simple as splitting one audio or video channel into two, to more complex uses such as splicing signals from a high-density computer connector to its appropriate peripheral.

Z

Zoom Lens
A mechanical assembly of lenses whose focal length can be changed, as opposed to a prime lens, which has a fixed focal length.