Image compression


Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.
Figure: Images saved by Adobe Photoshop at different quality levels, with and without "save for web".

Lossy and lossless image compression

Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.
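For a concrete sense of the trade-off, the sketch below (assuming the Pillow imaging library and a placeholder input file photo.jpg) encodes the same photograph losslessly as PNG and lossily as JPEG at several quality settings, then prints the encoded sizes; lower JPEG quality yields a smaller file at the cost of more visible artifacts.

```python
from io import BytesIO
from PIL import Image  # Pillow: pip install Pillow

img = Image.open("photo.jpg").convert("RGB")  # placeholder input image

def encoded_size(image, fmt, **options):
    """Return the size in bytes of `image` encoded with the given format."""
    buf = BytesIO()
    image.save(buf, fmt, **options)
    return buf.tell()

print("PNG (lossless):", encoded_size(img, "PNG"))
for q in (95, 75, 30, 10):
    # Lower quality -> stronger quantization -> smaller file, more artifacts.
    print(f"JPEG quality {q}:", encoded_size(img, "JPEG", quality=q))
```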
Methods for lossy compression include transform coding (such as the block-based discrete cosine transform used in JPEG and the wavelet transform used in JPEG 2000), chroma subsampling, fractal compression, and reducing the color space to the most common colors in the image.
Methods for lossless compression include run-length encoding, predictive coding such as DPCM, entropy coding (for example, Huffman and arithmetic coding), and adaptive dictionary algorithms such as LZW (used in GIF) and DEFLATE (used in PNG).
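As a minimal sketch of one lossless method, run-length encoding replaces each run of identical pixel values with a (value, count) pair; the function names below are chosen for illustration only.

```python
def rle_encode(pixels):
    """Run-length encode a sequence of pixel values as (value, count) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1][1] += 1  # extend the current run
        else:
            encoded.append([p, 1])  # start a new run
    return [tuple(pair) for pair in encoded]

def rle_decode(runs):
    """Invert rle_encode: expand (value, count) pairs back to pixel values."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
runs = rle_encode(row)          # [(255, 3), (0, 2), (17, 4)]
assert rle_decode(runs) == row  # lossless: the original row is recovered exactly
```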
The best image quality at a given compression rate (or bit rate) is the main goal of image compression. However, there are other important properties of image compression schemes:
Scalability generally refers to a quality reduction achieved by manipulation of the bitstream or file, without decompressing and re-encoding. Other names for scalability are progressive coding or embedded bitstreams. Despite its contrary nature, scalability also may be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them or for providing variable-quality access to, e.g., databases. There are several types of scalability: quality (or layer) progressive, where the bitstream successively refines the reconstructed image; resolution progressive, where a lower-resolution image is encoded first, followed by the differences to higher resolutions; and component progressive, where a grayscale version is encoded first and color information is added afterwards. A progressive JPEG, sketched after this list, is a familiar example.
Region of interest coding. Certain parts of the image are encoded with higher quality than others. This may be combined with scalability.
Meta information. Compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small preview images, and author or copyright information.
Processing power. Compression algorithms require different amounts of processing power to encode and decode. Some high compression algorithms require high processing power.
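As noted under scalability above, a progressive JPEG is a familiar example of quality-progressive coding: the file is written as a sequence of coarse-to-fine scans, so a partially downloaded image already renders as a full-frame preview that sharpens as more data arrives. A minimal sketch using the Pillow library, again with a placeholder input file photo.jpg:

```python
from PIL import Image  # Pillow: pip install Pillow

img = Image.open("photo.jpg").convert("RGB")  # placeholder input image

# A baseline JPEG stores the image top-to-bottom in a single scan;
# a progressive JPEG stores successive coarse-to-fine scans instead.
img.save("baseline.jpg", "JPEG", quality=75)
img.save("progressive.jpg", "JPEG", quality=75, progressive=True)
```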
The quality of a compression method often is measured by the peak signal-to-noise ratio (PSNR). PSNR measures the amount of noise introduced through a lossy compression of the image; however, the subjective judgment of the viewer also is regarded as an important measure, perhaps being the most important measure.
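In terms of the mean squared error (MSE) between the original and compressed images, PSNR = 10 · log10(MAX² / MSE) decibels, where MAX is the largest possible pixel value (255 for 8-bit images). A minimal NumPy sketch:

```python
import numpy as np

def psnr(original, compressed, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    original = np.asarray(original, dtype=np.float64)
    compressed = np.asarray(compressed, dtype=np.float64)
    mse = np.mean((original - compressed) ** 2)  # mean squared error
    if mse == 0:
        return float("inf")  # identical images: no noise introduced
    return 10.0 * np.log10(max_value ** 2 / mse)
```

A higher PSNR indicates that the reconstruction is numerically closer to the original, though, as noted, it does not always track perceived visual quality.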

History

Entropy coding started in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding, which was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969.
An important development in image data compression was the discrete cosine transform, a lossy compression technique first proposed by Nasir Ahmed in 1972. DCT compression became the basis for JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format. Its highly efficient DCT compression algorithm was largely responsible for the wide proliferation of digital images and digital photos, with several billion JPEG images produced every day as of 2015.
Lempel–Ziv–Welch is a lossless compression algorithm developed by Abraham Lempel, Jacob Ziv and Terry Welch in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm developed by Phil Katz and specified in 1996, is used in the Portable Network Graphics format.
Wavelet coding, the use of wavelet transforms in image compression, began after the development of DCT coding; unlike the DCT's block-based approach, it applies a transform across the whole image at multiple scales. The JPEG 2000 standard was developed from 1997 to 2000 by a JPEG committee chaired by Touradj Ebrahimi. In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms: the CDF 9/7 wavelet transform for its lossy compression and the Le Gall–Tabatabai (LGT) 5/3 wavelet transform for its lossless compression. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.