Riiven Threads
JPEG
You've looked at millions of JPEGs. Every one discards roughly 90% of its data, carefully, exactly where your eyes cannot see.
Nearly every compressed photo on every website you've seen since 1992 has been a JPEG, and each one throws away roughly 90% of its data while you notice nothing. The discarded data was, quite literally, invisible to your eyes to begin with. This is not a trick. It is a careful reverse-engineering of human vision, built on eight separate fields of science. The most elegant of them: Campbell and Robson showed in 1968 that human eyes are far less sensitive to high spatial frequencies than to low ones, so JPEG throws away the high frequencies you would not have seen anyway. That is one field. Here are the other seven.
More than a century of converging work
Each field had to produce a specific result before JPEG could be built. This is when they did.
Pull any thread. The story unravels the same way.
Sorted by maturation year, from the oldest foundation to the newest refinement.
Color Science
CIE 1931 quantified how wavelengths of light map to perceived color, defining the XYZ color space and the trichromatic matching functions. JPEG uses a derived space, YCbCr, which separates luminance (Y) from chrominance (Cb, Cr). This separation enables chroma subsampling: keep Y at full resolution, halve the two color channels in each dimension. File shrinks 50% before anything else happens.
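A minimal sketch of that first step, assuming the full-range BT.601 conversion that JFIF specifies and a simple 2×2 averaging subsampler (one common choice among several):

```python
# Sketch of JPEG's color step: BT.601 full-range RGB -> YCbCr, then 4:2:0.
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """rgb: H x W x 3 float array in [0, 255] -> YCbCr, same shape."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.4187 * g - 0.0813 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def subsample_420(chroma: np.ndarray) -> np.ndarray:
    """4:2:0: average each 2x2 block, halving both dimensions."""
    h, w = chroma.shape
    return chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rgb = np.random.default_rng(0).uniform(0, 255, (16, 16, 3))  # stand-in image
ycc = rgb_to_ycbcr(rgb)
y, cb, cr = ycc[..., 0], ycc[..., 1], ycc[..., 2]
cb_small, cr_small = subsample_420(cb), subsample_420(cr)
samples_before = rgb.size                                # 768 samples
samples_after = y.size + cb_small.size + cr_small.size   # 256 + 64 + 64 = 384
print(samples_after / samples_before)  # 0.5: the 50% from subsampling alone
```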
Without this field
Without perceptual color spaces, JPEG would compress in RGB, treating all three channels as equally important. Compression artifacts would manifest as color banding and hue shifts rather than luminance noise, destroying image structure at modest compression ratios.
YCbCr + 4:2:0 chroma subsampling: 50% file size reduction with no perceptible loss
Information Theory
Shannon's 1948 entropy bound establishes the theoretical minimum bits-per-symbol for lossless compression. JPEG's final stage (Huffman coding over quantized DCT coefficients) approaches that bound. Without this theory, compressor designers would have no target to aim at.
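To make the bound concrete, here is a small sketch that measures the entropy of a toy symbol stream (the skew toward zero is an assumption meant to mimic quantized-coefficient statistics, not real data) and compares it with a fixed-length code:

```python
# Shannon's bound: entropy in bits/symbol for an empirical distribution,
# versus what a fixed-length code would spend on the same stream.
import math
from collections import Counter

def entropy_bits(symbols) -> float:
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy stream, heavily skewed toward zero as quantized DCT coefficients are.
stream = [0] * 80 + [1] * 10 + [-1] * 6 + [2] * 3 + [5] * 1
h = entropy_bits(stream)
fixed = math.ceil(math.log2(len(set(stream))))  # bits for a fixed-length code
print(f"entropy: {h:.2f} bits/symbol, fixed-length: {fixed} bits/symbol")
# ~1.05 vs 3: no lossless code can beat h on average; fixed-length wastes bits.
```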
Without this field
Without Shannon's entropy framework, compression is ad hoc. The final coding stage would waste bits on probability-blind encoding, and engineers would have no principled way to measure how close to optimal a codec is.
Without entropy coding theory, JPEG files would be roughly 30% larger at the same quality
Huffman Coding
Huffman's 1952 algorithm produces optimal prefix codes for any probability distribution: shorter bit sequences for common symbols, longer for rare ones. JPEG uses Huffman tables tuned to typical quantized-DCT-coefficient statistics, compressing the final stage to within a fraction of a bit of Shannon's theoretical minimum.
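Here is the construction itself in miniature. Note this is a sketch of the algorithm, not of a JPEG encoder: real encoders typically ship the canonical example tables from the standard's annex rather than rebuilding codes per image.

```python
# Huffman's algorithm: repeatedly merge the two lowest-weight subtrees.
import heapq
from collections import Counter

def huffman_code(freqs: dict) -> dict:
    """Map each symbol to its prefix-free bit string."""
    # Heap entries: (weight, tie_breaker, {symbol: code_so_far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

freqs = Counter({0: 80, 1: 10, -1: 6, 2: 3, 5: 1})  # same toy stream as above
code = huffman_code(freqs)
avg = sum(freqs[s] * len(code[s]) for s in freqs) / sum(freqs.values())
print(code, f"average: {avg:.2f} bits/symbol")
# ~1.34 bits/symbol: a fraction of a bit above the stream's ~1.05-bit entropy.
```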
Without this field
Without variable-length entropy codes, JPEG would fall back to fixed-length encoding of the quantized coefficient stream, wasting 25 to 30% of file size. Huffman is what makes the final compression step nearly optimal.
Fixed-length encoding of JPEG's coefficient stream would produce ~30% larger files
Quantization Theory
Quantization is JPEG's only lossy step: divide each DCT coefficient by a quantizer, then round to integer. Max (1960) proved how to minimize expected distortion for a given number of levels. JPEG's standard quantization matrices are hand-tuned versions of Max's result, informed by human visual system (HVS) data: aggressive for high frequencies, gentle for low.
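A sketch of that one lossy step, using the example luminance table from Annex K of ITU-T T.81 (the "quality 50" baseline table). The toy input block is an assumption, shaped so energy concentrates in the low-frequency corner as it does for natural images:

```python
# Quantize/dequantize an 8x8 DCT block with the T.81 Annex K luminance table.
import numpy as np

Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block: np.ndarray) -> np.ndarray:
    return np.round(dct_block / Q_LUMA).astype(int)  # the only lossy operation

def dequantize(q_block: np.ndarray) -> np.ndarray:
    return q_block * Q_LUMA  # lossless, but the rounded-away precision is gone

# Toy DCT block (an assumption): magnitudes decay away from the DC corner.
block = 200.0 / (1.0 + np.add.outer(np.arange(8), np.arange(8))) ** 2
q = quantize(block)
print(np.count_nonzero(q), "of 64 coefficients survive")  # high frequencies -> 0
```

Dividing by large values bottom-right and small values top-left is exactly the "aggressive for high frequencies, gentle for low" policy the table encodes.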
Without this field
Without quantization, JPEG's compression ratio is fundamentally limited to ~2:1 (the DCT's energy compaction without bit reduction). Everything beyond that (the 10:1 to 50:1 ratios consumers actually use) comes from quantization discarding coefficient precision.
Lossless JPEG: ~2:1 compression. With quantization: 10:1 to 50:1.
Human Visual Perception
Campbell and Robson (1968) measured the contrast sensitivity function: human eyes are most sensitive around 2 to 4 cycles per degree and fall off rapidly above that. JPEG's quantization matrix discards high-frequency DCT coefficients precisely because the eye cannot resolve them. The file does not shrink arbitrarily. It shrinks exactly where vision is blind.
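The falloff is easy to evaluate. The sketch below uses Mannos and Sakrison's 1974 analytic fit to contrast-sensitivity data, a later approximation of the behavior Campbell and Robson measured; the fitted peak sits somewhat higher than their figure, and the exact peak varies with viewing conditions, but the high-frequency collapse does not:

```python
# Mannos & Sakrison (1974) analytic fit to the contrast sensitivity function.
import math

def csf(f_cpd: float) -> float:
    """Relative contrast sensitivity at spatial frequency f (cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-(0.114 * f_cpd) ** 1.1)

for f in (1, 4, 8, 16, 32):
    print(f"{f:>2} c/deg: {csf(f):.3f}")
# Sensitivity collapses at high frequencies; those are exactly the DCT
# coefficients the quantization matrix treats most aggressively.
```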
Without this field
Without the contrast sensitivity function, JPEG has no principled way to decide which DCT coefficients to keep. Quantization without HVS data discards luminance information indiscriminately, producing visible blur rather than imperceptible loss at the same compression ratio.
Typical JPEG quantization zeros out ~50% of high-frequency DCT coefficients, all invisible to the eye
Semiconductor Microelectronics
JPEG decoding requires computing an 8×8 inverse DCT, dequantizing, and Huffman decoding for every block of every image. On a 1971 microprocessor this was seconds per image; today's dedicated hardware JPEG decoders handle millions of images per second at milliwatt power. Consumer digital photography exists because of this scaling.
Without this field
Without dense microelectronics, JPEG remains a research curiosity for workstations. The transition from 'too expensive to decode' (1992 PC) to 'embedded in every phone camera sensor' (2010s) took two decades of continued semiconductor scaling after the format was standardized.
JPEG standardized in 1992; practical at smartphone power budget by ~2012
Discrete Cosine Transform
The 8×8 DCT transforms a block of spatial pixel values into 64 frequency coefficients. Natural images concentrate energy in low frequencies, so after DCT most coefficients become small and cheap to encode. JPEG's 10:1 compression depends entirely on this energy compaction.
Without this field
Without the DCT, JPEG has no way to separate perceptually important information (low-frequency structure) from unimportant detail (high-frequency noise). Lossy compression of raw pixel blocks at 10:1 produces visible noise from the first bit discarded.
After DCT, typical photo blocks have only 4 to 8 significant coefficients out of 64. That is a 10x data reduction before any quantization.
ISO/ITU Standardization
JPEG is not an algorithm. It is an agreed-upon bitstream format (ISO/IEC 10918, ITU-T T.81) published in 1992 after six years of committee work. Without the standard, every camera, browser, and operating system would implement incompatible variations; a JPEG from one device could not be opened on another.
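The standard's value shows up in how little code it takes to walk any baseline JPEG ever produced. Here is a minimal segment scanner, a sketch that ignores rarities such as fill bytes; the file path in the usage comment is hypothetical:

```python
# Walk a baseline JPEG's marker segments, stopping at start-of-scan (SOS).
import struct

STANDALONE = {0xD8, 0xD9} | set(range(0xD0, 0xD8))  # SOI, EOI, RST0-7

def list_segments(data: bytes):
    """Yield (marker, offset) for each marker segment up to start-of-scan."""
    assert data[:2] == b"\xff\xd8", "not a JPEG: missing SOI marker"
    i = 0
    while i < len(data):
        assert data[i] == 0xFF, "marker expected"
        marker = data[i + 1]
        yield marker, i
        if marker == 0xDA:          # SOS: entropy-coded image data follows
            return
        if marker in STANDALONE:
            i += 2
        else:
            (length,) = struct.unpack(">H", data[i + 2 : i + 4])
            i += 2 + length          # length field counts its own 2 bytes

# Usage (hypothetical path). Markers 0xDB (DQT), 0xC4 (DHT), and 0xC0 (SOF0)
# appear in every baseline file, from any device, because T.81 says they must:
# with open("photo.jpg", "rb") as f:
#     for marker, off in list_segments(f.read()):
#         print(f"0xFF{marker:02X} at byte {off}")
```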
Without this field
Interoperability is the feature. The JPEG committee's 1992 publication defined one canonical format every vendor implements. Without this committee work, the web as a visual medium could not exist; photos would be locked to specific software.
ISO/IEC JTC 1 Working Group 10 spent six years defining the spec every JPEG decoder still follows
JPEG is not a compression algorithm. It is a model of the human eye baked into a file format. Every step (YCbCr conversion, chroma subsampling, DCT quantization, Huffman coding) asks the same question: which bits can we throw away that the eye cannot perceive? The answer, computed by eight separate fields of science over more than a century, is roughly 90 percent of them. The web as a visual medium exists because of this convergence.
References
- A Mathematical Theory of Communication (1948)
Claude Shannon, Bell System Technical Journal vol. 27 (1948). The founding paper of information theory; established the entropy bound every compressor chases.
- Discrete Cosine Transform (1974)
Ahmed, Natarajan & Rao, IEEE Transactions on Computers vol. C-23 (1974). The paper that introduced the DCT. Every JPEG encoder still uses this specific transform.
- Application of Fourier analysis to the visibility of gratings (1968)
Campbell & Robson, Journal of Physiology vol. 197 (1968). Established that human visual sensitivity drops sharply above 2 to 4 cycles per degree, which is exactly what JPEG exploits.
- CIE 1931 2° Standard Observer (1931) tier1
Proceedings of the Commission Internationale de l'Éclairage, 1931. The color-matching functions that quantify human trichromatic perception. Every color space since rests on this foundation.
- A Method for the Construction of Minimum-Redundancy Codes (1952)
David A. Huffman, Proceedings of the IRE vol. 40 (1952). The algorithm every JPEG encoder still uses for its final entropy-coding stage.
- Quantizing for Minimum Distortion (1960)
Joel Max, IRE Transactions on Information Theory vol. IT-6 (1960). Established the optimal-quantizer design that JPEG's quantization matrices approximate.
- ITU-T T.81 / ISO/IEC 10918-1 (1992)
The JPEG standard document, published 1992. Covers digital compression and coding of continuous-tone still images, with requirements and guidelines for compliant encoders and decoders.
- The Chip: How Two Americans Invented the Microchip and Launched a Revolution (2001)
T. R. Reid (Random House, 2001). Definitive history of the integrated circuit from Kilby's first demonstration to the microprocessor era.