In 1950 Bell Labs researcher Richard W. Hamming made a discovery that would lay a foundation for the modern computing and communications industries: a code for correcting errors in transmitted data. The Hamming code was born. CIO Blast from the Past takes a journey through 60 years of information theory and discovers how the Hamming code legacy lives on today.

Richard W Hamming was born in Chicago in 1915. He studied mathematics at three universities, receiving a bachelor of science, a master of arts and a PhD. He gained his early experience at Los Alamos National Laboratory from 1945 to 1946, where he managed the computers used to build the first atomic bomb.

From there, he moved to Bell Labs where he spent 30 years in various aspects of computing, numerical analysis, and management of computing.

In 1976 he joined the Naval Postgraduate School in Monterey, California where he worked as a teacher and authored no fewer than nine books on numerical methods, coding theory and computing.

In April 1950 the American Telephone and Telegraph Company (AT&T) published Volume 29 of the Bell System Technical Journal. The paper, titled Error Detecting and Error Correcting Codes, by RW Hamming, outlined a method for performing large-scale computing operations without errors.

The paper is still available today and is hosted by Alcatel-Lucent (PDF), the modern owner of Bell Labs.


Hamming wrote about how self-checking circuits help eliminate errors in telephone central offices. Digital computers, however, were “single long path” machines incapable of dealing with a single failure. The “special codes” Hamming proposed could handle error detection and correction; he speculated they would only be needed in systems requiring unattended operation for long periods of time, or in “extremely large and tightly integrated” systems where a single failure “incapacitates the entire installation”.

Named after their inventor, Hamming codes went on to be built upon and applied to all aspects of computing and communications systems.

Hamming, a Turing Award recipient in 1968, passed away on January 7, 1998. The IEEE Richard W Hamming Medal, awarded annually for exceptional contributions to information sciences, systems and technology, lives on in his honour.

**CIO’s Blast from the Past series has previously covered 40 years of Multics (and slideshow) and 110 Years of IBM technology.**

### Meet the Shannon bound

To put Hamming’s contribution to error correction coding in perspective, it is helpful to understand some other events in the history of communications, says Todd Moon, professor of electrical and computer engineering at Utah State University.

“In 1948, Claude Shannon published A Mathematical Theory of Communication, (Bell System Technical Journal, v. 27, pp. 623—656), in which he created the field of information theory. In that paper, he showed that digital communication could be made as reliable as desired (essentially with zero probability of error), provided that error correction coding was used and that the rate of communications is less than a quantity associated with the channel called the ‘channel capacity’”, Moon says.

See related slideshow: CIO Blast from the Past – 60 years of cryptography

Transmitting data at a rate less than capacity is tied to the concept of error correction coding.

Moon says Shannon’s bold “existence proof” showed that such codes exist, but the paper does not say how to find them. Codes that are as good as theoretically possible are said to “meet the Shannon bound”.

“Shannon’s paper sparked 60 years of research as engineers began to understand the implications of the theory for actual communications and pushed to find codes that are close to the Shannon bound,” Moon says.

“It is hardly an overstatement to say that Shannon was the impetus behind the digital revolution, as successively better and better ways of coding led to better and better communications.”

Shannon’s theories were first applied to digital communications over phone lines, and there was a steady succession of faster modems: from 300 to 14,400 bits per second.

David MacKay, professor of natural philosophy in the Department of Physics at the University of Cambridge and chief scientific adviser to the UK Department of Energy and Climate Change, has also written about Hamming codes and in 2003 published Information Theory, Inference, and Learning Algorithms.

MacKay says Hamming’s first steps in coding theory coincided with Shannon’s 1948 discovery of the complementary field of information theory.

“Shannon proved the existence of error correcting codes with capabilities way beyond those of Hamming codes,” he says. “He proved the existence of larger codes capable of correcting larger numbers of errors. Shannon’s proof was non-constructive, and laid down a delightful challenge for the coding theory community; Hamming’s work was the foundation stone.”

### The Hamming code connection

Where does Hamming fit into this early work on communications theory?

In 1950 Hamming was working on computers, which at that time were very unreliable since they were based on vacuum tubes, Moon says.

“He was looking for a way to make these computations more reliable, and came up with the concept of these codes which add some redundancy to enable the correction of some errors.

“At the time, his focus was on computer hardware, but it soon emerged that Hamming’s codes were precisely the kind of codes that Shannon was referring to in his paper. They don’t meet the Shannon bound – they don’t even come close – but they started the exploration of the whole field.”

There is not just one, but a whole family of Hamming codes. In the (7,4) Hamming code, four data bits go into the encoder and seven coded bits come out. And since there are four bits in, there are 16 (2^4) possible codewords.

“There is also a (15,11) Hamming code (with 2^11 = 2048 codewords of length 15), a (31,26) Hamming code (with 2^26 codewords of length 31),” Moon says.

“Each of these codes is capable of correcting one error in the codeword. For example, in the (7,4) code, out of the seven coded bits, if a single bit error occurs anywhere, the decoder can fix it. Or, in the (31,26) code, if a single error occurs anywhere, the decoder can fix it.”
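The single-error correction Moon describes can be sketched in a few lines of Python. This is a minimal illustration, not Hamming’s original presentation: it assumes one common bit layout (parity bits at positions 1, 2 and 4), and the function names are ours.

```python
# Minimal (7,4) Hamming code: 4 data bits in, 7 coded bits out.
# Parity bits sit at positions 1, 2 and 4 (one common convention).

def encode(data):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4          # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(received):
    """Correct at most one flipped bit, then return the 4 data bits."""
    r = list(received)
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error; 0 = none
    if syndrome:
        r[syndrome - 1] ^= 1          # flip the bad bit back
    return [r[2], r[4], r[5], r[6]]

codeword = encode([1, 0, 1, 1])
corrupted = list(codeword)
corrupted[4] ^= 1                     # a single bit error, anywhere
assert decode(corrupted) == [1, 0, 1, 1]
```

Flipping any one of the seven bits leaves the three parity checks spelling out exactly that bit’s position, which is why a single error anywhere can be fixed.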

Hamming codes can also be used to detect when errors occur. Any Hamming code can detect that two bits have been flipped (two errors), though it cannot correct them; if a system knows that errors have occurred, the information can be retransmitted.

“This is the way the TCP/IP protocol works on the internet, although it uses a different kind of code for stronger error detection capability,” Moon says.
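The detect-and-retransmit pattern is easy to sketch. This is an illustration only, assuming one common (7,4) bit layout; the `syndrome()` helper is ours. A distance-3 code detects any one or two flipped bits, as long as no correction is attempted.

```python
# Detection-only use of a (7,4) Hamming codeword: recompute the three
# parity checks; if any fails, errors occurred and a resend is needed.

def syndrome(r):
    """Three parity checks over a received 7-bit word; 0 means all pass."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    return s1 + 2 * s2 + 4 * s3

codeword = [0, 1, 1, 0, 0, 1, 1]            # a valid codeword in this layout
received = list(codeword)
received[1] ^= 1
received[5] ^= 1                            # two errors in transit
errors_detected = syndrome(received) != 0   # True: ask the sender to resend
assert syndrome(codeword) == 0
assert errors_detected
```

With two flips the syndrome is guaranteed nonzero, so the receiver always notices; it just cannot tell which bits went wrong, so retransmission is the remedy.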

“In exploring the properties of these codes, Hamming used some geometric analogies to talk about ‘distance’ between codewords. Comparing codewords, it is clear that every code word differs from any other code word in at least three bit positions. This measure of distance is referred to as the Hamming distance, due to Hamming’s original use of the term.”
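Moon’s distance observation can be checked by brute force. The sketch below assumes one systematic (7,4) layout (data bits first, then three parity bits; the helper names are ours — other bit orderings give the same distances). Exhaustively comparing all 16 codewords confirms a minimum Hamming distance of 3.

```python
from itertools import product

def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def encode74(d1, d2, d3, d4):
    # Systematic (7,4) layout: four data bits, then three parity bits.
    return (d1, d2, d3, d4,
            d1 ^ d2 ^ d4,      # parity over data bits 1, 2, 4
            d1 ^ d3 ^ d4,      # parity over data bits 1, 3, 4
            d2 ^ d3 ^ d4)      # parity over data bits 2, 3, 4

codewords = [encode74(*bits) for bits in product([0, 1], repeat=4)]

# Every pair of distinct codewords differs in at least 3 positions.
min_dist = min(hamming_distance(a, b)
               for a in codewords for b in codewords if a != b)
assert min_dist == 3
```

A minimum distance of 3 is exactly what makes single-error correction possible: flipping one bit of a codeword still leaves it closer to that codeword than to any other.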

As the theory of error correcting codes developed, various other attributes and families of codes were discovered, including classes of codes called “perfect” codes and “cyclic” codes. Hamming codes belong to both.

MacKay says the Hamming code was the first discovery in an immense field called coding theory and was also a rather unusual foundation stone for the field because it is a “perfect” code: a code whose codewords are the centres of hyperspheres that perfectly fill the space of all strings, without overlapping.

“Perfect codes are remarkably rare objects. See page 228 of my book which is available free online,” he says.
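MacKay’s sphere-packing picture can be verified directly for the (7,4) code: each of the 16 codewords is the centre of a radius-1 “ball” of 8 words (the codeword plus its 7 single-bit-flip neighbours), and 16 × 8 = 128 = 2^7, exactly the number of 7-bit strings. The sketch below assumes one systematic (7,4) layout; the helper names are ours.

```python
from itertools import product

def encode74(d1, d2, d3, d4):
    # Systematic (7,4) layout: four data bits, then three parity bits.
    return (d1, d2, d3, d4,
            d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4)

def ball(center):
    """The codeword plus its 7 single-bit-flip neighbours: 8 words."""
    yield center
    for i in range(7):
        flipped = list(center)
        flipped[i] ^= 1
        yield tuple(flipped)

covered = set()
for bits in product([0, 1], repeat=4):
    covered.update(ball(encode74(*bits)))

# 16 codewords x 8 words per ball = 128 = 2^7: the balls tile the whole
# space with no gaps and no overlap, which is what "perfect" means.
assert len(covered) == 2 ** 7
```

If any two balls overlapped, the set would hold fewer than 128 words; since it holds all of them, every possible 7-bit string is within one flip of exactly one codeword.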


### Error correction today

Error correction codes are now used ubiquitously in the communications industry. Our cell phones, disk drives, flash drives, and the Internet all make use of error correction coding.

Hamming codes are rather narrower in their capabilities, and are used in some computer memory systems, but not much else these days, Moon says.

“But that is not to say that Hamming codes are insignificant in their historical influence, their pedagogical value, and their role at the intersection of several different families of codes,” he says.

“Error correction coding was going to be discovered, motivated by Shannon’s information theory [and] it is inevitable that the codes we now call Hamming codes would have been discovered.”

Moon says there may always be niche applications for Hamming codes, and there certainly will always be a place for error correction codes, but Hamming codes as general codes have long since been superseded.


According to professor MacKay, it took another 50 years for the research community to understand how (practically) to reach the limits defined by Shannon, though the key ideas were already invented in the early 1960s by Bob Gallager.

“Today’s record-breaking codes are just big, pseudo-random cousins of Hamming’s codes,” he says.

“Error correcting codes have connections to design theory and graph theory. They crop up in other places, for example in the solution to the ‘Hat puzzle’ in my book.”

MacKay says Hamming codes are in one sense impossible to supersede as they are “unique, beautiful, and a very useful teaching tool for introducing information theory”.

“In practical applications, however, they were almost immediately superseded by their bigger, clumsier cousins, which can correct more errors. One of Shannon’s key insights was that (in principle) the larger the number of bits that are protected by an error-correcting code, the greater the fraction of errors that can be protected against.”

### The Hamming legacy

Despite his influence on information and communication technology, the name Richard Hamming is not well known outside academic and research circles.

People such as Bill Gates, Steve Jobs and Linus Torvalds dominate the popularity stakes, which professor Moon says is disappointing.

“At least within the information theory community, there is a good appreciation for history,” he says. “This is helped because the history is so short – 60 years to create an entirely new discipline is a short time. Many technical people (information theorists) alive today are familiar with the founding fathers.”

Moon says there is certainly a lack of awareness and appreciation among the general populace about who has made significant contributions to our modern digital world.

“The name Claude Shannon should be as familiar to high school students as that of Isaac Newton, but it is difficult to find any student who has a clue. Richard Hamming, while in a lesser pantheon, similarly deserves recognition.”

Moon never met Hamming, but says from his research, Hamming seemed like a man very interested in many areas of technology.

“Hamming wrote several books, including a very influential book on numerical analysis (doing accurate numerical computations on computers),” Moon says. “He seems to have been unassuming, and I think would have been very flattered by the continued use of his name in connection with several areas of communications and signal engineering.”

He is known not only for Hamming codes, but also for the Hamming distance and for the Hamming window, a “windowing” function used in digital filtering.

In his later years, Hamming wrote a paper, You and Your Research (PDF), which discusses the importance of researchers choosing significant, difficult problems to work on, writing well, and disseminating their results.

“This paper is now assigned reading by all graduate students in our department,” Moon says.

According to MacKay, with the benefit of hindsight, the inevitability of the discovery of Hamming codes in no way overshadows the achievement – the codes stand as an excellent example of taking existing theory and applying it to solve a specific problem.