Wednesday 23rd July
Tour of the National Computer Museum at Bletchley Park
Sheridan Williams, Andrew Spencer and Chris Monk
Sheridan Williams is a tour guide at the museum; Andrew Spencer is the Corporate Events & Groups Manager at the museum; and Chris Monk is a learning co-ordinator at the museum.
We began our tour with Mr Williams telling us about Lorenz.
The image above left shows the Lorenz SZ42 machine with its covers removed. The Lorenz SZ machines had 12 rotors (wheels), each with a different number of cams (or “pins”).
The Lorenz SZ40, SZ42A and SZ42B were German rotor stream cipher machines used by the German Army during World War II. They were developed by C. Lorenz AG in Berlin and the model name SZ was derived from Schlüsselzusatz, meaning cipher attachment. The instruments implemented a Vernam stream cipher.
British cryptographers, who referred to encrypted German teleprinter traffic as Fish, dubbed the machine and its traffic Tunny.
The SZ machine served as an in-line attachment to a standard Lorenz teleprinter. The teleprinter characters consisted of five data bits, encoded in the International Telegraphy Alphabet No. 2 (ITA2). The enciphering machine took plain text as input and generated a pseudorandom character-by-character key, which was added to the plain text to form the encrypted output characters. The encrypted message could then be sent off.
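The Vernam principle is easy to sketch in a few lines of modern code. This is purely illustrative (the 5-bit values below are invented; on the real machine the key came from the wheel mechanism, not a list):

```python
# Illustrative Vernam stream cipher on 5-bit teleprinter characters:
# ciphertext = plaintext XOR key, and applying the same key again
# recovers the plaintext. (Values are made up for the example.)
def vernam(chars, key):
    """XOR each 5-bit character with the corresponding key character."""
    return [c ^ k for c, k in zip(chars, key)]

plaintext = [0b10101, 0b00011, 0b11110]   # arbitrary 5-bit ITA2 codes
key       = [0b01101, 0b10010, 0b00111]   # pseudorandom key stream

ciphertext = vernam(plaintext, key)
recovered  = vernam(ciphertext, key)      # decryption is the same operation
assert recovered == plaintext
```

The symmetry is the key point: the receiving machine, set to the same wheel positions, simply performed the identical operation to decrypt.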
Each of the five bits (or “impulses”) of the key for each character was generated by the relevant wheels in two parts of the machine.
Each wheel had a series of cams (or “pins”) around them. These cams could be set in a raised (active) or lowered (inactive) position. In the raised position they generated a ‘1’, in the lowered position they generated a ‘0’.
The number of cams on each wheel equalled the number of impulses needed to cause it to complete a full rotation. With a total of 501 cams this gives 2^501 possible patterns, which is approximately 10^151, an astronomically large number. However, if the five impulses are considered independently, the numbers are much more manageable. The product of the rotation periods of any pair of chi wheels gives numbers between 26×23=598 and 41×31=1271.
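The arithmetic above can be checked directly (the cam counts used here are the standard SZ42 wheel sizes):

```python
# Check the cam-pattern counts quoted above.
cams = [43, 47, 51, 53, 59,        # psi wheels
        37, 61,                    # motor wheels
        41, 31, 29, 26, 23]        # chi wheels
total = sum(cams)                  # 501 cams in all
patterns = 2 ** total              # each cam raised or lowered
print(total)                       # 501
print(len(str(patterns)))          # 151 digits, i.e. roughly 10^151
print(41 * 31, 26 * 23)            # 1271 598, the chi-wheel pair products
```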
Cryptanalysis of the Lorenz cipher was the process that enabled the British to read high-level German army messages during World War II.
It was mistakes made by the Germans that allowed the initial diagnosis of the system, and a way into decryption.
Initially, operator errors produced a number of pairs of transmissions sent with the same keys, giving a “depth”, which often allowed manual decryption to be achieved. One long depth also allowed the complete logical structure of the machine to be worked out, a quite remarkable cryptanalytical feat on which the subsequent comprehensive decrypting of Tunny messages relied.
The first mistake took place on 30 August 1941. A German operator had a long message of nearly 4,000 characters to be sent from one part of the German Army High command to another — probably Athens to Vienna. He correctly set up his Lorenz machine and then sent a twelve letter indicator, using the German names, to the operator at the receiving end. This operator then set his Lorenz machine and asked the operator at the sending end to start sending his message. After nearly 4,000 characters had been keyed in at the sending end, by hand, the operator at the receiving end sent back by radio the equivalent, in German, of “didn’t get that — send it again”.
They now both put their Lorenz machines back to the same start position. Absolutely forbidden, but they did it. The operator at the sending end then began to key in the message again, by hand. If he had been an automaton and used exactly the same key strokes as the first time then all the interceptors would have got would have been two identical copies of the cipher text. Input the same — machines generating the same obscuring characters — same cipher text. But being only human and being thoroughly disgusted at having to key it all again, the sending operator began to make differences in the second message (using abbreviations) compared to the first.
The message began with that well known German phrase SPRUCHNUMMER — “message number” in English. The first time the operator keyed in S P R U C H N U M M E R. The second time he keyed in S P R U C H N R and then the rest of the message text. Now NR means the same as NUMMER, so what difference did that make? It meant that immediately following the N the two texts were different. But the machines were generating the same obscuring sequence, therefore the cipher texts were different from that point on.
The interceptors at Knockholt realised the possible importance of these two messages because the twelve letter indicators were the same. They were sent post-haste to John Tiltman at Bletchley Park. Tiltman applied the same additive technique to this pair as he had to previous Depths. But this time he was able to get much further with working out the actual message texts because when he tried SPRUCHNUMMER at the start he immediately spotted that the second message was nearly identical to the first. Thus the combined errors of having the machines back to the same start position and the text being re-keyed with just slight differences enabled Tiltman to recover completely both texts. The second one was about 500 characters shorter than the first where the German operator had been saving his fingers. This fact also allowed Tiltman to assign the correct message to its original cipher text.
Now Tiltman could add together, character by character, the corresponding cipher and message texts revealing for the first time a long stretch of the obscuring character sequence being generated by this German cipher machine. He did not know how the machine did it, but he knew that this was what it was generating!
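The algebra behind this step is worth spelling out. With modulo-2 (XOR) addition, two messages enciphered on the same key satisfy C1 + C2 = P1 + P2, so the key cancels out of a depth; and once one plaintext is recovered, P + C yields the key stream itself. A minimal sketch with invented values:

```python
# Why a "depth" helps: the key cancels out of the sum of two ciphertexts,
# and plaintext + ciphertext reveals the key stream. (Values invented.)
p1  = [9, 20, 3, 5]                    # first plaintext, as small integers
p2  = [9, 20, 7, 1]                    # the re-keyed, slightly changed text
key = [12, 6, 30, 17]                  # same key used for both (the error)

c1 = [p ^ k for p, k in zip(p1, key)]
c2 = [p ^ k for p, k in zip(p2, key)]

# The two ciphertexts added together equal the two plaintexts added together:
assert [a ^ b for a, b in zip(c1, c2)] == [a ^ b for a, b in zip(p1, p2)]
# And a recovered plaintext added to its ciphertext yields the key stream:
assert [p ^ c for p, c in zip(p1, c1)] == key
```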
John Tiltman then gave this long stretch of obscuring characters to a young chemistry graduate, Bill Tutte, who had recently come to Bletchley Park from Cambridge. He was very good at matching patterns.
Bill Tutte started to write out the bit patterns by hand from each of the five channels in the teleprinter form of the string of obscuring characters at various repetition periods.
When he wrote out the bit patterns from channel one on a repetition of 41, various patterns began to emerge which were more than random. This showed that a repetition period of 41 had some significance in the way the cipher was generated.
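The sort of test Tutte performed by hand can be imitated statistically: for each candidate period, count how often a bit agrees with the bit one period later. A stream influenced by a 41-cam wheel agrees noticeably more often at period 41 than elsewhere. This is a synthetic illustration, not Tutte's actual working method:

```python
import random

# Synthetic stream: a fixed 41-bit "wheel" pattern, partly obscured by
# noise, mimicking key material with a period-41 component.
random.seed(1)
wheel = [random.randint(0, 1) for _ in range(41)]
stream = [wheel[i % 41] ^ (random.randint(0, 1) if i % 3 == 0 else 0)
          for i in range(2000)]

def agreement(bits, period):
    """Fraction of bits that match the bit one period later."""
    pairs = list(zip(bits, bits[period:]))
    return sum(a == b for a, b in pairs) / len(pairs)

scores = {p: agreement(stream, p) for p in range(20, 60)}
best = max(scores, key=scores.get)
print(best)   # 41 stands out well above the background rate of about 0.5
```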
Then over the next two months Tutte and other members of the Research section worked out the complete logical structure of the cipher machine which we now know as Lorenz.
When depths became less frequent, decryption was achieved by a combination of manual and automated methods. The first machine to automate part of the decrypting process was called “Heath Robinson” and it was followed by several other “Robinsons”. These were, however, slow and unreliable, and were supplemented by the much faster and more flexible “Colossus”, the world’s first electronic, programmable digital computer, ten of which were in use by the end of the war.
The family of machines known as “Robinsons” were built for the Newmanry. These used two paper tapes, along with logic circuitry, to find the settings of the chi pin wheels of the Lorenz machine, and Mr Williams demonstrated how this worked using information from a German weather station. The Robinsons had major problems keeping the two paper tapes (undulator to paper tape) synchronised, and were relatively slow (a run could take 20 minutes to set up), reading only 2,000 characters per second.
As mentioned before there were problems with Heath Robinson keeping two paper tapes in synchrony at 1,000 characters per second. One tape had punched on to it the pure Lorenz wheel patterns that the manual code breakers had laboriously worked out. The other tape was the intercepted enciphered message tape. The double-delta cross-correlation measurement was then made for the whole length of the message tape. The relative positions then moved one character and the correlation measurement repeated. The codebreaker was looking for the relative position which gave the highest cross-correlation score — which hopefully would correspond to the correct Lorenz wheel start position.
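The search can be sketched in code. This is a much-simplified, single-channel toy version of the idea (the real double-delta statistic combined two impulse channels at once): slide the known wheel pattern along the intercepted stream and, at each offset, score how often the deltas (XORs of adjacent bits) of the two streams agree. The correct start position scores highest because real plaintext contained many repeated characters, biasing its delta towards zero.

```python
import random

def delta(bits):
    """XOR of each bit with its neighbour: the 'delta' stream."""
    return [a ^ b for a, b in zip(bits, bits[1:])]

def best_offset(cipher, wheel):
    """Return the wheel start position whose delta best matches the cipher's."""
    dz, n = delta(cipher), len(wheel)
    scores = []
    for offset in range(n):
        stream = [wheel[(offset + i) % n] for i in range(len(cipher))]
        scores.append(sum(a == b for a, b in zip(dz, delta(stream))))
    return max(range(n), key=scores.__getitem__)

# Toy demonstration with an invented 41-bit wheel and run-heavy plaintext.
random.seed(2)
wheel = [random.randint(0, 1) for _ in range(41)]
plain = []
while len(plain) < 800:
    plain += [random.randint(0, 1)] * random.randint(3, 8)   # long runs
true_start = 7
cipher = [p ^ wheel[(true_start + i) % 41] for i, p in enumerate(plain[:800])]
print(best_offset(cipher, wheel))   # recovers the start position
```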
Heath Robinson worked well enough to show that mathematician Max Newman’s idea, that it was possible to automate some parts of the process for finding the settings used for each message, was correct. Newman then went to Dollis Hill where he was put in touch with Tommy Flowers, the brilliant Post Office electronics engineer. Flowers went on to design and build Colossus to meet Max Newman’s requirements for a machine to speed up the breaking of the Lorenz cipher.
Tommy Flowers’ major contribution was to propose that the wheel patterns be generated electronically in ring circuits thus doing away with one paper tape and completely eliminating the synchronisation problem.
Colossus required a vast number of electronic valves but Tommy Flowers was confident it could be made to work. He had, before the war, designed Post Office repeaters using valves. He knew that valves were reliable provided that they were never switched on and off. Nobody else believed him!
Colossus design started in March 1943. By December 1943 all the various circuits were working and the 1,500 valve Mark 1 Colossus was dismantled, shipped up to Bletchley Park, and assembled in F Block over Christmas 1943. The Mark 1 was operational in January 1944 and successful on its first test against a real enciphered message tape.
In the images above you can see Mr Williams demonstrating a fully functional replica of a Colossus Mark 2.
Colossus did not have a stored program, and was programmed through plugboards and jumper cables. It was faster, more reliable and more capable than the Robinsons, so speeding up the process of finding the Lorenz pin wheel settings. Since Colossus generated the putative keys electronically, it only had to read one tape. It used state-of-the-art vacuum tubes (thermionic valves), thyratrons and photomultipliers to optically read a paper tape at 5,000 characters per second, and then applied programmable logical functions to the bits of the key and ciphertext characters, counting how often the function returned “true”. It was driven much faster than the Robinsons, which meant that the tape travelled at almost 48 km/h. This, and the clocking of the electronics from the optically read paper tape sprocket holes, completely eliminated the Robinsons’ synchronisation problems. Bletchley Park management, which had been sceptical of Flowers’ ability to make a workable device, immediately began pressuring him to construct another. After the end of the war, the Colossus machines were dismantled on the orders of Winston Churchill, but GCHQ retained two of them.
Colossus’ valve heaters were kept on all the time to prevent the valves breaking from repeated contraction and expansion.
It used standard telephone technology of the time, relying on the operator knowing where to put the pegs.
It was Arnold Lynch’s work on photocells and optical tape readers that enabled Colossus to read punched tape at 5,000 characters per second, five times faster than previous designs.
Construction of a fully functional replica of a Colossus Mark 2 was undertaken by a team led by Tony Sale. In spite of the blueprints and hardware being destroyed, a surprising amount of material survived, mainly in engineers’ notebooks, but a considerable amount of it in the U.S. The optical tape reader might have posed the biggest problem, but Dr. Arnold Lynch, its original designer, was able to redesign it to his own original specification.
Tony Sale with the rebuilt Colossus computer
The second part of our tour involved looking at historical calculation and computer methods with Andrew Spencer.
The equipment looked at included an abacus.
A Chinese abacus
The abacus (plural abaci or abacuses), also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in Asia, Africa, and elsewhere.
We also looked at a slide rule.
The slide rule is a mechanical analogue computer. The slide rule is used primarily for multiplication and division, and also for functions such as roots, logarithms and trigonometry, but is not normally used for addition or subtraction. Though similar in name and appearance to a standard ruler, the slide rule is not ordinarily used for measuring length or drawing straight lines.
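The principle at work is that the scales are ruled logarithmically, so sliding one scale along another adds lengths, and adding logarithms multiplies numbers: log a + log b = log(ab). In code:

```python
import math

# Slide-rule multiplication: adding two logarithmic lengths and reading
# the result off the scale is the same as multiplying the numbers.
a, b = 3.0, 4.0
length = math.log10(a) + math.log10(b)   # the two slid distances
product = 10 ** length                   # reading the answer off the scale
print(round(product, 6))                 # 12.0
```

Division works the same way in reverse, by subtracting one length from another.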
A typical ten-inch student slide rule (Pickett N902-T simplex trig).
Computing is actually very old, but machines that we would recognise as computers first appeared in the 19th century. The first computers were people! That is, electronic computers (and the earlier mechanical computers) were given this name because they performed the work that had previously been assigned to people. “Computer” was originally a job title: it was used to describe those human beings (predominantly women) whose job it was to perform the repetitive calculations required to compute such things as navigational tables, tide charts, and planetary positions for astronomical almanacs.
Charles Babbage, FRS (26 December 1791 – 18 October 1871) was an English polymath. He was a mathematician, philosopher, inventor and mechanical engineer, who is best remembered now for originating the concept of a programmable computer.
Considered a “father of the computer”, Babbage is credited with inventing the first mechanical computer that eventually led to more complex designs. His varied work in other fields has led him to be described as “pre-eminent” among the many polymaths of his century.
The first calculating machine capable of multiplication was produced in 1895, and the first capable of division was produced in 1913.
Computers in the early 1950s, apart from being very big, relied on valves to work. However, as the decade went on these were replaced by transistors.
In the images below you have the valves on the left and transistors on the right.
Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had longer, indefinite, service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers’ size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards such as the IBM Standard Modular System each carrying one to four logic gates or flip-flops.
By around 1963 all computers contained transistors but they were primitive and cost hundreds of thousands of dollars. They had only a few thousand words of magnetic core memory, and programming them was far from easy.
By the end of the 1970s the computers had more memory and they were affordable.
Texas Instruments invented the first programmable hand-held calculator (a prototype called “Cal Tech”) in 1967.
As with physics, there is a problem getting women involved with computing, i.e. programming rather than just word processing. In the 1960s about half the people working with computers were women, but now it is only one woman in 30.
Although teenage girls are now using computers and the Internet at rates similar to their male peers, they are five times less likely to consider a technology-related career or plan on taking post-secondary technology classes.
It can be argued from an economic standpoint that for a country’s IT industry, as with physics and engineering, to withstand competition from abroad, underrepresented groups like women must play a greater role.
It has been suggested that gender diversity could bring benefits such as better decision making, increased creativity, and enhanced innovative performance.
Research in Canada found that one of the biggest turn-offs for girls is the “geek factor”. Secondary school girls often envisage a career in computing as a lifetime in an isolated cubicle writing code. The “geek factor” affects both male and female high school students, but it seems to have more of a negative effect on the female students. In addition, computer programmers depicted in popular media are overwhelmingly male, contributing to an absence of role models for would-be female computer programmers.
According to a 1998–2000 ethnographic study by Jane Margolis and Allan Fisher at Carnegie Mellon University, men and women viewed computers very differently. Women interviewees were more likely to state that they saw the computer as a tool for use within a larger societal and/or interdisciplinary context than did the men interviewed. On the other hand, men were more likely to express an interest in the computer as a machine.
A two-year research initiative published in 2000 by the AAUW found that “girls approach the computer as a ‘tool’ useful primarily for what it can do; boys more often view the computer as a ‘toy’ and/or an extension of the self”.
As with physics there are initiatives to get more girls into computing.
Augusta Ada King, Countess of Lovelace (10 December 1815 – 27 November 1852), born Augusta Ada Byron and now commonly known as Ada Lovelace, was an English mathematician and writer chiefly known for her work on Charles Babbage’s early mechanical general-purpose computer, the Analytical Engine.
She was married at 17 to William King, 8th Baron King, becoming Baroness King. Their residence was a large estate at Ockham Park, in Ockham, Surrey, along with another estate on Loch Torridon, and a home in London.
Between 1842 and 1843, she translated an article by Italian military engineer Luigi Menabrea on the engine, which she supplemented with an elaborate set of notes of her own, simply called Notes. These notes contain what many consider to be the first computer program—that is, an algorithm designed to be carried out by a machine. Because of this, she is often described as the world’s first computer programmer.
Lovelace’s notes are important in the early history of computers. She also developed a vision on the capability of computers to go beyond mere calculating or number-crunching while others, including Babbage himself, focused only on those capabilities.
There have been some other notable women in computer programming. Dina Vaughan was the first woman to set up a computer software company.
Dame Stephanie “Steve” Shirley, DBE, FREng, FRSA, FBCS (born 16 September 1933, Dortmund, Germany) is a British businesswoman and philanthropist.
In 1962, Shirley founded the software company F.I. Group (later Xansa, since acquired by Steria). She was concerned with creating work opportunities for women with dependants, and predominantly employed women; only three out of 300-odd programmers were male, until the Sex Discrimination Act 1975 made that illegal. She adopted the name “Steve” to help her in the male-dominated business world. In 1993, she officially retired at the age of 60 and has taken up philanthropy since then.
Programming computers can make you wealthy
In 2010, 23-year-old Grant Verstandig created the digital health company Audax Health, whose flagship product is Zensey. He has since sold most of it for an awful lot of money (believed to be around 70 million dollars).
Computers aren’t just the obvious desktop or laptop. There are at least 25 computers in the average home. Some of them are so small that they are invisible to the naked eye.
Many domestic appliances contain an embedded computer system to carry out their control functions. A modern washing machine has a computer system to handle the complex washing cycles including ensuring the correct amount of water and the correct temperature is used.
Other embedded computers can be found in the central heating systems, burglar alarm systems, microwave, dishwasher etc.
Even the above toy cat has a computer inside it to control a breathing mechanism.
Personal computing gallery
Home computers were a class of microcomputers entering the market in 1977, and becoming common during the 1980s.
Personal computers started finding their way into homes in the late 1970s.
In 1984, Apple launched the Macintosh, which at the time was the most revolutionary microcomputer on the market.
The BBC Micro was the first computer I ever owned. You had to use real floppy disks to access the programs.
At the computer museum they are busy installing a very large computer that was once used in air traffic control. When it is finished it will take up half a gallery.
The above images are of the Cray 1 – S/2000 super computer.
The Cray-1 was a supercomputer designed, manufactured and marketed by Cray Research. The first Cray-1 system was installed at Los Alamos National Laboratory in 1976 and it went on to become one of the best known and most successful supercomputers in history. The Cray-1’s architect was Seymour Cray; the chief engineer was Cray Research co-founder Lester Davis.
The Cray-1S, announced in 1979, was an improved Cray-1 that supported a larger main memory of 1, 2 or 4 million words.
It could compete with wind tunnels in analysing air and spacecraft design. It was especially useful in solving problems requiring the analysis and prediction of the behaviour of physical phenomena through computer simulation. The fields of weather forecasting, aircraft design, nuclear research, geophysical research, and seismic analysis involve this process. For example, the movements of global air masses for weather forecasting, air flows over wing and airframe surfaces for aircraft design, and the movements of particles for nuclear research all lent themselves to such simulations. In each scientific field, the equations were known but the solutions required extensive computations involving large quantities of data. The quality of a solution depended heavily on the number of data points that could be considered and the number of computations that could be performed.
The Baby Cray (CRAY YMB EC, built in 1981) supercomputer was eight times faster than its older brother. In 1986 it calculated more than 20 million digits of pi, a computation that took 28 hours.
The third part of the talk was on computers in the classroom with Chris Monk.
In today’s schools every child will have some sort of lesson with ICT. This process started off in the late 1970s and early 1980s with the computer literacy project.
Before the project, and before the internet, only a small group of people were interested in computers, and they got most of their information from magazines. The area they were most interested in was coding.
The BBC Computer Literacy Project was conceived of in 1979 by the BBC’s Continuing Education Television Department and led to a launch in January 1982 of a television series based on the BBC Microcomputer commissioned from and designed by computer company Acorn. The project was initiated partly in response to an ITV documentary series The Mighty Micro, in which Dr Christopher Evans of the UK’s National Physical Laboratory predicted the coming microcomputer revolution and its effect on the economy, industry, and lifestyle of the United Kingdom.
The BBC Microcomputer System, or BBC Micro, was a series of microcomputers and associated peripherals designed and built for the BBC Computer Literacy Project by Acorn Computers Ltd, a British computer company established in Cambridge, England, in 1978. The project was operated by the British Broadcasting Corporation. Designed with an emphasis on education, the BBC Micro was notable for its ruggedness, expandability and the quality of its operating system.
After the Literacy Project’s call for bids for a computer to accompany the TV programmes and literature, Acorn won the contract with the Proton, a successor of its Atom computer prototyped at short notice. Renamed the BBC Micro, the system was adopted by most schools in the United Kingdom, changing Acorn’s fortunes. It was also moderately successful as a home computer in the UK despite its high cost. Acorn also employed the machine to simulate and develop the ARM architecture which is much used for embedded systems, including tablets and cellphones. Globally, as of 2013, ARM is the most widely used 32-bit instruction set architecture in terms of quantity produced.
While nine models were eventually produced with the BBC brand, the phrase “BBC Micro” is usually used colloquially to refer to the first six (Model A, B, B+64, B+128, Master 128, and Master Compact), with the subsequent models considered as part of Acorn’s Archimedes series.
Reduced instruction set computing, or RISC, is a CPU design strategy based on the insight that a simplified instruction set (as opposed to a complex one) provides higher performance when combined with a microprocessor architecture capable of executing those instructions using fewer microprocessor cycles per instruction. A computer based on this strategy is a reduced instruction set computer, also called RISC. The opposing architecture is called complex instruction set computing, i.e. CISC.
The Acorn Archimedes was Acorn Computers’ first general purpose home computer to be based on their own ARM architecture.
Using a RISC design with a 32-bit CPU (26-bit addressing), at its launch in June 1987, the Archimedes was stated as running at 4 MIPS, with a claim of 18 MIPS during tests.
The name is commonly used to describe any of Acorn’s contemporary designs based on the same architecture, even where Acorn did not include Archimedes in the official name.
By the early 1990s, the UK educational market began to turn away from the Archimedes. Apple Macintosh computers and IBM-compatible PCs eclipsed the Archimedes in their multimedia capabilities, which led to an erosion of the Archimedes market share. The Tesco Computers for Schools scheme later changed partnership from Acorn to RM plc and many other computer-related suppliers, which also led to the decrease of the Archimedes’ educational market share. However, some of Apple’s early portable devices used ARM processors, and Apple products still rely on ARM technology.
Acorn was broken up into several independent operations in 1998; its legacy includes the development of RISC personal computers. One of its operating systems, RISC OS, continues to be developed by RISC OS Open. Some of Acorn’s former subsidiaries live on today—notably ARM Holdings, which is globally dominant in the mobile phone and PDA microprocessor market.
Members of Apple’s Advanced Technology Group (ATG) had made initial contact with Acorn over use of the ARM in an experimental Apple II-style prototype called Möbius. Experiments done in the Möbius project proved that the ARM RISC architecture could be highly attractive for certain types of future products. The Möbius project was briefly considered as the basis for a new line of Apple computers but was killed for fear it would compete with the Macintosh and confuse the market. However, the Möbius project raised awareness of the ARM processor within Apple. The Möbius team made minor changes to the ARM registers, and used their working prototype to demonstrate a variety of impressive performance benchmarks.
ARM is a family of instruction set architectures for computer processors based on a reduced instruction set computing (RISC) architecture developed by British company ARM Holdings.
ARM Holdings licenses the chip designs and the ARM instruction set architectures to third parties, who design their own products that implement one of those architectures—including systems-on-chips (SoC) that incorporate memory, interfaces, radios, etc. Currently, the widely used Cortex cores, older “classic” cores, and specialized SecurCore core variants are available for each of these to include or exclude optional capabilities. Companies that make chips implementing an ARM architecture include Apple, AppliedMicro, Nvidia, Qualcomm, Samsung Electronics, and Texas Instruments. Qualcomm introduced new three-layer 3D chip stacking in their 2014–15 ARM SoCs, such as their first 20 nm 64-bit octa-core parts.
ARM technology is used in a number of products, particularly PDAs and smartphones. Some computing examples are the Microsoft Surface, Apple’s iPad and ASUS Eee Pad Transformer. Others include Apple’s iPhone smartphone and iPod portable media player, Canon PowerShot A470 digital camera, Nintendo DS handheld game console and TomTom turn-by-turn navigation system.
In 2005, ARM Holdings took part in the development of Manchester University’s computer, SpiNNaker, which used ARM cores to simulate the human brain.
ARM chips are also used in Raspberry Pi, BeagleBoard, BeagleBone, PandaBoard and other single-board computers, because they are very small, inexpensive and consume very little power.
The BBC Computer Literacy Project 2012, inspired by the original scheme which introduced the BBC Micro in the 1980s, is being developed by BBC Learning to provide a starting place for young people and others to develop marketable skills in computing technology and program coding. It is felt that children need to be taught coding.
Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C. The language provides constructs intended to enable clear programs on both a small and a large scale; the small scale is something that children can get involved with.
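As a small taste of that readability (my own example, not one from the talk), a complete program that tallies word lengths fits in a handful of lines:

```python
# Count how many words of each length appear in a sentence.
sentence = "the bbc micro introduced a generation to programming"
lengths = {}
for word in sentence.split():
    lengths[len(word)] = lengths.get(len(word), 0) + 1
print(lengths)
```

It is this low barrier to a first working program that makes the language a popular choice for teaching children to code.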
For part 4 of our tour we went and looked at some of the museum’s other exhibits.
The Harwell Dekatron Computer was an early British relay-based computer. It is described in the museum as “the oldest original functioning electronic stored program computer in the world”.
The images above show Mr Williams with a calculator from 1965. It is classed as a computer because it is programmable.