Computer History

Donald Daniel

2022, revised Dec 2024

This is a technical history of computers more than a social or commercial history.

Digital information technology started long before digital computing. In 1725 paper tape with holes in it was used to control a loom for weaving patterns in fabric. In 1804 the famous Jacquard machine read paper cards with holes in them to control looms. In 1837 Morse code was used to transmit text over a telegraph.

In 1842 Ada Lovelace wrote a description of how to configure Charles Babbage's proposed mechanical calculator, his "analytical engine", to do different calculations. He did not finish making the analytical engine.

Herman Hollerith's mechanical "tabulating machine" was developed for and successfully used to process the data obtained in the 1890 United States census. It was not a general purpose computer, but it could add up the census data. Data was input to it in the form of paper cards with the information coded in the pattern of holes in the cards. It could be used to compile statistics about how many people with characteristics recorded by census takers lived in different regions. This early mechanical calculator sold well.

In 1901 Hollerith introduced an automatic card sorter. Card sorters allowed searching decks of cards the way computers today search arrays in memory. For instance, the police department of a large city could make a list of suspects based on the description provided by a witness.

Later the Hollerith company combined with other companies and became International Business Machines, or IBM. Successive improvements in the tabulating machine culminated in the IBM 407 mechanical accounting machine, which came out in 1949. It weighed 1.3 tons. It had electric motors like the one in your refrigerator and electric solenoids like the one that rings your doorbell, but no electronics; that is to say, the calculations were not done with vacuum tubes or transistors. For more on the IBM 407 see: www.piercefuller.com/library/ibm407.html. More mechanical tabulating machines are listed at www.columbia.edu/cu/computinghistory/tabulator.html.

During World War II a one-of-a-kind electronic calculator was made. The book "Proving Ground" by Kathy Kleiman describes the Eniac calculator, designed during the war and finished shortly after. The book "ENIAC in Action" by T. Haigh, M. Priestley and C. Rope is a more thorough account. The Eniac project was started in 1943 (ENIAC in Action, p. 31). The Eniac was designed by Eckert and Mauchly. It had multiple adders, multipliers and other math functions implemented with vacuum tube electronics. It was designed to calculate numbers to go in handbooks with tables of numbers. An example would be tables to aim artillery under different conditions, all computed rapidly by the same equation. It was not programmable, but it was configurable. It was configured by flipping switches and by plugging cables. It could be configured to represent a mathematical equation to calculate a number. It stored data between successive calculations as punched cards. It had to be configured differently for each different equation. With enough re-configuration it could be used to calculate anything. As a result it has been described as a general purpose computer, but certainly not in the modern sense of the term. Eniac was put to use for an important calculation in Dec 1945, before it was completely finished (ENIAC in Action, p. 81). The machine was finished in Feb 1946.

Eckert, Mauchly and von Neumann discussed the possibility of a programmable computer. Von Neumann wrote a description of how it would work. The program would be stored in a memory. Each step in the program would control special electronics which would route data from where it was at the present step in the program to where it was needed for the next step in the program. The program would obviate the need to physically configure the machine for different calculations. The memory would allow a single adder and a single multiplier to be used many times to perform a complex calculation. Intermediate results would be stored in memory between arithmetic operations. Instead of the machine being laboriously configured to compute one equation like the Eniac, many different equations could be computed sequentially in the same program. Later they disputed whose idea it was.

The Edvac computer was designed to be a programmable computer, but was not finished until 1949 (ENIAC in Action, p. 225). The Eniac calculator was converted to be a programmable computer. The conversion was completed in 1948 (ENIAC in Action, p. 164). Eniac was finally a real computer.

Commercial programmable computers were built after the war. Early computers had a central processing unit (CPU), memory, a hard drive, magnetic tape drives and a machine to read either paper tape or paper cards with holes punched in them. The hard drives were magnetic drums; later hard drives would use spinning disks. Main memory was typically a Williams tube, which stored bits as spots of charge on the face of a cathode ray tube, or a mercury delay line, which stored bits as acoustic pulses circulating in a tube of mercury. While it would have been possible to provide a terminal for the user to slowly type programs or data into the computer, it was more efficient for the computer to rapidly read paper tape or paper cards with holes in them. The early computers did not have operating systems, editors or compilers. They had assemblers that could translate mnemonic words to machine words.

The Univac 1 electronic computer became commercially available in 1951. It was a vacuum tube computer with mercury delay line memory. The memory was only 1000 words of 12 characters each. This infinitesimal storage would be considered useless today. It had a drum hard drive and magnetic tape drives, and it read paper tape.

IBM did not want to make computers. IBM was forced into it by its customers. The MetLife insurance company informed IBM that three floors of their building were needed to store customer data on IBM cards. Univac's magnetic tapes could store the same data in much less space. So IBM developed the 701 computer with tape drives just like the Univac 1. The first IBM 701 delivered to a customer outside of IBM was delivered in 1953. Its Williams tube memory was 2048 words of 36 bits each, which was better than the Univac 1. In terms of the 32 bit words that became standard with binary arithmetic later, this was equivalent to 8K bytes. Its drum hard drive stored 80K decimal digits, roughly 80K bytes. Its tape drive could store 2 megabytes on one reel of tape. It read paper cards.

Some of these early vacuum tube computers were very large. They would fill a large room. The Univac 1103 came out in 1953.

[photo of a Univac 1103 installation]

The picture shows 2 computers in the same room. Univac recommended a room at least 30 feet by 60 feet to hold one computer. This monster had 4K bytes of Williams tube memory, a 64K byte drum hard drive, tape drives and a paper tape reader. Heavy duty air conditioning was required to remove the heat of all of those glowing vacuum tubes. For more information see https://vipclubmn.org/processors.html.

It was not until 1955 that magnetic core memory became available, which reduced the physical size or expanded the memory capacity of vacuum tube computers. The Univac 1103A had 48K byte core memory in the same sized cabinet instead of the 4K Williams tube memory of the 1103.

In 1957 IBM released the first Fortran compiler. Compilers made it easier to program computers. With a compiler a computer could have fewer expensive arithmetic units and still be easy to program.

Vacuum tube computers were slow and very unreliable. If a vacuum tube burned out, considerable time was needed to find and replace it. Reliability was essential in the SAGE air defense early warning system, so two identical computers were used at each location; if one was down for repair, the other would still be running.

Transistors did not burn out. When transistors became good enough to use in computers, the industry switched from vacuum tubes to transistors. In 1959 IBM made a transistor version of the vacuum tube IBM 709 and called it the IBM 7090. It still had the old fashioned 36 bit word. It had the equivalent of 128K bytes of core memory. This kind of computer was shown in the movie "Hidden Figures". The programmer sat down at an IBM 029 keypunch machine, with an ordinary typewriter keyboard, that was separate from the computer, and punched up a deck of paper cards containing his program. It was not necessary to press the keys hard; the machine was electric. When each key was pressed, a group of solenoids hidden inside the machine simultaneously punched the pattern of small rectangular holes for that key with a loud "thunk". After each key was pressed the card would move to the left in the machine, ready for the next key. The pattern of holes in each of the 80 columns of holes across the length of the card represented a character. The cards were 0.0075 inches thick. The pattern of holes in each 7.375 inch wide by 3.25 inch high card represented a line of text. Each hole was a rectangle 0.100 by 0.053 inches. The cards had rounded corners except the top right corner, which was cut off straight to identify the orientation of the card.

The programmer would then put the deck of cards into a card reader machine that was connected to the computer by a cable. He then pressed a single button on the computer that told it to activate the card reader to read the deck of cards rapidly, one card at a time. This was the same technology used in 1890. The computer had refrigerator sized tape machines with two 10.5 inch diameter reels of magnetic tape on each machine. Initially one reel would be full, the other empty. The machines served the same purpose as the USB ports on your laptop today. A reel of tape served the same function as a memory stick you would insert into your USB port. IBM claimed the 7090 was six times faster than the earlier tube version. And of course it was very reliable. In 2022 money the IBM 7090 cost 23 million dollars. It was not nearly as good a computer as the cheapest laptop that you can buy today. Transistor computers sold much better than vacuum tube computers had before them.

Some early computers are shown at the Computer History Museum, 1401 N. Shoreline Blvd., Mountain View, California. The most thorough book I have seen on computer history is "The Dream Machine" by M. Mitchell Waldrop. It is not about commercial developments so much as the research efforts that developed concepts and prototypes that were then commercialized by others. The print is so small that I need a large magnifying glass to read it, but it is an excellent history book. It covers computers, the internet and laser printers. It does not cover commercial developments very well, but it has a large, impressive bibliography which should lead to lots of information on them.

The IBM 7090 had a hardware adder in it. Here we mean electronic hardware, not mechanical hardware. The first computer I used in 1963 was the much cheaper IBM 1620 model 1. It did not have a hardware adder in it. It had to do arithmetic entirely in software, with no help from arithmetic hardware. This would be the equivalent of you doing arithmetic with only pencil and paper, without a pocket calculator. It had a clock speed of only 50kHz.

The IBM 1620 model 1 was very primitive. Today if a programmer wishes to write, compile and execute a program he would sit at a screen and keyboard. He would use an editor program already on the hard disk to write the program. His program would be on the hard disk. He would invoke a compiler program on the disk to compile his program and produce an executable version of it on the disk. He would then execute it, all using a screen and keyboard. The IBM 1620 model 1 that I started on had a keyboard for use only by the IBM repairman. The programmer had no keyboard, screen, hard drive or operating system at his disposal. The lack of a hard drive on the one I used was presumably to save money. The programmer used the keyboard on a separate IBM 029 keypunch machine to create a deck of cards that was his Fortran program. He would put that with a deck of cards provided by the computer center that was the first half of the Fortran compiler. He would put the decks in the card reader and punch that was attached to the computer. He would push a button on the computer. The deck of cards would be read. A new deck would be punched out that was his program half compiled. He would put that with a deck that was the second half of the Fortran compiler. When that deck was read, a final deck of cards would be punched out that was either his errors or his results. Unlike the cards punched by the keypunch machine, cards punched by the computer's machine did not have the text printed at the top of the card. He would put the deck punched by the computer in a massive IBM 407 accounting machine that could print out the contents of the deck on fanfold paper. The IBM 1620 model 1 was replaced by a model 2 that had a hard drive in a separate cabinet behind it. It is shown here:

[photo of an IBM 1620 model 2]

The panel had many lights on it which showed the bits stored in registers that you could select. The display panel is above the desktop. Below the desktop, down to the floor, were the CPU and about a third of the core memory. The rest of the core memory was in a separate cabinet. The card reader and punch was yet another separate cabinet.

All large computers were installed in a room with raised flooring to hide the cables, and air conditioning to remove the heat. It should be noted that before personal computers the slang term for a computer was a "number cruncher".

Even though the IBM 1620 did not have any arithmetic hardware, most historic computers at least had an adder. The Univac 1103 could do addition, subtraction, multiplication and division in hardware. The most efficient form of binary arithmetic proved to be two's complement arithmetic. If one of two numbers was first negated, an adder could also subtract. There were multistep procedures to use an adder to accomplish multiplication and division; these procedures would be implemented by the compiler, not the programmer. More complex and expensive computers had a multiplier to speed up multiplication and a reciprocal unit to facilitate division. The compiler knew how to combine this arithmetic capability to compute transcendental functions, etc. All this is explained in the book "Computer Arithmetic Algorithms" by Israel Koren.
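
To make the idea concrete, here is a small sketch written for this page in the same Oberon-2 used later on this page; it is not taken from any historic compiler or from Koren's book, and the module and procedure names are my own. It subtracts by adding the two's complement of the second operand, and multiplies by a shift and add procedure, using nothing but an adder.

MODULE addertricks;
(* Sketch for this page: how a machine with only an adder can
subtract and multiply. 16 bit words are simulated with LONGINT
values kept in the range 0..65535. *)
IMPORT Out;
CONST bits=16; modulus=65536; (* 2 to the 16th power *)
(* add two 16 bit words, discarding any carry out of the top bit *)
PROCEDURE Add(a,b:LONGINT):LONGINT;
BEGIN
RETURN (a+b) MOD modulus
END Add;
(* two's complement negation: invert all bits, then add 1.
Inverting all bits of a is the same as (modulus-1)-a. *)
PROCEDURE Neg(a:LONGINT):LONGINT;
BEGIN
RETURN Add(modulus-1-a,1)
END Neg;
(* subtraction is addition of the negated second operand *)
PROCEDURE Sub(a,b:LONGINT):LONGINT;
BEGIN
RETURN Add(a,Neg(b))
END Sub;
(* shift and add multiplication: examine the multiplier one bit at
a time; whenever the bit is 1, add the shifted multiplicand *)
PROCEDURE Mul(a,b:LONGINT):LONGINT;
VAR prod,i:LONGINT;
BEGIN
prod:=0;
FOR i:=1 TO bits DO
  IF ODD(b) THEN prod:=Add(prod,a) END;
  a:=Add(a,a);   (* shift multiplicand left one bit *)
  b:=b DIV 2     (* shift multiplier right one bit *)
  END;
RETURN prod
END Mul;
BEGIN
Out.String('100-37 = ');Out.LongInt(Sub(100,37),6);Out.Ln;
Out.String('123*45 = ');Out.LongInt(Mul(123,45),6);Out.Ln;
END addertricks.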

It should be emphasized that expensive computers were not purchased for playing games, social media or even word processing. They were purchased for financial accounting, engineering design, or scientific work. The engineering and scientific work was mostly new programs written for each project. In the early days programming was done in "assembly language", which was just words that represented the detailed actions of the computer hardware. Assembly programmers might have had no college, or might have studied music or literature in college. They were trained in the details of how their computer worked. The assembly language was different for each kind of computer. A scientist or engineer would tell the programmer in great detail each mathematical or logical step of the program. The programmer did not understand how it all worked, but did know how to program the computer to do each mathematical or logical step, which might take several computer instructions. Fortran was introduced in 1957, the first of the higher level computer languages. It permitted programs to be expressed in mathematical or logical steps, not in computer instructions. A program called a compiler would translate the Fortran program to a program in computer instructions. Assembly language programming was no longer necessary. Fortran was the same for all computers. Engineers were more likely to write the program themselves using Fortran or some other programming language.

The COBOL programming language was developed for business applications. But since so few business people can program, COBOL programs written in the 1960s are still in use for routine accounting.

In the early days the person who wrote the program would put the deck of cards in the card reader. He would activate the computer to read the cards and execute the program. As more people wanted to use the computer, the user would instead give his deck of cards to the computer staff, people whose only job was to run everybody's programs 24 hours a day. The user would come back the next day and get his deck and a printout of his results from the computer staff. To speed up the turnaround, starting about 1965 computers gradually switched to time sharing operating systems. Users would sit at remote terminals and everybody would use the computer at the same time. Engineers and scientists writing programs no longer needed punched cards. If too many people were using the computer, response would be slow. At first the terminals were mechanical teletype terminals. Later electronic terminals with a keyboard and a text only screen were used. Instead of getting printouts from the very expensive high speed printer in the computer room, printouts would come from slow cheap dot matrix printers in the different terminal rooms. In the mid 1970's I wrote Fortran programs on a Univac 1108 using these terminals. Line drawing plotters for graphics were expensive and only one would be in the computer room.

By 1970 most large computers had 32 bit words. This permitted enough decimal digits in the calculation for satisfactory accuracy for most applications. The number of decimal digits was far fewer than the number of binary bits, since each decimal digit of precision requires about 3.3 binary bits. The most expensive computers had 64 bit words. Typical minicomputers had 16 bit words, and had to do extra work to produce 32 bit results.

Large computers got bigger and better. About 1975 the Cray-1 computer came out. It set records on the Linpack benchmark program which was representative of the most demanding scientific programs, but not of typical scientific programs. The cheapest laptop today would be very much faster.

About 1980 I copied a bunch of tape reels with a Univac 1110 computer.

The invention of semiconductor integrated circuits resulted in the replacement of discrete transistors and magnetic core memory with integrated circuits. Increase in the number of transistors in each integrated circuit eventually made personal computers possible.

The Apple-2 was the first personal computer in a hardware configuration that could potentially do useful work. It had 8 bit words and had to do a lot of extra work to produce 32 bit results. It was available in 1977. It came with the BASIC programming language. BASIC made it possible to write simple programs, but nothing useful. In 1979 two pieces of software became available for it that permitted useful work. The Visicalc spreadsheet program made it useful for business work. It was an application program, not a programming language; business people could not write programs. The UCSD operating system, editor and Pascal compiler were sold as "Apple Pascal" software. It came with a memory extender card to extend the memory from 48K 8 bit bytes to 64K bytes. Notice that this was an enormous memory compared with the old Univac 1103. The UCSD software made it possible to write new programs for scientific and engineering work. Apple Pascal was very good software. It seems almost impossible that such capable software could run with only 64K bytes of memory and two 140K byte 5.25 inch floppy disk drives. If you wrote a large program there might not be enough space in memory for the arrays needed. The arrays would have to be on floppy disk. Only about one in 20 engineers can write engineering programs. Visicalc sold very much better than the UCSD software. The Apple-2 did not have a graphical user interface, but with UCSD Pascal it had "turtle graphics", which allowed drawing straight lines on the screen. With enough short straight lines curves could be drawn. This was low resolution graphics. For high resolution graphics an external plotter was required. With Apple Pascal I wrote two port and nodal circuit simulations. I wrote a printed circuit autorouter for two sided boards with TTL DIPs on them, and ceramic capacitors between power and ground. I routed artwork to make boards with up to 30 chips on them. It was a channel router with bus routing added. The Apple-2 was connected via an RS232 interface to an IBM Selectric typewriter, modified with a third party adapter kit, for text output, and to an HP7225 flat bed ink plotter for graphics output. That was serious computing for someone weaned on an IBM 1620.
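
As a rough illustration of drawing a curve with short straight lines, here is a sketch written for this page in Oberon-2 rather than in Apple's turtle graphics; the module name and constants are my own. It computes the endpoints of the chords that approximate a circle and simply prints them; a plotter or a turtle graphics routine would draw a line to each point in turn.

MODULE circlesketch;
(* Sketch for this page: a curve drawn as short straight lines.
Computes the n+1 endpoints of the n chords that approximate a
circle of radius r, and prints the x,y coordinates of each. *)
IMPORT Out, rm:=RealMath;
CONST n=16; r=100.0; pi=3.14159265;
VAR i:LONGINT; a,x,y:REAL;
BEGIN
FOR i:=0 TO n DO
  a:=2.0*pi*i/n;   (* angle of the i-th endpoint *)
  x:=r*rm.cos(a);
  y:=r*rm.sin(a);
  Out.RealEng(x,12,4);Out.RealEng(y,12,4);Out.Ln;
  END;
END circlesketch.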

Some of the programs that I have written invoke publicly available graphics routines that were not available in Apple Pascal. Except for those, all of the programs I have written on other computers before and after the Apple-2 could have been written on the Apple-2, except one. My digital filter program found elsewhere at this website uses an advanced feature not offered in Apple Pascal, namely a procedure variable, a variable to which different procedures can be assigned. It is the variable "proc" in procedure "pltfltr" in module "cmptst".

It is important to point out that memory used to be expensive. Memories were not large. Today, memory is practically free. A tiny program compiled in Apple Pascal on the Apple-2 would have a tiny executable. The same program compiled on today's computer would have an executable many times the size of the Apple's entire memory. Compilers today include in the executable a large library, most of which has no relevance to the program.

After personal computers became popular cheap laser printers became available which greatly reduced the cost of printing and plotting.

Today engineers and scientists are much less likely to need to write a new program for a new project. Different types of applications programs are available for different categories of projects. So all that is necessary is to configure the application program for the specifics of your particular project. In my opinion today's engineers are spoiled by the lack of necessity to program, and do not write a new program when that would be the best way to proceed. Similarly, there are a variety of applications programs for business applications.

In 1979 I had an Apple-2 computer with UCSD Pascal software for it. I tested it for speed using the Whetstone benchmark program.

Programs ask computers to do different kinds of tasks. To compare the speeds of computers you need to include the different kinds of tasks in proportion to the way they occur in typical programs. Scientific programs needed speed more than financial programs, so the attempts to compare computer speeds concentrated on scientific programs. IBM developed the "Gibson mix" of computer tasks to compare computers, but it was difficult to use. The Whetstone benchmark program was developed as an easier way to compare computers. The speed numbers reported by the Gibson mix and the Whetstone were similar, but not identical. The speed number I measured for the Apple-2 was about the same as the speed of the IBM 1620 mod 1 that I used in 1963. The mod 2 was twice as fast as the mod 1. For historic results see http://www.roylongbottom.org.uk/cpumix.htm.

In 1976 the Algol version of the Whetstone benchmark program was published in The Computer Journal, vol. 19, no. 1, p. 43. I translated it to Pascal to test the Apple-2. More recently I translated it to Oberon-2 to test my small Dell 3190 laptop.

It is interesting to see what speed difference we would expect between the Apple-2 and the Dell 3190. The Apple-2 had a clock speed of 1 MHz. It had a hardware adder. But it was an 8 bit computer; it could only compute 8 bits at a time. The Whetstone benchmark used 32 bit numbers, so the Apple-2 had to do four 8-bit adds to achieve one 32-bit add.
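
Here is a sketch of that extra work, written for this page rather than taken from any actual Apple-2 code, in the Oberon-2 used later on this page: a 32 bit sum formed from four 8 bit additions, each passing its carry into the next byte.

MODULE multibyteadd;
(* Sketch for this page: a 32 bit add performed as four 8 bit adds,
the way an 8 bit processor must do it. Each 32 bit number is held
as four bytes, least significant byte first, each byte in 0..255. *)
IMPORT Out;
TYPE word32=ARRAY 4 OF LONGINT;
VAR a,b,s:word32; i,v:LONGINT;
(* split a nonnegative LONGINT into four bytes *)
PROCEDURE Split(x:LONGINT; VAR w:word32);
VAR i:LONGINT;
BEGIN
FOR i:=0 TO 3 DO w[i]:=x MOD 256; x:=x DIV 256 END
END Split;
(* add the four byte pairs, low byte first, propagating the carry *)
PROCEDURE Add32(VAR a,b,s:word32);
VAR i,t,carry:LONGINT;
BEGIN
carry:=0;
FOR i:=0 TO 3 DO
  t:=a[i]+b[i]+carry;   (* one 8 bit add *)
  s[i]:=t MOD 256;
  carry:=t DIV 256      (* 0 or 1, carried into the next byte *)
  END
END Add32;
BEGIN
Split(123456,a); Split(654321,b);
Add32(a,b,s);
(* reassemble the four bytes of the sum and print it *)
v:=0;
FOR i:=3 TO 0 BY -1 DO v:=v*256+s[i] END;
Out.LongInt(v,12);Out.Ln;   (* prints 777777 *)
END multibyteadd.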

I bought a Dell 3190 laptop that had a Pentium Silver N5000 with a clock speed of 2.4 GHz. This was before they switched to the faster memory and clock speed that they use today. It had a hardware adder and a hardware multiplier in it. I ran the Ubuntu Linux operating system on it and the oo2c Oberon-2 compiler.

So now for our speed ratio estimate. The clock speed ratio is 2400. The Dell is a 64 bit computer but we were only using 32 bit numbers. The ratio of 32 bits for the Dell to 8 bits for the Apple is 4. 2400 times 4 is 9600.

The Apple-2 Pascal used a compiler, but it did not compile to machine code; it compiled to software "p-code", which was then interpreted. This defeated any speed advantage that a compiler would normally be expected to provide. I have seen two websites where the execution speed of interpreted code was compared to compiled code. They both concluded that the ratio was very roughly on the order of 100. 9600 times 100 is 960 thousand, or 0.96 million. This then is a very rough estimate of the speed ratio we would expect.

To use the Whetstone benchmark you run the program and enter an integer value for the variable "i" when prompted to. The time from when you enter the number until the program completes is used to compute the speed of the computer in terms of "Whetstone instructions". If you enter 10, you are asking the program to perform 1 million Whetstone instructions. The number of instructions and the execution time are proportional to the number you enter.

With the Apple-2 my old notes indicate that I entered 10 and the result was 1.9 KWi/s. This means it took about 8.7 minutes for the Apple-2 to complete the program. 1.9 KWi/s is 1.9E3 Wi/s. On the Dell I entered 100000 and it took 10 seconds. If i=10 is 1E6 Whetstone instructions, i=100000 is 1E10 Whetstone instructions. Divided by 10 seconds, that is 1E9 Wi/s. 1E9/1.9E3 is 526315, or 0.526 million. The Dell is 0.526 million times the speed of the Apple-2. Our estimate is as close as we could reasonably expect to the result.
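
For anyone who wants to repeat the arithmetic, here is a small sketch written for this page (the module and procedure names are my own). It turns the entered value of i and the measured run time into a speed in Whetstone instructions per second for each machine, using the times given above, and prints the ratio.

MODULE whetspeed;
(* Sketch for this page: turn the value of i entered into the
Whetstone benchmark and the measured run time into a speed in
Whetstone instructions per second. Since i=10 is one million
Whetstone instructions, the instruction count is i times 1.0E5. *)
IMPORT Out;
VAR apple,dell:REAL;
PROCEDURE Speed(i:LONGINT; seconds:REAL):REAL;
BEGIN
RETURN i*1.0E5/seconds
END Speed;
BEGIN
apple:=Speed(10,526.0);      (* i=10 took about 8.7 minutes, 526 s *)
dell:=Speed(100000,10.0);    (* i=100000 took 10 s *)
Out.String('apple Wi/s ');Out.RealEng(apple,12,4);Out.Ln;
Out.String('dell Wi/s  ');Out.RealEng(dell,12,4);Out.Ln;
Out.String('dell/apple ');Out.RealEng(dell/apple,12,4);Out.Ln;
END whetspeed.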

The source code of the Oberon-2 version of the Whetstone benchmark is given below:

MODULE whetstone;
(*Input number i=10 is 1 million whetstone instructions.
Number of instructions proportional to i. Adjust i for
convenient timing. Translated from the Algol version published
in The Computer Journal, vol 19 no. 1 p.43, 1976,
with the program author's correction of a misprint
in module 6. This program was devised to have the same
proportion of tasks as an "average" scientific program.
The original version of the Whetstone benchmark had more
sections. As analysis of a large number of scientific
programs progressed, it was determined that not all of
the sections were necessary and some were removed. But the
label numbers of the remaining sections were not changed.
*)
IMPORT In, Out, rm:=RealMath;
TYPE e1array=ARRAY 5 OF REAL;
VAR x1,x2,x3,x4,x,y,z,t,t1,t2:REAL;
    e1:e1array;
    i,j,k,l,n1,n2,n3,n4,n5,n6,n7,n8,n9,n10,n11:LONGINT;
PROCEDURE pa(VAR e:e1array);
VAR j:LONGINT;
BEGIN
j:=0;
REPEAT
e[1]:=(e[1]+e[2]+e[3]-e[4])*t;
e[2]:=(e[1]+e[2]-e[3]+e[4])*t;
e[3]:=(e[1]-e[2]+e[3]+e[4])*t;
e[4]:=(-e[1]+e[2]+e[3]+e[4])/t2;
j:=j+1;
UNTIL j=6;
END pa;
PROCEDURE p0;
BEGIN
e1[j]:=e1[k];
e1[k]:=e1[l];
e1[l]:=e1[j]
END p0;
PROCEDURE p3(x,y:REAL;VAR z:REAL);
BEGIN
x:=t*(x+y);
y:=t*(x+y);
z:=(x+y)/t2
END p3;
PROCEDURE pout(n,j,k:LONGINT;x1,x2,x3,x4:REAL);
BEGIN
Out.LongInt(n,12);Out.LongInt(j,12);Out.LongInt(k,12);
Out.Ln;
Out.RealEng(x1,12,4);
Out.RealEng(x2,12,4);
Out.RealEng(x3,12,4);
Out.RealEng(x4,12,4);Out.Ln;
END pout;

PROCEDURE firsthalf;
BEGIN
(* initialize constants *)
t:=0.499975;
t1:=0.50025;
t2:=2.0;
(* read value of i, controlling total weight: if i=10 the
  total weight is one million Whetstone instructions *)
Out.String('i=10 is one million whetstone instructions');Out.Ln;
Out.String('enter i');Out.Ln;
In.LongInt(i);
n1:=0;
n2:=12*i;
n3:=14*i;
n4:=345*i;
n5:=0;
n6:=210*i;
n7:=32*i;
n8:=899*i;
n9:=616*i;
n10:=0;
n11:=93*i;
(* module 1: simple identifiers *)
x1:=1.0;
x2:=-1.0;x3:=-1.0;x4:=-1.0;
FOR i:= 1 TO n1 DO
  x1:=(x1+x2+x3-x4)*t;
  x2:=(x1+x2-x3+x4)*t;
  x3:=(x1-x2+x3+x4)*t;
  x4:=(-x1+x2+x3+x4)*t;
  END (* module 1 *);
pout(n1,n1,n1,x1,x2,x3,x4);
(* module 2: array elements *)
e1[1]:=1.0;
e1[2]:=-1.0;e1[3]:=-1.0;e1[4]:=-1.0;
FOR i:= 1 TO n2 DO
  e1[1]:=(e1[1]+e1[2]+e1[3]-e1[4])*t;
  e1[2]:=(e1[1]+e1[2]-e1[3]+e1[4])*t;
  e1[3]:=(e1[1]-e1[2]+e1[3]+e1[4])*t;
  e1[4]:=(-e1[1]+e1[2]+e1[3]+e1[4])*t;
  END (* module 2 *);
pout(n2,n3,n2,e1[1],e1[2],e1[3],e1[4]);
(* module 3: array as a parameter *)
FOR i:= 1 TO n3 DO
pa(e1);END;
pout(n3,n2,n2,e1[1],e1[2],e1[3],e1[4]);
(* module 4: conditional jumps *)
j:=1;
FOR i:= 1 TO n4 DO
  IF j = 1 THEN j:=2 ELSE j:=3 END;
  IF j > 2 THEN j:=0 ELSE j:=1 END;
  IF j < 1 THEN j:=1 ELSE j:=0 END;
  END (* module 4 *);
pout(n4,j,j,x1,x2,x3,x4);
(* module 5: omitted in original benchmark *)
(* module 6: integer arithmetic *)
j:=1;
k:=2;
l:=3;
FOR i:=1 TO n6 DO
  j:=j*(k-j)*(l-k);
  k:=l*k-(l-j)*k;
  l:=(l-k)*(k+j);
  e1[l-1]:=j+k+l;
  e1[k-1]:=j*k*l;
  END (* module 6 *);
pout(n6,j,k,e1[1],e1[2],e1[3],e1[4]);
END firsthalf;

PROCEDURE secondhalf;
BEGIN
(* module 7: trig functions *);
x:=0.5;y:=0.5;
FOR i:=1 TO n7 DO
  x:=t*rm.arctan(t2*rm.sin(x)*rm.cos(x)/
     (rm.cos(x+y)+rm.cos(x-y)-1.0));
  y:=t*rm.arctan(t2*rm.sin(y)*rm.cos(y)/
     (rm.cos(x+y)+rm.cos(x-y)-1.0));
  END (* module 7 *);
pout(n7,j,k,x,x,y,y);
(* module 8: procedure calls *)
x:=1.0;y:=1.0;z:=1.0;
FOR i:=1 TO n8 DO
  p3(x,y,z);END;
pout(n8,j,k,x,y,z,z);
(* module 9: array references *);
j:=1;
k:=2;
l:=3;
e1[1]:=1.0;
e1[2]:=2.0;
e1[3]:=3.0;
FOR i:= 1 TO n9 DO
 p0;END;
pout(n9,j,k,e1[1],e1[2],e1[3],e1[4]);
(* module 10: integer arithmetic *);
j:=2;
k:=3;
FOR i:=1 TO n10 DO
  j:=j+k;
  k:=j+k;
  j:=k-j;
  k:=k-j-j;
  END (* module 10 *);
pout(n10,j,k,x1,x2,x3,x4);
(* module 11: standard functions *)
x:=0.75;
FOR i:=1 TO n11 DO
  x:=rm.sqrt(rm.exp(rm.ln(x)/t1));END;
pout(n11,j,k,x,x,x,x);
END secondhalf;

BEGIN
firsthalf;
secondhalf;
Out.Char(CHR(7));Out.Ln; (*beep speaker*)
END whetstone.
