Register Machine Model of Computation


This page is intended to present an overview of the theoretical Register Machine model of computation, which is the conceptual basis for most real-world CPU designs. While it is assumed that anyone coming here will be at least passingly familiar with the concepts, the page is meant to review the topic and clear up any misconceptions newcomers may have.

The Register Machine Model is a model of computation used to describe the computability of different functions. It is a sub-type of a class of computation models called Random Access Machines (RAMs), which also includes the related Stack Machine model. It is one of many highly diverse models, some of which - including many register machines - are Turing-equivalent, that is to say, they can compute the same set of functions as a Universal Turing Machine. The relevance of the Register Machine model is that it is the easiest of all models to approximate in electronic hardware, and in fact the model was developed to describe the first programmable computer systems in mathematical terms, in order to demonstrate that they were Turing-equivalent.

Computational Models and the History of Computing

A model of computation is a mathematical formalism that describes a mechanical process ('mechanical' in the sense of deterministic and rules-based, not necessarily mechanized or electronic) for performing computations. The idea of such formalisms arose in the early 20th century, when several mathematicians - most notably Alfred North Whitehead, David Hilbert, and Bertrand Russell - began to speculate on whether it was possible to develop a mechanical system for deriving mathematical proofs.

Early Models: Recursive Functions, λ-Calculus, and the Universal Turing Machine

The earliest computational models, both developed in the early 1920s, were recursive function theory and recursive set theory, which were used to develop rules for performing computations and proving theorems.

The original goal of automatic theorem proving was dealt a critical blow in 1931, when Kurt Gödel demonstrated that any such formalism of mathematical logic would have blind spots: either undecidable propositions (true theorems which could not be proven within the system), or invalid propositions (theorems which could be 'proven' in the system but which were not actually true, or which contradicted other theorems). While this was a major setback, the proof Gödel used demonstrated that recursive functions were a useful tool for exploring the new area of computability.

The next significant model to arise was the lambda calculus, created by Alonzo Church in 1935. This was a simpler model of functions than the existing recursive function theory, but was demonstrated to have the same computational power as recursive function theory. Church postulated that these two models were, in fact, as complete as any sequential model could ever be.

The following year, Alan Turing (who would soon become a graduate student of Church's) developed yet another model, the Turing Machine. He used it to demonstrate that a well-known open goal in mathematics, finding a general method for deciding whether an arbitrary computational process would run to completion, was unachievable, by showing that the decision process itself would necessarily be undecidable in the general case.

The Turing Machine, in its basic form as described in the paper "On Computable Numbers, with an Application to the Entscheidungsproblem", is thought of as a machine consisting of a tape read/write head and an infinitely long paper tape divided into cells containing data - one can think of the cells as boxes drawn on the tape to mark where a particular datum is. For each operation, the Turing Machine would position itself over a cell, read the datum in the cell, and, based on that datum and its current state, decide whether to write a datum to the cell, move to an adjacent cell, or stop running and eject the tape in its current state as its final output. In his paper, Turing initially described each machine as having a fixed set of responses to the data, stored separately in an action table, and a fixed set of states which the data could put it into. He then proceeded to show that the action table of any machine could itself be encoded as data on a tape and interpreted by another machine.
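As a rough illustration of the machine just described, the following sketch steps a head over a tape of cells under the control of a (state, symbol) -> (write, move, next state) action table. The 'flipper' table and its encoding are hypothetical, invented only to show the read/write/move/halt cycle; they are not taken from Turing's paper.

```python
# A minimal sketch of the machine described above: a tape of cells, a head,
# a current state, and a fixed action table mapping
# (state, symbol) -> (symbol to write, direction to move, next state).
from collections import defaultdict

def run_turing_machine(action_table, tape, state="start", halt_state="halt"):
    cells = defaultdict(lambda: "_", enumerate(tape))   # "_" marks a blank cell
    head = 0
    while state != halt_state:
        symbol = cells[head]                             # read the current cell
        write, move, state = action_table[(state, symbol)]
        cells[head] = write                              # write back to the cell
        head += 1 if move == "R" else -1                 # move one cell right/left
    return "".join(cells[i] for i in sorted(cells))      # the final tape is the output

# Hypothetical action table: invert binary digits, halting at the first blank.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flipper, "10110"))  # prints "01001_"
```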

By showing that a machine's action table could itself be encoded as data on a tape, Turing demonstrated that it was possible to design a variant of the Turing Machine that could simulate any other possible Turing Machine; this Universal Turing Machine (UTM) demonstrated the same kind of generality as recursive function theory and the λ-calculus, and he expanded on Church's premise that these three models represented the absolute limit of sequential computation, a claim which became known as the Church-Turing Thesis. While it has not been proven, it is generally accepted as likely to be correct, with the result that Turing-completeness became the benchmark for the computing power of any mechanical model.

It has to be understood that these models were 'mechanical' only in the sense of being built on a set of rules; they were not very useful for developing a physical computing device. Of them all, the UTM was the closest, as it was described in terms of an imaginary mechanism consisting of a reading and writing head and an infinitely long paper tape. While it was not a practical design, it demonstrated that the mechanical calculators of the time had a solid theoretical basis, and more importantly, re-introduced the concept (originally proposed by Charles Babbage) of a programmable computation device.

Early Electromechanical and Electronic Computers

Turing would go on to do significant work in the development of one of the first programmable electronic computers, the Ultra programme's Colossus decryption machine, originally conceived by Tommy Flowers in early 1943. During this period, from the late 1930s to the early 1950s, several groups of designers seem to have developed similar ideas independently of each other: Konrad Zuse in Germany, John Atanasoff and Clifford Berry at Iowa State University, Presper Eckert and John Mauchly at the University of Pennsylvania, and Howard Aiken at Harvard all rediscovered the principle of the stored-program computer between 1941 and 1943, a concept later codified in a white paper by a team led by John von Neumann. Because of this, designs in which the program and data are stored in a shared memory became known as the von Neumann architecture, while those which have independent memories for program and data, such as the Zuse Z4 and the Harvard Mark I, became known as the Harvard architecture.

Linear-Bounded Automata and the Limits of Physical Computation

After the war, the question of how electronic computing related to the theoretical models became important, as it wasn't clear whether real machines were equivalent to the UTM, or whether they possessed limits that the purely imaginary models did not. The answer came down to an important property of the Turing Machine, shared by all of the more abstract equivalents: the availability of at least one infinite resource, specifically the tape memory in the case of the Universal Turing Machine.

This led to the creation of a new model which could describe these limitations and allow researchers to determine what they were. This new model, the Linear-Bounded Automaton, is a variant of the Turing Machine with a finite available memory (it is usually described in terms of the read/write head having access to a finite, contiguous section of an infinite tape, hence the term 'linear bounded'). This allowed the memory requirements of different classes of computations to be studied, and similar models were developed for studying their time requirements.

Finite Automata and the Chomsky Hierarchy

Meanwhile, work on understanding the more abstract forms continued, and a wide variety of Turing-complete computation models were formulated, along with many provably less powerful models. While the Church-Turing thesis served as a hypothetical limit for these, it remained a conjecture, and it wasn't entirely clear how the different models related to each other - especially those which were not Turing-complete.

In 1955, linguist Noam Chomsky worked out a hierarchy for categorizing formal languages in terms of both the grammars capable of generating sentences in them, and the mathematical models capable of 'recognizing' grammatically-correct sentences mechanically. He demonstrated that not only did the same Chomsky Hierarchy apply both to how sentences can be generated mechanically and to how they can be recognized mechanically, but also that the levels of the hierarchy corresponded to certain degrees of computability - specifically, that his Type-0 grammars (also called unrestricted grammars) generate the recursively enumerable languages, and hence require a Turing-complete mechanism to recognize or generate.
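To make the lowest level of that hierarchy concrete, the sketch below (a hypothetical example, not taken from Chomsky's work) implements a deterministic finite automaton, the recognizer class for Type-3 (regular) languages: it needs only a fixed set of states, and no auxiliary memory, to decide membership.

```python
# A deterministic finite automaton recognizing a regular (Type-3) language:
# binary strings containing an even number of 1s. Only a finite set of
# states is needed; there is no tape, stack, or other unbounded memory.
def accepts_even_ones(string):
    transitions = {
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    }
    state = "even"                        # the start state is also the accepting state
    for symbol in string:
        state = transitions[(state, symbol)]
    return state == "even"

print(accepts_even_ones("1001"))  # True  (two 1s)
print(accepts_even_ones("10"))    # False (one 1)
```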


Counters, Accumulators, Stacks, and Registers

However, the question remained of whether real machines could, in principle, be Turing-complete if given an infinite resource - presumably memory. This led to the creation of a set of abstract models for machines similar to those in actual use, but without the memory limits of an LBA.
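One of the simplest such models is the counter machine: a finite control plus a handful of unbounded registers. The sketch below is an illustration under invented conventions (the instruction encoding and register names are made up for this example, not taken from any particular formulation); it uses just two instruction forms, increment and decrement-or-jump-if-zero, yet with unbounded register contents this style of machine is Turing-complete.

```python
# A counter machine: unbounded registers plus two instruction forms.
#   ("INC", reg)            - add one to a register, then fall through
#   ("DECJZ", reg, target)  - if the register is zero, jump to 'target';
#                             otherwise subtract one and fall through
# The encoding below is invented for illustration only.
def run_counter_machine(program, registers):
    pc = 0                                   # index of the next instruction
    while pc < len(program):
        op = program[pc]
        if op[0] == "INC":
            registers[op[1]] += 1
            pc += 1
        else:                                # "DECJZ"
            if registers[op[1]] == 0:
                pc = op[2]
            else:
                registers[op[1]] -= 1
                pc += 1
    return registers

# Hypothetical program computing a := a + b (and leaving b = 0).
add_program = [
    ("DECJZ", "b", 3),   # 0: when b reaches 0, jump past the end (halt)
    ("INC", "a"),        # 1: otherwise a += 1
    ("DECJZ", "z", 0),   # 2: z is always 0, so this is an unconditional jump to 0
]

print(run_counter_machine(add_program, {"a": 2, "b": 3, "z": 0}))
# {'a': 5, 'b': 0, 'z': 0}
```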


The Random Access Machine model

The Random-Access Machine is an abstract model that closely resembles the structure of a conventional computer. It consists of several parts: a data memory divided into a set of cells, arranged linearly and monotonically (that is to say, the cells follow each other in strictly numeric order, and each cell is the same 'size' in the address space, which doesn't necessarily reflect its capacity to hold data), where each cell can be accessed by a numeric address; an instruction memory, which may or may not be the same as the data memory, but is similarly structured; a set of operations which the machine can perform; a set of instructions, encoded in a form which can be held in memory cells, which direct which operation to perform; and a Program Counter, which holds the address of the cell in the instruction memory encoding the operation to perform next.
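The sketch below pulls those parts together into a toy interpreter. The instruction set (LOAD/ADD/STORE/JGZ/HALT), the single accumulator, and the program shown are invented simplifications for illustration rather than a canonical RAM definition, but they show the data memory, instruction memory, and program counter interacting in the way described above.

```python
# A toy Random-Access Machine: a linearly-addressed data memory, a separate
# instruction memory, a small operation set, and a program counter.
def run_ram(instruction_memory, data_memory):
    pc = 0            # program counter: address of the next instruction
    acc = 0           # a single working register (accumulator), for simplicity
    while True:
        op, *args = instruction_memory[pc]
        pc += 1
        if op == "LOAD":         # acc := data_memory[address]
            acc = data_memory[args[0]]
        elif op == "ADD":        # acc := acc + data_memory[address]
            acc += data_memory[args[0]]
        elif op == "STORE":      # data_memory[address] := acc
            data_memory[args[0]] = acc
        elif op == "JGZ":        # if acc > 0, jump to an instruction address
            if acc > 0:
                pc = args[0]
        elif op == "HALT":
            return data_memory

# Hypothetical program: data_memory[2] := data_memory[0] + data_memory[1]
program = [
    ("LOAD", 0),
    ("ADD", 1),
    ("STORE", 2),
    ("HALT",),
]
print(run_ram(program, {0: 7, 1: 5, 2: 0}))   # {0: 7, 1: 5, 2: 12}
```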

