Automata theory

Automata theory is the study of abstract machines and automata, as well as the computational problems that can be solved using them. It is a branch of theoretical computer science with close connections to mathematical logic.

An automaton (plural: automata) is an abstract, self-propelled computing device that follows a predetermined sequence of operations automatically.

The state diagram described at the end of this section illustrates a finite-state machine, which is a well-known type of automaton.

As the automaton sees a symbol of input, it makes a transition (or jump) to another state, according to its transition function, which takes the previous state and current input symbol as its arguments.
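
As a concrete sketch (not part of the original article), the transition function of the two-state automaton described in the figure caption at the end of this section, which accepts strings containing an even number of 0s, can be written as a small lookup table plus a loop. The state names S1 and S2, the alphabet {0, 1}, and the function name accepts are illustrative assumptions.

    # Transition table for a deterministic finite automaton (DFA) with two
    # states: S1 (even number of 0s seen so far, accepting) and S2 (odd number).
    # Each entry maps (previous state, current input symbol) to the next state.
    TRANSITIONS = {
        ("S1", "0"): "S2",
        ("S1", "1"): "S1",
        ("S2", "0"): "S1",
        ("S2", "1"): "S2",
    }

    def accepts(word):
        """Return True if word (a string over {'0', '1'}) contains an even number of 0s."""
        state = "S1"                              # initial state
        for symbol in word:
            state = TRANSITIONS[(state, symbol)]  # apply the transition function
        return state == "S1"                      # S1 is the accepting state

    print(accepts("1001"))  # True: two 0s (even)
    print(accepts("10"))    # False: one 0 (odd)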

In this context, automata are used as finite representations of formal languages that may be infinite.

Automata play a major role in the theory of computation, compiler construction, artificial intelligence, parsing and formal verification.

1956 saw the publication of Automata Studies, which collected work by scientists including Claude Shannon, W. Ross Ashby, John von Neumann, Marvin Minsky, Edward F. Moore, and Stephen Cole Kleene.[4]

With the publication of this volume, "automata theory emerged as a relatively autonomous discipline".[6]

In the same year, Noam Chomsky described the Chomsky hierarchy, a correspondence between automata and formal grammars,[7] and Ross Ashby published An Introduction to Cybernetics, an accessible textbook explaining automata and information using basic set theory.
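
In its usual modern form (stated here for reference, not quoted from this article), that correspondence pairs each level of the grammar hierarchy with a machine model:

$$\begin{aligned}
\text{regular grammars} &\leftrightarrow \text{finite automata},\\
\text{context-free grammars} &\leftrightarrow \text{pushdown automata},\\
\text{context-sensitive grammars} &\leftrightarrow \text{linear bounded automata},\\
\text{unrestricted grammars} &\leftrightarrow \text{Turing machines}.
\end{aligned}$$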

The study of linear bounded automata led to the Myhill–Nerode theorem,[8] which gives a necessary and sufficient condition for a formal language to be regular, and an exact count of the number of states in a minimal machine for the language.
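
In a standard formulation (summarized here for reference, not quoted from this article), the theorem is stated in terms of the indistinguishability relation

$$x \sim_L y \;\iff\; \forall z \in \Sigma^* :\ (xz \in L \Leftrightarrow yz \in L),$$

and says that a language $L \subseteq \Sigma^*$ is regular if and only if $\sim_L$ has finitely many equivalence classes; moreover, that number of classes equals the number of states of the minimal deterministic finite automaton recognizing $L$.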

The pumping lemma for regular languages, also useful in regularity proofs, was proven in this period by Michael O. Rabin and Dana Scott, along with the computational equivalence of deterministic and nondeterministic finite automata.[11][12]
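
In its standard form (given here for reference, not quoted from this article), the pumping lemma states that for every regular language $L$ there is an integer $p \ge 1$ such that every $w \in L$ with $|w| \ge p$ can be written as

$$w = xyz, \qquad |xy| \le p, \qquad |y| \ge 1, \qquad xy^iz \in L \ \text{for all } i \ge 0.$$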

By the end of the decade, automata theory came to be seen as "the pure mathematics of computer science".[5]

What follows is a general definition of an automaton, which restricts a broader definition of a system to one viewed as acting in discrete time-steps, with its state behavior and outputs defined at each step by unchanging functions of only its state and input.
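
One common way to make this precise (a sketch; details vary between authors, and this exact formalization is not quoted from the article) is to define an automaton as a quintuple

$$M = (Q, \Sigma, \Gamma, \delta, \lambda),$$

where $Q$ is the set of states, $\Sigma$ the input alphabet, $\Gamma$ the output alphabet, $\delta : Q \times \Sigma \to Q$ the (unchanging) next-state function, and $\lambda : Q \times \Sigma \to \Gamma$ the output function. Started in a state $q_0$ and fed the input word $a_1 a_2 \cdots a_n$, the automaton passes through the states $q_t = \delta(q_{t-1}, a_t)$ and emits the outputs $\lambda(q_{t-1}, a_t)$ for $t = 1, \dots, n$, so its state behavior and outputs at each discrete step are determined by fixed functions of only its current state and input.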

A familiar example of a machine recognizing a language is an electronic lock, which accepts or rejects attempts to enter the correct code.
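
A minimal sketch of such a lock as a finite automaton follows; the three-digit code "042", the state encoding, and the function name are assumptions made purely for illustration. The state counts how many correct digits have been entered in order, with an absorbing dead state once a wrong or extra digit appears.

    CODE = "042"   # assumed secret code, purely for illustration
    DEAD = -1      # absorbing reject state: a wrong or extra digit was entered

    def lock_accepts(attempt):
        """Return True if the entered digit string is exactly the correct code."""
        state = 0                          # number of correct digits seen so far
        for digit in attempt:
            if state != DEAD and state < len(CODE) and digit == CODE[state]:
                state += 1                 # next digit of the code matched
            else:
                state = DEAD               # mismatch or too many digits: reject
        return state == len(CODE)          # accept only a complete, correct code

    print(lock_accepts("042"))   # True: the lock opens
    print(lock_accepts("041"))   # False: wrong last digit
    print(lock_accepts("0420"))  # False: extra digit

Since the set of possible states is finite (0 through the code length, plus the dead state), this is a finite-state machine that accepts exactly one string, the correct code.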

The following is an incomplete hierarchy in terms of powers of different types of virtual machines.

The hierarchy reflects the nested categories of languages the machines are able to accept.

(below is stronger)
Deterministic Push Down Automaton (DPDA-I) with 1 push-down store
…
Multidimensional Turing Machine

Each model in automata theory plays an important role in several applied areas.

Finite automata are used in text processing, compilers, and hardware design.

Other examples in biology that could be explained using automata theory include mollusk and pine cone growth and pigmentation patterns.

Going further, some scientists advocate the theory that the whole universe is computed by some sort of discrete automaton.

The idea originated in the work of Konrad Zuse, and was popularized in America by Edward Fredkin.

Well-known automata simulators include Turing's World, JFLAP, VAS, TAGS, and SimStudio.

In the case of non-deterministic or other complex kinds of automata, however, the latter set of endomorphisms may become a variable automaton groupoid.

Classes of automata: combinational logic, finite-state machine, pushdown automaton, Turing machine.

Figure: The automaton described by this state diagram starts in state S1 and changes states following the arrows marked 0 or 1 according to the input symbols as they arrive. The double circle marks S1 as an accepting state. Since all paths from S1 to itself contain an even number of arrows marked 0, this automaton accepts strings containing an even number of 0s.