Introduction to Automata Theory, Languages, and Computation, 3rd ed. John E. Hopcroft, Cornell University; Rajeev Motwani, Stanford University; Jeffrey D. Ullman, Stanford University. Includes bibliographical references and index.
It has been more than 20 years since John Hopcroft and Jeffrey Ullman first published this classic book on formal languages, automata theory, and computational complexity.
Exercises

The book contains extensive exercises, with some for almost every section. We indicate harder exercises or parts of exercises with an exclamation point. The hardest exercises have a double exclamation point. Some of the exercises or parts are marked with a star. For these exercises, we shall endeavor to maintain solutions accessible through the book's Web page.
These solutions are publicly available and should be used for self-testing. If certain parts of an exercise A have solutions, then you should expect the corresponding parts of a related exercise B to have solutions as well.

Gradiance On-Line Homeworks

A new feature of the third edition is an accompanying set of on-line homeworks using a technology developed by Gradiance Corp. Gradiance questions look like ordinary questions, but your solutions are sampled. If your instructor permits, you are allowed to try again, until you get a perfect score.
For more information, visit the Addison-Wesley web site.

Support on the World Wide Web

The book's home page offers solutions to starred exercises, errata as we learn of them, and backup materials. Comments and errata on drafts of the second edition were received from many readers; we also received many emails pointing out errata in the second edition of this book, and these were acknowledged on-line in the errata sheets for that edition.
The help of all these people is gratefully acknowledged. Remaining errors are ours, of course.

Deterministic Finite Automata. Nondeterministic Finite Automata. An Application: Text Search. Finite Automata With Epsilon-Transitions. Summary of Chapter 2. Gradiance Problems for Chapter 2. References for Chapter 2. Applications of Regular Expressions. Algebraic Laws for Regular Expressions. Summary of Chapter 3. Gradiance Problems for Chapter 3. References for Chapter 3. Undecidable Problems About Turing Machines.
Post's Correspondence Problem. Other Undecidable Problems. Summary of Chapter 9. Gradiance Problems for Chapter 9. References for Chapter 9. Kruskal's Algorithm. The Traveling Salesman Problem. Additional NP-Complete Problems. Summary of Chapter 10. Gradiance Problems for Chapter 10. References for Chapter 10.

Turing studied an abstract machine that had all the capabilities of today's computers, at least as far as what they could compute.
Turing's goal was to describe precisely the boundary between what a computing machine could do and what it could not do; his conclusions apply not only to his abstract Turing machines, but to today's real machines. These automata, originally proposed to model brain function, turned out to be extremely useful for a variety of other purposes, which we shall mention in Section 1.1. Also in the late 1950s, the linguist N. Chomsky began the study of formal grammars. In the early 1970s, S. Cook extended Turing's study of what could and what could not be computed.
All of these theoretical developments bear directly on what computer scientists do today. Some of the concepts, like finite automata and certain kinds of formal grammars, are used in the design and construction of important kinds of software.
Other concepts, like the Turing machine, help us understand what we can expect from our software. In this introductory chapter, we begin with a very high-level view of what automata theory is about, and what its uses are. Much of the chapter is devoted to a survey of proof techniques and tricks for discovering proofs. We cover deductive proofs, reformulating statements, proofs by contradiction, proofs by induction, and other important concepts. A final section introduces the concepts that pervade automata theory: alphabets, strings, and languages. There are several reasons why the study of automata and complexity is an important part of the core of Computer Science.
This section serves to introduce the reader to the principal motivation and also outlines the major topics covered in this book. Finite automata are a useful model for many important kinds of hardware and software.
We shall see, starting in Chapter 2, examples of how the concepts are used. For the moment, let us just list some of the most important kinds: Software for designing and checking the behavior of digital circuits. Software for scanning large bodies of text, such as collections of Web pages, to find occurrences of words, phrases, or other patterns.
Software for verifying systems of all types that have a finite number of distinct states, such as communications protocols or protocols for secure exchange of information. While we shall soon meet a precise definition of automata of various types, let us begin our informal introduction with a sketch of what a finite automaton is and does. Since there are only a finite number of states, the entire history generally cannot be remembered, so the system must be designed carefully, to remember only what is essential.
The advantage of having only a finite number of states is that we can implement the system with a fixed set of resources. For example, we could implement it in hardware as a circuit, or as a simple form of program that can make decisions looking only at a limited amount of data or using the position in the code itself to make the decision.

[Figure 1.1: An on/off switch, drawn as two states labeled on and off, with arcs labeled Push between them and the word Start marking the start state.]
Figure 1.1 shows this automaton. As for all finite automata, the states are represented by circles; in this example, we have named the states on and off. Here, both arcs are labeled by the input Push, which represents a user pushing the button. The intent of the two arcs is that whichever state the system is in, when the Push input is received it goes to the other state.
In our example, the start state is off, and we conventionally indicate the start state by the word Start and an arrow leading to that state. It is often necessary to designate one or more states as final or accepting states. Entering one of these states after a sequence of inputs indicates that the input sequence is good in some way. For instance, we could have regarded the state on in Fig. 1.1 as accepting. It is conventional to designate accepting states by a double circle, although we have not made any such designation in Fig. 1.1.
The job of this automaton is to recognize the keyword then. It thus needs five states, each of which represents a different position in the word then that has been reached so far. These positions correspond to the prefixes of the word, ranging from the empty string (i.e., nothing of the word has been seen so far) to the complete word.

[Figure 1.2: A finite automaton modeling recognition of then.]

In Fig. 1.2, inputs correspond to letters.
We may imagine that the lexical analyzer examines one character of the program that it is compiling at a time, and the next character to be examined is the input to the automaton. The start state corresponds to the empty string, and each state has a transition on the next letter of then to the state that corresponds to the next-larger prefix. The state named then is entered when the input has spelled the word then.
Since it is the job of this automaton to recognize when then has been seen, we could consider that state the lone accepting state. There are two important notations that are not automaton-like, but play an important role in the study of automata and their applications.
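The automaton of Fig. 1.2 can be sketched in code. In this sketch the states are the prefixes of "then", and the automaton advances only when the next input letter extends the current prefix; the dead-state convention (None) and the function names are illustrative choices, not the book's notation.

```python
# States are the prefixes of "then": "", "t", "th", "the", "then".
# Any letter that does not extend the current prefix leads to a dead
# state, modeled here as None.
WORD = "then"

def step(state, letter):
    """One transition of the prefix automaton."""
    if state is None or state == WORD:
        return state
    return state + letter if WORD[len(state)] == letter else None

def accepts(text):
    """True if the automaton ends in the accepting state 'then'."""
    state = ""
    for letter in text:
        state = step(state, letter)
    return state == WORD

assert accepts("then")
assert not accepts("than") and not accepts("the")
```

A production lexical analyzer would of course do more, e.g., restart the automaton on a mismatch, but the state-by-state advance is the same idea.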
Grammars are useful models when designing software that processes data with a recursive structure. We introduce context-free grammars, as they are usually called, in Chapter 5. Regular Expressions also denote the structure of data, especially text strings. As we shall see in Chapter 3, the patterns of strings they describe are exactly the same as what can be described by finite automata.
This expression represents patterns in text that could be a city and state, e.g., Ithaca NY.
Parentheses are used to group components of the expression; they do not represent characters of the text described.
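The city-and-state pattern can be tried out with a modern regular-expression engine. The rendering below uses Python's re dialect rather than the notation of the text, and the exact pattern and sample text ("Ithaca NY") are illustrative assumptions: a capitalized word followed by a two-capital state abbreviation.

```python
import re

# A rough Python-re rendering of the city-and-state pattern:
# a capitalized word, a space, then two capital letters.
city_state = re.compile(r"[A-Z][a-z]+ [A-Z][A-Z]")

match = city_state.search("moved to Ithaca NY last year")
assert match is not None and match.group(0) == "Ithaca NY"
assert city_state.search("no city here") is None
```

Chapter 3 makes precise the sense in which such patterns describe exactly the languages of finite automata.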
Automata are essential for the study of the limits of computation. As we mentioned in the introduction to the chapter, there are two important issues: What can a computer do at all? And what can a computer do efficiently? These subjects are studied later in the book. While geometry has its practical side, in the USA of the 1990s it became popular to teach proof as a matter of personal feelings about the statement.
While it is good to feel the truth of a statement you need to use, important techniques of proof are no longer mastered in high school. Yet proof is something that every computer scientist needs to understand. Some computer scientists take the extreme view that a formal proof of the correctness of a program should go hand-in-hand with the writing of the program itself.
We doubt that doing so is productive. On the other hand, there are those who say that proof has no place in the discipline of programming. Our position is between these two extremes. Testing programs is surely essential. However, testing goes only so far, since you cannot try your program on every input. More importantly, if your program is complex (say, a tricky recursion or iteration), then unless you understand what is happening as it runs, it is unlikely to be correct.
When your testing tells you the code is incorrect, you still need to get it right.
To make your iteration or recursion correct, you need to set up an inductive hypothesis, and it is helpful to reason, formally or informally, that the hypothesis is consistent with the iteration or recursion.
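The inductive hypothesis for a loop can be made concrete as an asserted invariant. The function below is an illustrative sketch, not from the text: the assertion inside the loop is exactly the inductive hypothesis, and its preservation from one iteration to the next is the inductive step.

```python
# The "inductive hypothesis" for a loop, written as an invariant.
def triangular(n):
    """Sum 1 + 2 + ... + n, with the inductive hypothesis asserted."""
    total = 0
    for i in range(1, n + 1):
        # Inductive hypothesis: entering iteration i, total holds the
        # sum 1 + ... + (i - 1), which equals (i - 1) * i // 2.
        assert total == (i - 1) * i // 2
        total += i
    return total

assert triangular(10) == 55
```

The basis of the induction is the initialization total = 0 (the empty sum), and the loop body preserves the hypothesis, so the returned value is correct for every n.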
This process of understanding the workings of a correct program is essentially the same as the process of proving theorems by induction. Thus, in addition to giving you models that are useful for certain types of software, it has become traditional for a course on automata theory to cover methodologies of formal proof. As mentioned above, a deductive proof consists of a sequence of statements whose truth leads us from some initial statement, called the hypothesis or the given statement(s), to a conclusion statement.
Each step in the proof must follow, by some accepted logical principle, from either the given facts, or some of the previous statements in the deductive proof, or a combination of these. The hypothesis may be true or false, typically depending on values of its parameters. Often, the hypothesis consists of several independent statements connected by a logical AND. In those cases, we talk of each of these statements as a hypothesis, or as a given statement.
It is not hard to convince ourselves informally that Theorem 1.3, the claim that 2^x >= x^2 whenever x >= 4, is true. It is not a single fact; rather, its truth depends on the value of the parameter x (e.g., it holds for x = 4 but fails for x = 3). As x grows larger than 4, the left side, 2^x, doubles each time x increases by 1, while the right side, x^2, grows only by the ratio ((x+1)/x)^2, which is at most (5/4)^2 = 1.5625 once x >= 4. Since 1.5625 < 2, the left side pulls ever further ahead. We have now completed an informal but accurate proof of Theorem 1.3. We shall return to the proof and make it more precise later in this chapter, as an example of an inductive proof.
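Both the inequality and the growth argument can be spot-checked numerically. This sketch takes the theorem to be 2^x >= x^2 for x >= 4, as the surrounding proof steps indicate; the doubling-beats-the-ratio claim reduces to 2x^2 >= (x+1)^2.

```python
# Spot-check: 2**x >= x**2 for x >= 4, and the growth argument that
# doubling (factor 2) beats the right side's factor ((x+1)/x)**2.
holds = all(2**x >= x**2 for x in range(4, 200))
ratio_beaten = all(2 * x * x >= (x + 1) ** 2 for x in range(4, 200))

assert holds and ratio_beaten
assert not (2**3 >= 3**2)   # the bound x >= 4 matters: 8 < 9
```

Of course a finite check is evidence, not a proof; the inductive proof later in the chapter covers all x >= 4 at once.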
In the next example, we consider a complete deductive proof of a simple theorem that uses Theorem 1.3. The intuitive idea of the proof is that if the hypothesis is true for x, that is, if x is the sum of the squares of four positive integers, then x must be at least 4.
Therefore, the hypothesis of Theorem 1.3 will hold. The reasoning can be expressed as a sequence of steps. Each step is either the hypothesis of the theorem to be proved, part of that hypothesis, or a statement that follows from one or more previous statements. This logical rule is often called modus ponens; i.e., if we know H is true, and we know "if H then C" is true, we may conclude that C is true. We also allow certain other logical steps to be used in creating a statement that follows from one or more previous statements. While we shall not generally prove theorems in such a stylized form, it helps to think of proofs as very explicit lists of statements, each with a precise justification.
In step (1), we have repeated one of the given statements of the theorem: that x is the sum of the squares of four integers. It often helps in proofs if we name quantities that are referred to but not named, and we have done so here, giving the four integers the names a, b, c, and d.
In step (2), we put down the other part of the hypothesis of the theorem: that each of a, b, c, and d is at least 1. Technically, this statement represents four distinct statements, one for each of the four integers involved.
1. x = a^2 + b^2 + c^2 + d^2 (Given)
2. a >= 1; b >= 1; c >= 1; d >= 1 (Given)
3. a^2 >= 1; b^2 >= 1; c^2 >= 1; d^2 >= 1 ((2) and properties of arithmetic)
4. x >= 4 ((1), (3), and properties of arithmetic)
5. 2^x >= x^2 ((4) and Theorem 1.3)

[Figure: A formal proof of Theorem 1.4.]

The first statement tells us that x is the sum of the four squares in question, and statement (3) tells us that each of the squares is at least 1. At the final step (5), we use statement (4), which is the hypothesis of Theorem 1.3.
The theorem itself is the justification for writing down its conclusion, since its hypothesis is a previous statement. Since the statement (5) that is the conclusion of Theorem 1.3 is also the conclusion of Theorem 1.4, we have now proved Theorem 1.4. That is, we have started with the hypothesis of that theorem, and have managed to deduce its conclusion. In the previous two theorems, the hypotheses used terms that should have been familiar: integers, addition, and multiplication, for instance. In many other theorems, including many from automata theory, the terms used in the statement may have implications that are less obvious.
A useful way to proceed in many proofs is to convert all terms in the hypothesis to their definitions. Here is an example of a theorem that is simple to prove once we have expressed its statement in elementary terms. It uses the following two definitions: a set S is finite if there is an integer n such that S has exactly n elements, and a set is infinite if it is not finite. Intuitively, an infinite set is a set that contains more than any integer number of elements. If S and T are subsets of U, then T is the complement of S (with respect to U) if S ∪ T = U and S ∩ T = ∅. That is, each element of U is in exactly one of S and T; put another way, T consists of exactly those elements of U that are not in S.
Let S be a finite subset of some infinite set U. Let T be the complement of S with respect to U. Then T is infinite. Intuitively, this theorem says that if you have an infinite supply of something (U), and you take a finite amount away (S), then you still have an infinite amount left.
Let us begin by restating the facts of the theorem as in the following figure.

Original statement: S is finite. New statement: There is an integer n such that S has exactly n elements.
Original statement: U is infinite. New statement: For no integer p does U have exactly p elements.
Original statement: T is the complement of S. New statement: S ∪ T = U and S ∩ T = ∅.

[Figure: Restating the givens of the theorem.]

The next step is to assume the contrary of the conclusion; that is, we assume T is finite. We then use that assumption, together with parts of the hypothesis, to prove the opposite of one of the given statements of the hypothesis. We have then shown that it is impossible for all parts of the hypothesis to be true and for the conclusion to be false at the same time.
The only possibility that remains is for the conclusion to be true whenever the hypothesis is true. That is, the theorem is true.
In the case of this theorem, the given statement we contradict is that U is infinite. That is, the elements of U are exactly the elements of S and T. Having seen the ideas behind the proof, let us reprove the theorem in a few lines. The order in which the quantifiers appear in the statement determines who goes first.
If the last player to make a choice can always find some allowable value, then the statement is true; that is how one proves, for example, that the set of integers is infinite. So assume that T is finite. Then, since S is also finite, S ∪ T = U would be finite, contradicting the given statement that U is infinite. However, we see other kinds of statements proved as theorems also. In this section, we shall examine the most common forms of statement and what we usually need to do to prove them.
The statement "if H then C" appears in several equivalent forms: H implies C; H only if C; whenever H holds, C follows. In addition, in formal logic one often sees the operator →; H → C appears in some mathematical literature (we shall not use it here). An if-and-only-if statement is proved in two parts: the if part and the only-if part. How formal must a proof be? The answer to this question is not easy. The bottom line regarding proofs is that their purpose is to convince someone, whether it is a grader of your classwork or yourself, about the correctness of a strategy you are using in your code. Thus, in Theorem 1.4, for instance, we did not spell out every arithmetic step. However, there are certain things that are required in proofs, and omitting them surely makes the proof inadequate.
For instance, any deductive proof that uses statements which are not justified by the given or previous statements cannot be adequate.
As an additional example, inductive proofs, discussed in Section 1.4, are inadequate without both a basis and an inductive step. The two directions of an if-and-only-if proof can be presented in either order. In many theorems, one part is decidedly easier than the other, and it is customary to present the easy direction first and get it out of the way. Sometimes, you will find it helpful to break an if-and-only-if into a succession of several equivalences.
Proving any one step in only one of the directions invalidates the entire proof. The following is an example of a simple if-and-only-if proof. It uses the notations ⌊x⌋, the floor of real number x (the greatest integer less than or equal to x), and ⌈x⌉, the ceiling of x (the least integer greater than or equal to x). The theorem is: x is an integer if and only if ⌊x⌋ = ⌈x⌉. Since ⌈x⌉ is always an integer, x must also be an integer in this case.
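The floor-and-ceiling equivalence discussed here can be spot-checked numerically; a small sketch using Python's math module (the function name is an illustrative choice):

```python
import math

# A real x is an integer if and only if floor(x) == ceil(x).
def floor_equals_ceil(x):
    return math.floor(x) == math.ceil(x)

for x in [3.0, 3.5, -2.0, -2.7, 0.0]:
    assert floor_equals_ceil(x) == (x == int(x))
```

Note that the check covers negative reals too, where floor and ceiling both round toward the two neighboring integers, e.g. ⌊-2.7⌋ = -3 and ⌈-2.7⌉ = -2.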
This part is easy. Sometimes, we encounter a theorem that appears not to have a hypothesis. An example is the well-known fact from trigonometry: sin^2(θ) + cos^2(θ) = 1. From the definitions of these terms, and the Pythagorean Theorem (in a right triangle, the square of the hypotenuse equals the sum of the squares of the other two sides), you could prove the theorem. In essence, the if-then form of the theorem is really: if θ is an angle, then sin^2(θ) + cos^2(θ) = 1. Proofs about sets.
Proofs by contradiction. Proofs by counterexample. To say that two set-denoting expressions E and F are equal means, more precisely, that every element in the set represented by E is in the set represented by F, and every element in the set represented by F is in the set represented by E.
For example, the commutative law of union says that we can take the union of two sets R and S in either order; that is, R ∪ S = S ∪ R.
We prove that if x is in E, then x is in F, and then prove that if x is in F, then x is in E. As an example of this proof process, let us prove the distributive law of union over intersection: R ∪ (S ∩ T) = (R ∪ S) ∩ (R ∪ T). The first part, summarized in the first table below, assumes x is in the left side and shows it is in the right side. In the second part, we assume x is in the right side and show it is in the left; the steps are summarized in the second table. Since we have now proved both parts of the if-and-only-if statement, the distributive law of union over intersection is proved.
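The distributive law of union over intersection can also be checked on explicit sets; a small sketch using Python's built-in set operators (| for union, & for intersection), with arbitrarily chosen sample sets:

```python
# The distributive law of union over intersection:
# R | (S & T) == (R | S) & (R | T).
R, S, T = {1, 2}, {2, 3}, {3, 4}

left = R | (S & T)
right = (R | S) & (R | T)
assert left == right == {1, 2, 3}
```

A check on one triple of sets is evidence, not a proof; the element-by-element argument above is what establishes the law for all sets.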
The steps of the first direction are:

1. x is in R ∪ (S ∩ T) (Given)
2. x is in R, or x is in S ∩ T ((1) and definition of union)
3. x is in R, or x is in both S and T ((2) and definition of intersection)
4. x is in R ∪ S ((3) and definition of union)
5. x is in R ∪ T ((3) and definition of union)
6. x is in (R ∪ S) ∩ (R ∪ T) ((4), (5), and definition of intersection)

The steps of the second direction are:

1. x is in (R ∪ S) ∩ (R ∪ T) (Given)
2. x is in R ∪ S ((1) and definition of intersection)
3. x is in R ∪ T ((1) and definition of intersection)
4. x is in R, or x is in both S and T ((2), (3), and reasoning about unions)
5. x is in R, or x is in S ∩ T ((4) and definition of intersection)
6. x is in R ∪ (S ∩ T) ((5) and definition of union)

Every if-then statement has an equivalent form that in some circumstances is easier to prove: the contrapositive of "if H then C" is "if not C then not H." Consider the four cases in which H and C may each be true or false:
1. H and C both true.
2. H true and C false.
3. C true and H false.
4. H and C both false.

There is only one way to make an if-then statement false: the hypothesis must be true and the conclusion false, as in case (2).
For the other three cases, including case (4) where the conclusion is false, the if-then statement itself is true. The contrapositive "if not C then not H" is false exactly when its hypothesis (not C) is true and its conclusion (not H) is false, i.e., when C is false and H is true. These two conditions are again case (2), which shows that in each of the four cases, the original statement and its contrapositive are either both true or both false; i.e., they are logically equivalent. Do not confuse the contrapositive with the converse: the converse of "if H then C" is "if C then H," and it is not equivalent to the original statement. In fact, the two parts of an if-and-only-if proof are always some statement and its converse.
One equivalent proof form is: start by assuming both the hypothesis H and the negation of the conclusion, "not C." Complete the proof by showing that something known to be false follows logically from H and not C. This form of proof is called proof by contradiction.
Our proof was to derive a falsehood from H and not C. We first showed, from the assumption that S and T are both finite, that U also must be finite. By showing that H and not C leads to falsehood, we are showing that case (2) cannot occur. In real life, we are not told to prove a theorem. To resolve the question, we may alternately try to prove the theorem, and if we cannot, try to prove that its statement is false.
Theorems generally are statements about an infinite number of cases, perhaps all values of their parameters. The situation is analogous to programs, since a program is generally considered to have a bug if it fails to operate correctly for even one input on which it was expected to work. It often is easier to prove that a statement is not a theorem than to prove it is a theorem.
The following are two examples: first an obvious nontheorem, and second a statement that just misses being a theorem and that requires some investigation before resolving the question of whether it is a theorem or not. Alleged Theorem: All primes are odd. (More formally, we might say: if integer x is a prime, then x is odd.) Disproof: the integer 2 is a prime, but 2 is even. There is an essential definition that we must first establish: if a and b are positive integers, then a mod b is the remainder when a is divided by b. Our first proposed theorem, which we shall determine to be false, is: there is no pair of integers a and b such that a mod b = b mod a.
This case turns out to be fatal to our proof attempts. However, consider the third case: a = b. Then a mod b = b mod a = 0, so any pair with a = b is a counterexample. We thus have a disproof of the alleged theorem. In the process of finding the counterexample, we have in fact discovered the exact conditions under which the alleged theorem holds.
Here is the correct version of the theorem, and its proof: a mod b = b mod a if and only if a = b. For the only-if part, the best technique is a proof by contradiction: assume a ≠ b, and without loss of generality let a < b. Then a mod b = a, while b mod a < a (a remainder is always less than the divisor), so the two sides differ. We again derive a contradiction of the hypothesis, and conclude the only-if part is also true. We have now proved both directions and conclude that the theorem is true.
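The corrected theorem, that for positive integers a mod b = b mod a exactly when a = b, is easy to check by brute force over a finite range (the function name is an illustrative choice):

```python
# For positive integers a and b: a % b == b % a if and only if a == b.
def diagonal_only(limit):
    return all((a % b == b % a) == (a == b)
               for a in range(1, limit) for b in range(1, limit))

assert diagonal_only(60)
```

Such a check cannot replace the proof, but it is exactly the kind of counterexample search that exposed the flaw in the alleged theorem: had the equivalence failed anywhere in the range, the offending pair would be a counterexample.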
Suppose we are given a statement S(n), about an integer n, to prove. One common approach is to prove two things: the basis, where we show S(i) for a particular integer i, and the inductive step, where we show that S(n) implies S(n + 1) for every n >= i. We can argue as follows. Suppose S(n) were false for one or more integers n >= i; then there would be a smallest such value, say j, for which S(j) is false.
Now j could not be i, because we prove in the basis part that S(i) is true. Thus, j must be greater than i. Then we know from the inductive step that S(j - 1) implies S(j). Since we also know S(j - 1), because j is the smallest value for which S fails, we can conclude S(j), a contradiction. That is, the only way to prove that no such j can exist is by a method that is essentially an inductive proof. Thus, we generally take as an integral part of our logical reasoning system the principle that the basis and the inductive step together establish S(n) for all n >= i. The following two examples illustrate the use of the induction principle to prove theorems about integers.
The proof is in two parts: a basis and an inductive step. For the basis, n = 0, the left side of Equation (1.1) is a sum from i = 1 to 0. There is a general principle that when the upper limit of a sum (0 in this case) is less than the lower limit (1 here), the sum is over no terms and therefore the sum is 0. The right side of Equation (1.1) is also 0 when n = 0, so Equation (1.1) holds for the basis. For the inductive step, we must prove that Equation (1.1) for n follows from the same equation with n - 1 substituted for n. We may simplify the two equations by algebraic manipulation; in that way, we can replace the sum to n by the left side of the equation for n - 1, plus the final term of the sum. The final verification that the two sides agree is a matter of algebra.
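Equation (1.1) itself is not fully recoverable from this copy of the text. As an assumed stand-in with the same basis behavior (an empty sum for n = 0), take the summation identity 1^2 + 2^2 + ... + n^2 = n(n+1)(2n+1)/6 and check both the basis and the inductive step numerically:

```python
# Stand-in identity: sum of i**2 for i = 1..n equals n(n+1)(2n+1)/6.
def rhs(n):
    return n * (n + 1) * (2 * n + 1) // 6

assert rhs(0) == 0                       # basis: the empty sum is 0
for n in range(1, 100):
    assert rhs(n) == rhs(n - 1) + n * n  # inductive step: add the nth term
```

The second assertion is precisely the algebraic verification described in the text: the right side at n must exceed the right side at n - 1 by exactly the newly added term of the sum.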
In the next example, we prove Theorem 1.3 precisely, by induction on x, starting from the basis x = 4. As in the previous theorem, the work is algebraic: simplify the statement for x + 1, divide it by the statement for x, and verify that the left side at least doubles while the right side grows by the factor ((x+1)/x)^2, which is less than 2 for x >= 4. The inequality for x + 1 therefore follows from the inequality for x.
Sometimes an inductive proof is made possible only by using a more general scheme than the one proposed above: we can use several basis cases, and we can assume the statement for all smaller values, not only the previous one. The following example will illustrate the potential of both principles. The statement S(n) to prove is that n can be written as a sum of 3's and 5's. Notice, incidentally, that 7 cannot be written as a sum of 3's and 5's. The basis cases are S(8), S(9), and S(10). In automata theory, there are several recursively defined structures about which we need to prove statements.
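The statement S(n), that every n >= 8 is a sum of 3's and 5's, can be checked exhaustively over a range (the function name is an illustrative choice):

```python
# S(n): n can be written as 3*a + 5*b with a, b >= 0.
def sum_of_3s_and_5s(n):
    return any((n - 5 * k) >= 0 and (n - 5 * k) % 3 == 0
               for k in range(n // 5 + 1))

assert not sum_of_3s_and_5s(7)                      # 7 just misses
assert all(sum_of_3s_and_5s(n) for n in range(8, 500))
```

The three basis cases S(8), S(9), and S(10) are 3+5, 3+3+3, and 5+5; the inductive step then obtains S(n) from S(n - 3) by adding one more 3, which is why three consecutive basis cases are needed.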
The familiar notions of trees and expressions are important examples. Like inductions, all recursive definitions have a basis case, where one or more elementary structures are defined, and an inductive step, where more complex structures are defined in terms of previously defined structures. Here is the recursive definition of a tree: A single node is a tree, and that node is the root of the tree. Begin with a new node N, which is the root of the tree.
Here is another recursive definition. Intuition Behind Structural Induction: We can suggest informally why structural induction is a valid proof method. Imagine the defined structures listed in the order in which they are constructed: the basis elements come first, and the fact that Xi is in the defined set of structures can only depend on the membership of structures that precede Xi on the list.
Viewed this way, a structural induction is nothing but an induction, on the integer n, of the statement S(Xn). This induction may be of the generalized form discussed above, with several basis cases and an inductive step that uses all previous instances. However, we should remember the caveats about such inductions explained there.
Any number or letter (i.e., a variable) is an expression. For example, both 2 and x are expressions by the basis. If E and F are expressions, then so are E + F, E * F, and (E). Notice how each of these expressions depends on the previous ones being expressions. Let S(X) be a statement about the structures X that are defined by some particular recursive definition. As a basis, prove S(X) for the basis structure(s) X. For the inductive step, prove S(X) for a structure X built by the recursive rules, assuming S for each structure used to build X. Our conclusion is that S(X) is true for all X.
The formal statement S(T) we need to prove by structural induction is: T has one more node than it has edges. The basis case is when T is a single node; then T has one node and zero edges, so the statement holds. For the inductive step, let T be built from a new root N and trees T1 through Tk. The nodes of T are node N and all the nodes of the Ti's.
The edges of T are the k edges we added explicitly in the inductive definition step, plus the edges of the Ti's. By the inductive hypothesis, if Ti has ni nodes then it has ni - 1 edges. Hence, T has 1 + (n1 + ... + nk) nodes and k + (n1 - 1) + ... + (nk - 1) = n1 + ... + nk edges. Thus, T has one more node than it has edges. If G is defined by the basis, then G is a number or variable. These expressions have 0 left parentheses and 0 right parentheses, so the numbers are equal.
There are three rules whereby expression G may have been constructed according to the inductive step in the definition: G = E + F, G = E * F, or G = (E). We may assume that S(E) and S(F) are true; that is, E has the same number of left and right parentheses, say n of each, and F likewise has the same number of left and right parentheses, say m of each.
Then we can compute the numbers of left and right parentheses in G for each of the three cases: if G = E + F or G = E * F, then G has n + m of each; if G = (E), then G has n + 1 of each.
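The structural-induction claim can be checked by exhaustion: generate expressions exactly as the recursive definition allows (numbers or letters as the basis; E + F and (E) as inductive steps, with E * F behaving like E + F for parenthesis counts), and verify the balance property for each:

```python
# Every expression built by the recursive definition has equal
# numbers of left and right parentheses.
def balanced(g):
    return g.count("(") == g.count(")")

exprs = ["2", "x"]                       # basis structures
for _ in range(2):                       # two rounds of the inductive step
    exprs += [e + "+" + f for e in exprs for f in exprs]
    exprs += ["(" + e + ")" for e in exprs]

assert all(balanced(g) for g in exprs)
```

The generation loop mirrors the listing-by-construction-order intuition behind structural induction: each new expression is built only from expressions already on the list.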
This observation completes the inductive step and completes the proof. Automata theory provides many such situations.
These statements tell under what sequences of inputs the automaton gets into each of the states. However, when there are really several independent statements to prove, it is generally less confusing to keep the statements separate and to prove them all in their own parts of the basis and inductive steps. We call this sort of proof mutual induction.
An example will illustrate the necessary steps for a mutual induction. The automaton itself is a repeat of Fig. 1.1. Since pushing the button switches the state between on and off, and the switch starts out in the off state, we expect that the following statements will together explain the operation of the switch:
S1: The automaton is in state off after n pushes if and only if n is even. S2: The automaton is in state on after n pushes if and only if n is odd. We might suppose that S1 implies S2 and vice versa, since we know that a number n cannot be both even and odd.
However, what is not always true about an automaton is that it is in one and only one state. It happens that the automaton of Fig. 1.1 is always in exactly one state.
[Repeat of the automaton of Fig. 1.1.]

The proofs depend on several facts about odd and even integers: if n is even, then n + 1 is odd; if n is odd, then n + 1 is even; and 0 is even.
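Before the formal proof, the claimed correspondence between parity and state can be spot-checked by simulating the switch (a sketch; the state names match Fig. 1.1):

```python
# Simulate the on/off switch: state after n Push inputs should be
# off exactly when n is even (S1), on exactly when n is odd (S2).
def state_after(n_pushes):
    state = "off"                                   # the start state
    for _ in range(n_pushes):
        state = "on" if state == "off" else "off"   # one Push input
    return state

assert all(state_after(n) == ("off" if n % 2 == 0 else "on")
           for n in range(100))
```

The simulation is evidence for S1 and S2 over a finite range; the mutual induction that follows establishes them for every n.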
Since off is the start state, the automaton is indeed in state off after 0 pushes, so this half of the basis holds. For the other half, the hypothesis of S2 (that 0 is odd) is false; since the hypothesis is false, we can again conclude that the if-then statement is true. Thus, this part of the basis also holds. For the inductive step, the proof again separates into four parts. For instance, if the automaton is in state on after n + 1 pushes, then it was in state off after n pushes, so n is even; thus, n + 1 is odd. Inspecting the automaton of Fig. 1.1 verifies each transition used. The reader should be able to construct the remaining parts of the proof easily.

An alphabet is a finite, nonempty set of symbols. Common alphabets include the binary alphabet {0, 1}, the set of lower-case letters {a, b, ..., z}, and the set of all ASCII characters. A string (or sometimes word) is a finite sequence of symbols chosen from some alphabet.
For example, 01101 is a string chosen from the binary alphabet, and 111 is another string chosen from this alphabet. The empty string, denoted ε, is the string with zero occurrences of symbols. It is often useful to classify strings by their length, that is, the number of positions for symbols in the string. For instance, 01101 has length 5.
Thus, there are only two symbols, 0 and 1, in the string 01101, but there are five positions for symbols, and its length is 5. The standard notation for the length of a string w is |w|. Do not confuse the alphabet {0, 1} with the set of strings of length 1 over it: the former is an alphabet, and its members 0 and 1 are symbols; the latter is a set of strings. Put another way, one set has symbols as members, the other has strings.
Thus, two appropriate equivalences are: Σ⁺ = Σ¹ ∪ Σ² ∪ Σ³ ∪ ... and Σ* = Σ⁺ ∪ {ε}.

Type Convention for Symbols and Strings: Commonly, we shall use lower-case letters at the beginning of the alphabet (or digits) to denote symbols, and lower-case letters near the end of the alphabet, typically w, x, y, and z, to denote strings.
You should try to get used to this convention, to help remind you of the types of the elements being discussed. Let x and y be strings. Then xy denotes the concatenation of x and y, that is, the string formed by making a copy of x and following it by a copy of y. Common languages can be viewed as sets of strings. An example is English, where the collection of legal English words is a set of strings over the alphabet that consists of all the letters.
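Concatenation obeys the length law |xy| = |x| + |y|, and the empty string is its identity; a minimal sketch with sample strings chosen for illustration:

```python
# Concatenation of strings, the length law, and the empty string
# as the identity for concatenation.
x, y, epsilon = "01", "101", ""

assert x + y == "01101"
assert len(x + y) == len(x) + len(y)
assert epsilon + x == x + epsilon == x
```

Note that concatenation is not commutative: y + x here is "10101", a different string of the same length.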
Another example is C, or any other programming language, where the legal programs are a subset of the possible strings that can be formed from the alphabet of the language. However, there are also many other languages that appear when we study automata.
Some are abstract examples, such as: the language of all strings consisting of n 0's followed by n 1's, for some n >= 0; the set of strings of 0's and 1's with an equal number of each; and the set of binary numbers whose value is a prime. The only important constraint on what can be a language is that all alphabets are finite. Thus languages, although they can have an infinite number of strings, are restricted to consist of strings drawn from one fixed, finite alphabet. In automata theory, a problem is the question of deciding whether a given string is a member of some particular language.
For some strings, this decision is easy. For instance, a string that begins with 0 cannot be the representation of a prime, for the simple reason that every integer except 0 has a binary representation that begins with 1. However, it is less obvious whether a given string belongs to Lp, the language of binary representations of primes, so any solution to this problem will have to use significant computational resources of some kind: time, space, or both. Deciding membership in a language is also, for instance, the task of the parser in a C compiler. It is also common to replace w by some expression with parameters and describe the strings in the language by stating conditions on the parameters.
Here are some examples, the first with parameter n, the second with parameters i and j: the language {0^n 1^n | n >= 1} of n 0's followed by n 1's, and the language {0^i 1^j | 0 <= i <= j}. Notice that, as with alphabets, we can raise a single symbol to a power n in order to represent n copies of that symbol. The second language consists of strings with some 0's (possibly none) followed by at least as many 1's.
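A membership test for the second language, some 0's followed by at least as many 1's, takes only a single scan of the string (function name and structure are an illustrative sketch):

```python
# Membership in {0^i 1^j : 0 <= i <= j}: leading 0's, then only 1's,
# with at least as many 1's as 0's.
def in_language(w):
    i = len(w) - len(w.lstrip("0"))     # number of leading 0's
    rest = w[i:]                        # remainder must be all 1's
    return set(rest) <= {"1"} and i <= len(rest)

assert in_language("") and in_language("0111") and in_language("0011")
assert not in_language("00011") and not in_language("10")
```

By contrast, no finite automaton can decide this language, since recognizing it requires comparing unbounded counts; this distinction is developed in later chapters.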
However, the parser does more than decide. It produces a parse tree, entries in a symbol table, and perhaps more. In complexity theory, we are interested in proving lower bounds on the complexity of certain problems. Especially important are techniques for proving that certain problems cannot be solved in an amount of time that is less than exponential in the size of their input. Is It a Language or a Problem? Languages and problems are really the same thing. Which term we prefer to use depends on our point of view.
When we care only about strings for their own sake, we shall tend to use the term language. In those cases where we care more about the thing represented by the string than the string itself, we shall tend to think of a set of strings as a problem. That is, if we can prove it is hard to decide whether a given string belongs to the language LX of valid strings in programming language X, then it stands to reason that it will not be easier to translate programs in language X to object code.
For if it were easy to generate code, then we could run the translator, and conclude that the input was a valid member of LX exactly when the translator succeeded in producing object code.
We thus contradict the assumption that testing membership in LX is hard. This style of argument, reducing one question to another, is an essential tool in the study of the complexity of problems, and it is facilitated greatly by our notion that problems are questions about membership in a language, rather than more general kinds of questions. Finite Automata: Finite automata involve states and transitions among states in response to inputs. Regular Expressions: These are a structural notation for describing the same patterns that can be represented by finite automata.
They are used in many common types of software, including tools to search for patterns in text or in file names, for instance. Context-Free Grammars: These are an important notation for describing the structure of programming languages and related sets of strings; they are used to build the parser component of a compiler. Turing Machines: These are automata that model the power of real computers.
They allow us to study decidability, the question of what can or cannot be done by a computer. They also let us distinguish tractable problems (those that can be solved in polynomial time) from the intractable problems (those that cannot). Deductive Proofs: This basic method of proof proceeds by listing statements that are either given to be true, or that follow logically from some of the previous statements. Proving If-Then Statements: Deductive proofs of if-then statements begin with the hypothesis, and continue with statements that follow logically from the hypothesis and previous statements, until the conclusion is proved as one of the statements.
Proving the Contrapositive: The statement "if H then C" is logically equivalent to its contrapositive, "if not C then not H"; proving either proves the other. Proof by Contradiction: Another option is to assume both the hypothesis and the negation of the conclusion, and derive a falsehood. Counterexamples: Sometimes we are asked to show that a certain statement is not true. If the statement has one or more parameters, then we can show it is false as a generality by providing just one counterexample, that is, one assignment of values to the parameters that makes the statement false.
Inductive Proofs: A statement that has an integer parameter n can often be proved by induction on n. We prove the statement is true for the basis, a finite number of cases for particular values of n, and then prove the inductive step: that if the statement holds for values up through n, then it holds for n + 1. Structural Inductions: In some situations, including many in this book, the theorem to be proved inductively is about some recursively defined construct, such as trees. We may prove a theorem about the constructed objects by induction on the number of steps used in their construction.
This type of induction is referred to as structural. Alphabets and Strings: An alphabet is any finite set of symbols. A string is a finite-length sequence of symbols. Languages and Problems: A language is a (possibly infinite) set of strings, all of which choose their symbols from some one alphabet. When the strings of a language are to be interpreted in some way, the question of whether a string is in the language is sometimes called a problem.
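As a concrete sketch of these definitions (the particular language chosen here is illustrative, not from the text), a finite language over the binary alphabet can be represented directly as a set, and the membership "problem" becomes a set lookup:

```python
# Alphabet: the symbols 0 and 1.
alphabet = {"0", "1"}

# A finite language over this alphabet: the strings with equally
# many 0's and 1's, of length at most 4.
language = {"", "01", "10", "0011", "0101", "0110", "1001", "1010", "1100"}

def in_language(w):
    """Decide the membership problem: is the string w in the language?"""
    assert all(c in alphabet for c in w), "w must use only alphabet symbols"
    return w in language

print(in_language("0110"))  # -> True
print(in_language("000"))   # -> False

# The concatenation of strings X and Y is written XY; in Python, X + Y.
print("01" + "10")  # -> 0110
```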
Each of these problems is worked like conventional homework. The Gradiance system gives you four choices that sample your knowledge of the solution. If you make the wrong choice, you are given a hint or advice and encouraged to try the same problem again. A sample problem might ask you to prove a statement S(n) by induction, or to give the concatenation of two strings X and Y.
The exception is the problem of finding palindromes, which are strings that read the same when reversed, regardless of their numerical value. After an extended example that will provide motivation for the study to follow, we define finite automata formally.
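A palindrome test is easy to state as code; this sketch (illustrative, not from the text) checks binary strings:

```python
def is_palindrome(w):
    """True if the string w reads the same forwards and backwards."""
    return w == w[::-1]

print(is_palindrome("0110"))   # -> True
print(is_palindrome("11011"))  # -> True
print(is_palindrome("10"))     # -> False
```

Despite this one-line test, the book later shows that no finite automaton can recognize the set of palindromes.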
We conclude the chapter with a study of an extended automaton that has the additional choice of making a transition from one state to another spontaneously, i.e., without reading an input symbol. These extra transitions do not add language-defining power, but we shall find them quite important in Chapter 3, when we study regular expressions and their equivalence to automata.
The study of the regular languages continues in Chapter 3. There, we introduce another important way to describe regular languages: the regular expression. After discussing regular expressions, and showing their equivalence to finite automata, we use both automata and regular expressions as tools in Chapter 4 to show certain important properties of the regular languages.
The latter are algorithms to answer questions about automata or regular expressions, e.g., whether two descriptions define the same language.

The seller must know that the file has not been forged, nor has it been copied and sent to the seller while the customer retains a copy of the same file to spend again. The nonforgeability of the file is something that must be assured by a bank and by a cryptography policy.
However, the bank has a second important job: it must make sure that the same money cannot be spent twice. In order to use electronic money, protocols need to be devised to allow the manipulation of the money in a variety of ways that the users want. Because monetary systems always invite fraud, we must verify whatever policy we adopt regarding how money is used. In the balance of this section, we shall introduce a very simple example of a bad electronic-money protocol, model it with finite automata, and show how constructions on automata can be used to verify protocols (or, in this case, to discover that the protocol has a bug).
There are three participants: the customer, the store, and the bank. The customer may decide to transfer the money file to the store, which will then redeem the file from the bank, i.e., get the bank to issue a new money file belonging to the store.
In addition, the customer has the option to cancel the file. That is, the customer may ask the bank to place the money back in the customer's account, making the money no longer spendable. Interaction among the three participants is thus limited to five events:

1. The customer may decide to pay. That is, the customer sends the money to the store.
2. The customer may decide to cancel. The money is sent to the bank with a message that the value of the money is to be added to the customer's bank account.
3. The store may ship goods to the customer.
4. The store may redeem the money. That is, the money is sent to the bank with a request that its value be given to the store.
5. The bank may transfer the money by creating a new, suitably encrypted money file and sending it to the store.

The three participants must design their behaviors carefully, or the wrong things may happen. In our example, we make the reasonable assumption that the customer cannot be relied upon to act responsibly.
The bank, in particular, must make sure that two stores cannot both redeem the same money file, and it must not allow money to be both canceled and redeemed.
The store should be careful as well. In particular, it should not ship goods until it is sure it has been given valid money for the goods.
Protocols of this type can be represented as finite automata. Each state represents a situation that one of the participants could be in. Transitions between states occur when one of the five events described above occurs. It turns out that what is important about the problem is what sequences of events can happen, not who is allowed to initiate them (see Figure 2). The bank does not know that the money has been sent by the customer to the store; it discovers that fact only when the store executes the action redeem.
Let us examine first the automaton for the bank. The start state is state 1; it represents the situation where the bank has issued the money file in question but has not been requested either to redeem it or to cancel it.

[Figure: Finite automata representing a customer, a store, and a bank]

If a cancel request is sent to the bank by the customer, then the bank restores the money to the customer's account and enters state 2.
The latter state represents the situation where the money has been cancelled. The bank, being responsible, will not leave state 2 once it is entered, since the bank must not allow the same money to be cancelled again or spent by the customer. If, instead, the bank receives a redeem request from the store, it goes to state 3, and shortly sends the store a transfer message, with a new money file that now belongs to the store.
After sending the transfer message, the bank goes to state 4. In that state, it will accept neither cancel nor redeem requests, nor will it perform any other actions regarding this particular money file. Now, let us consider the automaton for the store. While the bank always does the right thing, the store's system has some defects.
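The bank's four states can be modeled directly as a transition table. This is an illustrative sketch: the state numbers and event names come from the description above, but the code itself is not from the text.

```python
# Transition table for the bank automaton described above.
# State 1: money issued, neither redeemed nor canceled (start state).
# State 2: money canceled.  State 3: redeem received.  State 4: transfer sent.
BANK = {
    1: {"cancel": 2, "redeem": 3},
    2: {},               # canceled money stays canceled
    3: {"transfer": 4},
    4: {},               # transaction complete; nothing more happens
}

def run_bank(events, state=1):
    """Feed a sequence of events to the bank.  Events with no arc are
    ignored, i.e., treated as the self-loops discussed later on."""
    for e in events:
        state = BANK[state].get(e, state)
    return state

print(run_bank(["redeem", "transfer"]))            # -> 4
print(run_bank(["cancel", "redeem", "transfer"]))  # -> 2
```

Note how canceled money is trapped in state 2: later redeem and transfer events leave the bank unchanged, exactly the responsible behavior described above.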
Imagine that the shipping and financial operations are done by separate processes, so there is the opportunity for the ship action to be done either before, after, or during the redemption of the electronic money.
That policy allows the store to get into a situation where it has already shipped the goods and then finds out the money was bogus. (The bank will in fact be running the same protocol with a large number of electronic pieces of money, but the workings of the protocol are the same for each of them, so we can discuss the problem as if there were only one piece of electronic money in existence.) The store starts out in state a. When the customer pays, the store enters state b; in this state, the store begins both the shipping and redemption processes.
If the goods are shipped first, then the store enters state c, where it must still redeem the money from the bank and receive the transfer of an equivalent money file from the bank. Alternatively, the store may send the redeem message first, entering state d. From state d, the store might next ship, entering state e, or it might next receive the transfer of money from the bank, entering state f.
From state f, we expect that the store will eventually ship, putting the store in state g, where the transaction is complete and nothing more will happen. In state e, the store is waiting for the transfer from the bank. Unfortunately, the goods have already been shipped, and if the transfer never occurs, the store is out of luck. Last, observe the automaton for the customer. While the three automata reflect the behaviors of the three participants in isolation, certain transitions are missing.
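The store's seven states can likewise be sketched as a transition table. The state letters and arcs are reconstructed from the description above; the code itself is illustrative, not from the text.

```python
# Transition table for the store automaton: states a through g.
STORE = {
    "a": {"pay": "b"},
    "b": {"ship": "c", "redeem": "d"},
    "c": {"redeem": "e"},
    "d": {"ship": "e", "transfer": "f"},
    "e": {"transfer": "g"},
    "f": {"ship": "g"},
    "g": {},   # transaction complete
}

def run_store(events, state="a"):
    """Follow a sequence of events; events with no arc are ignored."""
    for e in events:
        state = STORE[state].get(e, state)
    return state

# Shipping before the transfer arrives is the risky path through state e:
print(run_store(["pay", "redeem", "ship"]))              # -> e
print(run_store(["pay", "redeem", "transfer", "ship"]))  # -> g
```

In the first run the goods are gone while the store still waits for the transfer; in the second, the store ships only after the money has arrived.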
However, in the formal definition of a finite automaton, which we shall study in Section 2, an automaton must be prepared to react to every event, in every state. Thus, the automaton for the store needs an additional arc from each state to itself, labeled cancel.
Another potential problem is that one of the participants may, intentionally or erroneously, send an unexpected message, and we do not want this action to cause one of the automata to die. For instance, suppose the customer decided to execute the pay action a second time, while the store was in state e. Since that state has no arc out with label pay, the store's automaton would die before it could receive the transfer from the bank.
In summary, we must add loops to the automata, with suitable labels. The two kinds of actions that must be ignored are:

1. Actions that are irrelevant to the participant involved. As we saw, the only irrelevant action for the store is cancel, so each of its seven states has a loop labeled cancel. For the bank, both pay and ship are irrelevant, so we have put at each of the bank's states an arc labeled pay, ship. For the customer, ship, redeem, and transfer are all irrelevant, so we add arcs with these labels. Of course, the customer is still a participant, since it is the customer who initiates the pay and cancel actions. However, as we mentioned, the matter of who initiates actions has nothing to do with the behavior of the automata.

2. Actions that must not be allowed to kill an automaton. As mentioned, we must not allow the customer to kill the store's automaton by executing pay again, so we have added loops with label pay to all but state a, where the pay action is expected and relevant.

[Figure: The complete sets of transitions for the three automata]
We have also added loops with label cancel to states 3 and 4 of the bank, in order to prevent the customer from killing the bank's automaton by trying to cancel money that has already been redeemed.
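The kind of verification this section describes can be sketched as a product construction: run the bank and the store automata jointly on the same sequence of events, treating missing arcs as the self-loops just discussed, and look for bad joint states. The transition tables are reconstructed from the descriptions above; the code itself is illustrative, not from the text.

```python
# Bank states 1..4 and store states a..g, as described in the text.
BANK = {1: {"cancel": 2, "redeem": 3}, 2: {}, 3: {"transfer": 4}, 4: {}}
STORE = {"a": {"pay": "b"}, "b": {"ship": "c", "redeem": "d"},
         "c": {"redeem": "e"}, "d": {"ship": "e", "transfer": "f"},
         "e": {"transfer": "g"}, "f": {"ship": "g"}, "g": {}}

def step(table, state, event):
    # Missing arcs behave as self-loops: the event is ignored.
    return table[state].get(event, state)

def run_product(events):
    """Simulate bank and store jointly on one sequence of events."""
    b, s = 1, "a"
    for e in events:
        b, s = step(BANK, b, e), step(STORE, s, e)
    return b, s

# The bug: the customer pays the store but also cancels with the bank.
# The store ships and redeems, but the bank has canceled the money, so
# the transfer never happens and the store is stuck in state e.
print(run_product(["pay", "cancel", "ship", "redeem"]))  # -> (2, 'e')
```

Reaching the joint state (2, e), bank-canceled but store-shipped, is exactly the flaw the section set out to discover.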
The bank properly ignores such a request. Likewise, states 3 and 4 have loops on redeem.