Pick two subjects from each of Chapters 1, 2, and 3 and write a three-page paper discussing all of them
Question
Pick two subjects from each of Chapters 1, 2, and 3.
Write a three-page paper discussing all chosen subjects, with explanations.
Here are the topics I chose:
1.4 Compilation and Interpretation.
1.5 Programming Environments.
2.4 Theoretical Foundations
2.4.1 Finite Automata
2.4.2 Push-Down Automata
2.4.3 Grammar and Language Classes
2.1 Specifying Syntax: Regular Expressions and Context-Free Grammars
2.1.1 Tokens and Regular Expressions
2.1.2 Context-Free Grammars
2.1.3 Derivations and Parse Trees
Ch-3 3.1 The Notion of Binding Time
3.3 Scope Rules
3.3.1 Static Scoping
3.3.2 Nested Subroutines
3.3.3 Declaration Order
3.3.4 Modules
3.3.5 Module Types and Classes
3.3.6 Dynamic Scoping
Finally, here is an example for better understanding:
write 1 - the main subject (Specifying Syntax: Regular Expressions), then
2 - a specific topic (2.1.1 Tokens and Regular Expressions) or (2.1.2 Context-Free Grammars), with the explanation and details.
Explanation / Answer
Compilation and Interpretation
Programming languages are usually implemented by interpreters or compilers, or some mix of both (C, for example, is usually compiled). In reality, almost all language implementations are a mix of the two, to at least a small degree, and the line between them is surprisingly fuzzy.
Definition of Compiler:
In a compiled implementation of a language, a compiler will translate the program directly into code that is specific to the target machine, which is also known as machine code – basically code that is specific to a given processor and operating system. Then the computer will run the machine code on its own.
A pure compiler reads the source text of a program, and translates it into machine code that will have the effect of executing the program when it is run. A big advantage of compilers is that they can read through and analyze the source program once, and generate code that you can run to give the same effect as interpreting the program. Rather than analyzing each expression each time they encounter it, compilers do the analysis once, but record the actions an interpreter would take at that point in the program.
Definition of Interpreter:
In an interpreted implementation of a language, the source code is not directly run by the target machine. What happens instead is that another program reads and then executes the original source code. This other program is also known as the interpreter.
A pure interpreter reads the source text of a program, analyzes it, and executes it as it goes. This is usually very slow--the interpreter spends a lot of time analyzing strings of characters to figure out what they mean. A pure interpreter must recognize and analyze each expression in the source text each time it is encountered, so that it knows what to do next. This is pretty much how most command shell languages work, including UNIX shells and Tcl.
In effect, a compiler is a weird kind of interpreter, which "pretends" to interpret the program, and records what an interpreter would do. It then goes through its record of actions the interpreter would take, and spits out instructions whose effect is the same as what the interpreter would have done. Most of the decision-making that the interpreter does--like figuring out that an expression is an assignment expression, or a procedure call--can be done at compile time, because the expression is the same each time it's encountered in running the program.
The compiler's job is to do the work that's always the same, and spit out instructions that will do the "real work" that can only be done at runtime, because it depends on the actual data that the program is manipulating. For example, an if statement is always an if statement each time it's encountered, so that analysis can be done once. But which branch will be taken depends on the runtime value of an expression, so the compiler must emit code to test the value of the expression, and take the appropriate branch.
Most real interpreters are somewhere in between pure interpreters and compilers. They read through the source code for a program once, and translate it into an "intermediate representation" that's easier to work with--a data structure of some kind--and then interpret that. Rather than stepping through strings of source text, they step through a data structure that represents that source text in a more convenient form, which is much faster to operate on. That is, they do some analysis once, while converting the source text into a data structure, and the rest as they execute the program by stepping through the data structure.
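The "in-between" approach above can be sketched in a few lines of Python. This is a minimal illustration, not any particular language's implementation: the nested-tuple format and the `evaluate` helper are assumptions made for the example. Expressions are analyzed once into a tree-shaped intermediate representation, and the interpreter then steps through that data structure instead of re-reading source text.

```python
# A sketch of interpreting an intermediate representation: nested tuples
# such as ('+', 1, ('*', 2, 3)) stand in for a parsed expression tree.
# All names and the tuple format are illustrative assumptions.

def evaluate(node):
    """Walk an expression tree; the parsing analysis was done when it was built."""
    if isinstance(node, (int, float)):        # a literal needs no further analysis
        return node
    op, left, right = node                    # structure is already known: no string scanning
    a, b = evaluate(left), evaluate(right)
    if op == '+':
        return a + b
    if op == '*':
        return a * b
    raise ValueError(f"unknown operator: {op}")

# ('+', 1, ('*', 2, 3)) represents 1 + 2 * 3
print(evaluate(('+', 1, ('*', 2, 3))))        # 7
```

A compiler, by contrast, would walk the same tree once and emit instructions that compute the result later, rather than computing it on the spot.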
There are four good reasons for studying a Scheme interpreter, written in Scheme, as an example program:
1. A simple interpreter really is simple, but it can show off some of the handy features of Scheme. It's a good example of Scheme programming.
2. Most serious programs include some kind of command interpreter, so every programmer should know how to write a decent one. Often, the command interpreter has a tremendous impact on the usability and power of a system, and too many programs have bad ones.
3. Understanding how a Scheme interpreter works may clarify language issues. It gives you a nice, concrete understanding of what Scheme does when it encounters an expression, so you know what your programs will do--for example, it'll be obvious when you need a quote, or parentheses, and when you don't.
4. Every programmer should understand the basics of how a compiler works. Understanding a Scheme interpreter gets you half-way to understanding a Scheme compiler. A Scheme compiler is really very much like a Scheme interpreter--it analyzes Scheme expressions and figures out what to do. The main difference between an interpreter and a compiler is just that when an interpreter figures out what to do, it does it immediately, while a compiler records what to do when you run the program later.
Definition of Finite Automata
A finite automaton (FA) is a simple idealized machine used to recognize patterns within input taken from some character set (or alphabet) C. The job of an FA is to accept or reject an input depending on whether the pattern defined by the FA occurs in the input.
A finite automaton consists of:
1. a finite set S of N states
2. a special start state
3. a set of final (or accepting) states
4. a set of transitions T from one state to another, labeled with chars in C
As noted above, we can represent an FA graphically, with nodes for states and arcs for transitions.
We execute our FA on an input sequence as follows:
1. Begin in the start state
2. If the next input char matches the label on a transition from the current state to a new state, go to that new state
3. Continue making transitions on each input char
4. If no move is possible, then stop
5. If, when the input is exhausted, the FA is in an accepting state, then accept; otherwise reject
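The execution steps above can be sketched directly in Python. The transition table, state names, and example language (strings over {a, b} ending in "ab") are my own illustration, not from the text.

```python
# A sketch of executing a finite automaton: follow labeled transitions,
# stop and reject if no move is possible, accept only if the run ends
# in an accepting state. Table and example language are assumptions.

def run_fa(transitions, start, accepting, text):
    state = start                             # 1. begin in the start state
    for ch in text:
        if (state, ch) not in transitions:    # no move possible: stop and reject
            return False
        state = transitions[(state, ch)]      # 2.-3. take the labeled transition
    return state in accepting                 # accept iff we end in an accepting state

# FA accepting strings over {a, b} that end in "ab"
T = {('s0', 'a'): 's1', ('s0', 'b'): 's0',
     ('s1', 'a'): 's1', ('s1', 'b'): 's2',
     ('s2', 'a'): 's1', ('s2', 'b'): 's0'}

print(run_fa(T, 's0', {'s2'}, 'aab'))   # True
print(run_fa(T, 's0', {'s2'}, 'aba'))   # False
```

Note how the dictionary keyed on (state, char) pairs plays the role of the labeled arcs in the graphical representation.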
Pushdown Automaton
A pushdown automaton (PDA) is a way to recognize a context-free language, much as we design a DFA to recognize a regular language. A DFA can remember only a finite amount of information, but a PDA, thanks to its stack, can remember an unbounded amount of information.
Basically, a pushdown automaton is:
"Finite state machine" + "a stack"
A pushdown automaton has three components:
1. an input tape,
2. a control unit, and
3. a stack with infinite size.
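The "finite state machine + a stack" idea can be made concrete with a recognizer for the classic language aⁿbⁿ (n ≥ 1), which no finite automaton can handle because it requires counting without bound. The control logic below is a hand-written illustration of one specific PDA, not a general PDA engine.

```python
# A sketch of a PDA for a^n b^n: a two-state finite control plus a stack.
# The stack remembers how many 'a's were read, which a plain FA cannot do.

def accepts_anbn(text):
    stack = []
    state = 'push'                     # finite control: first read a's, then b's
    for ch in text:
        if state == 'push' and ch == 'a':
            stack.append('A')          # remember each 'a' on the stack
        elif ch == 'b' and stack:
            state = 'pop'              # once we see 'b', only b's may follow
            stack.pop()                # match one 'b' against one stored 'a'
        else:
            return False               # out-of-order symbol or too many b's
    return state == 'pop' and not stack   # every 'a' matched, nothing left over

print(accepts_anbn('aaabbb'))   # True
print(accepts_anbn('aabbb'))    # False
```

The input string plays the role of the input tape, the `state` variable is the control unit, and the Python list is the (conceptually unbounded) stack.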
Specifying Syntax
2.1.2 Context-Free Grammar
A context-free grammar (CFG) is a set of recursive rewriting rules (or productions) used to generate patterns of strings.
A CFG consists of the following components:
1. a set of terminal symbols, which are the characters of the alphabet that appear in the strings generated by the grammar.
2. a set of nonterminal symbols, which are placeholders for patterns of terminal symbols that can be generated by the nonterminal symbols.
3. a set of productions, which are rules for replacing (or rewriting) nonterminal symbols (on the left side of the production) in a string with other nonterminal or terminal symbols (on the right side of the production).
4. a start symbol, which is a special nonterminal symbol that appears in the initial string generated by the grammar.
To generate a string of terminal symbols from a CFG, we:
1. Begin with a string consisting of the start symbol;
2. Apply one of the productions with the start symbol on the left-hand side, replacing the start symbol with the right-hand side of the production;
3. Repeat the process of selecting nonterminal symbols in the string and replacing them with the right-hand side of some corresponding production, until all nonterminals have been replaced by terminal symbols.
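The three-step procedure above can be sketched as a deterministic leftmost derivation of the string "(())" under an assumed toy grammar S → ( S ) | ε. The grammar, rule names, and `rewrite` helper are my own illustration.

```python
# A sketch of generating a string from a CFG by repeatedly rewriting the
# leftmost nonterminal. Grammar: S -> ( S )  |  (empty). Names are assumed.

GRAMMAR = {'S': {'wrap': ['(', 'S', ')'], 'empty': []}}

def rewrite(sentential, rule_name):
    """Replace the leftmost nonterminal with the chosen production's right side."""
    i = next(i for i, s in enumerate(sentential) if s in GRAMMAR)
    return sentential[:i] + GRAMMAR[sentential[i]][rule_name] + sentential[i + 1:]

form = ['S']                       # 1. begin with the start symbol
form = rewrite(form, 'wrap')       # 2. S => ( S )
form = rewrite(form, 'wrap')       # 3. ( S ) => ( ( S ) )
form = rewrite(form, 'empty')      #    ( ( S ) ) => ( ( ) )
print(''.join(form))               # (())
```

Each intermediate list of symbols is a sentential form; the derivation ends when only terminal symbols remain.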
2.1.3 Derivations and Parse Trees
Parsing is the process where we take a particular sequence of words and check to see if they correspond to the language defined by some grammar.
Basically what we have to do is to show how we can get:
from the start symbol of the grammar
to the sequence of words that supposedly belong to the language
by using the production rules
We can do this by constructing the appropriate parse tree for the sequence of words, which is simply a graphical way of listing the production rules that we have used.
There are two ways of constructing a parse tree whose root is the start symbol and whose leaves are the sentence: top-down and bottom-up.
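As a small illustration of the top-down approach, here is a recursive-descent parser that builds a parse tree (as nested tuples) for an assumed toy grammar S → ( S ) | a. The grammar and all names are illustrative, not from the text.

```python
# A sketch of top-down parse-tree construction for S -> ( S ) | a.
# Each tuple records the production used, so the tree lists the rules applied.

def parse_S(tokens, pos=0):
    """Return (subtree, next_position); raise if the input is not in the language."""
    if pos < len(tokens) and tokens[pos] == 'a':
        return ('S', 'a'), pos + 1                 # used production S -> a
    if pos < len(tokens) and tokens[pos] == '(':
        inner, pos = parse_S(tokens, pos + 1)      # recursively parse the inner S
        if pos < len(tokens) and tokens[pos] == ')':
            return ('S', '(', inner, ')'), pos + 1 # used production S -> ( S )
    raise SyntaxError("sequence is not in the language")

tree, end = parse_S(list('((a))'))
print(tree)   # ('S', '(', ('S', '(', ('S', 'a'), ')'), ')')
```

Reading the tuples from the root down reproduces exactly the leftmost derivation of the sentence, which is what a parse tree encodes.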
The Notion of Binding Time
3.3.1 Static Scoping
Lexical scoping (sometimes known as static scoping ) is a convention used with many programming languages that sets the scope (range of functionality) of a variable so that it may only be called (referenced) from within the block of code in which it is defined. The scope is determined when the code is compiled. A variable declared in this fashion is sometimes called a private variable.
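Python itself is lexically scoped, so it can illustrate the rule directly: which declaration a name refers to is fixed by where the code is written, not by who calls it. The function and variable names below are my own illustration.

```python
# Under static (lexical) scoping, inner() sees the x of its enclosing
# block, regardless of what the caller has declared.

x = "global"

def outer():
    x = "outer"          # visible only within outer's block and its nested functions
    def inner():
        return x         # resolved lexically: the enclosing outer(), not the caller
    return inner()

def caller():
    x = "caller"         # irrelevant under static scoping
    return outer()

print(caller())          # outer
```

The binding is determined from the program text alone, which is why the resolution can be done when the code is compiled.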
3.3.6 Dynamic Scoping
The opposite approach to static scoping is known as dynamic scoping. Under dynamic scoping, a name refers to the most recently established binding that is still active on the call stack, so a variable can be referenced from code outside the block in which it was declared, as long as that block is still executing. A variable declared in this fashion is sometimes called a public variable.
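Python does not offer dynamic scoping, but we can simulate it with an explicit stack of bindings to show the contrast with the previous example: lookups search the chain of active calls, newest first. All names here are assumptions made for the illustration.

```python
# A simulation of dynamic scoping: a global stack of (name, value) bindings
# that grows on call and shrinks on return. Lookup finds the newest binding.

bindings = []                          # stack of (name, value) pairs, newest last

def lookup(name):
    for n, v in reversed(bindings):    # most recent active binding wins
        if n == name:
            return v
    raise NameError(name)

def show():
    return lookup('x')                 # no local 'x': search the call-stack bindings

def caller():
    bindings.append(('x', 'from caller'))   # "declare" x for this call's lifetime
    try:
        return show()
    finally:
        bindings.pop()                 # the binding dies when the call returns

bindings.append(('x', 'global'))
print(show())      # global
print(caller())    # from caller
```

Under static scoping both calls would print the same thing; under dynamic scoping the answer depends on who is on the call stack at lookup time.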