Interpreter (computing)

From Wikipedia, the free encyclopedia

In computer science, an interpreter is a computer program that directly executes instructions written in a programming or scripting language, without requiring them to have been previously compiled into a machine-language program. An interpreter generally uses one of the following strategies for program execution:

  1. parse the source code and perform its behavior directly
  2. translate source code into some efficient intermediate representation and immediately execute this
  3. explicitly execute stored precompiled code[1] made by a compiler which is part of the interpreter system

Early versions of the Lisp programming language and Dartmouth BASIC are examples of the first type. Perl, Python, MATLAB, and Ruby are examples of the second, while UCSD Pascal is an example of the third type: source programs are compiled ahead of time and stored as machine-independent code, which is then linked at run time and executed by an interpreter and/or compiler (for JIT systems). Some systems, such as Smalltalk, contemporary versions of BASIC, Java and others, may also combine the second and third strategies.
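
As an illustration of the first strategy, the following Python sketch parses each source line and performs its behavior immediately, with no intermediate representation. The two-statement language (LET, PRINT) is invented for the example and does not correspond to any particular BASIC dialect.

    # Run a tiny two-statement language by parsing each line and acting
    # on it immediately (strategy 1: no intermediate representation).
    def run(source: str) -> None:
        variables = {}                       # run-time state
        for line in source.splitlines():
            tokens = line.split()
            if not tokens:
                continue
            if tokens[0] == "LET":           # e.g.  LET x = 3
                variables[tokens[1]] = int(tokens[3])
            elif tokens[0] == "PRINT":       # e.g.  PRINT x
                print(variables[tokens[1]])
            else:
                raise SyntaxError(f"unknown statement: {line!r}")

    run("LET x = 3\nPRINT x")                # prints 3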

While interpretation and compilation are the two main means by which programming languages are implemented, they are not mutually exclusive, as most interpreting systems also perform some translation work, just like compilers. The terms "interpreted language" and "compiled language" signify only that the canonical implementation of that language is an interpreter or a compiler, respectively; a high-level language is ideally an abstraction independent of particular implementations.

History

The first interpreted high-level language was probably Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code.[2] The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".

Compilers versus interpreters

Illustration of the linking process: object files and static libraries are assembled into a new library or executable.

Programs are usually written in high-level code, which must be converted into machine code for the CPU to execute it. This conversion is done by either a compiler (and a linker) or an interpreter. A compiler produces binary machine code that can be processed into a directly executable form, but it usually proceeds by first producing an intermediate binary form called object code. Object code contains a symbol table (names, tags) and the relocatable binary modules (the processes) identified in that table. Most often, these functions call on building-block functions kept in a library of object code modules; such library functions are built into an interpreter's run-time environment as well. Object files are then stitched together by a process called linking.

In an interpreter, similar binary blocks, each implementing one of the high-level language's single instructions, are prepared and stored in advance, then executed when an instruction is looked up in a table pointing to the corresponding code. These blocks are ready for input in a certain way and place (memory address), which is provided by the program doing the linking: in this case, the interpreter itself. Furthermore, interpreters generally have a self-contained editor built in, which makes source files simpler to keep straight, while compilers need a separate source code editor.

In an interpreter, however, the program logic is kept in the source code; only low-level functions (multiplying or dividing two floating-point numbers, converting degrees to radians, copying a string value) exist in the aforementioned binary blocks, and in executing those the interpreter must work out the process on the fly each time, with a possibly unique set of input values. This is inherently slower for almost any real-world computation, such as evaluating the planar equation of motion describing the parabolic path of a projectile in a gravitational field.

So compilers read source code (text files), parse it, and produce both binary code and lookup tables to be linked later. An interpreter takes a source code file, parses the commands out of it (turning them into tokens), deciphers the command sequence, and determines the values (operands) that it feeds, in the correct manner, to pre-saved binary building-block modules (functions), which it finds through a lookup table keyed on the token sequence it built when it parsed the section of source code. In early interpreters, the section of source code was a single text line; today's software is more flexible.
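
The following Python sketch illustrates the dispatch mechanism just described: a source line is broken into tokens, the command token is looked up in a table, and the matching pre-built routine (standing in for a binary building-block module) is called with the remaining tokens as operands. The command names are invented for the example.

    import math

    # Pre-built routines: the interpreter's "binary building blocks".
    def cmd_sqrt(args): print(math.sqrt(float(args[0])))
    def cmd_add(args):  print(float(args[0]) + float(args[1]))

    DISPATCH = {"SQRT": cmd_sqrt, "ADD": cmd_add}   # the lookup table

    def interpret_line(line: str) -> None:
        command, *operands = line.split()    # tokenize
        DISPATCH[command](operands)          # table lookup, then execute

    interpret_line("SQRT 2")                 # 1.4142135623730951
    interpret_line("ADD 1 2")                # 3.0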

At the compilation stage, compilers in fact act like interpreters, patching together binary executables from an object code library that defines which binary code sequence carries which command name. Compiling and linking are generally more complicated in programming practice, but the resulting code runs faster by far, and compilers are designed to optimize code. An interpreter's function code is external to the source code and has to be in memory at an address the interpreter program can access, just as an executable's binary code does.

In compilation, the executable code output by the linker (.exe files, .dll files, or a library; see the linking illustration above), like the object code, is relocatable in terms of memory addresses. Relocatability is the key factor for machine-level code: if one part of a program cannot find the next part of its series of processes, the program is broken.

With respect to source code, a compiler makes the conversion just once, while an interpreter typically converts it every time the program is executed.

Development cycle

During the software development cycle, programmers make frequent changes to source code. When using a compiler, each time a change is made the programmer must wait for the compiler to translate the altered source files and link all of the binary code files together before the program can be executed. The larger the program, the longer the wait. By contrast, a programmer using an interpreter does far less waiting, since the interpreter usually needs only to translate the code being worked on into an intermediate representation (or not translate it at all); the effects of a change are evident as soon as the source code is saved and the program reloaded.

Compiled code is generally less readily debugged in this respect because editing, compiling, and linking are sequential processes that must be conducted in the proper order with the proper set of commands. For this reason, many compilers also have an executive aid known as a makefile and make program. The makefile lists compiler and linker command lines and the program's source code files, and may accept a simple command-line menu input (e.g. "Make 3") that selects the third group (set) of instructions, then issues the commands to the compiler and linker with the specified source code files. In a WYSIWYG environment, the make groups might manifest as buttons.

Distribution

A compiler converts source code into binary instructions for a specific processor's architecture, thus making it less portable. This conversion is made just once, in the developer's environment, after which the same binary can be distributed to users' machines, where it can be executed without further translation. A cross compiler can generate binary code for the user's machine even if it has a different processor than the machine on which the code is compiled.

An interpreted program can be distributed as source code. It needs to be translated on each final machine, which takes more time but makes the program's distribution independent of the machine's architecture. However, the portability of interpreted source code depends on the target machine actually having a suitable interpreter. If the interpreter must be supplied along with the source, the overall installation process is more complex than delivery of a monolithic executable, since the interpreter itself is part of what needs to be installed.

The fact that interpreted code can easily be read and copied by humans can be of concern from the point of view of copyright. However, various systems of encryption and obfuscation exist. Delivery of intermediate code, such as bytecode, has a similar effect to obfuscation, but bytecode could be decoded with a decompiler or disassembler.[citation needed]

Efficiency

The main disadvantage of interpreters is that an interpreted program typically runs more slowly than if it had been compiled. The difference in speed can be small or large, often an order of magnitude and sometimes more. It generally takes longer to run a program under an interpreter than to run the compiled code, but it can take less time to interpret it than the total time required to compile and run it. This is especially important when prototyping and testing code, when an edit-interpret-debug cycle can often be much shorter than an edit-compile-run-debug cycle.[citation needed]

Interpreting code is slower than running the compiled code because the interpreter must analyze each statement in the program each time it is executed and then perform the desired action, whereas the compiled code just performs the action within a fixed context determined by the compilation. This run-time analysis is known as "interpretive overhead". Access to variables is also slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.[citation needed]
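
The interpretive overhead of repeated identifier lookup can be made concrete with a small Python experiment (illustrative only; absolute timings will vary by machine). The first function resolves its names through a dictionary on every call, roughly as a naive interpreter does, while the second binds them once, as compiled code binds variables to fixed storage locations:

    import timeit

    env = {"x": 2, "y": 3}                   # an interpreter's variable table

    def interpreted_step():
        return env["x"] * env["y"]           # name-to-storage lookup every run

    def compiled_step(x=2, y=3):
        return x * y                         # names bound to fixed local slots

    print(timeit.timeit(interpreted_step, number=1_000_000))
    print(timeit.timeit(compiled_step, number=1_000_000))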

There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.[citation needed]

Many interpreters do not execute the source code as it stands but convert it into some more compact internal form. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table. A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
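
The following Python sketch illustrates byte-token compaction of the kind just described: keywords are replaced by single-byte tokens when the line is stored, and at execution time each token indexes a jump table of routines. The token values and the toy jump table are invented for the example.

    KEYWORDS = {"PRINT": 0x91, "GOTO": 0x89, "LET": 0x88}  # tokenizer table
    JUMP_TABLE = {0x91: lambda args: print(*args)}         # token -> routine

    def tokenize(line: str) -> list:
        # Replace each keyword with its one-byte token; keep other words.
        return [KEYWORDS.get(word, word) for word in line.split()]

    stored = tokenize("PRINT hello world")   # [0x91, 'hello', 'world']
    token, *args = stored
    JUMP_TABLE[token](args)                  # prints: hello world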

An interpreter might well use the same lexical analyzer and parser as the compiler and then interpret the resulting abstract syntax tree.
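
Python's standard library makes this arrangement easy to demonstrate: the ast module exposes the same parser the CPython compiler uses, and a small walker can then evaluate the resulting tree. This sketch handles only numeric +, - and *:

    import ast

    def evaluate(node):
        # Walk the tree that the compiler's own parser produced.
        if isinstance(node, ast.Expression):
            return evaluate(node.body)
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.BinOp):
            left, right = evaluate(node.left), evaluate(node.right)
            if isinstance(node.op, ast.Add):  return left + right
            if isinstance(node.op, ast.Sub):  return left - right
            if isinstance(node.op, ast.Mult): return left * right
        raise ValueError(f"unsupported node: {ast.dump(node)}")

    tree = ast.parse("1 + 2 * 3", mode="eval")   # shared front end
    print(evaluate(tree))                        # 7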

Regression

Interpretation cannot be used as the sole method of execution: even though an interpreter can itself be interpreted and so on, a directly executed program is needed somewhere at the bottom of the stack because the code being interpreted is not, by definition, the same as the machine code that the CPU can execute.[3][4]

Variations

Bytecode interpreters

There is a spectrum of possibilities between interpreting and compiling, depending on the amount of analysis performed before the program is executed. For example, Emacs Lisp is compiled to bytecode, which is a highly compressed and optimized representation of the Lisp source, but is not machine code (and therefore not tied to any particular hardware). This "compiled" code is then interpreted by a bytecode interpreter (itself written in C). The compiled code in this case is machine code for a virtual machine, which is implemented not in hardware, but in the bytecode interpreter. The same approach is used with the Forth code used in Open Firmware systems: the source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.[citation needed]
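
A minimal stack-based virtual machine makes the bytecode idea concrete. The four-instruction set below (PUSH, ADD, MUL, PRINT) is invented for this sketch; real bytecodes such as Emacs Lisp's or F code are far richer.

    PUSH, ADD, MUL, PRINT = range(4)     # opcode numbers are arbitrary

    def run(bytecode):
        stack, pc = [], 0
        while pc < len(bytecode):
            op = bytecode[pc]; pc += 1
            if op == PUSH:               # operand follows the opcode
                stack.append(bytecode[pc]); pc += 1
            elif op == ADD:
                b, a = stack.pop(), stack.pop(); stack.append(a + b)
            elif op == MUL:
                b, a = stack.pop(), stack.pop(); stack.append(a * b)
            elif op == PRINT:
                print(stack.pop())

    # (1 + 2) * 3, compiled by hand into the toy bytecode:
    run([PUSH, 1, PUSH, 2, ADD, PUSH, 3, MUL, PRINT])   # prints 9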

Control tables, which do not necessarily ever need to pass through a compiling phase, dictate appropriate algorithmic control flow via customized interpreters, in similar fashion to bytecode interpreters.
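
A small Python sketch shows the idea: the table, not the code, dictates the control flow. Here a hypothetical control table drives a turnstile-style state machine, and changing behavior means editing table entries rather than code.

    CONTROL_TABLE = {
        # (state, event)      -> next state
        ("locked",   "coin"): "unlocked",
        ("locked",   "push"): "locked",
        ("unlocked", "push"): "locked",
        ("unlocked", "coin"): "unlocked",
    }

    def run(events, state="locked"):
        for event in events:
            state = CONTROL_TABLE[(state, event)]   # the table decides
            print(f"{event} -> {state}")

    run(["coin", "push", "push"])   # unlocked, locked, locked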

Abstract Syntax Tree interpreters

In the spectrum between interpreting and compiling, another approach is to transform the source code into an optimized abstract syntax tree (AST), then execute the program following this tree structure, or use it to generate native code just-in-time.[5] In this approach, each statement needs to be parsed just once. As an advantage over bytecode, the AST keeps the global program structure and the relations between statements (which are lost in a bytecode representation), and when compressed it provides a more compact representation.[6] Thus, the AST has been proposed as a better intermediate format for just-in-time compilers than bytecode. It also allows the system to perform better analysis at run time.

However, for interpreters an AST causes more overhead than bytecode: purely syntactic nodes perform no useful work, the representation is less sequential (requiring the traversal of more pointers), and visiting the tree itself carries overhead.[7]

Just-in-time compilation

Further blurring the distinction between interpreters, byte-code interpreters and compilation is just-in-time compilation (or JIT), a technique in which the intermediate representation is compiled to native machine code at runtime. This confers the efficiency of running native code, at the cost of startup time and increased memory use when the bytecode or AST is first compiled. Adaptive optimization is a complementary technique in which the interpreter profiles the running program and compiles its most frequently executed parts into native code. Both techniques are a few decades old, appearing in languages such as Smalltalk in the 1980s.[8]
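
The following Python sketch shows the adaptive idea in miniature. A routine's source is re-translated on every call until a hot-spot counter trips, after which a cached compiled form is executed instead; the built-in compile() function, which produces CPython bytecode, stands in here for a real native-code compiler, and the threshold is arbitrary.

    HOT_THRESHOLD = 3                # arbitrary trip point for this sketch

    class AdaptiveRunner:
        def __init__(self, source: str):
            self.source = source
            self.calls = 0
            self.compiled = None

        def run(self, env: dict) -> None:
            self.calls += 1
            if self.compiled is None and self.calls >= HOT_THRESHOLD:
                # Hot: translate once and cache the result.
                self.compiled = compile(self.source, "<hot>", "exec")
            exec(self.compiled or self.source, env)

    scope = {"x": 7}
    runner = AdaptiveRunner("y = x * x")
    for _ in range(5):               # early calls re-translate, later ones reuse
        runner.run(scope)
    print(scope["y"])                # 49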

Just-in-time compilation has gained mainstream attention amongst language implementers in recent years, with Java, the .NET Framework and most modern JavaScript implementations now including JITs.[citation needed]

Applications

  • Interpreters are frequently used to execute command languages and glue languages, since each operator executed in a command language is usually an invocation of a complex routine such as an editor or compiler.[citation needed]
  • Self-modifying code can easily be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and artificial intelligence research.[citation needed]
  • Virtualization. Machine code intended for one hardware architecture can be run on another using a virtual machine, which is essentially an interpreter.[citation needed]
  • Sandboxing: An interpreter or virtual machine is not compelled to actually execute all the instructions in the source code it is processing. In particular, it can refuse to execute code that violates any security constraints it is operating under, as the sketch below illustrates.[citation needed]
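
A minimal Python sketch of the sandboxing idea: before executing anything, the interpreter inspects the parsed code and refuses constructs that violate its policy. The whitelist here is deliberately tiny and is not a production-grade sandbox.

    import ast

    # Only pure arithmetic expressions pass the policy check.
    ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.operator, ast.unaryop)

    def safe_eval(source: str):
        tree = ast.parse(source, mode="eval")
        for node in ast.walk(tree):
            if not isinstance(node, ALLOWED):
                raise PermissionError(f"refused: {type(node).__name__}")
        return eval(compile(tree, "<sandbox>", "eval"), {"__builtins__": {}})

    print(safe_eval("2 * (3 + 4)"))          # 14
    try:
        safe_eval("__import__('os')")
    except PermissionError as err:
        print(err)                           # refused: Call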

Punched card interpreter

The term "interpreter" often referred to a piece of unit record equipment that could read punched cards and print the characters in human-readable form on the card. The IBM 550 Numeric Interpreter and IBM 557 Alphabetic Interpreter are typical examples from 1930 and 1954, respectively.[citation needed]

Notes and references

  1. In this sense, the CPU is also an interpreter, of machine instructions.
  2. According to Paul Graham in Hackers & Painters, p. 185, McCarthy said: "Steve Russell said, look, why don't I program this eval..., and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..."
  3. Theodore H. Romer, Dennis Lee, Geoffrey M. Voelker, Alec Wolman, Wayne A. Wong, Jean-Loup Baer, Brian N. Bershad, and Henry M. Levy, The Structure and Performance of Interpreters
  4. Terence Parr, Johannes Luber, The Difference Between Compilers and Interpreters
  5. AST intermediate representations, Lambda the Ultimate forum
  6. A Tree-Based Alternative to Java Byte-Codes, Thomas Kistler, Michael Franz
  7. "Announcing SquirrelFish", Surfin' Safari blog, Webkit.org (2008-06-02). Retrieved 2013-08-10.
  8. L. Deutsch, A. Schiffman, Efficient implementation of the Smalltalk-80 system, Proceedings of 11th POPL symposium, 1984.
