Interpreter (computing)

In computing, an interpreter is software that directly executes encoded logic. Use of an interpreter contrasts with the direct execution[1] of CPU-native executable code, which typically involves compiling source code to machine code. Input to an interpreter conforms to a programming language, which may be a traditional, well-defined language (such as JavaScript), but could alternatively be a custom language or even a relatively trivial data encoding such as a control table.
Historically, programs were either compiled to machine code for native execution or interpreted. Over time, many hybrid approaches were developed. Early versions of Lisp and BASIC runtime environments parsed source code and performed its implied behavior directly. The runtime environments for Perl, Raku, Python, MATLAB, and Ruby translate source code into an intermediate format before executing it, to improve runtime performance. The .NET and Java ecosystems use bytecode as an intermediate format, but in some cases the runtime environment translates the bytecode to machine code (via just-in-time compilation) instead of interpreting the bytecode directly.
Although each programming language is usually associated with a particular runtime environment, a language can be used in different environments. For example, interpreters have been constructed for languages traditionally associated with compilation, such as ALGOL, Fortran, COBOL, C and C++. Thus, the terms interpreted language and compiled language, although commonly used, have little meaning.
History
In the early days of computing, compilers were more commonly found and used than interpreters, because hardware at that time could not support both the interpreter and interpreted code, and the typical batch environment of the time limited the advantages of interpretation.[2]
Interpreters were used as early as 1952 to ease programming within the limitations of computers at the time (e.g. a shortage of program storage space, or no native support for floating point numbers). Interpreters were also used to translate between low-level machine languages, allowing code to be written for machines that were still under construction and tested on computers that already existed.[3] The first interpreted high-level language was Lisp. Lisp was first implemented by Steve Russell on an IBM 704 computer. Russell had read John McCarthy's paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I", and realized (to McCarthy's surprise) that the Lisp eval function could be implemented in machine code.[4] The result was a working Lisp interpreter which could be used to run Lisp programs, or more properly, "evaluate Lisp expressions".
The development of editing interpreters was influenced by the need for interactive computing. In the 1960s, the introduction of time-sharing systems allowed multiple users to access a computer simultaneously, and editing interpreters became essential for managing and modifying code in real-time. The first editing interpreters were likely developed for mainframe computers, where they were used to create and modify programs on the fly. One of the earliest examples of an editing interpreter is the EDT (Editor and Debugger for the TECO) system, which was developed in the late 1960s for the PDP-1 computer. EDT allowed users to edit and debug programs using a combination of commands and macros, paving the way for modern text editors and interactive development environments.[citation needed]
Use
Notable uses for interpreters include:
- Virtualization
- An interpreter acts as a virtual machine to execute machine code for a hardware architecture different from the one running the interpreter.
- Emulation
- An interpreter (virtual machine) can emulate another computer system in order to run code written for that system.
- Sandboxing
- While some types of sandboxes rely on operating system protections, an interpreter (virtual machine) can offer additional control such as blocking code that violates security rules.[citation needed]
- Self-modifying code
- Self-modifying code can be implemented in an interpreted language. This relates to the origins of interpretation in Lisp and artificial intelligence research.[citation needed]
Eliminates compile time
A software developer spends a significant amount of time in the edit-build-run cycle, and the build phase can consume significant time. Using an interpreter reduces the time between editing and running a program because there is no separate compile step: with a compiler, the program must be rebuilt after any code change, and build times vary dramatically and can be long enough to impact productivity. An interpreter does process the source code before it starts executing it, but this often takes less time than a comparable build, so less time is spent waiting for the program to start running.
Distribution
Code that runs in an interpreter can be run on any platform that has a compatible interpreter. The same code can be distributed to any such platform – instead of the alternative of building an executable for each platform.
Efficiency
Interpretive overhead is the runtime cost of executing code via an interpreter instead of as native (compiled) code. Interpreting is slower because the interpreter executes multiple machine-code instructions for functionality that native code performs directly. In particular, access to variables is slower in an interpreter because the mapping of identifiers to storage locations must be done repeatedly at run-time rather than at compile time.[5] But faster development (due to factors such as a shorter edit-build-run cycle) can outweigh the value of faster execution, especially when prototyping and testing, when the edit-build-run cycle is frequent.[5][6]
An interpreter may generate an intermediate representation (IR) of the program from source code in order to achieve goals such as fast runtime performance. A compiler may also generate an IR, but the compiler generates machine code for later execution whereas the interpreter prepares to execute the program. These differing goals lead to differing IR designs. Many BASIC interpreters replace keywords with single-byte tokens which can be used to find the instruction in a jump table.[5] A few interpreters, such as the PBASIC interpreter, achieve even higher levels of program compaction by using a bit-oriented rather than a byte-oriented program memory structure, where command tokens occupy perhaps 5 bits, nominally "16-bit" constants are stored in a variable-length code requiring 3, 6, 10, or 18 bits, and address operands include a "bit offset". Many BASIC interpreters can store and read back their own tokenized internal representation.
There are various compromises between the development speed when using an interpreter and the execution speed when using a compiler. Some systems (such as some Lisps) allow interpreted and compiled code to call each other and to share variables. This means that once a routine has been tested and debugged under the interpreter it can be compiled and thus benefit from faster execution while other routines are being developed.[citation needed]
Implementation
Since the early stages of interpreting and compiling are similar, an interpreter might use the same lexical analyzer and parser as a compiler and then interpret the resulting abstract syntax tree.
Example
An expression interpreter written in C.
// data types for abstract syntax tree
enum _kind { kVar, kConst, kSum, kDiff, kMult, kDiv, kPlus, kMinus, kNot };
struct _variable { int *memory; };
struct _constant { int value; };
struct _unaryOperation { struct _node *right; };
struct _binaryOperation { struct _node *left, *right; };
struct _node {
    enum _kind kind;
    union _expression {
        struct _variable variable;
        struct _constant constant;
        struct _binaryOperation binary;
        struct _unaryOperation unary;
    } e;
};

// error helper, assumed to be defined elsewhere; does not return
extern void exception(const char *message);

// interpreter procedure
int executeIntExpression(const struct _node *n) {
    int leftValue, rightValue;
    switch (n->kind) {
    case kVar:   return *n->e.variable.memory;
    case kConst: return n->e.constant.value;
    case kSum: case kDiff: case kMult: case kDiv:
        leftValue  = executeIntExpression(n->e.binary.left);
        rightValue = executeIntExpression(n->e.binary.right);
        switch (n->kind) {
        case kSum:  return leftValue + rightValue;
        case kDiff: return leftValue - rightValue;
        case kMult: return leftValue * rightValue;
        case kDiv:
            if (rightValue == 0)
                exception("division by zero"); // doesn't return
            return leftValue / rightValue;
        }
    case kPlus: case kMinus: case kNot:
        rightValue = executeIntExpression(n->e.unary.right);
        switch (n->kind) {
        case kPlus:  return +rightValue;
        case kMinus: return -rightValue;
        case kNot:   return !rightValue;
        }
    default:
        exception("internal error: illegal expression kind");
    }
}
Just-in-time compilation
Just-in-time (JIT) compilation is the process of converting an intermediate format (e.g., bytecode) to native code at runtime. As this results in native code execution, it is a method of avoiding the runtime cost of using an interpreter while maintaining some of the benefits that led to the development of interpreters.
Variations
- Control table interpreter
- Logic is specified as data formatted as a table.
- Bytecode interpreter
- Some interpreters process bytecode, an intermediate format of logic compiled from a high-level language. For example, Emacs Lisp is compiled to bytecode, which is then interpreted by a bytecode interpreter. One might say that this compiled code is machine code for a virtual machine – implemented by the interpreter. Such an interpreter is sometimes called a compreter.[7][8]
- Threaded code interpreter
- A threaded code interpreter is similar to a bytecode interpreter, but instead of bytes it uses pointers. Each instruction is a word that points to a function or an instruction sequence, possibly followed by a parameter. The threaded code interpreter either loops fetching instructions and calling the functions they point to, or fetches the first instruction and jumps to it, and every instruction sequence ends with a fetch and jump to the next instruction. One example of threaded code is the Forth code used in Open Firmware systems. The source language is compiled into "F code" (a bytecode), which is then interpreted by a virtual machine.[citation needed]
- Abstract syntax tree interpreter
- An abstract syntax tree interpreter transforms source code into an abstract syntax tree (AST), then interprets it directly, or uses it to generate native code via JIT compilation.[9] In this approach, each sentence needs to be parsed just once. As an advantage over bytecode, AST keeps the global program structure and relations between statements (which is lost in a bytecode representation), and when compressed provides a more compact representation.[10] Thus, using AST has been proposed as a better intermediate format than bytecode. However, for interpreters, AST results in more overhead than a bytecode interpreter, because of nodes related to syntax performing no useful work, of a less sequential representation (requiring traversal of more pointers) and of overhead visiting the tree.[11]
- Template interpreter
- Rather than implementing execution with a large switch statement over every possible bytecode while operating on a software stack or walking a tree, a template interpreter maintains a large array mapping each bytecode (or other efficient intermediate representation) directly to the corresponding native machine instructions that can be executed on the host hardware, as key-value pairs (or, in more efficient designs, direct addresses of the native instructions),[12][13] known as a "template". When a particular code segment is executed, the interpreter simply loads or jumps to the opcode's mapping in the template and runs it directly on the hardware.[14][15] Because of this design, a template interpreter resembles a JIT compiler more than a traditional interpreter; however, it is technically not a JIT, since it merely translates code into native calls one opcode at a time rather than creating optimized sequences of CPU-executable instructions from an entire code segment. Because the interpreter simply passes calls directly to the hardware rather than implementing them itself, it is much faster than every other type, even bytecode interpreters, and to an extent less prone to bugs; as a tradeoff, it is more difficult to maintain, since the interpreter must support translation to multiple different architectures instead of a platform-independent virtual machine/stack. To date, the only template interpreter implementations of widely known languages are the interpreter within Java's official reference implementation, the Sun HotSpot Java Virtual Machine,[12] and the Ignition interpreter in the Google V8 JavaScript execution engine.
- Microcode
- Microcode provides an abstraction layer as a hardware interpreter that implements machine code in a lower-level machine code.[16] It separates the high-level machine instructions from the underlying electronics so that the high-level instructions can be designed and altered more freely. It also facilitates providing complex multi-step instructions, while reducing the complexity of computer circuits.
See also
- Dynamic compilation
- Homoiconicity – Characteristic of a programming language
- Meta-circular evaluator – Type of interpreter in computing
- Partial evaluation – Technique for program optimization
- Read–eval–print loop – Computer programming environment
References
- ^ interpretation actually
- ^ "Why was the first compiler written before the first interpreter?". Ars Technica. 8 November 2014. Retrieved 9 November 2014.
- ^ Bennett, J. M.; Prinz, D. G.; Woods, M. L. (1952). "Interpretative sub-routines". Proceedings of the ACM National Conference, Toronto.
- ^ As reported by Paul Graham in Hackers & Painters, p. 185, McCarthy said: "Steve Russell said, look, why don't I program this eval..., and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bugs, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..."
- ^ a b c This article is based on material taken from Interpreter at the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
- ^ "Compilers vs. interpreters: explanation and differences". IONOS Digital Guide. Retrieved 2022-09-16.
- ^ Kühnel, Claus (1987) [1986]. "4. Kleincomputer - Eigenschaften und Möglichkeiten" [4. Microcomputer - Properties and possibilities]. In Erlekampf, Rainer; Mönk, Hans-Joachim (eds.). Mikroelektronik in der Amateurpraxis [Micro-electronics for the practical amateur] (in German) (3 ed.). Berlin: Militärverlag der Deutschen Demokratischen Republik, Leipzig. p. 222. ISBN 3-327-00357-2. 7469332.
- ^ Heyne, R. (1984). "Basic-Compreter für U880" [BASIC compreter for U880 (Z80)]. radio-fernsehen-elektronik (in German). 1984 (3): 150–152.
- ^ AST intermediate representations, Lambda the Ultimate forum
- ^ Kistler, Thomas; Franz, Michael (February 1999). "A Tree-Based Alternative to Java Byte-Codes" (PDF). International Journal of Parallel Programming. 27 (1): 21–33. CiteSeerX 10.1.1.87.2257. doi:10.1023/A:1018740018601. ISSN 0885-7458. S2CID 14330985. Retrieved 2020-12-20.
- ^ Surfin' Safari - Blog Archive » Announcing SquirrelFish. Webkit.org (2008-06-02). Retrieved on 2013-08-10.
- ^ a b "openjdk/jdk". GitHub. 18 November 2021.
- ^ "HotSpot Runtime Overview". Openjdk.java.net. Retrieved 2022-08-06.
- ^ "Demystifying the JVM: JVM Variants, Cppinterpreter and TemplateInterpreter". metebalci.com.
- ^ "JVM template interpreter". ProgrammerSought.
- ^ Kent, Allen; Williams, James G. (April 5, 1993). Encyclopedia of Computer Science and Technology: Volume 28 - Supplement 13. New York: Marcel Dekker, Inc. ISBN 0-8247-2281-7. Retrieved Jan 17, 2016.
Sources
- Aycock, J. (June 2003). "A brief history of just-in-time". ACM Computing Surveys. 35 (2): 97–113. CiteSeerX 10.1.1.97.3985. doi:10.1145/857076.857077. S2CID 15345671.
External links
- IBM Card Interpreters page at Columbia University
- Theoretical Foundations For Practical 'Totally Functional Programming' (Chapter 7 especially) Doctoral dissertation tackling the problem of formalising what is an interpreter
- Short animation explaining the key conceptual difference between interpreters and compilers. Archived at ghostarchive.org on May 9, 2022.