HOPL


HOPL (History of Programming Languages) is an infrequent ACM SIGPLAN conference. Past conferences were held in 1979 and 1993, with the next conference scheduled for 2007.


HOPL I

HOPL I was held June 1-3, 1979 in Los Angeles, California. Jean E. Sammet was both the General and Program Committee Chair, John A. N. Lee was the Administrative Chair, and Richard L. Wexelblat was the Proceedings Chair. From Jean Sammet's introduction: the HOPL Conference "is intended to consider the technical factors which influenced the development of certain selected programming languages." The languages and presentations in the first HOPL were by invitation of the program committee. To qualify, a language must have been created and in use by 1967, must have remained in use in 1977, and must have had considerable influence on the field of computing.

The papers and presentations went through extensive review by the program committee (and revisions by the authors), far beyond the norm for conferences and commensurate with some of the best journals in the field. The languages (and speakers) included in HOPL I were:

Preprints of the proceedings were published in "SIGPLAN Notices", volume 13, number 8, August 1978. The final proceedings, including transcripts of question and answer sessions, were published as a book in the ACM Monograph Series: "History of Programming Languages", edited by Richard L. Wexelblat. Academic Press, 1981.

HOPL II

HOPL II was held April 20-23, 1993 in Cambridge, Massachusetts. John A. N. Lee was the Conference Chair and Jean E. Sammet was the Program Chair. In contrast to HOPL I, HOPL II included both invited papers and papers submitted in response to an open call. The scope also expanded. Where HOPL I had only papers on the early history of languages, HOPL II solicited contributions on:

  • early history of specific languages,
  • evolution of a language,
  • history of language features and concepts, and
  • classes of languages (application-oriented and paradigm-oriented languages).

Submitted and invited languages must have been documented by 1982 and must have been in use or taught by 1985.

As in HOPL I, there was a rigorous multi-stage review and revision process. The selected papers and authors were:

Preprints of the proceedings were published in "SIGPLAN Notices", volume 28, number 3, March 1993. The final proceedings, including copies of the presentations and transcripts of question and answer sessions, were published as an ACM Press book: "History of Programming Languages", edited by Thomas J. Bergin and Richard G. Gibson. Addison-Wesley, 1996.

HOPL III

HOPL III will be held June 9-10, 2007 in San Diego, California. Brent Hailpern and Barbara G. Ryder are the Conference co-Chairs. HOPL III had an open call for participation and asked for papers on either the early history or the evolution of programming languages. The languages must have come into existence before 1996 and been widely used since 1998, either commercially or within a specific domain. Research languages that had a great influence on subsequent programming languages were also candidates for submission.

As with HOPL I and HOPL II, the papers were managed with a multiple stage review/revision process.

Accepted Papers for HOPL III are:

  • "A history of Erlang" by Joe Armstrong
  • "A history of Modula-2 and Oberon" by Niklaus Wirth
  • "AppleScript" by William R. Cook
  • "Evolving a language in and for the real world: C++ 19912006" by Bjarne Stroustrup
  • "Self" by David Ungar, Randall B. Smith
  • "Statecharts in the making: a personal account" by David Harel
  • "The design and development of ZPL" by Lawrence Snyder
  • "The development of the Emerald programming language" by Andrew P. Black, Norman Hutchinson, Eric Jul and Henry M. Levy
  • "The evolution of Lua" by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes
  • "A history of Haskell: being lazy with class" by Paul Hudak, John Hughes, Simon Peyton Jones, and Philip Wadler
  • "The rise and fall of High Performance Fortran: an historical object lesson" by Ken Kennedy, Charles Koelbel, Hans Zima
  • "The when, why and why not of the BETA programming language" by Bent Bruun Kristensen, Ole Lehrmann Madsen, Birger Møller-Pedersen


The HOPL III programming languages can be broadly categorized into five classes (or paradigms): Object-Oriented (Modula-2, Oberon, C++, Self, Emerald, and BETA), Functional (Haskell), Scripting (AppleScript, Lua), Reactive (Erlang, Statecharts), and Parallel (ZPL, High Performance Fortran). Each HOPL III paper describes its language from the perspective of the language's creators, so it is helpful to understand the context of each programming paradigm during the same period.

Brief History of Object-Oriented Programming Languages: 1983-1996

HOPL I and HOPL II described, in detail, the histories of the early object-oriented (OO) languages: SIMULA, Smalltalk, C++, and (to some extent) Ada 83 and Fortran 90. As the first OOPSLA conference in 1986 made clear, the OO field was exploding with new concepts and experimental languages. Different mechanisms for sharing (inheriting) code, data, and interfaces included flavors, types, classes/subclasses, delegation/prototypes/exemplars, virtual copies, encapsulators, and genericity. Exploratory languages at OOPSLA '86 included Trellis/Owl, CommonLoops, Oaklisp, Emerald, SOAR, Quicktalk, Impulse-86, Orient84/K, ABCL/1, Concurrent Smalltalk, and Pi. Some of these were toy languages, some continued to exist for a few years, but none achieved the commercial success of their predecessors.

Though not presented at the first OOPSLA, the Eiffel programming language was developed commercially by Bertrand Meyer beginning in 1985. The language developed hand-in-hand with the concept of design by contract. Unlike C++, Eiffel emphasized declarative statements over procedural code.

The 1995 revision of Ada introduced a complete object-oriented system, which was built on a very different foundation from Simula or C++.

Niklaus Wirth had designed Modula as a successor to Pascal in the late 1970s. Though not strictly an OO language, Modula supported strong encapsulation via modules. In 1978, Wirth designed Modula-2 for the Lilith computer. Oberon followed in 1986. The strong modular encapsulation principles of Modula-2 were imitated in Ada 83 and Fortran 90.

For much of this period, C++ and Eiffel were the main commercial successes among OO languages. In the mid-1990s, however, Sun Microsystems introduced Java, which had been started as a project called "Oak" by James Gosling in June 1991. The project underwent a number of changes of direction and focus, but eventually became the language of choice for server-side computing. The first public implementation was Java 1.0 in 1995. Eventually, Java became popular in a variety of domains, with platform support for several application families, including J2EE for enterprise applications and J2ME for mobile applications.

Brief History of Functional Programming Languages: 1983-1996

Functional programming, as a distinct topic in programming languages, arose very early in theory and fairly late in practice. The theoretical foundations date back to 1932, in Alonzo Church's lambda calculus. The purity and power of the lambda calculus inspired theoreticians for some time to come, as in Peter J. Landin's influential but never-implemented language ISWIM and John Backus's FP programming language, presented in his 1977 Turing Award lecture "Can Programming Be Liberated From the von Neumann Style? A Functional Style and Its Algebra of Programs".

Functional programming was possible, although not generally singled out as a programming style, from quite early in the history of computing. John McCarthy's LISP programming language was inspired in part by the first chapter of Church's book on the lambda calculus. (McCarthy claimed to have avoided the rest of the theoretical material because he did not understand it, though he may equally have set it aside for sound technical reasons.) Lisp allows, and in many cases introduced, the core mechanisms of successful functional programming languages, albeit in a highly parenthesized notation rather than the more finely tuned notations of later languages: it allows the manipulation of functions as data, it is expression-based, it has rich underlying data structures, it is garbage-collected, and it allows recursion. While it includes non-functional features, it is possible and even natural to write a great deal of code in the functional subset of the language. Arguably, the first serious functional programs were written in Lisp, although Lisp itself is not strictly a functional programming language.

The first truly functional languages that were actually implemented and used for more than minimal examples seem to be the ML programming language of Robin Milner, from the 1970s, and the Miranda programming language of David Turner, from the mid-1980s. Most modern functional programming languages continue in the style of ML and Miranda. Strictly, ML is not purely functional; it allows side effects. However, ML's side effects are confined to a small set of types, and a great deal of programming can be done without using them. ML is notable for first-class functions, automatic memory management through garbage collection, parametric polymorphism, static typing, type inference, algebraic data types with pattern matching (adopted from the Hope programming language), and exception handling.
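
Several of these ML features have since spread to mainstream languages. As a rough, hedged illustration (not ML itself), the Python 3.10+ sketch below approximates an algebraic data type with pattern matching and first-class functions; the Shape variants and the area function are invented for the example.

    from dataclasses import dataclass

    # An ML-style algebraic data type, approximated with dataclasses:
    # a shape is either a Circle or a Rectangle.
    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rectangle:
        width: float
        height: float

    # Pattern matching over the variants, in the spirit of ML's case expressions.
    def area(shape) -> float:
        match shape:
            case Circle(radius=r):
                return 3.141592653589793 * r * r
            case Rectangle(width=w, height=h):
                return w * h

    # Functions are first-class values: area is passed around like any other datum.
    shapes = [Circle(1.0), Rectangle(2.0, 3.0)]
    print([area(s) for s in shapes])   # [3.141592653589793, 6.0]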

Dataflow languages for describing computer architecture arose in the 1970s. They describe computer architectures in functional terms and can broadly be considered functional, or at least declarative, languages, although they generally do not have first-class functions.

A wide range of functional programming languages arose in the late 1970s and 1980s. Perhaps the most important was the Hope programming language, whose concrete data types and pattern matching were included in ML and have become more or less standard in mainstream functional programming languages.

In 1987, the functional programming community gathered to design a unified functional programming language: Haskell. Characteristic features of Haskell include lazy evaluation, pattern matching, currying, list comprehensions, guards, definable operators, and single assignment. Haskell has been dominant among purely functional languages since then.
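
For readers who have not seen Haskell, the following Python sketch (an analogy only, not Haskell) imitates three of these features: lazy evaluation via a generator, list comprehensions, and currying via partial application; the fibs and add helpers are invented for the example.

    from functools import partial
    from itertools import islice

    # Lazy evaluation: an infinite stream of Fibonacci numbers, produced on demand,
    # loosely analogous to Haskell's lazy infinite lists.
    def fibs():
        a, b = 0, 1
        while True:
            yield a
            a, b = b, a + b

    first_ten = list(islice(fibs(), 10))              # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

    # List comprehension with a filter, similar in spirit to Haskell's
    # [n | n <- firstTen, even n].
    even_fibs = [n for n in first_ten if n % 2 == 0]  # [0, 2, 8, 34]

    # Currying, approximated with partial application.
    def add(x, y):
        return x + y

    add_ten = partial(add, 10)
    print(first_ten, even_fibs, add_ten(5))           # ... 15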

Quasi-functional languages are probably more heavily used than strictly functional ones. Lisp in its incarnations as Common Lisp and Scheme, and ML in its incarnations as Standard ML and Caml, remain dominant in this category, used for academic research and assorted commercial applications.

Brief History of Reactive Languages: 1983-1996

"Reactive systems" were first defined in a 1985 paper by David Harel and Amir Pnueli. The term refers to not-necessarily-terminating systems whose behavior is characterized mainly by the way they interact with their environments, as distinguished from transformational (or computational) systems, whose behavior is characterized by a transformation or computation. The essence of the distinction is that computational systems can be specified by input-output relations (as with pre- and postconditions in the Floyd-Hoare tradition), while reactive systems must be specified and verified in terms of their ongoing behavior.

In the late 1980s, the application areas of reactive system design were real-time control systems, communication protocol design, and event-driven user interfaces. From a language design perspective, the fundamental considerations were reaction to events (a.k.a. event-condition-action) as a basic construct, compositional semantics (full abstraction and bisimulation) of reactive components, and asynchronous versus synchronous behavior. The field developed simultaneously on two fronts: semantic foundations, based on temporal logic and synchronization primitives for communicating processes, and visual design notations, such as communicating finite state machines, Statecharts, and Message Sequence Charts.
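
To make the event-condition-action construct concrete, here is a minimal, hypothetical sketch in Python: each rule pairs a condition over the observed state with an action, and the system keeps reacting to incoming events rather than computing one final result. The temperature events and the alarm rule are invented for illustration.

    # Minimal event-condition-action loop: for every incoming event, fire each
    # rule whose condition holds. The system is characterized by this ongoing
    # reaction, not by a single input-output transformation.
    state = {"temperature": 20, "alarm": False}

    def too_hot(s):                      # condition
        return s["temperature"] > 30

    def raise_alarm(s):                  # action
        s["alarm"] = True
        print("alarm raised at", s["temperature"])

    rules = [(too_hot, raise_alarm)]

    def react(event):
        state.update(event)              # the event updates the observed state
        for condition, action in rules:  # event-condition-action
            if condition(state):
                action(state)

    for event in [{"temperature": 25}, {"temperature": 35}]:
        react(event)                     # only the second event triggers the alarm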

Languages like Esterel, Lustre and LOTOS appeared for specifying communication protocols. Central to these languages is the synchrony hypothesis, due to Gérard Berry, which prescribes an abstraction whereby steps take zero time. On the visual design notation front, Statecharts [Harel] used AND/OR compositions of state machines to describe reactive systems. Statecharts were put in a more general context in the mid-1980s, when the language was combined with a functional decomposition so that a statechart could be used to specify the behavior of a function [Harel et al]. Similar combinations were carried out independently using finite state machines [Ward/Mellor, Hatley/Pirbhai]. ObjChart [Gangopadhyay] and ROOM [Selic] were two early visual notations that adopted the Statechart formalism but emphasized networks of autonomous objects to define the structure and composition of the system under design, with the behavior of individual objects specified by finite state machines. This gave rise to an object-oriented version of the Statecharts language, a variant of which serves as the main behavioral medium in the UML.

Since then, these languages have been applied commercially in various application areas (avionics, embedded systems, communications, automotive systems, and hardware design). Hand in hand, verification technologies based on temporal logic and model checking have been developed, with varying degrees of real-life success.

Brief History of Parallel Programming Languages: 1983-1996

In the world of parallelism, it is customary to distinguish between two kinds of languages. Parallel programming is traditionally concerned with making transformational programs go faster by exploiting parallel hardware. Thus parallelism is needed for performance, not expressiveness. Parallel programming languages may support parallelism explicitly (the language has explicit concurrent control constructs) or implicitly (there are no such constructs). High performance computing typically falls in this category.

In contrast, concurrent programming is traditionally concerned with expressing reactive computations -- those that maintain an ongoing interaction with their environment. Hence concurrent control constructs are necessary in order to express the underlying programming problem. Operating systems, real-time control software, client/server applications, and web services typically fall in this category.

Implicitly parallel languages

A central focus of implicitly parallel languages has been data parallelism. No explicit control constructs are provided to spawn tasks or threads or to enable them to synchronize. Instead, operations may be performed in parallel on aggregate data structures (such as arrays or array sections).

Fortran extensions

Fortran 90 introduced the notion of aggregate operations on entire arrays, or sections of arrays. For example, if A, B, and C are conformant, the programmer may write A = B + C to indicate that corresponding elements of B and C should be added and stored in A. Such aggregate operations are made easier to use by "implicit right-hand-side buffer semantics": the entire right-hand side is evaluated and buffered before being assigned to the left-hand side (the optimizer can decide to use A for the buffer if this would not change the semantics). The "triple notation" l:u:s is used to specify sets of indices (all the indices between l and u with stride s); an array subscripted with a triple in one or more dimensions specifies an "array section" (a portion of an array). Nonstandard control constructs such as FORALL (standardized in 1995) and DO INDEPENDENT (to be standardized in 2008 as DO CONCURRENT) specify operations that can be performed in parallel on aggregates. This parallelism can be exploited by SPMD architectures.
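
As a rough analogue for readers more familiar with other notations, the Python/NumPy sketch below (not Fortran) mirrors the aggregate assignment and the l:u:s section notation described above. The array names A, B, and C follow the text; note that NumPy slices are zero-based and half-open, unlike Fortran triples.

    import numpy as np

    # Aggregate (elementwise) operation on whole arrays, analogous to A = B + C.
    B = np.arange(10.0)       # [0, 1, ..., 9]
    C = np.ones(10)
    A = B + C                 # the whole right-hand side is evaluated, then assigned

    # Array section with a stride, analogous to the triple notation A(l:u:s).
    # Here 1:8:2 selects the elements at indices 1, 3, 5, 7.
    A[1:8:2] = A[1:8:2] * 10  # an aggregate update restricted to a section

    print(A)                  # [ 1. 20.  3. 40.  5. 60.  7. 80.  9. 10.]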

In the early 90s it became feasible to run a single computation on a collection of computers, a process per computer. The computation could work on an array larger than the amount of memory available on a single processor.

A series of languages -- Vienna Fortran 90, Kali, and Blaze -- led to the development of HPF. Version 1 was published in 1993 and version 2 in 1997. These languages explored the idea that the programmer would supply a high-level directive controlling the distribution of an array (e.g. block distribution, block-cyclic distribution) and continue to write code with a single program counter. The compiler would then generate the necessary message-passing and synchronization code to implement these operations. Compared with using MPI directly for message passing, this approach has the advantage of offering a much higher level of abstraction to the programmer. However, it suffered from a lack of flexibility (users wanted to be able to specify more complex distributions than the ones supported in the language) and efficiency (automatically generating efficient code for this language proved much harder than originally believed). Research on HPF continues today, attempting to address these limitations.

In 1997, Numrich et al. described F--, which has become Co-Array Fortran. It has been implemented by Cray and others, and will be part of the 2008 standard. Co-Array Fortran attacks the same problems as HPF, but apparently with more success.

Array languages

NESL developed the idea of nested aggregates: aggregates may themselves contain aggregates as elements and, further, may be operated on by parallel functions. NESL runs on Unix workstations and various parallel machines.

Dependency-based Languages

Jade is an implicitly parallel language for coarse-grained computing developed by Martin Rinard for his PhD. Programs are written in a standard sequential imperative language, augmented with Jade constructs to specify dependencies between data accesses. The compiler extracts a task-dependence graph and maps this graph to the machines at hand. Execution of a Jade program preserves the sequential semantics of the original program. The system was first released in 1994 for shared memory multiprocessors and networks of computers.

SIMD Languages

SIMD languages include C* (an extended C for data parallel programming), and various languages developed for the Connection Machine, including *Lisp and CM Fortran.

Explicitly parallel languages

Languages based on CCS and CSP

Occam, based on CSP, was announced in 1983 by INMOS and implemented on INMOS transputers. Its core communication construct is a handshake between a process offering a value and a process offering a variable in which to store the value. A process may offer several of these actions simultaneously (through a choice operation).

The Pi-calculus was developed by Robin Milner, Joachim Parrow and David Walker in the late 1980s as a simple calculus for concurrent systems with dynamically changing connectivity. Active development of the Pi-calculus continues today. Languages based on the Pi-calculus include Occam-pi and Pict.

Data-flow languages

Data-flow languages adopted a different approach to parallelism, based on producer consumer synchronization.

Sisal is a language for HPC built on top of the functional programming language Val, adding finite streams and arrays. The output of the compiler is a data-flow program. The first compiler was implemented in 1986. The last version of the Optimizing Sisal Compiler (for Sisal 1.2) was released in 1994.

Id introduced the notion of i-structures as a way of handling fine-grained array update operations in a data-flow context, without losing determinacy. The operational semantics of Id is described as a parallel reduction system on graphs.
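
A minimal, hypothetical sketch of the idea in Python: each element of the structure may be written at most once, and a read of an element blocks until that element has been produced, so the outcome is deterministic regardless of how producer and consumer threads interleave. The IStructure class below is invented for illustration and is not Id.

    import threading

    # Toy i-structure: a single-assignment array whose reads block until the
    # corresponding element has been written.
    class IStructure:
        def __init__(self, n):
            self._values = [None] * n
            self._written = [threading.Event() for _ in range(n)]

        def put(self, i, value):
            if self._written[i].is_set():
                raise ValueError("element already written")   # single assignment
            self._values[i] = value
            self._written[i].set()

        def get(self, i):
            self._written[i].wait()      # block until element i is available
            return self._values[i]

    xs = IStructure(3)
    consumer = threading.Thread(target=lambda: print(xs.get(2)))   # waits on xs[2]
    consumer.start()
    for i in range(3):
        xs.put(i, i * i)                 # the producer fills elements in order
    consumer.join()                      # prints 4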

Erlang is a general-purpose reactive concurrent language, originally announced in 1987. It extends a sequential single-assignment functional language with concurrency constructs based on Actors. It is unusual among concurrent languages in supporting hot swapping of code. Erlang continues to be in active development; Release 11 appeared in May 2006.
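
The central construct is the actor: a process with a private mailbox that handles one message at a time. A rough analogy in Python (not Erlang, and without Erlang's lightweight processes, pattern-matching receive, or hot swapping) might look like the following; the counter actor and its message names are invented for illustration.

    import threading
    import queue

    # Toy actor: a thread with a private mailbox, processing messages one at a time.
    def counter_actor(mailbox):
        count = 0
        while True:
            msg = mailbox.get()          # receive the next message
            if msg == "increment":
                count += 1
            elif msg == "report":
                print("count =", count)
            elif msg == "stop":
                break

    mailbox = queue.Queue()
    actor = threading.Thread(target=counter_actor, args=(mailbox,))
    actor.start()

    for _ in range(3):
        mailbox.put("increment")         # asynchronous sends
    mailbox.put("report")                # prints: count = 3
    mailbox.put("stop")
    actor.join()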

Concurrent logic programming languages

Concurrent logic programming languages -- such as Concurrent Prolog, Parlog, GHC and Strand -- arose from an entirely different tradition (logic programming) but may be thought of as more versatile versions of data-flow languages. In these languages computations are represented as dynamically evolving recursive graphs, whose nodes are called processes and whose edges are called logical variables. Values are produced for edges through restricted forms of unification. Unification is an elegant operation that permits binding of values to variables as well as destructuring of values into their components. Thus it permits dynamic data-flow -- the direction of flow of data at runtime is not determined statically. Processes may wait for data to arrive on edges before reducing to new graphs, by using one of several clauses.

Of these, Concurrent Prolog was the most general, offering a "read-only" annotation (?) as a way of controlling unification (and hence data-flow) dynamically. The first version of the Logix system, based on FCP (Flat Concurrent Prolog), was released in 1985. Parlog relied on statically specified "modes" to enforce directionality. It was first released in 1987. GHC used an elegant restriction of unification -- matching -- to specify suspension. GHC was intended to be used as the systems programming language for the Japanese Fifth Generation Systems project pursued at ICOT. It was released in 1990 as KLIC, a portable implementation of the KL1 system. Strand further restricted unification so that in effect it became an assignment to a single variable at a time. Strand was released commercially by Strand Technologies in 1989.

These languages had an influence on the theoretical development of declarative concurrent programming languages -- e.g. through the development of Concurrent Constraint Programming -- but are not in widespread use today.

Languages based on C

Cilk is a very elegant extension of C based on the idea of work stealing. It is intended for general-purpose parallel computation, and has primitives for spawning recursive computations. However, all computations spawned within the body of a function must terminate before the function call returns. Cilk has been used for algorithms that can be expressed using dynamic, asynchronous computation.
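
A hedged sketch of that fork-join discipline in Python (not Cilk, and without its work-stealing scheduler): two subproblems are spawned, and both must complete before the enclosing call returns, mirroring the rule that spawned computations terminate before the function call returns. The fib helpers are invented for the example; the point is the control structure, not speed.

    from concurrent.futures import ThreadPoolExecutor

    def fib(n):
        # Plain sequential Fibonacci, used as the work inside each spawned task.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    def fib_forkjoin(n):
        # "Spawn" two independent subproblems, then join both before returning.
        with ThreadPoolExecutor(max_workers=2) as pool:
            a = pool.submit(fib, n - 1)           # spawn
            b = pool.submit(fib, n - 2)           # spawn
            return a.result() + b.result()        # implicit sync before returning

    print(fib_forkjoin(20))   # 6765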

Split-C is an extension of C for distributed memory multiprocessors. It extends C with a partitioned global address space. The address space for the computation is the union of the address spaces of the participating processes. Pointers may be global or local; global pointers may point into memory allocated in a process other than the process in which the reference lives. Split-C follows an SPMD programming model. It was a precursor of UPC.

Concert C is an extension of ANSI C for parallel and distributed computing developed at the IBM T. J. Watson Research Center. It permits the creation of new processes on the current processor or a remote processor, and it permits processes to communicate through one-way, asynchronous queues of messages. Arbitrary C data structures may be communicated between processes (using marshalling and demarshalling). It transparently integrates remote procedure calls. It was designed to run on a heterogeneous collection of machines and operating systems. The first compiler was released in 1994.

Object-oriented languages

Illinois Concert C++ was developed to permit the expression of irregular and fine-grained concurrency in an extension of C++. Fine-grained concurrency was supported by a "conc" block which permits unrelated statements to be executed in parallel. It introduced the notion of object consistency as a way of building up concurrent data abstractions from modular pieces. It integrated arrays into the object system. The first compiler was released in 1995.

Charm++ is a portable concurrent object-oriented language based on C++. It uses actor-style concurrency for asynchronous messaging between objects, as well as certain kinds of shared objects. It provides extensive support for load balancing. It also provides a "branched chare" -- a replicated object which has a parallel interface and can be used as the basis for data-parallel algorithms. The system was first released in 1995.

Miscellaneous languages

Linda was developed by David Gelernter and Nicholas Carriero as a simple means of extending sequential programming languages with the notion of a tuple space. A tuple space is shared by multiple processes and permits the injection of tuples, their (associative) removal, and their destructive update. Linda has been integrated into Prolog, Ruby, C and Java. Linda was first released in 1985.
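
A minimal, hypothetical tuple-space sketch in Python: out injects a tuple, and a blocking in_ removes the first tuple matching a pattern, with None acting as a wildcard. Real Linda systems also provide non-destructive rd and process-creating eval, which are omitted here.

    import threading

    # Toy tuple space: out() injects tuples; in_() blocks until a matching tuple
    # exists and removes it. None in a pattern is a wildcard.
    class TupleSpace:
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def out(self, tup):
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def in_(self, pattern):
            def matches(tup):
                return len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup))
            with self._cond:
                while True:
                    for tup in self._tuples:
                        if matches(tup):
                            self._tuples.remove(tup)
                            return tup
                    self._cond.wait()

    space = TupleSpace()
    worker = threading.Thread(target=lambda: space.out(("result", sum(range(10)))))
    worker.start()
    print(space.in_(("result", None)))   # blocks until ("result", 45) arrives
    worker.join()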

Brief History of Scripting Languages: 1983-1996

(This is a stub to be filled out by the community.)

Early scripting languages in Unix included the Bourne shell, Awk and XXX.

REXX (or Rexx) was started by Mike Cowlishaw in 1979, described to the public in 1981 and first shipped as an IBM product in 1982, soon achieving widespread use both for general scripting and as the embedded language of the XEDIT editor. Large applications could be, and were, entirely constructed in it. Its direct influence on other languages is hard to quantify because its core user community was in high-end mainframes rather than computer science, but its design and feature set make it a strong contender for being the first successful 'modern' scripting language.

ABC was created by Lambert Meertens and others at CWI in the late '70s and early '80s; its most important influence was to be the main inspiration for Python. ABC was first released to the public around 1985.

Perl was started by Larry Wall in 1987; Perl 1.0 was released the same year.

Tcl was started by John Ousterhout at UCB in 1988; it was first released to the public in 1990.

Python was started by Guido van Rossum at CWI in 1989; it was first released to the public in 1991.

Lua was started by Roberto Ierusalimschy, Luiz Henrique de Figueiredo, and Waldemar Celes at PUC-Rio in 1993; it was first released to the public in 1994.

Ruby was started by Yukihiro "Matz" Matsumoto in 1993; it was first released to the public in 1995.
