Feature Oriented Programming
From Wikipedia, the free encyclopedia
Feature Oriented Programming (FOP) or Feature Oriented Software Development (FOSD) is a general paradigm for program synthesis in software product lines.
FOSD arose out of layer-based designs of network protocols and extensible database systems in the late 1980s [1]. A program was defined as a stack of layers. Each layer added functionality to previously composed layers, and different compositions of layers produced different programs. Not surprisingly, there was a need for a compact language to express such designs. Elementary algebra fit the bill: each layer was a function that added new code to an existing program to produce a new program, and a program's design was modeled by an expression, i.e., a composition of functions (layers). The figure to the right illustrates the stacking of layers h, j, and i (where h is on the bottom and i is on the top) and the notations i(j(h)) and i•j•h.
Over time, the idea of layers was generalized to features, where a feature is an increment in program development or functionality. The paradigm for program design and synthesis was recognized to be a generalization of relational query optimization, where query-evaluation programs were defined as relational algebra expressions and query optimization was expression evaluation [2]. A software product line (SPL) is a family of programs where each program is defined by a unique composition of features, and no two programs have the same combination of features. FOSD has since evolved into the study of feature modularity, tools, analyses, and design techniques to support feature-based program synthesis.
Further advances in FOSD arose from recognizing the following facts: every program has multiple representations (e.g., source, makefiles, documentation, etc.), and adding a feature to a program could elaborate each of its representations so that all representations remain consistent. Additionally, some of these representations can be generated (or derived) from others. In this article, the mathematics of the three most recent generations of FOSD, namely GenVoca [1], AHEAD [3], and FOMDD [4][5], is progressively described, and links to product lines that have been developed using FOSD tools are provided.
GenVoca
GenVoca (a meld of the names Genesis and Avoca) is a compositional paradigm for defining the programs of a product line [1]. Base programs are 0-ary functions or transformations called values:
- f -- base program with feature f
- h -- base program with feature h
and features are unary functions that elaborate (modify, extend, refine) a program:
- i • x -- adds feature i to program x
- j • x -- adds feature j to program x
where • denotes function composition. The design of a program is a named expression, e.g.:
- p1 = j • f -- program p1 has features j and f
- p2 = j • h -- program p2 has features j and h
- p3 = i • j • h -- program p3 has features i, j, and h
A GenVoca model of a domain or software product line is a set of values and functions. The set of programs (expressions) that can be created defines a product line. Expression optimization is program design optimization, and expression evaluation is program synthesis.
- Note: GenVoca is based on the step-wise development of programs: a process that emphasizes design simplicity and understandability, which are essential for program comprehension and the automation of program design and development. Consider program p3: it begins with base program h, then feature j is added (read: the functionality of feature j is added to the codebase of h), and finally feature i is added (read: the functionality of feature i is added to the codebase of j•h).
- Note: A more recent formulation of GenVoca is symmetric: there is only one base program, 0 (the empty program), and all features are unary functions. This gives rise to the interpretation that GenVoca composes program structures by superposition, the idea that complex structures are composed by superimposing simpler structures.[6]
- Note: Not all combinations of features are meaningful. Feature diagrams (which can be translated into propositional formulas) are graphical representations that define the legal combinations of features [7].
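To make the algebra above concrete, here is a minimal, hypothetical Java sketch (not GenVoca tooling; the names h, i, and j simply mirror the examples above) in which base programs are values and features are unary functions that are composed into expressions:

import java.util.function.UnaryOperator;

// Minimal sketch: programs are values, features are unary functions on programs,
// and a product-line member is an expression built by composing features.
class GenVocaSketch {
    record Program(String description) { }

    static final Program h = new Program("h");                                        // base program (value)
    static final UnaryOperator<Program> j = p -> new Program("j•" + p.description()); // feature j
    static final UnaryOperator<Program> i = p -> new Program("i•" + p.description()); // feature i

    public static void main(String[] args) {
        Program p3 = i.apply(j.apply(h));     // p3 = i • j • h
        System.out.println(p3.description()); // prints "i•j•h"
    }
}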
Implementation
To illustrate the basic ideas, consider the following Java base class foo, which we will call value r:
class foo {
    int x = 0;
    void inc() { x++; }
}
A feature (a.k.a. refinement or extension) of r is shown below in AHEAD syntax. Let's call this feature w, which adds to class foo an 'int y' field, a 'void set()' method, and a refinement of the 'void inc()' method. Method refinements or deltas are written just like method overrides in Java subclassing hierarchies: the phrase 'super.inc()' effectively means "substitute the previously-defined body of the 'void inc()' method".
refines class foo {
    int y;
    void set() { y = x; }
    void inc() { y++; super.inc(); }
}
The composition w•r is shown below: the method and field of w are added to foo, and the inc() method is refined:
class foo {
    int x = 0;
    int y;
    void set() { y = x; }
    void inc() { y++; x++; }
}
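For intuition, the composed class behaves much like an ordinary Java subclassing hierarchy. Below is a rough approximation (plain Java, not AHEAD; the class names are invented for illustration) of the same composition, where super.inc() dispatches to the previously composed body of inc():

class Foo {                      // plays the role of value r
    int x = 0;
    void inc() { x++; }
}

class FooWithW extends Foo {     // approximates the composition w•r
    int y;
    void set() { y = x; }
    @Override void inc() { y++; super.inc(); }   // super.inc() runs the prior body (x++)
}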
The AHEAD [3] notion of 'refinement' or 'delta' is quite general, and can apply to non-Java program representations as well. For example, below is a base grammar q and its token definitions:
"+" PLUS
Expr : Val | Val Opr Expr ;
Val  : INTEGER ;
Opr  : PLUS ;
A refinement of q, here called t, that adds a token MINUS and a new right-hand side to production Opr is:
"-" MINUS
Opr : super | MINUS ;
The “super” construct refers to the prior right-hand sides of a production (in this case, Opr). The composition t•q is the composite grammar:
"+" PLUS
"-" MINUS
Expr : Val | Val Opr Expr ;
Val  : INTEGER ;
Opr  : PLUS | MINUS ;
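As a sketch of how such a composition could be mechanized (this is not AHEAD's actual composer; the representation and names are assumptions), each production can be modeled as a list of alternatives, and a 'super' alternative in a refinement is replaced by the base production's prior right-hand sides:

import java.util.*;

// Hypothetical grammar composer: productions map to lists of alternatives;
// "super" in a refinement splices in the base production's right-hand sides.
class GrammarComposeSketch {
    static Map<String, List<String>> compose(Map<String, List<String>> delta,
                                             Map<String, List<String>> base) {
        Map<String, List<String>> result = new LinkedHashMap<>(base);
        delta.forEach((lhs, alternatives) -> {
            List<String> merged = new ArrayList<>();
            for (String alt : alternatives) {
                if (alt.equals("super"))
                    merged.addAll(base.getOrDefault(lhs, List.of()));  // splice prior RHS
                else
                    merged.add(alt);
            }
            result.put(lhs, merged);
        });
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> q = Map.of(
            "Expr", List.of("Val", "Val Opr Expr"),
            "Val",  List.of("INTEGER"),
            "Opr",  List.of("PLUS"));
        Map<String, List<String>> t = Map.of("Opr", List.of("super", "MINUS"));
        System.out.println(compose(t, q).get("Opr"));  // prints [PLUS, MINUS]
    }
}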
The benefit of using similar refinement concepts for different program representations is pragmatic. If each representation had a completely different way to express refinements, the ability of any individual to understand and use all of them effectively would diminish rapidly. Our experience is that uniformity contributes to understandability and simplicity.
- Note: FOSD does not preclude other and more sophisticated ways of defining refinements. Aspects and rule sets of transformation systems are examples. Both technologies could be (and have been) uniformly applied to all kinds of program representations. So GenVoca captures the essence of these technologies, where the examples above are just one possible implementation.
AHEAD
Algebraic Hierarchical Equations for Application Design (AHEAD) [3] generalizes GenVoca in two ways. First it reveals the internal structure of GenVoca values as tuples. Every program has multiple representations, such as source, documentation, bytecode, and makefiles. A GenVoca value is a tuple of program representations. In a product line of parsers, for example, a base parser f is defined by its grammar gf, Java source sf, and documentation df. Program f is modeled by the tuple f=[gf, sf, df]. Each program representation may have subrepresentations, and they too may have subrepresentations, recursively. In general, a GenVoca value is a tuple of nested tuples that define a hierarchy of representations for a particular program.
- Example. Suppose terminal representations are files. In AHEAD, grammar gf corresponds to a single BNF file, source sf corresponds to a tuple of Java files [c1…cn], and documentation df is a tuple of HTML files [h1…hk]. GenVoca values (nested tuples) can be depicted as directed graphs: the graph for program f is shown in the figure below. Arrows denote projections, i.e., mappings from a tuple to one of its components. AHEAD implements tuples as file directories, so f is a directory containing file gf and subdirectories sf and df. Similarly, directory sf contains files c1…cn, and directory df contains files h1…hk.
- Note: Files can be hierarchically decomposed further. Each Java class can be decomposed into a tuple of members and other class declarations (e.g., initialization blocks, etc.).
Second, AHEAD expresses features as nested tuples of unary functions called deltas. Deltas can be program refinements (semantics-preserving transformations), extensions (semantics-extending transformations), or interactions (semantics-altering transformations). We use the neutral term “delta” to represent all of these possibilities, as each occurs in FOSD.
As an example, suppose feature j extends a grammar by Δgj (new rules and tokens are added), extends source code by Δsj (new classes and members are added and existing methods are modified), and extends documentation by Δdj. The tuple of deltas for feature j is modeled by j=[Δgj,Δsj,Δdj], which we call a delta tuple. Elements of delta tuples can themselves be delta tuples. For example, Δsj represents the changes that are made to each class in sf by feature j, i.e., Δsj=[Δc1…Δcn]. The representations of a program are computed recursively by composing tuples element-wise. The representations for parser p (whose GenVoca expression is j•f) are:
p = j • f                                -- GenVoca expression
  = [Δgj, Δsj, Δdj] • [gf, sf, df]       -- substitution
  = [Δgj•gf, Δsj•sf, Δdj•df]             -- compose tuples element-wise
That is, the grammar of p is the base grammar composed with its extension (Δgj•gf), the source of p is the base source composed with its extension (Δsj•sf), and so on. As elements of delta tuples can themselves be delta tuples, composition recurses, e.g., Δsj•sf= [Δc1…Δcn]•[c1…cn]=[Δc1•c1…Δcn•cn]. Summarizing, GenVoca values are nested tuples of program artifacts, and features are nested delta tuples, where • recursively composes them. This is the essence of AHEAD.
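The recursion in • can be sketched in a few lines of Java (a hypothetical illustration, not AHEAD's composer): nested tuples are modeled as maps, and at the leaves string concatenation with • stands in for applying a delta to an artifact.

import java.util.*;

// Hypothetical sketch of AHEAD-style recursive, element-wise tuple composition.
class AheadComposeSketch {
    @SuppressWarnings("unchecked")
    static Object compose(Object delta, Object base) {
        if (delta instanceof Map && base instanceof Map) {
            Map<String, Object> d = (Map<String, Object>) delta;
            Map<String, Object> b = (Map<String, Object>) base;
            Map<String, Object> out = new LinkedHashMap<>(b);
            d.forEach((k, dv) -> out.put(k, b.containsKey(k) ? compose(dv, b.get(k)) : dv));
            return out;                  // recurse into matching nested tuples
        }
        return delta + "•" + base;       // leaf: apply the delta to the artifact
    }

    public static void main(String[] args) {
        Map<String, Object> f = Map.of("grammar", "gf",
            "source", Map.of("c1", "c1", "c2", "c2"), "doc", "df");    // value f
        Map<String, Object> j = Map.of("grammar", "Δgj",
            "source", Map.of("c1", "Δc1", "c2", "Δc2"), "doc", "Δdj"); // delta tuple j
        System.out.println(compose(j, f)); // e.g., grammar=Δgj•gf, source={c1=Δc1•c1, ...}
    }
}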
FOMDD
Feature Oriented Model Driven Design (FOMDD) [4][5] combines the ideas of AHEAD with Model Driven Design (MDD) (a.k.a. Model-driven architecture (MDA)). AHEAD functions capture the lockstep update of program artifacts, but there are other functional relationships among program artifacts that express derivations. For example, the relationship between a grammar gf and its parser source sf is defined by the javacc tool. Similarly, the relationship between Java source sf and its bytecode bf is defined by the javac compiler. A commuting diagram expresses these relationships: objects are program representations, downward arrows are derivations, and horizontal arrows are deltas. The figure to the right shows the commuting diagram for program p3 = i•j•h = [g3, s3, b3].
A fundamental property of a commuting diagram is that all paths between two objects are equivalent. For example, one way to derive the bytecode b3 of parser p3 (the lower right object in the figure) from the grammar gh of parser h (the upper left object) is to derive the bytecode bh and then refine it to b3; another way refines gh to g3 and then derives b3:
Δbi • Δbj • javac • javacc = javac • javacc • Δgi • Δgj
There are multiple possible paths to derive the bytecode b3 of parser p3 from the grammar gh of parser h. Each path represents a metaprogram whose execution synthesizes the target object (b3) from the starting object (gh). There is a potential optimization: traversing each arrow of a commuting diagram has a cost. The cheapest (i.e., shortest) path between two objects in a commuting diagram is a geodesic, which represents the most efficient metaprogram that produces the target object from a given object.
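A geodesic can be found with an ordinary shortest-path algorithm over the diagram. The sketch below (hypothetical node names and costs, purely illustrative) models representations as nodes, deltas and tool derivations as weighted edges, and applies Dijkstra's algorithm:

import java.util.*;

// Hypothetical geodesic search: nodes are program representations, edges are
// deltas or tool derivations with illustrative costs; Dijkstra finds the
// cheapest synthesis path (the geodesic).
class GeodesicSketch {
    record Edge(String to, double cost) { }

    static double cheapest(Map<String, List<Edge>> graph, String src, String dst) {
        Map<String, Double> settled = new HashMap<>();
        PriorityQueue<Map.Entry<String, Double>> pq =
            new PriorityQueue<>(Map.Entry.comparingByValue());
        pq.add(Map.entry(src, 0.0));
        while (!pq.isEmpty()) {
            var e = pq.poll();
            if (settled.containsKey(e.getKey())) continue;     // already finalized
            settled.put(e.getKey(), e.getValue());
            if (e.getKey().equals(dst)) return e.getValue();
            for (Edge out : graph.getOrDefault(e.getKey(), List.of()))
                pq.add(Map.entry(out.to(), e.getValue() + out.cost()));
        }
        return Double.POSITIVE_INFINITY;
    }

    public static void main(String[] args) {
        Map<String, List<Edge>> diagram = Map.of(
            "gh", List.of(new Edge("g3", 2), new Edge("bh", 10)),  // grammar deltas vs. derive bytecode
            "g3", List.of(new Edge("b3", 10)),                     // derive bytecode from g3
            "bh", List.of(new Edge("b3", 5)));                     // bytecode deltas
        System.out.println(cheapest(diagram, "gh", "b3"));  // prints 12.0 (route via g3)
    }
}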
- Note: The basic ideas sketched here align with elementary ideas from category theory [4][5].
Applications
There is a long history of product-line applications developed using FOSD. Among them are:
- Network Protocols
- Extensible Database Systems
- Data Structures
- Distributed Army Fire Support Simulator
- Graph Product Line
- Extensible Java Preprocessors
- Web Portlets
- SVG Applications
References
1. Design and Implementation of Hierarchical Software Systems with Reusable Components.
2. Access Path Selection in Relational Databases.
3. Scaling Step-Wise Refinement.
4. Feature Oriented Model Driven Development: A Case Study for Portlets.
5. Generative Metaprogramming.
6. Superimposition: A Language-Independent Approach to Software Composition.
7. Feature Models, Grammars, and Propositional Formulas.