Talk:Lambda calculus
Note on readability
Just wanted to leave a note saying that the article seemed pretty confusing to read at first sight. (There seemed to be good links though, and maybe I'll come back after understanding it to make it more readable.) Harshav
Computer science?
The article starts with "In computer science, lambda calculus...". This doesn't make sense to me; lambda calculus was originally part of mathematics. In fact the sentences immediately afterwards make the start sound silly since they cite Church and Kleene from the 1930s, when there were no computers and no computer science. I am changing this to "mathematical logic and computer science". --Fuchsias 22:05, 1 August 2006 (UTC)
Syntax of constant function
Maybe I'm being really dumb but isn't (λ x. x 3) (λ x. x+2) identical to 3?? (λ x. x 3) is the constant function 3, applying it to (λ x. x+2) still gives you 3? Kuratowski's Ghost 15:28, 23 Nov 2004 (UTC)
- The constant function you mean is (λ x. 3) which is different from (λ x. x 3). See the difference? :-) -- Jan Hidders 15:50, 23 Nov 2004 (UTC)
- Nope, what I'm saying is that my understanding of (λ x. x 3) is that it is identical to writing (λ x. x) 3, which is 3. The article seems to treat it as meaning λ x.( (x) 3); in other words it's treating function application as having precedence over λ quantification. I've never seen that before and can't find any mention of it; all the texts I have include parentheses to avoid ambiguity. I've never really seen something like (λ x. x 3), which can even be misinterpreted as multiplication of x by 3. :P Kuratowski's Ghost 00:10, 24 Nov 2004 (UTC)
- The precedence rules are later explained in the section that gives the formal definition, right after the syntax. These are AFAIK the common conventions when using the dot notation. I agree if you say that we shouldn't apply those rules before we have actually introduced them. I leave it to you to add brackets to disambiguate. -- Jan Hidders 12:38, 24 Nov 2004 (UTC)
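- (A quick way to see the difference, reading juxtaposition as application and using Python lambdas purely as an illustration, not as anything from the article:)

    # (λx. x 3) applied to (λx. x+2): the body "x 3" is x applied to 3
    print((lambda x: x(3))(lambda x: x + 2))   # prints 5

    # (λx. 3) is the constant function; applied to anything it gives 3
    print((lambda x: 3)(lambda x: x + 2))      # prints 3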
- I'm trying to find a good rigorous source on the subject. Some of the stuff I have does naughty things like relying on the fact that people know that f and g are functions and that x and y are numbers so that fx is function application while xy is multiplication :) Kuratowski's Ghost 15:25, 24 Nov 2004 (UTC)
- I don't think that's a good idea. We should be consistent in this article and if you simulate numbers in pure calculus you cannot tell the difference. The best reference is the work by Barendregt, but that's probably not what you are looking for. For an introduction google for "introduction lambda calculus barendregt barendsen". -- Jan Hidders 23:03, 24 Nov 2004 (UTC)
- It looks quite good, still not rigorous enough for my liking ;) Kuratowski's Ghost 17:26, 30 Nov 2004 (UTC)
Kudos
Really excellent work on this. When I saw the initial version, I went ahead and made a stub for Y combinator, admittedly for humor reasons, but perhaps something about some of the combinators would be nice. :) -- EdwardOConnor
Renaming
When this article talks about renaming formal arguments:
(λx. λy. y x) (λx. y) to λz. z (λx. y)
It doesn't make sense to me.
- It's an information loss, because we don't know what z is.
- It conflicts with the discussion about normal forms, since then any expression can be transformed to another one just by calling it λ z. z.
But it is a very interesting topic, and I'm certainly learning from this page. -- forgotten gentleman
Alpha conversion
Change of bound variables is known as alpha conversion and has no effect on the meaning of the λ-expression, e.g.
(λ x. f x) 3 is just the same as (λ y. f y) 3
Your example is moderately interesting, and also requires the use of beta conversion (substitution), which goes as follows:
(λx. λy. y x) (λx. y)
= (λx. λz. z x) (λx. y)   (by alpha conversion - do this first!)
= λz. z (λx. y)   (by beta conversion, substituting (λx. y) for x)
- This should work this time, because there is no clash on y.
Does that make you happier? User talk:David Martland
- I hope that the reordering of the conversion processes now gives the correct results - thanks for the comment re free occurrences. This example does seem quite instructive - it ought to be possible to get somewhere going the other way, surely ...
- The article is veering towards considerable formalism - this is in many ways a good thing, but may also render it less accessible to people who don't have much of an idea about this stuff, and want to learn. Maybe some more examples would help? Sometimes intuitive examples do help. David Martland 22:22 Dec 7, 2002 (UTC)
Maybe there should be a note that this is the untyped lambda calculus, and that there are also several typed lambda calculi.
- done.
I removed this:
- Church invented lambda-calculus in order to set up a foundational project restricting mathematics to quantities with "effective procedures". Unfortunately, the resulting system admits Russell's paradox in a particularly nasty way; Church couldn't see any way to get rid of it, and gave up the project.
This is taken from FOLDOC; I don't think it is correct. The motive ascribed to Church (and Kleene) is wrong: they wanted to formalize the notion of computability. Furthermore, the paragraph presents the project of the lambda calculus as a failure; indeed it was a smashing success: it solved the Entscheidungsproblem and defined computable function for the first time cleanly (as the article already explains). It is not that "Church couldn't see any way to get rid of it", but in fact he proved that "there is no way to get rid of it", thus solving the Entscheidungsproblem in the negative. AxelBoldt 08:51 Sep 3, 2002 (PDT)
Barendregt quote
I did a little research and found an article by Henk Barendregt (one of the world's leading experts on lambda calculus) titled The impact of the lambda calculus in logic and computer science that you can find at http://citeseer.nj.nec.com/barendregt97impact.html and there he says:
- (page 2) As to the representation of proofs, one of Church's original goals had been to construct a formal system for the foundations of mathematics by having a system of functions together with a set of logical notions. When the resulting system turned out to be inconsistent, this program was abandoned by him. Church then separated out the consistent subsystem that is now called the lambda calculus and concentrated on computability. (*1). [...]
At the top of page 5 you can read more about the details. -- Jan Hidders 12:16 Sep 3, 2002 (PDT)
- Nice reference; we definitely have to weave this in then. AxelBoldt 14:20 Sep 3, 2002 (PDT)
Combinatory logic
I've written the beginning of an article about Combinatory logic, a variation of lambda calculus that has only application, and no abstraction. It seems to me it should be linked from here, but I'm not sure what to say. -- Mark Jason Dominus
De Bruijn indices
To do: De Bruijn discovered a version of the Lambda calculus that avoids the alpha-equivalence difficulties by eliminating all variable names. There should be a paragraph about that. I will write it if someone else doesn't get there first. Dominus 04:18 Nov 30, 2002 (UTC)
Alpha conversion quibble
The discussion of alpha-conversion is not quite correct. For example, it says
- The alpha-conversion rule states that
- λ V. E == λ W. E[V/W]
- where V and W are variables, E is a lambda expression and E[V/W] means the expression E with every free occurrence of V in E replaced with W.
Here is a counterexample: Take V==x, W==y, and E==(y x). Then the definition above asserts that
- λ x.(y x) == λ y.(y x)[x/y]
- == λ y.(y y)
since I have replaced "every free occurrence of x with y", as instructed. But the equivalence is wrong.
I would like to fix this, following the exposition in section 1.1 of Lectures on the Curry-Howard Isomorphism by Sørensen and Urzyczyn. If nobody objects, I will. At the same time I would add a section about the De Bruijn formulation, which avoids the α-reduction issue entirely. Dominus 00:17 Dec 1, 2002 (UTC)
- FWIW that certainly has my blessing (you just have to add that W does not occur freely in E and that all occurrences of W in E[V/W] are free) and I would even vote for a separate article on the De Bruijn formulation that explains the why and how of it. -- Jan Hidders 14:46 Dec 1, 2002 (UTC)
Eta conversion
What about eta conversions? I used to know this stuff, but now it's very hazy. I think there are some circumstances where etas are needed - not just alpha and betas. David Martland 22:51 Dec 10, 2002 (UTC)
- There's a bit on it now, but it needs work.
Are the last two paragraphs intended as a proof of eta-conversion or extensionality? It reads as if this is the intention. If this is the intention, then it is not a proof. One cannot prove two statements by showing that they imply each other. Here is a counter example:
statement1 := If I let go of my pen it will not fall.
statement2 := Gravity does not exist.
If either statement is assumed to be true then the other logically follows; however they are both clearly false.
- Extensionality is true by definition. The proof shows that the notion of eta-conversion gives rise to exactly the same equivalence relation as the notion of extensionality. greenrd 19:49, 28 December 2006 (UTC)
Kudos II
Great article! I'll point my students to it. -- Ellen Spertus
To do
TODO: connection between lambda calculus and natural language semantics Kowey 08:01, 27 Nov 2003 (UTC)
Parallelism in LC
I reverted the changes made by CarlHewitt, the reason being that the lambda calculus is suited neither for parallel nor for concurrent programming. This is motivated by Berry's sequentiality theorem (see, e.g., Barendregt's book, Ch. 14). In essence the theorem states that whatever execution is performed in the lambda calculus, it will always be sequential; it can never be either concurrent or parallel. Note, however, that simulation is possible for both parallel and/or concurrent programming: the lambda calculus is Turing complete. Koffieyahoo 6 July 2005 11:37 (UTC)
- The accepted practice on Wikipedia, if you disagree with something in the text, is as follows: maintain a neutral point of view by laying out the arguments for and against, instead of imposing your own point of view by simply reverting anything that you disagree with.--CarlHewitt 2005 July 6 15:34 (UTC)
- This is massively hypocritical, as you have reverted all alternative views (and even the addition of pointers to online papers) on the Actor model page, which you handle as if it were your personal property --Blissex
- Removing incorrect information is allowed, and your statement regarding parallelism is incorrect. So it seems to me we can agree on a correct statement on this discussion page and leave any incorrect information out of the article for the time being. Koffieyahoo 6 July 2005 17:45 (UTC)
- Actually the lambda calculus is fairly well suited for parallel computation but not for concurrent computation. On the other hand, concurrent computation as in the Actor model and Process calculi can perform the parallelism of the lambda calculus. Theory of the lambda calculus says that lambda calculus computations can always be carried out sequentially, not that they must be carried out sequentially.--CarlHewitt 2005 July 6 15:30 (UTC)
- No it is not. Try to define, e.g., the parallel-or in the lambda calculus (P(T,x) -> T; P(x, T) -> T; P(F, F) -> F). I think we can agree that this is about the most primitive parallel function that exists. Now, a consequence of Berry's theorem is that this function is *not* definable in the lambda calculus, so I don't see how you can justify that the lambda calculus allows for parallel computation. That the lambda calculus allows for parallel reduction, which is what you mean, is something completely different. Koffieyahoo 6 July 2005 17:45 (UTC)
- It's OK by me if you want to call computation by the name "reduction". You might try working through the parallel-or using the combinators for true and false.--CarlHewitt 2005 July 6 18:14 (UTC)
- Clearly you only partially understood what I tried to say. The true and false are not the problem. If you want to introduce them as constants it's even fine. This still does not make it possible to write down the parallel-or as a lambda term. Koffieyahoo 6 July 2005 18:19 (UTC)
- If you look in the literature, you will find true and false themselves defined as lambda expressions. Using them it is possible to study parallel-or using parallel reductions.--CarlHewitt 2005 July 6 18:37 (UTC)
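- (To make the discussion concrete, here are the standard Church encodings of true, false and the usual sequential "or", written as Python lambdas purely as an illustration. The point is that this or has to inspect its first argument before it can produce anything, which is exactly what a parallel-or would avoid, and what, by Berry's theorem, no lambda term can avoid.)

    TRUE  = lambda x: lambda y: x         # λx.λy.x
    FALSE = lambda x: lambda y: y         # λx.λy.y
    OR    = lambda p: lambda q: p(p)(q)   # λp.λq. p p q, the usual sequential or

    # If evaluating p diverged, OR p q would diverge too, even when q is TRUE;
    # a parallel-or would still return TRUE in that case.
    print(OR(TRUE)(FALSE) is TRUE)    # True
    print(OR(FALSE)(FALSE) is FALSE)  # True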
- Given the existence of parallel rewriting models for the lambda calculus, I can see no problem with what Carl wrote. Koffieyahoo: you might like to think more carefully about what Berry's result tells us about the possibility of using the lambda calculus to express concurrent computation. --- Charles Stewart 6 July 2005 19:37 (UTC)
- Charles has a good point, but the issue of whether the lambda calculus can implement parallel-or is not settled until the investigation is done. My conjecture would be that there are some limitations on the ability of the lambda calculus to implement parallel-or. Of course the lambda calculus can clearly implement some kinds of parallelism such as the parallel reduction of the arguments of a lambda expression.--Carl Hewitt 8 July 2005 06:50 (UTC)
- It's a direct consequence of Berry's theorem (this is immediate by Thm 14.4.12 in Barendregt's book) that the only way to write down the parallel-or in the lambda calculus is to write down some lambda term that alternates between the left and right arguments of the parallel-or. Consequently, the implementation is no longer really parallel (even if parallel reduction is used): it can happen, for example, that the right argument of the parallel-or has already been reduced to true, but that before we really obtain this result (as in, reduction to the lambda term that represents it), we first have to do some more reductions on the left argument. If the lambda calculus were truly parallel, however, this should not be necessary; the result should be available immediately.
- That parallel reduction is possible at all, is just a consequence of the lambda calculus being side-effect-free. It does not mean the lambda calculus is truly parallel.
- I think it's best if the statement is changed into something much more concrete, like: multiple beta-redexes can be reduced at the same time, but this can always be emulated by reducing a number of beta-redexes one after another. Referring to sequentiality and parallelism is just too confusing in the light of Berry's theorem. Koffieyahoo 8 July 2005 09:35 (UTC)
- The article currently states "Theory of the lambda calculus says that lambda calculus computations can always be carried out sequentially not that they must be carried out sequentially. The lambda calculus is suitable for expressing some kinds of parallelism but in general not concurrency. On the other hand concurrent computation as in the Actor model and Process calculi can perform the parallelism of the lambda calculus."
- I do not see how Berry's theorem contradicts the above statement.--Carl Hewitt 21:22, 11 July 2005 (UTC)
- It contradicts "The lambda calculus is suitable for expressing some kinds of parallelism", as Berry's theorem implies that eventually you have to do something you wouldn't have to if this were a parallel formalism. By the way, I have no objections to the first sentence, although I think it should be made more concrete as I already suggested above. -- Koffieyahoo 13:50, 13 July 2005 (UTC)
- The possibility of evaluating arguments to functions at the same time (possible in both CBN and CBV evaluation) is already a useful form of parallelism. And John Lamping's partial sharing graphs, used to realise optimal reductions in the lambda calculus, show that there is more parallelism than that available in the lambda calculus. Berry's theorem tells you that POR cannot be implemented given a particular encoding of the Booleans. It does not tell us what can be done if Booleans were encoded at higher types. --- Charles Stewart 20:39, 22 July 2005 (UTC) (PS. Indentation changed in above, to keep different speakers at same depth)
- Two comments on this:
- Berry does not specify which particular encoding of the Booleans you should choose, only that one must be assumed beforehand, which is completely fair, as otherwise you would never know what true would be and what false would be. So, higher-type (whatever that would be in the untyped lambda-calculus) is covered by the theorem.
- There is a difference between (a) parallel execution, which exists for the lambda calculus in the form of parallel reductions, sharing graphs, etc., and (b) a truly parallel program, of which POR is the prime example: just start evaluating both arguments, and when one eventually reduces to "true" we don't need to evaluate the other argument any further before we move on to the next part of our "program"; Berry's theorem shows that this "we don't need to evaluate the other argument any further" is impossible. Unfortunately, the most common reading of "parallel" is (b) and, to make things worse, a lot of people assume "concurrency" is synonymous with parallelism.
- Of course parallelism is a special case of concurrency.--Carl Hewitt 11:13, 4 August 2005 (UTC)
- No it is not; you can easily do concurrency without any form of parallel execution. See for example the (single-core) single processor (hyper-threading off) you probably have on your desk: it does concurrency (in the form of preemptive multitasking), but not parallel execution (there is only one processor). -- Koffieyahoo 12:19, 19 August 2005 (UTC)
- Hence, my problems with the current text aren't resolved yet. -- Koffieyahoo 10:13, 4 August 2005 (UTC)
- I clarified the article as follows: "Theory of the lambda calculus says that lambda calculus computations can always be carried out sequentially not that they must be carried out sequentially. The lambda calculus is suitable for expressing some kinds of parallelism, e.g., the parallel evaluation of the arguments of a procedure. However the lambda calculus does not in general implement concurrency, e.g., a shared financial account that is updated concurrently. On the other hand concurrent computation as in the Actor model and Process calculi can perform the parallelism of the lambda calculus."--Carl Hewitt 11:21, 4 August 2005 (UTC)
- I would drop everything from e.g. onwards (it is covered by the concurrency link) and move it to a slightly different part of the article, but this sounds more like it. -- Koffieyahoo 12:19, 19 August 2005 (UTC)
The paragraph doesn't explicitly mention – though alludes to – Church-Rosser, and is phrased in a confusing manner.
I do notice that I'm about a year behind the discussions, but would nevertheless like to rewrite the paragraph as follows, if no one objects or offers suggestions (tempted to cut the examples off after ‘process calculi’ though...): --Liyang 06:12, 27 July 2006 (UTC)
- The Church-Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order. (Indeed, it is referentially transparent.) While this means the lambda calculus can model the various nondeterministic evaluation strategies, it does not offer any richer notion of parallelism, nor can it express any concurrency issues. Process calculi such as CSP, the CCS, the π calculus and the ambient calculus have been designed for such purposes.
Looks good. -- Koffieyahoo 07:00, 27 July 2006 (UTC)
Representation of integers, functions, etc.
The section "Arithmetic in lambda calculus" misstates the manner in which integers, functions, etc., are standardly represented in lambda calculus. In the standard rendering, mathematical objects such as booleans, integers, functions, etc., are all represented by lambda terms — they are not themselves lambda terms. Thus, Church numerals standardly represent integers (they are not themselves integers, despite the wording of the article), and certain lambda expressions represent functions on numerals (they are not functions on integers, as the article would have it). Is there a good reason for not rewriting this section accordingly? --r.e.s. (Talk) 07:01:46, 2005-08-05 (UTC)
- There is no practical difference, and very little philosophical difference, between saying that the lambda terms merely "represent" numbers, and going one more step and identifying the numbers with the lambda terms themselves. In the former case, one needs to prove that the lambda term λnfx.(f (n f x)) transforms terms to terms in a way that is isomorphic to the way the successor function operates on integers; in the latter case, one needs to prove that λnfx.(f (n f x)) has the properties that one usually associates with the successor function. Practically, these come to the same thing.
- One can make the same distinction in other areas as well. For example, one can quibble over whether a Dedekind cut is a real number, or whether it is merely representation of a real number. But the practical difference is minimal, and mathematicians do not encounter any problems when they identify real numbers with cuts and say that the cut is the number. Similarly one can quibble about whether the Von Neumann construction {}, {{}}, {{}, {{}}}, ... actually constructs the set of natural numbers, or only some object that is isomorphic to the set of natural numbers. But practically there is no difference, and mathematicians are happy to speak of the number 2 as being identical with the set {{}, {{}}} in the context of such discussions. Ignoring the distinction may be technically incorrect in some hyperstrict sense, but it is only a stylistic decision, and not one that affects the development of the theory.
- To answer your question, it seems to me that there is an excellent reason for not rewriting the section as you suggest. The rewriting to use the "representation" point of view adds an additional layer of verbiage, abstraction, and technical terminology that only serves to obscure the real point of what is going on. This Wikipedia article is not intended to be a perfectly rigorous exposition of the lambda calculus for specialists, à la Barendregt, but rather is for a general, nonspecialist audience. In my opinion, the rewrite you suggested is unnecessary and pedantic, and would accomplish nothing of value, while at the same time rendering the article more difficult to understand. -- Dominus 08:12, 5 August 2005 (UTC)
- I disagree with your opinions as follows:
- (1) First the concept ... A standard presentation in terms of Church numerals would be no harder for the general reader to understand than is the unnecessary innovation of so-called "Church integers" (sic), and in my opinion the distinction is neither "hyperstrict", "merely stylistic", nor pedantic.
- (2) Secondly the name ... Church numerals are universally known by that name. I suppose it is not Wikipedia policy to replace such a standard name with a newly-invented one -- especially in a separate article on that subject.
- --r.e.s. (Talk) 20:23:04, 2005-08-05 (UTC)
- I agree with you that the term "Church integers" is incorrect and should be replaced. Tony Sidaway already replaced one instance of this phrase; I have just replaced the other. -- Dominus 06:18, 6 August 2005 (UTC)
- I've changed one more (and I think the last) instance of "Church integer" to "Church numeral". Also, for what it's worth, I've added to the Church numeral article what I think is a more-standard presentation that emphasises the lambda calculus rather than just Haskell and Scheme. --r.e.s. (Talk) 08:01:26, 2005-08-06 (UTC)
Maybe add the syntactic convention for curried functions?
I think that λ x . λ y . ... should be changed to λ x y . ... , of course explaining this syntactic convention in the section on curried functions. --64.231.73.48 22:46, 23 October 2005 (UTC)
Recent Edits Need Cleanup
Edits added 11-06 need at the very least radical cleanup. I would revert to the version from Nov. 2, but I'm not sure if I'm missing the value in the Nov. 6 changes. Jyossarian 18:11, 7 November 2005 (UTC)
- I think they have negative total value. -- Dominus 21:57, 7 November 2005 (UTC)
- Deleted nonsense section "critics on Church" 130.94.162.64 07:12, 8 November 2005 (UTC)
- I reverted to the 11-02 revision. None of the changes improve the clarity of the explanation. They seem to be the editor's personal philosophical take on the Lambda calculus. - Jyossarian 18:52, 8 November 2005 (UTC)
Not for one Turing machine only, but for all Turing machines?
- JA: Not that the extra word wasn't helpful in that context, but, in the presence of a universal Turing machine it's really an inessential generalization. Jon Awbrey 12:18, 28 January 2006 (UTC)
Programming languages
The section on programming languages seems to be POV. It refers to Emacs Lisp but not ALGOL 60 as "archaic" (only Emacs Lisp is still in use). It incorrectly lauds Common Lisp as lacking dynamic binding (all Lisps need dynamic binding, which CL calls "special variables"). The claim that binding in Lisps doesn't match the lambda calculus because their authors "misunderstand" the calculus (rather than because the calculus in pure form has poor efficiency) is just a damn lie. jyavner 17:36, 29 January 2006 (UTC)
- Wow, take it easy man. The use of archaic is only there to contrast it to the other variants of Lisp. Anyway, I changed this to older which is more neutral.
- For your claims regarding dynamic binding: to my knowledge the current text of the article is more or less correct, and there actually was a misunderstanding. If my knowledge is incorrect, please provide references that argue as much. -- Koffieyahoo 08:51, 30 January 2006 (UTC)
- McCarthy's original Lisp used dynamic scoping, and I find it rather unlikely that McCarthy "misunderstood" the lambda calculus.
- As for Emacs Lisp: Scheme actually predates Emacs Lisp by several years; it was designed in the late 1970s. In a 1981 paper on Emacs Lisp, Stallman discussed the pros and cons of dynamic vs lexical scoping, and why he decided on dynamic scoping for Emacs Lisp [1]. So the question is, can you provide references to back up your claim that there was a misunderstanding? -- CYD
- E.g. this report, a shorter version of which was published in Lisp and Symbolic Computation (a refereed journal), claims this in the first paragraph of its introduction: bug. So, you might guess that if this claim had been wrong a referee would have caught it. (I'll see if I can find something that makes a more explicit claim. However, this is sort of the folk tale, accepted by a large number of people within the Lisp and lambda calculus world, of how dynamic binding came into being. So something in this direction absolutely needs to stay in.)
- Oh, and I don't think I made any claims about Emacs Lisp, now did I? -- Koffieyahoo 18:39, 31 January 2006 (UTC)
- Ahem: "Older dialects of Lisp, such as Emacs Lisp, still use dynamic binding, and so are not based on the lambda calculus. Rather, they are based on the syntax of the lambda calculus, together with a misunderstanding of the notion of binding and substitution in the lambda calculus." -- CYD
- I suggest that you look at the history of the text. You're mixing two issues here: archaic and misunderstanding. Anyway this is rather irrelevant. The question is: do we agree that we can call Emacs Lisp an older variant of Lisp, or don't we? -- Koffieyahoo 08:40, 1 February 2006 (UTC)
- It is newer than both Scheme and Common Lisp, so if anything it's a newer Lisp. -- 18.19.6.82
- It is difficult to find unbiased sources on this subject. [2] reports that McCarthy says he wasn't trying to match the binding-methodology of lambda calculus, then disbelieves him because he is "quite near to Scheme!" Like absolutely every article I've been able to find on the subject, this one takes as a given that McCarthy was "really" trying to write Scheme, and so every way that Lisp is different from Scheme is a "mistake". If all the referees believe this meme, they will not think there's anything wrong with the use of the word "bug" in a paper. Another subtle problem is that McCarthy *did* think that funarg represented a bug, but "funarg problem" ≠ "dynamic binding".
- There are other considerations besides theoretical rigor in language design. Scheme is the best language only if rigor is all you care about. But we are not talking here about programming languages. The topic is lambda calculus. As a side note to the calculus, we can mention that some programming languages match up with it quite well, others less so. But matching perfectly is not a goal for most language designers and this article should not be grading their work on that basis. I was planning to edit the article towards this end, but I see that User:CYD has already beat me to it. jyavner 07:01, 1 February 2006 (UTC)
- McCarthy himself writes: In modern terminology, lexical scoping was wanted and dynamical scoping was obtained. I must confess that I regarded this issue as just a bug and expressed confidence that Steve Russell would soon fix it (ACM SIGPLAN Notices, Vol 13, No. 8, 1978, p. 220). I can only explain these sentences as McCarthy admitting that he misunderstood what was going on in lambda calculus vs Lisp. The funarg device was the result of this misunderstanding. -- Koffieyahoo 08:36, 1 February 2006 (UTC)
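- (An aside for readers following along: the lexical-vs-dynamic difference McCarthy is describing can be seen in any lexically scoped language. Here is a small Python sketch, used only as an illustration of the two behaviours, not as anything from the sources cited above.)

    x = "outer"

    def f():
        # lexical scoping: f sees the x from where f was defined
        return x

    def g():
        x = "inner"   # under dynamic scoping, a call to f() here would see this binding
        return f()

    print(g())  # prints "outer"; a dynamically scoped Lisp would give "inner"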
Plugging Eiffel
Hello Fuchsias,
I see you've been rather enthusiastic about adding entire paragraphs on Eiffel's agents to articles on the lambda calculus. It makes more sense to put the examples on the Eiffel page itself.
The study of the lambda calculus is a large enough field on its own as to make the link to Eiffel rather tenuous. If you do decide to leave a link, then quick references to other languages in this regard would be welcome too, in order to maintain a neutral PoV.
Cheers, --Liyang 19:01, 31 July 2006 (UTC)
Hi, Liyang, thanks for the comment on my Eiffel edit. The lambda calculus article has extensive sections on other languages on how to emulate lambda expressions, but nothing on Eiffel, which has full-fledged support for the mechanism. So I added the information. I have refrained from adding similar sections on other languages since I prefer to make edits only where I know the subject matter very well, but may do so in the future, and of course others can provide info on other languages.
Fuchsias 19:27, 31 July 2006 (UTC)
You said:
The lambda calculus article has extensive sections on other languages on how to emulate lambda expressions
Please point out one or two such paragraphs. Lambda calculus mentions functional programming languages and cites Haskell, ML, Lisp as being the prominent ones. No examples in any of these languages.
You also said:
but nothing on Eiffel that has full-fledged support for the mechanism.
Eiffel's ‘mechanism’ for supporting closures / anonymous functions (or objects in Eiffel's case) is (in some cases not) as fully-fledged as that of Smalltalk, Ruby, Python, Perl, Scala, JavaScript or a dozen other general purpose not-primarily-functional programming languages; even C++ has support for this ‘mechanism’ via Boost. And let's not forget that Java has anonymous objects which can be used to achieve the same effect. If you mention one, you should at least mention the more widely-used ones from the rest too.
Having said this, the articles are mainly concerned with the lambda calculus formalism, as part of mathematical logic, theories of computation and type theory. Programming examples do not belong there; there are much more appropriate venues. Look: here's a section I found on Eiffel's agents.
Adding extra information to Wikipedia articles is generally appreciated, but please make sure it's being added to the correct page, under the right context.
Thank you, --Liyang 03:29, 1 August 2006 (UTC)
Regarding sections on other languages: they are not as extensive as I remembered them from a few months ago, or maybe I confused with another page I had just read, sorry.
But I still think it is important to mention that lambda-calculus mechanisms are not limited to Lisp and functional languages. There is a rather strong bias in favor of functional programming on many of these pages, and I think it should be corrected. What I propose to do is to add information about other non-functional languages, e.g. C#. That will take a couple of days.
Thanks,
--Fuchsias 03:47, 1 August 2006 (UTC)
You said:
But I still think it is important to mention that lambda-calculus mechanisms are not limited to Lisp and functional languages.
Please read carefully what I wrote above, in particular the penultimate paragraph.
You do not mean ‘lambda-calculus mechanisms’ (there is no such concept). You mean anonymous function definitions, closures (related to continuations) and first-class functions (various hacks include Java's inner classes).
I acknowledge your expertise with Eiffel, but trust me, I am not at all unfamiliar with what I'm talking about either.
Thanks, --Liyang 04:16, 1 August 2006 (UTC)
Hi Liyang. I am a little concerned by your last message. Did I imply that I didn't value your competence? I would be very sorry if I had made any such comment.
Also, I don't understand your observation that
Having said this, the articles are mainly concerned with the lambda calculus formalism, as part of mathematical logic, theories of computation and type theory. Programming examples do not belong there, there are much more appropriate venues.
The article is not just about math since it includes a section "Lambda-calculus and programming languages" -- precisely the one we are discussing. That section should be as good as the math part, in particular it should be well-balanced and not limited to pushing the functional language POV. I think the paragraph on imperative languages as it stands now does a reasonable job of completing the picture.
Fuchsias 17:38, 1 August 2006 (UTC)
You said:
Did I imply that I didn't value your competence?
No, I was asserting that I'm familiar with the lambda calculus as well a range of programming languages: imperative, OO, functional and logical.
You said:
(Lambda calculus and programming languages) should be as good as the math part, in particular it should be well-balanced and not limited to pushing the functional language POV.
It's not pushing a functional PoV. The section was simply to point out that functional languages are, to a large extent, extensions of the lambda calculus.
As an aside, let's have a look at what you've written:
Numerous imperative languages, e.g. Pascal, have long supported passing subprograms as arguments to other subprograms. In C and C++ the equivalent result is obtained by passing pointers to the code of functions (subprograms).
What you have described is certainly not equivalent. A function pointer/reference does not a closure make. Please fix this after you have read and understood the article on closures.
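(For readers unfamiliar with the distinction, a minimal sketch in Python, chosen purely as an illustration: the returned function carries its defining environment with it, which is exactly what a bare C function pointer cannot do.)

    def make_adder(n):
        # the returned lambda closes over n, the argument of make_adder
        return lambda x: x + n

    add3 = make_adder(3)
    print(add3(4))  # 7: n is still available even though make_adder has returned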
New mechanisms in imperative object-oriented languages have, however, started to provide the direct equivalent of lambda expressions of arbitrary order; this is the case "agents" in Eiffel and "delegates" in C#.
These mechanisms are not new. Unless you regard Smalltalk as new. Nor are such mechanisms directly equivalent. Saying that agent (x: REAL): REAL do Result := x * x end represents (λx. x*x) is firstly factually untrue: you're enforcing a particular evaluation order, namely passing the argument by value. Secondly there is no * operator in the lambda calculus.
What you are adding is – while appreciated – outside the scope of the article.
Wikipedia guidelines recommend that I simply change anything non-factual without discussion. I'm trying to do you a favour so you can fix the errors and place the information somewhere appropriate. I'm tired of this. If you still insist on tenuous additions which have more appropriate homes, please take this discussion to the talk page. –Liyang 19:03, 1 August 2006 (UTC)
You are right about Smalltalk -- I will fix this.
It is really a pity that you should get so fired up because other people are trying to improve the coverage of the page. There is more to life than functional languages. You should not get "tired" so quickly; listen instead to the arguments of other people (as I am doing to yours).
Even the part about functional languages was poorly written, and it still needs improvement.
Copied and pasted from our user talk pages. I'm tired of this. Someone else take over. –Liyang 19:11, 1 August 2006 (UTC)
Thanks for copy-pasting this discussion (although comments that sound fine in personal discussions may appear too strong in a public discussion -- none of mine are intended that way as I respect your expertise). The point is simply to have a balanced view, in the programming language section, between functional and imperative languages. In fact it is striking that Landin's original paper was focused on Algol 60, which is about as imperative as it gets. Fuchsias 19:33, 1 August 2006 (UTC)
λ calculus
The phrase lambda calculus should be λ calculus throughout, except the article title. If you disagree, explain why you don't support Pee = Enn Pee and so on. CMummert 22:49, 15 July 2006 (UTC)
- I don't think so. First, λ calculus should be spelled λ-calculus (with a dash), but that's just a small technical point. Second, in the literature there is no real preference for either lambda calculus or λ-calculus, so changing this potentially might give rise to a revert war. -- Koffieyahoo 05:59, 17 July 2006 (UTC)
I think the introduction should somehow explain that "λ" is the Greek letter lambda. It seems to state the proposition "lambda calculus = λ-calculus" in the introduction without much explanation that these two names are basically the same thing. --Whiteknox 21:32, 2 February 2007 (UTC)
A slight confusion
First of all, sorry if I'm being stupid; it's 2 in the morning and I stumbled onto this article and thought that I might try to see if I can manage to learn this. In the first definition of PRED, is the (lambda u.u) absolutely necessary (it doesn't seem to do anything, like multiplying by 1)? Also, wouldn't it be better if the definitions were a bit more consistent (all with m, n AND f and x)? Now let's see what happens when I program this in Python... Zyxoas (talk to me - I'll listen) 00:24, 3 December 2006 (UTC)
- Yes, it is necessary. Consider the difference between P and (P λu.u); one is the function P, and the other is the result of applying P to the identity function, not the same thing at all.
- As for writing the code in Python, good luck. It can be done, but there are some complications. For example, IFTHENELSE will not work as written, because Python uses applicative order reduction, which is not always guaranteed to find the normal form of an expression. A discussion of these issues, and working demonstration code, can be found at Perl contains the λ-calculus. The translation from Perl to Python should be simple, since the two languages are so similar.
- I hope this was useful. -- Dominus 01:24, 3 December 2006 (UTC)
Yes, that was very useful, thanks. It's just that when I did pred 1 and pred 2 by hand the (lambda u.u) seemed to always just end up on the left side, and didn't seem to have an effect (the (lambda g h.h (g f)) flips its parameters around), or am I missing something here? Give me Turing machines over lambda calculus any day!! ;) Zyxoas (talk to me - I'll listen) 11:24, 3 December 2006 (UTC)
- I wonder if you reduced something the wrong way? I think the reduction of (PRED 1) goes like this:
- (PRED 1)
- (λnfx.(n λgh.(h(g f)) λu.x λu.u) 1)
- λfx.(1 λgh.(h(g f)) λu.x λu.u)
- λfx.(λfx.(f x) λgh.(h(g f)) λu.x λu.u)
- λfx.(λx.(λgh.(h(g f)) x) λu.x λu.u)
- λfx.(λx.λh.(h(x f)) λu.x λu.u)
- λfx.(λh.(h(λu.x f)) λu.u)
- λfx.(λu.u (λu.x f))
- λfx.(λu.x f)
- λfx.x
- And the result is 0, as it should be. But if you leave out λu.u, it doesn't go correctly. It's pretty much the same up through step 7:
- (PRED 1)
- (λnfx.(n λgh.(h(g f)) λu.x) 1)
- λfx.(1 λgh.(h(g f)) λu.x)
- λfx.(λfx.(f x) λgh.(h(g f)) λu.x)
- λfx.(λx.(λgh.(h(g f)) x) λu.x)
- λfx.(λx.λh.(h(x f)) λu.x)
- λfx.(λh.(h(λu.x f)))
- λfx.(λh.(h x))
- λfxh.(h x)
- But then the λh.(h(λu.x f)) doesn't lose the h the way it needs to in order to continue. -- Dominus 14:54, 4 December 2006 (UTC)
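- For what it's worth, the whole reduction can also be checked mechanically by writing the Church numerals and the PRED discussed above as Python lambdas (only an illustration; as noted earlier in this thread, Python's applicative order is fine here because nothing diverges):

    ZERO = lambda f: lambda x: x
    ONE  = lambda f: lambda x: f(x)
    TWO  = lambda f: lambda x: f(f(x))
    # PRED := λn.λf.λx. n (λg.λh. h (g f)) (λu.x) (λu.u)
    PRED = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

    to_int = lambda n: n(lambda k: k + 1)(0)     # read a Church numeral back as an int
    print(to_int(PRED(ONE)), to_int(PRED(TWO)))  # 0 1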
Suggestions to add to article
The more confusing parts of standard lambda calculus:
- Note from D.Keenan: "Note that we must now be careful to use an explicit multiplication sign where juxtaposition might be wrongly interpreted as function application."
- Note from R.Rojas: "(...) we do not give names to functions."
Syntax error in examples??
Please, in the article or on this Talk page, show HOW TO TEST, with a "testable notation". The simplest and easiest way to test is the UCSD/Barker JavaScript page... please test there; it is easy to convert the notation (check whether my conversions are OK).
HOW TO TEST IT (??):
0 := (lambda f (lambda x x))
λ f x. x :=? (lambda f (lambda x (x) ))
1 := (lambda f (lambda x f x))
λ f x. f x :=? (lambda f (lambda x (f x) ))
2 := (lambda f (lambda x f (f x)))
λ f x. f (f x) :=? (lambda f (lambda x (f (f x)) ))
...
SUCC :=? (lambda n (lambda f (lambda x (f (n f x)))))
λ n f x. f (n f x)
Is there an error in the article's SUCC?
SUCC X := ((lambda n (lambda f (lambda x (f (n f x))))) X)
SUCC 0 := ((lambda n (lambda f (lambda x (f (n f x))))) (lambda f (lambda x x)))
BUT REDUCING == (lambda f1 (lambda x3 (f1 (lambda x4 x4)))) IT IS NOT 1!
— unsigned added by 201.83.176.224 at 13:43, 6 January 2007 (UTC).
Yes, UCSD/Barker is a very simple way for readers and collaborators to check the examples and test their understanding of the article's syntax explanations... and the article has small problems; it puts no emphasis on the "primer problems":
- mistakes in the interpretation of juxtaposition: multiplication? function application? a new variable name?
- "<identifier> <identifier>" is a valid expression, and is not a application!
xyz
is not the same asx y z
: juxtaposition with spaces are sequences!
- "<identifier> <identifier>" is a valid expression, and is not a application!
- mistakes with parentheses: E = (E), but are the juxtapositions E1 (E2 E3) and (E1 E2) E3 the same?
- how to translate to implementations, to run the examples?
I think the article needs more comments, review (homologation tests), and an indication of "how to test the examples".
The only problem with UCSD/Barker is eta-conversion: it does not do it. See other lambda calculators (but they are not accessible to "all readers").
About the SUCC problem, test:
SUCC M := ( (λ n (λ f (λ x (f ((n f) x))) )) M)
You used f (n f x), and perhaps the correct form is f ((n f) x). The use of parentheses, in the UCSD/Barker interpreter (and many other practical ones), fixes the application order.
Perhaps in the article we need to use
- SUCC := λ n f x. f ((n f) x) instead of f (n f x)
Checks ("homologations") with 0 and 1 (Barker-interpreter reductions):
SUCC 0 := ( (λ n (λ f (λ x (f ((n f) x))) )) (λ s (λ z z)) )
= (lambda f1 (lambda x3 (f1 x3)))
= λ f (λ x (f x)) = λ f (λ x f(x)) = 1
SUCC 1 := ...
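For what it's worth, the same check can be made without the interpreter by writing the terms as Python lambdas (just an illustration, not part of the article). Since application is left-associative, f (n f x) and f ((n f) x) denote the same term, and SUCC 0 does reduce to 1:

    ZERO = lambda f: lambda x: x                     # λ f x. x
    SUCC = lambda n: lambda f: lambda x: f(n(f)(x))  # λ n f x. f ((n f) x)

    to_int = lambda n: n(lambda k: k + 1)(0)         # read a Church numeral back as an int
    print(to_int(SUCC(ZERO)))        # 1
    print(to_int(SUCC(SUCC(ZERO))))  # 2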
See also R.Rojas PDF. -- Krauss 20:15, 6 January 2007