Talk:C (programming language)/archive
From Wikipedia, the free encyclopedia
00:42, 3 June 2006 (UTC)
This archive contains comments that were largely (but not exclusively) written before 2006.
Notes on small changes.
"method" is an unusual word to use for C functions.
"The main
method is required in all C programs that are intended to be executed, but it is not necessary for those that are not so intended." sounds like a joke; in fact, AFAIK the difference between a "C program" and a bunch of C code that's not a "program" is precisely the presence of main. Even if I'm wrong and the sentence is perfectly correct, I don't think that sentence will help readers of this article.
Dereferencing a pointer does not change what that pointer refers to, but produces a new value which is distinct from the pointer's value, and which refers to the data at the address stored in the pointer. But that's too long-winded for the article; I think my new phrasing is okay.
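For instance (a minimal illustration, not wording proposed for the article):
int x = 5;
int *p = &x;   /* the value of p is the address of x */
int y = *p;    /* dereferencing p yields 5, a new value; p itself (and x) is unchanged */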
The fact that void pointers exist does not "indicate" that they point to an object of unknown type; the fact that they are CALLED 'void pointers' does, but again that's too long to say and it can be finessed with a rephrase.
I fixed a few grammatical errors too.
DanielCristofani 14:10, 17 May 2005 (UTC)
[edit] "ALL" as a dangerous term
A recent editor reverted out an addition of mine, stating:
- All C compilers ignore // comments; the standard requires it. But that is not particularly relevant here anyway
(My addition was "Many modern C compilers can also ignore the // comment delimiter used by C++ code, although doing so may require a command-line qualifier.")
I'm always amused whenever anyone suggests that all somethings do something or no somethings do something; such a statement is usually wrong. And it's certainly wrong in this case: There's no doubt that the class of C compilers also includes the compilers that implemented only the Old Testament and not just the ANSI standard, and those Old Testament compilers most assuredly do not implement that "//" comment delimiter. So the "modern" part of my comment was absolutely correct. And the portion about requiring a command-line qualifier was also correct; reasonably-recent and definitely ANSI-compliant versions of the Sun Workshop compilers most-assuredly DO require such a qualifier, although GCC doesn't.
- This depends on what you mean by "C compiler". I usually take it as an implementation of the C standard, and for the sake of clarity, this seems to make the most sense in an encyclopedic article. In which case the statement is provably correct. And if Sun Workshop does not accept it by default, then it's clearly not standards compliant. --Mellum 16:20, 18 May 2005 (UTC)
And, of course, my comment was inserted directly after the explanation of "/* */"-style comments, so it was in a relevant context.
- However, the point of that paragraph was not to give a complete explanation of comment syntax, but rather to give a brief overview of C's syntax in general; therefore, I considered these obscure details too distracting. But I don't feel very strongly about that. BTW, "//" is not from C++, but from BCPL. --Mellum 16:20, 18 May 2005 (UTC)
So I'm putting it back (although correcting my language as I do so). And if someone wants to go out on the limb and change it from "Many modern C compilers" to "All modern C compilers", more power to you. But you'd better be sure you've inspected all modern C compilers including some rather obscure ones!
Atlant 23:48, 17 May 2005 (UTC)
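For concreteness, here is a file using both comment styles; with GCC, for instance, "gcc -std=c89 -pedantic-errors" rejects the // line while "gcc -std=c99" accepts it (the qualifiers for other compilers differ):
/* an ANSI C89-style comment */
// a C99/C++-style comment
int main(void)
{
    return 0;
}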
"remember, C is entirely the work of Dennis Ritchie, I am but a popularizer." (Kernighan in an interview)
Which raises the question: should we say it was "developed" by Thompson and Ritchie? What was Thompson's involvement apart from contributing the B language that C was based on? Maybe it needs a further rephrase...
DanielCristofani 23:31, 6 Jun 2005 (UTC)
Feature list - Metaprogramming
I disagree with the list of features C does not have - in that C supports Metaprogramming.
In the article on Metaprogramming, listed as an example of Metaprogramming, is the 'Quine' - "a special kind of metaprogram that has its own source as its output". There are plenty of Quines in the C language. Here's an example:
char*f="char*f=%c%s%c;main(){printf(f,34,f,34,10);}%c";main(){printf(f,34,f,34,10);}
Therefore, by at least this one example, C supports Metaprogramming. Many more are likely to be found as entries for the IOCCC.
Ben-Arba 07:08, Jun 18, 2005 (UTC)
- The concept of "metaprogramming" is rather difficult to define, as Talk: Metaprogramming shows. Under some loose definitions, even compilation is metaprogramming. I'd say that a C program, in general, cannot reason about or modify its own structure, except maybe with macros, and there only in a primitive sense. Deco 01:03, 19 Jun 2005 (UTC)
But do we have to be in agreement on whether C supports metaprogramming or not? Aside from the difficulty pointed out above, answering this question sounds rather like original research. So I think we'd be better off if we simply list the features of C, put some example like the one Ben-Arba gave in the article, and let readers decide. -- Taku 06:35, Jun 19, 2005 (UTC)
- Okay, that's fine, I'll remove it. Deco 07:46, 19 Jun 2005 (UTC)
VfD: C Programming Mistakes
Since the watchers of this page evidently have interest in C, I thought some of you might be interested in commenting on the deletion vote Wikipedia:Votes for deletion/C Programming Mistakes for the article C Programming Mistakes. Deco 21:43, 14 July 2005 (UTC)
Removed lint comment
I removed this comment from the Problems with C section:
- Rather than placing these checks in the compiler, additional tools, such as Lint, were used.
This is true for some such checks, but others are made undecidable by the lack of limitations on the language. This comment misleadingly suggests that all problems with C can be overcome by a sufficiently intelligent tertiary static checking tool, which is false, and that Lint is this tool, which is even more false. Deco 01:28, 15 July 2005 (UTC)
- I think the point of the original text is that some semantic checks were intentionally left out of the compiler itself, and were intended to be implemented in tools such as lint (I don't personally know with certainty whether or not that is true, but I've heard the same statement elsewhere). I don't think it is suggesting that "all problems with C" can be resolved via the use of a static checker (which is obviously false). Neilc 01:34, 15 July 2005 (UTC)
- I agree. The reference to Lint should stay, but it should be shown to be only a partial solution. Something like "Additional tools, such as Lint, were used to attempt to do the checks left out of the compiler"? --A D Monroe III 01:45, 15 July 2005 (UTC)
- This is already discussed at sufficient length in the final subsection. I don't feel like it's necessary to reiterate it the section introduction. Deco 02:13, 15 July 2005 (UTC)
About "questionable choices of operator precedence"
Perhaps this evolved out of a need for optimization? It is often possible to write a complex boolean expression as a binary expression, thereby removing the need for several condition checks and jumps, leading to more compact and faster code.
This is particularly useful in compiler-scheduled VLIW architectures, although VLIW may not have been the reason for this.
--User:Gauthamg123 04:49, 18 July 2005 (according to edit history)
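If that was the rationale, it presumably refers to rewrites like the following, where the bitwise form can compile without branches because C's relational operators always yield 0 or 1 (x, lo, and hi are just illustrative variables):
in_range = (x >= lo) && (x <= hi);   /* short-circuit form: the second test may be skipped, which needs a branch */
in_range = (x >= lo) & (x <= hi);    /* bitwise form: both tests evaluated, results ANDed, no jump required */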
- I haven't heard this. I have heard the C operator precedence is baroque and inconsistent, and I agree. Remember that at the time, operator precedence in general was ill-understood, as shown by other languages developed about the same time (Pascal has only two precedence levels, with logical "and" and binary "and" having the same level, different from logical "or" and binary "or"!). Perhaps there was some odd reasoning behind the C precedence, but it doesn't affect the fact that users question those choices. --A D Monroe III 17:15, 18 July 2005 (UTC)
- I read DMR talking about this somewhere; if anyone wanted to track it down, a good place to start would be his Bell Labs webpage. As I recall the story is like this:
- '&' used to be used for both logical and binary AND. And the same for '^' and '|'. After "&&" and "||" were introduced, they were sensibly given lower precedence than '&' and '|', but '&' and '|' were not moved up in precedence past the relational operators, since to do that would have broken several dozen programs where '&' and '|' were used as logical operators. (Quiet, heavy irony here.)
- In short, it wasn't lack of understanding of operator precedence; DMR knew what would be the best thing to do, but didn't do it because he underestimated the future popularity of the language by about six orders of magnitude. DanielCristofani 05:57, 3 October 2005 (UTC)
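The practical fallout of that decision is the familiar precedence trap (MASK here stands in for any bit mask):
x & MASK == 0     /* parsed as  x & (MASK == 0), almost never what was intended */
(x & MASK) == 0   /* the intended test; the parentheses are required */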
- The need for operator precedence is a symptom of an ambiguous syntax. [anon]
- Perhaps so, but the lack of need for operator precedence is a symptom of a verbose syntax. Just look at Lisp. Sometimes a little ambiguity really isn't so bad. Deco 02:27, 19 November 2005 (UTC)
Conditional operator syntax
While syntax readability is necessarily a subjective matter, I'm afraid I must disagree with singling out the ternary conditional operator ?: as a bad example. In many functional languages such as Lisp and ML, this is the only type of conditional available (usually with a somewhat different syntax), and it can often greatly reduce redundancy. The problem is that C programmers are unfamiliar with it and don't format code that uses it in a readable way. For example, here's some imperative code:
if (x > a) {
    if (y > b) {
        z = x + y;
    } else {
        z = x - y;
    }
} else {
    if (y > b) {
        z = -x + y;
    } else {
        z = -x - y;
    }
}
Now if we write this on one line with ?:, the result is fairly horrific:
z = (x > a ? (y > b ? x + y : x - y) : (y > b ? -x + y : -x - y));
If we format it a bit differently, however, it's really surprisingly similar to the original code:
z = (x > a ? (y > b ? x + y
                    : x - y )
           : (y > b ? -x + y
                    : -x - y ) );
Punctuation is still uglier than words, but anyone could read this. In short, I think the lack of readability of this operator arises more out of conventional formatting of C than a language problem. Deco 07:22, 2 August 2005 (UTC)
- I think it's excessively opaque. I would prefer the first (without curly braces) rather than the latter. ?: is fine when used judiciously; inappropriate use can make things look a mess. Dysprosia 08:05, 2 August 2005 (UTC)
- Well, besides the fact that Jim Greenlee (I presume you know him, Deco?) would have my head for inventing a mythical ?= operator and calling it the ternary operator, one could argue that the language makes it too easy to abuse. C++, for example, has in some cases intentionally uglified certain constructs (such as casts) to discourage their wanton use and abuse. Perhaps there should be a "pitfalls" section where the ternary operator can reside with goto? Ataru August 2
- It's ?:, not ?=. Something like ?= would imply it would be used in the sense that "a ?= b" is equivalent to "a = a ? b". Dysprosia 12:00, 2 August 2005 (UTC)
- Maybe - I admit that ?: is often abused, much as goto can be abused. One could say that common language use problems are something worth mentioning about the language. I would generally avoid ?: as a matter of convention, and it's also cumbersome for simulating long if-else chains. Oh, and yes, Jim Greenlee would have your head indeed. :-) Deco 16:46, 2 August 2005 (UTC)
- Where it comes in useful is in implementing something simple like an absolute value function. Instead of having to use a long if-else, one can simply say int abs(int i) { return (i > 0) ? i : -i; } or something like that. Dysprosia 22:21, 2 August 2005 (UTC)
- I wouldn't overstate its brevity; just writing "if (i > 0) return i; else return -i;" on one line isn't that much longer (formatting can clarify or make brief either one). Its primary semantic benefit is that it's an expression, and so it can be used to eliminate temporary variables and to yield return values from complex macros. In any case I would still contend that with proper formatting it's no more prone to excessive visual complexity than if-else (Lisp is worse, really). Deco 06:34, 3 August 2005 (UTC)
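For instance, the familiar function-like macro below works only because ?: is an expression; an if/else statement could not appear in that position (width and height are just illustrative variables):
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int larger = MAX(width, height);   /* expands to an expression, so it can initialize a variable */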
- Discipline, Deco! I consider it good practice not to put ifs all on one line ;) Dysprosia 09:24, 3 August 2005 (UTC)
- Yeah, I normally wouldn't, if only so I can step a debugger through it. Sometimes I'll write it on two lines with the "return" keywords lined up, but usually use at least 4. It is an unfortunate disadvantage of ?: that debuggers aren't typically able to step into portions of these expressions. Deco 19:25, 3 August 2005 (UTC)
- The "(Lisp is worse, really.)" comment is a matter of opinion. Also, since Lisp is almost universally garbage-collected,
- there is no need to worry about temporary variables.
- Sorry, I wasn't asserting that Lisp is worse than C, just that its syntax is more verbose, which I don't think any Lisper could disagree with. Deco 02:30, 19 November 2005 (UTC)
- Your code example seems to suggest another version, which is only possible using the ternary operator -
(x>a?x:-x)+(y>b?y:-y)
--68.58.69.117 18:24, 3 January 2006 (UTC) (Random832, not logged in))
Intermediate language wording
For the unnamed user editing the intermediate language section, why not go ahead and try rewording the first sentence or two in the section as well. Perhaps break the sentences up a little more and reword phrases like "which then outputs finished object or machine code". Laundrypowder 20:44, 16 August 2005 (UTC)
C99 features
The reasoning behind current Visual Studio C99 support might as well be added. As seen in this transcript http://msdn.microsoft.com/chats/transcripts/vstudio/vstudio_022703.aspx:
Q: Follow-up on the C99 "varargs" question, what, if anything, from C99 will we see in the future from VC
A: In general, we have seen little demand for many C99 features. Some features have more demand than others, and we will consider them in future releases provided they are compatible with C++. It is more likely we'll entertain C99 features if they are picked up in the next version of the C++ standard.
The phrasing on their compilers could be worded a little more informatively, as they consider new Visual C++ versions to be really only a C++ compiler. Laundrypowder 14:37, 25 August 2005 (UTC)
Hello, world
A lot of security holes result from the wrong usage of printf to output strings. Why the hell would someone believe
printf (some_string);
to be useful code, if not for all the misleading example code, which demonstrates that printf outputs strings.
An example should use functions in the way that they are meant to be used. Examples should not just happen to work for that special case.
Therefore I argue that
printf ("hello, world\n");
is to be replaced by
puts ("hello, world");
or
printf ("%s", "hello, world\n");
-- Hokanomono 15:40, 21 September 2005 (UTC)
- printf("some string") is perfectly fine C code. It's idiomatic and has been in C examples at least since the Old Testament. I have no idea how this could cause a serious security problem (mismatch between format string and arguments might cause a problem. sprintf() might cause a problem. scanf() might cause a problem. But plain printf()?). About the only reason that puts() might be preferable is efficiency, which only plays a role in very exotic circumstances. The "Hello, world" example with printf() is straight from K&R2, the standard C introduction. --Stephan Schulz 16:03, 21 September 2005 (UTC)
- I agree with Hokanomono here. Instances of printf("I ignorantly use %s here.") and printf(string_from_network) are real problems. Examples stick to a beginner's mind and they try to expand from them. I suggest puts("hello, world") if we don't go into explaining format strings here. ANSI C and POSIX.1 have puts so it's standard. --TuukkaH 17:09, 21 September 2005 (UTC)
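To make the concern concrete, a sketch of the problem being described (user_input here stands for any string the programmer does not control):
char user_input[128];
if (fgets(user_input, sizeof user_input, stdin) != NULL) {
    printf(user_input);        /* dangerous: any %s or %n in the input is treated as a format directive */
    printf("%s", user_input);  /* safe: the input is printed literally */
}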
- But the purpose of Wikipedia is not to teach bullet-proof coding practices, but to explain what C is. I claim that printf() is a much more useful feature than puts(), and, in existing code, much more used even with constant, zero-argument format strings. So it illustrates the existing language better.--Stephan Schulz 19:20, 21 September 2005 (UTC)
- Further, the purpose of Wikipedia is not to teach useful features such as printf :-) puts illustrates the language well, although perhaps not the prevailing usage. We have to agree that we're talking about an example which tries to illustrate what simple C looks like. C is not the tool of choice for string processing. If printf is kept, I hope the shortcomings are pointed out at least. --TuukkaH 21:15, 21 September 2005 (UTC)
- "printf" typifies C, "puts" does not. As noted, the example is from K&R! If K&R changes, we must match it, but not before. --A D Monroe III 01:38, 22 September 2005 (UTC)
- Only the first example is from K&R, and I agree that it should be presented in original form. However, there is a modern example following K&R's that should be the subject of our discussion. – Hokanomono 06:55, 22 September 2005 (UTC)
- Which example? The second Hello-World closely follows K&R2 (except for the added return). I think both should agree. I have no strong opinion on further examples (which probably means I'm a traditionalist and all my arguments are a rationalization ;-)--Stephan Schulz 08:34, 22 September 2005 (UTC)
- This is all much ado about nothing. True, printf("I ignorantly use %s here.") and printf(string_from_network) are real problems. But this is no reason to abandon printf entirely! Similarly, i=i++ is a problem, but this does not mean that we abandon the extremely useful ++ operator. Similarly, sqrt(-1) is a problem, but this does not mean that we ditto. You solve these problems by teaching people about them so that they can learn to use the language correctly and appropriately, not by castrating the language so that it's somehow "safe". This is, after all, C we're talking about here, not Pascal! Steve Summit (talk) 01:00, 11 December 2005 (UTC)
Notes (by 68.199.16.197)
Someone has been adding a number of "Notes" to the article. While they seem well-intended, they are somewhat unclear and at least partially incorrect. I also think they don't belong in an encyclopedia as such - any useful information should go into the main text. I tend towards just removing most of them - any opinions? Here are mine:
- malloc() and calloc(): True but irrelevant. It is easy to avoid uninitialized memory, but it still is a frequent error. And there are good reasons to prefer malloc() in many cases.
- Modern garbage collectors do not add "tremendous overhead" to either runtime or (non-trivial) program code - indeed, they are often faster than explicit memory handling (Note: I'm a fan of explicit memory handling...;-).
- Pointer casting: This is plain wrong. void* pointers can be cast implicitly (see the example below). This is not C++.
- Opaque structures: True, but it still is harder (or at least less convenient) than in OO languages with private members.
--Stephan Schulz 21:34, 27 September 2005 (UTC)
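On the pointer-casting point, the difference is roughly this (a sketch, assuming <stdlib.h> has been included):
int *p = malloc(10 * sizeof *p);          /* fine in C: void * converts implicitly to int * */
int *q = (int *)malloc(10 * sizeof *q);   /* the cast is required in C++ but merely redundant in C */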
- I agree that they should be removed. Even if correct, they probably should have been in parentheses rather than as a note. wrp103 (Bill Pringle) - Talk 14:23, 28 September 2005 (UTC)
- I also agree to these corrections. I'm sorry that they slipped by me. I do recall that I was the one who first asserted that opaque structures are difficult to implement in C. (perhaps my original wording was too strong) Deco 02:35, 19 November 2005 (UTC)
ISO copyright concerns
The C standard document on http://www.nirvani.net/docs/ansi_c.pdf may constitute unlicensed distribution of a copyrighted work. This document must typically be obtained from ISO for 340 CHF (quite a lot of money if you think about it) and under a restrictive license. It may be a freely licensed pre-standard document, too, but I have removed the link for now since it's better to err on the safe side. Aragorn2 19:19, 22 November 2005 (UTC)
- As far as I know the real C standard is not available for purchase in .pdf format. Also, you can get a copy of the C standard much more cheaply by purchasing the book The Annotated C Standard (ISBN 0078819520). Just don't read the annotations - they're pretty terrible. Deco 19:05, 8 December 2005 (UTC)
- In fact you can purchase the up-to-date ("C99") ANSI/ISO C Standard, in PDF format, for $18 from the online store at www.ansi.org.
- (The Annotated C Standard book, however, is for C90.) Steve Summit (talk) 00:53, 11 December 2005 (UTC)
Returns an integral?
"This line terminates the execution of the main function and causes it to return the integral value 0." Isn't it an integer, and not an integral that is returned? I'm not a nativ english speaker, but in norwegian an integral is a diffrent mathematical method (reversed derivation). I'm changing it so change it back if it's wrong ...
- Here, "integral" is being used as an adjective, not a noun. The adjective "integral" has several meanings, but one is "having the property of being an integer". Deco 18:58, 8 December 2005 (UTC)
- Thank you for the enlightenment, sorry for screwing up your article :)
notes on rewrite of Types section
I just made several significant changes to the Types section. Since the old text had been there for a while (and presumably accepted), I'd probably better explain my changes here, in case anyone has questions:
- Saying that a pointer is a "complex type" at compile time, or is nothing but a raw memory address at run time, are both misleading. Some C systems implement "smart pointers" which carry around e.g. bounds-checking information; these can be made completely compatible with the language definition.
- The statement "Unlike many other languages, C typically represents arrays just as it does pointers" is simply false. Arrays are not pointers and are not "represented as" pointers; the "memory address" and "associated type" of an array are perfectly analogous to the address and type of a single, ordinary variable.
- I deleted "reveal confidential data" from the list of potential consequences of array bounds violations, since if this is a possibility, it means you have serious problems at the OS level, far beyond any limitations of the C language.
- C does not formally have multidimensional arrays; I thought this was discussed to death under "confusion about arrays" above.
- No one "treats a double-precision value as an integer" in "low-level systems programming". Programmers in all sorts of applications often have to convert back and forth between integers and floating-point quantities; this is straightforward (and doesn't even require a cast). What's tricky (and more "low-level") is to interpret the bits of a value as if they had some other type, but without converting them. You can't use a simple cast for that! (Though you might use a cast as part of the more complicated pointer expression you were using as part of the low-level bit inspection).
Steve Summit (talk) 02:32, 11 December 2005 (UTC)
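To spell out the conversion-versus-reinterpretation distinction (a sketch only; the variable names are arbitrary):
double d = 3.14;
long n = (long)d;                        /* conversion: the value is rewritten, n becomes 3 */
unsigned char *p = (unsigned char *)&d;  /* reinterpretation: p lets you inspect d's bytes without converting them */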
Is malloc the only dynamic memory allocator?
As far as I know, functions such as calloc, realloc also count. So, I changed the line
blocks of memory of any desired size can be requested at run-time using the library function malloc()
to
blocks of memory of any desired size can be requested at run-time using library functions such as malloc()
- No, but it is the standard memory allocator (iirc). Dysprosia 11:09, 29 December 2005 (UTC)
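For reference, the standard allocation functions (all declared in <stdlib.h>) in use; n stands for some count known at run time, and error checking is omitted:
int *a = malloc(n * sizeof *a);       /* uninitialized block of n ints */
int *b = calloc(n, sizeof *b);        /* zero-initialized block of n ints */
a = realloc(a, 2 * n * sizeof *a);    /* resize an existing block (real code should check for NULL) */
free(a);
free(b);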
proposed new Overview
I'd like to replace the first two paragraphs of the Overview section with these three new ones:
- C is a relatively minimalist programming language. Among its design goals were that it be efficiently compilable, that is, that machine language instructions corresponding to its core language elements can be straightforwardly generated, without extensive run-time support. It is quite possible to write C code at a low level of detail analogous to assembly language; in fact C is sometimes referred to (and not necessarily pejoratively) as a "high-level assembler" or "portable assembler".
- In part due to its relatively low level and unpresumptuous feature set, C compilers can be developed comparatively easily. The language has therefore become available on a very wide range of platforms (probably more than for any other programming language in existence), and well-written C programs can therefore enjoy very wide portability.
- C was originally developed (along with the Unix operating system with which it has long been associated) by programmers and for programmers. Nevertheless, it has enjoyed very widespread popularity, finding use in contexts well beyond its initial systems-programming roots.
Any comments or objections? Steve Summit (talk) 08:55, 30 December 2005 (UTC)
- Looks good. Some tweaks for grammar etc.:
- C is a relatively minimalistic programming language. Among its design goals was that it be efficiently compilable — that is, that machine language instructions corresponding to its core language elements could be straightforwardly generated, without extensive run-time support. It is quite possible to write C code at a low level of abstraction analogous to assembly language; in fact C is sometimes referred to (and not always pejoratively) as "high-level assembler" or "portable assembler".
- In part due to its relatively low level and unpresumptuous feature set, C compilers can be developed comparatively easily. The language has therefore become available on a very wide range of platforms (probably more than any other programming language in existence), and well-written C programs can therefore enjoy very wide portability.
- C was originally developed (along with the Unix operating system with which it has long been associated) by programmers and for programmers. Nevertheless, it has enjoyed very widespread popularity, finding use in contexts far beyond its initial systems-programming roots.
- Also--what other language might possibly be more widely implemented? And...in the last paragraph you say essentially "Originally, C was made for programmers; nevertheless, it has become popular and widely used [among programmers]." This sets up a contrast that fails to manifest. The language's history does involve a striking contrast--a language made for the use of a few Bell Labs employees, then used virtually unchanged by uncounted thousands of programmers for decades afterward. But this is not conveyed in the paragraph, and I can easily imagine readers being confused. I would lose "originally" and replace "nevertheless" with something like "subsequently" or "since then". DanielCristofani 10:13, 30 December 2005 (UTC)
- Yes, your "striking contrast" is closer to what I was trying to convey. The third paragraph is arguably out of place in this overview, though, perhaps belonging under History. Basically what I was getting at is along the lines of what's already in the Introduction: "It is also commonly used in computer science education, despite not being designed for novices".
- In terms of "what other language?", I certainly don't know of one, but I also don't know of a definitive source to cite to back up the availability claim, thus the "probably". Steve Summit (talk) 21:15, 30 December 2005 (UTC)
- It's not terrible, but this glosses over several things:
- There's a two-way relationship between instruction sets and C - the popularity of C has driven hardware designers to continue to support its key features in ever more efficient ways and not to remove things that it depends on. At the time of its design the fit wasn't quite as universal.
- C is in many ways suboptimal as a "portable assembly" - in the words of the (admittedly biased) designers of C--, referring to the use of C as a translator's target language, "You might generate C, if you can live without multiple results in registers, proper tail calls, computed gotos, accurate garbage collection, and efficient exceptions." [1]
- One of my favourite sayings is: it's easy to write a C compiler, but it's extremely difficult to write a good optimizing C compiler. It's also nearly impossible to prove a C program fragment correct (in fact, one generally has to assume certain border cases won't occur). C is very difficult to analyze.
- Agree on change of nevertheless. Would like to see unpresumptuous changed also. Deco 10:44, 30 December 2005 (UTC)
- "Not terrible." You're too kind. :-) But since what's there now is terrible, I guess I won't pull out the seppuku knife just yet.
- We could debate the three "glossed over" points you mention, but since this is an Overview I think it's appropriate to gloss over them in any case. (But to clarify: when I wrote "machine language instructions corresponding to its core language elements" I wasn't actually alluding to instruction set compatibility at all, but rather the goal that C's types and operators be chosen such that each operator could typically compile to a single instruction or a handful of instructions, as opposed to elaborate run-time subroutines.) Steve Summit (talk) 21:15, 30 December 2005 (UTC)
Here's a next iteration:
- C is a relatively minimalistic programming language. Among its design goals was that it be straightforwardly compilable — that is, that just a few machine language instructions would be required for each of its core language elements, without extensive run-time support. It is quite possible to write C code at a low level of abstraction analogous to assembly language; in fact C is sometimes referred to (and not always pejoratively) as "high-level assembly" or "portable assembly".
- In part due to its relatively low level and modest feature set, C compilers can be developed comparatively easily. The language has therefore become available on a very wide range of platforms (probably more than for any other programming language in existence), and well-written C programs can therefore enjoy very wide portability.
- C was originally developed (along with the Unix operating system with which it has long been associated) by programmers and for programmers, with few users other than its own designers in mind. Nevertheless, it has enjoyed very widespread popularity, finding use in contexts far beyond its initial systems-programming roots.
Steve Summit (talk) 21:15, 30 December 2005 (UTC)
- Be bold and actually edit the article.
- Done. :-) Steve Summit (talk) 18:24, 6 January 2006 (UTC)
- Editing and iterating here before you edit is just duplication. Dysprosia 04:30, 31 December 2005 (UTC)
- Well, intro edits can be controversial. But I like the changes. Deco 04:48, 31 December 2005 (UTC)
- Well now wait one cotton picking minute! I have a serious objection to the following:
- and well-written C programs can therefore enjoy very wide portability.
- First of all, look in a dictionary for the definition of portable. This is a subversion of the usage of the word that is very C-centric. This is not an article meant to only target people from the C community who already have ideas about the language. A Java-centric person would probably say something like "you can write code in C that works in many compilers, but the result is NOT portable from machine to machine". We're not barring Java-centric people from reading and understanding this article, are we? Qed 19:07, 3 January 2006 (UTC)
- In the mainstream computer science sense (and you can google on "define: portable" or see our own porting article to confirm this), "portable" means "can be recompiled with little or no change for another platform". The JVM notion of a binary that can be run on multiple platforms without recompiling it is a whole 'nother kettle of fish. (Our stub machine-independent article is currently slanted towards that usage, though I think it could use some work, too.) But there's certainly no "subversion of the usage of the word that is very C centric" here. Steve Summit (talk) 19:29, 6 January 2006 (UTC)
- Second, whether code is portable or not is not contingent on how well it is written. That's a subjective thing anyways. In any event, badly written code can be portable (gets() is available in basically all C compilers), and well written code may have limited portability (vsnprintf isn't even available on many compilers, yet it's a very useful call that's hard to duplicate with other functions).
- I would instead prefer the following language:
- and portably written C programs can therefore enjoy wide availability.
- This makes the therefore in the sentence more meaningful (as originally proposed, it is not correct to use therefore since the claim does not follow from what was stated earlier) and the sentence both more precise and more correct. Qed 19:07, 3 January 2006 (UTC)
- I don't like the word "availability" there. It sounds like "there are many places you can get the program from" which is nonsense. What was meant is that a portably written C program can be run on more different kinds of computer than a portably written program in some language X which is such that fewer kinds of computer have X compilers than have C compilers. How about "a portably written C program can therefore be compiled for a very wide variety of computers"? DanielCristofani 03:05, 4 January 2006 (UTC)
- Hmm ... I'm not sure I understand your objection, however your proposed change also seems acceptable to me, and in fact says even more (after all wide availability can also come from just supporting x86 on Windows alone). So I will endorse your version. Qed 18:08, 4 January 2006 (UTC)
- Thank you. I just meant that "wide availability" and "widely available" usually describe something that can be acquired in many places, not necessarily something that can be used in many places. DanielCristofani 08:56, 5 January 2006 (UTC)
- I stand by the sentence as originally written, but the comments here are well-taken. I was talking about portability due not just to the wide availability of C compilers, but (perhaps even more importantly) to the portability-fostering aspects of the language itself. But I couldn't just say "C programs can enjoy very wide portability", because of course portability is not automatic, and it's just as possible to write rampantly non-portable C programs. Thus the "well-written" (where I tacitly assume that all agree that "portable" is one of the several programming virtues which travel under the rubric of "well-written"). [...]
- Well, that's one problem. Especially when talking about C code, it's not clear at all that "portable" and "well-written" are the same thing. If you made this absurd assumption you would have to conclude that every GUI application [...]
- Please, there's no need for this extreme rhetoric. My claim was not absurd, as I certainly did not state that "portable" and "well-written" were the same thing. Also, you fall into a naive trap if you slide down a slippery slope and conclude that just because certain aspects of a program (e.g. its GUI) are system-dependent, the rest of it cannot be written portably. Steve Summit (talk) 10:09, 8 January 2006 (UTC)
- ...every application that was multithreaded, and even regular expression parsers (which need to know the char representation of the system to correctly execute the "[X-Y]" range matcher), was not well-written code. For the C language, at best, "well written" implying portability represents a particular point of view. It's not borne out by the reality of real world application development. Qed 23:36, 7 January 2006 (UTC)
- Relax. The sentence in question did not say that (all) "well-written" programs are ipso facto portable, or that nonportable programs are therefore not well-written, nor indeed did it say anything about the sad reality of real world application development. Note that little word "can".
- Yes, "Portably-written C programs can enjoy very wide portability" would have been more strictly accurate, except that it's then a ridiculous near-tautology. [...]
- But that's all you have. You can say C is a language where it's easier to write portable code than in many of its predecessors (like Pascal, Fortran, etc.) because the standard forces a common syntax for a certain set of implied functionality for the language. But in a modern context, or from the point of view of someone learning languages or having real application development needs today, C is NOT, relatively speaking, an appropriate language for portability. Comparing C to Python, Java or Lua makes this point abundantly clear -- there is a level of portability assumed and delivered on by those languages that C does not provide. The main portability feature of C is that there is a common standard which implies a portable subset of functionality on all compliant compilers, but the language does not venture at all beyond this. So DanielCristofani's final revision of the comment is extremely accurate and appropriate: "a portably written C program can therefore be compiled for a very wide variety of computers". Qed 23:00, 7 January 2006 (UTC)
- Anyway, the version I just posted is revised further to address these concerns. Steve Summit (talk) 18:24, 6 January 2006 (UTC)
- Sorry, no. You put "availability" in there, but as DanielCristofani pointed out, it isn't the most precise consequence of the fact that many C compilers exist. I've changed it to a slight variation of Daniel's version (I added "standards-compliant", since that's the real prerequisite for C's asserted claim to portability).
Reversion
Stephan Schulz, why did you revert my entire edit instead of the one misguided change to the first example? --68.58.69.117 12:09, 3 January 2006 (UTC) (Random832, not logged in)
- While the rest is not necessarily wrong (being speculative), it is still misleading. There is e.g. no guarantee that there even is an OS to return a value to (consider the difference between free-standing and hosted implementations). I also think that the return value of a program under some implementations is a very specific (and implementation-defined) point that just does not belong into this general example. --Stephan Schulz 12:37, 3 January 2006 (UTC)
- OK - the point I was trying to make was just the fact that on many common implementations you get an incorrect value (13 on any Intel x86 system, and for some reason 16 on PDP-11 Unix (not exactly common, but the _first_ C implementation))
- You do not get an incorrect value. You get an undefined value (that may, or (more likely) may not, be equal to EXIT_SUCCESS, which customarily, but not necessarily, is 0). And I bet you do not get 13 on any intel x86 system, either. It depends on your compiler and run time system much more than on your processor. Anyways, all this is very peripheral to C (the language), that's why I don't think it should be discussed in detail. Are you happy with my latest version?--Stephan Schulz 19:09, 3 January 2006 (UTC)
[edit] K&R Supplements
I removed two entries from the list:
- Unique name space for each struct
- Single point float data type
Both of these features appear in K&R. They might not have been in early versions of C, but by K&R's time, they were standard features. wrp103 (Bill Pringle) - [[User talk:Wrp103|Talk]] 01:48, 9 January 2006 (UTC)
Also, I seem to remember that K&R C would actually accept =+ as an operator for backwards compatibility reasons, although it did produce a warning if you did something like a=+10, telling you to add a space to the statement. (This was the official compiler for AT&T Unix, which claimed to be K&R.) It is possible that it didn't actually support =+, but rather produced a warning message whenever it saw a construct that could have been meant as =+. What is clear, however, is that the =+ assignment operator doesn't appear in the K&R text. wrp103 (Bill Pringle) - [[User talk:Wrp103|Talk]] 20:47, 10 January 2006 (UTC)
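As I understand it, the ambiguity that motivated moving from =+ to += was along these lines:
x=-1;     /* an old compiler could read this as  x =- 1,  i.e. x = x - 1 */
x = -1;   /* with the space, the assignment of -1 is unambiguous */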
explaining some changes
Someone at 217.247.66.87 "corrected some minor grammatical errors and made the wording cleaner" at 18:55 on 3 January 2006, but introduced a few errors in the process, which I'm correcting. A few of the corrections may seem subtle, so I'll explain them here:
- the programmer can override the checks in various ways, either... or by using pointers or unions to reinterpret the underlying bits of a value in some other way.
That is, if you want to inspect (say) a float value as a series of raw bytes, one way is certainly to use a union of a float and an array of unsigned char, but another is simply to use an unsigned char pointer:
float f;
unsigned char *p = (unsigned char *)&f;
int i;
for(i = 0; i < sizeof(f); i++)
    printf("%02x\n", *p++);
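The union alternative mentioned in the same sentence would look something like this (a sketch, reusing the same f):
union { float f; unsigned char bytes[sizeof(float)]; } u;
int i;
u.f = f;
for(i = 0; i < sizeof(u.bytes); i++)
    printf("%02x\n", u.bytes[i]);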
- ...at compile time, a pointer variable's type includes the type of the data pointed to...
Yes, the pointer variable's type, not its value. (At compile time, pointers have no values.)
- Text strings are commonly manipulated using pointers into arrays of characters.
When a string is manipulated as a char *, this is a pointer to a character, not a pointer to an array. But the character is almost always within an array of some kind, thus the word "into".
- Declarations either declare or define new types...
This one's tricky to say right, and still isn't perfect. When we're declaring variables, we can distinguish between the definitions which allocate space and the declarations which merely access a variable (and its space) defined elsewhere. But there's also a third case (which our treatment, perhaps confusingly, presents first), namely a declaration which doesn't declare a variable at all, but merely introduces (or "declares", or I'd say "defines") a new type.
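Roughly, the three cases being distinguished:
extern int count;             /* declaration only: refers to a variable defined elsewhere */
int count = 0;                /* definition: allocates (and here initializes) the variable */
struct point { int x, y; };   /* introduces ("declares") a new type; no variable is created at all */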
Variadic functions / Overloading
Isn't this just compile time method overloading? I've not heard of "Variadic functions". -- Tompsci 16:16, 16 January 2006 (UTC)
- Perhaps you're more into C++? Variadic functions is how I always knew functions such as printf etc. Camillus (talk) 16:31, 16 January 2006 (UTC)
-
- I have never programmed in C++, (IMO) it just seemed an awkward name, maybe a reference could be made to polymorphism for those readers who are better acquainted with object-oriented languages (including C#)? Has the term been superseded? -- Tompsci 16:38, 16 January 2006 (UTC)
- I'm not familiar with that term, but the content of the section looks fine. I had simply heard of them as functions taking a variable number of arguments. Typically, you would use the varargs function when working with them. wrp103 (Bill Pringle) - [[User talk:Wrp103|Talk]] 16:42, 16 January 2006 (UTC)
- It's not a made-up term. It occurs in the standard, in the GCC documentation, and in other notable locations. It's also not at all the same thing as overloading - an overloaded function has multiple versions specialized to certain fixed numbers and types of parameters. A variadic function can take an arbitrary number of parameters of arbitrary type. Even in C++ there's no typesafe equivalent of variadic functions. Deco 21:20, 16 January 2006 (UTC)
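For readers comparing this with overloading, a minimal variadic function using the standard <stdarg.h> machinery might look like this (sum is just an illustrative name):
#include <stdarg.h>

/* returns the sum of 'count' int arguments */
int sum(int count, ...)
{
    va_list ap;
    int i, total = 0;

    va_start(ap, count);
    for (i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
A call such as sum(3, 10, 20, 30) yields 60; the single definition accepts any number of arguments, which is exactly what overloading does not do.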
Move "Philosophy" section up and rename?
Really accessible article. I think moving the "Philosophy" section up to be the first section would greatly improve it. The material there expands on the lead and sets up the rest of the article much better than the History. --Tsavage 08:04, 23 January 2006 (UTC)
- There's been some debate on the Java (talk) article about which section should appear first. It's mostly an argument between History and Philosophy. Generally speaking I think either way works pretty well. If there's not much resistance to the suggestion, I'd say go ahead and give it a try. --Flash 21:03, 23 January 2006 (UTC)
- OK, I'll wait a bit and will do it if there's no more comment. For me, it's not a matter of format, it's simply what's easier on the eyes (user friendly :) as it were after the summary, for the no-background reader. The Philosophy section helps ease you into the rest, whereas History gets a bit specific and harder to follow right off the top. (In the current edit of Java, IMO, same difference.) In general, it's the news-style pyramid development thing, broadening from the highlights, which I think Philosophy does better... Thanks. --Tsavage 04:33, 24 January 2006 (UTC)
- Which type of presentation is more appropriate for an encyclopedia, news-style or textbook-style? News-style favors leading off with Philosophy, or maybe even Usage (establishing how the language is significant right away, and capturing the attention of the casual reader before presenting the historical context); textbook-style favors leading off with History (establishing as much of a historical context as possible before presenting anything else).
- In news-style, you have a fleeting audience whose attention will drift quickly, and you have to grab them right away with interesting, useful info. In textbook-style, you basically have a captive audience (i.e. They've been assigned to read about the subject, and they pretty much have to whether or not they find it interesting). Is an encyclopedia somewhere in between, or is it clearly one or the other? --Flash 15:22, 27 January 2006 (UTC)
- I think philosophy should be first, but I would consider possibly intermingling them - a discussion of the people and processes behind C and their motivations. Deco 06:07, 24 January 2006 (UTC)
- It's worth noting that what's currently called the "Philosophy" section was not too long ago called "Overview", until I rewrote its lead paragraphs and (somewhat inadvertently) made them more philosophical, and then someone renamed it, and then people started rearranging it... Steve Summit (talk) 06:17, 24 January 2006 (UTC)
- This is fun! ;) Yes, I didn't mention the section heading somewhat in deference to world peace, but, indeed, in this case, at least, "Philosophy" is a little...obscure, and "Overview" or similar would IMO be a little more descriptive (and generally inviting)... --Tsavage 06:34, 24 January 2006 (UTC)
- I've been using "Philosophy" to encompass both "the ideas behind the language" and "the basic nature of the language". For most of the programming-language articles, there's enough overlap between the two to where it's impractical to have separate "Philosophy" and "Overview" sections (respectively). Plus it might confuse people a bit if both sections existed for an article. I chose the term "Philosophy" rather than "Overview" thinking that it'd be a term that could be more widely applied, that could cover more ground, but it was a somewhat-arbitrary choice. --Flash 18:17, 24 January 2006 (UTC)
- So, I just did the switch, putting Philosophy first. I don't see the pressing need to change the section title from Philosophy. That's really it for me. I was checking this article out as part of a Featured Article survey I started, and that was the main thing I noticed... As one additional comment, IMO, the Philosophy section is almost at the point of being "widely accessible" (lots of people will understand it), but I don't think it's there yet. As it is now, many readers will glaze over. Explaining in a little more plain English the what and why of C (just an adjustment to what's already in the Philosophy section) would make this article hugely better. Thanks. --Tsavage 02:43, 25 January 2006 (UTC)
- No. No. History should almost always come first. History shapes the philosophy of the language. The article should be ordered to reflect this. Dysprosia 13:42, 25 January 2006 (UTC)
- See also: related Java discussions. --Flash 15:57, 25 January 2006 (UTC)
- This being a general encyclopedia, I think that making the entire article understandable to "most readers" is the main thing. I read the discussion from Java talk mentioned above. In general, I agree with the idea that the primary article about a current language should be about what it IS more than where it came from. Of course, history is part of that story, but it isn't necessary for explaining what C is. So, the Philosophy section (which could also be called Overview, or Design, or such) should be an extremely accessible introduction to what it is, why it was developed, where it fits among other languages. I don't suppose it matters what the section is called, as long as that info is included first, so that a context is developed. Points summarized in the introduction should be more fully developed before getting into other details, for example:
- one of the most widely used programming languages
- prized for its efficiency
- the most popular programming language for writing system software
- commonly used in computer science education, despite not being designed for novices
- Here, History doesn't do that (Philosophy doesn't either, but comes closer). History, after the Space Travel legend, gets pretty murky. By the K&R C subsection, you definitely need some computer background to make sense of things. At that point, the "average reader" is likely...lost. IMHO. :) --Tsavage 02:58, 26 January 2006 (UTC)
- I think what you listed above is a good outline for the Usage section. --Flash 18:02, 26 January 2006 (UTC)
- If History is inadequate in some fashion it should be rewritten to improve flow. Arbitrarily reordering sections is not the best way of proceeding. Dysprosia 11:01, 27 January 2006 (UTC)
pointers
"Pointers are a primary source of danger."
- Instead of beginning the introduction about pointers with such a negative note, can't we say that pointers are very powerful and then mention this? I do agree that if you use pointers the chances of your program crashing are higher :), but you cannot imagine a world without pointers. I think we can be a little more positive here --vineeth 17:57, 27 January 2006 (UTC)
- The section starts out with this because it's a subsection of the "Criticism" section, intended to describe features in a critical light. I can very well imagine a world which does not give you access to raw pointer arithmetic, unsafe pointer type conversion, and conversion from integral to pointer types at every place in an application when only a tiny fraction of places need these features (besides the common case of array indexing). See pointers in the Cyclone manual for Cyclone's many types of pointers intended to address these issues. A world without references of any sort would be dismal indeed, but pointers are far more dangerous than any reasonable application requires. Deco 19:36, 27 January 2006 (UTC)
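Some concrete instances of the operations in question, all of which compile and all of which invite undefined behaviour (the variables are purely illustrative):
int a[10];
int *past = a + 20;          /* pointer arithmetic well past the end of the array */
double *d = (double *)a;     /* unsafe pointer type conversion */
char *s = (char *)0x1234;    /* an integer converted to a pointer */
*past = 0;                   /* writing through such pointers may corrupt memory or crash */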
- I think a "potentially" needs to be in that sentence as well. Yes, they are "dangerous", but they are not universally dangerous. Dysprosia 05:32, 28 January 2006 (UTC)
- Dangerous implies potentially. If it weren't potentially, it would be harmful, not dangerous. But if you want to tone it down that's fine. Deco 09:12, 28 January 2006 (UTC)
- I don't know, I would interpret dangerous as "even with proper use, things could go wrong". I'll make an edit. Dysprosia 09:56, 28 January 2006 (UTC)
image of C crashing
I don't see how the image contributes to the article. It gives the misleading impression that many C programming mistakes manifest themselves in those symptoms, which is not the case. Therefore I removed the image. Neilc 00:55, 29 January 2006 (UTC)
- C generally has a bad rep for quality, but there certainly are high-quality C apps. I agree that the image is misleading. It's hard coming up with images for an abstract programming language, but more suitable might be a picture of something famous/infamous written in C, like the logo for the Linux kernel. Deco 01:09, 29 January 2006 (UTC)
- Usually on 'nix things just go belly-up with a segmentation fault error or something (and not something as "spectacular" as depicted), but the image appears to be a program running in DOS/Windows, which doesn't afford the same kind of protections that 'nix does... Dysprosia 04:51, 29 January 2006 (UTC)
- I didn't look at the image, but I don't think the last statement is true. Windows protects against null dereferences and references to code areas in pretty much exactly the same way Linux does, via page mapping and trapping system exceptions. It only differs in its response after killing the process, which instead of printing "Segmentation fault" to the console and dumping core is to pop up a window saying that the application terminated abnormally with some debugging data. Deco 09:44, 29 January 2006 (UTC)
- Perhaps I should have qualified first by saying that I don't really know about how DOS/Windows deals with things on a deep level (sorry about that). I know Windows tends to handle things better, and definitely more so with the later versions, but I still think DOS (more so) and Windows tend to give greater latitude to errors than 'nix or the later versions of Windows do. My point still remains that an image of a mangled screen selectively chosen to demonstrate what goes wrong isn't really appropriate or common to what really happens when errors occur. Dysprosia 13:09, 29 January 2006 (UTC)
As the originator of the image, I take exception (pun intended) to its removal and the reason given. I contend that the image perfectly illustrates what I think of as a feature of C, rather than a criticism - in C, if you don't program robustly, you don't get robust programs. I've been programming in C since 1988, and taught the language at college for three years, and while I have moved on to the beautiful abstractions and environment protection of ++ and .NET, I still like to code test-routines in good old vanilla C - if I can get it to work, and work every time, in the unforgiving world of C, I know it's gonna perform at the higher level. I got the image just yesterday, when I miscast a double to a long pointer, and ended up "pointing" at screen memory - but hey, sometimes you want to "point" to screen-memory! C enthusiasts love the language's brutal, down-to-earth honesty, and part of the fascination is that programming in C is difficult - but people don't build Cutty Sarks out of matchsticks because it's easy, they don't build houses of cards without ever expecting them to fall; sometimes it's fun to bike without a crash helmet, and that's what C can be like. But with dedication and attention-to-detail, C forces you to program robustly, exposes you to spectacular falls, and you become a better programmer for it. (Even in ++, I never use automatic garbage collection, I always handle my own exceptions - I don't want the system to tidy up after me, I take responsibility for that myself, as any good craftsman does).
In my experience of teaching C, I often found that students would initially baulk at C's rigour, but many ended up loving it - C's a hard task-master, but if you program diligently, the results can be highly rewarding.
I concede that the tag-line I added to the image may have been a bit flippant, and if it gave the impression that such symptoms are common, then that is misleading - in 18 years, I've only seen such a crash twice. But I contend that the image adds an important idea to the article - in C, if you program sloppily, you get sloppy programs.
So I am going to restore the image, with a better explanation. Please respect the wishes of an experienced C programmer, and Wikipedia inclusionist. Camillus (talk) 11:37, 29 January 2006 (UTC)
- Incidentally, when I tried to preview this comment, my machine completely froze - I had to re-boot, I hadn't saved my remarks, and I had to start all over again - but hey, whose fault is that? As the image illustrates, shit happens!
- I'm afraid we discuss things to reach consensus, not "respect the wishes" of just one user. I reiterate my point that an image of a mangled screen is not really representative of what happens most of the time errors occur in modern C environments. Dysprosia 13:09, 29 January 2006 (UTC)
- I don't believe the image portrays anything particularly unique to the C language. You can achieve the same result by misusing pointers in Pascal or assembly, for example. Also, the submitted image seems to be illustrating a crash inside a virtualised DOS window running Turbo C, however modern day operating systems won't let you write directly to video memory from userland, so the image isn't a common depiction of what would happen after a C program has crashed, assuming the article needs one. See Crash (computing). ozzmosis 13:15, 29 January 2006 (UTC)
-
- Me, I quite liked the image. It reminded me immediately of a spectacular bug which (A) I perpetrated 20 years or so ago, and which (B) occurred under Unix. I was trying to add visual attributes (bold, inverse video, underline, etc.) to a full-screen, emacs-like editor I was writing, and I was trying to save memory by doing it with 4 bits per character cell, or two characters' worth of attributes per stored byte. Naturally this involved some concise, cryptic pointer manipulations, and naturally I got it wrong at first. The etiology was different from Camillus's, though: instead of scribbling on screen memory (which this editor didn't use; it was doing everything with ANSI escape sequences sent down the RS232 wire to a terminal), I had accidentally used a random pointer as the source of my array-of-nybbles screen attribute overlay. The rest of the code actually worked perfectly, displaying the proper underlying text, but with each individual character cell receiving one of 16 different random combinations of attributes. It was a marvelous piece of screen confetti, and it ended up being a nice affirmation of the fact that the rest of my attribute-rendering code was robust and properly general.
-
- No, Camillus's image is not of a particularly common, specific consequence of a pointer error, nor are these errors unique to C, but it is a plausible consequence, and it is certainly a vivid, demonstrative, accessible image, particularly for the novice or non-programmer. As Deco notes, it's hard coming up with images for programming-related concepts, and I think we ought to celebrate the fact that we've actually got a half-decent one here. Steve Summit (talk) 16:30, 29 January 2006 (UTC)
- Thanks for that, Steve - I knew an old hand would appreciate the image. It was meant to be slightly tongue-in-cheek, and appeal to the programmer and non-programmer alike. One of the things I love about C is that when you get spectacular crashes, you always go back to the code and find yourself saying - well yeah, it did exactly what I told it to do! (GIGO) - and sometimes you actually learn something quite useful! I don't do much C now, having moved on to the wonders of ++, but I was wanting to try something yesterday while I was away from my machine, so downloaded Borland's free C++ v1.01 - when I got the crash, I thought - hey - er, nice! and I thought other programmers would appreciate it. Apparently not. Anyway, I've zapped the image, as the consensus seemed to be arising that it was not representative. Oh well :( Camillus (talk) 18:28, 29 January 2006 (UTC)
- The issue is not one of "appreciation" whatsoever. C does not always crash like that, and to have an image of a very rare form of crash like that may give the impression that this is normal behaviour in relation to C -- it isn't. It's really a matter of NPOV policy. Dysprosia 20:31, 29 January 2006 (UTC)
- I'm sorry? There was an issue at hand here, and I responded to it. This is what these discussion pages are for, discussion about issues and facts that are relevant to the article. I was not intentionally provocative, I made a reference to NPOV policy for your benefit, in case you were not familiar with it, and I don't see how participating in a discussion is in some way mounting a "high horse". Dysprosia 01:02, 30 January 2006 (UTC)
- Well, for what it's worth, I thought your "issue is not one of 'appreciation'" reply was, if not provocative, certainly more strident than it needed to be. Yes, this page is for discussion of issues relevant to the article, but there can be legitimate differences of opinion on those issues. In particular, the issue here arguably is one of "appreciation" -- whether the reader would appreciate, and learn something useful from, the image. I believe that many readers would have learned something, and that with an appropriately-worded caption, there would have been little risk of misrepresentation. I'm sorry Camillus capitulated so soon in the face of a 2- or 3-critic "consensus". Finally, your definition of NPOV is very different from mine if you think it applies here. This was a valid illustration of a stray-pointer symptom -- a vivid one, albeit not an everyday one. But there's no POV involved. (The NPOV claim is particularly odd in the context of an article that has as many significant POV problems as this one does -- but that's an argument for another day.) Steve Summit (talk) 04:42, 30 January 2006 (UTC)
- I said so in response to the comment that "I knew an old hand would appreciate the image", since debates about content inclusion are rarely about appreciation of the content and more about suitability for placement in an article. If that came over as overly "strident" then I apologize for the connotation. I was merely trying to draw attention back to the suitability of the image. However, an argument can be made that, because an image of a rare and uncommonly spectacular error occurring due to a C mishap has been placed on the article, the reader may arrive at the mistaken conclusion that all C errors are as bad as this and thus form a negative opinion of the language -- clearly straying outside the bounds of NPOV. Dysprosia 10:45, 30 January 2006 (UTC)
- Ok, I've provided one below ;) Dysprosia 12:59, 30 January 2006 (UTC)
- After thinking about this I've somewhat changed my perspective. It makes a lot more sense in the context of the "Criticism" section where it was placed - if it were inserted as a lead image that would be misleading and inflammatory. If I could see it I might support inclusion, but it's gone now. Deco 23:05, 29 January 2006 (UTC)
[edit] 00:33, 3 June 2006 (UTC)
This is an archive, which contains full sections which themselves only contain (to my knowledge) comments made before 2006.
[edit] Confusing wording
>Parameters that are always passed to functions by value, never by reference
It is typical in C programming, and promoted by the C syntax, to pass pointers to functions. I agree that pointers are passed as parameters and by value; however, functions typically do end up with pointers. Therefore, a function may manipulate the value of variables, arrays and structures which have been passed to it using pointer parameters, which is the point of the issue. I suggest this as more concise and accurate:
Parameters and references are always passed to functions by value.
What??? This whole item is silly and wrong. This isn't C++, it's C. Pointers are references: I make a struct. I pass the address of the struct to a function. What about this isn't pass by reference?
- Actually, scalar arguments are passed by value, but arrays are passed by name.
- No, they aren't. They decay to pointers when passed to a function (like they do in almost any expression), which are then passed by value. --Mellum 06:09, 13 September 2005 (UTC)
- Pointers themselves (being scalars) are passed by value, but the pointer may be used to pass by name / address (e.g., &num). (Remember to sign your contributions by entering four tildes) wrp103 (Bill Pringle) - Talk 01:38, 13 September 2005 (UTC)
- Passing by name is different from passing by reference (which is similar to what happens if you pass the adress obtained with &num). Call by name reevaluates the expression every time it is needed, and is not supported by C. --Stephan Schulz 22:18, 18 September 2005 (UTC)
- The statement is well worded. It is true that parameters are always passed by value in C, and it is the only parameter-passing mechanism C supports. However, one can emulate pass by reference to some degree by passing pointers (which are also passed by value) to a data object. Denis Kasak 11:12, 23 September 2005 (UTC)
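A minimal sketch of the point made above (the function and variable names here are purely illustrative, not from the article): every argument, including a pointer, is copied into the callee, so the only way a function can affect a caller's variable is by being handed its address.

#include <stdio.h>

void set_to_five(int x)   { x = 5; }   /* changes only the local copy */
void set_via_ptr(int *px) { *px = 5; } /* changes the caller's object through its address */

int main(void)
{
    int n = 0;
    set_to_five(n);     /* n is still 0: the argument was passed by value */
    printf("%d\n", n);
    set_via_ptr(&n);    /* the pointer itself is passed by value, but it points at n */
    printf("%d\n", n);  /* prints 5 */
    return 0;
}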
Some features that C lacks that are found in other languages include: >Automatic garbage collection
Local variables defined in a function are deallocated when the function is exited, unless defined static. Thus, C implementations do automatically free memory without the programmer needing to deallocate it explicitly, unless the memory has been allocated dynamically using malloc(), in which case specific deallocation is required. I suggest:
Automatic garbage collection of dynamically allocated memory
- I believe the issue here is that in some languages (e.g., Java), you can explicitly allocate data but you don't have to (or can't) explicitly free data. In Java you can use the new keyword to create an object, and Java will free it when it goes out of scope. You can do the same thing in C++, but you have to explicitly free each object. Local variables defined within a function are automatic variables, not dynamically created variables. wrp103 (Bill Pringle) - Talk 01:38, 13 September 2005 (UTC)
- Automatic garbage collection commonly refers to automatic freeing of dynamically allocated objects. Local objects in C are not dynamic and therefore not garbage collected as such; their automatic nature is rather a consequence of the design of the language. Their scope is limited to the function in which they had been defined and they do not exist outside it. This kind of behaviour is not commonly referred to as 'garbage collection' and in my opinion nothing needs to be reworded. Denis Kasak 11:58, 23 September 2005 (UTC)
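For what it's worth, a small hedged sketch of the distinction being discussed (the function and variable names are mine, purely illustrative): automatic locals disappear on their own when the function returns, while anything obtained from malloc() stays allocated until the programmer frees it, since C has no garbage collector.

#include <stdlib.h>

void demo(void)
{
    int local = 42;                        /* automatic storage: released automatically on return */
    int *dyn = malloc(100 * sizeof *dyn);  /* dynamic storage: must be released explicitly */
    if (dyn == NULL)
        return;
    dyn[0] = local;
    free(dyn);                             /* no collector will ever do this for you in C */
}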
[edit] Reorganization!
I propose a reorganization of the series on the C programming language. I put forth the following four general article headings:
- History of the C programming language—The creation and original uses of C, its development at AT&T/Bell Labs, the various attempts at standardising
- Syntax of the C programming language—The syntax (grammar rules, not behavior) of C
- Semantics of the C programming language—The semantics (behavior and relationships of syntactic components) of C
- Standard C library—History of std library, list of headers, platform variations (BSD, GNU, VC++/Win32, Borland/Win32, etc.)
I think that the current situation is a mess and needs reform. Comments welcome, of course.—Kbolino 06:00, Apr 7, 2005 (UTC)
- Sounds good. I assume you will also have a main article that will provide an overview of the language and pointers to the subarticles w/ explanations of their content. I would suggest that we also add a section on common techniques, which would cover and/or refer to additional articles that cover other subjects (e.g., malloc, the dynamic array issue, etc.) wrp103 (Bill Pringle) - Talk 16:03, 7 Apr 2005 (UTC)
Any reorganization should probably reflect that an encyclopedia article on a programming language might have several goals to address different kinds of readers, but that being in any way a tutorial (or even a reference manual) is probably *not* one of those goals. I can think of three worthy sections, all worthy of inclusion (i.e. as subsections):
- classification (among other programming languages)
- history (including versions of standards)
- description of salient features (minimal, lotsa punctuation, much functionality deferred to library functions, etc.)
There's definitely a place for a few examples, if a reader wants to be able to recognize a C program when he sees one. But I doubt that a full syntactic or semantic definition of the entire language is necessary or appropriate. Steve Summit 05:53, 13 July 2005 (UTC)
Am I the only one that thinks this article needs a serious trim? Large parts of it should be split onto their own page; the main page should be about 20% of what it currently is. Akihabara 09:23, 22 September 2005 (UTC)
[edit] O Operators
Can somebody please expand on the O(1) requirement on operators? O(1) as a function of what? Is a C implementation on x86, for example, not compliant if the expression
' x >>= c; ' is O(c)? (Since x86 shl reg,cl is O(cl).) How about mult/div? --Oyd11
I would try to explain it if I could find it, but I can't. Does this comment refer to some obsolete text? Steve Summit
Do you think it's good to use Evolution for writing down the changes, or K and R C will suffice? Thanks, Uriyan
Actually I don't think either topic deserves a sub-page. I think a section on the main page would do. --drj
[edit] Eccentricity
- In C, the months of the year are numbered wrong. They are all low by 1. For example: January is month 0. December is month 11. This is not an obvious problem to English-speakers, but if your native language uses numbers to name the months of the year...
Numbering from 0 is not eccentricity. It's how computers think. --Taw
- That also has nothing at all to do with the C language, but with the standard libraries--that distinction should be made. --LDC
-
- Then why aren't days-of-month numbered starting from 0? 193.167.132.66 08:53, 10 Feb 2005 (UTC)
- Because you probably don't have an array that stores the days of the month in text format. It is quite common to have an array of character strings w/ "Jan", "Feb", "Mar", etc. The same thing applies to week days ("Mon", "Tue", etc.) wrp103 (Bill Pringle) - Talk 15:43, 10 Feb 2005 (UTC)
I don't even think that's an eccentricity since it's fairly common in programming to do it that way --Alan D
- Fairly common today, but it wasn't in the Good Old Days. That makes it a fad, albeit a fairly long running one. Hint: make a list of languages since Day 1 and see when numbering from 0 began. GregLindahl
Java uses the same numbering scheme. Possibly because there is a zero-based array of month names. Although numbering from zero in both C and Java is more of a convenience for the routines that perform array handling than anything else. If your array starts at location AC00, the address of the 0th element is AC00, the address of element 1 is AC00 + element_length, the address of the 2nd element is AC00 + 2 * element_length, and so on. I prefer to start at element 1, but we're all pretty much stuck with the convention. (Perhaps I'll step up and start writing about software engineering, something I actually have some expertise in.) Ed Poor
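For what it's worth, a small sketch of the convenience described above: the tm_mon field of the standard struct tm is zero-based (while tm_mday is one-based), so it serves directly as an index into a table of month names. The array name and layout here are just illustrative.

#include <stdio.h>
#include <time.h>

int main(void)
{
    static const char *name[12] = { "Jan", "Feb", "Mar", "Apr", "May", "Jun",
                                    "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" };
    time_t now = time(NULL);
    struct tm *t = localtime(&now);
    /* tm_mon runs 0..11 and indexes the array directly; tm_mday runs 1..31 */
    printf("%s %d\n", name[t->tm_mon], t->tm_mday);
    return 0;
}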
Example date (works for C and Java): May 29
Month=4, day=29
Why is the month shifted but not the day of the month? Besides, if the reason is that "the month is not a number", you deserve a slap in the face from your Korean (or Japanese or Chinese) secretary; ask her about it! In all these languages, May is literally "five month". (I think.) -- Juuitchan
Yes, but you'd translate it into English as 'month five' if you didn't want to use the name - 五月 is how you'd write it in Chinese characters.
If you want to blame someone for the system in use, I'd suggest starting with the Jesuits, who as the scientific wing of the Catholic Church spread clockwork and steel cannons across much of Asia. They also brought their date systems with them, which is why a 24-hour clock and seven-day week is pretty much universally accepted.
The difference between day and month in terms of indexing can be reduced to that of the difference between both nominal and cardinal values and ordinal values.
The system we use uses nominal values for months, and ordinal values for days and years. It should probably use ordinal values for the lot. Note, though that Korean (and I expect Japanese and Chinese) uses cardinal values, rather than ordinal values for month. O-Ueol (五月) vs' O-Beon-JJae-Ueol (五番째月), except that normally the chinese characters aren't used in the second case.
This was probably far more than you wanted to know, but the point is that given that there are trivial mappings between these forms, the particular representation format chosen isn't that big a deal.
If you want to see _real_ problems with computational notions of time, I'd refer you to 'A Long, Painful History of Time' (Erik Naggum [2]).
-- 203.231.161.129
You know, some research shows the idea of clockwork and units of 12 comes from a much older basis, even if promotion of that method was performed by others: This page examines the history of the 24 hour analog dial
-- laundrypowder
The claim that C is the dominant microcomputer applications language is now somewhat dubious, IMHO. In the Windows world, it's probably a toss-up between Microsoft's C++ and Visual Basic, I'd guess. C still rules for embedded systems (that is, the ones not written in Assembler), in the Unix world (particularly for apps that don't have a GUI), and people who can't be bothered remembering C++'s arcane semantics for multiple inheritance and operator overloading :) --Robert Merkel
- How much Visual C++ is actually plain C with a tiny bit of C++ here and there? GregLindahl
-
- Good question, but one which I don't know the answer to. The only big Windows development project I ever saw was in "real" C++, with a class hierarchy etc. etc. It was an absolute PITA to work with, though - in the end I just gave up and wrote the code I needed by monitoring a socket interface this code had with a socket monitor I ginned up with Cygwin.--Robert Merkel
- Any windows programming that uses MFC is making extensive use of C++, just by virtue of modelling windows in terms of objects. I think that's actually a good portion of Visual C++ programming.
- Yup. Visual C++ (the language) is mostly C++. Of course, the product Microsoft Studio, Visual C++, has compilers for both the language C and the language C++ (and options to turn microsoft extensions to both of those language on or off). Secretly the two compilers are the same, but accepting different input languages.
Recent stats of what percent of code in RedHat is written in which programming language clearly show that C is dominant, at least on Unices. It's very probable that C++ is much more popular in the Windows world, but I seriously doubt that many apps are written in VB. --Taw
It would be nice to have links to an online manual and online tutorials. There probably are some with under the GNU Free Documentation License. --Hirzel
comp.lang.c (a wonderful resource for C, btw -- some very competent people there) recommends Tom Torfs' tutorial at http://www.geocities.com/tom_torfs/c.html and Steve Summit's class notes at http://www.eskimo.com/~scs/cclass/cclass.html.
Honestly, most online tutorials for C are terrible and demonstrate that the author has little clue about the actual C standard. --mgmei
Moved from article:
- C is a high level language, meaning that the source code of a program can be written without detailed knowledge of the computer's CPU type. Before the program can be used, the source code must be translated into the required machine language by a compiler. In contrast, programs written in an assembly language can only be run on one type of CPU.
- (The above definition of high level language is not meaningful. Consider the evolution of the x86 instruction set toward virtual machinehood. Does this mean that x86 assembly is a high level language?)
- If we insert "of abstraction" between "level" and "language", the phrase becomes less ambiguous, but also somewhat awkward. Regardless, can a language which saddles the programmer with responsibility for memory management really be considered to have a high level of abstraction? C is clearly more abstract than most assembly languages, but that hardly makes it high level.
-
I'm not sure what 203.231.161.129 means about "virtual machinehood", but I can see that under this definition, x86 assembly language could be considered a high level language due to the existence of emulators. -- Tim Starling 07:54 26 Jun 2003 (UTC)
Modern x86 implementations are more and more moving toward RISC cores running virtual machines (in microcode or whatever) which provide x86-compatible instruction sets. Hyperthreading, for example, is a case of attempting to exploit such an underlying architecture without affecting the definition of the x86 machine (i.e., mapping many registers to the x86's few, so that you can increase parallelism).
In this regard the x86 architecture is shifting toward defining a virtual machine, much like JVM, rather than specifying a hardware cpu. Not that there is a meaningful distinction in any case.
Likewise, we can see C programs running in a virtual machine defined by the C standard, and supported by the runtime structure of the binary produced.
High and low level are fundamentally ideological terms, and have almost no objective meaning, nor objective definition whatsoever. So far the only meaningful definition of level that I've found has been in terms of 'the ability to express invariant structure', with more being higher level. Note that by this definition, Python ends up being only slightly higher than assembly, since it has almost no ability to define invariant structure.
I wish people would stop using these terms, anyhow, as they are very silly.
-- 203.231.161.129
[edit] B origin from Wombat encyclopedia
The page says the language B got its name from BCPL. But Wombat's encyclopedia says this is wrong. It says "B had nothing to do with BCPL. B was in fact a revision of an earlier language, bon, named after Ken Thompson's wife, Bonnie."
Jay 22:44 5 Jul 2003 (UTC)
Maybe it does but, at best, Wombat is a tertiary source, quoting a secondary source, Foldoc, and we have to ask ourselves "How do they know that ? What is their primary source for that information ?" Our source for the statement that "B is based on BCPL" is the primary source, User Reference for B, written by Ken Thompson, in which he states that:
-
- B is a computer language directly descendant from BCPL. B is running at Murray Hill on the DEC PDP-11 computer under the UNIX-11 time sharing system. B is good for recursive, non-numeric, machine independent applications, such as system and language work.
This is a pretty unequivocal statement of B's ancestry from Ken Thompson himself. Having said that, I have no doubt that Ken reused code from his work on the bon compiler in writing the B compiler -- that's the nature of programming -- but according to Ken himself, the language design was based on BCPL, not bon, so it doesn't really matter what others say. -- Derek Ross | Talk 19:01, 2004 Jul 14 (UTC)
- We're dealing with name-origins here, not the development of programming languages themselves, which you would realize is a different issue. You can have a look at the link in the below discussion #B from Bon or BCPL ?, for a primary source. Jay 19:16, 14 Jul 2004 (UTC)
And indeed I did as you will see if you read my comments there. However it turns out that the document concerned is a description of bon. It has nothing to say about B. The best information that we have about the link between the names is that which comes from Dennis Ritchie, and he favours the B from BCPL explanation.
I would also like to point out that the Wombat article which you quote does not appear to be talking about name-origins of B, but rather about the development of the programming languages themselves, which is, of course, a different issue. It states that the B language is a revision not of BCPL, but of bon (which I have shown to be contradicted by Ken Thompson's statements) and that the origin of the name bon is Ken's wife's name (which may be true) but it does not say anything about the origin of the name, B.-- Derek Ross | Talk 22:07, 2004 Jul 14 (UTC)
- On looking more closely, the page talks about both name origins and prog. language development, and it claims B is from bon both ways, which as you say is contradictory. The language suggests that it is intended to debunk the "B is from BCPL" myth. The page also gives a reference to back up its claim ["The Programming Language B", S.C. Johnson & B.W. Kernighan, CS TR 8, Bell Labs (Jan 1973)]. So I guess until someone reads up the book and doesn't find the statements as mentioned, what's written on that page will get the benefit of the doubt. Jay 19:06, 16 Jul 2004 (UTC)
Dennis Ritchie has made "The Programming Language B", available at [3]. It is in two sections, both of which I have read. Both mention the connection between B and BCPL. Neither mentions any connection between B and bon. In fact neither mentions bon at all, so I guess that we can discount what Wombat says. -- Derek Ross | Talk 20:56, 2004 Jul 16 (UTC)
[edit] Spelling of behaviour
I see that the spelling "behaviour" in place of "behavior" is used in the C programming language article. My copy of the C Standard uses "behavior". Should this Wikipedia article use spelling consistent with the C language Standard? Or, maybe there is more than one English language version of the C Standard? LarryW 01:57, 3 Aug 2003 (UTC)
- I think we should use the spelling of the ISO C standard for technical terms such as undefined behavior and unspecified behavior. I'm pretty sure that there's only one official English-language version of the ISO standard (at least for C99). --Zundark 10:31, 3 Aug 2003 (UTC)
-
- OK -- LarryW 23:16, 3 Aug 2003 (UTC)
[edit] B from Bon or BCPL ?
There were no responses to my comment of 5th July suggesting B was a revision of bon and not BCPL as claimed. Hence I've made the change in the main page.
Jay 11:42, 10 Aug 2003 (UTC)
- Jay, see http://cm.bell-labs.com/cm/cs/who/dmr/chist.html. I think the language there implies that C descended from BCPL, not bon. (At the end of the History: the setting section). -- Kushal kumaran
-
- The line is not conclusive enough. Ritchie says "most probably", and also does not claim that the Bon theory is wrong. Moreover, the creator of "B" is Ken Thompson and the Bon theory is taken from a paper/book (Thompson 69) written by him (this is mentioned in the same line). Jay 18:35, Aug 20, 2003 (UTC)
- Firstly, Ken Thompson states in the User Reference to B that B was designed and implemented by D. M. Ritchie and the author, so I think that we have to give quite a lot of credit to what Dennis Ritchie believes, and if he says most probably, then as the acknowledged co-creator of B, he is in a much better position to know the truth than anyone other than Ken Thompson.
-
- You're right, I didn't know about the contribution of Dennis Ritchie to B. The first paragraph of B programming language doesn't have a mention of Ritchie. Jay 19:06, 16 Jul 2004 (UTC)
- Secondly, the [Thompson 69] reference is to Bon — an Interactive Language, which is an undated AT&T Bell Laboratories internal memorandum from around 1969 which describes the Bon language. B did not exist when it was written and thus it has nothing to say about the relationship between the two languages, although it may explain why Bon is so named.
-
- How can you say B didn't exist when the memorandum was written (in 1969)? The same page (C history paper) says in the Acknowledgments section, "Ken Thompson created the B language in 1969-70". Moreover, the author of the paper (Dennis M. Ritchie) has listed the references he used in writing the paper, and when he has mentioned Thompson 69 as a reference to the alternate theory, it does mean he has read the memorandum to get the info. Considering that Ken Thompson initially developed B and Ritchie pitched in later, it's safer to assume that Thompson's words carry more weight. Jay 19:06, 16 Jul 2004 (UTC)
-
- The memorandum was written "ca. 1969" which could easily mean 1968 or 1970. Since it is a technical description of a MULTICS language, I think that it's highly likely that it was written between the time that Thompson finished work on bon and the end of Bell Labs involvement with MULTICS. Bell pulled out in April 1969, so that would place the memorandum sometime before May 1969. Work on Unix started in the summer after Bell Labs pulled out of Multics. At first Unix was written in assembler. Work on B started after Unix was substantially complete which puts it towards the end of 1969 or the beginning of 1970 (In fact according to Bell Labs pages on Unix History at [4], work on B didn't start until 1971 which can't be correct). So the memorandum on the bon programming language was probably written between 3 and 18 months before work on B started. -- Derek Ross | Talk 22:09, 2004 Jul 16 (UTC)
- In any case the conclusive line implying that C descended from BCPL can be found in the User Reference to B as I stated earlier. -- Derek Ross | Talk 19:20, 2004 Jul 14 (UTC)
-
- Did you mean B ? Jay 19:06, 16 Jul 2004 (UTC)
- I meant C. I was referring to Kushal kumaran's response above. In any case saying that the statement is true for C, implies that it's true for B. -- Derek Ross | Talk 22:09, 2004 Jul 16 (UTC)
[edit] Other libraries
It would be nice to have a paragraph about other important libraries besides the standard C library.
- Which other libraries?
Maybe glib, for instance. -- Taku 01:38, Nov 16, 2003 (UTC)
- By 1973, the C language had become powerful enough that most of the kernel of the Unix operating system was reimplemented in C, perhaps following the examples of the Multics system (implemented in PL/I), Tripos (implemented in BCPL), and perhaps others.
But according to the TRIPOS page, TRIPOS was developed in 1978, five years after Unix was re-implemented in C. Which is right? -- Khym Chanur
I would like to split off anatomy of C section to its own aritcle and merge ANSI C and ANSI C standard library with this article. It is more relevant to discuss history or standards than syntax details in this article, I think. Any thought or objection is welcomed. -- Taku 01:38, Nov 16, 2003 (UTC)
" During the late '70s, C began to replace BASIC as the standard microcomputer language," : BASIC was never a "Standard" programming language in the '70s. It was a teaching language. Only with the rise of MS's Visual basic did it become a real-world language, and it's still very limited in its application (it's mostly useful for GUIs). orthogonal 04:01, 26 Nov 2003 (UTC)
- I think it depends on how you interpret the word "standard". If you mean "defined by formal written standards", then I'd agree that BASIC is not, and has never been, a standard language. But I don't think that's what's meant here. During the late 1970s, microcomputers were typically programmed in either assembly or BASIC - consider as examples the MITS Altair 8800 & 680, Commodore Pet, TRS-80, Apple 2, VIC-20 and Commodore 64. I'm rather suspicious of the IBM(1981) reference, though: I was using IBM PC's in 1989 and there wasn't a hint of C on them. -- Ortonmc 04:15, 26 Nov 2003 (UTC)
-
- For amateur end users, perhaps; but I was taking it to mean that most applications were written in BASIC, which I find dubious. orthogonal 04:26, 26 Nov 2003 (UTC)
- It wasn't only a teaching language. At least one OS, DEC's RSTS/E, was implemented nearly entirely in BASIC (DEC's BASIC-PLUS in this case). - Lady Lysine Ikinsile 08:45, Jun 9, 2004 (UTC)
I have removed the "standard" comment, since it is ambiguous. As well as the 1981 comment, since this text is not overly clear on this. Please direct such specific comments to my talkpage, to be sure I noticed them. Lirath Q. Pynnor
'C++ (thus avoiding the issue of whether the successor to "B" and "C" should be "D" or "P".)' Why would it be "P"? -- Ortonmc 04:15, 26 Nov 2003 (UTC)
Removed the image. To my understanding of the ANSI C standard, void main(void); is not an acceptable prototype for main. Dysprosia 10:39, 27 Apr 2004 (UTC)
- This is fixed now, but some question remains about whether an image is a useful thing to have. It's displayed on the right. Dysprosia suggests, quite reasonably, that a text box might be more appropriate (see User talk:aarchiba); I think the image is useful to have for possible use on the front page and for its syntax highlighting and smaller size (also the text box does not format correctly on my machine).
#include <stdio.h>

int main(void)
{
    printf("Hello World!");
    return 0;
}
A C program that prints "Hello world!"
- The text box is as shown here. --Andrew 11:15, Apr 27, 2004 (UTC)
One could probably use the new/nifty table image syntax to achieve a better result, but I'll muck around with trying that later. Dysprosia 12:33, 27 Apr 2004 (UTC)
I think the syntax highlighting is a mistake. While syntax highlighting is useful to programmers, anyone reading the article to learn about C won't know enough to benefit from the syntax highlighting, and worse, might mistakenly conclude that the syntax highlighting is essential to the code, or that one set of highlighting rules is standardized across the C Programming Language. (Furthermore, I think anyone learning the language should avoid highlighting until they fully understand the various entities being highlighted.) orthogonal 09:18, 15 May 2004 (UTC)
- I agree, I think the syntax highlighting is distracting and possibly confusing for those unfamiliar with it. -- anonymous, 17:25, 15 May 2004 (BST)
- Thirded. I am a huge fan of syntax highlighting, but I don't think it serves to help anyone understand C; one might potentially think "oh, so if I write a phrase in pink, it gets printed on the screen." -- Wapcaplet 16:24, 15 May 2004 (UTC)
- I removed the syntax highlighting and the references to it in the explanatory text. I also removed the 'hello, world' image; seems silly to have a picture of something that can be found on the page in text. -- Merphant 22:02, 15 May 2004 (UTC)
[edit] Handwritten assembly is always better...
It should be noted, however, that for complex modern processors the machine code generated by C compilers is usually faster than hand-written assembly.
As Robert Dewar has pointed out, that is not a meaningful statement. The human can not approach the speed with which a computer compiles code. But if there is no time limit put on the human, eventually she will produce code that is optimal. In fact, a common first step in producing hand-tuned assembly is compiling the code with a good compiler, which makes it hard to be slower than the code generated by that compiler.
Any real comparison is going to involve a particular set of compilers, certain time limits and certain programmers (some of the published comparisons involve undergrads, which is certainly different from results using RMS and Mel), and is not going to be useful much beyond that. --Prosfilaes 01:40, 31 May 2004 (UTC)
Well, better is a tricky term. I think you have actually advanced an argument that a human, under the right circumstances, can make (a little bit of) a program run faster than the compiler can. This is surely true. One old joke was that you could always take out 10% (it just kept getting harder to do). On the other hand, there is a famous cautionary tale, I can not recall the source at the moment, about an "optimization" that did not get called for years. There is another school of thought which says that people can better spend their (valuable) time thinking about the choice of algorithm and overall design, and that the same effort spent in this way usually results in much larger payoffs in terms of speed.AJim 05:56, 31 May 2004 (UTC)
I think they meant faster as in faster speed in execution, not writing. Dysprosia 05:59, 31 May 2004 (UTC)
I also think they meant speed of execution. What I meant to say was that an hour invested in working on the overall design was likely to yield more of a speedup in execution than an hour invested in trying to write better machine code than the compiler. It is well known that given the same problem to solve, a group of programmers will sometimes produce programs with an enormous range of execution times. We are talking orders of magnitude here, not a few percent. Experienced programmers usually do much better than novices in terms of speed. What distinguishes the faster programs is not faster low-level optimizations, but faster algorithms. AJim 16:11, 1 Jun 2004 (UTC)
[edit] confusion about arrays -- should we describe a rarely-used "feature", or a common work-around ?
Think I disagree with this sentence:
- The arrays can appear to have more than one dimension; dimensions are laid out in row-major order (strictly speaking, they are arrays of arrays).
I believe it could be
- Two-dimensional arrays can be implemented, for example, as arrays of pointers to arrays of pointers, with higher-dimensioned arrays defined analogously: arrays of pointers to arrays of pointers to arrays of pointers, etc.
-- anonymous
-
- This statement used to make sense. Some ill-informed contributor made it confusing. Static multidimensional arrays are in fact, single-dimensional in reality, and laid out in row-major order, but they can be implemented as arrays of arrays. I'll try to fix it. Derrick Coetzee 20:59, 10 Jul 2004 (UTC)
Summary
On one hand, I want to say your proposed replacement is incorrect -- the original correctly describes 2 dimensional static arrays (and higher dimensions), while your replacement does not.
On the other hand, static arrays of more than 1 dimension are very rarely used in real programs. Nearly everyone builds higher-dimensional arrays out of pointers and (1D) arrays, as dynamic arrays, in exactly the way your proposed replacement describes. (A few people use "sparse arrays", which are dynamically built out of pointers and 1D arrays in a completely different way).
Details
The data structure that C creates when the source code says
double NewYork [10][10] = {0.0};
(a 2 dimensional static array) is simply a block of 100 double variables packed one right after the next; there is no array of pointers.
The handling of 2 dimensional static arrays in C is widely believed to be seriously flawed; see the C FAQ:
- http://www.eskimo.com/~scs/C-faq/q6.15.html
- http://www.eskimo.com/~scs/C-faq/q6.19.html
- http://www.eskimo.com/~scs/C-faq/q6.21.html )
"Rather than worrying about all of this, one approach to using multidimensional arrays of various sizes is to make them all dynamic" -- http://www.eskimo.com/~scs/C-faq/q6.20.html
So when you want what mathematicians would call a 2D array of real numbers, most people end up using a (1D) array of pointers to (1D) arrays of double. We end up using an "array of pointers to arrays of pointers to arrays of double" for 3D arrays. That's all described in http://www.eskimo.com/~scs/C-faq/q6.16.html .
Because it's a bit of a hassle to do this for every array, many people wrap this all up in a matrix-manipulation library -- and unfortunately, there are tons of incompatible libraries for manipulating matrices in C.
--DavidCary 21:47, 10 Jul 2004 (UTC)
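A hedged sketch of the "array of pointers to arrays of double" technique described above; the function name is mine and error handling is kept deliberately minimal.

#include <stdlib.h>

/* Allocate a rows x cols "2D array" as an array of row pointers. */
double **make_matrix(size_t rows, size_t cols)
{
    size_t i;
    double **m = malloc(rows * sizeof *m);
    if (m == NULL)
        return NULL;
    for (i = 0; i < rows; i++) {
        m[i] = malloc(cols * sizeof *m[i]);
        if (m[i] == NULL) {          /* clean up everything allocated so far */
            while (i > 0)
                free(m[--i]);
            free(m);
            return NULL;
        }
    }
    return m;                        /* use as m[i][j]; free each row, then m itself */
}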
- You make a strong case; however, the issue is not that I consider static arrays more important, but that this is the Types section being written in, and dynamic arrays are not a type supported at the language level. It certainly deserves mention, and perhaps reference from this section, but not in this section, which I consider to be reserved for language-supported type constructs. Deco 18:31, 11 Jul 2004 (UTC)
I'm the person who inserted the comment about multi-dimensional arrays being arrays of arrays. According to K&R section 5.7 Multi-Dimensional Arrays (I have the original here at work, so the updated version for ANSI C might have changed), when explaining the array day_tab, they state:
a two-dimensional array is really a one-dimensional array, each of whose elements is an array. Hence subscripts are written as
day_tab[i][j]
rather than
day_tab[i, j]
- I might be misunderstanding you, but here's my response. Many dynamic arrays literally store arrays of array pointers, and static arrays have the same semantics, but static arrays are not stored this way in a literal sense. They're laid out as one-dimensional arrays in row-major order, and pointers to subarrays are computed, rather than retrieved. That's why this code example produces "1234":
#include <stdio.h>
int main()
{
    int a[2][2] = {{1,2},{3,4}};
    int *b = &a[0][0];
    int i;
    for (i = 0; i < 4; i++)
        printf("%d", b[i]);
    return 0;
}
- Deco 02:39, 13 Jul 2004 (UTC)
-
- Actually, I wasn't talking about dynamically allocated multi-dimensional arrays. I was talking about the fact that in C, there aren't really multi-dimensional arrays, but rather arrays of arrays, which produce the same effect.
-
- I do a lot of dynamic arrays (and in fact created an ADT that is available from my web site), but only single dimension arrays. Generally, I create a dynamic array of pointers to structures, so I haven't ever bothered to try multi-dimensional arrays. I have always felt that multi-dimensional arrays were mostly useful for matrix algebra and other math applications, not general data structures.
-
- You are correct that C will allocate a single block of memory, and map the coordinates to the proper memory location. That's why I didn't understand what was meant by metadata.
-
- Wrp103 18:12, 13 Jul 2004 (UTC)
As for allocating multi-dimensional arrays, I had never heard of anyone doing it the way you describe, although it is no doubt possible. I'm not sure what you meant by metadata, however, since an array is simply a block of memory of the necessary size to store the data. In the case of a multi-dimensional array, each element would be a pointer to the second array.
BTW - I almost never use multi-dimensional arrays, and encourage my students to use arrays of structures instead. IMHO, they are a carry-over from when we didn't have data structures, and would use multi-dimensional arrays where the rows were for a given entity, and the columns were the fields of that entity.
- It's true that multidimensional arrays were used for this purpose, but this was never the intended purpose nor even a good idea. Static multidimensional arrays were more useful in numerical computational of the sort normally done with Fortran. Derrick Coetzee 02:39, 13 Jul 2004 (UTC)
BTW2 - I admit I'm an old fogey, but I really object to adding the comment lines for the hello, world application. Actually, I don't care for the return 0 being added for that matter. The beauty of that programming example was in its simplicity. The fact that the program is well explained immediately below doesn't (IMHO) justify the addition.
- return 0 is necessary. main returns an int, and leaving it off is sloppy. Dysprosia 23:43, 12 Jul 2004 (UTC)
- Actually, in C99 this is not true anymore. To quote the standard:
- 5.1.2.2.3 Program termination
- 1 If the return type of the main function is a type compatible with int, a return from the
- initial call to the main function is equivalent to calling the exit function with the value
- returned by the main function as its argument;10) reaching the } that terminates the
- main function returns a value of 0. If the return type is not compatible with int, the
- termination status returned to the host environment is unspecified.
- Forward references: definition of terms (7.1.1), the exit function (7.20.4.3).
- —Lady Lysiŋe Ikiŋsile | Talk 07:54, 2004 Jul 13 (UTC)
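A tiny hedged illustration of what the quoted C99 wording permits (not an endorsement of either style in the debate above): if main is declared to return int, simply reaching the closing brace behaves as if return 0 had been executed.

#include <stdio.h>

int main(void)            /* C99: falling off the end of main acts as return 0 */
{
    printf("hello, world\n");
}                         /* no explicit return statement needed under C99 rules */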
- My comment on the return 0 was because it wasn't included in the original "hello, world" program.
- Wrp103 18:12, 13 Jul 2004 (UTC)
- I agree that the comments are misplaced; explanation belongs in the text. The return 0 is important, not so much because of what's allowed, but because it helps demonstrate the procedural paradigm in a very simple setting. Deco 02:39, 13 Jul 2004 (UTC)
I think part of the confusion that arose from my original edits was because I made two unrelated changes that were perceived as related.
I commented on the nature of multi-dimensional arrays in C, and I added an example of how to create single-dimension dynamic arrays in C. I think somebody assumed I thought they were related, which is what started this whole thread. I almost never use multi-dimensional arrays, and never even considered trying dynamic multi-dimensional arrays. However, I have noticed that many of the C text books I have used for various classes claim that you can't have dynamic arrays in C, which of course is wrong. So, when I'm teaching C, I make it a point to mention those two things.
Sorry for any confusion. I'm still getting used to this system. Wrp103 18:27, 13 Jul 2004 (UTC)
Wrp103 16:17, 5 Aug 2004 (UTC)
I noticed that somebody deleted my example of a constructor, and replaced calls to malloc followed by clearing memory to calloc. The purpose of the constructor is to initialize the data structure to default/valid values; my example of zeroing the structure was for simplicity. BTW - NULL isn't always all zeros (I recall working on a mainframe where this messed up a lot of programs that used things like "if (ptr)", assuming NULL would return False.)
- That was me. Yes, I'm aware that (although the constant NULL is always 0 in source code) the run-time representation of a null pointer might not be all-zero bits. However, I think I only removed the code which called memset() with '\0' as the argument, which is not compatible with systems where a null pointer isn't all-zero bits anyway, and is basically identical to what calloc does. The code which explicitly assigns ... = NULL was left.
(Christopher Thompson added the next bit) The following makes no sense: In C, sizeof('a') == sizeof(int); In C, sizeof('a') == sizeof(char) (For the record this has been corrected now. Cyoung 04:12, 23 Sep 2004 (UTC))
- Wrp103, the statement you make above: BTW - NULL isn't always all zeros (I recall working on a mainframe where this messed up a lot of programs that used things like "if (ptr)", assuming NULL would return False.)
- While it is true that NULL isn't always all zeros, "if (ptr)" will ALWAYS return false if ptr is NULL, since the compiler will recognise that ptr is a pointer, and compare it against whatever bit pattern is used to represent NULL on the architecture. This means that if (!p) and if (p == NULL) and if (p == 0) are all equivalent and correct, regardless of what bit pattern is used to represent NULL. See Steve Summit's comp.lang.c FAQ Q5.3 Cyoung 04:12, 23 Sep 2004 (UTC)
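A minimal illustration of the equivalence described above: the constant 0 (and the NULL macro) in a pointer context is converted by the compiler to the machine's null-pointer representation, whatever its actual bit pattern, so all three tests below agree.

#include <stdio.h>

int main(void)
{
    char *p = NULL;
    /* All three tests are equivalent and correct, even on machines
       where a null pointer is not represented by all-zero bits. */
    if (!p)        printf("!p is true\n");
    if (p == NULL) printf("p == NULL is true\n");
    if (p == 0)    printf("p == 0 is true\n");
    return 0;
}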
[edit] bottom line on "confusion about arrays"
Formally, C has only statically-allocated, fixed-size, one-dimensional arrays. (I ignore for the moment the variable-length arrays of C99.) But you can build, using "off the shelf parts", any of:
- a statically-allocated, fixed-sized, multidimensional array (using arrays of arrays)
- a dynamically-allocated, variable-sized, one-dimensional array (using malloc and pointers)
- a dynamically-allocated, variable-sized, multidimensional array (using malloc and pointers to pointers)
There are also variations on the last theme; for example dynamically-allocated simulated multidimensional arrays can be "ragged" (each row can have a different number of allocated columns).
All of these techniques are reasonably straightforward and very widely used.
Steve Summit (talk) 02:49, 11 December 2005 (UTC)
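A compact hedged sketch of the three constructions listed above (identifier names are mine; error handling is minimal):

#include <stdlib.h>

void examples(void)
{
    /* 1. fixed-size multidimensional array: an array of arrays, laid out contiguously */
    double grid[10][20];

    /* 2. dynamically-allocated, variable-sized one-dimensional array */
    double *vec = malloc(100 * sizeof *vec);

    /* 3. dynamically-allocated "multidimensional" array: a pointer to row pointers
       (rows could even have different lengths, giving a "ragged" array) */
    double **mat = malloc(10 * sizeof *mat);
    size_t i;

    for (i = 0; mat != NULL && i < 10; i++)
        mat[i] = malloc(20 * sizeof *mat[i]);

    grid[0][0] = 0.0;
    if (vec != NULL)
        vec[0] = 0.0;
    if (mat != NULL && mat[0] != NULL)
        mat[0][0] = 0.0;

    /* dynamic storage must be released explicitly: each row, then the row-pointer array */
    if (mat != NULL) {
        for (i = 0; i < 10; i++)
            free(mat[i]);
        free(mat);
    }
    free(vec);
}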
- This is a nicely-worded summary and is in line with the comp.lang.c FAQ and standard. Well put. Deco 03:48, 11 December 2005 (UTC)
-
- Not surprising. Did you notice who wrote it? -- 66.96.28.244 05:15, 12 December 2005 (UTC)
[edit] Array of structs?
In all honesty I find the example of converting a multidimensional array to an array of structs not only unnecessarily long and drawn out, but also misguided. Multidimensional arrays should never be used as arrays of records; this may or may not be obvious, but in any case, multidimensional arrays have perfectly legitimate uses, such as representing matrices, that have nothing to do with structs, and it was for these purposes that they were designed.
Much of the material in the example perhaps should be in the article somewhere, or some other article, but its current placement isn't useful. In particular, constructors and destructors as used in C are a practice, and are not relevant to the C language per se. The same is true of the previous content I added related to creating dynamic arrays, which I admit now is misplaced. There are dozens of mentionable practices in C, including placing constants on the left side of equality comparisons, Duff's device, arrays that extend off the end of records, setting pointers to deallocated objects to NULL, parenthesizing macro arguments, and so on.
In short, if we are really to tabulate C practices, this should either be done in a separate section, or eventually, a separate article. Let storage talk about the basics of storage as built in to the language itself. Keep common practice cleanly separated from core language features. I am prepared to make this change. Is there any opposition? Deco 04:08, 31 Oct 2004 (UTC)
- I went ahead and created the section, entitled Common Practices, and I moved my bit on allocating dynamic arrays there. If there is no objection, I will remove the code on converting multidimensional arrays to arrays of structures, and insert a section on constructors and destructors into the new section. Deco 05:04, 31 Oct 2004 (UTC)
-
- With no objections thus far, this change has been performed. If objections later arise, feel free to edit or revert and discuss. Deco 01:37, 1 Nov 2004 (UTC)
- The contents of Common Practices section are fine, but I think that perhaps they better belong in someplace like wikibook's Programming:C tutorial, rather than in the main encyclopedia article. The section is no longer describing the language, but various people's preferred use of the language... - Key45 19:15, 17 Nov 2004 (UTC)
-
- I agree. I introduced this section mainly as a compromise (in order to get that content extracted from other sections without upsetting anyone), and don't really think it belongs in an article about the language. Deco 19:19, 17 Nov 2004 (UTC)
- This section now lives in its new home at Wikibooks' C common practices. I invite those interested in this section to contribute to it at its new location. Deco 10:14, 16 Jan 2005 (UTC)
-
[edit] C90
C90 redirects here. But "C90" isn't anywhere on this article. Brianjd 09:28, 2004 Nov 29 (UTC)
- C90 is an informal term for the C language as described by the 1990 international ISO standard. C89, similarly, describes the ANSI 1989 standard, which is almost identical. They're not mentioned here mainly because they are informal and not totally universal, although the standards are described. Deco 22:11, 15 Dec 2004 (UTC)
[edit] large programs
This is no accident; C was created with one important goal in mind: to make it easier to write large programs with ...
If we mean its original creation circa 1972, this is certainly not the case. C was at that stage for a machine with a maximum program size of 12K; there was no way you could write a large program in C. Rather, C was created so they would have something slightly nicer and more portable than assembler to use on a machine that was too small to fit almost any other language then in existence. Some support for large programs emerged later, but really large programs have never been C's strength. It shines for small to medium sized, fast, close-to-the-metal stuff.
- It's a matter of what you mean by "large." C was an important early structured programming language, and structured programming enables the creation of more modular, organized, and therefore larger programs. Its type system also facilitates maintenance of larger programs that are difficult in assembly. Even a program of a few K in code bytes is quite an undertaking in assembly, while something that could be written as a small assembly program really doesn't require a higher-level language like C (assuming porting it isn't a big effort). Maybe this isn't reflected in the text, but I'm not sure how to word it. Deco 22:06, 15 Dec 2004 (UTC)
At any rate, I wouldn't say that C was created with "one" goal in mind, and certainly not that "writing large programs" was the primary goal. C, like Unix, was created by programmers for programmers, as a pleasant tool/environment for writing the kinds of programs they wanted to write, using the philosophy they were then exploring and inventing. The most important aspects of that philosophy (again applying to both C and Unix) were probably: minimalism, and freedom from arbitrary limitations or other strictures. Steve Summit 05:24, 13 July 2005 (UTC)
[edit] far too many external links
There are far too many external links, some are for rather unimportant programs, almost seems like a bit of advertising or fanfare has slipped in here. I'm taking a bush-whacker to it, but feel free to add anything back that seems critical. Daniel Quinlan 08:39, Feb 10, 2005 (UTC)
- I agree, but think some of these should later reappear in articles about "C tools" and so on. Deco 19:28, 10 Feb 2005 (UTC)
[edit] Array changes
For what it's worth, dynamic arrays are not really supported by the C language proper. Arrays whose sizes are known only at run time can be allocated using the standard library, but these do not have array types — they are assigned to pointer variables and merely treated syntactically in a manner similar to arrays. This is the reason I originally omitted them. I realise it seems counterintuitive to say C doesn't have dynamic arrays, but really it's something you implement in C, not something C gives you. This is why I've softened the language of this part.
Oh, and multidimensional arrays are not arrays of arrays. This is just syntax — they're laid out in row-major order, and the index is computed using multiplication, not obtained by successive pointer fetching as in a true array of arrays. I've attempted to clarify this. Deco 04:45, 6 Apr 2005 (UTC)
- We've discussed this before, and again I refer you to K&R, which specifically states that C doesn't have multidimensional arrays, but rather arrays of arrays. As for dynamic arrays in C, the reason I keep bringing it up is because my students keep referring to books & articles that claim you can't have dynamic arrays in C. If you want to call it something else, that's fine, but I think it is important that people are aware that they can implement dynamic arrays in C. wrp103 (Bill Pringle) - Talk 03:49, 7 Apr 2005 (UTC)
-
- I guess you're right. Semantically, they are arrays of arrays, even if they're not necessarily stored in that manner. I agree with respect to dynamic arrays (let's mention it like we are now but just not use that term). Deco 01:22, 9 Apr 2005 (UTC)
I think there are a lot of areas of overlap between the C language, C syntax, and C standard library articles. I do think that this needs to be resolved.—Kbolino 04:20, Apr 7, 2005 (UTC)
[edit] Text added to C syntax section
Before anyone complains (or reverts me), I wanted to explain the text I added to the C syntax section — the idea is to provide a supercondensed summary for those who don't want to dive into all the details of the full article. This is a very common thing to do wherever you see a main article note, and I think it's appropriate here, although the summary is necessarily technical and missing lots of information. If they need to know more they can visit the main article. Deco 04:11, 18 Apr 2005 (UTC)
- Oy! You removed the comment bit that I'd just rewritten. I agree with your changes b.t.w. -- it should be in syntax, but you coulda checked if it was there first! Never mind, I moved it myself. I still think the main page is far too heavyweight and detailed. Most of the information should be moved to separate articles, with just very brief coverage on the main page. Akihabara 08:22, 18 Apr 2005 (UTC)