Wikipedia talk:WikiProject Logic/Standards for notation

From Wikipedia, the free encyclopedia

This article is within the scope of WikiProject Philosophy, which collaborates on articles related to philosophy. To participate, you can edit this article or visit the project page for more details.
This article has not yet received a rating on the quality scale.
This article has not yet received an importance rating on the importance scale.
This page is part of WikiProject Mathematics.



Starters

To start off with, it would be useful to have an agreed set of symbols. Not only do symbols vary from author to author, but any symbol may be written in a variety of fonts which may or may not appear in various browsers. I have started us off with a little table for the truth-functional connectives and one for quantifiers.

The aim is consistency and legibility

--Philogo 13:02, 16 August 2007 (UTC)
We might make use of:
Table of logic symbols

Templates

I don't know what the long term solution is. However, I recently saw how they are doing it with suits for cards {{Cs}} ♣. We are saving a few characters by using

  • {{and}} - &
  • {{or-}} - \lor
  • {{imp}} - \to
  • {{eqv}} - \leftrightarrow

  • {{not}} - ¬

  • {{xor}} - \nleftrightarrow
  • {{exist}} - \exists
  • {{all}} - \forall


but it seems like a wash for using... hmmm.

  • {{nor-}} -
  • {{nand}} -
  • {{cnv}} -
  • {{cni}} -
  • {{nonimp}} -


Gregbard 01:34, 19 August 2007 (UTC)


{{eqv}} (\leftrightarrow) can be used instead of <math>\leftrightarrow</math>.
I do not know how to set up such a template.--Philogo 23:32, 22 August 2007 (UTC)

These templates are easy to set up. Anything that comes after "Template:" in the name will show up as a template when you enclose the name in curly brackets. It's pretty neat. You can do a similar thing with whole pages using transclusions also. Gregbard 23:51, 22 August 2007 (UTC)

The aim is consistency and legibility
In a lot of articles I cannot read the connective because it appears as a little box. If I cannot read 'em, neither can others. We should ensure the "Preferred symbol" is legible to all. This is a FONT issue.

The second issue (and, Wikipedia being what it is, this may cause excitement) is that we should agree on consistency in the symbols themselves (as opposed to the font).
E.g. do we want to use:
'¬' or '~' for negation?
'\wedge' or '&' for conjunction?
'\supset' or '\to' for implication?
--Philogo 12:31, 20 August 2007 (UTC)

Implication symbol

I quite disagree that \supset should be preferred over other symbols for implication. It has fallen almost completely out of favor in mathematical logic texts, and I think it is almost unused in mathematical logic articles on WP right now. While each article should be internally consistent, I don't think there is a pressing need to standardize the notation between different articles, and I would oppose the standardization if it uses notation that is not common in the references used for the article. — Carl (CBM · talk) 13:59, 21 August 2007 (UTC)

In many areas (such as modal logic) \supset is the preferred symbol in what I've been seeing. (Although one sometimes sees \to.) This is obviously subfield-dependent. That said, I think Wikipedia should standardize on SOMETHING, with side remarks in the article on other notations people are likely to see in each area. Nahaj 17:01, 18 September 2007 (UTC)

The aim is consistency and legibility. This does not imply any particular symbol.
--Philogo 13:38, 22 August 2007 (UTC)

In a similar vein, few contemporary mathematical logic texts use \equiv for the biconditional because this symbol is commonly used to indicate an equivalence relation in mathematics. — Carl (CBM · talk) 14:02, 21 August 2007 (UTC)

The aim is consistency and legibility. This does not imply any particular symbol.
--Philogo 13:38, 22 August 2007 (UTC)

All the symbols in the table are legible. The main source of confusion for newcomers, I think, is the use of \supset and \to for implication. But \to is widely used in mathematical logic texts and papers, while I think \supset may still be in use among philosophers. So the consistency you are looking for doesn't exist in the real world. Here are the general principles I think WP standards should follow, in order of importance:
  1. Notation should be consistent within each article.
  2. Notation should be consistent with the most common notation used in academic works on the subject
  3. Notation should be consistent between articles on similar topics
I would be happy with a compromise that says that an article should use the notation adopted by a majority of its references. That would come closer to meeting my three points than trying to shove all the logic articles into one lump. — Carl (CBM · talk) 13:50, 22 August 2007 (UTC)

All the symbols in the table are visible, but a large number of pages use symbols that appear as boxes to me, and therefore, I assume, to others. I chose the alternatives to be visible. My further suggestion is that Wikipedia should be consistent even if (as you quite rightly say) such consistency doesn't exist in the 'real' world; this is more than being "consistent within each article". My little table sets out a "preferred" symbol for each connective - not mandatory. If we agree that there should be such preferred symbols to encourage consistency, then we have but to decide which symbols are preferred. Presumably the currently most commonly used would be the ones to go for. But firstly, do we agree that we should indeed have a set of preferred symbols to encourage consistency throughout Wikipedia logic articles? If not, there is no point in having preferred symbols or discussing the matter any further. --Philogo 22:52, 22 August 2007 (UTC)

It looks like good points on all sides so far. I still don't know what we should do. I know that there is a logic notation tool on wikimedia. Perhaps we can import it to wikipedia? Perhaps we have to take it case by case. Some topics will be that way. However there should still be some set default. We should designate those, and then assign those to the template that will go with it (like the "and" and "or" templates above). I agree that the arrow is better than the "cup" for implication. It's more easily understood by the general public, as is & for "and." Gregbard 23:31, 22 August 2007 (UTC)

You will see that I have changed the heading from 'Preferred Symbol' to 'Preferred Symbol(s)', so it is perhaps a little less controversial. You make a good new point about & for "and" instead of \wedge. It may be that & is not currently the most used in mathematical logic journals, but if (I emphasise if) it is easier for the general public to understand (for whom we are writing, are we not?), then there is a strong case for using '&'.
Next up: Symbol for True
--Philogo 00:04, 23 August 2007 (UTC)

The place where I have trouble is the xnor/equiv. I think we should designate two bars as "same number as", three bars as "same truth value as", and if we can create it, four bars for "same set as" for similar reasons as above. —The preceding unsigned comment was added by Gregbard (talkcontribs) 00:14, August 23, 2007 (UTC).

Re Philogo, it would be fine with me if we had a minimal set of symbols (say \to and \supset for implication, & for and, \lor for or, and \equiv and \leftrightarrow for biconditional, ~ for negation) that still allowed editors to choose the right implication symbol for the article but specified the other symbols. In general, I don't think it's worthwhile to look for complete uniformity across all articles when such uniformity doesn't exist in the real world, because our role here is not to create things but to describe them.

Re Gregbard, I am fine with & for and, but as I have pointed out \equiv is not used much in math logic for biconditional, and the standard symbol for set equality in set theory is =, not a symbol with four bars. To be honest, if I saw a symbol with four bars I would have no idea what it means; others would have the same issue.

Also, I'll point out that although I'll work with the consensus we come to here, if someone starts making mass changes to articles without advertising them widely first there will likely be numerous complaints about it. Compare WP:ENGVAR, which is the result of a lot of wasted time. — Carl (CBM · talk) 00:35, 23 August 2007 (UTC)
Somebody redefined {{all}}. ...now sorted --Philogo (talk) 12:55, 27 November 2007 (UTC)

Need math tag codes

We should give the <math> codes for all the preferred symbols, even if they have a template. When people use \forall or \exists, for example, they need to stay in math mode. MilesAgain (talk) 09:03, 1 January 2008 (UTC)
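For instance, a listing along these lines might pair each template with its <math> code (a sketch only; the pairings assume the templates discussed earlier on this page, and the choice of codes is illustrative rather than a settled standard):

```latex
% Possible <math> codes for the connectives and quantifiers (illustrative)
\land            % conjunction ({{and}})
\lor             % disjunction ({{or-}})
\to              % implication ({{imp}})
\leftrightarrow  % biconditional ({{eqv}})
\neg             % negation ({{not}})
\nleftrightarrow % exclusive disjunction ({{xor}})
\forall          % universal quantifier ({{all}})
\exists          % existential quantifier ({{exist}})
```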

Done. MilesAgain (talk) 09:48, 1 January 2008 (UTC)

Use v mention

Further information: Use-mention distinction

We should come to agreement on the notation for distinguishing use from mention. Also whether or not we will just skirt the issue at times or attempt a rigorous adherence to a policy for clarity. I think it is a sign of good quality in the Wikipedia if we can get to this level of clarity. I have used single quotes to do this myself, however we could elect to use italics, or perhaps there is something else.

I think we could also choose to use this distinction in paragraph format text (i.e. right now it's '(P&Q)') and not use it when, for instance, we have formulas that are set apart by themselves and indented, etc:

(P&Q)

--

Proposal:
Mention of a phrase shall be indicated by enclosing in single quotes:
'Alex' has four letters.
Use of a phrase shall be indicated by the phrase itself:
Alex is 21 years old.
Furthermore, iterations of being enclosed in quotes shall be allowed to indicate further levels of language.

Pontiff Greg Bard (talk) 17:24, 17 January 2008 (UTC)

This is already dictated by the manual of style Wikipedia:MOS#Italics; it can't be changed here. — Carl (CBM · talk) 17:48, 17 January 2008 (UTC)

Proposal withdrawn in favor of the established standard. I will change the format on substitution instance accordingly. Pontiff Greg Bard (talk) 18:44, 17 January 2008 (UTC)

Not so fast, my fine feathered friends. The manual indicates that words and letters mentioned should be in italics:

"Italics are used when mentioning a word or letter (see Use–mention distinction)"

But by tradition a predicate letter is already in italics... so we cannot use italics to indicate mention rather than use. It is my belief that the use of single quotes to indicate mentioning is well established in philosophy and logic. I think this is a case of technical use rather than style. We would write: 'Snow is white' is true just in case snow is white. --Philogo (talk) 01:52, 9 February 2008 (UTC) —Preceding unsigned comment added by Philogo (talkcontribs) 01:47, 9 February 2008 (UTC)

2centsworth

I gather that the use of \supset for implication has failed for lack of a second. Good.

I'm old enough to prefer 'cap' and 'cup' for 'and' and 'or'. The argument in favor of '&' is that & is widely understood. I find that a defect rather than an advantage. Because & for the English word "and" (and @ for the English word "at") are so common in webspeak -- I get them in term papers! -- I see a real problem with people mistaking the logical "and" for the English word "and". Consider, for example, this sentence,

"'\wedge' & '&' are both used to mean "and", & '\vee' is used to mean "or"."

Rick Norwood (talk) 19:15, 27 May 2008 (UTC)

On a frivolous note: I read a paper once that investigated the advantage of using symbols in a computer language for operations which bore some resemblance to the meanings of those symbols (including words) in the everyday world, e.g. using IF and THEN and ELSE. They taught two classes of students the same language, one with the mnemonic symbols and one with completely made-up symbols, like Ek instead of IF and Ugh for AND and so on. You would expect the students with the mnemonic language to learn more quickly than the other group. They did not. The explanation may be that it takes as long to shake off the surplus of the normal sense of a familiar word as to learn a new word with no previous connotations. I have some really old logic books that use "." for "and". I cannot help but read "p.q" as p multiplied by q. --Philogo 22:04, 27 May 2008 (UTC)

Preferred symbols

I don't agree with the recommendation of &, but I may reconsider if anybody can show me a single book on model theory, published after 1990 by a normal scientific publisher, that uses this symbol. Or an introduction to mathematical logic on the graduate level that uses it. I have never encountered the symbol in recent publications, either. I have also never seen the arrow notations for NOR and NAND outside Wikipedia. From which field do they come? (In model theory we just don't use the operations.) I haven't seen the negated left-right arrow used for exclusive disjunction in the wild either, but it's self-explanatory.

In other fields the notation is less standardised. See also the related discussion here. (I wasn't aware of this page when I started that.) For articles on Boolean logic in engineering I would agree with its use, for example, except that in that field multiplication seems to be a more standard notation, while OR is most often written as +, TRUE as 1, FALSE as 0, and NEGATION as ~ or by overlining.

I think the superset symbol used to be used for implication because there wasn't much choice before computer typesetting. I was surprised by the comment by Nahaj above claiming that it is still used in modal logic. I checked the first freely accessible hits on Google Scholar for the search term "modal logic", and I did find an apparently important book from 1996 that used this notation (Hughes and Cresswell, "A new introduction to modal logic"). It seems that this field has a subculture with diverging terminology (also including the use of &), but when people like Moshe Y. Vardi write about modal logic they use the (nowadays) more standard arrow notation. In my experience it's also the notation that finite model theorists use when giving talks related to modal logic.

The addition symbol + is used much more often for OR than for XOR. Allowing it for XOR but not for OR is absurd and likely to mislead. --Hans Adler (talk) 13:08, 1 June 2008 (UTC)

I don't understand the symbol tables. What is the "Symbol(s)" column in the first table supposed to mean? All symbols that are in wide use for a connective, or just those that are permitted for Wikipedia? If the former, then they are horribly incomplete and misleading, see some of my remarks above. If the latter, then the choice is not good. I would like to edit the table, but for this I need to understand it. --Hans Adler (talk) 20:33, 2 June 2008 (UTC)

Terminology

Philogo suggested that I take the discussion about terminology (mainly for first-order logic) from non-logical constant to this page, and that I make a table. So here it is. Please edit the table if you have further information. I suggest restricting the cited sources to significant books. --Hans Adler (talk) 13:34, 1 June 2008 (UTC)

It is my personal opinion that Mendelson's terminology is generally obsolete. The book was written at a time when the subject was very new, and terminology and notation were much less standardised than they are now. Originally he even used the old notation \supset for implication, and Dirk van Dalen in his review of the second edition specifically mentioned the change to the usual arrow notation as significant. But in contrast to some of the notations, the terminology was never updated and continues to influence some authors. However, as far as I can tell it has no effect on the mainstream, only on some isolated areas.

One thing I learned when doing the research for the table below is how few significant books on mathematical logic there are. There seems to be no alternative to Hinman's book for a thorough introduction on the graduate level. --Hans Adler (talk) 23:44, 1 June 2008 (UTC)

Mendelson 1964 uses E not \exists, and () not \forall. Mates 1972 has \exists but still () not \forall.
Yes, the notation has been updated, but sadly not the terminology. --Hans Adler (talk) 20:27, 2 June 2008 (UTC)

Key to the books

  • Elliott Mendelson, "Introduction to Mathematical Logic", first edition (1964)
  • Joseph R. Shoenfield, "Mathematical logic", first edition (1967)
  • Benson Mates, "Elementary Logic", second edition (1972)
  • Chen-Chung Chang and H. Jerome Keisler, "Model Theory", first edition (1973)
  • Jon Barwise (ed.), "Handbook of Mathematical Logic", first edition (1977) [especially Jon Barwise, "An introduction to first-order logic"]
  • Heinz-Dieter Ebbinghaus, Jörg Flum and Wolfgang Thomas, "Einführung in die mathematische Logik", first German edition (1978)
  • Chen-Chung Chang and H. Jerome Keisler, "Model Theory", third edition (1989)
  • L.T.F. Gamut, "Logic, Language and Meaning" Vol. I, "Introduction to Logic" (1991)
  • Wilfrid Hodges, "Model Theory" (1993)
  • Heinz-Dieter Ebbinghaus, Jörg Flum and Wolfgang Thomas, "Introduction to Mathematical Logic" (1996)
  • Elliott Mendelson, "Introduction to Mathematical Logic", fourth edition (1997)
  • Peter G. Hinman, "Fundamentals of Mathematical Logic" (2005)

Mendelson (1964, 1997) seems to be still popular in some places. Chang & Keisler (1973) was the canonical book on model theory, superseded by Hodges (1993). Ebbinghaus et al (1996) has been the canonical introduction to logic in Germany since 1978, but its English translation was too late to have a big direct impact on English terminology. Hinman (2005) seems to be unrivalled as a modern graduate level text on mathematical logic. --Hans Adler (talk) 00:34, 2 June 2008 (UTC)

It would be interesting to discover which are the most popular books used in teaching logic at universities, not JUST at graduate level, and in any department (actually of course this is going to be either maths or philosophy, because I have not come across any other disciplines that teach the subject; further, it is mainly (if not solely) philosophy departments who insist on the study of (elementary) logic, and I suspect it is not compulsory in most maths degrees). So we should discover what terminology is taught to most people at universities when they study logic, because they will surely represent our readership. The terminology may not be the best there is, and the books used may not be the best or most up-to-date either. But that's not unique to logic. When I studied physics at school for A-level, apparently nothing much had happened since Newton: F=ma and there you go. In Wikipedia we should write using the most commonly used terminology. If there is a newer, better terminology then we should certainly describe it, thus enabling readers to understand material written in the newer, better terminology. --Philogo 00:55, 2 June 2008 (UTC)

I agree in part, but not completely. Here are some points that you are probably not aware of.
  • Logic is indeed not compulsory in maths, at most universities. But:
    • From Logic in computer science: "The study of basic mathematical logic such as propositional logic and predicate logic (normally in conjunction with set theory) is considered an important theoretical underpinning to any undergraduate computer science course. Higher order logic is not normally taught, but is important in theorem proving tools like HOL." I would guess that there are significantly more computer science students than philosophy students.
    • Logic, especially Boolean logic, is also important in electrical engineering. Again, there are probably significantly more electrical engineering students affected by this than even computer science students. If we want to cater mainly for the largest numbers, then the standard for propositional logic will have to be 0 for FALSE, 1 for TRUE, + for OR and multiplication (with or without a dot) for AND. I don't think this is the way to go.
  • The differences are not just in having different words for the same thing. The entire concept of a signature is completely missing in the books by Mendelson and Mates. I just looked at the philosophy/logic shelf in our library: Apart from the incompleteness theorem, which seems to have a lot of appeal to philosophers, the books there are mainly doing trivial things in a lot of space. They don't need signatures because essentially they are not doing anything with logic other than define it. The philosophical approach can be explained easily as a special case of the mathematical/computer science approach, but not the other way round. By the way, there are also slight differences in signatures as used in maths and signatures as used in computer science; but they are insignificant enough to be glossed over easily.
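As an aside, the engineering convention mentioned above (0 for FALSE, 1 for TRUE, + for OR, multiplication for AND) can be sketched in a few lines of Python; the function names here are made up purely for illustration:

```python
# Boolean logic in the engineering notation described above:
# 0 = FALSE, 1 = TRUE, + = OR, multiplication = AND, 1 - x = NOT x.

def OR(x, y):
    # Addition, capped at 1 so that 1 + 1 = 1 rather than 2.
    return min(x + y, 1)

def AND(x, y):
    # Plain multiplication: 1*1 = 1; anything involving 0 gives 0.
    return x * y

def NOT(x):
    # Negation by overlining corresponds to 1 - x on {0, 1}.
    return 1 - x

# De Morgan's law holds under this encoding: NOT(x + y) = NOT(x) * NOT(y).
for x in (0, 1):
    for y in (0, 1):
        assert NOT(OR(x, y)) == AND(NOT(x), NOT(y))
```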
I will explain the significance of signatures below. I am confident that we can find a good solution for this problem, and I will make a proposal below for explaining the two options at one or two places and avoiding the issue completely where signatures are not needed, by using ambiguous language. --Hans Adler (talk) 11:41, 2 June 2008 (UTC)
I propose that in First-order logic#Non-logical symbols, we start by explaining the fixed signature of the traditional system. Then we say that in mathematics and computer science one instead uses arbitrary non-logical symbols, with no restrictions on their number, so that the previous convention is just a special case. In other places we just use the standard signature (without talking about it), except in examples taken from maths and computer science, where we use (and talk about) the signature. Does this sound convincing? --Hans Adler (talk) 12:52, 2 June 2008 (UTC)
Yes, that sounds fine, the overriding considerations being precision, clarity and readability. Precision and clarity are assisted by a uniform terminology, and this, I think, is what we are agreed we are aiming at. If a large number of users are used to some other terminology, readability is assisted by mentioning it. This is not a luddite position at all: if we want people to recognise or adopt some (to them) new terminology, it will help them to do so if we help them keep their bearings by referring/cross-referring them to similar or synonymous older terms. We seem to have pulled it off OK with the table of logical symbols: you will see that there is a column of alternative symbols and then the "preferred" symbol. There were a few sulky remarks concerning "~" and the like, but it settled down. Do you think we can finish up with a similar table for the terms now under discussion? --Philogo 20:05, 2 June 2008 (UTC)
I have marked my suggestions below by using bold. If we find agreement I will make a table for the main page, and then we should also discuss the symbols. Here are the reasons for my choices:
  • Non-logical symbols is a self-explanatory term, much less confusing than "non-logical constants" (which seems to be more common in philosophy) and probably acceptable to all. Mathematicians also need a term for the (varying, in their case) set of non-logical symbols, and signature is the recently established standard term for that.
  • The three kinds of non-logical symbols should have similar names. "Individual constant" is probably supposed to mean "constant for individual elements", but that's not clear, and the term looks weird to me. Everybody knows that in mathematics a "constant" is normally just a number, so here it would be just an element. This makes "individual" redundant. Since a "constant" is really the element itself and not the symbol representing it, the best choice is constant symbol. Then clearly we should also use function symbol and either predicate symbol or "relation symbol". As a mathematician I prefer the former for unary predicates and the latter for predicates with more places; but with philosophers in mind let's take the former, which is also acceptable.
  • On the semantical side, only "model for" and structure are serious candidates for what mathematicians need. Let's take the latter to make sure that philosophers don't confuse it with "model of". Maybe the term "interpretation" needs to be discussed as well, but from a mathematical POV it's ill-defined and very hard to handle. I would have to read a lot of books I don't like before I can contribute to this. Domain is an acceptable word both for philosophers and mathematicians. Currently we are overusing "domain of discourse", which sounds pompous to a mathematician's ears.
  • All terms for the logical connectives are equally acceptable to me, with a slight preference for "logical connective". As I have returned all the philosophy books to the library I have no idea what the standard terms there are, and so I don't want to make a suggestion about that. --Hans Adler (talk) 21:03, 2 June 2008 (UTC)

General term for the symbols used for constants, functions and relations

Term (notable books): comments

  • non-logical constants (Mendelson 1964-1997, Mates 1972): refers to functions with varying interpretation as "constants". Both authors work with a fixed set of non-logical constants.
  • non-logical concepts (Shoenfield 1967)
  • (primitive) non-logical symbols (Barwise 1977)
  • similarity type: most often used in universal algebra, and often seen as a sequence of numbers (arities of function symbols) with no provision for relation symbols. (So that the primary meaning is strictly speaking something slightly different.)
  • "stock of constants and predicate letters" (Gamut 1991): only used in passing.
  • vocabulary: clashes with the standard metaphor in formal language theory: a "vocabulary" should not consist of "letters/symbols" and be a subset of an "alphabet". Gamut 1991 uses the term to refer to the entire alphabet, including the logical symbols.
  • first-order language: ambiguous term: does it refer to the non-logical symbols or to the first-order formulas? Not suitable for non-first-order contexts, where exactly the same information is needed.
  • symbol set (Ebbinghaus et al 1996): as in "the symbol set of L". Abuse of language: does not include the logical symbols.
  • language (Chang & Keisler 1973-1989, Barwise 1977): ambiguous term: does it refer to the non-logical symbols or to the (first-order) formulas? Clashes with formal language. Barwise never defines the term officially, but calls the "set of non-logical symbols" L and calls it "language" in the examples.
  • signature (Hodges 1993, Hinman 2005): term originated in computer science? Neutral: does not suggest a logic context, and is equally appropriate for purely semantic use.

An early approach in first-order logic was to have one universal language of first-order logic, containing a fixed, countably infinite supply of non-logical symbols: for example, constants c_0, c_1, c_2, ..., and binary predicate symbols P^2_0, P^2_1, P^2_2, ... . This was a major obstacle for early mathematical logicians, and many results that are now considered trivial and perfectly obvious were for them non-trivial and often very hard to prove because of this restriction.

In computer science it is a problem with the old approach that the implicit signature (i.e. the fixed set of non-logical symbols) is infinite. Many arguments in computer science only work for finite signatures, and most absolutely need a finite upper bound on the arity of relation symbols. So in computer science it is customary to restrict consideration to, e.g., only the first 3 constants, no unary predicate symbols, the first binary predicate symbol, and no predicate symbols of higher arity. This information (in the example it could e.g. be coded as 3; 0, 1, 0, 0, 0, ...) is called the signature.
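Purely as an illustration (the representation below is invented, not a standard one), the finite-signature coding just described could look like this in Python:

```python
# A hypothetical representation of the finite signature described above:
# only the first 3 constants, no unary predicate symbols, exactly one
# binary predicate symbol, and nothing of higher arity. The coding
# "3; 0, 1, 0, 0, ..." becomes a count of constants plus a map from
# arity to the number of predicate symbols of that arity.
signature = {
    "constants": 3,                 # c_0, c_1, c_2
    "predicates_by_arity": {2: 1},  # one binary predicate symbol P^2_0
}

def predicate_symbols(sig):
    """Enumerate the predicate symbols the signature allows."""
    return [
        f"P^{arity}_{i}"
        for arity, count in sorted(sig["predicates_by_arity"].items())
        for i in range(count)
    ]

print(predicate_symbols(signature))  # prints ['P^2_0']
```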

In mathematics it is often necessary to extend the supply of non-logical symbols. One could of course just continue with c_ω, c_{ω+1}, ..., but it's much more natural to allow arbitrary sets (which are not already logical symbols) as non-logical symbols. And while we are at it, we can get rid of the "standard" symbols c_0, c_1, ... etc. This covers both the original situation (just take the signature consisting of the countably many old-fashioned non-logical symbols) and what the computer scientists do (just choose finitely many of the old-fashioned non-logical symbols).

Much more importantly, the signature in the mathematical sense is exactly what we need in applications, both in mathematics and computer science.

  • Let's say that an abelian group is a set A together with a group operation +, a neutral element 0, and a unary negation operation -, such that certain axioms hold. In the old-fashioned system we would code the sentence "for all x, x+(-x)=0" as \forall v_0(f^2_0(v_0,f^1_0(v_0))=c_0). Now we can just code it as \forall v_0(+(v_0,-(v_0))=0), which is much more readable.
  • Let's say we have an employee database with one table BASEINFO connecting personnel number, name, gender and date of birth, and another table SALARY connecting personnel number and salary. The tables define a quaternary relation (quaternary predicate) BASEINFO and a binary relation SALARY. The fact that there are just these two tables, and no others, and how many places (columns) they have, is much more fundamental for the database than the actual content. Occasionally a database programmer will have to change this kind of information, but in normal everyday use only the strings and numbers which the database contains, and the rows in which they occur, will change. The fundamental, normally unchanging, information is the signature {BASEINFO, SALARY}.
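The database example can be sketched as follows; the rows are invented, and only the table names and arities come from the text above:

```python
# The signature: which relation symbols exist and how many places each has.
# This is the fundamental, rarely changing information.
SIGNATURE = {"BASEINFO": 4, "SALARY": 2}

# A structure interpreting the signature: each relation symbol gets a set
# of tuples of the right arity. This is the part that changes in normal
# everyday use. (Rows invented for illustration.)
tables = {
    "BASEINFO": {(1001, "A. Example", "F", "1970-01-01")},
    "SALARY": {(1001, 50000)},
}

# Every row must match the arity the signature prescribes.
for name, arity in SIGNATURE.items():
    assert all(len(row) == arity for row in tables[name])
```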

Signatures were invented because they were needed. At first it was just a convention: People worked with them without saying so. So people started working with several first-order languages instead of just one. It took several decades for the modern standard term "signature" to be invented and to become established, and a few old-fashioned people still say "the non-logical symbols of the language" or just "the language" when they mean the signature. But in all fields that actually need signatures there is a clear trend to use the term "signature". (Sorry that this became so long.) --Hans Adler (talk) 12:25, 2 June 2008 (UTC)

More specific terms distinguishing between functions and relations etc.

Terms for constant symbols:
  • individual constant (Mendelson 1964-1997; Mates 1972; Gamut 1991)
  • constant (Shoenfield 1967, Hodges 1993, Ebbinghaus et al 1996)
  • (individual) constant symbol (Chang & Keisler 1973-1989)
  • constant symbol (Barwise 1977, Ebbinghaus et al 1996, Hinman 2005)

Terms for function symbols:
  • function letter, arity >0 (Mendelson 1964-1997, Gamut 1991 [hypothetical])
  • operation letter/symbol, arity >0 (Mates 1972)
  • function symbol, arity ≥0 (Shoenfield 1967): includes constants as a special case.
  • function symbol, arity >0 (Chang & Keisler 1973-1989, Barwise 1977, Hodges 1993, Ebbinghaus et al 1996, Hinman 2005)

Terms for predicate symbols:
  • predicate letter, arity >0 (Mendelson 1964-1997, Gamut 1991)
  • predicate symbol, arity ≥0 (Shoenfield 1967): includes propositional variables as a special case.
  • relation symbol, arity >0 (Chang & Keisler 1973-1989, Barwise 1977, Hodges 1993, Ebbinghaus et al 1996, Hinman 2005)

Note: Gamut 1991 do not allow functions at all; if they did, they would no doubt use the term "function letter".

Terms for the logical connectives

Term (notable books):
  • logical connective (Barwise 1977)
  • connective (Chang & Keisler 1973-1989)
  • logical operator
  • propositional operator

Hodges 1993 speaks about "logical symbols" (including quantifiers and equality), but doesn't have a term for the connectives.

Terms for the semantics

Terms for the structure:
  • [universal] algebra: only in universal algebra; usually no relation symbols allowed.
  • interpretation (Mendelson 1964-1997; Mates 1972): Ebbinghaus et al 1996 uses the term for a structure plus interpretations of the variables. In model theory the term also has another meaning.
  • model for (Chang & Keisler 1973-1989, Barwise 1977, Gamut 1991): a model for a signature as opposed to a model of a sentence or theory. This is standard usage in model theory in informal contexts.
  • structure (Shoenfield 1967, Barwise 1977, Hodges 1993, Ebbinghaus et al 1996, Hinman 2005): standard usage in model theory in formal contexts.

Terms for the underlying set:
  • domain (Mendelson 1964-1997, Shoenfield 1967, Hodges 1993, Ebbinghaus et al 1996)
  • domain of discourse (Gamut 1991)
  • universe (Chang & Keisler 1973-1989, Hinman 2005): most suitable in set theory.
  • carrier (Ebbinghaus et al 1996)
  • underlying set: standard term for this kind of thing in algebra and in category theory.

There is a clear trend: Authors who work with a fixed first-order language rather than with signatures use the term "interpretation". This term is usually defined rather sloppily, so that it is not clear whether an interpretation of the sentence <math>\forall x(P^1_0x)\wedge\forall x(P^1_1x)</math> is also an interpretation of the sentence <math>\forall x(P^1_0x)</math>, or whether it just gives rise to one. The issue is that an interpretation of the first sentence must supply a meaning to <math>P^1_1</math>; while an interpretation for the second sentence need not do that, it's not clear whether it's allowed to do it. This and related issues make interpretations completely unsuitable for mathematical use, and also for many applications in computer science.

Authors who work with signatures, and languages over them, use the term "algebra" (only in universal algebra), model or structure. A model/structure for a signature has a domain and interpretations for exactly the non-logical symbols that are in the signature. A formula over a signature uses at most the non-logical symbols in the signature. A structure over a signature is a model of a sentence over the signature if the sentence is true in the structure. I think the term "model for" is popular mainly because the field that studies structures is called model theory, not "structure theory". --Hans Adler (talk) 12:39, 2 June 2008 (UTC)
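The pattern described above (a structure for a signature has a domain and interpretations for exactly the non-logical symbols in the signature) can be made concrete with a small sketch. This is only an illustrative toy: the class names `Signature` and `Structure` and their layout are made up for this example, not taken from any of the books cited.

```python
# Toy sketch of a signature and a structure over it (illustrative only;
# the names Signature/Structure are invented for this example).

class Signature:
    def __init__(self, constants, functions, relations):
        self.constants = set(constants)   # e.g. {"0"}
        self.functions = dict(functions)  # symbol name -> arity, e.g. {"+": 2}
        self.relations = dict(relations)  # symbol name -> arity, e.g. {"<": 2}

class Structure:
    """A domain plus interpretations for exactly the symbols in the signature."""
    def __init__(self, sig, domain, interp):
        self.sig, self.domain, self.interp = sig, domain, interp
        # A structure must interpret every non-logical symbol of its signature.
        needed = sig.constants | set(sig.functions) | set(sig.relations)
        assert needed <= set(interp), "missing interpretations"

# The numbers 0..3 as a structure for the signature {0, +, <}:
sig = Signature({"0"}, {"+": 2}, {"<": 2})
N = Structure(sig, range(4), {
    "0": 0,
    "+": lambda a, b: (a + b) % 4,  # wrap around so results stay in the domain
    "<": lambda a, b: a < b,
})
print(N.interp["<"](N.interp["0"], N.interp["+"](1, 2)))  # 0 < 3 is True
```

A formula over the signature {0, +, <} uses at most those three non-logical symbols, so it can be evaluated in any such structure; a formula mentioning an extra symbol could not be.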

Question. Instead of saying a wff is "true under an interpretation", would you say it is "true in some model"? And would you say "true in all models" instead of "true under all interpretations", or what?
I believe, but will not swear to it, that the terms "constant" and "variable" crept into maths from physics, where two or more variable physical quantities can be related by means of an equation involving some constant value. The letters representing these in an equation became known as "variables" and "constants". When plotting on graphs, two variable values would be represented on the "x" and "y" axes. When symbolic logic was later developed from arithmetic by Frege and Russell, it came naturally to them to see some things as the "variables" and others as the "constants". If you call everything that is not a variable a "constant", you finish up with two types: the "logical constants" (the symbols for "and" etc.) and the others, which of course became the non-logical constants. This terminology is therefore ultimately derived from a physics metaphor. I believe that the term "function" came to maths from physics by another metaphor. And the use of the term "model" is surely another metaphor, perhaps as easier on the tongue than "isomorphic". There is an interesting history, too, as to why we use the word "third" in two quite distinct ways, as in "the third (in line)" and "a third part", from the days when what we called fractional arithmetic could only be performed with the equivalent of fractions whose numerator could only be 1. --Philogo 22:26, 2 June 2008 (UTC)
Answer: Formulas that are true under all interpretations are completely irrelevant in model theory. There is no reason to talk about them. They may be of some relevance in proof theory (I just don't know), but I believe in truth theory models are completely irrelevant. – And if we do want to say such things, there is still the more formal sounding word "structure".
Your description of the (presumably) intended metaphor with "logical constants" is almost exactly what I came up with myself. It makes sense if you talk about the logical formalism because you are deeply interested in it for its own sake and if you don't care at all about its applications. But as soon as you start applying it, the other, mathematical, metaphor is much more immediate (to physicists as well as to mathematicians, I dare say). If you apply first order logic, then the logical connectives and quantifiers are not in your focus at all because their interpretations are constant. What is remarkable, however, is when you can find constants in the varying part ("individual constants"). For a physicist something like the speed of light or Avogadro's number is a natural constant. They don't consider more fundamental facts such as "The world can be described by mathematical formulas.", or just the logical connectives, to be natural constants, because 1) they can't be described as numbers, and 2) they are not in their scope. For mathematicians doing first-order logic, the connectives are not in their scope, and they are also not elements of the domain (i.e. generalised numbers). Therefore it's unnatural for a mathematician to call the connectives constants. --Hans Adler (talk) 22:56, 2 June 2008 (UTC)
PS: If the point in your first paragraph was really about how to express it (i.e. "true in" vs. "true under"), then the answer is simply: yes, we use "in" instead of "under", both for "models" and for "structures". --Hans Adler (talk) 14:00, 9 June 2008 (UTC)
I can see, and empathise. I consider conjunction a function that has truth values for its arguments and its range, and product a function that has numbers for its arguments and its range. I see predicates (properties and relations) as functions that take anything in the domain as arguments (belonging to a sub-set of the domain specified per predicate) and have truth-values for their range. I see that an element of the domain can be assigned a symbol as a name, but the names are assigned temporarily, as per Euclid: "Let A be a point equidistant from all points on some circle B." In other words, let's call some circle 'B' and some point 'A'.

So far, then, in Logic we just have functions and elements. There are two types of functions; both have truth-values as their range, but the first type has truth values for its arguments, and the second has elements. Since both types return truth-values, both could be called truth-functions (by analogy with numeric functions). I do not think I would bother to differentiate "property" from "relation" (although in Philosophy properties have proved particularly troublesome). It might be useful to have different names for the two types of function, but nothing elegant comes to mind. For a working title we could call them truth-truth functions and object-truth functions. (I THINK Frege would call them concepts.) So conjunction is a truth-truth function, and is-yellow is an element-truth function.

Since there are just 16 arity-2 truth-truth functions, plus the arity-1 truth-truth functions, we could dignify them with names like "disjunction" and give each a permanent symbol like "v". We could instead give them names formed from their truth tables, like "TTTF" for disjunction, and use the string "TTTF" as its symbol; likewise "FT" for not, and "TFFF" for "and". Then, instead of PvQ & ~P therefore Q, we would write: TFFF(TTTF(P,Q),FT(P)) therefore Q. An expression like TFFF(TTTF(P,Q),FT(P)) would be a push-over (pun intended) to machine-parse.
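The claim that such strings are a push-over to machine-parse can be checked with a short sketch. The evaluator below is an illustrative toy, assuming the convention that a connective's name lists its outputs for inputs in the order TT, TF, FT, FF (binary) or T, F (unary); the function name `evaluate` is invented for this example.

```python
# Toy evaluator for connectives named by their truth tables (illustrative only).
# A binary name lists outputs for inputs in order TT, TF, FT, FF; a unary
# name for inputs T, F. So TTTF = "or", TFFF = "and", FT = "not".

def evaluate(expr, env):
    """Evaluate e.g. 'TFFF(TTTF(P,Q),FT(P))' given truth values for letters."""
    expr = expr.strip()
    if "(" not in expr:              # a bare propositional letter
        return env[expr]
    name, rest = expr.split("(", 1)
    args_src = rest[:-1]             # drop the final closing ')'
    # split the argument list on commas that are not nested in parentheses
    args, depth, start = [], 0, 0
    for i, c in enumerate(args_src):
        if c == "(":
            depth += 1
        elif c == ")":
            depth -= 1
        elif c == "," and depth == 0:
            args.append(args_src[start:i])
            start = i + 1
    args.append(args_src[start:])
    vals = [evaluate(a, env) for a in args]
    # index into the name: row 0 is all-True inputs, the last row all-False
    index = sum((0 if v else 1) << (len(vals) - 1 - i) for i, v in enumerate(vals))
    return name[index] == "T"

# (P or Q) and not P, with P false and Q true, comes out True:
print(evaluate("TFFF(TTTF(P,Q),FT(P))", {"P": False, "Q": True}))
```

Because the connective's name *is* its truth table, the evaluator needs no lookup table of symbols at all: parsing the name and indexing into it is the whole semantics.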

Thus, in summary, our terminology would admit elements and truth-functions (element-truth functions and truth-truth functions). For convenience we can assign symbols to each of these, and it does not matter what we use. We can add non-truth-valued functions for terms if we like, but they are a derived concept, IMHO. We should add variables and quantifiers; they do not represent elements or functions, and I am pretty sure we can define them in terms of the latter, syntactically.

Our terminology has been, is, and probably always will be based on metaphors, in all fields: think of force, particle and wave. We get a long way conceptualising with metaphors, but in the end the metaphor breaks down, a new metaphor is introduced, and we get cross with each other. --Philogo 00:14, 3 June 2008 (UTC)