User talk:Linas/Archive11


Original documents for Principle of Least Action

Hi Linas, it's Willow again. I added three of the original documents (along with their translations) in the development of the principle of least action to their respective Wikisources; see my userpage for more details (under "Inter-Wiki stuff"). They still need proofreading by others, but I think they're more-or-less OK for reading, and thought that you might enjoy them. It's strange and interesting how vehemently Euler defends Maupertuis' priority in 1752, when it is clear that Maupertuis asserts his principle in 1744 only for light (not matter) and does so with little justification. Maupertuis' one interesting argument is that space and time should be equivalent but, in the refraction of light, time is minimized (Fermat's principle) but not distance. On that basis alone, Maupertuis asserts that the principle of least action is more fundamental than Fermat's principle. Euler, on the other hand, is the first to assert the principle for material particles, and the first to note its requirement that speed be a function of position alone (i.e., that the particle's total energy be conserved). Euler's later misrepresentation of Maupertuis' achievements is really odd, and almost makes one wonder whether Euler was being blackmailed or trying to gain some professional benefit. But perhaps we're still missing some documents that might shed more light on the story. Willow 11:33, 12 July 2006 (UTC)

I'll have to digest this slowly. At Accord des diferntes... you indicate "trouvé à Gallica", but there's no URL ... did you go to the library? Similar remarks apply to the other texts. Lovely picture of the knitter, by the way. linas 00:56, 13 July 2006 (UTC)

Hi, linas, I like the Bouguereau painting, too; there aren't so many flattering pictures of us knitters!

I translated the part of Maupertuis' 1746 article that concerns mechanics (the first two parts concern proofs of God's existence) and was dismayed to find several things. Maupertuis takes credit for having invented the principle of least action as a general principle, although it's clear that he proposed it only for light in 1744. He cites Euler's 1744 book and thanks him for his "beautiful application of my principle to planetary motion". Even worse, when Maupertuis tries to apply "his" principle to elastic and inelastic collisions, and to the equilibrium of a lever, he seems to mis-apply it. When you get a chance, could you please look over the latest article and see whether you agree? Perhaps I'm being unfair to Maupertuis. I confess, I'm even beginning to suspect that he didn't know any calculus (e.g., what an integral is); if so, it would be a strange quirk of history to credit him with a principle that relies so much on an integral. ;) Willow 17:17, 18 July 2006 (UTC)

I think that perhaps you want to start thinking about writing an essay on this topic. I'm not sure where to put it: on some blog somewhere, where you can try to generate interest? At a minimum, you may want to post to Wikipedia:WikiProject History of Science, and stir it up there. If you find your essay starts gaining length and heft, then publication in some journal of history starts becoming an option. Anyway, I shall try looking at the translations -- but again, I ask, will I be able to find the Latin originals online? linas 19:40, 18 July 2006 (UTC)
I copied this to Wikipedia talk:WikiProject History of Science. Oh, and so the pressing question seems to become "why did Euler go ballistic in Maupertuis' defense?" Was he really that forceful? linas 20:02, 18 July 2006 (UTC)
You might also try "google scholar" to see if anyone has written about Maupertuis or least action recently, and then contact them for an opinion. I'm trying it now:
Seems like http://scholar.google.com/scholar?q=Maupertuis+Least+action+history yields some good hits. linas 20:23, 18 July 2006 (UTC)

Advisor

I saw your comment above, but I'm being a little old-fashioned and so far preserving anonymity. (Mine and, um, his. Or hers.) Nothing to do with you particularly, and I'd be happy to discuss it outside of WP....

I realise this is probably a terrible faux pas round here, and I expect I'll get over it. But there you go.

Fixed some of those minor points you referred to in the article. Still pondering the stochastic.--Jpod2 23:17, 27 August 2006 (UTC)

No, that's fine. I've been in a number of battles over pseudoscience content (not your content, you're fine), and just don't like it when total strangers show up (not you, but User:FireFox above) to antagonize. I'm sorry for the display of bad behavior; it's my faux pas, not yours. linas 02:32, 28 August 2006 (UTC)
Sure. I have made some more changes to scale invariance; I think I need to take a break from it now. Or at least devote some time to other articles (I do know more physics than just scale invariance!). Or maybe even my thesis.
I hope you don't mind, but I took up your offer to hack out some of universality, since you said you had put it on the universality page. As well as hacking, I added the specific statement about Ising-type systems (i.e., equivalent to the liquid-gas transition). Actually, I would be interested: do you know what universality class those other examples you put in there belong to, and what their exponents are? I've left it with the fairly vague statement that they `belong to a universality class'. Do you know more?
So anyway, I hope I've struck a compromise between accessibility and technicality. Perhaps if people don't get past the introductory bullets they can just link to e.g. the universality or phase transitions pages if they want to. Or stick around, hopefully. I think I'm way too close to this now to know which.
Oh, I was kind of surprised to see your dispute with Firefox above, mainly surprised that you hadn't come across each other before since you both have N thousands of edits where N is large. --Jpod2 20:00, 28 August 2006 (UTC)
Can't give an answer about the universality question. It is now firmly on my to-do list, but may take me a very long time to get around to it. I'm trying to solve a certain set of problems; it's giving me a chance to read widely on many topics, all tangential to randomness in some way. My contact with other wikipedians is quite slim, as I edit almost exclusively in math/physics. linas 20:43, 28 August 2006 (UTC)
Well if either of us come across the details let's resolve to put it on there. I could talk to people I know who work on fluids etc, but they may not be so familiar with the CFT language. It definitely would be kind of interesting to know more about exponents for those examples. I kind of suspect that in 3d many of them may be equivalent to the Ising model; there are `not many' fixed points for scalar field theory in 3d. All the best--Jpod2 21:57, 28 August 2006 (UTC)
Try this: http://scholar.google.com/scholar?q=percolation or this: http://scholar.google.com/scholar?q=universality+scaling

Link at Modular group

Hi Linas. I just noticed that the link http://www.linas.org/math/chap-takagi/chap-takagi.html seems to be a 404. I hope I am correct in assuming that www.linas.org really is your website - if not, please accept my apologies! Madmath789 08:54, 28 August 2006 (UTC)

Yes it is; it's been replaced by a PDF: http://www.linas.org/math/chap-takagi/chap-takagi.pdf I removed the link; it's not appropriate for that article. linas 13:46, 28 August 2006 (UTC)

Fractals/QFT

Hi Linas, I saw you wrote something about fractals/Ising/QFT on your page. I have a couple of very quick questions/comments:

(1) What is the textbook you mentioned you were using? Or other references?

(2) I guess you are taking a continuum limit (in some sense) of the states of the Ising model when you speak of (the lack of) continuity of the energy as a function of position along the interval. Is that right?

(3) One thing that occurs to me is that your way of assigning states to numbers on the interval is slightly strange. Two intuitively nearby configurations sometimes are *not* nearby on the real line (e.g. 00...0 and 10...0), but sometimes they are (e.g. 0...00 and 0...01). I haven't thought this through, but are we taking a morally smooth function, energy H(s) as a function of configuration, then mapping states to the real line in a weird way f(s), and thus ending up with an apparently fractal function H(f)? Maybe the fractal part comes purely from your choice of f(s), which is in a way arbitrary.

(For example consider scalar field theory, where H[\phi]~\int d^3x (\partial\phi)^2+V(\phi). I guess if you choose a functional f[\phi] from phi(x) to the reals appropriately then probably H(f) can be fractal?)

(4) The argument apparently doesn't depend on criticality, so it is not obviously connected to scale-invariant theories per se.

(5) It is not obviously related to SLE, where an alternative representation of a state in the continuum limit of a 2d lattice model is as a (fractal) curve in the plane. SLE is not, I think, saying that the Hamiltonian as a function of configuration is fractal, though maybe I need to think about that more carefully.

I guess the point (3) is the one that is the most pertinent? All the best--Jpod2 12:11, 31 August 2006 (UTC)

I think I can rephrase more clearly (3) above. In the systems of interest, you are given a map from a classical configuration to the real numbers: the Hamiltonian, H(s). At the moment I think what you are talking about is a second map, f(s), also from states to real numbers. But since this second map isn't given in the problem, isn't it arbitrary? So when you compare H and f for a given configuration, s, any fractal behaviour in H(f) is artificial. That seems too simple, and perhaps you mean something else.... --Jpod2 12:59, 31 August 2006 (UTC)
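A minimal sketch may help fix notation for the two maps being discussed. The Python below is purely illustrative (the helper names dyadic_map and ising_energy are invented here, and the nearest-neighbour energy is only a stand-in for whatever Hamiltonian is actually intended); it implements the dyadic map f(s) and shows the point made in (3) above: flipping the first spin moves a configuration half-way across the interval, while flipping the last spin moves it by only 2^-N, even though both flips change the energy by the same amount.
<pre>
# Illustrative sketch of the dyadic map f(s) and a toy nearest-neighbour
# Ising energy H(s).  Configurations are tuples of N bits in {0, 1}.

def dyadic_map(s):
    """Map a spin configuration (tuple of 0/1) to a point in [0, 1]."""
    return sum(bit * 2.0 ** -(k + 1) for k, bit in enumerate(s))

def ising_energy(s, J=1.0):
    """Nearest-neighbour Ising energy, spins taken as +1/-1."""
    spins = [2 * b - 1 for b in s]      # 0/1 -> -1/+1
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

N = 8
base  = (0,) * N                  # 00...0
flip0 = (1,) + (0,) * (N - 1)     # 10...0  (flip the first spin)
flipN = (0,) * (N - 1) + (1,)     # 0...01  (flip the last spin)

for s in (base, flip0, flipN):
    print(s, "f(s) =", dyadic_map(s), "H(s) =", ising_energy(s))

# base and flip0 are 0.5 apart on the line although their energies differ
# by a single bond; base and flipN are only 2^-N apart, with the same
# energy difference.
</pre>
Whether this behaviour is a defect of f(s), or the "right" notion of closeness, is exactly what the rest of the thread argues about.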
Yes, it's arbitrary. Much of the fractal behaviour may possibly be attributed to the mapping itself -- but this is not clear to me. In order to be able to define mathematically rigorous integrals, one must first define a topology: that is, a covering of the space with open sets. Next, one takes the sigma algebra generated by the topology and assigns it a measure. All integrals are then performed with respect to this measure.
I'm not entirely sure what the correct topology is, but I believe that it is the product topology, for which the open sets are the cylinder sets. I believe that the cylinder set definition generalizes to lattices of arbitrary dimension (although the WP article is written for 1D). Notions of continuity, differentiability and integrability are with respect to that topology. I like to visualize it either as a fiber bundle, or as a sheaf, but neither is particularly enlightening.
OK, let me rephrase the problem this way: *If* we believe that the configuration space of the lattice models is smooth and well behaved and not fractal in any way, then the question becomes this: why does the graph of lattice configurations mapped onto the real numbers seem fractal? Should we use this to reverse the direction of study: is it possible that the correct way to study and understand fractals is to map them over to a lattice configuration, where everything is smooth, and study them there? Where does the self-similarity come from? Is it entirely an artifact of the mapping? The answer to all of these questions might in fact be easy; I just don't exactly know right now. I'm not on solid ground here. linas 19:21, 31 August 2006 (UTC)
Hi, I think at the moment it is an artifact of the mapping, if only because I can come up with other 1-1 mappings f(s) from the space of configurations to R, for which H(f) will not be fractal. For example, suppose there are n states. List out the states in order of energy (obviously there will be a degeneracy at most of the energy levels for the Ising model). Then assign the states *in that order* to the numbers 0,1...n. H as a function of *this* f on the real line is monotonic with a series of steps, right? I'm not sure why this mapping is any worse than yours.
Let me say the words "topology" again. With the p-adic mapping, I know what the open sets (the cylinder sets) correspond to on the real number line -- they correspond to (more or less, if not exactly) the "natural" topology on the real number line. All notions of continuity and integrability are with regard to the topology. If I order the states by increasing energy ... I don't know if that respects the topology. You might be right, but not obviously so. I'll try to figure this out. linas 00:05, 1 September 2006 (UTC)
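For what it is worth, the two orderings can be compared directly. The sketch below (again illustrative Python, using the same toy nearest-neighbour energy as a stand-in) lists the energies of all 2^N states first in dyadic order and then sorted by energy, as proposed just above; the first profile jumps around wildly, the second is a monotone staircase. Whether the sorted ordering respects the cylinder-set topology is left open here.
<pre>
# Contrast of the two orderings under discussion (illustrative only).
# States of an N-spin chain are enumerated in lexicographic (dyadic) order.

from itertools import product

def ising_energy(bits, J=1.0):
    spins = [2 * b - 1 for b in bits]
    return -J * sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

N = 10
states = list(product((0, 1), repeat=N))

# (a) dyadic order: the energy as a function of position on the interval
dyadic_profile = [ising_energy(s) for s in states]     # oscillates wildly

# (b) the energy-sorted order: a monotone staircase
sorted_profile = sorted(dyadic_profile)

print("dyadic order, first 16 energies:", dyadic_profile[:16])
print("energy order, first 16 energies:", sorted_profile[:16])
</pre>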
So my feeling at the moment is that the fractal is coming purely from the way you have mapped the states to R, and that there is nothing inherently fractal about the Hamiltonian, for the Ising model or field theories.
I don't know, you may be right. However, if that is the case, then we can reverse the problem: map the fractals onto lattice models, and study the lattice models to get insight into the structure of the mapping. The point here is that the Minkowski question mark function (which is what the Potts model p-adic mapping looks like) does in fact have deep connections to modular forms, and hyperbolic spaces in general. It is known that chaos and hyperbolic spaces fit like hand-in-glove: the Anosov flow on (any?) hyperbolic space is chaotic, and thus "fractal" in that sense. The earliest example of chaos is Hadamard's billiards, from 1897, wherein Hadamard showed that the motion of a free point particle on a hyperbolic surface is chaotic (actually, he showed that the trajectories are exponentially diverging). (More interesting: this was apparently popularized in a Victorian-era pop-sci book, which apparently no physicists of the time bothered to read, including Poincaré). Are hyperbolic surfaces smooth? Utterly. Is motion on them chaotic? Yes. Smoothness and fractals are flip-sides of the same coin.
Follow me, if you will, on one of my patented wacky daydreams: It has long been hypothesized by many that the confinement mechanism of QCD is a manifestation of some unknown hyperbolic structure of QCD. Namely, the quarks "can't get out" because the "edge" of the nucleon is "infinitely far away", in the same way that the edge of the Poincaré disk is infinitely far away when endowed with the Poincaré metric. We can "see" the whole disk, although the poor flatlander living in the disk can never get to the edge of his infinite universe; and so analogously with quarks. Of course, no one has been able to derive the hyperbolic geometry by starting with the second-quantized Yang-Mills equations; if they could, they'd have one of the Millennium prizes in their pocket. But ho: here we have a starting point: a crazy p-adic mapping of QFT field configurations that suggests hyperbolicity. Wow. linas 00:05, 1 September 2006 (UTC)
I could be missing something, but that's the way it seems---is there a ref?
I think I read of this in http://www.amazon.com/gp/product/0198596855/104-1505235-2326308?v=glance&n=283155 Ergodic Theory, Symbolic Dynamics, and Hyperbolic Spaces (Oxford Science Publications) (Paperback) by Tim Bedford (Editor), Michael Keane (Editor), Caroline Series (Editor), with chapter 6 or 7 reviewing the 1D Potts model. linas 00:05, 1 September 2006 (UTC)
(This is all different from SLE, where I agree a configuration of (the continuum limit of) a 2d lattice at criticality can be represented as a fractal curve in the plane. As we discussed, the latter is a statement about domain walls in the continuum limit, not the properties of the hamiltonian or partition function as a function of configuration.) All the best,--Jpod2 20:20, 31 August 2006 (UTC)

(unindent) OK. I'm not sure why the topology is relevant at the classical level. To me it is far from obvious what picks out your mapping. If you can tell me what uniquely picks it out, OK. Are you claiming that the only maps continuous with respect to the topology end up with H(f) fractal?

As for learning about fractals, I'm not sure. My instinct is that it's like having a fractal curve, x(t), mapping from R-->R^2, and saying can we learn anything about the fractal by going back to a physics theory defined on the original R. Obviously not, but maybe that is over-simplifying. I guess what I'm saying is that the fractal-ness (in this sense, as opposed to SLE) might just enter and leave purely with your mapping f(s), so the lattice theory might not really know anything about it. I guess if you want to take the ideas further you'd have to come up with a specific connection, or as I said pick out the mapping uniquely somehow.

I'm not sure how the fractal idea would work for say a field theory, as opposed to a lattice model, so can't really comment on the QCD ideas. All the best--Jpod2 08:15, 1 September 2006 (UTC)

You are uniquely resistant to new ideas.
I find that a surprising and unjustifiable comment! How many other wikipedians have taken the time to think about your idea(s) and get back to you for a technical discussion? That is a strange way to respond.
I'm sorry; you are quite right, and I did not mean to insult. I was a bit frustrated: your response seemed to be "this cannot ever be made to work", instead of "how can this be made more rigorous?" or "what can we actually construct from this circle of ideas"?
Topology is always relevant. It is impossible to define integration or continuity without topology:
Agreed. Integration isn't involved at the classical level, which was my point above. So I think what you are talking about is continuity. Fair enough. So I think your statement about mappings `respecting' the topology means that they are continuous with respect to some given topology. Is your mapping at all picked out by this requirement?
I don't know what you mean by "the classical level". The space of interest is the space of all possible configurations of field values of a lattice model. One typically integrates over some or all of this space. One typically wants one's integrals to be continuous, in that, if I integrate over a set, and then integrate over a set that is slightly (delta) larger, I expect the integral to be only epsilon larger, for a suitably small epsilon. The cylinder sets provide this notion of continuity for integrals performed over the set of states of a lattice model.
See, for example, the Vitali set, or the Banach-Tarski paradox, to see what happens if you define a topology in some willy-nilly fashion (such as ordering states by energy). Particular care needs to be taken when working in infinite-dimensional spaces; physicists often assume all operators are trace class, when many important operators are not, e.g. the baryon number (or just about any operator in QFT).
I don't know why you are resisting the p-adic mapping. It puts configuration states that are "close" next to each other, which surely is desirable. Integrating anything you care about over only a portion of the configuration space results in a continuous curve. Continuous curves are good.
Far from resisting it, I am merely pointing out that it seems arbitrary, and I can't think of any reason at the moment that makes it less so. Can you? The question for me is: why are you attached to it? I don't buy the comments about `close' states being close to each other (my comment (3) at the beginning of this thread explains why I don't).
I would be very interested if there were something inherently fractal (in the sense you mean) in the ising model. Unfortunately, for reasons I've explained, the fractal features you are referring to seem artificial. If you can explain to me why not, please do. All the best--Jpod2 14:39, 1 September 2006 (UTC)
Read the article on cylinder set more carefully. The product topology has a natural measure on it, and it's the measure appropriate for lattice problems. Alternately, get the book I mentioned above; if your library doesn't have it, then surely an inter-library loan can find it. This will give you the core to at least see the analogy I'm trying to sketch. The rest of what I write above is purely speculative, and so it's fine to be skeptical of it. I think part of what you are reacting to is the ambiguity of "what do I mean when I use the word "fractal"?" and the answer is: I don't know. As I mentioned, flows on hyperbolic surfaces are fractal, or at least ergodic. Does this mean that the space of all possible flows is somehow "fractal"? I don't know. I'm trying to understand the general structure of this (kind of a) space, but I don't have a rigorous argument one way or the other at this time. linas 15:31, 1 September 2006 (UTC)
I got interested in math in part because I got tired of the crazy, hand-wavy arguments that physicists often make. I've since learned that "truth is stranger than fiction": that the detailed, mathematically derivable and provable results are actually considerably stranger than the made-up, glossed-over world that physicists work in. To each his own, I suppose; I am happy doing this, and I am under no pressure to "publish or perish", and thus do not have to limit my reading to a narrow, focused topic. As a result, I've finally "shifted out of first gear", of running real fast while not moving far. I've gotten to see a bit of the world, and it's thrilling. Later. linas 14:03, 1 September 2006 (UTC)
Oh, last remark: as to learning about R by studying mappings to R^2 ... I've learned a lot about the real number line R recently: it's isomorphic to the Cantor set, for starters, which is the "true font" of fractal-ishness. Thus, in this particular sense, the real-number line is fractal. This can be most easily seen in the article on the j-invariant, where the real number line is wrapped around the perimeter of the disk shown there. The real number line is fractal because the rational numbers are fractal: consider, for example, a square lattice; plot the distance from the origin of the lattice points visible from the origin, as a function of the ratio p/q of the coordinates (p,q) of the point. Another example: consider the number of points in a square lattice that lie to the lower-left of a hyperbola. This function is the divisor summatory function. The rationals are fractal because the integers are fractal: the insanity of the integers, and prime numbers in particular, can be clearly seen in the pictures at Mertens function or Chebyshev function. All of modern cryptography is based on the insane and undecipherable fractalishness of the integers. Erdős had a good quote: "God may not play dice with the universe, but there's something funny going on with the prime numbers." linas 14:33, 1 September 2006 (UTC)
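The hyperbola remark can be made concrete. The small check below (illustrative Python; the helper names are invented for the example) counts the lattice points (p, q) with p, q >= 1 and pq <= x and compares the count with the divisor summatory function D(x) = sum_{n<=x} d(n); the two agree, since the points with pq = n are exactly the divisor pairs of n.
<pre>
# Check that lattice points under the hyperbola pq <= x count the
# divisor summatory function D(x) = sum_{n <= x} d(n).

def lattice_count(x):
    """Number of lattice points (p, q), p, q >= 1, with p*q <= x."""
    return sum(x // p for p in range(1, x + 1))

def divisor_sum(x):
    """Sum of the number-of-divisors function d(n) for n = 1..x."""
    d = [0] * (x + 1)
    for p in range(1, x + 1):
        for m in range(p, x + 1, p):
            d[m] += 1
    return sum(d[1:])

for x in (10, 100, 1000):
    print(x, lattice_count(x), divisor_sum(x))   # the two columns agree
</pre>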

reprise

(unindent) Well, I guess you are lucky I'm not too thin-skinned.

By classical level I mean: forget about the partition function for the moment. The question we are asking just concerns the classical Hamiltonian as a function of configuration. Unless I misunderstand what you mean, there is then no integration necessary.

Yes OK.

However, I can understand you might want f to be continuous wrt some appropriate topology. You say that for `a point on the unit interval, there is a unique corresponding configuration of the system'. Perhaps this is my problem, but I have never understood in this conversation precisely how the gaps are filled in. Take the Ising model: only 2^N numbers map to lattice states in the model. What do the other points on the interval map to?

For a lattice with N locations, consider only the binary or "dyadic" numbers with N digits. Sooner or later, we are interested in the limit of large N. Even for N as small as 32, one has that 2^32 is quite large, and might be taken as getting "close" to the continuum limit.

Is the true space of states you are talking about much bigger than the Ising model?

No, and thus I don't understand and can't answer the next two questions.

If so what is the space, and what is the topology you want to put on it? Are the open sets still cylinder sets, in some sense?

OK, assuming we have sorted out what the map f(s) is, precisely, then:

By f(s), I assume you mean "the classical Hamiltonian, as a function of the configuration"? Or, by f(s), do you mean the map that, given the 2-adic string of symbols b0b1b2...bN where b_k\in\{0,1\}, assigns to this string the rational number r=\sum_{k=0}^N b_k 2^{-k}? Let me call this second map "the dyadic map", OK? (It's essentially a variant of the Cantor function.) I assume below that the questions are directed at the second map, and answer as such. (However, after reading further ahead, I see that this p-adic map is a source of confusion, and is irrelevant to the issue, and so it perhaps should be put aside. It is not required to make a general scaling argument.)

(a) Is your f(s) continuous?

For finite N, I guess that it is trivially so, if we stick to topological continuity.

(b) Is it the unique map that is continuous?

For finite N, clearly no: I can certainly consider left or right rotations, replacing b_k by b_{k+1}, etc. Also, the map r=\sum_{k=0}^N b_k x^{-k} for just about any value of x is going to be an invertible map between binary strings and a set of real values.

(c) If (b) is true, is the topology we started with uniquely appropriate? If so, why? (d) if none of the above is true, why have we picked out this f(s)? I could get the book, but surely these are questions you've already asked yourself.

For finite N, there's no particular mystery. For the limit of N to infty ... before I give some long-winded answer, what's the question again? I don't think the question should be "what's the classical limit?"; I think the question should be "how does one integrate over a set of states?" These are distinct.

To be honest, the mapping doesn't seem very natural to me at all (see my comments in (3) at the top about nearby states). As always in physics, it could be my presuppositions that are not natural. But it still seems to me it is the mapping that is introducing the fractal features, not the lattice model itself, and you haven't really tried to convince me otherwise.

Hmm. Well, I was hoping you'd ask "what do you mean when you say fractal"? To which I was going to respond like so: Consider a small subset V of states of the total space of states U. Now consider integrating over various subsets of V. That is, I have a function that assigns a value to various subsets of V, and that function is continuous, and that function can be interpreted as an integral. Consider now a different subset W of states of the total space of states U. I claim that the value of my function over subsets of W "looks a lot like" the value of my function over subsets of V, the only difference being that it's scaled by a scaling factor. Furthermore, there are many, many such sets W, and some contain V, and therefore it deserves being labelled "self-similar". Is that clearer? I'm not talking about the "classical" problem, I really want to talk about integration. I also claim that the scaling/self-similarity has nothing to do with the p-adic mapping, although the p-adic mapping is a way of visualizing the 1-D case. However, I don't think you will accept the p-adic mapping as a legit way of visualizing things, and I don't think it's worth arguing over it, because, for now, it just clouds the issue. Instead, it seems that I will need to construct a delta-epsilon style proof, showing how V and W can be delta-close to each other (thus earning the name "similar"), and showing that there are many, many of these nested in one another (thus deserving the name "self-similar"), and finally, constructing an explicit map from V's to W's, thus showing the "group" (technically only a monoid or semigroup) of self-similarities. Oh, and finally, I have to show that this set of subsets covers all of the space, and is endowed with a natural measure, so that I regain the partition function correctly, without having gaps or holes or double-counting. I think this program can be accomplished. Does the above make sense?
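One crude way to probe the V-versus-W claim numerically, under explicit assumptions: take a Kac-style chain with coupling J(i,j) = 2^{-|i-j|} (chosen here just to have some long-range model), order the states by the dyadic coordinate, treat "integration over a subset" as a plain sum of energies over the corresponding states, and compare the cumulative profile over V = [0, 1/2] with the profile over all of U = [0, 1] sampled at half resolution. This is only an eyeball test, not the delta-epsilon argument sketched above.
<pre>
# Numerical sketch of the V / W comparison (illustrative, not a proof).
from itertools import product

def kac_energy(bits):
    """Toy long-range chain with coupling 2^-|i-j|, spins +1/-1."""
    spins = [2 * b - 1 for b in bits]
    return -sum(2.0 ** -(j - i) * spins[i] * spins[j]
                for i in range(len(spins)) for j in range(i + 1, len(spins)))

N = 12
states = list(product((0, 1), repeat=N))            # dyadic order
energies = [kac_energy(s) for s in states]

def profile(vals):
    """Cumulative 'integral' of the energies, normalised by total |energy|."""
    total = sum(abs(v) for v in vals) or 1.0
    out, acc = [], 0.0
    for v in vals:
        acc += v
        out.append(acc / total)
    return out

whole = profile(energies)                   # integral over U = [0, 1]
half  = profile(energies[: 2 ** (N - 1)])   # integral over V = [0, 1/2]

# If the claimed self-similarity holds, the two columns below should have
# roughly the same shape, up to an overall scale.
for k in range(0, 2 ** (N - 1), 2 ** (N - 4)):
    print(round(k / 2.0 ** N, 4), round(half[k], 4), round(whole[2 * k], 4))
</pre>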

I think one can probably always introduce a fractal function into a physical problem if one wants to. But in something like the SLE stuff, the fractal dimensions actually come into quantities like scaling dimensions. I can't see that here, and it seems like the mapping to the real line is just introducing an extra layer of formalism. All the best--Jpod2 17:30, 1 September 2006 (UTC)

PS I have read a little more about the cylinder sets. Considering a map from states to the integers 0...2^N, isn't pretty much any map continuous? I could be missing something, but what map isn't? Taking the discrete topology on the integers, I mean. So at that level it's not clear to me why f(s) is preferred.
Yes, but let's forget this map; it's a red herring, and obscures the issue. I believe the argument can be made without appealing to the map.

Maybe there is more subtlety if you consider an onto map to an interval of the real line, but as I said above I've not yet understood what such a map would be mapping *from*.

I hope all of the above doesn't come across as too challenging, it is difficult to get tone right sometimes. I don't mind if you answer `I don't know', or `I think' to some of the questions above.--Jpod2 17:54, 1 September 2006 (UTC)

I think we're still talking past each other. Perhaps the longer paragraph above, about U,V,W, makes more sense? I will try to formalize that paragraph, but it will take more than an hour at the keyboard to do so. linas 20:41, 1 September 2006 (UTC)
BTW, re your earlier question about exponents, try this: http://scholar.google.com/scholar?q=percolation or this: http://scholar.google.com/scholar?q=universality+scaling


Fractals summary

So let me summarise what I have been trying to say above. I was responding to the section on your userpage, which might not be quite related to what you are arguing.

Take the dyadic map, f(s), from the Ising model states to the real line. Consider the classical Hamiltonian as a function of position along this line via \tilde{H}(f)=H(s(f)), where H(s) is the Hamiltonian as a function of state, s. (Let me just call \tilde{H}(f) simply H(f), for convenience.) I thought you were claiming that this function H(f) is in some sense fractal, and related to the Minkowski question mark function. Isn't that what your userpage says?

Yes. The Hamiltonian is actually that for the Kac model, which is like the Ising model but with a long-range force. The part that looks ?-function-like is the integral over sets of states. If you graph just the "classical Hamiltonian", you get an utter mess. The problem is that classical states are sets of measure zero -- the classical problem is a red herring.
Hi. Your user page says `plot the energy along the real number line. The resulting graph looks horridly discontinuous everywhere; on closer inspection it can be recognized as a fractal'
Either you have changed what you mean since writing this, or this is very confusingly phrased. What else do you mean by energy other than the classical Hamiltonian?
Oh, sorry. If we abandon topological continuity for finite sets, and go back to high-school ideas of continuity, then a graph of the classical energy is discontinuous-everywhere;
Sure. This was my point in (1) below. Another example of us talking past each other, since I assume you were trying to address a different question when you replied to (1).
this is the case for any ordering of the states you might choose. Upon integrating, you get a smooth function whose first derivative is discontinuous everywhere. This is independent of any ordering you may choose to use. You can graph the energy or any other observable, you will have the same effect.
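The two graphs contrasted in this exchange are easy to draw for small N. The sketch below is illustrative Python: matplotlib is assumed only for display, and the long-range coupling 2^{-|i-j|} is again just a stand-in for the Kac model. It plots the per-state "classical" energy along the dyadic coordinate, which looks discontinuous everywhere, and its running integral, which is continuous but kinked at every scale.
<pre>
# Per-state energy along the dyadic coordinate, and its running integral.
import matplotlib.pyplot as plt
from itertools import product

def kac_energy(bits):
    spins = [2 * b - 1 for b in bits]
    return -sum(2.0 ** -(j - i) * spins[i] * spins[j]
                for i in range(len(spins)) for j in range(i + 1, len(spins)))

N = 12
xs = [k / 2.0 ** N for k in range(2 ** N)]
E  = [kac_energy(s) for s in product((0, 1), repeat=N)]

F, acc = [], 0.0
for e in E:                       # running "integral" of the energy
    acc += e / 2.0 ** N
    F.append(acc)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(xs, E, ',')              # looks discontinuous everywhere
ax2.plot(xs, F)                   # continuous, but kinked at every scale
ax1.set_ylabel('H(x)')
ax2.set_ylabel('running integral of H')
ax2.set_xlabel('dyadic coordinate x')
plt.show()
</pre>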


My objections/comments to that were/are:

(1) H(f) is a map from integers to R, because most of the points along the real line don't correspond to states. Whether the large-N limit changes this seems slightly subtle, but OK maybe it can be defined.

Irrelevant. Just introduce a cutoff. The map from "integers to reals" is very simply a division by 2^N, so that all states are mapped into the interval [0,1]. This makes sense because we want the total measure to be normalized to one.
Wh-a-a-at? I really must be misunderstanding you here. In what way does dividing by 2^N mean that you are mapping `onto' the unit interval? You are mapping from integers to a subset of the rationals on the unit interval. What difference does that make?
I want to able to say "the total volume of the space of all states is one". This is a convenient normalization.

(2) f(s) is arbitrary, and I could find other (topologically) continuous maps for which H(f) doesn't `look' fractal.

Except that then, you would be placing states that are unrelated to one another next to each other, which is a "bad thing". There is a certain sense in which the state 01010 is "close to" the state 01011, but is "far away" from 11101, even if all three have almost exactly the same energy. The dyadic map, aka the Cantor-function map, has the property of keeping many "nearby" states nearby, even if some get moved far away.
This is *very* hand-wavy. Keeping `many' nearby states nearby, but not all? What is the `certain' sense of nearby you mean anyway? AFAICT you just mean that states that are nearby after the dyadic mapping are nearby after the dyadic mapping. The only way given in the problem to assign a number to a state is the Hamiltonian. My point for the last N edits has been that you can choose any mapping you want, including the dyadic mapping, but it is arbitrary. I don't understand why you are so attached to it.
It's not hand-waving. There are several metrics which can be rigorously defined. The p-adic metric is a natural metric for one-dimensional evenly-spaced tilings; it generalizes to tilings in arbitrary dimensions (and even lattices in curved spaces), including regular lattices. The tiling metrics are desirable, in that when two points of a tiling are close to one another, then the energies (or other physical observables) are also close to each other. This is in the sense of high-school calculus: that when one is delta-close, the other is epsilon-close, with epsilon getting small when delta gets small. WP does not currently have an article on tiling metrics.
Again, it is not clear to me what picks out this metric uniquely, from the perspective of the Ising model. That doesn't mean it isn't an interesting object of study.

Therefore I concluded that this fractal feature was artificial. However, I think you agree with me on that. Is that so?

No.

If so your userpage stuff is a bit misleading (or at least the above is how I understood it).

Fractal scaling in lattice models has been investigated since the 1970s. There are probably thousands of papers published on the topic. The Google searches above will get you started. I'm sorry my userpage stuff seems misleading.
This I do not doubt. SLE stuff is a prime example. But just because fractal scaling exists in certain senses in lattice models doesn't really bolster your current proposal. Or if it does, please just give me the precise arguments.
SLE appears to be something discovered in 2000. Fractal scaling predates SLE by several decades. If I were to give you a precise argument, I would first need to give you a tutorial in measure theory and metrics and the like. And then I could try to make precise claims based on dimly-remembered formulas using non-standard notation. I would need to refresh my understanding of some topics I never actually studied in the first place. Or I can tell you to go off and study the classic papers on the topic, such as percolation and universality. The second approach seems easier.
I only mention SLE as an example of where fractals crop up in lattice models.

Anyway, perhaps this is not really your point. Let me know if we agree on the above while I think about what you have said (UVW).

OK. I now realize the map f(s) is a red herring that obscures the issue. I thought it was a clever way of visualizing things, but clearly that won't work for you.
It's not that it won't work for me. It's that the map does not seem to be uniquely picked out in any way. Therefore the fractal features seem artificial. It's not a question of `working'.
Note, however, that the Cantor function and/or the ?-function shows up in many physics problems, including the fractional quantum Hall effect, and the phase-locked loop (see circle map). Although the fractional quantum Hall problem is now considered to be "solved", as evidenced by the Nobel prize awarded for its solution, the phase-locked loop is not. linas 15:35, 2 September 2006 (UTC)
Again, pointing me to specific examples where fractals *do* show up doesn't bolster your current argument.
I am interested in the UVW, and have some comments, but first I really think we should clear up what your discussion on the userpage is supposed to mean, or else we certainly will be talking past each other. So what do you mean by: `plot the energy along the real number line. The resulting graph looks horridly discontinuous everywhere; on closer inspection it can be recognized as a fractal', if not what I have summarised above? All the best--Jpod2
UPDATE: Have you been referring to the expectation value of the energy all this time? Is that what I am missing? Perhaps it is, but if so it's not obvious to me why/how you would think of this as a function of state (`plot along the real line'). It would be a fractal when thought of as a function of...? If not, then I guess I am still baffled by your comments, but I've said it all above. --Jpod2 17:03, 2 September 2006 (UTC)
I can't unbaffle you by waving a magic wand. You can try drawing some of these graphs yourself. Or I could try drawing some of them, and then provide a guided tour. However, I am concerned that you wouldn't find this satisfying, because you are interested in the physical theory, and the relation to CFT and QFT, whereas I'm interested in the structure of the real numbers (in the same way that number theorists are interested in the structure of the integers) and the structure of spaces used to understand the reals, and the ability to integrate and differentiate on such spaces. I want to know why the divisor summatory function has the shape that it has. I want to know how to integrate turbulent differential equations. The p-adic mapping of the 1-D Ising, Potts and Kac models is interesting to me because it is self-similar, and not because it's "good physics". I want to understand why it has the shape that it has, and how to characterize that shape. The fact that such an exercise doesn't result in "good lattice model physics" is entirely irrelevant to the quest. linas 19:07, 2 September 2006 (UTC)
Hi. A couple of short comments inserted above; most below. Part of the confusion might be that we are interested in different things (in this thread). But you got me originally interested by saying that lattice models were `fractal at their very core', and so I naturally wanted to know if fractals were physically important *in the sense you mean* (no need to refer to physical situations where fractals *do* show up). If your interest in this instance is not so much to explain anything about the physics of the Ising model, fair enough. (In another mode, I too am interested in the real numbers, just not here.)
However, part of the reason for my confusion is that I find it difficult to pin down exactly what you mean by energy, and exactly what you mean by fractal. And I don't think it's so much a lack of background on my part, though it may be that I haven't asked the questions clearly enough.
There are two distinct concepts of energy. The "classical" energy is the energy of any one given field configuration. In the limit of N to infty, one single field configuration has a measure of zero, and thus, in and of itself, is "difficult to work with". So I've been avoiding using the classical energy. The other energy is the "ensemble energy", the energy integrated over some subset of the total set of states of the system. This energy is finite as long as the size of the subset is finite, even in the limit of N to infty. The "classical energy" of a given state can be regained by shrinking the size of the subset to zero (and dividing by the size of the subset, so that in the limit, one has a finite value). In this sense, the classical energy is the "derivative" of the ensemble energy, although one has to be careful, because taking the derivative requires that the limits be well-defined, convergent, etc. In infinite-dimensional spaces (which is what the lattice is, when N to infty) there are "well known" difficulties and pitfalls in defining convergence and derivatives. For the space of all possible states, it's even worse, because the set of configurations is a power set, of size 2^N, and has the cardinality of the continuum when N to infty. Many theorems from topology and functional analysis that work on spaces that are countable don't work on the continuum, and vice versa.
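In symbols, and only as a paraphrase of the paragraph above: writing \mu for the (cylinder-set) measure on the configuration space U, the ensemble energy of a measurable subset A of U would be E(A) = \int_A H(s)\, d\mu(s), and the classical energy would be recovered, where the limit exists, as H(s) = \lim_{A \downarrow \{s\}} E(A)/\mu(A), with the limit running over a shrinking family of cylinder sets containing s. The caveats about convergence in infinite-dimensional spaces apply to exactly this limit.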
(1) Is your dyadic mapping example ultimately going to demonstrate the fractal features you are trying to explain? If not, let us leave it. If yes, and you just don't feel I understand it, or it isn't working on me, let us pursue it.
Yes. A year ago, I'd been playing with free groups in two letters. One afternoon, I had been reading about the Potts model, and, noting it was representable by a string in two letters, decided to make some graphs, just for the heck of it. I was surprised by what I saw.
(2) Assuming the latter, I want to know precisely in what sense the dyadic (or other map which will result in fractal behaviour) mapping is picked out. If the reason is that some particular metric seems natural to you, fine.
I don't understand what you are asking. The p-adic maps are "unique", except when p=2, in which case, there are two: the 2-adic map, and the Minkowski question mark function. The question-mark function does not have a 3-adic analog, or any p-adic analog for any p other than 2. Now, once you have a p-adic map, you can scale, distort, twist away as you will; see de Rham curve for some fairly general examples of what can be done with the 2-adic map.
(3) But is there any physical relevance to it? Or can I paraphrase by "(a) I have this interesting (and natural) dyadic mapping from a lattice to R. (b) If I choose a function from a set of functions with certain properties on the lattice (included in which will be the Ising model Hamiltonian), then I can deduce some fractal properties of the function with respect to the dyadic mapping". If so, perhaps you are learning more about dyadic mappings than the Ising model. I guess it is a matter of perspective.
Physical relevance in what sense? I thought many of the 1D lattice models were "solved" from the physics point of view. Anyway, I'd been studying dyadic mappings for a while, and was surprised by what I saw. I would like to characterize what I saw better. The dyadic map is just a map; one may map many things with it. The result is an outcome of both the dyadic map and the thing being mapped.
(4) When you say you want to plot the energy as a function of state along the real line, I gather you don't mean plot the Hamiltonian as a function of state along the real line, but rather an integration of this over some subset of states, where the subset is defined using the dyadic mapping (or whatever mapping you have *decided* to investigate). Is that correct?
Answered above, before (1).
I think my conclusion is that perhaps one can learn something here about the dyadic mapping, or lattices, or both. But one is not necessarily learning about the Ising model, because you would not find the same fractal features if the construction were not defined by the dyadic mapping. Of course, you may argue that you would. But I don't think we ever distinguished this mapping from my mapping with states in order of energy, for example. That is also topologically continuous. And one can perform what I think would be an appropriate integral. And there will be nothing fractal there.
Gadzooks. You are talking about a topological vector space with a cardinality of 2^{\aleph_0} and you are making it out like it's a walk in the park. Mere Hilbert spaces and Banach spaces have countably infinite dimension \aleph_0=\omega, and these are known to be beset by problems; see, for example, Hilbert cube, weak convergence, Fréchet derivative, Sobolev space, trace class, nuclear operator. The configuration space of a lattice model is a whole 'nother infinity larger than the countable infinity!! Almost nothing that you can intuit from the world of three dimensions applies to the world of something with the dimensionality of the continuum!
To sort the states in order of increasing energy, for N=32, you would need a small supercomputer. To sort the states by energy for N=64, you'd need a computer that's unlikely to exist in our lifetime. For N=128, you'd need a vivid imagination about space-alien technology. Thus, however it is that you are ordering things on this space, it had better not be algorithmic. You need a different approach to take the N to infty limit.
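A rough back-of-envelope for that sorting claim, assuming 8 bytes per stored energy and ignoring the cost of even enumerating the states:
<pre>
# Memory just to hold one 8-byte energy per state, before any sorting.
for N in (32, 64, 128):
    n_states = 2 ** N
    tib = 8 * n_states / 2.0 ** 40
    print("N = %3d: %g states, about %g TiB of energies" % (N, n_states, tib))
</pre>
For N = 32 this is about 32 GiB; for N = 64 it is already on the order of 10^8 TiB.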
PS no need for the comments about tutorials on measure theory and metrics---since I don't think you have established the limit of my knowledge in these areas. All the best--Jpod2 23:32, 2 September 2006 (UTC)
Sorry, it's clear from our conversations that our vocabularies are different. I'm not saying these are hard concepts; I'm saying you are not familiar with them. linas 01:00, 3 September 2006 (UTC)
I don't think that's the case. But my point was that your tone descends a little low sometimes---could I have mentioned giving you a tutorial on CFT? Probably many times, but I think it would have seemed rude. Well, we've already established that I am suitably thick-skinned, so not to worry.
I think the problem is more that you are misunderstanding my questions, or I am not phrasing them well enough... Perhaps I ought to just re-emphasise one thing. You first mentioned that the Potts model was `fractal at its very core'. Your userpage section is entitled "the fractal life of quantum field theory". And you have speculated that these ideas might explain *physical phenomena* (such as, ambitiously, quark confinement).
But recently you seem to have backtracked somewhat from the claim that these specific ideas about fractals and lattices (again, no need to refer to known examples where fractals *do* show up) can tell you something about any physical quantity in any physical theory. Yes, of course the 1D ising model is solved, but your claim was that these ideas were of much more general relevance, I believe. Now I don't think you are so sure of that, or at least your emphasis has quite shifted over the course of this conversation. (But anyway, whether it is solved or not is completely irrelevant---if we could re-understand some of its properties in terms of say some fractal dimension that would be interesting. This is what happens in SLE, for example.)
But, as you emphasise just above, you are now trying to learn about the real numbers, irrespective of `good physics'---fair enough. And as far as I can tell, you are actually interested in comparing the properties of pairs of maps from a lattice to R (i.e. one map being dyadic or p-adic, the other map being something related to (an integral over states of) the Hamiltonian).
As I have said many times, you can compare those maps if you want to, or you could just choose two other maps to compare. But let us not worry too much about it, because of course it won't tell me anything physical about the Ising model.
And neither will the dyadic mapping, though I agree that it may well tell you something interesting about dyadic mappings. I completely agree that finding interesting stuff when you look at these graphs is *interesting* and not necessarily expected. It's just not physically relevant for the Ising model, and that's how you *advertised* all this stuff. Maybe that's what you meant all along, but my feeling is you initially thought that the dyadic mapping and the associated fractal behaviour was of more physical significance. All the best --Jpod2 10:00, 3 September 2006 (UTC)

New fractal section

I am annoyed because you are not making any effort to understand what I'm saying. It seems like you would rather argue and talk than actually think or understand. I've put a lot of effort into answering your questions, and you rather glibly la-dee-da through the conversation. You are accusing me of some sort of false advertising, and that's just bullshit: the problem is that you don't understand the topic, you've never seen the theory behind it, and you'd rather just brush it all off like it isn't happening or it isn't true. Your failings are not my fault. linas 16:26, 3 September 2006 (UTC)

Dear Linas, if I were you I'd calm down and read through the whole conversation. I think the fact that I have patently made a lot of effort, and have been trying to think about and reply to your specific points is evidence that your statements above misrepresent our discussion. If you'd rather someone just say `yeah, fractals, QFT, cool', well probably I am not the person.
But please do read what I've said---I'm not accusing you of false advertising, that would indeed be bullshit. What I said was that I think you have re-evaluated the purpose of this work as we have discussed it, i.e. since you `advertised' it. If that's so, it wouldn't hurt to admit it---if not, then perhaps I got the wrong end of the stick in the first place. But, compare your userpage discussion (which I believe emphasises physical importance) to what you say above:
"whereas I'm interested in the structure of the real numbers (in the same way that number theorists are interested in the structure of the integers) and the structure of spaces used to understand the reals, and the ability to integrate and differentiate on such spaces. I want to know why the divisor summatory function has the shape that it has. I want to know how to integrate turbulent differential equations. The p-adic mapping of the 1-D Ising, Potts and Kac models is interesting to me because it is self-similar, and not because it's "good physics"."
I actually also would like to think more about comparing the p-adic mapping of a lattice to the `hamiltonian mapping', which is ultimately what I think you're doing. I do believe it is of interest, and I for one don't think the self-similarity is trivially expected. There are interesting questions to be asked about why that happens, right? Can we at least agree on this paragraph?
But what I don't see at the moment is how it can tell us about the Ising model, or QFT, which is what I think you originally thought. Maybe I'm wrong, but your very first reply way above seems to indicate that you hadn't fully appreciated that the properties of the p-adic map (as opposed to other possible mappings) are so crucial here. Certainly this isn't emphasised in your original thoughts on the userpage.
Look, I'm not trying to have a go at you; I often find myself reinterpreting what I thought I knew. That's the point of a physics discussion, right? Just don't take it so personally. I don't think there are all that many wikipedians it is possible to discuss physics and maths with properly, so it's not worth falling out over, IMO. --Jpod2 16:56, 3 September 2006 (UTC)
PS Anyone looking at your discussion page will think we are both somewhat obsessed with this topic:)--Jpod2 17:30, 3 September 2006 (UTC)

Another try

I've moved this section of the talk page to User:Linas/Lattice models so as to make it more self-contained. Please edit/reply on that page as needed. linas 18:07, 4 September 2006 (UTC)

Exponents and Universality classes

Yes, you have kickstarted me into looking into which of those things on the universality page are in which universality classes. I will look into it more at some future point. It would be interesting to know what are the CFT descriptions of each of them, and what the exponents are.

I think in 3d many of them will be related to the Wilson-Fisher fixed point (i.e., the Ising model), but down in d=2 there are infinitely many fixed points in the RG flow of a single scalar field (equivalent to the minimal models). So something to look into.

UPDATE: It looks like Kadanoff (what a surprise) has a big paper on the avalanches. Looking through the citations, Pietronero seems to like applying RG techniques to various different phenomena (internet, forest fires, sand piles). I don't have a Phys. Rev. subscription here, though, so I can't read all of them now. Looking at one of Pietronero's, the RG techniques seem a little ad hoc, though they do generate some values for the critical exponents. It would be quite nice to work out what conformal field theory underlies these phenomena, but that doesn't seem to be the language they use.

Great! Most of the universality work was done before conformal field theory existed; I was under the impression that once CFT arrived, it provided a foundation that was previously lacking. But really, this is at the edge of what I know. linas 15:57, 2 September 2006 (UTC)
Well, I think for a long time there has been the kind of `real space' RG (like blocking of lattices etc) most popular in condensed matter, and on the other hand the Gell-Mann-Low RG used in field theory. I think the latter was first used by Wilson and Fisher in the condensed matter theory context, so the field theory approach in condensed matter does go back quite a long way.
Anyway, I am rambling. Perhaps I shall email Pietronero as he might be able to give/point me towards a general overview of the (conformal) field theory description of these various critical phenomena.--Jpod2 16:10, 2 September 2006 (UTC)
The fractal issue (or its significance) we may agree to differ on. On this topic I believe we would both be interested in classifying the universality classes of the phenomena you posted on scale invariance and universality, and I would certainly also be interested in understanding the underlying field theory.
So anyway, no hard feelings on the above (perhaps it will carry on, but I feel we weren't progressing...) and I will get back to you when I find something interesting about this stuff.
As an aside, I think scalar field theory needs a complete overhaul---both the classical (which is essentially non-existent) and quantum theory (which is just weak). It'll be my next work in progress. I'll get going on it sometime. All the best --Jpod2 23:41, 2 September 2006 (UTC)
Have emailed. Will let you know of anything interesting I learn. --Jpod2 13:04, 3 September 2006 (UTC)

TechnoSphere

Greetings. In this edit you removed three categories and added Category:Museums. I'm puzzled by both the removal and addition of these categories. Could you help me understand your edit? Thanks. --Rkitko 20:51, 2 September 2006 (UTC)

It's got nothing to do with fractals, dynamical systems, etc. It's some sort of online game or exhibit. It was utterly mis-categorized, and museums seemed to be a good match. Maybe category:computer games or category:science websites or something. linas 20:53, 2 September 2006 (UTC)
I may not know that much about fractals, but when I wrote the article, I sourced materials from the creators of the program that included the phrase "fractal landscape". The 3D environment appeared to be created with some element of fractals, which is why I included that category. It wasn't really a game, though many participants viewed it that way. Largely, it was an early experiment in artificial life hosted on a university computer, which is why I included that category. See the comment on chaos theory in the introductory paragraph as well. A museum it is not. It was later introduced as a museum exhibit, but that was not the goal. Correct me on any of my assumptions here, but I placed those categories there with the little I know about the subjects. How exactly was it mis-categorized? Thanks. --Rkitko 21:04, 2 September 2006 (UTC)
Please review the other articles in the categories to which you want to add your article. It should become quite obvious that all of the other articles have to do with the mathematics of fractals, etc. The article on TechnoSphere is not about the mathematics of fractals, etc., and therefore rather clearly "does not belong". The opening sentence makes this clear: TechnoSphere was an online digital environment... If you don't like the category for museums, or one of its sub-cats, then other suitable categories for it might be the category for blogs, the category for scientific computer software, the category for online entertainment, or the category for online digital media presentations. Please take the effort to find the correct categorization for your article. Trying to force-jam it into a place where it clearly does not belong will only cause your readers to scratch their heads when looking for other articles of a similar nature. linas 21:21, 2 September 2006 (UTC)
I admit I never understood the purpose of categories. I wasn't aware that all category contents had to be of a similar nature. I've always used categories as a top-down approach (i.e., placing it there, other users could explore where else fractals can be found or what they're about). And so by placing it in the museums category, you did something similar to what I did--none of the other articles in that category appear to be about this same subject. I was just confused about both the subtraction and addition of categories at the same time. Your reasoning makes sense for removal of those categories, but to be fair (and I don't remember and have no way to check) the categories I placed on that page may have once included fewer articles and maybe others like this one; I don't remember the fractals category being so large before. In that case, one could be right in placing an article in a category at one point in time and as it evolves it could no longer be appropriate. It's frustrating that there isn't a single history showing when articles were added to a category. Otherwise, please assume good faith. I had no intention of force-jamming anything. I'll try to find a more appropriate category for this article. Thanks for your patience with a (relatively) newbie Wikipedian. --Rkitko 22:02, 2 September 2006 (UTC)
Try posting a question at Wikipedia:Reference desk/Science asking how an article like this could be categorized. linas 22:42, 2 September 2006 (UTC)

[edit] Bernoulli polynomials

Hello. Could you comment at talk:Bernoulli polynomials? Michael Hardy 19:24, 7 September 2006 (UTC)

[edit] Divisor function

Hi, Linas. Discussion on talk:divisor function revealed that the otherwise excellent plots of the divisor and sigma functions which you created suffer from some off-by-one bug. Do you think you could fix it? Thanks -- EJ 01:57, 13 September 2006 (UTC)
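For reference, a minimal sketch of the quantities being plotted, assuming the usual conventions (sigma_0(n) counts the divisors of n, sigma_1(n) sums them); the off-by-one pitfall is usually a matter of whether the divisors 1 and n themselves get counted:

    def sigma(n, k=0):
        # sum of d**k over all divisors d of n, including 1 and n itself
        return sum(d ** k for d in range(1, n + 1) if n % d == 0)

    # sigma(n, 0) is the number-of-divisors function, sigma(n, 1) the sum-of-divisors function
    print([sigma(n, 0) for n in range(1, 11)])   # 1, 2, 2, 3, 2, 4, 2, 4, 3, 4
    print([sigma(n, 1) for n in range(1, 11)])   # 1, 3, 4, 7, 6, 12, 8, 15, 13, 18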

Thank you for your update and fix; I have replaced the old files, now sooner or later some admin will delete the temporary ones. Bye paulatz 140.105.134.1 07:01, 13 September 2006 (UTC)

[edit] September Esperanza Newsletter

Program Feature: Barnstar Brigade
Here in Wikipedia there are hundreds of wikipedians whose work and efforts go unappreciated. One occasionally comes across editors who have thousands of good edits, but because they may not get around as much as others, their contributions and hard work often go unnoticed. As Esperanzians we can help to make people feel appreciated, be it by some kind words or the awarding of a Barnstar. This is where the Barnstar Brigade comes in. The object of this program is to seek out the people who deserve a Barnstar, and help them feel appreciated. With your help, we can recognize more dedicated editors!
What's New?
September elections are upon us! Anyone wishing to be a part of the Advisory Council may list themselves as a candidate from 18 September until 24 September, with the voting taking place from 25 September to 30 September. Those who wish to help with the election staff should also list themselves!
Appreciation Week, a program currently in development, now has its own subpage! Share your good ideas on how to make it awesome there!
The Esperanza front page has been redesigned! Many thanks to all who worked hard on it.
Many thanks to MiszaBot, courtesy of Misza13, for delivering the newsletter.
  1. The proposals page has been updated, with some proposals being archived.
  2. Since Appreciation Week, a program in development, is getting lots of good ideas, it now has its own subpage.
  3. The September 2006 Council elections will open for nominations on 18 September 2006. The voting will run from 25 September 2006 until 30 September 2006. If you wish to be a candidate or a member of the elections staff, please list yourself!
  4. The new Esperanza front page design has been put up - many thanks to all who worked on it!
  5. TangoTango has written a script for a bot that will list new members of Esperanza, which will help those who welcome new Esperanzians greatly!
Signed...
Natalya, Banes, Celestianpower, EWS23, FireFox, Freakofnurture, and Titoxd
04:04, 18 September 2006 (UTC)
Although having the newsletter appear on everyone's userpage is desired, this may not be ideal for everyone. If, in the future, you wish to receive a link to the newsletter, rather than the newsletter itself, you may add yourself to Wikipedia:Esperanza/Newsletter/Opt Out List.

[edit] Math nav aids

I was reading through WP:Math project archives and came across a post of yours in a section about navigational aids. Have you ever seen some of the German math articles, like de:Gruppentheorie? I'm not sure if you know German, but you can probably get the gist. The top box is a set of core fields in math, the next box deals with things more general than groups, the third box deals with things more specific. I think that box is a bit large, but I like the idea of it. I would enjoy setting up a project to help design some nav aids for our main math pages. What do you think about a project with this goal? - grubber 00:02, 26 September 2006 (UTC)

Hmm. If an article is well-written, then it should not need a nav box. I dislike nav boxes for several reasons -- they take up space, they cater to those with short attention spans, and finally, they are usually incorrect. Take, for example, the navbox in de:Gruppentheorie as a good illustration of what's wrong with them. First: the ordering Mathematik -> Abstrakte Algebra -> Gruppentheorie implies that mathematics has only two branches, abstract algebra and group theory, which is absurd. Next, it states that a group is a special case of Magma (Axiom E) -> Halbgruppe (EA) -> Monoid (EAN), which is fine, except for three things: (1) there is no reason the article couldn't state this explicitly, (2) it does not explain what axioms E, A and N are; indeed Axiom A is something very different, (3) there are other ways of axiomatically describing a group: instead of starting with a magma, one might start with a free group, and impose a presentation; and so one gets a different hierarchy of axioms. (The point is that this hierarchy is not unique.) The nav box ends with a list of examples of groups, including rings, fields, etc. Again, the article could mention this in-line. The list of examples is arguably incomplete: there aren't just finite and infinite groups. Lie groups are better thought of as continuous groups. Continuous groups can be obtained from free groups by imposing a very special set of integrability constraints. Finite-dimensional Lie groups are very, very different from the infinite-dimensional ones. Then one has things that cross boundaries or demonstrate applications, such as hyperbolic groups, or loop groups, or the group of continuous functions on a manifold, which is infinite-dimensional. Somewhere in there are non-commutative geometry and K-theory and quantum groups, which deal with the structure of infinite-dimensional groups. Is there a reason why the German nav panel failed to mention these important topics, which, oh by the way, appeared in one form or another in the vitae of just about every Fields medalist this year? Honestly, I don't think mathematics is cut-and-dried the way the nav-box would make you believe: that nav box looks like something from a textbook from the 1970s: dated, obsolete. linas 15:52, 26 September 2006 (UTC)
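For concreteness, here is my reading of the two inequivalent hierarchies alluded to above; the expansion of the letters E, A, N below is my guess at the German convention, since the box itself never says:

    Magma: a set M with a closed binary operation M \times M \to M (axiom E, presumably Existenz/closure)
    Halbgruppe (semigroup): E plus associativity, (a \cdot b) \cdot c = a \cdot (b \cdot c) (axiom A)
    Monoid: E, A plus a neutral element e with e \cdot a = a \cdot e = a (axiom N)
    Gruppe (group): E, A, N plus inverses, a \cdot a^{-1} = a^{-1} \cdot a = e

versus the presentation-style description, which never mentions magmas or monoids at all: a group with generating set S and relations R is the quotient G \cong F(S) / \langle\langle R \rangle\rangle of the free group F(S) by the normal closure of the relations.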
I really appreciate your comments. I think you bring up some valid points. I don't advocate a large nav box, and I think it could be tricky to put together a useful hierarchy. The points you make about the German version are important. That's why I'm considering starting a project for that purpose. It needs to be planned, organized, and debated. An organic nav box would quickly get unruly. But, I think the value is this: a person who sees the concept of "field" for the first time in his linear algebra book would get to jump into abstract algebra with a decent picture of where he is entering. He would see that a field is a ring, which is a group; he would see links to discussions on finite fields and field extensions. He may not care about groups or extensions, but at least he gets a feel for the land as he begins. Also, for a casual mathematician, it gives a structured way of browsing the field -- letting him move "vertically" and "horizontally". It will take a bit of work to get it right, but I think it could be a useful (and maybe fun) exercise. - grubber 17:28, 26 September 2006 (UTC)
The right place to discuss this is on the talk pages of WP:WPM. Anyway, I still think that it's easier to craft the free-form text of an article to express some idea (such as a hierarchy, a set of connections) than it is to try to condense the concepts into a handful of keywords in a nav-box. Students who are cramming for exams may find it useful to create paper diagrams/cheat sheets organized like this. I just don't see a student learning device as appropriate for WP. linas 01:26, 27 September 2006 (UTC)
I was just trying to bounce the idea off a couple people that had participated in the chat before, particularly since I'm new to the discussion. Thanks for your comments. I will take it there. - grubber 01:36, 27 September 2006 (UTC)
Hey, you're at UT! I was going to go to some lectures tomorrow at RLM 9.166, 12 noon, "differential cohomology"; perhaps I could say hello in the lobby beforehand or something. I'll look like not-a-student-who-doesn't-belong-here. linas 01:50, 27 September 2006 (UTC)
Haha, if you'd like to say hi, we could probably do that. I won't be attending that lecture, but I will be in RLM at noon. - grubber 04:51, 27 September 2006 (UTC)

I'll be in the lobby in black jeans and a white striped shirt shortly before noon. linas 15:06, 27 September 2006 (UTC)

I'm sorry I missed you. I was not able to get on here between last night and this afternoon. Another time, it would be fun to meet a WP-ian! - grubber 19:08, 27 September 2006 (UTC)

[edit] arrow of time, dyn sys

Hi Linas. I think you wrote most of this section on the entropy/arrow of time page. Is that right?

If so I'd be interested in understanding these systems and their significance better. The following statements seem like a good starting point:

"In the case of the Baker's map, it can be shown that several unique and inequivalent diagonalizations or bases exist, each with a different set of eigenvalues. It is this phenomenon that can be offered as an "explanation" for the arrow of time."

In particular:

"That is, although the iterated, discrete-time system is explicitly time-symmetric, the transfer operator is not."

Is there a heuristic way to understand better what's going on here? All the best--Jpod2 11:58, 27 September 2006 (UTC)

The references given at Baker's map are the canonical refs I know of for these claims. The last, the book by Driebe, may be the easiest to approach. I think that all three of these refs then point to some additional "more popular/more accessible/wider general survey" type books that establish this claim more broadly (and perhaps less rigorously), rather than for just the baker's map. On a related topic, I've been updating fdist.pdf to try to make the needed connections. linas 15:04, 27 September 2006 (UTC)
Hi, thanks for pointing out those refs. I will try to have a look at one of them. I can see from the above you are busy going to maths lectures, but I would be interested in asking you more about this. I don't have an intuitive feel for what the claim actually *is*.
So for the Baker's map... What exactly is the discrete-time system we are talking about, and in what sense is it explicitly time symmetric? --Jpod2 17:43, 27 September 2006 (UTC)
I don't get the question. The baker's map is the discrete time system. Just about anything can be set up to be a discrete-time system. More restricted are the measure-preserving dynamical systems. Often, iterated functions are studied. If f(x) is a function that can be iterated, then one may define the discrete time system phi(x,t) to be f^t(x), where exponentiation means function composition (and thus t can only be an integer). The Baker's map is explicitly invertible, so you can iterate its inverse as well ("go back in time"). Time-reversal symmetry just means that a map is one-to-one and onto, so its inverse exists. You don't lose data by iterating, and you can always get back to the initial state unambiguously by iterating backwards.
The claim is that normally, time-reversible processes have unitary operators associated with them (think college quantum mechanics), while time-irreversible systems (dissipative systems) cannot be unitary, since there is a loss of information in a dissipative system. For a discrete-time system, the time evolution is given by (iterating) the transfer operator. (Again, by analogy to QM, where exp(iH) is "iterated" to the "t'th power" to give exp(iHt) as the unitary time evolution operator.) The transfer operator for the Baker's map has different actions on different function spaces. On the set of square integrable functions, it's unitary, as one might expect, since the Baker's map is "microscopically" time-reversible. But on another set of states (polynomial states), the transfer operator has a discrete spectrum, and is dissipative. The physical argument being advanced is that square-integrable functions are nasty in that they allow sharp corners (think triangle wave) and discontinuities (think square wave), whereas nature (fluids, gases, etc.) is never "sharp" like this. If one eliminates the functions that are arbitrarily sharp, one is left with a function space on which the time-evolution operator acts in a dissipative fashion: ergo, microscopically, the system is explicitly time reversible, yet macroscopically it's dissipative. I've heard that this argument can be extended to any Axiom A system. I understand the mathematics behind the argument (it's not particularly hard or deep); I'm still struggling with reconciling the physical interpretation into the grander scheme of things. (e.g. reconciling this with the idea of a wandering set). linas 03:23, 28 September 2006 (UTC)
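A minimal numerical sketch of the invertibility point, assuming the standard Baker's map x -> 2x - \lfloor 2x \rfloor, y -> (y + \lfloor 2x \rfloor)/2 (the same map written out just below); iterating forward and then backward returns the starting point, so no information is lost:

    import math

    def baker(x, y):
        # forward map: x -> 2x mod 1, y -> (y + floor(2x))/2
        k = math.floor(2 * x)      # 0 on the left half of the square, 1 on the right half
        return 2 * x - k, (y + k) / 2

    def baker_inverse(x, y):
        # inverse map: the same formula with the roles of x and y interchanged
        k = math.floor(2 * y)
        return (x + k) / 2, 2 * y - k

    pt = (0.1234, 0.5678)
    orig = pt
    for _ in range(10):
        pt = baker(*pt)
    for _ in range(10):
        pt = baker_inverse(*pt)
    print(orig, pt)    # identical up to floating-point rounding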

Hi Linas, thanks for that. Much of what you say I understood, but I want to be completely explicit. So, our discrete time system in the Baker's map case is:

x_t = 2x_{t-1} - \lfloor 2x_{t-1} \rfloor
y_t = \frac{y_{t-1}}{2} + \frac{\lfloor 2x_{t-1} \rfloor}{2}

Is that right?

Yes.

On the other hand, given a differential equation for a function x(t), time reversal symmetry implies that given a solution x_0(t), then x_0(-t) is also a solution (possibly with some representation of T acting on x_0). Is that the case here?

Yes.

Surely time reversal symmetry is more than being able to run the map in reverse.

Assuming that I am missing something there, I'll carry on. So I guess what I am wondering is what these ideas can reveal to us about physical cases (after all, the section is in a physics article on the arrow of time). I think what you say above has confirmed what I thought the analogy is: the evolution of x(t) is assumed to be reversible, and is analogous to the classical motion of a particle. Is that right?

Yes.

But what is the physical interpretation of the functions f(x,t)?

f(x,t) is supposed to be interpreted as a density, for example, the air pressure as a function of (x,t) or the salinity as a function of (x,t). Typically one asks about what happens if f(x,t) is highly non-uniform at time zero, how does it evolve to the equilibrium state where the fluid/gas is all mixed up and uniform in distribution? (for the hard problems one asks "what the hell *is* the equilibrium state?")

I guess one analogy would be with the time-evolution of a wave-function, where you would naturally choose f\in L^2. But then the transfer operator is unitary, and there are no surprises! In fact, you have even proved that the time evolution *is* unitary.

Well, it was not me who did this. :-) I think Nobelist Ilya Prigogine is one of the figureheads of this line of thinking, he has a book in french "Les Lois du Chaos" that is pop-lit that advances these ideas. (You will notice also that in later life he was here in Austin, so I may be showing home-town bias).

But, I think your interpretation is that f(x,t) should be something like a classical particle density, and the argument is then that it is natural to choose the polynomial states---thus deriving some kind of diffusion equation.

Yes.

I guess what I am asking is

(1) I'm not sure precisely in what sense the discrete map is time-reversal invariant

Try computing the negative-time Baker's map! Surely this will be an instructive exercise!

(2) what is the physical situation we are seeking to shed light on?

The time evolution of densities. More generally, the master equation.

I feel I am missing something important, but perhaps there could be a little more explicit discussion of this in the article?--Jpod2 11:29, 28 September 2006 (UTC)

Ugh. If I have a moment I'll look. I'm swamped with obligations and desires. linas 15:06, 28 September 2006 (UTC)
BTW, there is a way of studying the transition to chaos and dissipative systems, without actually appealing to any one given dynamical system. There are sufficiently many examples of these that the general ideas are studied in rigged Hilbert spaces. linas 15:44, 28 September 2006 (UTC)
"Well, it was not me who did this. :-) I think Nobelist Ilya Prigogine is one of the figureheads of this line of thinking, he has a book in french "Les Lois du Chaos" that is pop-lit that advances these ideas. (You will notice also that in later life he was here in Austin, so I may be showing home-town bias)."
Right---I should say `one has proved' :) Btw, a friend of mine has just joined the math dept at Austin, I didn't realise you lived there.
But, my question here was, why is the interpretation not as a wavefunction, but instead as a classical density?
Because nothing has been quantized, and the system being studied is classical.
I obviously should read more about what the transfer operator *does*, but I don't quite understand yet the rules of the game, here. Anyway, that's my problem....
The transfer operator is the time-evolution operator for densities. Imagine a dust of points laid out with density rho(x), and the time evolution of the position of each speck of dust is given by x(t). (Now, x(t) may be a solution of a discrete-time difference equation or a continuous-time differential equation, but it doesn't have to be: x(t) may be "anything"; the transfer operator can still be defined.) Work out how rho evolves over time. The transfer operator is the thing that evolves rho forward in time. It's an "operator" because it maps functions (at time 0) to functions (at time t). It's an "operator" also because it's clearly linear.
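A crude Monte Carlo sketch of that picture, not of the operator itself: push a "dust" of points forward under the doubling map x -> 2x mod 1 (just the x-part of the Baker's map, chosen here only to keep the sketch one-dimensional) and histogram the density as it flattens toward the uniform equilibrium:

    import random

    N = 100000
    dust = [random.betavariate(2, 5) for _ in range(N)]   # a lumpy initial density rho(x)

    def histogram(points, bins=10):
        counts = [0] * bins
        for p in points:
            counts[min(int(p * bins), bins - 1)] += 1
        return [round(c / len(points), 3) for c in counts]

    print("t=0", histogram(dust))
    for t in range(1, 6):
        dust = [(2 * x) % 1.0 for x in dust]
        print("t=%d" % t, histogram(dust))
    # the histograms approach the uniform value 0.1 per bin; this forward
    # relaxation of rho is what the transfer operator encodes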
Good, this is also the conclusion I had reached.
"Try computing the negative-time Baker's map! Surely this will be an instructive exercise!"
I am obviously missing something, but isn't the inverse computed on the Baker's map page here?
Yes.
Good
It's clearly different from the forward evolution... I am confused.
If you take a plain-old diffeq, and replace +t by -t, you don't get back the same equation, you get back a different equation.
Of course, naturally
What exactly do we mean by time-reversal invariance for a difference equation?
If x(t) is a solution to the forward-time equation, then x(-t) is the solution to the reverse-time equation. (But the Baker's map is not a difference equation; it's an iterated function.)
I thought we were thinking of it here as defining a difference equation (that I wrote above)
Sorry, if I said or implied "difference equation", that was unintentional. One could study difference equations, but the baker's map is an iterated function, not a difference equation.
Are you sure that given x(t) as a solution you also want x(-t) to be a solution?
No, I don't want that. The situation is just like that for plain-old everyday differential equations: if x(t) is a solution to a diffeq, then x(-t) usually is not. However, x(-t) is a solution to the diffeq for which t has been replaced by -t. This is what "time-reversibility" means. Most equations in physics are time-reversible, and *all* of the standard, textbook quantum equations are time-reversible (which is why the phrase "microscopically reversible" is used: the desire is to start with a "microscopically reversible" diffeq, and prove "macroscopic (thermodynamic) irreversibility"). linas 23:47, 28 September 2006 (UTC)
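A one-line example of the distinction, in case it helps: take the damped oscillator \ddot{x} + \gamma \dot{x} + x = 0. If x(t) is a solution, then y(t) = x(-t) satisfies \ddot{y} - \gamma \dot{y} + y = 0, i.e. the equation with t replaced by -t, which for \gamma \neq 0 is a different (anti-damped) equation; only at \gamma = 0 does x(-t) solve the *same* equation, which is the stronger "symmetry" sense discussed just below.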
Ah, this paragraph is clearly where our difference in terminology lies. I agree with your definition of "time-reversibility". However, I thought we were considering the special subset of diffeqs with a time reversal *symmetry*. Isn't this the language used in that section? Hmmm, looking at it again, I see both time-reversible and time-symmetric are used.
For example, if a set of pdes for f(x) is rotationally *symmetric*, it implies that given a solution f(x), then f(Rx) is also a solution of the same set of pdes. So if one makes the same statement for t->-t, this is clearly a stronger statement than your statement of time-reversibility. But anyway, I understand now what you mean, and hopefully this makes it clear what my earlier question meant.
Oh dear, yes, quite right, sorry, I was sloppy/wrong. The whole time I used the word "symmetric" I really only meant "reversible". Ooops.
But let me add that the underlying quantum mechanical laws you refer to also have this stronger sense of Time reversal symmetry. So they really will satisfy my stronger statement, won't they? --Jpod2 00:03, 29 September 2006 (UTC)
Sure, there's C, P, T symmetry. All I wanted to say was that the Baker's map is reversible. Not all iterated maps are: for example, the logistic map is not reversible.
"Ugh. If I have a moment I'll look. I'm swamped with obligations and desires. linas 15:06, 28 September 2006 (UTC)"
Of course, no problem. It sounds like I understood most (but not all) of what you wanted to say. I would just imagine that it needs more explanation than is there at the moment, in order to be enlightening to physicists. However, it is clearly a very interesting subject, so better that there is something rather than nothing! --Jpod2 17:48, 28 September 2006 (UTC)

[edit] arrow of time, further discussion

So, one more conclusion. Would it be fair to say that we are *not* deriving an arrow of time from these systems, because the arrow of time is already there in the underlying microscopic equation? I.e. because there is no time reversal *symmetry* of these microscopic equations. On the other hand, perhaps what one *is* deriving is irreversibility from reversibility. --Jpod2 00:14, 29 September 2006 (UTC)
Euuhhh, yes, that would be a fair conclusion. However, I would also be surprised if there isn't some exactly solvable model out there that is actually time-symmetric, not just time-reversible. The general argument is that having a continuous spectrum implies unitary time evolution implies time-reversibility, while a discrete spectrum implies decay and irreversibility. If you peruse the article on rigged Hilbert space, or the references contained therein, you'll see that having a discrete vs. continuous spectrum for coarser/finer topologies is a very general phenomenon, and the effect is mostly a general statement about topologies rather than about the specific details of any particular iterated map or any particular difference eqn or diff eq.
Now, as far as I know, no one has made the complete journey from QED all the way to irreversibility, but I did get the impression that there's a general roadmap from here to there, even if it's a bit murky in detail. Having seen conference proceedings which have the words "quantum" and "irreversibility" in the title certainly indicates to me that there are expeditions mapping it all out. linas 04:02, 29 September 2006 (UTC)
OK, sure. I think we both understand what each other means now :) Probably, some of the discussion above would bear inclusion in that section, agreed? And the terminology regarding time-symmetry and reversibility could be clearer. It certainly seems significant to me that the system from which we are deriving an arrow of time is not time-symmetric.
Deriving the arrow from a microscopic system with no preferred time direction I would expect to be more difficult! Perhaps it is in some sense possible, but I'd certainly like to see that argument more explicitly...anyway, maybe one of us can make a few changes to that section if we get the chance. All the best--Jpod2 08:49, 29 September 2006 (UTC)

You are one tough customer! First, please notice that the reverse-time Baker's map is just the forward-time Baker's map with x and y interchanged. This is a Parity-type operation about the 45-degree line, so the Baker's map is PT symmetric. Next, please note that neither the Schroedinger equation nor the Dirac equation is T-symmetric, since they are first-order in time. The Schroedinger eqn is CT symmetric, with C being complex conjugation. The Dirac (Majorana) equation is PT-symmetric, if one flips x, y and z along with t. Yet, I believe that most physicists don't think of the Dirac eqn or Schroedinger eqn as having an "arrow" of time, since the forward-time world "should look just like" the backwards-time world with P or C flipped. Similarly, the forward-time Baker's-map world "should look just like" the backwards-time world, with x and y flipped.
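Spelling out the x-y interchange, as a check against the map written higher up: the forward map is x_t = 2x_{t-1} - \lfloor 2x_{t-1} \rfloor and y_t = (y_{t-1} + \lfloor 2x_{t-1} \rfloor)/2; since y_t lands in [0,1/2) or [1/2,1) according to whether \lfloor 2x_{t-1} \rfloor is 0 or 1, one has \lfloor 2y_t \rfloor = \lfloor 2x_{t-1} \rfloor, and solving backwards gives y_{t-1} = 2y_t - \lfloor 2y_t \rfloor and x_{t-1} = (x_t + \lfloor 2y_t \rfloor)/2, which is indeed the forward map with the roles of x and y swapped.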

Reversible time evolution is given by unitary operators which have eigenvalues that live on the unit circle, i.e. are exp(i\lambda) with lambda real. Time-reversal simply exchanges +lambda for -lambda (the two eigenvalues are paired). Irreversible time evolution takes the eigenvalues off the unit circle. The eigenvalues with magnitude greater than one are disallowed, since they correspond to solutions that "blow up" in the future. Those with magnitude less than one are allowed; these are solutions that decay away. You can still find a kind of reversibility in that the eigenvalues are still paired by e.g. Mobius inversion about the circle. It's just that you are forced to throw away one of the pair because it is matched to "unphysical solutions". linas 15:13, 29 September 2006 (UTC)
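Schematically: for unitary evolution U one has eigenvalues e^{i\lambda} with \lambda real, and time reversal pairs e^{i\lambda} with e^{-i\lambda}; in the dissipative setting the evolution operator has eigenvalues z with |z| \le 1, the pairing becomes inversion about the unit circle, z \leftrightarrow 1/\bar{z}, and the partner with modulus greater than one is the branch discarded as unphysical (exponentially growing).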

Me, tough?
I appreciate the points you are making. But I think if you are writing in an article on the arrow of time, one needs to be careful about distinguishing (1) irreversibility coming from reversibility in a system with a preferred time direction, (2) an arrow of time coming from a time-symmetric microscopic theory. I thought we agreed on this, above? Do we?--Jpod2 15:30, 29 September 2006 (UTC)
Yes. But please note that the Baker's map does not have a preferred time direction, any more than the Schroedinger or Dirac equation does. Presumably you now understand how the Baker's map can be understood to be isomorphic to left-right translations on a one-dimensional lattice, with forward-time corresponding to translation to the left, and backward-time corresponding to translation to the right. I think everyone would agree that a one-dimensional lattice is left-right symmetric, and does not have a preferred direction.
I hadn't really thought about the PT symmetry of the Baker's map, that seems like a good point you've raised. There is something confusing me about this. I guess I need to understand better why/how the transfer operator distinguishes between forward and backward evolution.
I've just reread the Baker's map page, and it strikes me that when discussing the non-unitarity, you have that "the transfer operator is not unitary on the space \mathcal{P}_x\otimes L^2_y of functions polynomial in the first coordinate and square-integrable in the second". What was the reason for breaking the x/y symmetry? If you explicitly break the symmetry in this way your conclusion above about flipping x and y probably won't hold. Does everything still work in the same way if functions are polynomial in both coordinates?--Jpod2 15:40, 29 September 2006 (UTC)
Argh. I think you are just knee-jerk reacting at this point, without actually bothering to read any of the references given, or even thinking about the problem. It's not that one is "trying to break the symmetry"; it's that one is trying to understand the time evolution of smooth, differentiable densities. This is a very general theme in dynamical systems. The decomposition of the Baker's map is not trivial, and I don't particularly want to recap it here (I don't think I could); the calculations are a dozen pages of dense manipulations. Furthermore, it is not unique; the decomposition has been done for the Arnold cat map and I believe there's a general theorem for all Axiom A systems. My general impression is you need to do some general reading in the area. You can either get textbooks; there are many; or crack open a copy of Phys Rev E or Chaos and start skimming the abstracts. You keep arguing with me as if I was trying to POV-promote some crank theory, and what I'm trying to do is to relay my best understanding of a broad swath of current research. linas 16:47, 29 September 2006 (UTC)
Argh! Dear Linas, I'm sorry that's how I came across. I don't think this is at all a crank theory, where did you get that idea? I just like to be precise about what the physical claims of a given programme actually are, and I am personally interested in understanding this stuff better. Also, I think the page could be phrased more clearly in places, which we both agree on, right? (However as I have said already it is better to have something there than nothing at all, so I am not pressuring you into working on it.)
I am not in a place where I can easily get to the references, so I'm afraid I haven't read them yet. I will. But, you seem to be reacting as if I have been missing the point entirely; I think the discussion above has served to clarify things, perhaps for both of us. Hasn't it? At one point you were saying that it was unimportant for the arrow of time discussion that the Baker's map differed from its inverse, but now you are saying it is quite relevant that it is PT symmetric. I agree with you, but for example there is no mention of the T or PT symmetry in the articles, is there?
If you are interested in continuing the discussion I would like to summarise. I think now that we are trying to show that the PT symmetry of the underlying microscopic theory is broken for the evolution of certain functions. This would demonstrate an emergent arrow of PT, if you like. Is that a fair summary? (If we are not trying to break any kind of T (or PT) symmetry then I'm not sure what relevance this has to an arrow of time article.)
To see this, shouldn't we compare the forward time evolution of functions in \mathcal{P}_x\otimes L^2_y to the backward time evolution of functions in \mathcal{P}_y\otimes L^2_x? That was the point of my question above about the asymmetry. Maybe I *am* missing the point here, but it wasn't a kneejerk. I just think one has to be clear in these things what one is trying to show. --Jpod2 17:20, 29 September 2006 (UTC)

Yes, the goal of proving irreversibility in physics is to start with a dynamical system that is "microscopically" reversible, and to show "macroscopic" or thermodynamic irreversibility follows. The approach via Baker's map is subtle, and I am not sure I like it. There are other approaches. I've skimmed presentations where a microscopically reversible system is attached to a heat bath. The argument then proceeds along the lines of the coarser-finer topologies of rigged Hilbert space, the heat bath introducing more degrees of freedom and altering the spectrum. Not a terribly convincing argument either. One of the nicer ones was one from a prof at UC Santa Barbara, I forget his name (wish I could remember, so I could find the ref), who considered the quantum mechanics of colliding particles in a box. He discovers the wave functions are highly fractal, and the energy levels are very closely spaced. Starting with initial conditions at t=0 where all the particles are confined to one side of the box, he finds the wave functions extend throughout the box, but interfere destructively on the "empty" side at t=0. However, as soon as t is not zero, the destructive interference is gone, the probability is non-zero throughout the box, and the distribution of energy levels is such that the chance is exponentially tiny that the wave functions will once again destructively interfere over any large volume in the future. Since it's just "plain" QM, the time evolution of the wave functions is still unitary; one only gets the semblance of mixing and irreversibility by integrating large volumes. Another important feature seems to be that the discrete spectrum is none-the-less very closely spaced (spacing of 1/2^N for N particles in the box), and so starts resembling the rigged Hilbert space type arguments. Furthermore, since the wave functions are fractal, and one must integrate over a volume to get thermodynamic observables, there is the question of "how to define the integral", how to define a measurable space, etc. which once again brings in the tension needed to discover irreversibility. Mathematically, hard spheres in a box is once again a case of dynamical billiards, which are hyperbolic and at least ergodic. So again all the right ingredients are there. I rather like this particles-in-a-box argument; physically its much less abstract than the Baker's map, although mathematically its much much harder. So I dunno. I'm slowly reading various references but my time is limited. linas 13:56, 3 October 2006 (UTC)

FYI, bumped into another "exactly solvable" (?) model, the Kac-Zwanzig model: it's a 1D classical particle moving in an external potential, coupled weakly to N harmonic oscillators (which should be thought of as a heat bath). It's purely deterministic, but in the N to infinity limit, there is a strong limit theorem that shows that if one starts with random initial conditions, then it converges to a stochastic process. The strong limit theorem is from a recent PhD thesis by Gil Ariel.
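A rough numerical sketch of the sort of model described, for concreteness only: the choice of potential, the 1/N mass scaling, and the frequency distribution below are illustrative guesses on my part, not taken from Ariel's thesis. The point is just that the evolution is fully deterministic once the random initial bath data are drawn; the trajectory Q(t) nonetheless looks like noisy, damped motion.

    import math, random

    # one particle Q in a double well V(Q) = (Q^2 - 1)^2 / 4, bilinearly coupled
    # to N light harmonic oscillators of mass 1/N ("heat bath"), with
    # Gibbs-distributed initial conditions at inverse temperature beta
    N, beta, dt, steps = 200, 1.0, 0.0005, 20000
    random.seed(1)
    omega = [random.uniform(0.1, 2.0) for _ in range(N)]

    Q, Qdot = 1.0, 0.0
    q    = [Q + math.sqrt(N / beta) / w * random.gauss(0, 1) for w in omega]
    qdot = [math.sqrt(N / beta) * random.gauss(0, 1) for _ in range(N)]

    def dV(Q):                      # derivative of the double-well potential
        return Q * (Q * Q - 1.0)

    for n in range(steps):
        # symplectic Euler step for the full deterministic Hamiltonian system
        F = -dV(Q) + sum(w * w * (x - Q) for w, x in zip(omega, q)) / N
        Qdot += dt * F
        for j in range(N):
            qdot[j] -= dt * omega[j] ** 2 * (q[j] - Q)
        Q += dt * Qdot
        for j in range(N):
            q[j] += dt * qdot[j]
        if n % 2000 == 0:
            print(round(n * dt, 2), round(Q, 4))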

Perhaps I can rephrase my current intuitive definition of irreversibility: If a dynamical system is ergodic (or stronger, e.g. mixing), this means that the classical, deterministic trajectories of the system are in some sense "dense" in phase space. One is then justified in considering the closure of this dense set. When one re-examines the physics, e.g. observables, spectra, etc. on the closure, one discovers that they are fundamentally quite different, and that this is what takes a reversible dynamical system and makes it irreversible. Why is one justified in considering the closure of a dense set? Good question...

As a side effect, by considering the closure, one will typically be led to consider an easier/simpler/more natural topology for the phase space, which will make the methods for solving the problem quite different, which only leads to confusion, because the problem now looks so different. linas 00:29, 5 October 2006 (UTC)

Hi Linas
Thanks for pointing me to your thoughts on irreversibility. I have some inchoate comments to make, but suddenly find myself a bit too busy to develop them. Perhaps we can pick up the discussion at a later point? We'll both have thought more about it by then, too. I found the discussion above useful, anyway. All the best --Jpod2 20:10, 7 October 2006 (UTC)

[edit] Linux stumble / integration invite / random find

I stumbled across an old page of yours for Linux VPN technologies. Apparently you've moved your activities to here, and I would like to invite you to participate in WP:ʃ as a part of your science work. Also, regarding something else on your site, I did find Movement to impeach George W. Bush the other night. Cwolfsheep 00:35, 28 September 2006 (UTC)

Thanks, I'll take a look. linas 14:32, 28 September 2006 (UTC)

[edit] Bernard Haisch

I'm puzzled by this one. I was thinking about helping with the new Citizendium project, which some of us think may be the solution to the crank 'n' troll problem, but do not know what to make of Haisch, who has been appointed managing editor. He does appear to have 'credentials', and it does seem as though SOME of the criticism from WP expert editors in the past has been unfair. On the other hand, there are negative associations. As I know practically no science, can you help out here? I do medieval Latin and philosophy. But I don't want to be associated with any effort that has pseudo-science connotations. Dbuckner 10:32, 28 September 2006 (UTC)

What's the link for the Citizendium project? What has Haisch been appointed as managing editor of? From what I can tell, the bio Bernard Haisch is more or less accurate. I am not aware of all the problems Haisch may have gotten snagged on in WP, but I'm thinking that one of them is the uncritical promotion of SED. In general, "uncritical promotion" is more or less a synonym for "POV pushing". While SED has some interesting aspects to it, it also seems to blatantly contradict well-established and highly regarded principles of physics, and so the academic physics community has been mostly ignoring it, brushing it off. When anyone starts to edit the SED article to state "this is a marvelous answer to all our problems", people will be offended. (A new scientific theory needs to be compatible with the old theory, and then somehow move beyond it. Since SED is not yet compatible with the old theory, it cannot be claimed to "move beyond it" yet). To recap, "uncritical promotion" is the problem. linas 14:49, 28 September 2006 (UTC)

[edit] PLEASE CHECK!

http://en.wikipedia.org/wiki/Talk:Entropy#Non-notable.3F

I believe that you will find (1) that my proposals re the presentation of entropy to beginners in chemistry are NOT intended to shake the heights of thermo/info theory/math and (2) that they have already been made part of the majority of new editions of US gen chem texts (as well as Atkins phys chem)! Certainly, Wikipedia aims to aid those who are learners as well as the learned?

Thanks! FrankLambert 05:42, 9 October 2006 (UTC)

Ah, I see that I have committed the sin of academia: all too quickly brushing away a good idea just because it runs counter to my personal intuition. For that, perhaps I should apologize. You are the main character behind the ideas of entropy (energy dispersal), right? I presume you do understand why there is a negative reaction? I can see that, in textbook thermodynamics, entropy and enthalpy and free energy and temperature, and what not, are all interconnected. I can understand, in a hand-wavy kind of way, that entropy looks like a dispersal of energy. However, knowing the general, abstract definition, the idea just seems wrong: entropy can be defined for systems for which there is no definition of energy or temperature. What does one do then? I can see that the change in time of entropy seems to be the dissipation of something, but I don't know of any way of connecting entropy to dissipation, although I guess that should be doable. "Dispersal" implies the ability to take the gradient of something, as if there was a flow of something from somewhere to somewhere, but I do not know how to define gradients on dissipative systems. So maybe it's a neat idea, but I can't figure out how to intuitively incorporate it into the collection of ideas I understand as entropy.
And finally, it even seems ambiguous for e.g. a refrigerator. Say I'm releasing hot compressed gas through an expansion nozzle. So I'm "dispersing energy", right? The gas cools. Have I increased entropy? Decreased it? I don't know what this intuitive idea is supposed to offer here ...
The article itself has some nasty language:
... he proposed that the confusing portrayal of entropy as "disorder" be abandoned.
Huh? Entropy as the logarithm of "disorder" is the fundamental definition entrenched in mathematics, and in mathematical physics, and in information theory, and etc. How can this be abandoned? What is it to be replaced by?
In this approach the statistical interpretation is related to quantum mechanics, ...
This appears to be a vacant, mumbo-jumbo appeal to mysterious science. Yes, it's true that in deep, subtle ways, statistical mechanics resembles quantum field theory; however, the exposition of this resemblance is pretty much beyond the reach of any undergrad, much less a beginner undergrad. What purpose does such an appeal serve, other than to try to bask in the glow and aura of quantum mechanics?
The subject remains subtle and difficult, and in complex cases the qualitative relation of energy dispersal to entropy change can be so inextricably obscured that it is moot.
Ouch. Classic weasel words, frequently found in the apologia of cranky writings. Can usually be paraphrased as "the author got confused by the difficulty of the topic" or "the author has no clue what they're writing about". Highly inappropriate for beginning students. linas 04:59, 10 October 2006 (UTC)

[edit] Entropy (arrow of time)

Your cleanup tag was removed without much cleanup, AFAICS. I'm somewhat disturbed by how many articles the topic of entropy has been split into; see Wikipedia talk:WikiProject Physics#Loose ends at Entropy. Not that it couldn't potentially be a sign of excellent coverage, but I fear we aren't at that stage... --Pjacobi 10:15, 9 October 2006 (UTC)

Oh well. What can I say? I've stretched myself thin right now. linas 05:32, 10 October 2006 (UTC)

[edit] Total re-write of the main Physics page is in progress

You might like to join us at Physics/wip where a total re-write of the main Physics page is in progress. At present we're discussing the lead paragraphs for the new version, and how Physics should be defined. I've posted here because you are on the Physics Project participant list. --MichaelMaggs 08:04, 11 October 2006 (UTC)

[edit] CH

CH switched months ago to "I'm-nearly-away-only-cleaning-things-up" mode. You may have seen his (now deleted) essay on why he considered Wikipedia broken. His latest failure to get much support blocking the KraMuc (talk · contribs · block log) socks and IPs (see for example this edit) may have been the last straw. --Pjacobi 07:19, 12 October 2006 (UTC)

Tis a shame. I'd think the "digging" pages might be useful to the more dispute-inclined. I befriended Jay Salzman in New York about 6-7 years ago; he took me on a tour of Central Park at three in the morning. I was particularly struck by one comment he made, as we overlooked one particularly grandiose vista, a thesis he was developing: "of course, you know, Rome never fell", and standing there, among the columns and statues, and the backdrop of skyscrapers, the truth of this was so brilliantly clear... He seemed frail then, must be well over 60 now, I wonder how he's doing. linas 13:27, 12 October 2006 (UTC)

[edit] Wikipedia:Requests for arbitration/Pseudoscience

Hello,

An Arbitration case in which you commented has been opened: Wikipedia:Requests for arbitration/Pseudoscience. Please add any evidence you may wish the arbitrators to consider to the evidence sub-page, Wikipedia:Requests for arbitration/Pseudoscience/Evidence. You may also contribute to the case on the workshop sub-page, Wikipedia:Requests for arbitration/Pseudoscience/Workshop.

On behalf of the Arbitration Committee, Thatcher131 11:41, 12 October 2006 (UTC)

[edit] Mellin inversion theorem

Thank you for your kind remark concerning Mellin transform. I have added some material to Mellin inversion theorem and will probably add some more related material in the future and re-set some formulas to improve readability. BTW the Zeta function formulas on your home page are quite intriguing!

- Zahlentheorie 13:23, 19 October 2006 (UTC)
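For reference, the pair of formulas in question, in the normalization I am used to seeing: the transform is \{\mathcal{M}f\}(s) = \int_0^\infty f(x)\, x^{s-1}\, dx, and the inversion theorem recovers f by integrating along a vertical line Re(s) = c inside the strip where the transform converges, f(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} \{\mathcal{M}f\}(s)\, x^{-s}\, ds.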

[edit] Group algebra of a finite group

Hey, I noticed that your edit last year to group algebra [1] was pretty similar to the definition in Fulton & Harris (Representation Theory: A First Course, page 36). Maybe we should change it, since the wording is very close in places? --Xiaopo ʘ 08:02, 22 October 2006 (UTC)

Yes, well, I was copying from my crib notes, which I would have taken while reading Fulton and Harris. They give good, concise definitions, so I may well have followed it closely. Let me look. linas 15:27, 22 October 2006 (UTC)
OK, lest I be accused of plagiarism, I completely re-wrote that section, making it a lot more elementary as well. It could be interesting to also give the index-free axioms for a module and the representation: so, for instance, can one generalize this to a group bi-algebra, by defining co-units, co-multiplication, etc.? Might be a good exercise. linas 18:02, 22 October 2006 (UTC)
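A sketch of the structure in question, stated index-free; the bi-algebra maps below are the standard ones, so this is nothing novel, just the exercise spelled out: for a finite group G and a field K, the group algebra is K[G] = \{ \sum_{g \in G} a_g\, g : a_g \in K \} with product (\sum_g a_g g)(\sum_h b_h h) = \sum_{g,h} a_g b_h\, (gh), and it becomes a bi-algebra (indeed a Hopf algebra) by extending K-linearly the co-multiplication \Delta(g) = g \otimes g, the co-unit \varepsilon(g) = 1, and the antipode S(g) = g^{-1}.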

[edit] physics claims in mobius function

Can you have a look at them? You removed similar stuff from Mobius inversion formula, I think. Thanks, Rich 10:47, 25 October 2006 (UTC)

I expanded that section; it's the primon gas model. John Baez gives a low-brow description at http://math.ucr.edu/home/baez/week199.html. I removed similar text from Mobius inversion formula because there, it was so brief that it seemed to be technobabble, and I did not recognize the theory. linas 13:27, 25 October 2006 (UTC)
I created an article on the primon gas. linas 18:50, 25 October 2006 (UTC)
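The skeleton of the model, as I understand it from the Baez write-up: take a free gas whose single-particle states ("primons") are labelled by the primes p, with energies E_p = \log p. A multi-particle state is then labelled by a positive integer n = p_1^{k_1} \cdots p_m^{k_m}, with energy E_n = \log n, so the partition function is Z(\beta) = \sum_{n \ge 1} e^{-\beta E_n} = \sum_{n \ge 1} n^{-\beta} = \zeta(\beta). In the supersymmetric version the Mobius function plays the role of the operator (-1)^F: it is \pm 1 on square-free states according to the parity of the number of primons, and 0 on any state where a primon is doubly occupied, whence \sum_n \mu(n)\, n^{-\beta} = 1/\zeta(\beta).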

-ok, great! A fascinating connection. Rich 19:57, 25 October 2006 (UTC)

[edit] q-logarithms

Hi Linas! - thanks for writing the Q-derivative and Q-exponential articles. Really nice. --HappyCamper 18:55, 29 October 2006 (UTC)

You are welcome. But now you must tell me, what is the interest that led you to these articles? linas 02:53, 30 October 2006 (UTC)
Ah, that comes from nonextensive entropy and nonextensive statistical mechanics! --HappyCamper 12:46, 1 November 2006 (UTC)
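For anyone else landing here, the objects in question, in the convention I believe those articles use: the Jackson q-derivative is (D_q f)(x) = \frac{f(qx) - f(x)}{(q-1)\,x}, which reduces to the ordinary derivative as q \to 1, and a q-exponential can be built from it as e_q(x) = \sum_{n \ge 0} x^n / [n]_q! with [n]_q = \frac{q^n - 1}{q - 1}. The Tsallis-style q-exponential of nonextensive statistical mechanics, \exp_q(x) = [1 + (1-q)x]^{1/(1-q)}, is a different object that happens to share the name.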

[edit] {{mathworld}}

Hi Linas. I was curious, why do you think {{mathworld}} should use "first name last name" for citations? All other citation templates (like {{cite web}}) use "last name, first name". —Mets501 (talk) 21:48, 2 November 2006 (UTC)

Well, it's called a "first name" for a reason, it comes first, right? ... or should I call you 501, Mets? Or Mr. 501? I'd suggest that the other citation templates be fixed, instead of perpetuating a wrong. Well, not a wrong, ...an anachronism, a style that went out of style a long time ago. Just as no one greets you as "Mr. 501" -- I mean, by golly, Mets501 is not even the name that your parents gave you? It's not even a real name, having numbers in it? It's so utterly 21st century, like duude, you know? Why would d00d-man use frock-coated top-hatted 19th century citation style? linas 05:09, 3 November 2006 (UTC)
I see that this morning you changed it back. Based on your web page, I am a few decades older than you are, and the "surname, givenname" style was archaic already back when I was in school. The resurrection of this style comes off like a lame attempt to be pseudo-Victorian steam-punk hip, which isn't fitting. linas 14:01, 3 November 2006 (UTC)

[edit] Reverting something or other

Linas, please discuss this issue before unilaterally reverting. Circeus 02:11, 3 November 2006 (UTC)

Huh? Unilaterally reverting what? What issue? linas 05:09, 3 November 2006 (UTC)
He's talking about reverting the mathworld template. Please do not do it again. —Mets501 (talk) 19:06, 3 November 2006 (UTC)
Why not? Its wrong. linas 23:17, 3 November 2006 (UTC)
I work in an academic library and I just grabbed several mathematical journals to see what citation style they employ (I walk past these journals several times a day and this is the first time I've ever looked in them!). Most of them employ a "FirstInitial LastName" style in their references, with a handful going with "LastName, FirstInitial." None employed a "FirstName LastName" style. This was certainly not a scientific survey of the reference styles employed by mathematical journals, but it's better than us continuing to guess. Although I have a Bachelor's degree in mathematics, my Master's is in an entirely different field of study, but in neither discipline have I encountered a "FirstName LastName" citation style for references. It's certainly not the current APA or Chicago style. I don't have the current MLA style guide but I'm pretty sure that it doesn't use that style, either. So I think there's a preponderance of evidence that your preferred style simply is not in heavy use in academia and thus inappropriate as the default reference style for articles about academic topics in Wikipedia. Sorry! --ElKevbo 23:56, 3 November 2006 (UTC)
My two cents -- I'm used to last name first for first author, first name first for all subsequent authors. This makes it much easier to find a particular (first) author in a list of refs, particularly if first names are given in full rather than initials. For example, I publish under my full name, Michael Ray Oliver; if you're looking for a paper by "Oliver" it's a lot easier if the O is the first letter on a line rather than the eleventh. --Trovatore 00:00, 4 November 2006 (UTC)
Hmm. I took the conversation over to the talk page of {{cite web}}. All of the books I've got here use firstname or first initial followed by lastname (I have very few books here). I did a tiny but random sample of arxiv; same thing there. The move to use the full first name instead of the initial of the first name is to acknowledge that authors have real names ... which is not a big deal if they are contemporaries whose citations you are used to seeing, but is considerably more difficult when dealing with historical personages: who is J. Hadamard? Is that Jacques? James? Jules? linas 00:14, 4 November 2006 (UTC)
My guess would be that the move to using the entire first name in Wikipedia is likely a technical issue. I don't know how easy or problematic it would be to implement the template to cut the first name down to its first initial. Your criticism is definitely valid and it's one that certainly applies to those styles that still insist on limiting the first and middle names to their initials. I recently read a listserv posting from a scholar who raised this very criticism (I believe it was about a journal that used APA). His humorous complaint was that an article that he co-wrote with his wife appeared to list the same author twice since their first names begin with the same letter. --ElKevbo 00:26, 4 November 2006 (UTC)
No, as I recall, full names are a deliberate stylistic choice (assuming of course that you know them). You can always use the initial if you don't know the name. --Trovatore 00:32, 4 November 2006 (UTC)
But can the template automatically truncate a first name to just its initial? --ElKevbo 00:35, 4 November 2006 (UTC)
Why would you want it to? My personal rule of thumb is to give the author name in exactly the way he habitually publishes, except that the first author should be last name first, to facilitate finding his name in a list. --Trovatore 00:38, 4 November 2006 (UTC)
If you wanted to adhere to APA style then you'd truncate first names to their initial. I'm not saying that it's a good idea - I'm just asking the question as I suspect the answer may have a lot to do with the reason why the template operates the way that it does. --ElKevbo 00:50, 4 November 2006 (UTC)
Yes, it's a stylistic choice. I note that as of this moment, J. Hadamard is a red link. linas 00:34, 4 November 2006 (UTC)