Talk:No free lunch in search and optimization

When I checked the link at "Most of these references can be obtained from...", not only were the files unavailable, but the citations were also incorrect and/or incomplete. 22:34, 21 November 2006 (UTC)

A request/challenge: somebody might try translating this explanation (and others like it) into plain English, so that the rest of us can figure out how it might apply to our fields. I do appreciate the critique of Dembski's application, but I need more examples in familiar (as opposed to jargon-laden) terms.

Factually incorrect figure

The figure is incorrect, and has to go. In fact, when all cost functions are equally likely, each algorithm observes each possible sequence of cost values with equal likelihood, so there is no specialist / generalist trade-off of the sort depicted in the plot. To state that more precisely, if there are N domain elements and M codomain elements (distinct costs), then the probability that an arbitrarily chosen algorithm will observe values in an arbitrarily chosen sequence is M^(-N).
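The equiprobability claim can be checked by brute force on a tiny instance. Here is a minimal sketch (the N = 3, M = 2 instance and the two toy algorithms are illustrative assumptions, not taken from Wolpert and Macready): it enumerates all M^N cost functions and verifies that both a fixed-order search and an adaptive search observe each of the M^N possible cost sequences for exactly one function, i.e. with probability M^(-N).

```python
from itertools import product

# All M^N cost functions f: {0,...,N-1} -> {0,...,M-1}, as tuples.
N, M = 3, 2
functions = list(product(range(M), repeat=N))

def fixed_order(f):
    # Non-adaptive search: always probes the domain in the order 0, 1, 2.
    return tuple(f[x] for x in (0, 1, 2))

def adaptive(f):
    # Adaptive search: the first observed cost decides which point comes next.
    rest = (1, 2) if f[0] == 0 else (2, 1)
    return (f[0],) + tuple(f[x] for x in rest)

for alg in (fixed_order, adaptive):
    sequences = [alg(f) for f in functions]
    # Each of the M^N possible cost sequences is observed for exactly one
    # function, so an arbitrary sequence has probability M^(-N).
    assert sorted(sequences) == sorted(functions)
```

Because both algorithms induce the same uniform distribution over observed sequences, no plot can show one of them as a "specialist" and the other as a "generalist" under the uniform prior.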

At the risk of seeming like I'm engaged in self-promotion, I'm going to take a shot at rewriting this article. The touted "simple explanation" is not freely accessible online, and it happens that I gave a very simple explanation during a student's thesis defense in 1994. I will try to respond to the "request / challenge" above, but part of me says that into every life a little math must fall. NFL is inherently about probability distributions on functions and sequences. ThomHImself 00:54, 11 March 2007 (UTC) Tom English


"The No Free Lunch Theorem" is not the point

Section 3 of Wolpert and Macready (1997) gives two NFL theorems, not one. Both the 1995 tech report and the 1997 article of W&M refer to theorems, plural, in their titles. This makes references to "the NFL theorem" something of an embarrassment. Beyond that, people who state "the theorem" rarely, if ever, state the first NFL theorem (the one they seem to have in mind). Instead they state their favorite interpretations.

The fact is that the actual NFL theorems are not as interesting as their immediate implications. That is why I am moving this article to "No Free Lunch in Search and Optimization," with "No-free-lunch theorem" redirecting here. ThomHImself 21:10, 11 March 2007 (UTC) Tom English

Note that double redirects do not work, so if you do move this page, you will need to update any double redirects created as a result of the move. The What Links To No-free-lunch_theorem special page will help in locating them. -- Cat Whisperer 21:18, 11 March 2007 (UTC)

More changes to come

A figure will clarify the introduction.

Watch for sections giving a gentle introduction to NFL and formalizing NFL.

I made a large number of changes without logging in. Sorry, but I'm a neophyte, and didn't know I wasn't logged in. ThomHImself 05:25, 29 March 2007 (UTC)

Deleted folkloric theorem from introduction

It is somewhat humorous that someone added to the introduction precisely what I document as mathematical folklore in a later section on "the" NFL theorem:

Put simply, the theorem says that over the average of all possible solution landscapes, no search algorithm is on average better than any other.

No one who makes this claim ever cites both an article and a theorem number. There is no such theorem in the writings of Wolpert and Macready. Furthermore, at the point where the sentence was added, there had been no mention whatsoever of a theorem. The article is now about NFL in search and optimization, and the folkloric theorem is just one aspect of that. ThomHImself 06:16, 29 March 2007 (UTC)
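Although the folkloric statement matches no numbered theorem of W&M, its averaged-performance content can at least be sanity-checked by brute force on a tiny instance. A minimal sketch (the N = 3, M = 2 instance, the two toy search orders, and the best-cost-after-two-probes measure are illustrative assumptions, not a statement of any published theorem): averaged over all M^N cost functions, both algorithms achieve the same mean best cost.

```python
from itertools import product

# All M^N cost functions over a 3-point domain with 2 cost values.
N, M = 3, 2
functions = list(product(range(M), repeat=N))

def fixed_order(f):
    # Always probes the domain in the order 0, 1, 2.
    return tuple(f[x] for x in (0, 1, 2))

def adaptive(f):
    # Lets the first observed cost decide which point is probed next.
    rest = (1, 2) if f[0] == 0 else (2, 1)
    return (f[0],) + tuple(f[x] for x in rest)

def mean_best(alg, k=2):
    # Average, over all cost functions, of the best (minimum) cost
    # observed within the first k probes.
    return sum(min(alg(f)[:k]) for f in functions) / len(functions)

# Averaged over all cost functions, neither search order does better.
assert mean_best(fixed_order) == mean_best(adaptive)
```

Equality here is a consequence of the sequence-equiprobability fact, not a theorem in its own right: any performance measure computed from the observed cost sequence averages out the same way.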