Talk:Quasiconvex function

From Wikipedia, the free encyclopedia

Redirects

I've just made several pages redirect here; this is the only actual article that deals with weak or strict quasiconcavity or quasiconvexity. Relatedly, I deleted the "see also" link to the nonexistent article on strict quasiconvexity, and included a definition here instead. Tobacman 23:04, 9 October 2006 (UTC)

Optimization

Would it be accurate to say that quasiconvex optimization takes advantage of the property that "going down hill" always moves you closer to the minimum and that this is why convex and quasiconvex optimization is easier than optimizing over other functions? (I had a long discussion on Talk:Convex optimization; I think this page is the better place for it.) —Ben FrantzDale 17:31, 12 February 2007 (UTC)

Going downhill always moves you toward a minimum, regardless of whether the function is convex or not. Did you mean "moves you closer to a global minimum"? Oleg Alexandrov (talk) 03:59, 13 February 2007 (UTC)
In 1D, perhaps, but in general there are non-convex functions with a single minimum for which moving downhill will move you away from the minimum even if subsequent steps downhill would eventually get you there. For example, consider a hill surrounded by a moat where the moat is deepest on the East side. If you are on the West side of the hill, going downhill moves you West, away from the deep part of the moat. Continuing downhill you would find the shallow side of the moat, then wander your way around to the deep end. —Ben FrantzDale 05:33, 13 February 2007 (UTC)
OK, I don't know how to answer your question. Note by the way that the "going downhill" thing has a name: it is called gradient descent. Oleg Alexandrov (talk) 16:01, 13 February 2007 (UTC)
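For reference, the "going downhill" idea above can be sketched in a few lines (plain Python; the bowl-shaped objective, starting point, and fixed step size below are arbitrary choices for illustration, not anything from the article):

```python
# Minimal gradient-descent sketch: repeatedly step opposite the gradient.
# The objective f(x, y) = x^2 + y^2 and step size are arbitrary examples.
def gradient_descent(grad, x0, step=0.1, iters=500):
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Gradient of f(x, y) = x^2 + y^2 is (2x, 2y); descent heads toward (0, 0),
# the unique minimum of this convex bowl.
x, y = gradient_descent(lambda p: [2 * p[0], 2 * p[1]], [3.0, -4.0])
```

On a convex bowl like this, every downhill step shrinks the distance to the minimum; the moat example above is exactly a shape where that stops being true.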
The thing I am trying to understand is why "convex optimization" is so important. Consider a surface, f(x,y). I have a feel for why you would want every iso-contour to be convex: otherwise moving from point a to point b, where f(b) < f(a), you might have to go uphill. What I don't understand is why it is important that the function be convex. That is, why is it important that r^2 is convex whereas −exp(−r^2) is not convex? —Ben FrantzDale (talk) 23:52, 20 May 2008 (UTC)
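For concreteness, the difference between these two functions can be checked numerically with a midpoint test (a rough plain-Python sketch; the interval [1, 2] is an arbitrary choice): convexity requires f at the midpoint to be at most the average of the endpoint values, while quasiconvexity only requires it to be at most the larger endpoint value.

```python
import math

def midpoint_checks(f, a, b):
    """Return (convexity holds, quasiconvexity holds) at the midpoint of [a, b]."""
    mid = f((a + b) / 2)
    return (mid <= (f(a) + f(b)) / 2,  # convexity: below the chord's midpoint
            mid <= max(f(a), f(b)))    # quasiconvexity: below the worse endpoint

square = lambda r: r ** 2              # convex (hence also quasiconvex)
bump = lambda r: -math.exp(-r ** 2)    # quasiconvex but not convex

# On [1, 2], -exp(-r^2) sits above its chord (convexity fails) yet still
# below the larger endpoint value (quasiconvexity holds).
result_square = midpoint_checks(square, 1.0, 2.0)
result_bump = midpoint_checks(bump, 1.0, 2.0)
```

This is only a one-interval spot check, of course, not a proof of either property.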
With a convex function, you can tell that the approximate solution you've found has a value that's close to optimal. With a quasiconvex function, you can tell that the position of the approximate solution is close to the optimal position, but you can't tell whether the function has a very thin spike even closer to the optimal position that you've missed, causing the value of your approximate solution to be very far from the optimal value. —David Eppstein (talk) 00:19, 21 May 2008 (UTC)
Thank you! Now that you say it, it seems completely obvious. If I understand correctly (in which case this will sound like I'm saying nothing new) the big deal about full convexity is that the gradient gives us a bound on the minimum value. So if f(x,y) is a surface in R^3, then we could draw a triangle in x and y around the minimum and we could draw a tetrahedron (using the slopes at the corners of the triangle) that enclosed the true minimum. With a quasiconvex function, we can still draw the triangle around the minimum, but we may not know how far the rabbit hole goes. Is that right? —Ben FrantzDale (talk) 02:32, 21 May 2008 (UTC)
I think you've got it. —David Eppstein (talk) 03:01, 21 May 2008 (UTC)
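The value-bound point in this exchange can be made concrete in 1D: for a differentiable convex function, the tangent f(x) + f'(x)(y − x) underestimates f everywhere, so a single gradient evaluation certifies a lower bound on the minimum value; for the merely quasiconvex −exp(−x^2) that tangent bound can fail. (A plain-Python sketch; the evaluation points x = 2 and y = 0 are arbitrarily chosen.)

```python
import math

def tangent_underestimates(f, df, x, y):
    """True if the tangent to f at x lies at or below f at y.

    For a convex f this holds for every x and y, which is what lets a
    gradient evaluation certify a lower bound on the minimum value.
    """
    return f(y) >= f(x) + df(x) * (y - x)

# Convex: f(r) = r^2, with derivative 2r.
sq, dsq = (lambda r: r ** 2), (lambda r: 2 * r)
# Quasiconvex but not convex: f(r) = -exp(-r^2), derivative 2r*exp(-r^2).
bump = lambda r: -math.exp(-r ** 2)
dbump = lambda r: 2 * r * math.exp(-r ** 2)

# The tangent to the bump at x = 2 overestimates the value at y = 0, so the
# gradient at x = 2 gives no valid lower bound on the minimum value there.
convex_bound_ok = tangent_underestimates(sq, dsq, 2.0, 0.0)
bump_bound_ok = tangent_underestimates(bump, dbump, 2.0, 0.0)
```

This is the "thin spike" problem in miniature: near x = 2 the bump looks almost flat, and nothing in its local slope reveals how deep the dip at the origin is.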

Sion's minimax theorem

Hi guys. I've just kicked off Sion's minimax theorem. Anyone got any improvements? Robinh (talk) 20:10, 20 May 2008 (UTC)