Talk:Truel


Expansion

There! Should do for now. Paidgenius 20:24, 21 November 2006 (UTC)

Thanks for your work on the article. The solution to the theoretical example needs a lot of work. Two things to start: as long as C is still alive, C is the "most dangerous opponent". B would never shoot at A if C is still alive: if B shoots A, then C shoots B; if B misses A, then C, who can shoot (and kill) either A or B, will certainly shoot B. Either way, if B shoots at A while C is still alive, B will die. So the solution as it stands is not correct (the conclusion might be, but the argument is not). -- Doctormatt 01:15, 24 November 2006 (UTC)
Okay, I fixed the argument by giving the probabilities of success of A's different strategies. -- Doctormatt 02:00, 24 November 2006 (UTC)

Well, everyone makes mistakes. That's cool though. Paidgenius 20:12, 8 December 2006 (UTC)

I am unclear why my last edit was removed. I added a sentence to say that B would shoot at C rather than A. Doctormatt removed it, saying it was unsupported and that the question is A's strategy rather than B's. I also think that the figures given for chances of survival are incorrect.

When it is C's turn and both A and B are still alive, C's best choice is to shoot B, because B has the better chance of killing him. B's best choice is to shoot C, as C will otherwise kill B. If A kills B, then C will kill A. If A kills C, B has a 2/3 chance of killing A. So A shoots to miss.

Then B shoots at C. In 6 cases out of 9, B kills C; it is then A's turn again, and in 2 cases out of 9 A kills B and survives. In 3 cases out of 9, B misses C; C then kills B, A shoots at C, and in 1 case out of 9 A kills C and survives. In 2 cases out of 9, A misses C, and C then kills A. In 4 cases out of 9, B, having killed C, shoots at A with a 2/3 chance of killing him. Abigailgem 16:59, 29 May 2007 (UTC)

I find your argument very confusing. You claim that if A kills C, then B has a 2/3 chance of killing A. This is only true on the first shot. In fact, if A kills C, then B and A will shoot back and forth at each other until one of them is dead. This results in B having a 6/7 chance of killing A. So, I'm not sure your calculations are valid: you need to take into consideration truels of unlimited length. In any case, this is all rather OR-ish, so why don't we just go to the library? Here are some papers to try:
  • Kilgour, D. M., The simultaneous truel, Internat. J. Game Theory 1 (1971/72), 229–242
  • Kilgour, D. M., The sequential truel, Internat. J. Game Theory 4 (1975), no. 3, 151–174
  • Kilgour, D. M., Equilibrium points of infinite sequential truels, Internat. J. Game Theory 6 (1977), no. 3, 167–180
Cheers, Doctormatt 18:05, 29 May 2007 (UTC)
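
For reference, the 6/7 figure is just the sum of a geometric series: once only A and B remain with B shooting first, a round in which both miss returns the duel to its starting state, so

P(\text{B eventually kills A}) = \sum_{k=0}^{\infty} \left(\tfrac{1}{3}\cdot\tfrac{2}{3}\right)^{k} \cdot \tfrac{2}{3} = \frac{2/3}{1 - 2/9} = \frac{6}{7}.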

Having had a look at some of the previous revisions, I still would prefer a verbal as well as mathematical explanation of the solution. I am placing that here rather than making such an extreme revision. Does anyone have a comment?

Given a choice between shooting at A or B, C should shoot B, who has the better chance of killing C. Therefore, given a choice of shooting at A or C, B should choose to shoot at C.

If A kills B, A will then be killed by C. If A kills C, he faces B and it is B's shot. So A shoots to miss.

In 1/3 of cases, B will miss; C will then kill B, and A has one shot at C, with a one-third chance of success.

In 2/3 of cases, B will kill C, so that A faces B and it is A's shot.

I do not propose to alter the calculations. Abigailgem 14:44, 28 July 2007 (UTC)

Your explanation fails to explain why A is better off missing. Your first claim, "If A kills B, A will then be killed by C. If A kills C, he faces B and it is B's shot. So A shoots to miss.", does not compare A's chances of success when shooting to miss with A's chances of success when aiming at C. Note that A is not a perfect shot, so shooting at C is not the same as killing C. One has to consider the probability of A's hitting C when considering whether or not A should shoot at C.
In general, one must calculate the probability of success with each possible strategy in order to choose the best one. In this example, the fact that C is a perfect shot makes the calculations simpler, and this makes people think that they can construct an intuitive solution in the general case. But, one can't. Please take a look at the referenced paper for lots of examples that show the non-intuitive nature of this problem.
I would like to change the example to one in which nobody is a perfect shot in order to avoid giving readers the impression that an intuitive solution is generally possible. Doctormatt 21:33, 28 July 2007 (UTC)
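
To make that comparison concrete, here is a minimal Python sketch of the calculation for the article's example (A hits with probability 1/3, B with 2/3, C always), assuming, as in the discussion above, that B and C aim at each other while both are alive and that a two-person duel continues until someone is hit:

from fractions import Fraction as F

def duel_first(p, q):
    # Probability that the first shooter (accuracy p) eventually wins a
    # two-person duel against an opponent of accuracy q:
    # P = p + (1 - p)(1 - q)P, hence P = p / (1 - (1 - p)(1 - q)).
    return p / (1 - (1 - p) * (1 - q))

a, b = F(1, 3), F(2, 3)  # A's and B's accuracies; C never misses

# A deliberately misses, then B shoots at C:
#   B hits C (prob b): A-B duel with A shooting first.
#   B misses (prob 1 - b): C kills B, and A gets a single shot at C.
p_miss = b * duel_first(a, b) + (1 - b) * a

# A shoots at C first:
#   A hits (prob a): A-B duel with B shooting first.
#   A misses (prob 1 - a): same continuation as a deliberate miss.
p_shoot_c = a * (1 - duel_first(b, a)) + (1 - a) * p_miss

# A shoots at B first:
#   A hits (prob a): C kills A on the next shot.
#   A misses (prob 1 - a): same continuation as a deliberate miss.
p_shoot_b = (1 - a) * p_miss

print("miss deliberately:", p_miss)    # 25/63  (about 0.40)
print("shoot at C:", p_shoot_c)        # 59/189 (about 0.31)
print("shoot at B:", p_shoot_b)        # 50/189 (about 0.26)

Under those assumptions, deliberately missing gives A the best chance of survival, which matches the conclusion argued above.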

Indeed. But, if A shoots with the intention of hitting B, in one third of cases (according to the way the problem is drafted) A will kill B. This is an outcome against A's interests, as C, the perfect shot, will then kill A. This is part of the rules of the puzzle, as it is set. There is no need to state it in the solution. The point is that A does not want to kill B or C with his first shot, as that will harm his interests.

I consider that changing the example would not be an improvement to the article. However, adding an additional, non-intuitive example could be an improvement, illustrating other aspects of the problem. Abigailgem 15:31, 5 August 2007 (UTC)

I apologize, but I'm still not following your logic. Perhaps you could apply your argument to other cases, and tell me the results? This might help me see how you are using the probabilities in your argument. Consider, if you care to, these variations:
  1. A is 40% accurate, B is 50% accurate, C is perfect
  2. A is 30% accurate, B is 35% accurate, C is perfect
  3. A is 20% accurate, B is 30% accurate, C is perfect
  4. A is 10% accurate, B is 35% accurate, C is perfect
What is your impression of A's optimal strategy in these variations of the problem?
I agree that it is clear that A is not helped by shooting at B; that can be argued without calculation. So, in these problems, the question is: should A intentionally miss, or should A shoot at C? I can say which, based on calculations of the probability of A's survival with each strategy. I honestly don't see any other way to determine the optimal strategy. Cheers, Doctormatt 04:18, 6 August 2007 (UTC)
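
Here is a minimal Python sketch of that comparison for the four variations, under the same simplifying assumptions as before (C is a perfect shot, B and C aim at each other while both are alive, and a two-person duel continues until someone is hit):

from fractions import Fraction as F

def duel_first(p, q):
    # First shooter (accuracy p) eventually wins a two-person duel against
    # an opponent of accuracy q with probability p / (1 - (1 - p)(1 - q)).
    return p / (1 - (1 - p) * (1 - q))

def a_survival(a, b):
    # A's survival probability (C a perfect shot) when A deliberately
    # misses, and when A shoots at C, with B and C targeting each other.
    p_miss = b * duel_first(a, b) + (1 - b) * a
    p_shoot_c = a * (1 - duel_first(b, a)) + (1 - a) * p_miss
    return p_miss, p_shoot_c

variations = [
    (F(40, 100), F(50, 100)),  # 1. A 40%, B 50%
    (F(30, 100), F(35, 100)),  # 2. A 30%, B 35%
    (F(20, 100), F(30, 100)),  # 3. A 20%, B 30%
    (F(10, 100), F(35, 100)),  # 4. A 10%, B 35%
]

for a, b in variations:
    p_miss, p_shoot_c = a_survival(a, b)
    better = "miss" if p_miss >= p_shoot_c else "shoot at C"
    print(f"A={float(a):.2f}, B={float(b):.2f}: "
          f"miss {float(p_miss):.3f}, shoot at C {float(p_shoot_c):.3f} -> {better}")

This only compares the two candidate strategies (deliberate miss versus shooting at C), since shooting at B has already been ruled out above.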

Example: OR?

An editor removed the example of a theoretical truel, claiming that it is "blatant OR". My feeling is that this is not OR, as any undergraduate probability student can verify this example. It is not "research": it is merely a not completely trivial (but not difficult) computation. In mathematics articles there are numerous such examples of the results of computations. In terms of verifiability, anyone with a modicum of probability knowledge can verify this without any other references, so I don't think any citations are needed for this. I'd be happy to hear what others think. I'd also be happy to typeset the calculations, and perhaps put them on a separate "proof" page. If people really feel it's OR, I'll find a published example to copy. Doctormatt 22:57, 4 November 2007 (UTC)

  • Doing any derivation on a Wikipedia page without providing a source or a published example of that precise formula is original research. You cannot assume that everyone has "a modicum of probability knowledge", and because of the requirements in WP:OR, you cannot simply create that material yourself. If you do find a published copy, do not recreate the formula with a reference on the end of it, but instead quote only the conclusions of that finding and cite that. Cumulus Clouds 01:09, 6 November 2007 (UTC)