General video game playing
General video game playing (GVGP)[1] is the design of artificial intelligence programs that are able to play more than one video game successfully. In recent years, some progress has been made in this area, including programs that can learn to play Atari 2600 games[2][3][4][5] as well as a program that can learn to play NES games.[6][7][8]
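The Atari-playing programs cited above are based on reinforcement learning: the agent is given only observations and a reward signal, and improves its play by trial and error. The Python sketch below illustrates the general shape of such a learning loop under stated assumptions; the `ToyGame` environment, the linear value function, and the hyperparameters are illustrative stand-ins, whereas the cited systems learn from raw screen pixels with deep convolutional networks via the Arcade Learning Environment.

```python
# Minimal sketch of the kind of learning loop behind general Atari-playing
# agents: the agent only sees observations and rewards, and improves by
# trial and error (Q-learning).  Everything here -- the ToyGame environment,
# the linear value function, and the hyperparameters -- is an illustrative
# assumption, not the actual method of any cited system.
import numpy as np

class ToyGame:
    """Hypothetical stand-in for a game emulator: each step the agent sees a
    feature vector, picks one of n_actions, and receives a reward."""
    n_features, n_actions = 4, 3

    def reset(self):
        self.state = np.random.rand(self.n_features)
        self.steps = 0
        return self.state

    def step(self, action):
        # Reward the agent when its action matches the dominant feature.
        reward = 1.0 if action == int(np.argmax(self.state)) % self.n_actions else 0.0
        self.state = np.random.rand(self.n_features)
        self.steps += 1
        return self.state, reward, self.steps >= 50

def train(episodes=200, alpha=0.05, gamma=0.99, epsilon=0.1):
    env = ToyGame()
    weights = np.zeros((env.n_actions, env.n_features))  # linear Q-function
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy choice over the estimated action values.
            q_values = weights @ state
            if np.random.rand() < epsilon:
                action = np.random.randint(env.n_actions)
            else:
                action = int(np.argmax(q_values))
            next_state, reward, done = env.step(action)
            # One-step temporal-difference (Q-learning) update.
            target = reward + (0.0 if done else gamma * np.max(weights @ next_state))
            weights[action] += alpha * (target - q_values[action]) * state
            state = next_state
    return weights

if __name__ == "__main__":
    print(train())
```

The same loop structure carries over to the general case: only the environment (an emulator exposing many different games) and the value-function approximator (a deep network over pixels) change, which is what makes the approach suitable for playing more than one game.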
GVGP could potentially be used to create real video game AI automatically, as well as "to test game environments, including those created automatically using procedural content generation and to find potential loopholes in the gameplay that a human player could exploit".[1]
See also
- General game playing
- Artificial intelligence (video games)
- List of emerging technologies
- Outline of artificial intelligence
References
1. Levine, John; Congdon, Clare Bates; Ebner, Marc; Kendall, Graham; Lucas, Simon M.; Miikkulainen, Risto; Schaul, Tom; Thompson, Tommy (2013). "General Video Game Playing". Artificial and Computational Intelligence in Games (Schloss Dagstuhl–Leibniz-Zentrum für Informatik) 6: 77–83. Retrieved 25 April 2015.
2. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Graves, Alex; Antonoglou, Ioannis; Wierstra, Daan; Riedmiller, Martin (2013). "Playing Atari with Deep Reinforcement Learning" (PDF). Neural Information Processing Systems Workshop 2013. Retrieved 25 April 2015.
3. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; Petersen, Stig; Beattie, Charles; Sadik, Amir; Antonoglou, Ioannis; King, Helen; Kumaran, Dharshan; Wierstra, Daan; Legg, Shane; Hassabis, Demis (26 February 2015). "Human-level control through deep reinforcement learning". Nature 518: 529–533. doi:10.1038/nature14236.
4. Korjus, Kristjan; Kuzovkin, Ilya; Tampuu, Ardi; Pungas, Taivo (2014). "Replicating the Paper "Playing Atari with Deep Reinforcement Learning"" (PDF). University of Tartu. Retrieved 25 April 2015.
5. Guo, Xiaoxiao; Singh, Satinder; Lee, Honglak; Lewis, Richard L.; Wang, Xiaoshi (2014). "Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning" (PDF). NIPS Proceedings. Conference on Neural Information Processing Systems. Retrieved 25 April 2015.
6. Murphy, Tom (2013). "The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel ... after that it gets a little tricky." (PDF). SIGBOVIK. Retrieved 25 April 2015.
7. Murphy, Tom. "learnfun & playfun: A general technique for automating NES games". Retrieved 25 April 2015.
8. Teller, Swizec (October 28, 2013). "Week 2: Level 1 of Super Mario Bros. is easy with lexicographic orderings and". A geek with a hat. Retrieved 25 April 2015.
External links
- The General Video Game AI Competition
- The Arcade Learning Environment
- ConvNetJS Deep Q Learning Demo