Theoretical computer science
Theoretical computer science is the collection of topics within computer science that focus on its more abstract, logical and mathematical aspects, such as the theory of computation, the analysis of algorithms, category theory and the semantics of programming languages. Although it is not a single unified topic, its practitioners form a distinct subgroup within computer science research.
Scope
It is not easy to circumscribe the theory areas precisely; the ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT), which describes its mission as the promotion of theoretical computer science, says:
- "The field of theoretical computer science is interpreted broadly so as to include algorithms, data structures, computational complexity theory, distributed computation, parallel computation, VLSI, machine learning, computational biology, computational geometry, information theory, cryptography, quantum computation, computational number theory and algebra, program semantics and verification, automata theory, and the study of randomness. Work in this field is often distinguished by its emphasis on mathematical technique and rigor."
Even so, the "theory people" in computer science identify themselves as a distinct group. Some characterize themselves as doing the "'science' underlying the field of computing",[1] although this neglects the experimental science done in non-theoretical areas such as software systems research.
History
While formal algorithms have existed for millennia (Euclid's algorithm for determining the greatest common divisor of two numbers is still used in computation), it was not until 1936 that Alan Turing and Alonzo Church formalized the definition of an algorithm in terms of computation. Similarly, while binary and logical systems of mathematics had long existed, it was not until 1703 that Gottfried Leibniz formalized logic with binary values for true and false. The nature of mathematical proof also has an ancient history, but in 1931 Kurt Gödel showed with his incompleteness theorem that there are fundamental limits on which statements can be proved, even when they are true.
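To make the oldest example above concrete, here is a minimal sketch of Euclid's algorithm in Python; the function name gcd is chosen only for illustration, and an equivalent routine already exists in the standard library as math.gcd.

    def gcd(a, b):
        # Repeatedly replace (a, b) with (b, a mod b); when the remainder
        # reaches zero, a holds the greatest common divisor.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(252, 105))  # prints 21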
These developments led to the modern study of logic and computability, and indeed to the field of theoretical computer science as a whole. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain. As mounting biological data supported this hypothesis, with some modification, the fields of neural networks and parallel distributed processing were established.
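As a small illustration of the kind of quantity Shannon's theory is built around, the sketch below computes the entropy of a discrete probability distribution in Python; the function name entropy and the example distributions are assumptions made for this illustration, not notation from Shannon's paper.

    import math

    def entropy(probabilities):
        # Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes
        # with non-zero probability.
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    print(entropy([0.5, 0.5]))  # a fair coin: 1.0 bit per toss
    print(entropy([0.9, 0.1]))  # a biased coin: about 0.47 bits per toss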
With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. This led to the concept of a quantum computer in the latter half of the 20th century, an idea that took off in the 1990s when Peter Shor showed that such methods could be used to factor large numbers in polynomial time, which, if implemented, would render most modern public-key cryptography systems insecure.
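The speed-up in Shor's algorithm comes from finding the multiplicative order of a random base modulo the number to be factored; the rest of the procedure is classical number theory. The Python sketch below shows that classical reduction with a brute-force, exponential-time order-finding step standing in for the quantum subroutine, so it is purely illustrative and only practical for tiny inputs.

    import math
    import random

    def order(a, n):
        # Smallest r > 0 with a**r = 1 (mod n), found by brute force.
        # This is the step a quantum computer performs in polynomial time.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n):
        # Classical part of Shor's reduction from factoring to order finding.
        while True:
            a = random.randrange(2, n)
            d = math.gcd(a, n)
            if d > 1:
                return d              # the random base already shares a factor
            r = order(a, n)
            if r % 2 == 0:
                y = pow(a, r // 2, n)
                if y != n - 1:
                    # y is then a non-trivial square root of 1 mod n,
                    # so gcd(y - 1, n) yields a proper factor.
                    d = math.gcd(y - 1, n)
                    if 1 < d < n:
                        return d

    print(shor_factor(15))  # prints 3 or 5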
Modern theoretical computer science research builds on these foundational developments, but also encompasses many other mathematical and interdisciplinary problems that have been posed.
Organizations
- EATCS, the European Association for Theoretical Computer Science
- SIGACT
- Dutch Association for Theoretical Computer Science [2]
Journals and newsletters
- Information and Computation
- Theory of Computing (open access journal)
- Formal Aspects of Computing
- Journal of the ACM
- SIAM Journal on Computing
- SIGACT News
- Theoretical Computer Science
- Theory of Computing Systems
- Chicago Journal of Theoretical Computer Science
- International Journal of Foundations of Computer Science
- Foundations and Trends in Theoretical Computer Science
- Journal of Automata, Languages and Combinatorics
- Acta Informatica
- Fundamenta Informaticae
Conferences
- Annual ACM Symposium on the Theory of Computing (STOC)
- IEEE Symposium on Foundations of Computer Science (FOCS)
- Symposium on Discrete Algorithms (SODA)
- International Colloquium on Automata, Languages and Programming (ICALP)
- Symposium on Theoretical Aspects of Computer Science (STACS)
- European Symposium on Algorithms (ESA)
- Algebraic Methodology And Software Technology (AMAST)
- IEEE Symposium on Logic in Computer Science (LICS)
- International Symposium on Algorithms and Computation (ISAAC)
- Workshop on Approximation, Randomization, and Combinatorial Optimization (APPROX/RANDOM)
- Computational Complexity Conference (CCC)
- Symposium on Parallelism in Algorithms and Architectures (SPAA)
- ACM Symposium on Principles of Distributed Computing (PODC)
- Computability in Europe (CiE)