Trustworthy Computing
From Wikipedia, the free encyclopedia
The term Trustworthy Computing (TwC) has been applied to computing systems that are inherently secure, available and reliable. The Committee on Information Systems Trustworthiness’ publication, Trust in Cyberspace, defines such a system as one which
"does what people expect it to do – and not something else – despite environmental disruption, human user and operator errors, and attacks by hostile parties. Design and implementation errors must be avoided, eliminated or somehow tolerated. It is not sufficient to address only some of these dimensions, nor is it sufficient simply to assemble components that are themselves trustworthy. Trustworthiness is holistic and multidimensional."
More recently, Microsoft has adopted the term Trustworthy Computing as the title of a company initiative to improve public trust in its own commercial offerings. In large part, it is intended to address the concerns about the security and reliability of previous Microsoft Windows releases and, in part, to address general concerns about privacy and business practices. This initiative has changed the focus of many of Microsoft’s internal development efforts, but has been greeted with skepticism by some in the computer industry.
"Trusted" vs. "Trustworthy"
The terms Trustworthy Computing and Trusted Computing have distinct meanings. A given system can be trustworthy but not trusted, and vice versa.[1]
The National Security Agency (NSA) defines a trusted system or component as one "whose failure can break the security policy", and a trustworthy system or component as one "that will not fail". Trusted Computing has been defined and outlined in a set of specifications and guidelines by the Trusted Computing Platform Alliance (TCPA). These include secure input and output, memory curtaining, sealed storage, and remote attestation. As stated above, Trustworthy Computing aims to build consumer confidence in computers by making them more reliable, and thus more widely used and accepted.
History
Trustworthy computing is not a new concept. The 1960s saw an increasing dependence on computing systems by the military, the space program, financial institutions and public safety organizations. The computing industry began to identify deficiencies in existing systems and focus on areas that would address public concerns about reliance on automated systems.
In 1967, Allen-Babcock Computing identified four areas of trustworthiness that foreshadow Microsoft's pillars. Its time-sharing business allowed multiple users from multiple businesses to coexist on the same computer, presenting many of the same vulnerabilities as modern networked information systems.
Allen-Babcock’s strategy for providing trustworthy computing concentrated on four areas:
- An ironclad operating system [reliability]
- Use of trustworthy personnel [~business integrity]
- Effective access control [security]
- User requested optional privacy [privacy]
A benchmark event occurred in 1989, when 53 government and industry organizations met at a workshop to assess the challenges involved in developing trustworthy critical computer systems; the workshop recommended the use of formal methods as a solution. Among the issues addressed was the need for improved software testing methods that would guarantee a high level of reliability on initial software release. The attendees further recommended programmer certification as a means to guarantee the quality and integrity of software.
In 1996, the National Research Council recognized that the rise of the Internet had simultaneously increased society's reliance on computer systems and the vulnerability of those systems to failure. It convened the Committee on Information Systems Trustworthiness, which produced the report Trust in Cyberspace. The report reviews the benefits of trustworthy systems and the costs of untrustworthy ones, and identifies actions required for improvement. In particular, it identifies operator errors, physical disruptions, design errors, and malicious software as items to be mitigated or eliminated, and encrypted authorization, fine-grained access control, and proactive monitoring as essential to a trustworthy system.
Microsoft launched its Trustworthy Computing initiative in 2002, in direct response to the widespread Internet damage caused by the Code Red and Nimda worms in 2001. The initiative was announced in an all-employee email from Microsoft co-founder Bill Gates, redirecting the company's software development activities to include a "by design" view of security.
Microsoft and Trustworthy Computing
Microsoft CTO and Senior Vice President Craig Mundie authored a 2002 white paper defining the framework of the company's Trustworthy Computing program. Four areas were identified as the initiative's key "pillars", and Microsoft has subsequently organized its efforts to align with these goals:
- Security
- Privacy
- Reliability
- Business Integrity
Security
Microsoft's first pillar of Trustworthy Computing is security. Security has long been a part of computing, but under the initiative it becomes a priority. According to Microsoft, security goes beyond technology to include social aspects as well, as outlined in the following three components:
- Technology Investment – Investing in the expertise and technology necessary to create a secure and trustworthy computing environment.
- Responsible Leadership – Microsoft highlights the responsibility that comes with being an industry leader. This includes working with law enforcement agencies, government experts, academia, and the private sector to create the partnerships necessary to establish and enforce secure computing.
- Customer Guidance and Engagement – It is important to develop trust by educating consumers with training and information on best practices for secure computing.
Privacy
For computing to become ubiquitous in connecting people and transmitting information over various networks and services, it is critical that information be protected and kept private. Microsoft makes privacy the second pillar of Trustworthy Computing and commits to making privacy a priority in the design, development, and testing of its products. To ensure this privacy, it is also important to contribute to standards and policies created by industry organizations and government. Privacy policies must be honored and practiced across the industry.
Another essential element of privacy is giving users a sense of control over their personal information, through ongoing education, information, and notification of policies and procedures. In a world of spam, hackers, and unwanted pop-ups, computer users need tools and products that empower them, especially when it comes to protecting their personal information.
Reliability
Microsoft’s third pillar of Trustworthy Computing is reliability. Microsoft uses a fairly broad definition to encompass all technical aspects related to availability, performance and disruption recovery. It is intended to be a measure not only of whether a system is working, but whether it will continue working in non-optimal situations.
Six key attributes have been defined for a reliable system:
- Resilient. The system will continue to provide the user a service in the face of internal or external disruption.
- Recoverable. Following a user- or system-induced disruption, the system can be easily restored, through instrumentation and diagnosis, to a previously known state with no data loss.
- Controlled. Provides accurate and timely service whenever needed.
- Undisruptable. Required changes and upgrades do not disrupt the service being provided by the system.
- Production-ready. On release, the system contains minimal software bugs, requiring a limited number of predictable updates.
- Predictable. It works as expected or promised, and what worked before works now.
Business Integrity
Microsoft's fourth pillar of Trustworthy Computing is business integrity. Many view this pillar as a reaction by the technology firm to the accounting scandals of Enron, WorldCom and others, but it also speaks to concerns about software developer integrity and responsiveness.
Microsoft identifies two major areas of concentration for business integrity. These are responsiveness: “The company accepts responsibility for problems, and takes action to correct them. Help is provided to customers in planning for, installing and operating the product”; and transparency: “The company is open in its dealings with customers. Its motives are clear, it keeps its word, and customers know where they stand in a transaction or interaction with the company.”
References
- ^ Irvine, Cynthia E., "What Might We Mean by 'Secure Code' and How Might We Teach What We Mean?", Proceedings of the Workshop on Secure Software Engineering Education and Training, Oahu, HI, April 2006. (PDF)
External links
- Trusted Computing Group
- Wave Systems Corp. Managing Trusted Computing Platforms (TPM)
- The Age of Corporate Open Source Enlightenment, Paul Ferris, ACM Press
- The Controversy over Trusted Computing, Catherine Flick, University of Sydney
- Email from Bill Gates to Microsoft Employees, Wired News, January 2002
- Trust in Cyberspace, Committee on Information Systems Trustworthiness
- Trustworthy Computing, Microsoft
- Trustworthy Computing, Craig Mundie, Microsoft