Trustworthy computing
The term trustworthy computing (TwC) has been applied to computing systems that are inherently secure, available, and reliable. The Committee on Information Systems Trustworthiness's publication, Trust in Cyberspace, defines such a system as one that
“does what people expect it to do – and not something else – despite environmental disruption, human user and operator errors, and attacks by hostile parties. Design and implementation errors must be avoided, eliminated, or somehow tolerated. It is not sufficient to address only some of these dimensions, nor is it sufficient simply to assemble components that are themselves trustworthy. Trustworthiness is holistic and multidimensional.”
More recently, Microsoft has adopted the term Trustworthy Computing as the title of a company initiative to improve public trust in its own commercial offerings. In large part, it is intended to address the concerns about the security and reliability of previous Microsoft Windows releases and, in part, to address general concerns about privacy and business practices. This initiative has changed the focus of many of Microsoft’s internal development efforts, but has been greeted with skepticism by some in the computer industry.
"Trusted" vs. "Trustworthy"
The terms trustworthy computing and trusted computing have distinct meanings. A given system can be trustworthy but not trusted, and vice versa.[1]
The National Security Agency defines a trusted system or component as one "whose failure can break the security policy", and a trustworthy system or component as one "that will not fail". Trusted computing has been defined and outlined in a set of specifications and guidelines from the Trusted Computing Platform Alliance (TCPA), covering secure input and output, memory curtaining, sealed storage, and remote attestation. Trustworthy computing, as stated above, aims to build consumer confidence in computers by making them more reliable, and thus more widely used and accepted.
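To make one of the TCPA mechanisms concrete, the following is a minimal, illustrative sketch of remote attestation: the platform extends a measurement register with a hash of each boot component, then authenticates the final value, bound to a verifier-supplied nonce. All names here are hypothetical, and a keyed hash (HMAC) stands in for the asymmetric signed "quote" a real TPM would produce, purely to keep the example self-contained.

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real TPM would use an asymmetric
# attestation key whose public half the verifier trusts.
DEVICE_KEY = b"per-device secret provisioned at manufacture"

def extend(register: bytes, component: bytes) -> bytes:
    """Mimic a measurement-register extend: new = H(old || H(component))."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measure_boot(components: list[bytes]) -> bytes:
    """Fold each boot component into the register, starting from zeros."""
    register = b"\x00" * 32
    for component in components:
        register = extend(register, component)
    return register

def quote(register: bytes, nonce: bytes) -> bytes:
    """Platform attests to its state, bound to the verifier's fresh nonce."""
    return hmac.new(DEVICE_KEY, register + nonce, hashlib.sha256).digest()

def verify(known_good: list[bytes], nonce: bytes, reported: bytes) -> bool:
    """Verifier recomputes the expected quote from known-good components."""
    expected = quote(measure_boot(known_good), nonce)
    return hmac.compare_digest(expected, reported)

good = [b"bootloader v1", b"kernel v5"]
nonce = b"fresh-random-nonce"
assert verify(good, nonce, quote(measure_boot(good), nonce))

tampered = [b"bootloader v1", b"kernel v5 (modified)"]
assert not verify(good, nonce, quote(measure_boot(tampered), nonce))
```

Because each extend hashes the previous register value, the final measurement depends on every component and its order, so a platform that booted modified code cannot reproduce the known-good quote.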
History
Trustworthy computing is not a new concept. The 1960s saw an increasing dependence on computing systems by the military, the space program, financial institutions and public safety organizations. The computing industry began to identify deficiencies in existing systems and focus on areas that would address public concerns about reliance on automated systems.
In 1967, Allen-Babcock Computing identified four areas of trustworthiness that foreshadow Microsoft's own. Its time-sharing business allowed multiple users from multiple businesses to coexist on the same computer, presenting many of the same vulnerabilities as modern networked information systems.
Allen-Babcock’s strategy for providing trustworthy computing concentrated on four areas:
- An ironclad operating system [reliability]
- Use of trustworthy personnel [~business integrity]
- Effective access control [security]
- User requested optional privacy [privacy]
A benchmark event occurred in 1989, when 53 government and industry organizations met for a workshop that assessed the challenges involved in developing trustworthy critical computer systems and recommended the use of formal methods as a solution. Among the issues addressed was the need for improved software testing methods that would guarantee a high level of reliability on initial software release. The attendees further recommended programmer certification as a means of guaranteeing the quality and integrity of software.
In 1996, the National Research Council recognized that the rise of the Internet had simultaneously increased societal reliance on computer systems and the vulnerability of those systems to failure. The Committee on Information Systems Trustworthiness was convened, producing the report Trust in Cyberspace. The report reviews the benefits of trustworthy systems and the costs of untrustworthy ones, and identifies actions required for improvement. In particular, it names operator errors, physical disruptions, design errors, and malicious software as items to be mitigated or eliminated, and identifies encrypted authorization, fine-grained access control, and proactive monitoring as essential to a trustworthy system.
Microsoft launched its Trustworthy Computing initiative in 2002, in direct response to the widespread Internet damage caused by the Code Red and Nimda worms in 2001. The initiative was announced in an all-employee email from Microsoft co-founder Bill Gates, redirecting the company's software development activities to include a "by design" view of security.
Microsoft closed its Trustworthy Computing Group as part of a restructuring plan, unveiled on September 18, 2014, that cut 2,100 jobs. Responsibilities for the group's security and privacy programs were folded into the Cloud & Enterprise Division and the Legal & Corporate Affairs group.[2]
Microsoft and Trustworthy Computing
Microsoft CTO and Senior Vice President Craig Mundie authored a 2002 whitepaper defining the framework of the company's Trustworthy Computing program and identifying four key areas for the initiative.
Security
The first area of the initiative is security. Microsoft stated that security goes beyond technology to include social aspects as well, outlined in three components: technology investment, responsible leadership, and customer guidance and engagement. Microsoft committed to investing in technology to create a more secure and trustworthy computing environment, and sought to highlight the responsibility that comes with being an industry leader. Efforts included working with law enforcement agencies, government experts, academia, and the private sector on security in computing.
Privacy
Microsoft stated that privacy is essential if computing is to be considered an important part of communication worldwide. The company made privacy the second area of the Trustworthy Computing campaign and a priority in the design, development, and testing of its products. Microsoft argued that privacy policies should be set by both industry leaders and public authorities, and it also sought to curb the influence of hackers on computing.
Reliability
Microsoft’s third area of the project was reliability. Microsoft uses a fairly broad definition, encompassing all technical aspects related to availability, performance, and disruption recovery. Reliability is intended to be a measure not only of whether a system is working, but of whether it will continue working in non-optimal situations.
Six key attributes have been defined for a reliable system:
- Resilient. The system will continue to provide the user a service in the face of internal or external disruption.
- Recoverable. Following a user- or system-induced disruption, the system can be easily restored, through instrumentation and diagnosis, to a previously known state with no data loss.
- Controlled. Provides accurate and timely service whenever needed.
- Undisruptable. Required changes and upgrades do not disrupt the service being provided by the system.
- Production-ready. On release, the system contains minimal software bugs, requiring a limited number of predictable updates.
- Predictable. It works as expected or promised, and what worked before works now.
Business Integrity
Microsoft’s fourth pillar of Trustworthy Computing is business integrity. Many view this as a reaction by the technology firm to the accounting scandals of Enron, WorldCom, and others, but it also speaks to concerns regarding software developer integrity and responsiveness.
Microsoft identifies two major areas of concentration for business integrity. These are responsiveness: “The company accepts responsibility for problems, and takes action to correct them. Help is provided to customers in planning for, installing and operating the product”; and transparency: “The company is open in its dealings with customers. Its motives are clear, it keeps its word, and customers know where they stand in a transaction or interaction with the company.”
See also
- Security Development Lifecycle
References
- ↑ Irvine, Cynthia E., "What Might We Mean by 'Secure Code' and How Might We Teach What We Mean?", Proceedings Workshop on Secure Software Engineering Education and Training, Oahu, HI, April 2006. (PDF)
- ↑ Leyden, John. "Blood-crazed Microsoft axes Trustworthy Computing Group". The Register. Retrieved 2014-09-19.
External links
- Trusted Computing Group
- Wave Systems Corp. Managing Trusted Computing Platforms (TPM)
- The Age of Corporate Open Source Enlightenment, Paul Ferris, ACM Press
- The Controversy over Trusted Computing, Catherine Flick, University of Sydney
- Email from Bill Gates to Microsoft Employees, Wired News, January, 2002
- Trust in Cyberspace, Committee on Information Systems Trustworthiness
- Trustworthy Computing, Microsoft
- Trustworthy Computing, Craig Mundie, Microsoft