Usability
From Wikipedia, the free encyclopedia
Usability denotes the ease with which people can employ a particular tool or other human-made object to achieve a particular goal. Usability can also refer to methods of measuring ease of use and to the study of the principles behind an object's perceived efficiency or elegance.
In human-computer interaction and computer science, usability usually refers to the elegance and clarity with which the interaction with a computer program or a web site is designed. The term is also often used in the context of products like consumer electronics, in the area of communication, and for knowledge-transfer objects (such as a cookbook, a document or online help). It can also refer to the efficient design of mechanical objects such as a door handle or a hammer.
Introduction
The primary notion of usability is that an object designed with the users' psychology and physiology in mind is, for example:
- More efficient to use—it takes less time to accomplish a particular task
- Easier to learn—operation can be learned by observing the object
- More satisfying to use
Complex computer systems are finding their way into everyday life, and at the same time the market is becoming saturated with competing brands. This has led to usability becoming more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented rather than technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called "contextual inquiry" does this in the naturally occurring context of the user's own environment.
In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team.[1]
The term user friendly is often used as a synonym for usable, though it may also refer to accessibility.
There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle).
Usability is also very important in website development. According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after scanning the home page—for a few seconds at most."[2]
Definition
Usability is a qualitative attribute that assesses how easy user interfaces are to use. The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, where usability is a part of "usefulness" and is composed of:
- Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
- Efficiency: Once users have learned the design, how quickly can they perform tasks?
- Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
- Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
- Satisfaction: How pleasant is it to use the design?
Usability is often associated with the functionalities of the product (cf. ISO definition, below), in addition to being solely a characteristic of the user interface (cf. framework of system acceptability, also below, which separates usefulness into utility and usability). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view.
When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the interface". Each component may be measured subjectively against criteria, e.g. principles of user interface design, to provide a metric, often expressed as a percentage.
It is important to distinguish between usability testing and usability engineering. Usability testing is the measurement of ease of use of a product or piece of software. In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability.
Usability is an example of a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with ease-of-use of a system.
Investigation
The key principle for maximizing usability is to employ iterative design, which progressively refines the design through evaluation from the early stages of design. The evaluation steps enable the designers and developers to incorporate user and client feedback until the system reaches an acceptable level of usability.
The preferred method for ensuring usability is to test actual users on a working system. Although there are many methods for studying usability, the most basic and useful is user testing, which has three components:
- Get some representative users.
- Ask the users to perform representative tasks with the design.
- Observe what the users do, where they succeed, and where they have difficulties with the user interface.
It's important to test users individually and let them solve any problems on their own. If you help them or direct their attention to any particular part of the screen, you will bias the test. Rather than running a big, expensive study, it's better to run many small tests and revise the design between each one so you can fix the usability flaws as you identify them. Iterative design is the best way to increase the quality of user experience. The more versions and interface ideas you test with users, the better.
Usability plays a role in each stage of the design process. The resulting need for multiple studies is one reason to make individual studies fast and cheap, and to perform usability testing early in the design process. Here are the main steps:
- Before starting the new design, test the old design to identify the good parts that you should keep or emphasize, and the bad parts that give users trouble.
- Test competitors' designs to get data on a range of alternative designs.
- Conduct a field study to see how users behave in their natural habitat.
- Make paper prototypes of one or more new design ideas and test them. The less time you invest in these design ideas the better, because you'll need to change them all based on the test results.
- Refine the design ideas that test best through multiple iterations, gradually moving from low-fidelity prototyping to high-fidelity representations that run on the computer. Test each iteration.
- Inspect the design relative to established usability guidelines, whether from your own earlier studies or published research.
- Once you decide on and implement the final design, test it again. Subtle usability problems always creep in during implementation.
Don't defer user testing until you have a fully implemented design. If you do, it will be impossible to fix the vast majority of the critical usability problems that the test uncovers. Many of these problems are likely to be structural, and fixing them would require major rearchitecting. The only way to a high-quality user experience is to start user testing early in the design process and to keep testing every step of the way.
ISO standards
ISO/TR 16982:2002
ISO/TR 16982:2002, "Ergonomics of human-system interaction – Usability methods supporting human-centred design", provides information on human-centred usability methods which can be used for design and evaluation. It details the advantages, disadvantages and other factors relevant to using each usability method.
It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context.
The main users of ISO/TR 16982:2002 will be project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole.
The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered. Selection of appropriate usability methods should also take account of the relevant life-cycle process.
ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers.
ISO/TR 16982:2002 does not specify the details of how to implement or carry out the usability methods described.
ISO 9241
ISO 9241 is a multi-part standard covering a number of aspects for people working with computers. Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs) it is being retitled to the more generic Ergonomics of Human System Interaction by ISO. As part of this change, ISO is renumbering the standard so that it can include many more topics. The first part to be renumbered was part 10 (now renumbered to part 110).
Part 1 is a general introduction to the rest of the standard. Part 2 addresses task design for working with computer systems. Parts 3–9 deal with physical characteristics of computer equipment. Parts 11–19 and Part 110 deal with usability aspects of software, including Part 110 (a general set of usability heuristics for the design of different types of dialogue) and Part 11 (general guidance on the specification and measurement of usability).
Usability considerations
Usability includes considerations such as:
- Who are the users, what do they know, and what can they learn?
- What do users want or need to do?
- What is the general background of the users?
- What is the context in which the user is working?
- What has to be left to the machine?
Answers to these can be obtained by conducting user and task analysis at the start of the project.
Other considerations
- Can users easily accomplish their intended tasks? For example, can users accomplish intended tasks at their intended speed?
- How much training do users need?
- What documentation or other supporting materials are available to help the user? Can users find the solutions they seek in these materials?
- What and how many errors do users make when interacting with the product?
- Can the user recover from errors? What do users have to do to recover from errors? Does the product help users recover from errors? For example, does software present comprehensible, informative, non-threatening error messages?
- Are there provisions for meeting the special needs of users with disabilities? (accessibility)
Examples of ways to find answers to these and other questions are: user-focused requirements analysis, building user profiles, and usability testing.
Evaluation methods
There are a variety of methods currently used to evaluate usability. Certain methods make use of data gathered from users, while others rely on usability experts. There are usability evaluation methods that apply to all stages of design and development, from product definition to final design modifications. When choosing a method you must consider the cost, time constraints, and appropriateness of the method. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the following subcategories:
Cognitive modeling methods
Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or to predict errors and pitfalls during the design process. A few examples of cognitive models include:
- Parallel Design
With parallel design, several people create an initial design from the same set of requirements. Each person works independently, and when finished, shares his/her concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps to generate many different, diverse ideas and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept.
- GOMS
GOMS is an acronym that stands for Goals, Operators, Methods, and Selection rules. It is a family of techniques that analyzes the user complexity of interactive systems. Goals are what the user has to accomplish. An operator is an action performed in service of a goal. A method is a sequence of operators that accomplish a goal. Selection rules specify which method should be used to satisfy a given goal, based on the context.
- Human Processor Model
Sometimes it is useful to break a task down and analyze each individual aspect separately. This allows the tester to locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information. The model human processor divides this processing among perceptual, cognitive, and motor processors, whose parameters are estimated below.
Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors. Variables that affect these can include subject age, ability, and the surrounding environment. For a younger adult, reasonable estimates are:
| Parameter | Mean | Range |
|---|---|---|
| Eye movement time | 230 ms | 70–700 ms |
| Decay half-life of visual image storage | 200 ms | 90–1000 ms |
| Perceptual processor cycle time | 100 ms | 50–200 ms |
| Cognitive processor cycle time | 70 ms | 25–170 ms |
| Motor processor cycle time | 70 ms | 30–100 ms |
| Effective working memory capacity | 7 items | 5–9 items |
Long-term memory is believed to have an infinite capacity and decay time.[3]
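As an illustration (not part of the original model description), the mean cycle times in the table above can be combined to estimate a simple reaction task: one perceptual cycle to register the stimulus, one cognitive cycle to decide on a response, and one motor cycle to execute it.

```python
# Sketch: estimating task time by summing mean processor cycle times
# from the table above. The one-cycle-per-processor breakdown is an
# illustrative assumption for a simple stimulus-response task.

PERCEPTUAL_MS = 100  # mean perceptual processor cycle time
COGNITIVE_MS = 70    # mean cognitive processor cycle time
MOTOR_MS = 70        # mean motor processor cycle time

def reaction_time_ms(perceptual_cycles=1, cognitive_cycles=1, motor_cycles=1):
    """Estimate task time as a sum of processor cycles."""
    return (perceptual_cycles * PERCEPTUAL_MS
            + cognitive_cycles * COGNITIVE_MS
            + motor_cycles * MOTOR_MS)

print(reaction_time_ms())  # 240 ms for one cycle of each processor
```

More complex tasks are modeled by adding cycles, e.g. extra cognitive cycles for a choice among alternatives.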
- Keystroke level modeling
Keystroke-level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity. See Keystroke level model for more information.
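A keystroke-level estimate sums standard operator times over the sequence of actions a task requires. The operator values below are the commonly cited estimates from Card, Moran, and Newell; treat both the values and the example task as illustrative.

```python
# Hedged sketch of a keystroke-level model (KLM) calculation.
OPERATOR_SECONDS = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with a mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators):
    """Sum the time for a sequence of KLM operators, e.g. 'MHPK'."""
    return sum(OPERATOR_SECONDS[op] for op in operators)

# Example: think (M), move hand to mouse (H), point (P), click (K)
print(round(klm_estimate("MHPK"), 2))  # 3.13 seconds
```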
Inspection methods
These usability evaluation methods involve observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data as tasks can be timed and recorded.
- Card Sorting
Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a Web site in a way that makes sense to them. Participants review items from a Web site and then group these items into categories. Card sorting helps to learn how users think about the content and how they would organize the information on the Web site. Card sorting helps to build the structure for a Web site, decide what to put on the home page, and label the home page categories. It also helps to ensure that information is organized on the site in a way that is logical to users.
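One common way to analyze card-sort results (a sketch with hypothetical data, not part of the method's definition) is to count how often participants grouped each pair of items together; frequently co-grouped pairs are candidates for the same category.

```python
# Build a pairwise co-occurrence count from card-sort groupings.
from itertools import combinations
from collections import Counter

# Hypothetical groupings from three participants
sorts = [
    [{"prices", "shipping"}, {"contact", "about"}],
    [{"prices", "shipping", "contact"}, {"about"}],
    [{"prices", "shipping"}, {"contact", "about"}],
]

pair_counts = Counter()
for participant in sorts:
    for group in participant:
        for a, b in combinations(sorted(group), 2):
            pair_counts[(a, b)] += 1

# Items grouped together most often suggest a shared category.
print(pair_counts.most_common(1))  # [(('prices', 'shipping'), 3)]
```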
- Ethnography
Ethnographic analysis is derived from anthropology. Field observations are taken at a site of a possible user, which track the artifacts of work such as Post-It notes, items on desktop, shortcuts, and items in trash bins. These observations also gather the sequence of work and interruptions that determine the user’s typical day.
- Heuristic Evaluation
Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examine the interface against recognized usability principles (the "heuristics"). It is the most popular of the usability inspection methods, as it is quick, cheap, and easy.
Heuristic evaluation was developed to aid in the design of computer user interfaces. It relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics). It is widely used because of its speed and cost-effectiveness. Jakob Nielsen's list of ten heuristics is the most commonly used in industry. By determining which guidelines are violated, the usability of a device can be assessed.
- Usability Inspection
Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users.
- Pluralistic Inspection
Pluralistic inspections are meetings where users, developers, and human factors people meet together to discuss and evaluate a task scenario step by step. The more people who inspect the scenario for problems, the higher the probability of finding them. In addition, the more interaction in the team, the faster the usability issues are resolved.
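The claim that more inspectors find more problems can be illustrated with a simple model (an assumption for illustration, not part of the inspection method itself): if each inspector independently finds a given problem with probability p, then n inspectors together find it with probability 1 - (1 - p)^n.

```python
# Independence model of problem discovery (illustrative assumption).
def discovery_probability(p, n):
    """Probability that at least one of n inspectors finds a problem
    that each finds independently with probability p."""
    return 1 - (1 - p) ** n

# Diminishing returns: each added inspector helps, but less and less.
for n in (1, 3, 5):
    print(n, round(discovery_probability(0.3, n), 2))  # 0.3, 0.66, 0.83
```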
- Consistency Inspection
In consistency inspection, expert designers review products or projects to ensure consistency across multiple products, checking whether a design does things in the same way as their own designs.
- Activity Analysis
Activity analysis is a usability method used in preliminary stages of development to get a sense of the situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected is qualitative and useful for defining the problem. It should be used when you wish to frame what is needed, or "What do we want to know?"
Inquiry methods
The following usability evaluation methods involve collecting qualitative data from users. Although the data collected is subjective, it provides valuable information on what the user wants.
- Task Analysis
Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, we often do a third analysis: understanding users' environments (physical, social, cultural, and technological environments).
- Focus Groups
A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability. Used in the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product. An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help get verbatim quotes, and clips are often used to summarize opinions. The data gathered is not usually quantitative, but can help get an idea of a target group's opinion.
- Questionnaires/Surveys
Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. This is a very common method, and often does not appear to be a survey at all, but just a warranty card.
Prototyping methods
- Rapid Prototyping
Rapid prototyping is a method used in early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since it is implemented before any real programming begins. One such method of rapid prototyping is paper prototyping.
Testing methods
These usability evaluation methods involve testing with subjects and yield the most quantitative data. Usually recorded on video, they provide task completion times and allow for observation of attitude.
- Remote usability testing
Remote usability testing is a technique that uses the users' own environment (e.g. home or office), transforming it into a usability laboratory where user observation can be done with screen-sharing applications.
- Thinking Aloud
The think-aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting a user to verbalize their thought processes as they perform a task or set of tasks. Often an instructor is present to prompt the user into being more vocal as they work. Similar to the subjects-in-tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which cannot usually be discerned from a survey or questionnaire.
- Subjects-in-Tandem
Subjects-in-tandem is pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to think out loud and through their verbalized thoughts designers learn where the problem areas of a design are. Subjects very often provide solutions to the problem areas to make the product easier to use.
Other methods
- Cognitive walkthrough
Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful for understanding the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users.
- Benchmarking
Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system.
- Meta-Analysis
Meta-analysis is a statistical procedure for combining results across studies to integrate the findings. The term was coined in 1976 to describe a quantitative literature review. This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide accurate quantitative support.
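A minimal sketch of the statistical core of a fixed-effect meta-analysis (all study values below are hypothetical): each study's effect size is weighted by the inverse of its variance, so more precise studies count more toward the pooled estimate.

```python
# Fixed-effect, inverse-variance pooling of per-study effect sizes.
# The (effect size, variance) pairs are hypothetical illustrations.
studies = [
    (0.50, 0.04),
    (0.30, 0.09),
    (0.40, 0.01),
]

weights = [1 / var for _, var in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
print(round(pooled, 2))  # 0.41, dominated by the most precise study
```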
- Persona
Personas are fictitious characters that are created to represent the different user types within a targeted demographic that might use a site or product. Alan Cooper introduced the concept of using personas as a part of interactive design in 1998 in his book The Inmates Are Running the Asylum, but had used this concept since as early as 1975. Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of designing so that designers have a tangible idea of who the users of their product will be. Personas are the archetypes that represent actual groups of users and their needs, which can be a general description of person, context, or usage scenario. This technique turns marketing data on target user population into a few physical concepts of users to create empathy among the design team.
Evaluating with tests and metrics
Regardless of how carefully a system is designed, all theories must be tested using usability tests. Usability tests involve typical users using the system (or product) in a realistic environment [see simulation]. Observation of the users' behavior, emotions, and difficulties while performing different tasks often identifies areas of improvement for the system.
Prototypes
It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost constraints, size, and design constraints usually lead the designer to creating a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, thus making several small models of each component of the system. The types of usability prototypes may vary from using paper models, index cards, hand drawn models, or storyboards.[4]
Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design changes. However, they are sometimes not an adequate representation of the whole system, are often not durable, and testing results may not parallel those of the actual system.
Metrics
While conducting usability tests, designers must decide what they are going to measure: the usability metrics. These metrics are often variable and change in conjunction with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics. Qualitative design phases, such as general usability (can the task be accomplished?) and user satisfaction, are also typically done with smaller groups of subjects.[5] Using inexpensive prototypes with small user groups provides more detailed information, because of the more interactive atmosphere and the designer's ability to focus more on the individual user.
As the designs become more complex, the testing must become more formalized. Testing equipment becomes more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction by asking the user to complete various tasks. These categories are measured by the percentage of users who complete the task, how long it takes to complete the task, ratios of success to failure, time spent on errors, the number of errors, satisfaction rating scales, the number of times the user seems frustrated, etc.[6] Additional observations of the users give designers insight on navigation difficulties, controls, conceptual models, etc. The ultimate goal of analyzing these metrics is to find/create a prototype design that users like and use to successfully perform given tasks.[4]
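The metrics above can be computed directly from session records. A sketch with hypothetical data, where each session logs task completion, time on task in seconds, and error count:

```python
# Hypothetical usability-test session logs: (completed?, seconds, errors)
sessions = [
    (True, 42.0, 1),
    (True, 35.5, 0),
    (False, 90.0, 4),
    (True, 50.5, 2),
]

completed = [s for s in sessions if s[0]]
success_rate = len(completed) / len(sessions)
mean_time = sum(t for _, t, _ in completed) / len(completed)
total_errors = sum(e for _, _, e in sessions)

print(f"success rate: {success_rate:.0%}")     # 75%
print(f"mean time on task: {mean_time:.1f}s")  # 42.7s (completed tasks)
print(f"errors observed: {total_errors}")      # 7
```

Whether time on task should include failed attempts is a reporting decision; here only completed sessions contribute to the mean.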
After conducting usability tests, it is important for the designer to record what was observed and why such behavior occurred, and to modify the model according to the results. It is often quite difficult to distinguish the source of design errors from what the user did wrong. However, effective usability tests will not by themselves generate a solution to the problems, but rather provide modified design guidelines for continued testing.
Benefits of usability
The key benefits of usability are:
- Higher revenues through increased sales
- Increased user efficiency
- Reduced development costs
- Reduced support costs
Corporate integration
An increase in usability generally positively affects several facets of a company’s output quality. In particular, the benefits fall into several common areas:[7]
- Increased productivity
- Decreased training and support costs
- Increased sales and revenues
- Reduced development time and costs
- Reduced maintenance costs
- Increased customer satisfaction
Increased usability in the workplace fosters several responses from employees. Along with any positive feedback, "workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity."[8] In order to create standards, companies often implement experimental design techniques that create baseline levels. Areas of concern in an office environment include (though are not necessarily limited to):[9]
- Working Posture
- Design of Workstation Furniture
- Screen Displays
- Input Devices
- Organizational Issues
- Office Environment
- Software Interface
By working to improve said factors, corporations can achieve their goals of increased output at lower costs, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates to overall improvement. For example, making a piece of software's user interface easier to understand would reduce the need for extensive training. The improved interface would also tend to lower the time needed to perform necessary tasks, and so would both raise the productivity levels for employees and reduce development time (and thus costs). It is important to note that the aforementioned factors are not mutually exclusive; rather, they should be understood to work in conjunction to form the overall workplace environment.
Conclusion
Usability is now recognized as an important software quality attribute, earning its place among more traditional attributes such as performance and robustness. Indeed, various academic programs focus on usability. Several usability consultancy companies have also emerged, and traditional consultancy and design firms are offering similar services.
See also
- Accessibility
- Experience design
- Fitts's law
- Gemba or Customer visit
- Human factors
- GOMS
- GUI
- List of System Quality Attributes
- Information architecture
- Interaction design
- Internationalization
- Learnability
- Universal Usability
- Usability testing
- USable
- Web usability
References
- ^ Holm, Ivar (2006). Ideas and Beliefs in Architecture and Industrial design: How attitudes, orientations, and underlying assumptions shape the built environment. Oslo School of Architecture and Design. ISBN 8254701741.
- ^ http://www.informationweek.com/773/web.htm
- ^ Card,S.K., Moran, T.P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
- ^ a b Wickens, C.D et al (2004). An Introduction to Human Factors Engineering (2nd Ed), Pearson Education, Inc., Upper Saddle River, NJ : Prentice Hall.
- ^ Dumas, J.S. and Redish, J.C. (1999). A Practical Guide to Usability Testing (revised ed.), Bristol, U.K.: Intellect Books.
- ^ Kuniavsky, M. (2003). Observing the User Experience: A Practitioner’s Guide to User Research, San Francisco, CA: Morgan Kaufmann.
- ^ http://www.usabilityprofessionals.org/usability_resources/usability_in_the_real_world/benefits_of_usability.html
- ^ Landauer, T. K. (1996). The trouble with computers. Cambridge, MA, The MIT Press. p158.
- ^ McKeown, Celine (2008). Office ergonomics: practical applications. Boca Raton, FL, Taylor & Francis Group, LLC.
- Donald A. Norman (2002), The Design of Everyday Things, Basic Books, ISBN 0-465-06710-7
- Jakob Nielsen (1994), Usability Engineering, Morgan Kaufmann Publishers, ISBN 0-12-518406-9
- Jakob Nielsen (1994), Usability Inspection Methods, Morgan John Wiley & Sons, ISBN 0-471-01877-5
- Ben Shneiderman: Software Psychology, 1980, ISBN 0-87626-816-5
- Andreas Holzinger: Usability Engineering for Software Developers, Communications of the ACM (ISSN 0001-0782), Vol. 48, Issue 1 (January 2005), 71-74
- Alan Cooper: The Inmates Are Running the Asylum,1999,Sams Publishers, ISBN 0672316498
- Alan Cooper: The Origin of Personas, http://www.cooper.com/insights/journal_of_design
External links
Professional associations
- Usability Solutions — an organization for research-based usability solutions
- Usability Professionals' Association — an organization for people practicing and promoting usability
- ACM SIGCHI — the ACM's Special Interest Group on Computer-Human Interaction
- STC Usability & User Experience Community — Usability and User Experience Community of the Society for Technical Communication
- Human Factors and Ergonomics Society (HFES)
Conferences
- Usability Week Conference — Held four times a year in various locations around the world by the Nielsen Norman Group with a focus on usability and interactive design
- UPA Conference — a week-long event held by the UPA, covering various aspects of usability in design
- STC Annual Conference — held once a year by the STC, it involves many sessions relating to usability and the user experience
Research and peer-reviewed journals
- UCL Interaction Centre, University College London
- The Centre for HCI Design is London's largest HCI-related research group, City University, School of Informatics
- The Journal of Usability Studies
- The Virtual Usability Lab
Design critiques
- Bad Human Factors Designs — examples of bad design
- User Centered — critiques on design, usability discussion
- Bad Usability Calendar — Print Calendar illustrating bad designs
Other resources
- Interaction-Design.org — an open-content encyclopedia about usability
- jthom.best.vwh.net/usability — Online guide to usability methods
- Usability.gov
- Usabilitybok.org
- usabilityfirst.com — Online guide to usability methods and resources
- The Usability Methods Toolbox — a complete guide to methods and techniques used in usability evaluation