Computer user satisfaction
From Wikipedia, the free encyclopedia
Computer user satisfaction (and closely related concepts such as System Satisfaction, User Satisfaction, Computer System Satisfaction, End User Computing Satisfaction) deals with user attitudes to computer systems in the context of their environments. In a broader sense, the definition can be extended to user satisfaction with any computer-based electronic appliance. The term Computer User Satisfaction is abbreviated to user satisfaction in this article. In general, the development of techniques for defining and measuring user satisfaction has been ad hoc and open to question. However, according to key scholars such as DeLone and McLean (2002), user satisfaction is a key measure of computer system success, if not in fact synonymous with it. Interest in the concept thus continues. Scholars distinguish between user satisfaction and usability as part of Human-Computer Interaction.
The Computer User Satisfaction Instrument and the User Information Satisfaction Short-form
Bailey and Pearson’s (1983) 39‑Factor Computer User Satisfaction (CUS) instrument and its derivative, the User Information Satisfaction (UIS) short-form of Baroudi, Olson and Ives, are typical of instruments which one might term 'factor-based'. They consist of lists of factors, each of which the respondent is asked to rate on one or more multiple-point scales. Bailey and Pearson’s CUS asked for five ratings for each of 39 factors. The first four scales were for quality ratings and the fifth was an importance rating. From the fifth rating of each factor, they found that their sample of users rated as most important: accuracy, reliability, timeliness, relevancy and confidence in the system. The factors of least importance were found to be feelings of control, volume of output, vendor support, degree of training, and organisational position of EDP (the electronic data processing, or computing, department). However, the CUS requires 39 x 5 = 195 individual seven‑point scale responses. Ives, Olson and Baroudi (1983), amongst others, thought that so many responses could result in errors of attrition, caused by the increasing carelessness of respondents as they fill in a long questionnaire. They therefore developed the UIS, which asks the respondent to rate only 13 factors and so remains in significant use at the present time. Two seven‑point scales are provided per factor (one for each quality), requiring 26 individual responses in all.
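The structure of such a factor-based instrument can be sketched in a few lines of code. The following is a minimal illustration only, assuming a simple importance-weighted averaging scheme; it is not Bailey and Pearson's actual scoring procedure, and the factor names and ratings are invented for the example.

```python
# Hypothetical scoring sketch for a factor-based instrument in the
# style of the CUS: each factor receives four 7-point quality
# ratings plus one 7-point importance rating.

def factor_score(quality_ratings):
    """Mean of a factor's quality ratings (each on a 1-7 scale)."""
    return sum(quality_ratings) / len(quality_ratings)

def weighted_satisfaction(responses):
    """Importance-weighted average of per-factor quality scores.

    `responses` maps a factor name to a tuple of
    (list of four quality ratings, importance rating).
    """
    total_weight = sum(imp for _, imp in responses.values())
    weighted_sum = sum(factor_score(q) * imp
                       for q, imp in responses.values())
    return weighted_sum / total_weight

# Two of the instrument's factors, with invented ratings:
responses = {
    "accuracy":   ([6, 7, 6, 6], 7),   # high quality, high importance
    "timeliness": ([4, 5, 4, 4], 6),   # middling quality
}
print(round(weighted_satisfaction(responses), 2))  # prints 5.33
```

A full CUS administration would involve 39 such factor entries (195 responses), while the UIS short-form reduces this to 13 factors with two ratings each (26 responses).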
The problem with the dating of factors
An early criticism of these measures was that the factors date as computer technology evolves and changes. This suggested the need for updates and led to a sequence of other factor-based instruments. Doll and Torkzadeh (1988), for example, produced a factor-based instrument for a new type of user emerging at the time, called an end-user. They identified end-users as users who tend to interact with a computer interface only, while previously users interacted with developers and operational staff as well. McKinney, Yoon and Zahedi (2002) developed a model and instruments for measuring web-customer satisfaction during the information phase. Cheung and Lee (2005) in their development of an instrument to measure user satisfaction with e-portals, based their instrument on that of McKinney, Yoon and Zahedi (2002), which in turn was based primarily on instruments from prior studies.
No rigorous definition of the term user satisfaction
As none of the instruments in common use rigorously defines its construct of user satisfaction, some scholars, such as Cheyney, Mann and Amoroso (1986), have called for more research on the factors which influence the success of end-user computing. Little subsequent work has shed new light on the matter, however. All factor-based instruments run the risk of including factors irrelevant to the respondent while omitting some that may be highly significant to them. Needless to say, this problem is further exacerbated by the ongoing changes in information technology.
In the literature, the terms ‘user satisfaction’ and ‘user information satisfaction’ are used interchangeably. According to Doll and Torkzadeh (1988), ‘user satisfaction’ is the opinion of the user about a specific computer application which they use. Ives et al. (1983) defined ‘user information satisfaction’ as “the extent to which users believe the information system available to them meets their information requirements.” Other terms for user information satisfaction include “system acceptance” (Igersheim, 1976), “perceived usefulness” (Larcker and Lessig, 1980), “MIS appreciation” (Swanson, 1974) and “feelings about information system” (Maish, 1979). Ang and Koh (1997) described user information satisfaction (UIS) as “a perceptual or subjective measure of system success”, which means that user information satisfaction will differ from person to person.
Several studies have examined the factors which influence UIS. Yaverbaum (1988) and Ang and Soh (1997) investigated the relation between computer background and UIS. Yaverbaum (1988) found that people who use their computers irregularly were more satisfied; by contrast, Ang and Soh (1997) found no evidence that computer background affects UIS. User information satisfaction remains an active topic in research studies.
A lack of theoretical underpinning
Another difficulty with most of these instruments is their lack of theoretical underpinning by psychological or managerial theory. Exceptions to this were the model of web site design success developed by Zhang and von Dran (2000), and a measure of user satisfaction with e-portals, developed by Cheung and Lee (2005). Both of these models drew upon Herzberg’s two-factor theory of motivation. Consequently, their factors were designed to measure both 'satisfiers' and 'hygiene factors'. However, Herzberg’s theory itself is criticized for failing to distinguish adequately between the terms motivation, job motivation, job satisfaction, and so on.
Future developments
Currently, some scholars and practitioners are experimenting with other measurement methods and further refinements of the definitions of satisfaction and user satisfaction. Others are replacing structured questionnaires with unstructured ones, in which the respondent is asked simply to write down or dictate all the factors about a system which either satisfy or dissatisfy them. One problem with this approach, however, is that such instruments tend not to yield quantitative results, making comparisons and statistical analysis difficult. Also, if scholars cannot agree on the precise meaning of the term satisfaction, respondents are highly unlikely to respond consistently to such instruments. Some newer instruments contain a mix of structured and unstructured items.
References
- Ang, J. and Koh, S. “Exploring the relationships between user information satisfaction and job satisfaction”, International Journal of Information Management (17:3), 1997, pp 169-177.
- Ang, J. and Soh, P. H. “User information satisfaction, job satisfaction and computer background: An exploratory study”, Information & Management (32:5), 1997, pp 255-266.
- Bailey, J.E., and Pearson, S.W. “Development of a tool for measuring and analysing computer user satisfaction”, Management Science (29:5), May 1983, pp 530-545.
- Baroudi, J.J., and Orlikowski, W.J. “A Short-Form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use”, Journal of Management Information Systems (4:2), Spring 1988, pp 44-58.
- Cheung, C.M.K., and Lee, M.K.O. “The Asymmetric Effect of Website Attribute Performance on Satisfaction: An Empirical Study”, 38th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Hawaii, 2005, pp. 175-184.
- Cheyney, P. H., Mann, R.L., and Amoroso, D.L. "Organisational factors affecting the success of end-user computing", Journal of Management Information Systems 3(1) 1986, pp 65-80.
- DeLone, W.H., and McLean, E.R. “Information Systems Success: The Quest for the Dependent Variable”, Information Systems Research (3:1), March 1992, pp 60-95.
- DeLone, W.H., and McLean, E.R. “Information Systems Success Revisited”, 35th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Los Alamitos, CA, 2002, pp. 238-248.
- DeLone, W.H., and McLean, E.R. “The DeLone and McLean Model of Information Systems Success: A Ten-Year Update”, Journal of Management Information Systems (19:4), Spring 2003, pp 9-30.
- Doll, W.J., and Torkzadeh, G. “The Measurement of End User Computing Satisfaction”, MIS Quarterly (12:2), June 1988, pp 258-274.
- Doll, W.J., and Torkzadeh, G. “The measurement of end-user computing satisfaction: theoretical considerations”, MIS Quarterly (15:1), March 1991, pp 5-10.
- Herzberg, F., Mausner, B., and Snyderman, B. The motivation to work. Wiley, New York, 1959, p. 257.
- Herzberg, F. Work and the nature of man World Publishing, Cleveland, 1966, p. 203.
- Herzberg, F. “One more time: How do you motivate employees?”, Harvard Business Review (46:1), January-February 1968, pp 53-62.
- Igersheim, R.H. “Management response to an information system”, Proceedings AFIPS National Computer Conference, 1976, pp 877-882.
- Ives, B., Olson, M.H., and Baroudi, J.J. “The measurement of user information satisfaction”, Communications of the ACM (26:10), October 1983, pp 785-793.
- Larcker, D.F. and Lessig, V.P. “Perceived usefulness of information: a psychometric examination”, Decision Science (11:1), 1980, pp 121-134.
- Maish, A.M. “A user’s behavior towards his MIS”, MIS Quarterly (3:1), 1979, pp 37-52.
- McKinney, V., Yoon, K., and Zahedi, F.M. “The measurement of web-customer satisfaction: An expectation and disconfirmation approach”, Information Systems Research (13:3), September 2002, pp 296-315.
- Swanson, E.B. “Management and information systems: an appreciation and involvement”, Management Science (21:2), 1974, pp 178-188.
- Yaverbaum, G.J. “Critical factors in the user environment: an experimental study of users, organizations and tasks”, MIS Quarterly (12:1), 1988, pp 75-88.
- Zhang, P., and von Dran, G.M. “Satisfiers and dissatisfiers: a two-factor model for Website design and evaluation”, Journal of the American Society for Information Science (51:14), December 2000, pp 1253-1268.