Mental Models and Usability

DePaul University, Cognitive Psychology 404

November 15, 1999

 

Mary Jo Davidson, Laura Dove, Julie Weltz

 

 

Introduction

Mental models have been studied by cognitive scientists as part of efforts to understand how humans know, perceive, make decisions, and construct behavior in a variety of environments. The relatively new field of Human-Computer Interaction (HCI) has adopted and adapted these concepts to further study in its main area of concern: usability. This document will describe mental models and usability. It will then discuss the applications and limitations of mental models as they help improve software usability. The concluding section will describe a study developed and conducted by the authors. This study suggests some potential areas for further research that could help both cognitive scientists and HCI practitioners make progress in understanding mental models.

 

 

What is a Mental Model?

The term “mental model” has been used in many contexts and for many purposes. It was first mentioned by Craik in his 1943 book, The Nature of Explanation. (Craik, 1943) The concept seemed to be forgotten for many years after Craik's untimely death.

 

Theorizing about mental models made a comeback shortly after the “birth” of cognitive science. Mental models reappeared in the literature in 1983 in the form of two books, both titled Mental Models, each using the term “mental model” for a different purpose.

 

The Johnson-Laird volume proposed mental models as a way of describing the process that humans go through to solve deductive reasoning problems. His theory included the use of a set of diagrams to describe the various combinations of premises and possible conclusions. (Johnson-Laird, 1983)

 

Gentner and Stevens’ Mental Models proposed that mental models provide humans with information on how physical systems work. This approach could be generalized to a number of situations that humans face, including the behavior of objects according to laws of physics. (Gentner and Stevens, 1983)

 

For purposes of our discussion we will consider two different, but related, descriptions and uses of mental models.

For most cognitive scientists today, a mental model is an internal scale-model representation of an external reality. It is built on-the-fly from knowledge of prior experience, schema segments, perception, and problem-solving strategies. A mental model contains minimal information. It is unstable and subject to change. It is used to make decisions in novel circumstances. A mental model must be “runnable” and able to provide feedback on the results: humans must be able to evaluate the results of an action or the consequences of a change of state, and to mentally rehearse their intended actions. Cognitive scientists often use academic studies of mental models to gain information on the processes of the mind. This information can then be used to contribute to work on artificial intelligence and simulations. (Markman, 1999)

 

The field of Human-Computer Interaction (HCI) is relatively new. The seeds of the demand for HCI were planted when the first electronic computer was developed, but it has become much more important now that the explosive growth of computer usage has changed how humans see themselves and each other, as well as computers (Dix, 1998). HCI has been called upon consistently to help humans make sense of an increasingly complex world. (Rogers et al., 1992) To date, many of the explanations have been a part of the knowledge base developed by cognitive scientists. It naturally follows that mental models would become a part of the vocabulary that HCI practitioners use to explain a very complex world.

 

For HCI practitioners, a mental model is a set of beliefs about how a system works. Humans interact with systems based on these beliefs. (Norman, 1988) This makes mental models very important to HCI and its primary objective, usability.

 

 

What is Usability?

The International Standards Organization, which is well-known for its development of standards for industrial processes and product quality, defines usability as follows:

Usability: the effectiveness, efficiency and satisfaction with which specified users achieve specified goals in a particular environment. (ISO 9241) (Dix, 1998)

 

The standard further defines the components of the usability definition:

 

Effectiveness: accuracy and completeness with which specified users can achieve specified goals in a particular environment

Efficiency: the resources expended in relation to the accuracy and completeness of the goals achieved

Satisfaction: the comfort and acceptability of the work system to its users and other people affected by its use.
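These three components can be made concrete with a small numerical sketch. The formulas below are one common way to operationalize the ISO terms, not part of the standard itself, and the session data is invented for illustration:

```python
# Hypothetical usability-session data: (task completed?, error count, seconds).
sessions = [
    (True, 0, 40),
    (True, 2, 75),
    (False, 5, 120),
    (True, 1, 60),
]

# Effectiveness: the share of specified goals achieved (here, tasks completed).
effectiveness = sum(1 for done, _, _ in sessions if done) / len(sessions)

# Efficiency: effectiveness relative to the resources expended (here, time).
total_seconds = sum(secs for _, _, secs in sessions)
efficiency = effectiveness / total_seconds  # goal-completion rate per second

# Satisfaction: typically a separate survey measure, e.g. the mean of 1-5 ratings.
ratings = [4, 3, 2, 4]
satisfaction = sum(ratings) / len(ratings)

print(effectiveness, efficiency, satisfaction)
```

Note that satisfaction cannot be derived from the performance log at all; it must be collected from the users themselves, which is one reason the standard lists it as a separate component.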

While many HCI practitioners use the ISO 9241 definition, it is lacking for our purposes. It describes only performance and does not include some of the elements that actually lead to effectiveness, efficiency, and satisfaction.

 

Constantine and Lockwood define usability as being composed of the learnability, retainability, efficiency of use, and user satisfaction of a product (Constantine and Lockwood, 1999). This inclusion of learnability and retainability helps us to understand the role of mental models in usability. To the extent that a correct mental model can be learned and retained by a human, that product will be more usable. The user will be more effective, efficient, and satisfied.

 

In fact, if we knew what caused correct and incorrect mental models to be formed, we should be able to develop systems that would help users form correct mental models and, consequently, improve usability. (Rogers et al., 1992)

 

 

Why are Mental Models Important to Usability?

Usability is strongly tied to the extent to which a user's mental model matches and predicts the action of a system. Ideally, an interface design is consistent with people’s natural mental models about computers, the environment, and everyday objects. For example, it makes sense to design a calculator program that has similar functionality and appearance to the physical hand-held calculators that everyone is familiar with.

 

However, sometimes the technical capabilities of a system have no resemblance to objects in the world. HCI practitioners have produced a large body of guidelines and heuristics used to design systems that are easier for people to understand and use. (Nielsen, 1993) Through various design methods, we can build cues into a system that help users create new, accurate mental models.

 

Norman (1988), Cooper (1995) and IBM (1992) each defined three models of a system:

 

The actual way that a system works from the programmer's perspective: Norman called this the System Model; Cooper called this the Implementation Model; IBM called it the Programmer’s Model.

 

The way that the user perceives the system to work: all three called this the User's Mental Model.

 

The way the designer represents the program to the user, including presentation, interaction, and object relationships: Norman called this the Design Model; Cooper called this the Manifest Model; IBM called it the Designer's Model.

 

The Design Model determines the usability of the software; it serves as the interaction process between the user and the behavior of the system. Norman suggests the best way for an interface designer to guide a user from novice to expert status is to conceal the system model and indulge the user's mental models. For example, we don't have to understand how a car is engineered to learn to effectively drive the car. The Design Model helps users to form correct mental models.

 

An inaccurate mental model of what is happening in a system leads to errors. Many systems place too many demands on the humans that use them. Users are often required to adjust the way they work to accommodate the computer. Sometimes the result is a minor frustration or inconvenience, such as changes not being saved to a file. Inaccurate mental models of more complex systems, such as an airplane or nuclear reactor, can lead to disastrous accidents. (Reason, 1990)

 

There are increasing demands for usability in technology products. As computers become available to more and more people, it can no longer be assumed that users are homogeneous, educated, technically capable, or even tolerant. Even people who do not use computers are still affected by them in cars, airplanes, supermarkets, etc. Technology products must accommodate the next generation of users who are diverse, less technical, more demanding, explorative, and impatient. This new generation of users expects to have the information they need available within the system itself. (Gribbons, 1999)

 

Typically, the burden is on the user to learn how a software application works. The burden should be increasingly on the system designers to analyze and capture the user's expectations and build that into the system design. (Norman, 1988)

For example, a stereo vendor may ship a 100-page manual with their equipment, causing users to spend hours reading and figuring out how to set up the equipment. The company and its users would be better served by reengineering their core product, the stereo equipment itself. When a customer buys a stereo, their goal is to play music as soon as possible. Keeping the user's goals and expectations in mind, the stereo vendor could completely redesign the system. One simple design method they might employ is color coding the wires and their intended jacks. With a simplified system, the company can also reduce the instructions to a one-page illustration. The new design should meet the user's goal: being able to play music within a few minutes of taking the product out of the box. (Gribbons, 1999)

 

HCI embodies this paradigm shift required of many software development organizations. Many software vendors derive a large portion of their revenue from training and technical support. However, training, support, and documentation are separate information sources that take users away from their goals. The new demographic of computer users expects the knowledge presented in these external sources to be built into the systems themselves. (Gribbons, 1999) The marketplace will demand a shift from poorly designed systems to ones that are intuitive, predictable, and adaptive: systems that are consistent with the users' mental models. (Norman, 1988)

 

 

How Can Mental Models Be Applied in Software Design?

Software interfaces should be designed to help users build productive mental models of a system. (Preece, 1994) Common design methods employed to support and influence users’ mental models include: simplicity, familiarity, availability, flexibility, feedback, safety, and affordances.

 

Simplicity

Since mental models simplify reality, interface design should simplify actual computer functions. A function should only be included if a task analysis shows it is needed. Basic, most frequently used functions should be immediately apparent, while advanced functions should be less obvious to users. Cluttering an interface with many advanced functions only distracts users from accomplishing their goals. A well-organized interface that supports users’ tasks fades into the background and allows the user to work efficiently. (IBM, 1992)

 

Familiarity

An interface should allow users to build on prior knowledge, especially knowledge gained from experience interacting in the world. The use of concepts and techniques that users already understand from their real-world experiences allows them to get started quickly and make progress immediately. The Windows operating system (and originally the Apple system) uses an office metaphor to leverage existing knowledge in this way. Its folder and document icons, combined with drag-and-drop functionality, allow users to grasp basic concepts more quickly than traditional command-based systems.

A small amount of knowledge used consistently throughout an interface can empower the user to accomplish a large number of tasks. Concepts and techniques can be learned once and then applied in a variety of situations. By choosing to be consistent with something the user already understands, an interface can be made easier to learn, more productive, and even fun to use. (IBM, 1992)

 

Availability

An interface should provide visual cues, reminders, lists of choices, and other aids, either automatically or on request. Humans are much better at recognition than recall. Users should never have to rely on their own memory for something the system already knows, such as previous settings, file names, and other interface details. (IBM, 1992)
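One familiar embodiment of recognition over recall is a recently-opened-files list: the system remembers what the user would otherwise have to recall. The sketch below is our illustration (class and file names are hypothetical), not a design from the sources cited:

```python
# Sketch: a most-recently-used (MRU) file list. The interface offers
# recognition ("pick one of these") instead of recall ("type the name").
from collections import deque

class RecentFiles:
    def __init__(self, limit=5):
        # A bounded deque: old entries fall off automatically.
        self._items = deque(maxlen=limit)

    def opened(self, path):
        # A reopened file moves to the front rather than duplicating.
        if path in self._items:
            self._items.remove(path)
        self._items.appendleft(path)

    def choices(self):
        # What the menu would display, newest first.
        return list(self._items)

recent = RecentFiles(limit=3)
for name in ["a.txt", "b.txt", "a.txt", "c.txt", "d.txt"]:
    recent.opened(name)
print(recent.choices())  # newest first; the oldest entry has been dropped
```

The bounded list also reflects the simplicity principle above: a handful of likely choices, not an exhaustive history.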

 

Flexibility

An interface should support alternate interaction techniques, allowing users to choose the method of interaction that is most appropriate to their situation. Users should be able to use any object in any sequence at any time. Flexible interfaces are able to accommodate a wide range of user skills, physical abilities, interactions, and usage environments.

 

Feedback

A system should provide complete and continuous feedback about the results of actions. (Norman, 1988) Any feedback a user gets that supports their current mental model strengthens it. Feedback that contradicts the current mental model causes it to adapt. (Sears, 1997) Immediate feedback allows users to assess whether the results were what they expected and take alternative action immediately if necessary. (IBM, 1992)

 

Safety

A user's actions should cause the results the user expects. Users should feel confident in exploring, knowing they can try an action, view the result, and undo the action if the result is unacceptable. Users feel more comfortable with interfaces in which their actions do not cause irreversible consequences. (IBM, 1992)
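This explore-and-undo idea can be sketched as a minimal command stack in which every action records its own inverse. The code shape is our assumption for illustration, not a prescription from the sources cited:

```python
# Sketch of the "safety" principle: each action is reversible, so users
# can explore without fear of irreversible consequences.
class Document:
    def __init__(self):
        self.text = ""
        self._undo = []  # stack of (description, inverse-function) pairs

    def insert(self, s):
        n = len(s)
        self.text += s
        # Record how to reverse this exact action.
        self._undo.append(("insert " + s, lambda: self._truncate(n)))

    def _truncate(self, n):
        self.text = self.text[:-n]

    def undo(self):
        if self._undo:
            _, inverse = self._undo.pop()
            inverse()  # apply the most recent action's inverse

doc = Document()
doc.insert("hello ")
doc.insert("world")
doc.undo()       # reverses only the last action
print(doc.text)  # "hello "
```

The stack makes the "try it, view it, undo it" cycle cheap, which is exactly what encourages the exploration described above.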

 

Affordances

An affordance refers to the properties of an object: the kinds of operations and manipulations that can be done to the particular object. For example, a chair affords support and sitting; glass affords seeing through and breaking; folders afford opening and storing files; a trash can affords throwing something away. Affordances provide clues to how an object can be used. Perceived affordance is what a person thinks can be done with an object. An interface can take advantage of affordances by using real-world representations of objects in the interface. Users will intuitively know what to do with the object just by looking. (Norman, 1988)

 

 

How Do We Capture And Validate Users' Mental Models?

Mental models result from people's tendency to form explanations of things in the world. The field of HCI seeks to understand the explanations and hypotheses that people form about the systems that they use.

 

Several common mental models have been observed, including:

 

Causality: Users tend to assume causal relationships when one event immediately follows another. For example, you download a file from the Internet and then your computer crashes. Most people will blame the downloaded file for the crash. The timing of events seems to imply a cause-and-effect relationship. In reality, there may be no connection at all between the two events. (Norman, 1988)

 

Anthropomorphism: Users tend to create anthropomorphic mental models when interacting with software. They will describe the system as "reading" what is typed in, or "asking me to save." Even more advanced users will curse at their computers when they don't get the expected response. (Cooper, 1995)

 

Self-blame: People tend to blame themselves when they have difficulties using a system, especially when they believe that others are able to use the system and that the task is not very complex. In fact, the problem may be a design flaw and many people may have the same problems. In addition, users feel guilty and do not want to admit that they are having trouble or do not understand how a system works. (Norman, 1988)

 

Although design decisions may be made based on knowledge of common mental models, it is important to understand the mental models and motivations of the specific users who will be using a system. In the planning phases of a systems project, an analysis should be performed to define the intended users and how they will be using the new system.

 

There are several commonly used techniques for capturing users' requirements, expectations, mental models, and perceptions. Implementation of each technique includes collecting data, organizing and analyzing results, and then using the results to guide the design of a new system.

 

After a system is designed and developed, similar techniques are used to evaluate the system. Ideally, several prototypes are evaluated at different points in the development process so that results can be incorporated back into the design and development. Throughout the life of a product, software designers should continue to gather feedback from users and incorporate this information into product designs and marketing strategies.

 

There are several challenges when gathering user information. First, it can be costly and time-consuming. Many software development teams do not understand the value of a thorough analysis and planning phase. Secondly, it may be difficult to identify and locate actual users or those that fit the target user profile. Some companies do not know who actually uses their software. In addition, it is important to select a representative cross-section of users. This includes users from different work areas, with different levels of experience, and with different usage patterns. Lastly, management may need to be convinced of the value of user involvement. There may be political or marketing issues that hinder contact with users. (Wilson, Bekker, Johnson, and Johnson, 1997)

 

Task Analysis

Task analysis is the process of identifying and understanding users' goals and tasks, the strategies they use to perform the tasks, the tools they currently use, any problems they experience, and the changes they would like to see in their tasks and tools. Tasks may include those that involve the computer and those that are manually performed. For example, a task analysis of a word processing system would include tasks such as identifying the purpose and subject matter of a document, as well as automated tasks such as typing, saving files, spell checking, etc. (Hix and Hartson, 1993)

 

Surveys and Questionnaires

Surveys and questionnaires are useful for collecting demographic and opinion data. They can help determine users' background and levels of subjective satisfaction. Questions may be open-ended, fill-in-the-blank, multiple choice, or rating scales.

Questions should be carefully written so as not to lead users to the desired answer. Data from questionnaires is relatively easy to tabulate and analyze, but questionnaires usually cannot provide in-depth information. (Faulkner, 1998)

 

Focus Groups and Interviews

Focus groups and interviews are widely-used informal techniques that can be useful for planning or evaluating a system design. A focus group involves a moderator questioning a group of users. An interview is conducted one-on-one with an individual user. These methods are valuable for questioning users about their work or their opinion about a system. Interviewers may ask users to describe a typical day or task, why they do certain things, what they would do when certain events occur, etc. Interviews or focus groups may also include asking users to evaluate simple sketches of a system design.

When analyzing data from focus groups and interviews, it is important to identify patterns of responses and not to overemphasize any single user's comments. Like questionnaires, focus groups and interviews collect self-reported data. This may be problematic because users often do not remember or accurately describe what they do.

 

Contextual Inquiry

This technique involves observation at the user's work site. This method is used because a user’s understanding of their work often depends upon being in the actual work environment. In addition, users may not describe the work that they do accurately or comprehensively. Much of their work may be implicitly understood or unconsciously performed.

In contextual inquiry, evaluators observe and record a user's actions as they go about a typical work day. A great deal of information can be gathered about the user's environment, workflow, business processes, and use of technology. This type of field observation may also generate irrelevant data as the observed users deal with the distractions of a normal workday. (Beyer and Holtzblatt, 1998)

 

Participatory Design

In participatory design, designers and users work together to design a system. For example, business users in an accounting department may be involved on the design team for a new accounting system. This type of collaboration can result in effective designs that very closely meet users' goals.

One problem with this approach is that users may not be good designers. They may lack an understanding of the technology and may not be able to think of creative solutions apart from their existing processes. Another potential problem is that users may become too familiar with the technology, losing their objectivity and starting to think more like developers.

 

Usability Testing

Usability testing is an effective way to verify an existing design or system. It is a structured observation of users in a laboratory setting. Users are observed performing important tasks with a working system or prototype. They are asked to “think aloud” while completing the tasks. This includes describing what they are trying to do, the hypotheses they are forming, their expected results of an action, etc. The evaluator observes the user's performance noting problems, comments, circuitous paths, etc. Usability tests are useful for collecting quantitative data regarding time per task and number of errors. (Rubin, 1994)

The evaluator always explains to users that only the software is being tested, not the users themselves. Debriefing is usually included to gather additional information about the user's experience. A usability test is typically videotaped so the evaluator may perform more detailed observations and analysis after the test.

 

Conducting a usability test requires a great deal of planning and effort. It involves high personnel and equipment costs and requires a controlled testing environment. Users who meet the target profile must be located, a script must be created that uses representative tasks, a large amount of data must be analyzed, the results must be prioritized, and design changes must be made based on the results. Because of the high resource requirements, usability tests are not always conducted by software development teams. (Mayhew, 1999)
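The quantitative side of a usability test (time per task, error counts) reduces to a straightforward tabulation. The sketch below illustrates one way to summarize it; the tasks, participants, and numbers are hypothetical:

```python
# Sketch: summarizing usability-test observations per task.
from statistics import mean

# (participant, task, seconds taken, error count) -- invented data.
observations = [
    ("P1", "save file", 32, 0),
    ("P2", "save file", 58, 2),
    ("P3", "save file", 41, 1),
    ("P1", "print",     75, 3),
    ("P2", "print",     90, 4),
    ("P3", "print",     66, 2),
]

# Group the raw observations by task.
by_task = {}
for _, task, secs, errs in observations:
    by_task.setdefault(task, []).append((secs, errs))

# Report mean time and mean errors per task -- the kind of numbers that
# feed prioritization of design changes.
for task, rows in by_task.items():
    times = [s for s, _ in rows]
    errors = [e for _, e in rows]
    print(f"{task}: mean time {mean(times):.1f}s, mean errors {mean(errors):.1f}")
```

In practice these summaries would be paired with the qualitative think-aloud notes; neither alone explains why a task was slow or error-prone.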

 

 

What Limitations and Challenges Exist With Mental Models?

Both cognitive science and HCI have tried to find the “key” to mental models, based on a search for commonalities among the ways that humans view their worlds. Most of the challenges and limitations relate to the difficulty of isolating mental models for study, and of capturing and validating them.

 

Study techniques and results have been controversial. Mental models are built “on-the-fly” and seem to be very delicate. Some researchers have claimed that the very act of obtaining information from subjects about a mental model can change the mental model itself. (Rogers et al., 1992)

 

Descriptions of the formation of mental models rely on a variety of abstract concepts and processes such as schemas. Consequently, they seem to be an abstraction composed of abstractions. They are highly subjective. Most studies obtain descriptions of high-level performance that are difficult to link to the building blocks of mental structures and processes. Most studies of mental models also elicit mental model information after formation. Few studies map specific elements of a stimulus/system (including cues) to mental model component formation. (Rogers et al., 1992)

 

There are marked differences between the validation criteria accepted by the cognitive science community and those accepted by HCI practitioners. Cognitive science demands statistically significant studies with the strict controls of the scientific method. HCI studies of mental models, and of usability in general, are usually less rigorous because their purpose is different: commercial feasibility. Introspection is commonly used. (Rogers et al., 1992)

 

The authors suggest that small steps may lead both communities to resolve their differences and make real progress toward understanding mental models and how they can improve usability.

 

 

Where Do We Go From Here?

In many cases, mental model information is inferred from the behavior of subjects performing tasks in complex systems. This can lead to confounded data and unclear measurements of mental models.

 

The authors of this paper believe that progress can be made on a smaller scale by observing the behavior of individuals confronted with new situations when there is a background of comparable and, possibly, transferable knowledge. This approach may help “tease” complex knowledge and behaviors apart and allow further studies to shed light on mental models, cues, and their relationship to usability.

 

A simple exploratory study was developed and conducted by the authors in order to shed light on potential ways to capture information on mental models. The purpose of the study was to determine whether subjects, given a series of cues, would use components of a mental model of one product to describe the functions of another product that has similar cues. If subjects use those same descriptions for a second product, the authors argue, that is evidence for a relationship between two mental models: the original mental model of the known product and the new mental model formed during examination of the cues related to the second product.

 

 

Study

The experimenters randomly distributed one of two surveys to 28 members of a DePaul undergraduate class. The participants ranged in age from 16 to 35; twenty-one of them were between the ages of 21 and 25. All participants were pursuing studies in information systems as either a major or minor. See Exhibit A1 for more detail on the study design.

 

Sixteen of the subjects completed a survey on the “functionality of control buttons on a CD player” (Control group). Twelve subjects completed a survey on the “functionality of control buttons of a World Wide Web browser” (Experimental group). In fact, these surveys were identical with the exception of the title. The surveys contained a series of five graphics representing control buttons. The participants were asked to write down the function of each button next to the graphic. (See Exhibits A2a and A2b) The CD Player (Control) group was asked to complete the same survey to determine if CD controls were uniformly understood, as well as to provide a comparison with the experimental group.

 

Standard control button graphics on a CD player are composed of a set of geometric figures, including rectangles and triangles. The orientation of the triangles suggests movement in a certain direction. A single triangle pointed to the right means "Play". Rectangles indicate that movement will stop.

 

Currently, the most popular World Wide Web (WWW) browsers, Internet Explorer and Netscape Navigator, use a graphic of a triangle pointed right to indicate "Forward". A graphic of a triangle pointed left indicates "Back", a popular command. No other geometric figures are used.

 

Five control buttons were presented in each survey. All are standard control buttons for CD players. Only one is a standard control button for a WWW browser (triangle pointing right - Forward).

 

The experimenters evaluated the responses in two ways: by direction and by title. A response was considered to be in the same direction if it indicated that the function was to move in the same direction as another label. There are three directions: forward, backward, and stop. A response was equal in title if the same words were used to describe it. For example, “Forward” and “Fast Forward” are equal in direction, but not in title.
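The direction/title distinction can be expressed as a small scoring function. The keyword table below is our assumption for illustration (including the grouping of "Pause" under stop), not the experimenters' actual coding scheme:

```python
# Sketch: scoring two button labels for agreement in direction vs. title.
# The label-to-direction table is a hypothetical coding scheme.
DIRECTION = {
    "play": "forward", "forward": "forward", "fast forward": "forward",
    "back": "backward", "rewind": "backward",
    "stop": "stop", "pause": "stop",  # grouping pause with stop is an assumption
}

def normalize(label):
    return label.strip().lower()

def same_direction(a, b):
    da = DIRECTION.get(normalize(a))
    db = DIRECTION.get(normalize(b))
    return da is not None and da == db

def same_title(a, b):
    return normalize(a) == normalize(b)

# "Forward" and "Fast Forward": equal in direction, not in title.
print(same_direction("Forward", "Fast Forward"), same_title("Forward", "Fast Forward"))
```

Matching on direction is deliberately the looser criterion: it credits a response that transfers the right movement from another device's mental model even when the exact wording differs.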

 

Detailed results of the WWW and CD Player surveys are presented in Exhibit A3a.

Both of the surveys contained the same post-test questionnaire (See Exhibit A2c) and informed consent document (See Exhibit A2d), which all participants signed. As a part of the background questionnaire, participants were asked about their comfort level with a variety of electronic devices, including a TV, VCR, and CD player. All of the 28 participants rated themselves as "Comfortable" or "Expert" with these devices. Based on the background questionnaire results, all participants were also experienced WWW users. (See Exhibit A3b)

 

Predictions

The experimenters had predicted that to the extent that there was a common function for a WWW control button graphic, the most common response would be to label the button with that function. The triangle pointing right (Forward) is the only example of this situation in the WWW group.

 

For those cases where a common function does not exist, the experimenters believed that the most common WWW group responses would be to label the buttons similarly to the comparable functions of the CD Player, at least in direction, if not by title.

 

Summary of the relevant findings

WWW participants did not label the “Forward” button as “Forward” in most (11 of 12) cases. The most popular label (7 of 12) was "Play", which is the comparable label for the CD Player button. However, 10 of 12 did provide labels that indicated the correct direction (forward).

 

WWW participants labeled the remaining four buttons in ways that closely followed the CD Player participants in direction. Nine of twelve in the WWW group labeled a button “Pause” even though there is no such function on a WWW browser. Ten of twelve in the WWW group labeled a button “Stop” consistent with the CD player model.

 

Most CD Player participants were able to label the function buttons correctly in direction. All but one participant labeled the “Play”, “Pause”, and “Stop” buttons correctly by title.

 

Both WWW and CD Player participants tended to label the “Search/Scan Forward” and “Search/Scan Back” buttons with labels from a different electronic device, the VCR.

 

Many participants labeled these buttons as “Rewind” and “Fast Forward”, functions which are correct in direction, but not title.

 

This leads to the possibility that the mental model of the VCR is stronger or more meaningful than either the CD Player or the WWW browser mental models. (In this case, the “mistakes” may have been more significant than the expected results.)

A detailed description of the study, methods, tools, and results is available in Appendix A.

 

Some considerations for further study

The study described above was a very small-scale study, designed to provide direction for further research. Knowledge and performance are two different, though related, processes. The participants could recall functions based on cues. This does not necessarily mean that they would perform tasks with a product in a particular way. However, knowledge and performance are both important to usability, so the current study is not invalid, just limited.

 

Further studies could include modification of WWW browser PC software to include control buttons with similar graphics to “CD Player” control buttons. This would help minimize the impact of varied contexts (paper-and-pencil test vs. PC-based task) on the results. Task performance could be measured in addition to knowledge of functions.

 

The participant population was uniformly sophisticated in the use of electronic devices and the WWW. They had formed mental models of both. Different levels of sophistication would provide more information on “naïve” mental models and on how humans form mental models “from scratch”.

 

 

Conclusion

Mental models are valuable areas of study for cognitive science and human-computer interaction. While they are difficult to capture and validate, the potential rewards of improved design and increased usability based on correct mental models compensate handsomely for the effort. Cognitive scientists and HCI practitioners would do well to “start small” and build a knowledge base of mental models and associated behavior based on common cues.

 

 

References

 

Bara, Bruno G. (1995). Cognitive Science - A Developmental Approach to the Simulation of the Mind. Hove UK: Lawrence Erlbaum Associates.

 

Beyer, Hugh and Holtzblatt, Karen (1998). Contextual Design - Defining Customer-Centered Systems. San Francisco: Morgan Kaufmann Publishers.

 

Cooper, Alan (1995). About Face - The Essentials of User Interface Design. Foster City CA: IDG Books Worldwide.

 

Constantine, Larry L. and Lockwood, Lucy A.D. (1999). Software For Use - A Practical Guide to the Models and Methods of Usage-Centered Design. Reading MA: Addison-Wesley.

 

Craik, K.J.W. (1943). The Nature of Explanation. Cambridge UK: Cambridge University Press.

 

Dix, Alan, Finlay, Janet, Abowd, Gregory, and Beale, Russell (1998). Human-Computer Interaction. Hertfordshire UK: Prentice Hall Europe.

 

Faulkner, Christine (1998). The Essence of Human-Computer Interaction. Hemel Hempstead UK: Prentice Hall.

 

Gentner, Dedre and Stevens, Albert L. (Ed.). (1983). Mental Models. Hillsdale NJ: Lawrence Erlbaum Associates.

 

Gribbons, Dr. William M. (1999). Knowledge-Infused Design: The “Ultimate Solution” to Product Usability. In Help 99 Proceedings. November 1999, 153-156.

 

Hix, Deborah and Hartson, H. Rex (1993). Developing User Interfaces - Ensuring Usability Through Product and Process. New York: John Wiley & Sons.

 

IBM Corporation (1992). Object-Oriented Interface Design: IBM Common User Access Guidelines. Indianapolis IN: QUE.

 

Johnson-Laird, P.N. (1983). Mental Models - Towards a Cognitive Science of Language, Inference and Consciousness. Cambridge MA: Harvard University Press.

 

Markman, Arthur B. (1999). Knowledge Representation. Mahwah NJ: Lawrence Erlbaum Associates.

 

Mayhew, Deborah J. (1999). The Usability Engineering Life Cycle. San Francisco: Morgan Kaufmann Publishers.

 

Medin, Douglas and Ross, Brian (1996). Cognitive Psychology, Second Edition. Fort Worth TX: Harcourt Brace & Co.

 

Nielsen, Jakob (1993). Usability Engineering. San Diego CA: Academic Press.

 

Norman, Donald (1988). The Design of Everyday Things. New York: Doubleday/Currency.

 

Preece, Jenny (1994). Human-Computer Interaction. Reading MA: Addison-Wesley.

 

Reason, James (1990). Human Error. Cambridge UK: Cambridge University Press.

 

Rogers, Yvonne, Rutherford, Andrew, and Bibby, Peter (Ed.) (1992). Models In the Mind - Theory, Perspective, and Application. London: Academic Press.

 

Rubin, Jeffrey (1994). Handbook of Usability Testing. New York: John Wiley & Sons.

 

Sears, Andrew (1997). An Introduction to Human-Computer Interaction. Chicago: DePaul University.

 

Shneiderman, Ben (1998). Designing the User Interface - Strategies for Effective Human-Computer Interaction (4th edition). Reading MA: Addison-Wesley.

 

Wilson, Stephanie, Bekker, Mathilde, Johnson, Peter and Johnson, Hilary (1997). Helping and Hindering User Involvement: A Tale of Everyday Design. In CHI 97 Proceedings. March 1997, 178-185.

 

Appendix

Exhibit A1 - Study Design

 

Exhibit A2a - Survey Form - WWW Group

 

Exhibit A2b - Survey Form - CD Player Group

 

Exhibit A2c - Post-Test Questionnaire - Administered to Both Groups

 

Exhibit A2d - Informed Consent Form - Administered to Both Groups

 

Exhibit A3a - Survey Results for Both Groups

 

Exhibit A3b - Post-Test Questionnaire Results for Both Groups