Data Collection & Analysis
Interview Methods
Interview methods solicit verbal reports from workers to elicit knowledge about a work system, and they are the most frequently employed method for eliciting work information (Cooke, 1999). Interviews are useful for gathering information about work tasks and about the context surrounding how a task is performed. Interviews can access cognitive elements of work that are not directly observable and can serve as a substitute for directly observing the performance of work tasks when gaining access to direct observation is difficult (Gillan, 2012). Interviews are relatively easy to administer, but analyzing the data collected to draw conclusions is time-consuming and challenging (Cooke, 1999). Interviews generally take place very early in the design cycle (Gillan, 2012). Data gathered using interview methods can be used to refine future inquiries and research questions, design exercises for users to perform under observation, break down work into tasks and subtasks, inform design requirements, identify sources of errors in system performance, and develop theory about work performance.
What are Interview Methods?
Interview methods solicit verbal reports from workers to elicit knowledge about a work system.
Why Use Interview Methods?
Good system design requires developers to consider not only the task for which they are making a product, but also the people performing the task and the contexts and settings in which those people will be using the product. The goal of interview methods is to identify all of the potential factors that affect people's performance on the tasks of interest by eliciting verbal reports of work information.
Advantages of interviews include that they are relatively easy to administer (Cooke, 1999), although efforts must be taken to avoid interviewer and respondent biases. Interviews can also be employed when the work under investigation is largely cognitive, as merely observing the physical behaviors of the work will not reveal the mental work the worker performs in order to make decisions or operate a system. For this reason, interviews are often used in tandem with observational methods (Cooke, 1999). They can also serve as a substitute for direct observation in instances where such observation is difficult or impossible (Gillan, 2012), for example, where the presence of an observer would interfere too much with the work being performed.
Disadvantages of interviews include that all of the data collected are verbal reports rather than direct observations of behavior. Data gathered from interviews can therefore give an incomplete picture of the work system if interviewers are not thorough in their questioning or draw biased conclusions from the data. Furthermore, interview methods can be time-consuming during both data collection and analysis. Each interview with a user can take around an hour to conduct, and the content-rich data must be sifted to distill out the performance-affecting factors. This often requires developing a coding scheme to quantify these factors for further analysis, which can be a difficult and lengthy process. Analyzing interview data also requires a large amount of expert judgment, and analysts must be careful not to let preconceived notions about the work practice color their conclusions about which factors affect performance.
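To make the analysis step concrete, the sketch below (Python, with an entirely hypothetical coding scheme and transcript segments) shows one simple way to apply codes to interview transcripts and tally how often each performance-affecting factor is mentioned.

```python
from collections import Counter

# Hypothetical coding scheme: each code maps to keywords that signal
# a performance-affecting factor in a transcript segment.
CODING_SCHEME = {
    "time_pressure": ["deadline", "rushed", "no time"],
    "tool_problems": ["crashes", "slow software", "workaround"],
    "communication": ["hand-off", "shift change", "unanswered email"],
}

def code_segment(segment: str) -> list[str]:
    """Return every code whose keywords appear in a transcript segment."""
    text = segment.lower()
    return [code for code, keywords in CODING_SCHEME.items()
            if any(keyword in text for keyword in keywords)]

def tally_codes(transcript: list[str]) -> Counter:
    """Count coded mentions across all segments of one interview."""
    counts: Counter = Counter()
    for segment in transcript:
        counts.update(code_segment(segment))
    return counts

# Hypothetical transcript: one string per segment.
transcript = [
    "We were rushed because the deadline moved up twice.",
    "The software crashes, so I keep a workaround spreadsheet.",
    "Hand-off notes get lost at every shift change.",
]
print(tally_codes(transcript))
```

In real practice, coding is performed by trained coders and checked for inter-rater reliability; a keyword tally like this is only a rough first pass over the transcripts.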
Given the considerations above, interview methods should ideally be employed as a supplement to observational methods when highly context-rich data are required to answer the analyst's or designer's questions. This is especially the case where the work of interest involves a large degree of mental work that cannot be directly observed. Interviews can be used as a substitute for observation when conducting an observation is too difficult. If research questions can be answered without highly context-rich data, a cheaper and quicker data-collection method (e.g., survey research) may be employed to answer them more efficiently.
When to Use Interview Methods?
Interview methods can gather data for formative or summative purposes. Formative purposes involve identifying areas for improvement in a design. For instance, analysts may be attempting to identify user requirements, the level of expertise of average users, or common workplace conditions. For formative purposes, interview methods should be conducted very early in the design cycle. Interview methods can serve as the first step in eliciting work information, informing later research methods and design decisions (Cooke, 1999). For example, data gathered from interviews can allow analysts to design tasks and scenarios (Gillan, 2012), and they can then observe users interacting with these scenarios to supplement the information gathered from the interview. Conducting interviews early helps ensure that the product is designed from the beginning to account for the factors that affect task performance across the total system in which the work takes place, saving the need for costly redesign efforts later should the product not satisfactorily address users' needs.
Summative purposes, on the other hand, involve determining whether or not a product meets the requirements of the task for which it was designed. For summative purposes, interview methods occur later in the design cycle. For instance, analysts can give future users a prototype of the product and conduct an interview to verify that the product is satisfactory. If the product has been designed in accordance with data from earlier interviews and observations, the need for further design changes at this stage should be minimal.
How to Use Interview Methods?
Interview methods generally require an analyst to ask workers or subject matter experts how they engage in the tasks of interest and to describe the settings and contexts in which the tasks are completed. Subject matter experts are users who are familiar with the tasks and settings under investigation. While the focus of interview questions should depend on the analyst's research questions, common things to document include task steps, emotional reactions to the work (e.g., sources of frustration), common places where errors occur, work hand-offs, documentation practices, and environmental factors, among others. Interviews are often used in tandem with observations and reviews of archival data sources (e.g., instruction manuals, job descriptions). Performing an observation or reviewing archival data sources prior to conducting interviews can help analysts ask questions targeted at aspects of the work that require further inquiry. Conversely, interviews conducted prior to observations can help narrow an analyst's focus about what to attend to during an observation, or help in designing tasks and scenarios for a user to perform under observation. Furthermore, because the verbal reports gathered from an interview may give an incomplete picture of how the work is performed, observations can fill gaps in the data by allowing the analyst to see the work taking place in a real-world setting.
There are a number of considerations analysts should account for in how they conduct their interviews. First, it is essential that analysts consider the total system (Clancey, 2006) when conducting the interview. Below is a list of example questions analysts may consider asking participants during an interview; a sketch of how such questions might be organized into a guide follows the list:
- How much experience do you have with the work?
- What knowledge, skills, and abilities do you possess (e.g., education, training)?
- How do you go about completing your work?
- What tools do you use? How effective are these tools? How might they be improved?
- Do you need to work with others to accomplish the work? If so, how do you communicate?
- What steps do you take in accomplishing the work? Which steps are most critical? Which are the most challenging? Which are the most frustrating?
- How is the work documented? Why is it documented in this way? Who accesses this information and for what purposes?
- In what setting are the tasks being completed?
- What facilities are available?
- Are there environmental variables that affect work (e.g., heat, noise, safety hazards)?
- How does the work-setting fit into the larger organization?
- In what context are the tasks being completed?
- Is the work being completed under high time-pressure?
- Are you fatigued from long hours or lack of sleep?
- Are you multitasking?
- How does your work affect other parts of the organization?
- Do you complete the work the same way over time? Are there monthly, seasonal, or annual changes in how the work is performed?
- Are there social or political pressures on you in how you conduct your work?
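As a rough illustration only, the sketch below (Python, with hypothetical category names and abbreviated question wording, not a validated instrument) shows one way an analyst might organize such questions into a structured guide grouped by total-system components, so that no part of the system is skipped during a session.

```python
# A minimal interview-guide structure. Category names and question
# wording are illustrative assumptions, not a validated instrument.
INTERVIEW_GUIDE = {
    "worker": [
        "How much experience do you have with the work?",
        "What knowledge, skills, and abilities do you possess?",
    ],
    "task": [
        "What steps do you take in accomplishing the work?",
        "Which steps are most critical, challenging, or frustrating?",
    ],
    "tools": [
        "What tools do you use, and how might they be improved?",
        "How is the work documented, and who accesses it?",
    ],
    "setting": [
        "In what setting are the tasks completed?",
        "Are there environmental variables that affect the work?",
    ],
    "context": [
        "Is the work completed under high time pressure?",
        "How does your work affect other parts of the organization?",
    ],
}

def print_guide(guide: dict[str, list[str]]) -> None:
    """Print every category and question so the interviewer covers them all."""
    for category, questions in guide.items():
        print(f"== {category.upper()} ==")
        for question in questions:
            print(f"  - {question}")

print_guide(INTERVIEW_GUIDE)
```

A guide like this supports the structured variants described below while still leaving the interviewer free to probe beyond the scripted questions.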
These types of questions are important because attending to them should inform design choices. For instance, a software developer who attends to the fact that the eventual users of his software will range from very little to very high computer experience can build in optional tutorials to get low-experience users up to speed without slowing down the experienced users. A designer of a mobile device who knows that her device will be used in loud manufacturing settings may choose to make notifications vibration-based or text-based rather than auditory. Knowing how work is currently documented and how information is shared within an organization can allow developers to design their technologies to interface well with the tools and systems the organization currently uses. Failing to attend to these issues can lead to design flaws, resulting in low customer satisfaction or necessitating costly redesign efforts.
Second, it is imperative that analysts interview a representative sample of the people for whom they are designing their technology. Knowledge should be elicited from the full range of people, describing the full range of tasks they perform and the settings and contexts in which they perform them. Failing to gain a representative sample can result in the development of a product that only suits the needs of a subset of the technology's eventual users or uses.
While the guidance above applies to any interview-based inquiry, there are variations in interview methods that analysts may select to suit their research questions. Several key variations are addressed below.
Structure
Interview techniques can vary in the degree of structure the analyst imposes upon them. In a completely unstructured interview, the analyst simply has a free-flowing conversation with a participant about the work system. Structured interviews have pre-made questions or structures to aid the analyst in completely exploring the work in question (Cooke, 1999). Data from structured interviews can be easier to analyze than data from unstructured interviews because of the narrowed scope of the inquiry (Cooke, 1999), but structured interviews also require more up-front preparation.
Below is a sample of components that analysts have included in structured interviews in the past (Cooke, 1999). A variety of these elements can be employed in the design of the interview, depending on how well they facilitate answering the questions of the designer or analyst.
Verbal Simulations. The interviewer walks the participant through a work scenario, providing a goal as well as contextual information. The participant tells the interviewer how he or she would respond in the scenario. A variant of this method, goal decomposition, has the participant work backward from a single goal, describing the steps and context that would lead up to achieving it (Cooke, 1999).
Diagrams. The interviewer has the participants draw out a diagram detailing the steps involved in completing the work of interest. The analyst uses these diagrams to explore the structure of the task or work system.
Critical Incident/Critical Decision Method. The interviewer and participant explore a past incident in which the work system suffered a severe error, or a near-miss of one, in order to determine the decision-making strategies employed by the actors in the incident and the circumstances surrounding it. By exploring where systems failed, analysts can discover areas of the system in need of redesign. Three information-gathering sweeps through the incident, timeline verification, and decision point identification structure the account into meaningfully ordered segments, and progressive deepening leads to a comprehensive, detailed, and contextually rich account of the incident (Hoffman et al., 1998).
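As a loose illustration of the record-keeping this method implies (the structures and incident details below are hypothetical, not part of Hoffman et al.'s procedure), the sketch captures a verified timeline and its decision points so that each deepening sweep can revisit and enrich them.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionPoint:
    """A point in the incident where an actor chose among options."""
    time: str              # position on the verified timeline
    actor: str
    cues: list[str]        # what the actor noticed at the time
    options: list[str]     # alternatives the actor considered
    choice: str
    notes: str = ""        # enriched on later, deeper sweeps

@dataclass
class Incident:
    summary: str
    timeline: list[str] = field(default_factory=list)   # verified event order
    decisions: list[DecisionPoint] = field(default_factory=list)

# Hypothetical near-miss, filled in across the interview sweeps.
incident = Incident(summary="Near-miss during an overnight system changeover")
incident.timeline = ["23:40 alarm sounds", "23:42 operator paged", "23:50 rollback"]
incident.decisions.append(DecisionPoint(
    time="23:42",
    actor="on-call operator",
    cues=["repeated alarm", "no response from the primary system"],
    options=["restart the service", "roll back the changeover"],
    choice="roll back the changeover",
))
```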
"Teachback" Method. The interviewer has the participant explain to him or her how to complete the task of interest. The interviewer then checks to see that he or she understands the work by attempting to explain the same thing back to the participant. The participant corrects any errors in the interviewer's explanation, and the interview continues until the interviewer can satisfactorily explain the task back to the participant.
Number of interviews
Interviews should be conducted with multiple participants until the analyst has reached the saturation point, the point at which the research method is no longer eliciting new information (Garmer, Ylvén, & Karlsson, 2004). Once the analyst finds that the majority of information being elicited is redundant, continuing to interview is no longer an efficient means of answering the research questions. Reaching the saturation point may take more or fewer interviews depending on the range of users, uses, and contexts for which the designer is creating a product. If users and contexts of use have little variability, analysts may not have to conduct as many interviews as when the product will be used by a large variety of users in many different settings. The analyst must be sure to elicit knowledge from participants who are representative of all future users and uses. Otherwise, the designer may create a product that is only satisfactory for a smaller subset of users and contexts of use.
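As a rough heuristic only (in practice, judging saturation is a matter of analyst expertise, not a fixed formula), the sketch below (Python, hypothetical coded data) tracks how many previously unseen codes each successive interview contributes and flags when the yield has dropped off.

```python
def saturation_reached(interviews: list[set[str]],
                       window: int = 2,
                       min_new: int = 1) -> bool:
    """Heuristic: saturation when each of the last `window` interviews
    contributed fewer than `min_new` codes not seen in earlier interviews."""
    seen: set[str] = set()
    new_counts = []
    for codes in interviews:
        new_counts.append(len(codes - seen))
        seen |= codes
    recent = new_counts[-window:]
    return len(recent) == window and all(n < min_new for n in recent)

# Hypothetical codes elicited from four successive interviews.
interviews = [
    {"time_pressure", "tool_problems"},
    {"tool_problems", "communication"},
    {"communication"},        # nothing new
    {"time_pressure"},        # nothing new
]
print(saturation_reached(interviews))  # True: the last two added no new codes
```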
Number of participants per interview
It is sometimes useful to elicit knowledge using a focus group, a panel of users who interact with the work system under investigation. Rather than gathering work information from each user individually, the analyst gathers data by moderating a discussion between the focus group members (Garmer et al., 2004).
Recording
It is important for interviewers to take thorough notes of the interviewee's verbal reports, and it is advised that they also employ video or audio recording devices. The advantage of recording is that analysts can review the interview as many times as necessary and share the recording with other analysts (Clancey, 2006). In addition, attempting to take notes while conducting the interview may cause the analyst to miss important details.
References
- Clancey, W. J. (2006). Observation of work practices in natural settings. In K. A. Ericsson, N. Charness, P. J. Feltovich, & R. R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 127-145). New York, NY: Cambridge University Press.
- Cooke, N. J. (1999). Knowledge elicitation. In F. T. Durso (Ed.), Handbook of applied cognition (pp. 479-509). New York, NY: Wiley.
- Eysenbach, G., & Köhler, C. (2002). How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. BMJ: British Medical Journal, 324(7337), 573.
- Garmer, K., Ylvén, J., & Karlsson, I. C. M. (2004). User participation in requirements elicitation comparing focus group interviews and usability tests for eliciting usability requirements for medical equipment: A case study. International Journal of Industrial Ergonomics, 33(2), 85-98.
- Gerbert, B., & Hargreaves, W. A. (1986). Measuring physician behavior. Medical Care, 838-847.
- Gillan, D. J. (2012). Five questions concerning task analysis. In M. A. Wilson, W. R. Bennett, S. G. Gibson, & G. M. Alliger (Eds.), The handbook of work analysis: Methods, systems, applications and science of work measurement in organizations (pp. 201-213). New York, NY: Routledge.
- Hoffman, R. R., Crandall, B., & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40(2), 254-276.
- Lewis, J. R. (2002). Psychometric evaluation of the PSSUQ using data from five years of usability studies. International Journal of Human-Computer Interaction, 14(3-4), 463-488.