Human Error Management in Requirements Engineering: Should We Fix the People, the Processes, or the Environment?

Computer Science


S. Mahaju, J. C. Carver, et al.

This research examines human error management strategies in requirements engineering, presenting a newly developed taxonomy based on two practitioner surveys. The findings emphasize the critical role of process changes, which account for over 50% of the strategies identified, in reducing human errors. The study was conducted by Sweta Mahaju, Jeffrey C. Carver, and Gary L. Bradshaw.

Introduction
Requirements engineering, the foundational phase of software development, is highly susceptible to human error. These errors, stemming from misunderstandings, misapplications, or cognitive overload, can have significant downstream consequences, leading to rework, degraded quality, and increased costs. Existing research on human error management in software engineering is limited and lacks systematic studies of prevention and mitigation techniques specific to requirements engineering. This paper addresses that gap by developing a taxonomy of human error management strategies based on data gathered from requirements engineering professionals. The study organizes prevention and mitigation approaches into a framework that reveals their underlying structure and identifies effective strategies. This work can help organizations proactively implement appropriate error prevention and mitigation techniques rather than learning from costly mistakes. The broader literature suggests that focusing on changing people is often ineffective, while changes to processes and environments tend to be more successful.
Literature Review
The paper reviews existing literature on human error and error management in various domains, including software engineering and healthcare. Reason's work on human error is foundational, categorizing errors into slips, lapses, and mistakes. Studies in healthcare highlight the ineffectiveness of focusing solely on individual behavior modification to reduce errors; the Kellogg et al. study underscores that solutions targeting technology, tools, or organizational policies are more effective. Relevant research in software engineering includes Huang's work on a defect prevention framework (DPeHE) that enhances meta-cognitive capabilities through knowledge and regulation training, although its generalizability is questioned due to its limited sample size. Other studies, such as Firesmith's work on common requirements problems, are insightful but lack a strong cognitive perspective. Lopes et al. developed an expert system for managing requirements errors, but its solutions are drawn from the literature rather than from practice. Walia and Carver's Requirements Error Taxonomy (RET) and Anu et al.'s Human Error Taxonomy (HET) provide frameworks for classifying errors but lack comprehensive prevention and mitigation strategies. This paper builds on these studies by using industrial data to systematically analyze and categorize human error prevention and mitigation strategies, addressing the limitations of previous research.
Methodology
This study analyzed data from two datasets: NaPiRE (Naming the Pain in Requirements Engineering) and CAPS (Center for Advanced Public Safety). NaPiRE comprises data from a 2016 survey of 226 requirements engineers from various companies and countries. The survey presented respondents with a list of requirements problems and asked them to describe causes, effects, and management strategies. The CAPS data originates from a two-step survey of seven requirements engineers at the University of Alabama. The first survey collected information on requirements problems and management strategies, while the second, administered after introducing the Human Error Taxonomy (HET), gathered additional information on errors identified using the HET. The analysis followed a three-phase process: (1) identification of prevention and mitigation strategy types; (2) classification of strategies into a taxonomy; and (3) discussion to resolve disagreements. Initially, high-level categories (People, Process, Environment, Ambiguous) were identified. Subsequently, low-level categories were defined within each high-level category, further classifying the strategies. Finally, strategies were mapped to specific requirements engineering activities (elicitation, analysis, specification, validation, management). The authors collaboratively analyzed the data, with iterative discussions to ensure consistency and agreement on classifications. Because some reported strategies lacked detail or used inconsistent language, the authors defined explicit classification criteria and applied them to classify the strategies and resolve discrepancies. The final set of 162 prevention and mitigation strategies was then classified and analyzed.
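To make the end product of this coding concrete, the sketch below shows one way the classified strategies could be represented in code. It is illustrative only: the classification in the study was performed manually, the field and function names here are hypothetical, and only the low-level categories named in this summary are listed (the full HEMT contains more).

```python
from dataclasses import dataclass
from collections import Counter

# High-level HEMT categories with the low-level categories named in this summary.
# The full taxonomy defines additional low-level categories; this listing is partial.
HEMT = {
    "People": ["Internal Training Activities"],
    "Process": ["Refinement and Redesign"],
    "Environment": ["Addition of New Process, Tool, or Approach"],
    "Ambiguous": [],  # strategies too vague to place in the other three categories
}

# Requirements engineering activities to which each strategy was mapped.
RE_ACTIVITIES = ["Elicitation", "Analysis", "Specification", "Validation", "Management"]

@dataclass
class Strategy:
    """One prevention/mitigation strategy extracted from a survey response."""
    text: str        # the respondent's description of the strategy
    high_level: str  # "People", "Process", "Environment", or "Ambiguous"
    low_level: str   # e.g., "Refinement and Redesign"
    activity: str    # one of RE_ACTIVITIES
    source: str      # "NaPiRE" or "CAPS"

def tally_by_category(strategies: list[Strategy]) -> Counter:
    """Count classified strategies per high-level HEMT category."""
    return Counter(s.high_level for s in strategies)
```

In the study itself, the authors assigned these labels by hand and resolved disagreements through discussion; the structure above captures only the outcome of that coding, not the coding process.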
Key Findings
The analysis resulted in a Human Error Management Taxonomy (HEMT) classifying prevention and mitigation strategies into People, Process, and Environment categories. Process changes constituted the largest portion (51%) of the strategies, followed by Environment (23%) and People (21%). Within the Process category, 'Refinement and Redesign' was the most frequent low-level category (70 strategies), encompassing strategies related to communication, documentation, review, and other procedural improvements. The 'People' category primarily included 'Internal Training Activities' (26 strategies), indicating a focus on improving team skills. 'Environment' changes included 'Addition of New Process, Tool, or Approach' (18 strategies). The distribution of strategies across the requirements engineering lifecycle revealed that most strategies (51%) were applied during the Management activity, followed by Validation (15%), Elicitation (13%), and Analysis (13%). Notably, people-related strategies were heavily skewed towards the Management activity, often involving training or meeting organization. The findings indicate that practitioners primarily manage human errors using Process and Environment-focused strategies, corroborating research in other domains that suggests that directly changing people's behavior is less effective than changing processes and environments.
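Because only the percentages and the total of 162 strategies are reported here, the approximate absolute counts per high-level category can be recovered with a quick back-of-the-envelope calculation. The snippet below is illustrative, and the derived counts are estimates rather than figures taken from the paper.

```python
TOTAL_STRATEGIES = 162  # total classified prevention/mitigation strategies

# Reported share of strategies per high-level category.
reported_shares = {"Process": 0.51, "Environment": 0.23, "People": 0.21}

# Approximate absolute counts implied by the reported percentages.
approx_counts = {cat: round(share * TOTAL_STRATEGIES)
                 for cat, share in reported_shares.items()}
print(approx_counts)  # {'Process': 83, 'Environment': 37, 'People': 34}

# The remainder (about 8 strategies) presumably falls into the Ambiguous
# category mentioned in the Methodology.
print(TOTAL_STRATEGIES - sum(approx_counts.values()))  # 8
```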
Discussion
The findings indicate a strong emphasis on Process and Environment changes for managing human error in requirements engineering, aligning with research suggesting the limited effectiveness of solely changing individual behavior. The prevalence of 'Refinement and Redesign' strategies highlights the importance of improving communication, documentation, and review processes. The significant presence of strategies during the Management activity suggests a need for strong oversight and proactive management practices. The relatively low number of strategies focusing on People, besides training, indicates that organizational and procedural changes are prioritized for error reduction. This highlights a shift from individual responsibility to systematic process improvement for managing human error. The HEMT provides a structured way to approach error management, facilitating a more systematic and proactive approach to preventing and mitigating errors. The taxonomy’s structure allows researchers to evaluate the effectiveness of various strategies and identify gaps in current approaches, ultimately improving requirements engineering practices.
Conclusion
This study presents the Human Error Management Taxonomy (HEMT), a valuable resource for classifying and understanding human error management strategies in requirements engineering. The taxonomy emphasizes the dominance of process-focused strategies and the limited focus on individual behavioral change, aligning with findings from other fields. Future work will evaluate the effectiveness of the identified strategies and refine the taxonomy through empirical studies in industrial settings. This research contributes significantly to improving software quality by providing a structured framework for managing human error in a critical phase of software development.
Limitations
The study's internal validity is potentially affected by the subjective nature of survey data. While the NaPiRE dataset provides a broad representation of industry, the CAPS dataset is limited to one organization. The interpretation of survey responses might have been affected by ambiguities or lack of detail in some responses. The external validity could be strengthened by incorporating data from a wider range of organizations and regions. Further research is needed to validate the effectiveness of the proposed strategies and to assess the taxonomy's applicability in various contexts.