Engineering and Technology
Can robots possess knowledge? Rethinking the DIK(W) pyramid through the lens of employees of an automotive factory
J. Hautala
This fascinating study by Johanna Hautala delves into employees’ perceptions of robots’ knowledge in a highly automated automotive factory. It reveals that while just over half of the surveyed employees believe robots can possess knowledge, many see this capacity as inherently tied to human collaboration. The study redefines the classic knowledge pyramid to highlight the symbiotic exchange of knowledge between humans and robots.
~3 min • Beginner • English
Introduction
Industrial work is increasingly robotised, integrating people and robots into co-creation processes in which data and information are transformed into the trustworthy knowledge required for high-quality products such as cars. Developing more independent and collaborative robots requires attention both to robots’ cognitive abilities to process data, information, and knowledge, and to transparency, so that humans can understand robot functions and allocate tasks effectively. Conceptually, knowledge, information, and data are traditionally human-centric, yet factory employees may see robots as team members and trust them. This study addresses practical needs in robotised factories and the scientific need to empirically re-examine the classic DIK(W) pyramid (data, information, knowledge, wisdom). The exploratory survey at Valmet Automotive investigates: (i) whether employees believe robots possess knowledge, (ii) what kind(s) of knowledge robots can or cannot possess, and (iii) whether those who believe robots possess knowledge are more likely to trust robots and view them as teammates. Analyses include content analysis, cross-tabulations, and χ² tests.
Literature Review
Recent research on attitudes toward robots has addressed knowledge only indirectly, focusing on trust and team dynamics, often in healthcare and military contexts, with limited attention to highly robotised automotive factories. People may consider robots team members because of robots’ abilities to receive, process, plan, and interact, but trust is situational and linked to transparency and to accurate mental models of robot decision-making. The DIKW pyramid conceptualises a hierarchy from data to wisdom. Data are codified observations accessible to machines. Information can be formalised as signs bearing data; whether information includes meaning is debated, and in this article information lacks meaning until it is interpreted as knowledge. Knowledge, often treated as human-centric, involves interpretation, justification, novelty, and value, and is tied both to repetition/standardisation and to creativity/intuition. Critiques note that DIKW privileges explicit, measurable data and overlooks the tacit, embodied knowledge foundational to human knowing. In robotics, some argue robots can “know” (e.g., their internal states and environment), whereas others contend that the lack of consciousness, feelings, and embodiment precludes knowledge. Engineering applications use DIKW to architect cognitive robots, but the transformations between levels are not fully understood; transparency is key to human understanding and trust. Some models allow robots to collect, analyse, and decide (reaching wisdom), while others restrict robots to data, information, and some knowledge, often requiring human participation in synthesis and intuition. This study leverages these debates to examine employees’ views and to further develop DIK(W).
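As a conceptual illustration only, the sketch below encodes the DIK(W) levels and the debated question of which level a robot can reach; the enum, the can_robot_reach helper, and the max_robot_level ceiling are assumptions made for illustration, not part of the study or of any robot architecture it describes.

    # Conceptual sketch: the DIK(W) hierarchy and the contested robot "ceiling".
    # Level names follow the pyramid discussed above; the helper and the default
    # ceiling are illustrative assumptions, not findings of the study.
    from enum import IntEnum

    class DIKW(IntEnum):
        DATA = 1         # codified observations, accessible to machines
        INFORMATION = 2  # formalised signs bearing data
        KNOWLEDGE = 3    # interpreted, justified, tied to meaning and context
        WISDOM = 4       # contested top of the classic pyramid

    def can_robot_reach(level: DIKW, max_robot_level: DIKW = DIKW.INFORMATION) -> bool:
        """Return True if the level is within the assumed robot ceiling.

        One debated position restricts robots to data and information (with
        knowledge only via human programming); another lets them reach wisdom.
        """
        return level <= max_robot_level

    if __name__ == "__main__":
        for level in DIKW:
            print(level.name, can_robot_reach(level))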
Methodology
Setting: Valmet Automotive in Uusikaupunki, Finland, the country’s most robotised car factory, producing high-quality vehicles with human-robot collaboration across body shop, paint shop, general assembly, testing, finishing, and supporting departments.
Design: Electronic survey conducted in May 2019 as part of the “Second Machine Age Knowledge Co-Creation in Space and Time” project (2018–2023), approved by the University of Turku Ethics Committee. An invitation and one reminder were sent via the staff email list and Facebook group. The factory employed approximately 4,500 people at the time. Respondents provided consent after reading a privacy notice.
Sample: 269 respondents completed the section reported here (≈6% response rate). Respondents were 60% male and 40% female, aged 19–65 across categories, and reported their tenure at the company and in their section; 72% worked in manufacturing. Roles included experts (e.g., leaders, engineers, designers) and workers (e.g., mechanics, quality measurers).
Instrument: Seven-question survey covering task/role (Q1–Q3), team composition (humans; humans and robots; other) (Q4), a set of statements/arguments (Q5) rated on a 3-point agreement scale, and skills/knowledge (Q6–Q7). The core knowledge questions were open-ended: “In your opinion, can robots possess knowledge? If yes, what kind of knowledge? If no, why not?” Q5 items covered, for example, whether high-quality cars can be produced without robots, liking to work with robots, the ability to build a car without robots, trust in the factory’s robots, whether humans or robots make more mistakes, and the future necessity of being able to work with robots.
Analysis: Quantitative analyses used IBM SPSS. Frequencies addressed RQ1. Cross-tabulations and χ² tests assessed group differences for RQ3, with significance thresholds p≤0.001 (very significant), 0.001<p≤0.01 (significant), and 0.01<p≤0.05 (almost significant). Qualitative analyses for RQ2 applied data-driven conventional content analysis to the open-ended responses, grouping explanations by the actor associated with knowledge (no actor, robot, robot and human) and interpreting them via the DIK(W) framework.
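For readers who want to reproduce the style of the quantitative analysis, the minimal sketch below runs a cross-tabulation and χ² test of independence in Python and labels the p-value using the thresholds above; the study itself used IBM SPSS, and the DataFrame, column names, and example values here are hypothetical stand-ins for the survey variables.

    # Minimal sketch of an RQ3-style analysis (cross-tabulation + chi-square test).
    # The original analysis was done in IBM SPSS; the data below are hypothetical.
    import pandas as pd
    from scipy.stats import chi2_contingency

    # Hypothetical survey extract: one row per respondent.
    responses = pd.DataFrame({
        "robots_possess_knowledge": ["yes", "no", "yes", "yes", "no", "no"],
        "trusts_robots": ["agree", "disagree", "agree", "agree", "agree", "disagree"],
    })

    # Cross-tabulate belief in robot knowledge against trust in robots.
    crosstab = pd.crosstab(responses["robots_possess_knowledge"], responses["trusts_robots"])

    # Pearson chi-square test of independence on the contingency table.
    chi2, p_value, dof, expected = chi2_contingency(crosstab)

    # Significance labels following the thresholds reported in the study.
    if p_value <= 0.001:
        label = "very significant"
    elif p_value <= 0.01:
        label = "significant"
    elif p_value <= 0.05:
        label = "almost significant"
    else:
        label = "not significant"

    print(crosstab)
    print(f"chi2={chi2:.3f}, dof={dof}, p={p_value:.4f} ({label})")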
Key Findings
- Belief that robots can possess knowledge: Of 251 answers, 54% (n=135) said yes and 46% (n=116) said no. Believers spanned genders, ages, departments, and experience levels, with no significant differences among these groups. Experts were more likely than workers to say robots possess knowledge (experts 60% vs workers 52%; p=0.017).
- What kind of knowledge? From 149 usable explanations of what knowledge robots can possess (n=95) or why they cannot (n=98):
• Category 1 (no actor mentioned; n=20): items like “10101010,” codes, parameters, location data, documents/logs—corresponding to data.
• Category 2 (robot as actor; n=15): memory traces, sensor-based observation, task and location knowledge for precise movement, collision avoidance, error expression—corresponding to formalised information (signs bearing data) used in commands.
• Category 3 (robot and human together; n=62): emphasis on programming by humans; robots possess programmed memory/software enabling action; human programming adds “knowledge,” transforming human knowledge into robot information.
- Why robots cannot possess knowledge (n=98 explanations, provided more often by experts than workers; p=0.042):
• Human control/tool argument: robots follow programs; humans possess knowledge; programming transfers explicit elements but loses something essential.
• Novelty/creativity argument: knowledge entails justified, novel, creative use in changing situations; robots repeat programs, do not learn or create new things, struggle in problematic situations.
• Self-awareness/cognition argument: robots act per programming and sensor data but lack independent thinking and understanding of mistakes; knowledge is tacit and embodied.
- Trust, teams, and skills:
• 55% (153) trusted robots and thought humans make more mistakes than robots.
• 13% (35) did not trust robots; 8% (23) thought robots make more mistakes than humans.
• 73% (203) agreed future ability to work with robots is necessary.
• 60% (168) agreed high-quality cars cannot be produced without robots today.
• Teams: 66% (185) saw teams as humans only; 27% (76) saw humans and robots as teams; a few selected “other.”
- Associations (χ² tests): Those who considered robots able to possess knowledge were more likely to trust robots (p=0.002). No significant difference for viewing robots as teammates versus humans-only teams (p=0.070).
Discussion
Findings show employees are divided on whether robots possess knowledge, with many attributing robot “knowledge” to programmed, formalised capacities and locating genuine knowledge in human-robot interaction. This addresses the research questions by demonstrating that employees situate robots primarily within data and information processing, while knowledge is often contingent on human involvement. Trust aligns with attributions of knowledge: those who think robots can possess knowledge are more likely to trust them. The study advances the DIK(W) framework by: (1) recognising humans and robots as distinct actors with different roles, robots being proficient in collecting and processing data and informing via signs, while humans interpret meaning and connect it to the broader “why” of the process; (2) inverting and reconfiguring the pyramid to reflect bi-directional transformations, especially the funnelling of human knowledge (K) into programs (I) and data (D) for robots and the subsequent co-creation loop back to human knowledge; and (3) positioning knowledge as a dividing concept that hinges on the independence of the knowing actor. It highlights the limitations of a purely positivist DIK(W) for human contexts, as tacit, embodied, intuitive, and contextual dimensions extend beyond codified data, with transparency efforts helping to bridge human understanding of robot processes.
Conclusion
This exploratory study rethinks the DIK(W) pyramid through human-robot co-creation in a highly robotised automotive factory. Empirically, employees are split on whether robots possess knowledge; many who answer “yes” qualify that it is together with humans, and they more often trust robots. Robots are broadly seen as handling data and information, with the critical question differing by actor: “where” for robots versus “why” for humans. Theoretically, the study contributes by reconstructing DIK(W) to include both humans and robots as actors, inverting the pyramid to reflect two-way transformations (especially programming as funnelling K→I→D), and emphasising knowledge as a dividing, anthropocentrically charged concept that also admits constructionist, technology-assisted interpretations. Practically, effective human-robot collaboration requires understanding the distinct human and robot realities, enhancing robot transparency, and educating employees in programming and interpretation. Future research should conduct detailed empirical and ethnographic studies of human-robot knowledge co-creation, examine moments where knowledge is accepted or refused, and replicate the survey across diverse robotised factories and cultures.
Limitations
The response rate was low (approximately 6%), which limits the generalisability of the findings to the entire factory workforce. The study is exploratory and was conducted in a single factory context. Further empirical, qualitative, and ethnographic research is needed to deepen understanding of human-robot knowledge co-creation processes.