Development and implementation of digital pedagogical support systems in the context of educational equity

S. Liu, J. Li, et al.

This study by Shuo-fang Liu, Juan Li, Hang-qin Zhang, Zhe Li, and Meng Cheng presents a digital teaching assistant system designed to enhance educational equity in online engineering courses during the pandemic. The results demonstrate improved academic performance and satisfaction, making it a promising solution for interactive learning and resource recommendation.

Introduction
The paper addresses how to ensure educational equity in digitally transformed higher engineering education, particularly for practice-oriented, outcome-based education (OBE) courses. While digitization offers access and personalization benefits, rapid deployment can exacerbate digital inequities, impacting student performance and challenging teachers’ digital pedagogy. The study targets two equity-sensitive areas: equitable digital course planning (resources, interaction, collaboration) and fair assessment of learning outcomes in online engineering practice courses. It motivates the need for systems that support real-time teacher–student interaction, transparent criteria, and multi-dimensional evaluation to mitigate inequities arising from technological access, regional disparities, interaction complexity, and variable student acceptance of digital learning.
Literature Review
The paper reviews challenges in digital engineering education where many systems deliver high-quality content but fall short in curriculum planning and supporting engagement, collaboration, and problem-centered practice. Prior work spans MOOCs and LMS platforms (EDX; Malaysian MOOCs; SMILE at Universitas Pendidikan Ganesha; NCKU’s platform), intelligent recommendation systems (Li & García-Díaz, 2022), SPOC-based platforms (Luo, 2019), Bayesian cloud resource integration (Yang & Lin, 2022), AI/ML-enhanced LMS analytics and recommendation (Villegas-Ch et al., 2020; Sun et al., 2021), hybrid cloud LMS for equity (Elmasry & Ibrahim, 2021), and assessment/satisfaction models (Ibrahim et al., 2021). Interaction-focused studies explored synchronous/asynchronous blends (Danjou, 2020), simulation-based teamwork training (Mahmood et al., 2021), and LLM-based assistants (Sajja et al., 2023), alongside integrated toolkits (Bernardo & Bontà, 2023). The review emphasizes that most solutions address limited equity dimensions. Broader factors include human/user attitudes and skills, environmental conditions, and technical stability and usability (Fadhil, 2022; Sarker et al., 2019). Persistent issues include weak coupling among tools, poor interoperability and resource sharing, limited update of information resources, and inadequate support for personalized content, collaboration, and equitable assessment. The authors identify a gap for multi-dimensional systems integrating course planning, interaction, resource management, and fair evaluation.
Methodology
The study proposes a multi-criteria group decision-making (MCGDM) evaluation approach that combines Quality Function Deployment (QFD) with t-tests, implemented in an intelligent online teaching assistant system (System B). Two methods were used: 1) QFD to derive course grading standards from core training abilities; 2) dependent-samples t-tests to compare student performance under two design methods.

System architecture: browser–server (B/S), built on Apache, MySQL, and PHP (AMP) on Windows. A shared teacher–student platform provides role-specific interfaces: teachers manage teaching materials, evaluate grading standards (QFD), inspect learning processes, evaluate outcomes, and view results; students download materials, submit assignments, and query grades.

Case course: Design Management and Strategy at a Chinese university; 48 senior Industrial Design students in 16 teams of three, over 7 weeks (21 class hours). Tools: Tencent Meeting for content delivery and communication; the new system for outcome display and evaluation analysis.

QFD setup: The teaching team (one associate professor, two lecturers) mapped four core abilities—Expression (AE), Insights (AI), Analyze (AA), Collaboration (AC)—to 14 candidate criteria (EL, CL, FL, NE, DE, OR, IN, QU, LO, RE, SY, IT, CO, FY). Weights were computed from the teachers' importance ratings and a correlation matrix (Rij ∈ {0, 1, 3, 9}), as illustrated in the sketch below. The top-weighted criteria were Reasonable (RE), Elaboration (EL), Logicality (LO), Insightful (IN), and Collaboration (CO); these five formed the final evaluation standards.

Teaching protocol: Two ideation methods were executed with equal duration (6 class hours each). Topics: Brainstorming—design requirements for users with disabilities; Crazy 8—COVID-19 prevention products. Each included an ice-breaking survey (2 hours), two rounds of ideation/convergence with presentation (1 hour), whole-class sharing (1 hour), and final consolidation and submission (2 hours).

Evaluation: Three teachers graded the 16 teams on the five standards for both methods; the system computed weighted statistics and ran dependent-samples t-tests per criterion and for the total.

Comparative effectiveness: A two-cohort, two-year design. Control Class A (N=54, 18 groups) used an existing, widely used system (System A) plus Tencent Meeting, with grading based on regular work (30%) plus exam (70%). Experimental Class B (N=48, 16 groups) used the new system (System B). Independent-samples t-tests compared total scores across classes.

Satisfaction and usage attitude: Teacher (N=57) and student (N=101) questionnaires covered function satisfaction (E codes) and usage attitude (G codes). Reliability (Cronbach's α) and validity (KMO, Bartlett's test significance) were assessed with SPSS 19.
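The following is a minimal sketch of the QFD-style weighting step described above. The importance ratings, the subset of criteria, and the relationship values are hypothetical placeholders for illustration; the paper's actual ratings and full 14-criterion matrix are not reproduced here.

```python
import numpy as np

# Four core abilities rated by the teaching team (hypothetical 1-5 importance scores).
abilities = ["AE", "AI", "AA", "AC"]
importance = np.array([5, 4, 4, 3], dtype=float)  # placeholder ratings

# Candidate grading criteria (an illustrative subset of the paper's 14).
criteria = ["RE", "EL", "LO", "IN", "CO", "SY"]

# Relationship matrix R_ij on the conventional QFD scale {0, 1, 3, 9}:
# rows = abilities, columns = criteria (values are illustrative only).
R = np.array([
    [9, 3, 3, 1, 0, 1],
    [3, 1, 3, 9, 1, 0],
    [9, 3, 9, 3, 0, 3],
    [1, 0, 1, 1, 9, 3],
], dtype=float)

# Absolute weight of each criterion = sum over abilities of importance * relationship.
absolute = importance @ R

# Normalize to relative weights; the top-ranked criteria become the grading standards.
relative = absolute / absolute.sum()
for name, w in sorted(zip(criteria, relative), key=lambda x: -x[1]):
    print(f"{name}: {w:.3f}")
```

Under this scheme, a criterion that relates strongly (value 9) to highly weighted abilities rises to the top of the ranking, which is how the five final evaluation standards emerge from the candidate set.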
Key Findings
- Within-class dependent t-tests (Brainstorming vs Crazy 8, N=16 teams): total mean difference = 0.316 (t=2.491, df=15, p=0.025), indicating Brainstorming overall outperformed Crazy 8. By criterion: Reasonable (RE): mean diff=0.415, t=2.208, p=0.043 (Brainstorming > Crazy 8); Elaboration (EL): mean diff=-0.589, t=-2.998, p=0.009 (Crazy 8 > Brainstorming); Logicality (LO): mean diff=0.389, t=2.296, p=0.037 (Brainstorming > Crazy 8); Insightful (IN): mean diff=0.193, t=3.534, p=0.003 (Brainstorming > Crazy 8); Collaboration (CO): mean diff=0.568, t=2.896, p=0.011 (Brainstorming > Crazy 8). The smallest difference was on Insightful; the largest advantage for Crazy 8 was on Elaboration.
- Descriptive performance (Table 3): highest-scoring team B5 total=90.85 (excellent); B6=90.56; lowest B16=55.65 (fail). Class-wide Brainstorming averages exceeded Crazy 8 except for Elaboration (Crazy 8 higher) and Logicality (lowest in Crazy 8).
- Between-class independent t-test (System A vs System B): means Class A=70.2306 (N=18 groups), Class B=77.8081 (N=16 groups). Equal variances assumed: F=1.052, Sig.=0.313; t=-2.055, df=32, two-tailed Sig.=0.048; Welch: t=-2.096, df=30.617, Sig.=0.044. Conclusion: significantly better overall learning effectiveness with System B (see the analysis sketch after this list).
- Satisfaction and usage attitude (Table 10): high reliability and validity (students α=0.941, KMO=0.858; teachers α=0.929, KMO=0.705; all Bartlett's Sig.=0.000). Students rated several features higher in System B than in System A, notably multi-dimensional learning evaluation analysis reports E20 (4.19 vs 3.79), teacher–student interaction fit E13 (4.17 vs 3.76), publication of tasks and grading standards E18 (4.08 vs 3.73), timely reminders E19 (4.00 vs 3.66), resource handling E11 (4.07 vs 3.78), and relevance E9 (3.94 vs 3.64). Teachers rated System B higher on the comprehensive observation interface F3 (4.25 vs 3.05), student interaction/collaboration fit E14 (4.25 vs 3.05), teaching efficiency and planning support G11 (4.16 vs 3.23) and G12 (4.25 vs 3.09), fair learning standard evaluation F5 (4.09 vs 4.04), process-aware performance evaluation F6 (4.21 vs 2.95), and resource interfaces F4 (4.04 vs 3.05). System A retained advantages on some basic usability items for teachers (E1/E2), but overall teacher and student satisfaction favored System B, especially for equity-related evaluation and interaction functions.
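The following is a minimal sketch of the two statistical comparisons reported above: a dependent-samples t-test across the 16 teams (Brainstorming vs Crazy 8) and an independent-samples t-test between Class A and Class B. The score arrays are randomly generated placeholders, not the study's data, and the Levene pre-check stands in for the equal-variance test reported alongside the pooled and Welch results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired team totals for the two ideation methods (N = 16 teams, hypothetical).
brainstorming = rng.normal(78, 8, 16)
crazy8 = brainstorming - rng.normal(0.3, 1.0, 16)

# Dependent-samples (paired) t-test per the within-class comparison.
t_paired, p_paired = stats.ttest_rel(brainstorming, crazy8)
print(f"paired t = {t_paired:.3f}, p = {p_paired:.3f}")

# Group totals for Class A (System A, 18 groups) and Class B (System B, 16 groups), hypothetical.
class_a = rng.normal(70, 10, 18)
class_b = rng.normal(78, 10, 16)

# Variance check decides between the pooled and Welch versions of the independent t-test.
levene_stat, levene_p = stats.levene(class_a, class_b)
t_ind, p_ind = stats.ttest_ind(class_a, class_b, equal_var=levene_p > 0.05)
print(f"independent t = {t_ind:.3f}, p = {p_ind:.3f}")
```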
Discussion
Findings demonstrate that embedding a QFD-driven, transparent set of grading standards and automating multi-dimensional evaluation with statistical analysis can mitigate subjectivity and enhance fairness in online engineering practice courses. The new system improved overall learning outcomes relative to a widely used control system and provided clearer, more transparent criteria and feedback, aligning with OBE principles. Brainstorming generally yielded stronger outcomes than Crazy 8 across Reasonable, Logicality, Insightful, and Collaboration, while Crazy 8 excelled in Elaboration—insights that help tailor pedagogy to method-specific strengths. Teacher and student surveys corroborate that equity-oriented features—clear publication of tasks and standards, comprehensive process/outcome observation, fair evaluation functions, multi-dimensional analysis reports, and interaction modes aligned with engineering—are most valued and likely underlie the performance gains. The system supports a shift from summative to formative assessment, enabling individualized guidance, improved engagement, and better curriculum planning consistent with educational equity objectives.
Conclusion
The study presents a successfully developed digital teaching assistant system grounded in a QFD + t-test MCGDM evaluation model that aligns with higher engineering education characteristics and promotes educational equity. Compared to a commonly used system, the new system enhances resource recommendation relevance, teacher–student and student–student interaction and collaboration, transparency of tasks and grading standards, and fairness and multidimensionality of learning assessment. Empirical results show significantly better academic outcomes and higher teacher–student satisfaction, particularly for evaluation analytics and interaction features. The system facilitates formative assessment, helps teachers refine plans with visibility into group differences, and supports students with targeted content, diverse resources, differentiated paths, and personalized guidance. Future work will conduct broader, multi-discipline trials, refine implementation strategies, and address user-identified drawbacks to further improve user experience and generalizability.
Limitations
- Single-institution, course-specific case (Design Management and Strategy) with modest sample sizes (Class B: 48 students in 16 teams; Class A: 54 students in 18 teams), which may limit generalizability across disciplines and contexts.
- Comparison against one unnamed control system (System A); differences in interface and technology may confound results despite matched teaching content, schedule, and instructor.
- Reliance on Tencent Meeting for synchronous communication in both groups; network conditions and external factors could influence engagement and outcomes.
- Evaluation centered on five criteria derived via QFD; while transparent, other competencies and criteria may be relevant in different courses.
- Data sharing is restricted due to participant privacy; only anonymized data are available upon reasonable request.