Knowledge Sharing in Swarms Realized by Artificially Empathetic Communication

Computer Science

J. Siwek, P. Żywica, et al.

Using artificial empathy and imprecise cognition, this work introduces a novel method for swarms to share fuzzy, incomplete knowledge: each agent fills in missing parameters, forms subjective knowledge subsets, and selects actions to adapt, deciding whether to help, continue its own task, or learn from others. This research was conducted by Joanna Siwek, Patryk Żywica, Konrad Pierzyński, Przemysław Siwek, and Adrian Wójcik.
Introduction

The study addresses how swarm agents can effectively share and use knowledge to optimize collaborative behavior, particularly under imprecise communication constraints typical of IoT-like environments. Building on an artificially empathetic (AE) decision framework, the research explores cognitive empathy-inspired mechanisms—akin to human mirror neuron processes—so agents learn not only from their own experience but also by projecting themselves into others’ states. The central hypothesis is that generalized, imprecise state descriptions and fuzzy similarity can enable decentralized agents with heterogeneous capabilities to learn from each other, decide when to help versus act egoistically, and adapt faster. The model is validated on a small mobile robot swarm using LED-based visual communication, demonstrating universality beyond specific platforms.

Literature Review

The work situates itself within swarm robotics and IoT communication, where effective inter-agent information sharing is known to impact collective performance. Prior research highlights industry applications (smart factories, agriculture, emergency scenarios) and diverse communication architectures (radio, Wi-Fi, Bluetooth, visual) influencing reliability and performance. Knowledge-sharing frameworks in industrial settings and strategy-based information sharing in UAV swarms underscore the importance of robust, adaptive communication. The study draws on cognitive empathy theory and neural science perspectives (mirror neurons) to inform artificial systems’ learning mechanisms, extending earlier implementations of artificially empathetic swarms and fuzzy similarity measures under epistemic uncertainty.

Methodology

The approach comprises a decentralized, two-level decision model and a communication framework that tolerates imprecision. Each agent maintains a personal knowledge base of fuzzy states and associated rewards. Agents broadcast their current state in imprecise, fuzzy terms (e.g., "battery high", "obstacle far"), possibly sharing only a subset of internal and external parameters due to platform or medium constraints. Messages received are further fuzzified by subjective perception. Core concepts:

  • Agent state representation: A vector of fuzzy internal and external parameters. Internal examples include energy level and operational status; external examples include proximity to objectives, neighbor detection, and obstacle distance.
  • Knowledge base: A collection of remembered states and their rewards, used to infer intents and actions. Rewards quantify closeness to a shared swarm goal.
  • Imprecise communication: Agents broadcast partial, fuzzy states (a subset of parameters). Reception may introduce additional fuzziness.
  • Empathic inference and gap filling: The recipient compares the received partial message with known states, using fuzzy similarity measures and aggregation, to identify the most similar remembered state and fill missing parameters, generating a completed state-reward pair for future use.

Decision and learning cycle:
  1. Each agent decides among: continuing its own (egoistic) action, helping another agent reach a shared goal, or stopping to learn from others.
  2. On broadcasting a learning success, the teaching agent shares its intent and achieved reward.
  3. The learning agent evaluates whether the broadcast reward exceeds a significance threshold; if yes, it approaches and requests the knowledge.
  4. The learner computes similarity between the received message (omitting missing parameters) and its knowledge base, selects the closest known state, infers the missing parameters, and stores the new completed state-reward pair.

Action selection is delegated to a lower-level module, which maps intents to executable action sequences (e.g., moving, turning, charging). The communication experiments use LED colors to encode states: green (goal seen), red (goal not seen), blue (another robot seeing the goal). Agents use onboard vision (YOLO-based detection of LED towers) to read states, and onboard computation (a Raspberry Pi Zero) to decide between egoistic and empathic actions.

Experimental setup and parameters: A three-robot swarm performs a "find food source" task using an ArUco code as the target. Six parameters are defined: code detection, closeness to the code, presence of other robots, seeing another robot that sees the code, battery level, and closeness to another robot. A shared goal vector and an agent-specific weight vector guide similarity-based reward calculation. In the numeric examples, a weighted Jaccard similarity compares current states and broadcast messages against knowledge vectors, selects intents, and computes rewards that determine the action policy.
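The similarity-based intent selection described above can be sketched as follows. This is a minimal illustration, assuming a weighted Jaccard measure over fuzzy state vectors in [0, 1]; the parameter order, weight values, and state values below are illustrative assumptions, not taken from the paper.

```python
def weighted_jaccard(a, b, w):
    """Weighted Jaccard similarity of two fuzzy vectors a and b."""
    num = sum(wi * min(ai, bi) for ai, bi, wi in zip(a, b, w))
    den = sum(wi * max(ai, bi) for ai, bi, wi in zip(a, b, w))
    return num / den if den else 1.0

# Six fuzzy parameters: [code seen, close to code, other robots present,
#                        other robot sees code, battery level, close to robot]
weights = [1.0, 1.0, 0.5, 0.5, 1.0, 0.5]   # agent-specific weight vector (assumed)
state   = [1.0, 0.0, 1.0, 1.0, 0.6, 1.0]   # current fuzzy state (assumed)

knowledge_base = [                          # remembered state vectors (assumed)
    [0.0, 0.0, 1.0, 0.0, 0.8, 0.0],
    [1.0, 0.0, 0.0, 1.0, 0.4, 1.0],
    [1.0, 1.0, 0.0, 1.0, 0.5, 1.0],
]

# Intent selection: pick the remembered state most similar to the current one.
intent = max(knowledge_base, key=lambda k: weighted_jaccard(state, k, weights))
```

The same `weighted_jaccard` call can score a received broadcast against the knowledge base; only the formula's inputs change, which is what keeps the intent-selection level platform-independent.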
Key Findings
  • The empathic communication model enables agents to learn from incomplete, imprecise messages by filling gaps using their own knowledge via fuzzy similarity, producing usable state-reward pairs.
  • Numeric example outcomes illustrate decision-making: For a state s2, similarities to knowledge vectors ordered as k1 < k0 < k2 < k3 led the agent to select k3 as the intent. Subsequent transitions and broadcast messages with missing parameters were completed by choosing the most similar knowledge (e.g., k0 or k3) and filling values, yielding s'_full = [1,1,1,1,0.6,1].
  • Reward comparisons demonstrate rational choice between egoistic and empathic actions: R3 = 0.625 vs R3^e = 0.62 led to continuing the egoistic action; later, R4 = 0.60 vs R4^e = 0.605 favored empathic assistance.
  • Learning significance thresholds were exemplified by a broadcast realized reward of 0.91; the receiving agent verified high reward, completed the message using the closest knowledge (e.g., k3), and appended the new knowledge vector k4 = [1,0,0,1,0.4,1,0.91] to its knowledge base.
  • Empirical validation with three robots and LED communication showed agents can detect goal presence, signal states, follow more advanced agents, pause to communicate, and use shared knowledge to improve performance. Message transmission via LEDs can take up to ~10 s.
  • The model is platform-agnostic at the intent selection level, with action selection handled by platform-specific modules; it was also verified across different platforms and simulation types in prior and related work.
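The gap-filling and threshold steps from the findings above can be sketched as a short routine. This is a hedged illustration, not the paper's implementation: missing message parameters are modeled as `None`, similarity is computed over the shared parameters only, and the significance threshold and all vectors are assumed values.

```python
def weighted_jaccard(a, b, w):
    """Weighted Jaccard similarity of two fuzzy vectors a and b."""
    num = sum(wi * min(ai, bi) for ai, bi, wi in zip(a, b, w))
    den = sum(wi * max(ai, bi) for ai, bi, wi in zip(a, b, w))
    return num / den if den else 1.0

def fill_gaps(message, knowledge_base, weights):
    """Complete a partial fuzzy message from the closest known state."""
    present = [i for i, v in enumerate(message) if v is not None]

    def sim(k):  # similarity over the shared (non-missing) parameters only
        return weighted_jaccard([message[i] for i in present],
                                [k[i] for i in present],
                                [weights[i] for i in present])

    best = max(knowledge_base, key=sim)
    return [v if v is not None else best[i] for i, v in enumerate(message)]

THRESHOLD = 0.7                                  # significance threshold (assumed)
weights = [1.0, 1.0, 0.5, 0.5, 1.0, 0.5]         # agent-specific weights (assumed)
kb = [[1, 0, 0, 1, 0.4, 1],                      # remembered states (illustrative)
      [0, 0, 1, 0, 0.8, 0]]

reward, msg = 0.91, [1, 1, None, 1, None, 1]     # broadcast: reward + partial state
if reward > THRESHOLD:                           # worth learning from?
    completed = fill_gaps(msg, kb, weights)      # gaps copied from closest state
    kb.append(completed + [reward])              # store new state-reward pair
```

Because the learner only appends a completed state-reward pair, it never needs to replicate the teacher's exact actions, which matches the model's tolerance for heterogeneous agent capabilities.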
Discussion

Findings support the hypothesis that cognitive empathy-inspired mechanisms can enhance decentralized swarm learning and cooperation under imprecise communication. By allowing agents to project themselves into others’ states and use fuzzy similarity to reconcile incomplete information, the approach reduces reliance on exact action replication, accommodating heterogeneity in capabilities. The model’s decentralized design means computational load does not scale directly with swarm size, as agents compare with the first seen peer and decide locally; however, this greedy selection may risk suboptimal comparisons when multiple agents are in sight. The universality of the framework stems from separating intent selection (platform-independent) from action sequencing (platform-dependent), enabling adaptation to different communication media and physical designs. Results demonstrate practical feasibility in visually constrained environments and suggest broader applicability in IoT contexts where traditional communications are limited or noisy.

Conclusion

The study introduces and validates an artificially empathetic communication and learning model for swarms, enabling agents to share generalized, imprecise state knowledge and fill gaps through fuzzy similarity, thereby accelerating learning and collaboration. Demonstrations on a small robot swarm with LED-based communication confirm the feasibility of decentralized empathic decision-making and knowledge transfer. The approach is well-suited to IoT applications in challenging environments (e.g., waste sorting, search and rescue, military) where human presence is constrained. Future research directions include large-scale reproducible experiments, enhanced lower-level action selection methods (e.g., learning-based controllers), aggregating knowledge across partially incompatible parameter sets, optimizing agent selection when multiple candidates are visible, and exploring alternative communication modalities to reduce latency and noise.

Limitations
  • Physical experiments were limited to three robots due to platform cost and prototyping constraints, with only six parameters considered for proof-of-concept.
  • LED-based visual communication introduces delays (messages up to ~10 s), depends on frame rate, and is susceptible to environmental factors (lighting, reflections, color detection errors, blurriness, white spots).
  • Photo capture and rendering time, movement precision, battery life, and voltage optimization affected performance.
  • The decision to compare with the first spotted agent is greedy and may yield suboptimal choices when multiple agents are present.
  • The scenario and parameterization are application-specific and may require adaptation and extended validation for other platforms and tasks.