Introduction
Insect-computer hybrid robots offer a promising alternative to small artificial robots, combining superior locomotion with lower manufacturing costs. However, navigating these robots through complex, obstacle-filled terrain remains a challenge. Insects possess innate obstacle avoidance mediated by their antennae, but this ability is limited and can be disrupted by the control signals themselves, potentially leaving the robot trapped. Adding sensors for enhanced perception is therefore crucial, yet the size and payload restrictions of insect-computer hybrid robots limit the available options. A monocular camera is a suitable choice given its small size, low power consumption, and information-rich output. This research addresses the need for a robust navigation algorithm with monocular-camera obstacle avoidance for insect-computer hybrid robots, focusing on two limitations: the training of the depth estimation model and the generation of obstacle avoidance commands.
Literature Review
Existing research on insect-computer hybrid robots has focused on protocols for insect motion control, demonstrating that electrical stimulation of different body parts can induce specific movements in locusts, beetles, bees, and cockroaches. Navigation algorithms built on these control protocols are still at an early stage and lack the robustness needed for complex, real-world scenarios. The insects' inherent antenna-based obstacle avoidance is limited, and control signals can interfere with it, leading to unpredictable behavior and trapping. Previous work placing cameras on insect robots has primarily targeted direction recognition rather than depth-based obstacle avoidance. Finally, the scarcity of datasets suitable for training monocular depth estimation models for small robots, particularly from an insect's-eye perspective, poses a significant challenge.
Methodology
This study proposes a navigation algorithm for insect-computer hybrid robots that integrates an obstacle avoidance module based on a monocular camera. The system uses an unsupervised monocular depth estimation model to convert camera images into depth maps. To address the lack of suitable training data, the researchers created the SmallRobot Dataset, capturing images from a small robot's perspective. The obstacle avoidance module converts each depth map into a control command (Left Turn, Right Turn, or Go Forward) via a weighted sum over the depth map, and the navigation algorithm prioritizes obstacle avoidance over steering directly toward the destination (a sketch of this logic follows below). The Madagascar hissing cockroach was chosen as the insect platform for its size, climbing ability, and established control protocols. The onboard electronics comprise an ESP32-CAM for image acquisition and a custom stimulation module that generates the control signals. Navigation experiments were conducted in a controlled environment containing an obstacle designed to potentially trap the robot, and the navigation success rate with and without the obstacle avoidance module was compared. A 3D motion capture system tracked the robot's position throughout the experiments.
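This summary does not reproduce the weighted-sum computation itself, so the following is a minimal sketch of how such a module could map a depth map to one of the three commands. It assumes a normalized depth map in which smaller values mean nearer surfaces; the function name, weighting scheme, and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def avoidance_command(depth_map: np.ndarray,
                      near_threshold: float = 0.5,
                      obstruction_limit: float = 0.02) -> str:
    """Hypothetical weighted-sum avoidance: turn away from the nearer side.

    depth_map: H x W array of predicted depths in [0, 1], where smaller
               values mean closer surfaces (an assumption of this sketch).
    """
    # Weight pixels by proximity: anything beyond near_threshold counts
    # as free space, and nearer pixels contribute progressively more.
    proximity = np.clip(near_threshold - depth_map, 0.0, None)

    # Compare the obstruction scores of the left and right image halves.
    w = depth_map.shape[1]
    left_score = proximity[:, : w // 2].mean()
    right_score = proximity[:, w // 2 :].mean()

    # Neither half meaningfully obstructed: defer to destination steering.
    if max(left_score, right_score) < obstruction_limit:
        return "Go Forward"
    # Otherwise steer away from the more obstructed half.
    return "Right Turn" if left_score > right_score else "Left Turn"
```

In a full navigation loop, a surrounding controller would apply this command whenever an obstruction is detected and otherwise steer toward the destination, matching the stated priority of avoidance over goal-directed motion.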
Key Findings
The integrated obstacle avoidance module significantly improved the navigation success rate, from 6.7% to 73.3%. Analysis showed that the module both reduced the probability of the robot entering a 'risk zone' (close proximity to the obstacle) and increased the probability of escaping that zone once entered. The model trained on the SmallRobot Dataset produced markedly more accurate depth maps than models trained on existing datasets such as KITTI, underscoring the importance of dataset specificity for this application. The weighted-sum method for generating obstacle avoidance commands proved effective, producing commands consistent with human judgment. Figure 2 shows the improved trajectories obtained with the obstacle avoidance module, Figure 3 compares depth map predictions from models trained on different datasets, Figure 4 illustrates the process of generating avoidance commands from images, and Figure 6 shows the obstacle avoidance module's working principle.
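The entry and escape probabilities behind this risk-zone analysis can be estimated directly from the motion-capture trajectories. The sketch below assumes each trajectory is a sequence of 2D positions and models the risk zone as a disc around the obstacle; the radius and all names are hypothetical, since the paper's exact zone definition is not given in this summary.

```python
import numpy as np

def risk_zone_stats(trajectories, obstacle_xy, radius=0.15):
    """Estimate P(enter risk zone) and P(escape | entered) across trials.

    trajectories: list of (T, 2) arrays of tracked positions (metres).
    obstacle_xy:  (2,) centre of the obstacle.
    radius:       hypothetical risk-zone radius (not specified here).
    """
    entered = escaped = 0
    for traj in trajectories:
        # Boolean mask of time steps spent inside the risk zone.
        inside = np.linalg.norm(traj - obstacle_xy, axis=1) < radius
        if inside.any():
            entered += 1
            first_in = int(np.argmax(inside))   # index of first entry
            if (~inside[first_in:]).any():      # left the zone afterwards
                escaped += 1
    p_enter = entered / len(trajectories)
    p_escape = escaped / entered if entered else float("nan")
    return p_enter, p_escape
```

Comparing these two statistics for runs with and without the avoidance module reproduces the kind of analysis summarized above.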
Discussion
The results demonstrate the effectiveness of the proposed navigation algorithm in enhancing the obstacle avoidance capabilities of insect-computer hybrid robots. The significant improvement in success rate validates the importance of integrating an external sensing system, even with the constraints of limited payload. The creation of the SmallRobot Dataset addresses a crucial gap in available training data for monocular depth estimation in small-scale robotics. The simple yet effective weighted sum method for command generation showcases a practical approach suitable for resource-constrained platforms. These findings contribute to advancing the capabilities of insect-computer hybrid robots for applications in confined spaces, such as search and rescue missions.
Conclusion
This paper presents the first successful automatic navigation algorithm with obstacle avoidance for insect-computer hybrid robots using a monocular camera. The increase in navigation success rate from 6.7% to 73.3% demonstrates the algorithm's effectiveness. Future work will focus on a more compact microcontroller that integrates the image acquisition and stimulation modules, and on an ultra-lightweight depth model for onboard processing, to reduce energy consumption and improve system autonomy. An improved camera mount that keeps the camera horizontal will also be important.
Limitations
The current system relies on WiFi communication between the robot and the workstation, which increases energy consumption. The algorithm's performance might be affected by changes in lighting conditions or variations in obstacle types not present in the training dataset. The current study used a specific obstacle configuration; further testing with different obstacle shapes and complexities is necessary. The control of the cockroach itself might exhibit variability due to the biological nature of the platform.