Application Modules

KAIST Exoskeleton Lab

Application Modules (AM)

The Application Module (AM) is a core component of the wearable robotic system that performs high-level control, analogous to the cerebral cortex in human motor control. It also processes sensory data for environmental awareness, enhancing the adaptability and autonomy of the exoskeleton.

As the centralized supervisory unit, the AM is optimized for parallel computing to handle computationally intensive control tasks. Beyond its computational role, it functions as a comprehensive service platform featuring an intuitive Graphical User Interface (GUI), which allows even non-experts to seamlessly access and operate the robot. Furthermore, the AM maintains continuous connectivity with cloud infrastructure, ensuring that operational data is archived and managed in real time.

Specification
  • CPU: 8-core Arm® Cortex®-A78AE v8.2 64-bit CPU (2 MB L2 + 4 MB L3)
  • AI Performance: 100 TOPS (sparse) / 50 TOPS (dense)
  • Power: 10W – 25W
  • GPU: 1024 NVIDIA® CUDA® cores | 32 Tensor cores
  • Storage: 256 GB M.2 NVMe SSD
  • Integrated Orbbec 335Lg RGB-D camera
  • Board integrated with the Control Module (CM)
Function 1
Reinforcement Learning Framework for Physical Robot Platform

This framework aims to develop a robust control policy for a physical robot platform by bridging the gap between simulation and reality. In the learning phase, a teacher policy \(\pi_t\) is trained to track objective goals and reference motions within a physics engine, utilizing domain randomization (DR) to ensure adaptability to dynamic variations. This privileged knowledge is then transferred via distillation to a student policy \(\pi_s\), which is designed to operate without access to ground-truth simulation data. Finally, the student policy is deployed on the real hardware, generating control actions \(a_t\) based solely on estimated states derived from onboard sensors such as IMUs and encoders.
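The distillation step described above can be illustrated with a toy sketch: a student policy that sees only an estimated subset of the state is regressed onto a privileged teacher. The linear policies, dimensions, learning rate, and iteration count below are all illustrative stand-ins and are not taken from the lab's actual framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- chosen for illustration only.
PRIV_DIM, EST_DIM, ACT_DIM = 8, 4, 2

# Teacher: a linear policy over the privileged simulation state,
# standing in for a trained pi_t.
W_teacher = rng.normal(size=(ACT_DIM, PRIV_DIM))

def teacher_policy(priv_state):
    return W_teacher @ priv_state

# Student: a linear policy over the estimated state only.
W_student = np.zeros((ACT_DIM, EST_DIM))

def estimate_state(priv_state):
    # Toy proxy for onboard-sensor estimates (IMU/encoder-like subset):
    # the student only sees the first EST_DIM components.
    return priv_state[:EST_DIM]

# Distillation: regress student actions onto teacher actions over
# randomized simulated states (a crude stand-in for DR rollouts).
lr = 0.01
for _ in range(5000):
    s = rng.normal(size=PRIV_DIM)        # sampled simulated state
    a_t = teacher_policy(s)              # privileged teacher action
    s_est = estimate_state(s)
    a_s = W_student @ s_est              # student action from estimated state
    grad = np.outer(a_s - a_t, s_est)    # MSE gradient w.r.t. W_student
    W_student -= lr * grad
```

After training, the student recovers the observable part of the teacher's mapping; the residual error reflects exactly the privileged information the student cannot see, which is the gap distillation is meant to shrink.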

Function 2
Neuromorphic Locomotion Control

This framework establishes an end-to-end pipeline bridging biomechanics-guided data synthesis and real-time neuromorphic control. By employing generative AI, the system expands limited experimental inputs into large-scale gait datasets, which provide the variety needed for robust training. These data drive a locomotion controller designed with a neuromorphic architecture that mimics the brain's neural structure, optimizing flexor and extensor oscillators to produce bio-mimetic assistive torques. Once trained, this brain-inspired model operates within a modular hardware architecture, dynamically processing stacked sensor data such as IMU and ground reaction force (GRF) signals. This integration ultimately facilitates adaptive and stable torque control for wearable robots across diverse real-world environments.

Function 3
Legged Robot State Estimation

This research presents a robust state estimation framework for legged robots, grounded in the theory of the Invariant Extended Kalman Filter (InEKF) on the \( SE_{2+2N}(3) \) Lie group. By explicitly modeling multiple contact points for each leg, the filter accurately captures complex ground interaction dynamics during locomotion. The system's propagation, driven by IMU kinematics, is tightly coupled with a multi-modal correction phase that integrates visual odometry, joint encoders, and pressure sensor arrays. This strategic fusion of proprioceptive and exteroceptive sensors significantly enhances system observability, effectively mitigating drift even in dynamic environments. Consequently, the proposed architecture delivers precise and consistent state estimates essential for stable legged robot control.
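The IMU-driven propagation step of such a filter can be sketched as follows: the mean state (orientation, velocity, position, and the stationary contact points) is advanced by strapdown kinematics and embedded in the matrix Lie group. This is a generic, simplified mean propagation only (no covariance update or correction step), and the variable names and dimensions are chosen for illustration, not taken from the paper's implementation.

```python
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    # Rodrigues' formula for the SO(3) exponential map.
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

GRAVITY = np.array([0.0, 0.0, -9.81])

def propagate(R, v, p, contacts, omega, accel, dt):
    """Mean propagation driven by IMU kinematics.

    contacts: list of N world-frame contact positions, held fixed while
    the corresponding foot is in stance (zero-velocity contact assumption).
    omega, accel: body-frame gyroscope and accelerometer readings.
    """
    acc_w = R @ accel + GRAVITY                  # world-frame acceleration
    R_new = R @ so3_exp(omega * dt)
    v_new = v + acc_w * dt
    p_new = p + v * dt + 0.5 * acc_w * dt**2
    return R_new, v_new, p_new, [d.copy() for d in contacts]

def embed(R, v, p, contacts):
    # Matrix embedding of the state as an element of SE_{2+N}(3):
    # rotation block, then v, p, and each contact as extra columns.
    n = 2 + len(contacts)
    X = np.eye(3 + n)
    X[:3, :3] = R
    for i, col in enumerate([v, p] + contacts):
        X[:3, 3 + i] = col
    return X
```

The invariant structure enters through this group embedding: defining the estimation error multiplicatively on the group makes the error dynamics independent of the state trajectory, which is what gives the InEKF its consistency advantage over a standard EKF during aggressive locomotion.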