Application Modules

KAIST Exoskeleton Lab

Application Modules (AM)

The Application Module (AM) is a core component of the wearable robotic system responsible for high-level control, analogous to the cerebral cortex in human motor control. It processes sensory information and environmental context to enhance the adaptability and autonomy of the exoskeleton.

As a centralized supervisory unit, AM is optimized for parallel computing to handle computationally intensive control tasks. In addition to control functionality, it serves as an integrated service platform equipped with an intuitive graphical user interface (GUI), enabling users without specialized expertise to access and operate the system. AM also maintains continuous connectivity with cloud infrastructure, allowing operational data to be archived and managed in real time.
Specification
  • CPU: 8-core Arm® Cortex®-A78AE v8.2 64-bit CPU (2 MB L2 + 4 MB L3)
  • AI Performance: 100 TOPS (Sparse) / 50 TOPS (Dense)
  • Power: 10 W – 25 W
  • GPU: 1024 NVIDIA® CUDA® cores | 32 Tensor cores
  • Storage: 256 GB M.2 NVMe SSD
  • Camera: Orbbec 335Lg RGB-D camera (integrated)
  • Integration: single board shared with the Control Module (CM)
Function 1
Reinforcement Learning Framework for Physical Robot Platform

This framework aims to develop robust control policies for physical robot platforms by bridging the gap between simulation and real-world deployment. During the learning phase, a teacher policy \( \pi_t \) is trained to track objective goals and reference motions within a physics engine, leveraging domain randomization (DR) to achieve robustness against dynamic variations. The learned privileged knowledge is then transferred via distillation to a student policy \( \pi_s \), which operates without access to ground-truth simulation data. The student policy is subsequently deployed on real hardware, generating control actions \( a_t \) based solely on estimated states derived from onboard sensors such as IMUs and encoders.
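The teacher-student distillation step can be illustrated with a minimal sketch. Everything here is hypothetical: the lab's actual networks, observation spaces, and training code are not described in the source, so linear policies and the dimensions below stand in for them. The teacher acts on privileged simulation state; the student is regressed onto the teacher's actions while seeing only the estimated-state portion, in the spirit of DAgger-style supervised distillation.

```python
# Hypothetical teacher-student distillation sketch (linear policies stand in
# for the lab's actual networks; dimensions and names are illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Privileged state available only in simulation (e.g. true dynamics params)
# vs. the estimated state the student sees on hardware (IMU + encoders).
PRIV_DIM, EST_DIM, ACT_DIM = 8, 5, 3

# Teacher pi_t: assume it was already trained in simulation with domain
# randomization; here it is just a fixed linear map for illustration.
W_teacher = rng.normal(size=(ACT_DIM, PRIV_DIM))

def teacher_policy(priv_state):
    return W_teacher @ priv_state          # a_t from privileged information

# Student pi_s observes only the first EST_DIM entries (the onboard estimate).
W_student = np.zeros((ACT_DIM, EST_DIM))

def student_policy(est_state):
    return W_student @ est_state

# Distillation: regress student actions onto teacher actions over sampled
# states, minimizing 0.5 * ||pi_s(est) - pi_t(priv)||^2 by SGD.
lr = 0.05
for _ in range(2000):
    priv = rng.normal(size=PRIV_DIM)
    est = priv[:EST_DIM]                   # proxy for the state estimate
    err = student_policy(est) - teacher_policy(priv)
    W_student -= lr * np.outer(err, est)   # gradient of the squared loss
```

The student cannot recover the teacher's response to the unobserved dimensions, so its residual error reflects exactly the privileged information lost at deployment; that gap is what robust training of the teacher (via domain randomization) is meant to keep benign.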

Function 2
Neuromorphic Locomotion Control

This framework establishes an end-to-end pipeline that bridges biomechanics-guided data synthesis with real-time neuromorphic control. By employing generative AI techniques, limited experimental inputs are expanded into large-scale gait datasets that provide sufficient diversity for robust training. These datasets are used to train a locomotion controller with a neuromorphic architecture that emulates neural structures in the brain, optimizing flexor and extensor oscillators to generate biomimetic assistive torques. Once trained, the brain-inspired model operates within a modular hardware architecture, dynamically processing stacked sensor inputs such as IMU and ground reaction force (GRF) signals. This integration enables adaptive and stable torque control for wearable robots across diverse real-world environments.
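The flexor/extensor oscillator pair can be sketched with a classical Matsuoka oscillator: two mutually inhibiting neurons with self-adaptation, whose alternating firing yields a rhythmic net torque. This is a generic stand-in, not the lab's trained controller; all parameter values below are illustrative choices known to produce oscillation, and the mapping "torque = flexor rate minus extensor rate" is an assumption.

```python
# Illustrative flexor/extensor half-center (Matsuoka) oscillator; parameters
# and the torque mapping are assumptions, not the lab's trained model.
import numpy as np

def matsuoka_step(u, v, dt, tau=0.05, tau_p=0.3, beta=2.5, w=2.0, s=1.0):
    """One Euler step of a two-neuron mutual-inhibition oscillator.

    u: membrane states (flexor, extensor); v: self-adaptation states.
    """
    y = np.maximum(u, 0.0)                         # rectified firing rates
    du = (-u - beta * v - w * y[::-1] + s) / tau   # mutual inhibition + drive
    dv = (-v + y) / tau_p                          # slow self-adaptation
    return u + dt * du, v + dt * dv

u = np.array([0.1, 0.0])    # small asymmetry so the oscillation starts
v = np.zeros(2)
torques = []
for _ in range(4000):       # 4 s at dt = 1 ms
    u, v = matsuoka_step(u, v, dt=0.001)
    y = np.maximum(u, 0.0)
    torques.append(y[0] - y[1])   # net assistive torque: flexor - extensor
torques = np.array(torques)
```

In the full pipeline described above, the oscillator parameters would be optimized against the synthesized gait datasets, and the sensor stack (IMU, GRF) would modulate the drive term rather than leaving it constant as here.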

Function 3
Legged Robot State Estimation

This research presents a robust state estimation framework for legged robots based on the Invariant Extended Kalman Filter (InEKF) formulated on the \( SE_{2+2N}(3) \) Lie group. By explicitly modeling multiple contact points for each leg, the filter captures complex ground interaction dynamics during locomotion. System propagation driven by IMU kinematics is coupled with a multi-modal correction stage that integrates visual odometry, joint encoders, and pressure sensor arrays. This fusion of proprioceptive and exteroceptive sensing enhances system observability and effectively mitigates drift in dynamic environments, providing accurate and consistent state estimates for stable legged robot control.
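The group embedding and the IMU-driven mean propagation can be sketched as follows. This is a deliberately reduced illustration: the state matrix stacks rotation, velocity, position, and the per-leg contact points into one element of \( SE_{2+k}(3) \) (with \( k = 2N \) contacts), and propagation integrates gyro/accelerometer readings while holding contact points fixed. The covariance propagation and the multi-modal correction stage are omitted, and all function names are hypothetical.

```python
# Reduced sketch of the SE_{2+2N}(3) state embedding and IMU mean propagation.
# Covariance and correction steps of the InEKF are omitted; names hypothetical.
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(phi):
    """Rodrigues formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(phi)
    if theta < 1e-8:
        return np.eye(3) + skew(phi)
    K = skew(phi / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def make_state(R, v, p, contacts):
    """Embed (R, v, p, d_1..d_k) in the SE_{2+k}(3) matrix Lie group."""
    k = len(contacts)
    X = np.eye(3 + 2 + k)
    X[:3, :3] = R
    for j, col in enumerate([v, p] + list(contacts)):
        X[:3, 3 + j] = col
    return X

def propagate(R, v, p, contacts, omega, accel, dt,
              g=np.array([0.0, 0.0, -9.81])):
    """IMU-driven mean propagation; stance contact points stay stationary."""
    R_new = R @ expm_so3(omega * dt)       # body-frame gyro integration
    v_new = v + (R @ accel + g) * dt       # rotate accel to world, add gravity
    p_new = p + v * dt
    return R_new, v_new, p_new, contacts   # d_i constant during stance
```

For a biped with two contact points per foot (N = 2 legs, k = 4), the state matrix is 9x9; modeling both contacts per leg is what lets the filter capture the rolling ground interaction described above.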