
ergoCub

ergoCub is a 1.5 m ergonomic humanoid robot with AI-driven collaborative lifting, 10 kg payload, advanced sensors, and autonomous navigation for safer human-robot teamwork.
Software Type
Closed Source
Software Package
  • AI-based ergonomic motion planning and control
  • Multi-sensor fusion for perception and navigation
  • Vision modules for human intention recognition
  • Force-torque sensor integration for safe collaboration
  • Cloud-based software updates and monitoring
Actuators
Equipped with advanced electric actuators optimized for smooth, precise, and ergonomic joint movements that support collaborative lifting and manipulation.
Compute
Powered by Nvidia Jetson AGX Xavier and Intel 11th Gen i7 processors, delivering high-performance AI inference, sensor fusion, and real-time control.
Sensors
Includes Intel RealSense depth cameras, LiDAR, force-torque sensors, and OLED display for comprehensive perception, interaction, and feedback.

Robot Brief

ergoCub is a 1.5-meter-tall humanoid robot developed by the Italian Institute of Technology, designed with a strong focus on ergonomic collaboration with humans. Weighing approximately 55.7 kg, it is engineered to minimize physical strain during collaborative lifting and manual handling tasks, reducing the risk of workplace injuries such as musculoskeletal disorders. Building on the iCub platform, ergoCub integrates advanced AI, sensor fusion, and ergonomic design principles to optimize human-robot interaction in industrial and healthcare settings. It features dexterous arms capable of manipulating objects up to 10 kg and uses depth vision from Intel RealSense cameras together with LiDAR for precise navigation. Powered by an Nvidia Jetson AGX Xavier and an Intel 11th Gen i7 processor, and fitted with a flexible 2K OLED screen for expressive interaction, ergoCub uses force-torque sensors and AI algorithms to respond intuitively to external forces and worker intentions. Its autonomous navigation and collaborative manipulation capabilities make it well suited to reducing physical workload and enhancing safety in a range of environments.

Use Cases

ergoCub assists human workers by performing physically demanding tasks such as lifting, holding, and transporting loads up to 10 kg, while adapting its movements based on ergonomic principles to minimize strain on both the robot and human collaborators. It autonomously navigates complex environments using depth cameras and LiDAR, recognizes human intentions through AI vision modules, and interacts safely via force-torque sensing. The robot supports collaborative workflows in manufacturing, healthcare, and logistics, enhancing productivity and reducing injury risks.
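The ergonomic adaptation described above can be caricatured with a toy scoring function: given several candidate handover heights, prefer the one that keeps the human partner closest to a neutral, elbow-height posture. This is purely an illustrative sketch; ergoCub's actual planners are closed source, and every function name, gain, and threshold below is a made-up assumption.

```python
def handover_score(height_m, elbow_m=1.05, reach_m=0.4, mass_kg=10.0):
    """Toy ergonomic score: penalise vertical deviation from the
    partner's elbow height, plus the constant moment the payload
    exerts at the given reach. Lower is better. Illustrative only;
    this is not ergoCub's real cost function."""
    bend_penalty = abs(height_m - elbow_m)   # metres of stoop or lift
    moment = mass_kg * 9.81 * reach_m        # N*m held at arm's reach
    return bend_penalty * 10.0 + moment * 0.01

def best_handover(candidates, **kw):
    """Pick the candidate handover height with the lowest strain score."""
    return min(candidates, key=lambda h: handover_score(h, **kw))

print(best_handover([0.6, 0.9, 1.1, 1.4]))  # -> 1.1 (closest to elbow height)
```

A real ergonomic planner would score full joint trajectories against biomechanical load models rather than a single height, but the overall structure (enumerate candidates, score the strain, pick the minimum) is the same.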

Industries

  • Industrial Automation: Supports collaborative lifting and material handling to reduce worker strain.
  • Healthcare: Assists with patient handling and rehabilitation tasks requiring safe physical interaction.
  • Logistics & Warehousing: Performs load transportation and object manipulation in dynamic environments.
  • Research & Development: Serves as a platform for ergonomic robotics and AI innovation.
  • Safety & Ergonomics: Helps prevent workplace injuries through intelligent collaboration.

Specifications

  • Height (standing): 1500 mm
  • Weight (without battery): 55.7 kg

Intro

ergoCub stands 150 cm tall and weighs 55.7 kg, featuring a human-inspired ergonomic design that reduces energy expenditure during joint lifting tasks. It is equipped with:

  • Two dexterous arms capable of precise object manipulation and load handling up to 10 kg
  • Intel RealSense depth cameras and LiDAR sensors for 3D perception and autonomous navigation
  • Force-torque sensors enabling intuitive response to external forces and safe collaboration
  • Nvidia Jetson AGX Xavier and Intel 11th Gen i7 processors for AI computation and control
  • Flexible OLED 2K screen for expressive interaction and user feedback
  • AI vision modules for recognizing human intentions and object properties
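To make the navigation point concrete, here is a toy heading selector over a one-dimensional array of range readings, such as a single LiDAR scan line: it simply steers toward the direction with the most free space. This is a generic sketch, not ergoCub's (closed-source) navigation stack, and the function name, field of view, and safety threshold are all assumptions.

```python
def pick_heading(ranges, fov_deg=180.0, safety_m=0.8):
    """Return the heading (degrees; 0 = straight ahead, negative = left)
    of the beam with the most free space, or None if every beam reads
    closer than safety_m. Toy example only; not ergoCub's real software."""
    best_i, best_r = None, safety_m
    for i, r in enumerate(ranges):
        if r > best_r:
            best_i, best_r = i, r
    if best_i is None:
        return None                       # fully blocked: stop and replan
    step = fov_deg / (len(ranges) - 1)    # angular spacing between beams
    return -fov_deg / 2 + best_i * step

# Obstacle dead ahead (0.4 m), clearest path 45 degrees to the right:
scan = [2.0, 1.5, 0.4, 3.5, 3.0]   # 5 beams spread over 180 degrees
print(pick_heading(scan))           # -> 45.0
```

A real stack fuses depth-camera and LiDAR data into a costmap and plans over it, but the core decision, trading heading against measured clearance, looks much like this.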

Connectivity

  • WiFi (2.4 GHz and 5 GHz)
  • Ethernet and USB ports for peripherals
  • Cloud connectivity for AI updates and remote monitoring

Capabilities

  • Collaborative lifting and load handling with ergonomic motion planning
  • Autonomous navigation and obstacle avoidance using multi-sensor fusion
  • Real-time recognition of human intentions and adaptive response
  • Safe physical interaction through force-torque sensing
  • AI-driven manipulation and grip control for diverse objects
  • Integration with wearable sensors to monitor worker biomechanics
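As an illustration of how force-torque sensing can enable the safe physical interaction listed above, the snippet below sketches a one-dimensional admittance controller: the robot renders a virtual mass-damper, so a push from the human turns into compliant motion rather than rigid resistance. This is a textbook scheme, not ergoCub's actual (closed-source) controller; the virtual mass, damping, and function names are all assumptions.

```python
def admittance_step(force, pos, vel, dt, m=5.0, d=20.0, k=0.0):
    """One Euler step of the virtual dynamics m*a + d*v + k*x = F_ext.
    Returns the updated (pos, vel) setpoint for the robot to track.
    Generic textbook scheme; the gains here are illustrative only."""
    acc = (force - d * vel - k * pos) / m
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# A steady 10 N push for one second (1 kHz control loop):
pos, vel = 0.0, 0.0
for _ in range(1000):
    pos, vel = admittance_step(10.0, pos, vel, dt=0.001)
# With k = 0 the velocity settles toward F/d = 0.5 m/s, so the
# robot yields smoothly in the direction of the push.
```

Raising the damping d makes the robot feel stiffer to the human partner, while lowering the virtual mass m makes it respond faster; balancing the two is the usual tuning trade-off in admittance control.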