Pepper

Pepper by SoftBank Robotics is a 1.2m social humanoid robot with 20 DOF, emotion recognition, multilingual speech, and a touchscreen, ideal for customer service and education.
Software Type
Closed Source
Software Package
NAOqi operating system. Speech synthesis and speech-to-text engines. People perception and emotion recognition modules. SDKs for Python, C++, Java, JavaScript, and ROS. Real-time navigation and collision avoidance software.
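As a minimal sketch of the SDK workflow, the Python snippet below connects to NAOqi over the network with the qi framework and calls the ALTextToSpeech service; the robot's IP address is a placeholder you would replace with your own.

```python
import qi

ROBOT_URL = "tcp://192.168.1.10:9559"   # placeholder IP; 9559 is NAOqi's default port

session = qi.Session()
session.connect(ROBOT_URL)                    # attach to the robot's NAOqi service bus
tts = session.service("ALTextToSpeech")       # fetch the speech synthesis service
tts.say("Hello, I am Pepper.")                # speak through the on-board loudspeakers
```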
Actuators
20 DC motors enabling 20 degrees of freedom across the head, arms, hands, knees, and mobile base, allowing fluid and natural movements.
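To give a feel for how those joints are driven in practice, here is a hedged sketch using the ALMotion and ALRobotPosture services from the Python SDK; joint names and angles follow NAOqi's conventions, and the robot address is a placeholder.

```python
import qi

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")    # placeholder robot address

motion = session.service("ALMotion")
posture = session.service("ALRobotPosture")

motion.wakeUp()                               # stiffen the motors
posture.goToPosture("StandInit", 0.5)         # move to a neutral standing posture

# Turn the head and raise the right arm (angles in radians).
motion.setAngles(["HeadYaw", "HeadPitch"], [0.3, -0.1], 0.2)
motion.angleInterpolation("RShoulderPitch", -0.5, 1.5, True)  # reach target in 1.5 s
```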
Compute
Intel Atom E3845 processor powering onboard computing for perception, speech, and control tasks.
Sensors
Two 5-megapixel RGB cameras (mouth and forehead). One 3D depth sensor behind the eyes. Four directional microphones. Touch sensors on the head, hands, and torso. Sonar sensors, laser sensors, bumpers, and gyroscopes for navigation and balance.
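For illustration, the sketch below polls two of those sensors through ALMemory; the sonar memory keys are the ones commonly listed for Pepper, but treat them as assumptions to verify against your NAOqi version.

```python
import qi

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")    # placeholder robot address

memory = session.service("ALMemory")
sonar = session.service("ALSonar")
sonar.subscribe("sensor_demo")                # start the sonar extractor

# Assumed memory keys for Pepper's front/back sonar distances (metres).
front = memory.getData("Device/SubDeviceList/Platform/Front/Sonar/Sensor/Value")
back = memory.getData("Device/SubDeviceList/Platform/Back/Sonar/Sensor/Value")
print("Sonar distance (m): front=%.2f back=%.2f" % (front, back))

sonar.unsubscribe("sensor_demo")
```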
Max Op. time
720
mins

Robot Brief

Pepper is the world’s first social humanoid robot designed to recognize faces and human emotions, optimized for natural human interaction through conversation and touch. Standing 1.2 meters tall, Pepper features 20 degrees of freedom enabling expressive, fluid movements. It is equipped with a 10.1-inch touchscreen on its chest to complement verbal interaction with visual information. Pepper integrates a rich sensor suite including two HD cameras, a 3D depth sensor, four microphones, touch sensors, sonar sensors, infrared sensors, bumpers, and gyroscopes, enabling omnidirectional autonomous navigation and multimodal interaction. It supports speech recognition and dialogue in 15 languages and uses perception modules to recognize and track people, allowing it to respond to emotions and engage users effectively. Pepper’s open and programmable platform supports multiple programming languages (Python, C++, Java, JavaScript, and ROS) and SDKs, making it a versatile tool for businesses, schools, research, and customer service. Over 17,000 units have been adopted worldwide, serving as assistants in retail, hospitality, education, and more. Despite its popularity, Pepper is currently out of stock, as SoftBank ceased production in 2021.
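The multilingual speech support mentioned above is exposed through the ALTextToSpeech service; the sketch below switches languages at run time, assuming the corresponding language packs are installed on the robot and using a placeholder address.

```python
import qi

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")    # placeholder robot address

tts = session.service("ALTextToSpeech")
print(tts.getAvailableLanguages())            # language packs installed on this robot

tts.setLanguage("French")
tts.say("Bonjour, je m'appelle Pepper.")
tts.setLanguage("English")
tts.say("Hello, my name is Pepper.")
```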

Use Cases

Pepper interacts naturally with people by recognizing faces and emotions, engaging in conversations, providing information, guiding visitors, and performing expressive gestures. It serves as a social companion, customer service assistant, and educational tool, enhancing user experience through speech, touch, and visual displays.
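As a rough sketch of that interaction loop, the snippet below subscribes to the FaceDetected event raised by the ALFaceDetection module and greets whoever comes into view; the address and subscriber name are placeholders.

```python
import qi
import time

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")    # placeholder robot address

memory = session.service("ALMemory")
tts = session.service("ALTextToSpeech")
faces = session.service("ALFaceDetection")
faces.subscribe("greeter_demo")               # start the face detection extractor

def on_face(value):
    # The FaceDetected event carries face data; an empty value means no face in view.
    if value:
        tts.say("Hello, nice to see you!")

subscriber = memory.subscriber("FaceDetected")
subscriber.signal.connect(on_face)

time.sleep(30)                                # keep the script alive to receive events
faces.unsubscribe("greeter_demo")
```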

Industries

  • Retail: Welcomes customers, provides product information, and enhances shopping experiences.
  • Education: Facilitates interactive learning and robotics research.
  • Hospitality: Greets and guides guests, improving customer service.
  • Healthcare: Offers companionship and assistance to patients.
  • Research: Used extensively in human-robot interaction and AI studies.

Specifications

Length: 120 mm
Width: 485 mm
Height (Rest): - mm
Height (Stand): - mm
Height (Min): - mm
Height (Max): 1200 mm
Weight (With Batt.): - kg
Weight (No Batt.): 28 kg
Max Step Height: 15 mm
Max Slope: - °
Op. Temp (Min): - °C
Op. Temp (Max): - °C
Ingress Rating: -

Intro

Pepper is a 1.2-meter tall humanoid robot with 20 degrees of freedom enabling natural and expressive movements. It features a 10.1-inch touchscreen on its chest for visual interaction. Equipped with two RGB HD cameras, a 3D depth sensor, four microphones, and multiple tactile sensors, Pepper can recognize faces, track emotions, and navigate autonomously using infrared, sonar, and laser sensors. Its stable wheeled base allows movement up to 3 km/h. Pepper runs on the NAOqi operating system and supports multiple programming languages and SDKs for flexible development. The robot is designed for social interaction, customer engagement, and educational purposes.
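To illustrate how the wheeled base is commanded, here is a hedged sketch using ALMotion for relative moves and ALNavigation for obstacle-aware point-to-point motion; coordinates are metres and radians in the robot frame, and the address is a placeholder.

```python
import qi

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")    # placeholder robot address

motion = session.service("ALMotion")
navigation = session.service("ALNavigation")

motion.wakeUp()
motion.moveTo(0.5, 0.0, 0.0)     # 0.5 m straight ahead, no rotation
motion.moveTo(0.0, 0.0, 1.57)    # turn roughly 90 degrees in place

navigation.navigateTo(1.0, 0.0)  # planned move 1 m ahead with obstacle avoidance
```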

Connectivity

  • Wi-Fi (2.4 GHz / 5 GHz)
  • Ethernet (10/100/1000BASE-T)
  • Bluetooth
  • USB ports

Capabilities

  • Recognizes faces and human emotions through advanced perception modules
  • Multilingual speech recognition and dialogue (15 languages)
  • Expressive gestures with 20 degrees of freedom
  • Autonomous omnidirectional navigation using sonars, lasers, and bumpers
  • Touchscreen interface for enhanced communication
  • Multimodal interaction via LEDs, microphones, speakers, and tactile sensors
  • Open and programmable platform supporting Python, C++, Java, JavaScript, and ROS
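The capabilities above map onto NAOqi services that can be combined in a few lines; the sketch below reacts to the TouchChanged event by flashing the eye LEDs and speaking, with a placeholder address and sensor names as reported by the robot.

```python
import qi
import time

session = qi.Session()
session.connect("tcp://192.168.1.10:9559")    # placeholder robot address

memory = session.service("ALMemory")
leds = session.service("ALLeds")
tts = session.service("ALTextToSpeech")

def on_touch(value):
    # TouchChanged delivers a list of (sensor name, touched) pairs.
    for name, touched in value:
        if touched:
            leds.fadeRGB("FaceLeds", 0x0000FF00, 0.3)   # flash the eye LEDs green
            tts.say("Touch detected on " + name)

subscriber = memory.subscriber("TouchChanged")
subscriber.signal.connect(on_touch)

time.sleep(30)                                # listen for touch events for 30 seconds
```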