PR2

PR2, developed by Willow Garage and introduced in 2010, is a research mobile manipulation robot with two 7-DOF arms, an omnidirectional base, and ROS integration for autonomous manipulation and navigation.
Software Type
Open Source
Software Package
Robot Operating System (ROS) for control and development. Point Cloud Library (PCL) for 3D perception. Motion planning and manipulation libraries. Speech recognition and synthesis modules.
Actuators
Two compliant 7-DOF arms with force-sensitive joints and fingertip sensors for adaptive and delicate object manipulation.
Compute
Two 8-core computers with a combined 48 GB of RAM, providing robust computational power for perception, planning, and control.
Sensors
5-megapixel RGB camera. Tilting planar laser scanner for distance measurement. Inertial measurement unit (IMU). Structured light “texture projector” for 3D environment capture.
Max Op. time
-
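
To make the software entries above concrete, the sketch below shows the shape of a typical PR2 perception node: a small ROS (rospy) program that subscribes to a point cloud, the kind of data the Point Cloud Library consumes downstream. This is a minimal illustration only; the `/points` topic name is a placeholder, since the actual topic depends on the robot's sensor launch configuration.

```python
#!/usr/bin/env python
# Minimal sketch of how a perception node plugs into the PR2's ROS stack.
# The '/points' topic name is a placeholder; actual PR2 point cloud topics
# depend on the installed sensors and launch files.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2


def cloud_callback(msg):
    # Count valid points as a stand-in for real perception work
    # (segmentation, object recognition, etc.).
    num_points = sum(1 for _ in point_cloud2.read_points(msg, skip_nans=True))
    rospy.loginfo("Received cloud with %d valid points", num_points)


if __name__ == "__main__":
    rospy.init_node("pr2_cloud_listener")
    rospy.Subscriber("/points", PointCloud2, cloud_callback)
    rospy.spin()
```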

Robot Brief

The PR2 (Personal Robot 2) is one of the most advanced research robots ever built, developed by Willow Garage to serve as a common hardware and software platform for robotics researchers worldwide. Introduced in 2010, the PR2 is designed to accelerate robotics development by providing a robust, flexible, and open platform integrated with the Robot Operating System (ROS). It features two compliant 7-degree-of-freedom arms capable of delicate manipulation tasks, an omnidirectional mobile base, and a telescoping spine that allows the upper body to reach higher objects. Equipped with a rich sensor suite including cameras, laser range finders, and inertial measurement units, the PR2 can autonomously navigate complex environments, recognize and manipulate objects, and interact with humans through speech and gestures. Its software and hardware architecture enable tasks such as cleaning tables, folding towels, fetching drinks, and opening doors. The PR2 has been widely adopted by research institutions and has demonstrated impressive capabilities including autonomous navigation to restaurants and object retrieval. As an open platform, it supports extensive customization and development, making it a cornerstone of modern robotics research.

Use Cases

PR2 autonomously navigates indoor environments, manipulates objects with dexterous arms, recognizes and interacts with people, and performs household tasks such as cleaning, fetching items, and opening doors. It serves as a research and development platform for advancing robotic perception, manipulation, and human-robot interaction.
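
As an illustration of the manipulation side of these use cases, the hedged sketch below opens the PR2's right gripper through the standard ROS actionlib interface. The controller name `r_gripper_controller/gripper_action` and the `Pr2GripperCommandAction` type follow the usual PR2 bringup configuration, but should be treated as assumptions that may differ on a particular robot.

```python
#!/usr/bin/env python
# Hedged sketch: opening the PR2's right gripper via actionlib.
# Controller/action names are assumptions based on the typical PR2 setup.
import rospy
import actionlib
from pr2_controllers_msgs.msg import Pr2GripperCommandAction, Pr2GripperCommandGoal


def open_gripper(position_m=0.08, max_effort=-1.0):
    # max_effort of -1.0 conventionally means "do not limit force".
    client = actionlib.SimpleActionClient(
        "r_gripper_controller/gripper_action", Pr2GripperCommandAction)
    client.wait_for_server()

    goal = Pr2GripperCommandGoal()
    goal.command.position = position_m   # gripper opening in metres
    goal.command.max_effort = max_effort
    client.send_goal(goal)
    client.wait_for_result(rospy.Duration(5.0))
    return client.get_state()


if __name__ == "__main__":
    rospy.init_node("pr2_gripper_demo")
    open_gripper()
```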

Industries

  • Research & Development: Provides a flexible platform for robotics innovation and experimentation.
  • Education: Used in universities for teaching robotics and AI principles.
  • Healthcare: Supports research in assistive robotics and human interaction.
  • Service Robotics: Demonstrates potential for household and service tasks automation.

Specifications

Length: -
Width: -
Height (Rest): -
Height (Stand): 1645 mm
Height (Min): -
Height (Max): -
Weight (With Batt.): -
Weight (No Batt.): 200 kg
Max Step Height: -
Max Slope: -
Op. Temp (Min): -
Op. Temp (Max): -
Ingress Rating: -

Intro

PR2 is a human-sized mobile manipulation robot featuring two compliant 7-DOF arms with a payload of 1.8 kg each, an omnidirectional mobile base with four casters, and a telescoping spine for extended reach. It is equipped with a 5-megapixel camera, a tilting laser range finder, an inertial measurement unit, and a “texture projector” for structured light 3D sensing. The robot runs on two 8-core computers with 48 GB RAM total and uses 16 laptop batteries for power. Its software stack is built entirely on the open-source Robot Operating System (ROS), enabling easy development and integration of new capabilities. PR2’s compliant arms and fingertip sensors allow delicate and adaptive manipulation, while its rich sensor suite supports autonomous navigation and environment mapping.
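
Because the base is holonomic, it can be commanded with lateral as well as forward and rotational velocities. The sketch below publishes a sideways velocity as a `geometry_msgs/Twist`; the `/base_controller/command` topic name is an assumption based on the typical PR2 controller configuration.

```python
#!/usr/bin/env python
# Hedged sketch of commanding the PR2's holonomic base: the four powered
# casters allow lateral (y) motion in addition to forward (x) and
# rotational (z) motion. The topic name below is an assumption.
import rospy
from geometry_msgs.msg import Twist

if __name__ == "__main__":
    rospy.init_node("pr2_base_strafe_demo")
    pub = rospy.Publisher("/base_controller/command", Twist, queue_size=1)
    rate = rospy.Rate(10)  # base controllers expect a steady command stream

    cmd = Twist()
    cmd.linear.y = 0.1  # strafe sideways at 0.1 m/s (holonomic motion)

    start = rospy.Time.now()
    while not rospy.is_shutdown() and rospy.Time.now() - start < rospy.Duration(2.0):
        pub.publish(cmd)
        rate.sleep()
    pub.publish(Twist())  # publish zero velocities to stop the base
```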

Connectivity

  • Wi-Fi (2.4 GHz / 5 GHz)
  • Ethernet
  • USB ports

Capabilities

  • Autonomous navigation with omnidirectional mobility
  • Dexterous manipulation with two 7-DOF compliant arms
  • Object recognition and grasping using cameras and laser sensors
  • Speech and gesture-based human-robot interaction
  • Tasks including table cleaning, towel folding, fetching drinks, and door opening
  • Open-source software enabling extensive customization
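
The autonomous-navigation capability listed above is normally exercised through the ROS navigation stack. The sketch below sends the robot a pose goal via the standard `move_base` action interface; the map frame and coordinates are purely illustrative, and a real goal would be expressed in a map built from the robot's laser data.

```python
#!/usr/bin/env python
# Hedged sketch: sending a navigation goal to the ROS navigation stack
# (move_base). Frame and coordinates are illustrative only.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal


def go_to(x, y):
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # identity orientation

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()


if __name__ == "__main__":
    rospy.init_node("pr2_nav_demo")
    go_to(2.0, 1.0)
```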