
Xiaoli Wang1, Sipu Ruan1, Xin Meng1, Hongtao Wu1, Wanze Li1, Zhanhong Sun1, Yuwei Wu1, Ceng Zhang1, Wan Su1, Chen Dong2, Cecilia Laschi1,2*, Gregory Chirikjian1,2*

1Department of Mechanical Engineering, National University of Singapore, Singapore

2Department of Mechanical Engineering, University of Delaware, USA

*Principal Investigator

Motivation: Learning Affordances Through Play

Just like this monkey experimenting with stones, humans learn to use tools by playing, testing, and discovering their affordances. A stick can become a lever, a rock can become a hammer — but this is not taught directly. Instead, it emerges through interaction and imagination.

This project takes inspiration from this natural process: our goal is to enable robots to discover and reason about object affordances in the same way.

This biological analogy underpins our concept of Affordance Imagination: giving robots the ability to reason about function through simulation and exploration, much as primates and humans learn to use tools by trying them out.

Monkey tool imagination
Robot imagination

Introduction

This website presents an affordance dictionary of common household objects, together with our group's research outcomes, demonstrating progress toward the goal of our funded project: using physical simulation to detect and reason about the affordances of objects.

Our central concept is affordance imagination — enabling robots to mentally simulate possible interactions with previously unseen objects. By integrating physics-based reasoning, geometric analysis, and learning methods (from demonstrations and large language models), our robots can classify novel objects, predict functional poses, and execute manipulation strategies without relying on massive amounts of training data.

The works presented here illustrate how affordance imagination bridges the gap between theory and practice: from seating a teddy bear on a previously unseen chair, to predicting hanging poses of tools, to capping containers, to leveraging LLMs for task decomposition. Together, these efforts chart a path toward safe, generalizable, and intelligent robot interaction in household and healthcare environments.

Object Affordance Dictionary

This section is a compact catalog of common household objects that supports affordance reasoning, simulation, and demonstration. Each entry records the object category, a link to the 3D asset, the main affordance, an LLM-generated reasoning analysis, the dominant interaction pattern, and the corresponding imagination video in simulation.

Current asset mapping. The repository contains household object meshes under assets/models/ in .obj and .glb format, affordance reasoning profiles under assets/profiles/, and demonstration videos under assets/videos/. Click a model path to open the 3D viewer, analysis.txt to read the corresponding reasoning, or the .mp4 video to visualize the imagination process in PyBullet simulation. To achieve generalized affordance reasoning, we developed a framework that integrates LLMs for novel-object imagination; our demo code is available in ImaginationWithLLM.
| ID | Object Name | 3D Model | Primary Affordance | Affordance Reasoning | Interaction Pattern | Video |
|----|-------------|----------|--------------------|----------------------|---------------------|-------|
| 01 | Ashtray | assets/models/ashtray.glb | Collect ash and small discarded items | analysis.txt | grasp-rim, place-on-surface, drop-into | assets/videos/ashtray.mp4 |
| 02 | Basin | assets/models/basin.glb | Contain water or loose objects | analysis.txt | grasp-rim, carry, fill/pour | assets/videos/basin.mp4 |
| 03 | Basket | assets/models/basket.obj | Store and transport household items | analysis.txt | grasp-handle, carry, load/unload | assets/videos/basket.mp4 |
| 04 | Bathtub | assets/models/bathtub.glb | Contain a body for washing or soaking | analysis.txt | approach, step-in, support-body | assets/videos/bathtub.mp4 |
| 05 | Bed | assets/models/bed.glb | Support lying, resting, and sleeping | analysis.txt | approach, sit/lie, reposition-bedding | assets/videos/bed.mp4 |
| 06 | Bottle | assets/models/bottle.glb | Store and pour liquid | analysis.txt | wrap-grasp, tilt-pour, set-down | assets/videos/bottle.mp4 |
| 07 | Bowl | assets/models/bowl.glb | Contain food or small items | analysis.txt | grasp-rim, carry, place | assets/videos/bowl.mp4 |
| 08 | Box | assets/models/box.glb | Enclose, store, and organize contents | analysis.txt | grasp-side, open/close, pack | assets/videos/box.mp4 |
| 09 | Chair | assets/models/chair.glb | Support sitting posture | analysis.txt | grasp-backrest, pull/push, sit-support | assets/videos/chair.mp4 |
| 10 | Cubby | assets/models/cubby.glb | Compartmentalize and store objects | analysis.txt | insert, retrieve, organize | assets/videos/cubby.mp4 |
| 11 | Cup | assets/models/cup.glb | Contain and transport a drink | analysis.txt | grasp-side, lift, drink/pour | assets/videos/cup.mp4 |
| 12 | Cupboard | assets/models/cupboard.glb | Store protected household items | analysis.txt | open-door, shelve, close-door | assets/videos/cupboard.mp4 |
| 13 | Display Stand | assets/models/display.glb | Present and support an object visibly | analysis.txt | place-object, stabilize, reposition | assets/videos/display.mp4 |
| 14 | Ladle | assets/models/ladle.glb | Scoop and transfer liquid | analysis.txt | grasp-handle, dip, pour-out | assets/videos/ladle.mp4 |
| 15 | Plate | assets/models/plate.glb | Support and present food | analysis.txt | pinch-edge, carry-flat, set-down | assets/videos/plate.mp4 |
| 16 | Pot | assets/models/pot.glb | Contain and heat ingredients | analysis.txt | grasp-handle, lift, pour | assets/videos/pot.mp4 |
| 17 | Riser | assets/models/riser.glb | Raise an object to a higher level | analysis.txt | place-under, elevate, stabilize | assets/videos/riser.mp4 |
| 18 | Shelf | assets/models/shelf.glb | Support stored objects vertically | analysis.txt | place-object, stack, retrieve | assets/videos/shelf.mp4 |
| 19 | Shoe Rack | assets/models/shoe_rack.glb | Organize and store footwear | analysis.txt | place-pair, align, retrieve | assets/videos/shoe_rack.mp4 |
| 20 | Stool | assets/models/stool.glb | Support sitting or standing reach | analysis.txt | top-grasp, reposition, step/sit | assets/videos/stool.mp4 |
| 21 | Table | assets/models/table.glb | Support placement and workspace use | analysis.txt | place-object, lean-support, push | assets/videos/table.mp4 |
| 22 | Trash Bin | assets/models/trashbin.glb | Receive and contain waste | analysis.txt | drop-into, grasp-rim, relocate | assets/videos/trashbin.mp4 |
| 23 | TV Stand | assets/models/tv_stand.obj | Support media devices at viewing height | analysis.txt | place-object, organize-cables, reposition | assets/videos/tv_stand.mp4 |
| 24 | Vase | assets/models/vase.obj | Hold flowers or decorative stems | analysis.txt | grasp-body, insert-stems, place-display | assets/videos/vase.mp4 |
| 25 | Wine Glass | assets/models/wine_glass.obj | Contain and present a beverage delicately | analysis.txt | pinch-stem, lift, sip/place | assets/videos/wine_glass.mp4 |

Put a Lid on It! A Learning-Free Method to Cap a Container via Physical Simulations (UR 2025)

lid

Develops a simulation-based method for matching unseen containers and lids, using Gaussian process distance fields and matching imagination to achieve a success rate of over 91%. Code.
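The Gaussian process distance field component can be illustrated with a rough, self-contained sketch in the spirit of log-based GP implicit surface formulations (an illustrative analogue with assumed parameter names, not the paper's implementation): regress the field exp(-d/lam), which equals 1 on the surface, from sampled surface points, then recover an approximate distance as d = -lam * log(mean). The estimate is exact on the surface and only approximate away from it.

```python
import numpy as np

def gp_distance_field(surface_pts, query_pts, lam=0.2, noise=1e-6):
    """Toy GP distance field: fit a GP with an exponential (Matern-1/2)
    kernel to surface samples with target value 1, then map the posterior
    mean back to a distance via d = -lam * log(mean)."""
    def k(a, b):
        dist = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
        return np.exp(-dist / lam)
    K = k(surface_pts, surface_pts) + noise * np.eye(len(surface_pts))
    alpha = np.linalg.solve(K, np.ones(len(surface_pts)))  # targets are all 1
    mean = k(query_pts, surface_pts) @ alpha
    mean = np.clip(mean, 1e-12, None)                      # guard the log
    return -lam * np.log(mean)

# Surface: a unit circle sampled in 2-D; query the centre and a surface point.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
d = gp_distance_field(circle, np.array([[0.0, 0.0], [1.0, 0.0]]))
```

On the circle, the distance at the surface query comes out near zero, while the centre query yields a clearly larger (though underestimated) value, which is enough for the relative surface comparisons this kind of matching needs.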

Goal-Guided Reinforcement Learning: Leveraging Large Language Models for Long-Horizon Task Decomposition (ICRA 2025)

g2rl

Proposes G2RL, where LLMs generate subgoals to help reinforcement learning explore efficiently in long-horizon tasks. Validated across simulated environments with improved convergence. Code.

RAIL: Robot Affordance Imagination with Large Language Models (ISER 2025)

rail

Combines LLMs with physics-based simulation for automatic affordance reasoning. From minimal semantic input, robots classify novel objects, predict functional poses, and perform unseen tasks. Code.

RaggeDi: Diffusion-based State Estimation of Disordered Rags, Sheets, Towels and Blankets (arXiv 2024)

raggedi

Applies diffusion models for cloth state estimation. Represents cloth deformation as a translation map and achieves superior accuracy in both simulation and real-world experiments. Code.

PRIMP: PRobabilistically-Informed Motion Primitives for Efficient Affordance Learning from Demonstration (T-RO 2024)

primp

Introduces PRIMP, which learns probabilistic motion primitives from demonstrations and integrates them with workspace-aware motion planning (Workspace-STOMP). Demonstrated on tool-use and affordance-based tasks. Code.
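PRIMP itself formulates trajectory distributions on Lie groups; as a much-simplified 1-D analogue (ProMP-style Gaussian conditioning with invented names, not PRIMP's actual formulation), the sketch below fits a Gaussian over demonstrated trajectories and conditions it on a desired via-point:

```python
import numpy as np

def condition_on_via_point(demos, t_star, y_star, obs_noise=1e-5):
    """Treat the demos (n_demos x T) as samples of a Gaussian over
    trajectories, then condition that Gaussian on passing through
    y_star at timestep index t_star."""
    mu = demos.mean(axis=0)
    sigma = np.cov(demos, rowvar=False) + 1e-6 * np.eye(demos.shape[1])
    gain = sigma[:, t_star] / (sigma[t_star, t_star] + obs_noise)
    mu_new = mu + gain * (y_star - mu[t_star])             # shifted mean
    sigma_new = sigma - np.outer(gain, sigma[t_star, :])   # shrunk covariance
    return mu_new, sigma_new

# Five noisy straight-line demonstrations from 0 to 1 over 50 steps.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
demos = t + 0.05 * rng.standard_normal((5, 50))

# Condition the learned distribution to pass through 0.8 at the midpoint.
mu_new, sigma_new = condition_on_via_point(demos, t_star=25, y_star=0.8)
```

The conditioned mean bends to pass near the via-point while staying close to the demonstrations elsewhere; PRIMP performs the analogous update for poses in SE(3) and adds workspace-aware planning on top.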

Grasping by Hanging: A Learning-Free Grasping Detection Method for Previously Unseen Objects (arXiv 2024)

grasping-hanging

Introduces Grasping-by-Hanging (GbH), a learning-free approach where robots detect hangable structures to derive grasping poses. Particularly effective for thin or flat objects. Code.

I Get the Hang of It! A Learning-Free Method to Predict Hanging Poses for Previously Unseen Objects (RA-L 2024)

hanging

Proposes a learning-free framework for predicting stable hanging poses using geometric and mechanical analysis. Outperforms learning-based baselines in simulation and real-world tests. Code.

Prepare the Chair for the Bear! Robot Imagination of Sitting Affordance to Reorient Previously Unseen Chairs (RA-L 2023)

prepare-chair

Robots reconstruct previously unseen chairs, simulate their sitting affordance, and reorient them so a humanoid agent (teddy bear proxy) can be seated. Demonstrates object classification and functional pose prediction via physics-based imagination. Code.

Put the Bear on the Chair! Intelligent Robot Interaction with Previously Unseen Chairs via Robot Imagination (arXiv 2022)

bear-chair

Extends chair affordance reasoning by enabling a humanoid robot to physically seat a teddy bear. Introduces a human-assistance module to adjust inaccessible chair poses via natural language instructions. Code.