About the Workshop

Generative AI (GenAI) techniques, including diffusion models such as Stable Diffusion, are transforming robotic perception and development by enabling rapid multimodal data generation and novel scene synthesis from prompts in text, images, and other modalities. This makes GenAI a scalable and diverse source of training data, complementary to physics simulators such as MuJoCo and Isaac Lab.
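As a simple illustration of text-prompted scene synthesis, the sketch below generates a synthetic manipulation image with the Hugging Face diffusers library and the Stable Diffusion v1.5 checkpoint. Both the library and the checkpoint are illustrative assumptions, not workshop requirements; any text-to-image diffusion model would serve the same purpose.

```python
# Minimal sketch: text-to-image scene synthesis for robotic data generation.
# Assumes the Hugging Face `diffusers` library and a CUDA-capable GPU;
# the checkpoint name is one illustrative choice among many.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# A text prompt describing the desired robotic scene.
prompt = "a robot arm grasping a red mug on a cluttered workbench, photorealistic"

# Each call yields a novel synthetic image; varying the prompt or seed
# provides the diversity that physics simulators alone do not offer.
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("synthetic_scene.png")
```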

[Workshop figure]

This workshop brings together leading experts in AI, robotics, computer vision, and simulation to explore how GenAI can enhance robotic development and deployment. Its focus is on combining the diversity and scalability of GenAI data generation with physics-aware simulation to improve real-world transferability, and it aims to establish a cloud-based framework for benchmarking robotic performance on GenAI-generated datasets.

Call for Papers

We invite submissions presenting novel research, methodologies, and applications related to robotic data generation, evaluation, and the integration of generative AI with simulation and real-world deployment. All papers will be peer-reviewed for originality, relevance, technical quality, and clarity. Accepted papers will be presented as posters. At least one author must attend in person to present.

Topics of Interest:

  • Generative AI for robotic data generation and simulation
  • Task-aligned and physically realistic data synthesis for robotics
  • Evaluation and benchmarking of generated data and trained models
  • Multimodal data generation and prompt alignment for robotics tasks
  • Bridging simulation and real-world deployment in robotics
  • Cloud-based robotic evaluation platforms and benchmarking frameworks
  • Applications of GenAI in manipulation, navigation, and teleoperation
  • Datasets, metrics, and reproducibility in robotic data generation

Awards:

  • Best Paper Award: US$200, awarded to the most outstanding paper as selected by the program committee.
  • Best Poster Award: US$400, awarded to the best poster presentation.

Important Dates:

  • Paper submission deadline (provisional): September 15, 2025
  • Notification of acceptance (provisional): September 22, 2025

Late-Breaking Work Submissions:

  • Paper submission deadline: September 15, 2025
  • Notification of acceptance: September 22, 2025

Submission Instructions:

  • There is no page limit for submitted papers, but they should fall within the scope of the workshop.
  • We welcome both new research contributions and previously published work.
  • Presenting your work at the workshop will not interfere with its future publication elsewhere.
  • Please use the main conference’s format guidelines and template, available here.
  • Submit your paper via OpenReview.
  • If you have issues submitting via OpenReview, you can instead email your paper as a PDF to: rodge.iros25@gmail.com

Tentative Program

Time | Talk | Tentative Titles and Comments
2:00 – 2:10 | Opening Remarks | Introduction to the workshop theme and objectives.
2:10 – 2:35 | Keynote Talk by Peter Yichen Chen | Robot Proprioception Meets Differentiable Simulation, Q&A.
2:35 – 3:00 | Keynote Talk by Shan Luo | Data Generation for Visual-Tactile Sensing, Q&A.
3:00 – 3:25 | Keynote Talk by Anh Nguyen | Generative AI for Language-driven Grasping, Q&A.
3:25 – 4:00 | Coffee Break, Poster Session & Live Demo | Informal networking with poster presentations and live demonstrations.
4:00 – 4:25 | Keynote Talk by Weiwei Wan | Advantages and Challenges of Learning Methods in Robotic Manipulation, Q&A.
4:25 – 4:50 | Keynote Talk by Xiaoshuai Hao | Exploring Multimodal Visual Language Models for Embodied Intelligence: Opportunities, Challenges, and Future Directions, Q&A.
4:50 – 5:15 | Keynote Talk by Florian T. Pokorny | Towards Data-Driven Robotic Manipulation Research at Scale, Q&A.
5:15 – 5:50 | Panel Discussion | Keynote speakers and organisers discuss challenges, best practices, and future directions with the audience.
5:50 – 6:00 | Closing Remarks | Summary of key takeaways, awards, and potential collaboration opportunities.

Invited Speakers