RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation

Boyang Wang1*, Haoran Zhang4*, Shujie Zhang1,2*, Jinkun Hao1,3, Mingda Jia1, Qi Lv1, Yucheng Mao1,
Zhaoyang Lyu1, Jia Zeng1, Xudong Xu1†, Jiangmiao Pang1
* Equal contribution   † Corresponding author
1 Shanghai AI Laboratory    2 Tsinghua University    3 Shanghai Jiao Tong University    4 University of Michigan

Real-World Zero-Shot Robot Manipulation Data Augmented by Our RoboVIP

Abstract

The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and setup constraints, collecting large-scale real-world manipulation data remains difficult to scale across diverse environments. Recent work uses text-prompt-conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in the visual observations. However, these approaches often overlook the multi-view, temporally coherent observations required by state-of-the-art policy models, and text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to guide the generation of the desired scene setup. We also build a scalable pipeline to curate a visual identity pool from large robotics datasets. Training downstream vision-language-action and visuomotor policy models on our augmented manipulation data yields consistent performance gains in both simulation and real-robot settings.

Figure 1
Main Pipeline
(1) We extract observation videos and the corresponding action data from robot manipulation datasets, then segment the robot arm and the interacted objects for inpainting-based augmentation.
(2) A large pool of visual identity prompts is curated from robotics datasets and used as conditioning input for our multi-view video diffusion model to produce diverse augmentations.
(3) The augmented videos, paired with the action data from the original manipulation trajectories, are used for downstream VLA and visuomotor policy training, as sketched below.
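Step (3) only swaps the visual observations: the original action labels and language instruction are reused unchanged. The minimal Python sketch below illustrates this repacking; the episode field names (observations, actions, instruction) are assumptions for illustration, not a specific dataset schema.

# A minimal repacking sketch. The episode schema below is assumed for
# illustration and is not RoboVIP's actual data format.
from typing import Dict
import numpy as np

def repack_episode(original: Dict, augmented_views: Dict[str, np.ndarray]) -> Dict:
    """Replace the original per-camera videos with RoboVIP-generated ones while
    keeping the action sequence and language instruction untouched."""
    assert set(augmented_views) == set(original["observations"])
    for name, video in augmented_views.items():
        # Augmentation must preserve the frame count so actions stay aligned.
        assert len(video) == len(original["observations"][name])
    return {
        "observations": augmented_views,         # new, augmented visuals
        "actions": original["actions"],          # unchanged action labels
        "instruction": original["instruction"],  # unchanged language goal
    }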

Methods

First, our segmentation pipeline runs two parallel streams: one for robot-arm segmentation and one for interacted-object segmentation. We first use the gripper-action signal to identify accurate keyframe ranges, which helps locate interacted objects that are not visible in the first or last frame. We then leverage off-the-shelf models such as Cosmos-Reason1 and SAM2, together with several heuristic refinements, to obtain accurate masks in a fully plug-and-play manner.
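As a concrete illustration, the Python sketch below shows how the gripper-action signal can be turned into keyframe ranges and how per-range masks could then be obtained. It is a minimal sketch of the idea, not our exact implementation: it assumes smaller gripper values mean a more closed gripper, and detect_object_box / track_masks are hypothetical wrappers standing in for Cosmos-Reason1 and SAM2.

import numpy as np

def interaction_keyframe_ranges(gripper_signal: np.ndarray,
                                closed_threshold: float = 0.5,
                                pad_frames: int = 5):
    """Return (start, end) frame ranges in which the gripper is closed, i.e. is
    likely holding an object, padded by a few frames on each side.
    Assumes smaller signal values mean a more closed gripper."""
    closed = gripper_signal < closed_threshold
    ranges, start = [], None
    for t, is_closed in enumerate(closed):
        if is_closed and start is None:
            start = t
        elif not is_closed and start is not None:
            ranges.append((max(0, start - pad_frames),
                           min(len(closed) - 1, t - 1 + pad_frames)))
            start = None
    if start is not None:
        ranges.append((max(0, start - pad_frames), len(closed) - 1))
    return ranges

def segment_interacted_objects(frames, gripper_signal, detect_object_box, track_masks):
    """detect_object_box and track_masks are hypothetical wrappers standing in
    for a reasoning VLM (e.g. Cosmos-Reason1) that proposes the interacted
    object and a video segmenter (e.g. SAM2) that propagates its mask."""
    masks = {}
    for start, end in interaction_keyframe_ranges(gripper_signal):
        box = detect_object_box(frames[start])  # object is visible once grasped
        masks[(start, end)] = track_masks(frames[start:end + 1], box)
    return masks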

Overview of Visual Identity Prompting

Existing data augmentation methods for robot manipulation typically rely on text prompts to control the generation process. However, text descriptions alone are often insufficient to precisely specify complex scene configurations, especially for multi-object and multi-view manipulation scenarios.

To address this limitation, we introduce Visual Identity Prompting (VIP) for robot manipulation data augmentation: a conditioning mechanism that augments text prompts with explicit visual exemplars. By providing reference images that encode object appearance, layout, and identity, RoboVIP enables more controllable, consistent, and temporally coherent video generation.

VLA Pipeline

Visual Identity Curation Pipeline

Our visual identity pool is curated via panoptic segmentation over large-scale robotics datasets (BridgeV1, BridgeV2, DROID), followed by an image quality assessment, a CLIP text-image completeness check, a clarity filter, and a resolution filter to retain high-quality identity images.
In the augmentation stage, we randomly select a variable number of identity images from the pool and pack them into a single image frame, which serves as the conditioning input for our video diffusion model.
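The packing step can be illustrated with a short Python sketch. The tile size, grid layout, and the 2-6 image range below are assumptions made for illustration; only the idea of sampling a variable number of identity images and composing them into one conditioning frame follows the pipeline described above.

import random
from PIL import Image

def pack_identity_prompt(pool, min_n=2, max_n=6, tile=256, cols=3):
    """Sample a variable number of identity images from the curated pool and
    tile them into a single RGB frame used as the visual identity prompt."""
    n = min(random.randint(min_n, max_n), len(pool))
    picks = random.sample(pool, k=n)
    rows = (len(picks) + cols - 1) // cols
    canvas = Image.new("RGB", (cols * tile, rows * tile), color=(255, 255, 255))
    for i, path in enumerate(picks):
        crop = Image.open(path).convert("RGB").resize((tile, tile))
        canvas.paste(crop, ((i % cols) * tile, (i // cols) * tile))
    return canvas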



Our RoboVIP video diffusion model is conditioned on the segmented multi-view video sequence, a structured text prompt, and the visual identity prompt to achieve consistent visual augmentation.

RoboVIP Video Diffusion Model Architecture
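For clarity, the sketch below shows one way the three conditioning signals could be assembled before being passed to the diffusion model. The RoboVIPCondition container, its field names, and the mask convention (1 = keep robot/object pixels, 0 = region to regenerate) are assumptions for illustration rather than the released interface.

from dataclasses import dataclass
import numpy as np
from PIL import Image

@dataclass
class RoboVIPCondition:
    masked_videos: dict          # per-view frames with robot/object pixels kept, background zeroed
    inpaint_masks: dict          # per-view masks of the regions the model should regenerate
    text_prompt: str             # structured description of the target scene
    identity_frame: Image.Image  # packed visual identity prompt (see curation pipeline)

def build_condition(views, keep_masks, scene_description, identity_frame):
    """views: {camera: (T, H, W, 3) uint8}; keep_masks: {camera: (T, H, W) in {0, 1}},
    with 1 marking robot-arm / interacted-object pixels to preserve."""
    return RoboVIPCondition(
        masked_videos={k: views[k] * keep_masks[k][..., None] for k in views},
        inpaint_masks={k: 1 - keep_masks[k] for k in views},
        text_prompt=scene_description,
        identity_frame=identity_frame,
    )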

Real-World Robot Deployment Comparisons

Vanilla

Original
Successful Case

RoboEngine
Successful Case

Cosmos-Transfer2.5
Successful Case

Our RoboVIP
Successful Case

Cluttered

Original
Failure Case

RoboEngine
Failure Case

Cosmos-Transfer2.5
Failure Case

Our RoboVIP
Successful Case

Video Generation Results

Droid Augmentation Comparisons

Ground Truth

Cosmos-Transfer2.5

RoboEngine

RoboVIP (Ours)

Case 1

Case 2

Case 3

Case 1

Case 2

Case 3

Case 1

Case 2

Case 3

Case 1

Case 2

Case 3

BridgeData V2 Augmented by Our RoboVIP

Our Case 1

Our Case 2

Our Case 3

Our Case 4

Real-World Robot Trajectories Augmented by Our RoboVIP

Our Case 1
(30 FPS)

Our Case 2
(30 FPS)

Our Case 3
(30 FPS)

Our Case 4
(30 FPS)

Simulation Results

Pi0 Roll-out in SimplerEnv with Our RoboVIP

Put Spoon on Tablecloth

Put Carrot on Plate

Stack Green Block on Yellow Block

Put Eggplant in Basket

Octo Roll-out in SimplerEnv with Our RoboVIP

Put Spoon on Tablecloth

Put Carrot on Plate

Stack Green Block on Yellow Block

Put Eggplant in Basket

Quantitative Results

Table X: Quantitative comparison of policies in SimplerEnv.

BibTeX

@misc{wang2026robovipmultiviewvideogeneration,
  title={RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation},
  author={Boyang Wang and Haoran Zhang and Shujie Zhang and Jinkun Hao and Mingda Jia and Qi Lv and Yucheng Mao and Zhaoyang Lyu and Jia Zeng and Xudong Xu and Jiangmiao Pang},
  year={2026},
  eprint={2601.05241},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2601.05241},
}

References

  1. Yuan, Chengbo, et al. "RoboEngine: Plug-and-Play Robot Data Augmentation with Semantic Robot Segmentation and Background Generation." arXiv preprint arXiv:2503.18738 (2025).
  2. Ali, Arslan, et al. "World Simulation with Video Foundation Models for Physical AI." arXiv preprint arXiv:2511.00062 (2025).
  3. Chi, Cheng, et al. "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion." The International Journal of Robotics Research 44.10-11 (2025): 1684-1704.
  4. Octo Model Team, et al. "Octo: An Open-Source Generalist Robot Policy." arXiv preprint arXiv:2405.12213 (2024).
  5. Black, Kevin, et al. "$\pi_0$: A Vision-Language-Action Flow Model for General Robot Control." arXiv preprint arXiv:2410.24164 (2024).