Article 11
The world is changing and so should our skills! My work on unsupervised, incremental skill learning in evolving environments for RL agents, to be published at #ICLR2022. Paper + demo + code @...
Article 10
Check out my latest work: we trained a (mini) GPT to learn diverse, multi-modal robotic behaviors from demonstrations! Particularly proud of our codebase, too: written to be clear, concise, and...
Article 9
How can we train data-efficient robots that can respond to open-ended queries like “warm up my lunch” or “find a blue book”? Introducing CLIP-Field, a semantic neural field trained w/ NO human labels...
Article 8
Why spend ⏰/💸 collecting targeted expert demos or labelling your datasets when robots learn this well *w/o* any of them? We trained a 🤖 fully offline on only 4.5 hrs of uncurated demos & extract...
Article 7
I'll be at NeurIPS presenting Behavior Transformers -- find us at Hall J #110 on Tuesday morning at the very first session! Feel free to hit me up on DM/email if you want to grab ☕ and chat about robot...
Article 6
To recap, Behavior Transformer (BeT) is a new architecture for behavior cloning that can model task-agnostic, multi-modal play data, capture its underlying modes, and solve tasks through unconditional...
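As a rough illustration of the mode-capturing idea (a sketch, not the paper's implementation), a BeT-style action head can be mimicked with a toy k-means over continuous actions: each action is encoded as a discrete bin label plus a continuous residual offset, so a categorical output can represent several distinct behavior modes at once. All function names below are hypothetical.

```python
import numpy as np

def fit_action_bins(actions, k=8, iters=20, seed=0):
    """Toy k-means over a dataset of continuous actions (shape [N, A]).
    Each resulting center is one discrete 'mode' of the behavior data."""
    rng = np.random.default_rng(seed)
    centers = actions[rng.choice(len(actions), size=k, replace=False)]
    for _ in range(iters):
        # Assign every action to its nearest center, then recompute centers.
        dists = np.linalg.norm(actions[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = actions[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def encode_action(a, centers):
    """Split one action into (bin index, residual offset) -- the two
    targets a BeT-style model predicts: a categorical mode + a correction."""
    j = int(np.linalg.norm(centers - a, axis=-1).argmin())
    return j, a - centers[j]
```

Decoding is just `centers[j] + offset`, which recovers the original action exactly; the discrete bin is what lets the model keep multiple modes separate instead of averaging them.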
Article 5
Since then, we've also developed Conditional-BeT, a way to train goal-conditioned BeT from fully uncurated data. C-BeT makes sense of "play" style robot demos w/ no labels and no RL to extract...
Article 4
At the core of CLIP-Field lies a neural field that maps real world coordinates to the semantic representation spaces underlying pretrained models like CLIP and Sentence-BERT. This mapping enables our...
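A minimal sketch of that mapping, assuming a toy MLP in place of the real architecture and a random vector in place of an actual CLIP / Sentence-BERT text embedding: the field maps 3D world coordinates to embedding vectors, and an open-ended query scores candidate points by cosine similarity against the text embedding.

```python
import numpy as np

class ToySemanticField:
    """Hypothetical stand-in for a semantic neural field:
    3D coordinate -> D-dim embedding, trained (in the real system)
    to match CLIP / Sentence-BERT targets at that location."""

    def __init__(self, dim=8, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0, 0.1, (3, hidden))
        self.w2 = rng.normal(0, 0.1, (hidden, dim))

    def embed(self, xyz):
        # Tiny 2-layer MLP: coordinates in, embeddings out.
        h = np.tanh(np.asarray(xyz) @ self.w1)
        return h @ self.w2

    def query(self, points, text_emb):
        # Rank candidate 3D points by cosine similarity to a text embedding.
        e = self.embed(points)
        sim = (e @ text_emb) / (
            np.linalg.norm(e, axis=-1) * np.linalg.norm(text_emb) + 1e-8
        )
        return int(sim.argmax())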
Article 3
For real-world experiments, we collect RGB-D data using an iPhone 13 Pro and pre-process it with open-vocabulary detection/segmentation models like Detic and LSeg. We then convert the data to world coordinates...
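The depth-to-world-coordinate step is standard pinhole back-projection; a minimal sketch with hypothetical camera intrinsics (not the paper's exact pipeline): unproject a depth pixel into the camera frame, then apply the camera's 4x4 world pose.

```python
import numpy as np

def pixel_to_world(u, v, depth, fx, fy, cx, cy, T_world_cam):
    """Back-project one depth pixel (u, v, depth in meters) to a world point.
    fx, fy, cx, cy are pinhole intrinsics; T_world_cam is a 4x4 camera pose."""
    # Pinhole unprojection into the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    p_cam = np.array([x, y, depth, 1.0])  # homogeneous coordinates
    # Rigid transform into the world frame.
    return (T_world_cam @ p_cam)[:3]
```

With an identity pose, the principal-point pixel maps straight down the optical axis to `(0, 0, depth)`.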
Article 2
We can train a CLIP-Field from scratch in under an hour, including automated labeling, thanks to advances in the NeRF literature such as Instant-NGP. Our trained model can then be used on a robot to find...
Article 1
Thanks to my advisors and collaborators @cpaxton, @lerrel, @soumith, and Arthur Szlam, and finally Meta AI for an amazing internship!
Paper: http://arxiv.org/abs/2210.05663
More video/demos: ...
Article 0
#Introduction I'm Mahi, a third-year PhD student at NYU and visiting researcher at FAIR, working at the intersection of #robotics and #machinelearning! Since CLIP-Fields recently received an outstanding paper award at...