Channel: Mahi Shafiullah
Browsing all 12 articles

Article 11

The world is changing, and so should our skills! My work on unsupervised, incremental skill learning in evolving environments for RL agents will be published at #ICLR2022. Paper + demo + code @...




Article 10

Check out my latest work: we trained a (mini) GPT for learning diverse, multi-modal robotic behaviors from demonstrations! Particularly proud of our codebase, too: written to be clear, concise, and...


Article 9

How can we train data-efficient robots that can respond to open-ended queries like “warm up my lunch” or “find a blue book”? Introducing CLIP-Field, a semantic neural field trained w/ NO human labels...


Article 8

Why spend ⏰/💸 collecting targeted expert demos or labelling your datasets when robots learn this well *w/o* any of them? We trained a 🤖 fully offline on only 4.5 hrs of uncurated demos & extract...


Article 7

I'll be at NeurIPS presenting Behavior Transformers -- find us at Hall J #110 on Tuesday morning at the very first session! Feel free to hit me up via DM/email if you want to grab ☕ and chat about robot...



Article 6

To recap, Behavior Transformer (BeT) is a new architecture for behavior cloning that can model task-agnostic multi-modal play data, capture their underlying modes, and solve tasks through unconditional...
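
A minimal sketch of that recipe, as I understand it: actions are discretized into k bins with k-means, and a GPT-style trunk predicts a bin plus a continuous offset at every step. The layer sizes, bin count, and decoding below are illustrative stand-ins, not the exact BeT configuration.

```python
# Toy BeT-style sketch: k-means action bins + a causal transformer that
# predicts (bin, per-bin offset) for every step of an observation sequence.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

K_BINS, OBS_DIM, ACT_DIM, D_MODEL = 8, 10, 2, 64

class TinyBeT(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(OBS_DIM, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.bin_head = nn.Linear(D_MODEL, K_BINS)               # which action bin
        self.offset_head = nn.Linear(D_MODEL, K_BINS * ACT_DIM)  # residual per bin

    def forward(self, obs_seq):                                   # (B, T, OBS_DIM)
        T = obs_seq.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.trunk(self.embed(obs_seq), mask=causal)          # (B, T, D_MODEL)
        bins = self.bin_head(h)                                   # (B, T, K_BINS)
        offsets = self.offset_head(h).view(-1, T, K_BINS, ACT_DIM)
        return bins, offsets

# Fit the action bins on demonstration actions (random stand-ins here).
demo_actions = torch.randn(1000, ACT_DIM)
kmeans = KMeans(n_clusters=K_BINS, n_init=10).fit(demo_actions.numpy())
centers = torch.tensor(kmeans.cluster_centers_, dtype=torch.float32)

model = TinyBeT()
obs = torch.randn(4, 16, OBS_DIM)                    # a batch of play sequences
bin_logits, offsets = model(obs)
# Decode an action: pick a bin, then add that bin's offset to its k-means center.
picked = bin_logits.argmax(-1)                       # (B, T)
chosen_offset = torch.gather(
    offsets, 2, picked[..., None, None].expand(-1, -1, 1, ACT_DIM)
).squeeze(2)
actions = centers[picked] + chosen_offset            # (B, T, ACT_DIM)
```

Only the decode path is shown; training pairs a classification-style loss on the bins with a regression loss on the offsets of the correct bin.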


Article 5

Since then, we've also developed Conditional-BeT, a way to train goal-conditioned BeT from fully uncurated data. C-BeT makes sense of "play" style robot demos w/ no labels and no RL to extract...
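
The goal-conditioning idea, sketched under assumptions (the window lengths and relabeling scheme here are mine, not the paper's exact setup): because the data is uncurated play, goals can be hindsight-relabeled as observations that actually occur later in the same trajectory and prepended to the input sequence.

```python
# Sketch of hindsight goal relabeling from uncurated play data: the "goal" for a
# context window is simply a snippet of observations from later in the trajectory.
import torch

def make_goal_conditioned_batch(play_obs, ctx_len=16, goal_len=4):
    """play_obs: (T, obs_dim) observations from one play trajectory."""
    T = play_obs.shape[0]
    start = torch.randint(0, T - ctx_len - goal_len, (1,)).item()
    context = play_obs[start : start + ctx_len]                      # what the robot sees
    goal = play_obs[start + ctx_len : start + ctx_len + goal_len]    # where it ends up
    # The conditioned input is [goal tokens ; context tokens]; the model is only
    # asked to predict actions for the context portion.
    return torch.cat([goal, context], dim=0)

traj = torch.randn(200, 10)          # a fake 200-step play trajectory, obs dim 10
seq = make_goal_conditioned_batch(traj)
print(seq.shape)                     # (goal_len + ctx_len, 10)
```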


Article 4

At the core of CLIP-Field lies a neural field that maps real-world coordinates to the semantic representation spaces underlying pretrained models like CLIP and Sentence-BERT. This mapping enables our...
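
A toy sketch of that mapping, with illustrative sizes and a plain MLP standing in for the much faster encoder the real system uses: the field takes a 3D world coordinate and outputs one vector trained to agree with CLIP visual features and another trained to agree with Sentence-BERT label embeddings at that point. The paper trains with a contrastive objective; a simple cosine loss stands in here.

```python
# Minimal "coordinates -> pretrained embedding spaces" field sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

CLIP_DIM, SBERT_DIM = 512, 384      # illustrative embedding sizes

class SemanticField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_clip = nn.Linear(hidden, CLIP_DIM)    # CLIP image-embedding head
        self.to_sbert = nn.Linear(hidden, SBERT_DIM)  # Sentence-BERT label head

    def forward(self, xyz):                            # (N, 3) world coordinates
        h = self.trunk(xyz)
        return self.to_clip(h), self.to_sbert(h)

field = SemanticField()
points = torch.rand(32, 3)                             # sampled world coordinates
clip_targets = torch.randn(32, CLIP_DIM)               # stand-ins for pretrained features
sbert_targets = torch.randn(32, SBERT_DIM)
pred_clip, pred_sbert = field(points)
# Pull the field's outputs toward the pretrained embeddings observed at each point.
loss = (1 - F.cosine_similarity(pred_clip, clip_targets, dim=-1)).mean() \
     + (1 - F.cosine_similarity(pred_sbert, sbert_targets, dim=-1)).mean()
loss.backward()
```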



Article 3

For real-world experiments, we collect RGB-D data using an iPhone 13 Pro and pre-process them using open-label detection/segmentation models like Detic and LSeg. We then convert the data to world coordinates...
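
The depth-to-world-coordinates step is standard pinhole unprojection followed by a pose transform; here is a sketch with placeholder intrinsics and an identity camera pose:

```python
# Unproject each RGB-D pixel into the camera frame using the intrinsics, then
# move it into the world frame with the camera pose. Values below are made up.
import numpy as np

def unproject_to_world(depth, K, cam_to_world):
    """depth: (H, W) metres, K: (3, 3) intrinsics, cam_to_world: (4, 4) pose."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pixel -> camera-frame point, scaled by depth.
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)  # (H, W, 4)
    # Camera frame -> world frame.
    pts_world = pts_cam.reshape(-1, 4) @ cam_to_world.T
    return pts_world[:, :3].reshape(H, W, 3)

depth = np.full((480, 640), 1.5)                 # fake 1.5 m depth everywhere
K = np.array([[600.0, 0, 320], [0, 600.0, 240], [0, 0, 1]])
pose = np.eye(4)                                  # identity camera pose
world_xyz = unproject_to_world(depth, K, pose)
print(world_xyz.shape)                            # (480, 640, 3)
```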



Article 2

We can train a CLIP-Field from scratch in under an hour, including automated labeling, thanks to advances in the NeRF literature such as instant-NGP. Our trained model can then be used on a robot to find...
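
At query time, the idea (sketched here with stand-in tensors rather than real CLIP text features) is to embed the query, score the map points against the field's predicted embeddings, and send the robot to the best match:

```python
# Open-vocabulary lookup against a trained semantic field: rank candidate map
# points by cosine similarity to the query embedding and return the best one.
import torch
import torch.nn.functional as F

def locate(query_embedding, point_xyz, point_embeddings):
    """point_xyz: (N, 3), point_embeddings: (N, D), query_embedding: (D,)."""
    scores = F.cosine_similarity(point_embeddings, query_embedding[None, :], dim=-1)
    return point_xyz[scores.argmax()], scores.max()

# Stand-in data: 10k map points with 512-d embeddings and a random "query".
pts = torch.rand(10_000, 3)
embs = torch.randn(10_000, 512)
query = torch.randn(512)            # e.g. a CLIP text embedding of "a blue book"
target_xyz, score = locate(query, pts, embs)
print(target_xyz, score)
```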


Article 1

Thanks to my advisors and collaborators @cpaxton, @lerrel, @soumith, and Arthur Szlam, and finally Meta AI for an amazing internship! Paper: http://arxiv.org/abs/2210.05663 More video/demos:...


Article 0

#Introduction I'm Mahi, a third-year PhD student at NYU and visiting researcher at FAIR, working on the intersection of #robotics and #machinelearning! Since CLIP-Fields recently got an outstanding paper award at...
