iGibson 1.0: a Simulation Environment for Interactive Tasks in Large Realistic Scenes [article]

Bokui Shen, Fei Xia, Chengshu Li, Roberto Martín-Martín, Linxi Fan, Guanzhi Wang, Claudia Pérez-D'Arpino, Shyamal Buch, Sanjana Srivastava, Lyne P. Tchapmi, Micael E. Tchapmi, Kent Vainio (+3 others)
2021 arXiv   pre-print
We present iGibson 1.0, a novel simulation environment to develop robotic solutions for interactive tasks in large-scale realistic scenes.  ...  Our environment contains 15 fully interactive home-sized scenes with 108 rooms populated with rigid and articulated objects.  ...  We thank NVIDIA, Google, ONR MURI (N00014-14-1-0671), ONR (1165419-10-TDAUZ), Panasonic (1192707-1-GWMSX), Qualcomm and Samsung for their support.  ... 
arXiv:2012.02924v6 fatcat:rntndokbmrek3iznnjgwbkmp3u

iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks [article]

Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, C. Karen Liu (+4 others)
2021 arXiv   pre-print
We present iGibson 2.0, an open-source simulation environment that supports the simulation of a more diverse set of household tasks through three key innovations.  ...  Third, iGibson 2.0 includes a virtual reality (VR) interface to immerse humans in its scenes to collect demonstrations.  ...  This prevents us from simulating tasks like folding laundry and making beds in large, interactive scenes.  ... 
arXiv:2108.03272v4 fatcat:kopai4k5o5aallgwauteeiezve

BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments [article]

Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto Martín-Martín, Fei Xia, Kent Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, C. Karen Liu, Silvio Savarese, Hyowon Gweon (+2 others)
2021 arXiv   pre-print
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation, spanning a range of everyday household chores such as cleaning, maintenance, and food preparation.  ...  Building such a benchmark poses three fundamental difficulties for each activity: definition (it can differ by time, place, or person), instantiation in a simulator, and evaluation.  ...  Acknowledgments We would like to thank Bokui Shen, Xi Jia Zhou, and Jim Fan for comments, ideas, and support in data collection.  ... 
arXiv:2108.03332v1 fatcat:jviakd3qwvabderhycrilezxoe

A Survey of Embodied AI: From Simulators to Research Tasks [article]

Jiafei Duan, Samson Yu, Hui Li Tan, Hongyuan Zhu, Cheston Tan
2022 arXiv   pre-print
Consequently, there has been substantial growth in the demand for embodied AI simulators to support various embodied AI research tasks.  ...  Instead, they learn through interactions with their environments from an egocentric perception similar to humans.  ...  Acknowledgments This research is supported by the Agency for Science, Technology and Research (A*STAR), Singapore under its AME Programmatic Funding Scheme (Award #A18A2b0046) and the National Research  ... 
arXiv:2103.04918v8 fatcat:2zu4klcchbhnvmjej5ry3emu4u

Learning Synthetic to Real Transfer for Localization and Navigational Tasks [article]

Maxime Pietrantoni, Boris Chidlovskii, Tomi Silander
2020 arXiv   pre-print
To design the navigation pipeline, four main challenges arise: environment, localization, navigation, and planning. The iGibson simulator is picked for its photo-realistic textures and physics engine.  ...  This work aimed at creating, in simulation, a navigation pipeline whose transfer to the real world could be done with as little effort as possible.  ...  Simulation Environment and agent Four main simulation environments are typically used in autonomous navigation tasks: iGibson, Habitat-sim, Sapien, AI2Thor. iGibson and Habitat-sim use real-world scenes  ... 
arXiv:2011.10274v2 fatcat:f3dq263minc3feka2qshhkivly

BenchBot: Evaluating Robotics Research in Photorealistic 3D Simulation and on Real Robots [article]

Ben Talbot, David Hall, Haoyang Zhang, Suman Raj Bista, Rohan Smith, Feras Dayoub, Niko Sünderhauf
2020 arXiv   pre-print
We introduce BenchBot, a novel software suite for benchmarking the performance of robotics research across both photorealistic 3D simulations and real robot platforms.  ...  BenchBot is publicly available (http://benchbot.org), and we encourage its use in the research community for comprehensively evaluating the simulated and real world performance of novel robotic algorithms  ...  object detection challenge [17] (is fully simulated with realistic light reflections, but for a spaced out "clean environment"), and b) iGibson [12] (built from real-world data with  ... 
arXiv:2008.00635v1 fatcat:6j7idhvrlveqdcwidtpnms6tya

The ThreeDWorld Transport Challenge: A Visually Guided Task-and-Motion Planning Benchmark for Physically Realistic Embodied AI [article]

Chuang Gan, Siyuan Zhou, Jeremy Schwartz, Seth Alter, Abhishek Bhandwaldar, Dan Gutfreund, Daniel L.K. Yamins, James J DiCarlo, Josh McDermott, Antonio Torralba, Joshua B. Tenenbaum
2021 arXiv   pre-print
To complete the task, an embodied agent must plan a sequence of actions to change the state of a large number of objects in the face of realistic physical constraints.  ...  In this challenge, an embodied agent equipped with two 9-DOF articulated arms is spawned randomly in a simulated physical home environment.  ...  A task planner is responsible for reasoning over the large set of states of the environment, and a motion planner computes a path to accomplish the task.  ... 
arXiv:2103.14025v1 fatcat:77fdafsoevcfdidnyqml75hbum

ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation [article]

Chuang Gan, Jeremy Schwartz, Seth Alter, Damian Mrowca, Martin Schrimpf, James Traer, Julian De Freitas, Jonas Kubilius, Abhishek Bhandwaldar, Nick Haber, Megumi Sano, Kuno Kim (+12 others)
2021 arXiv   pre-print
We introduce ThreeDWorld (TDW), a platform for interactive multi-modal physical simulation.  ...  TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.  ...  Using TDW's flexible and general framework, we can train embodied agents to perform tasks in a 3D physical world and collect behavioral data in a simulated environment that mimics the sensory and interactive  ... 
arXiv:2007.04954v2 fatcat:xs77qhhhujchrcflhoqtu5ubtq

Deep Learning for Embodied Vision Navigation: A Survey [article]

Fengda Zhu, Yi Zhu, Vincent CS Lee, Xiaodan Liang, Xiaojun Chang
2021 arXiv   pre-print
The "embodied visual navigation" problem requires an agent to navigate in a 3D environment, relying mainly on its first-person observation.  ...  modeling seen scenarios, understanding cross-modal instructions, and adapting to a new environment, etc.  ...  The rendering scenes of some datasets are shown in Fig. 3 . Embodied Simulators An embodied simulator provides an interface for an agent to interact with the environment.  ... 
arXiv:2108.04097v4 fatcat:46p2p3zlivabbn7dvowkyccufe

Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments [article]

Fei Xia, William B. Shen, Chengshu Li, Priya Kasimbeg, Micael Tchapmi, Alexander Toshev, Li Fei-Fei, Roberto Martín-Martín, Silvio Savarese
2020 arXiv   pre-print
Our benchmark comprises two novel elements: 1) a new experimental setup, the Interactive Gibson Environment, which simulates high-fidelity visuals of indoor scenes and high-fidelity physical dynamics  ...  and even encouraged to accomplish a task.  ...  Tchapmi for helpful discussions. We thank Google for funding. Fei Xia would like to thank Stanford Graduate Fellowship for the support.  ... 
arXiv:1910.14442v2 fatcat:uaggamqrazfnpbvsd7pfxbk2ce

A Survey on Human-aware Robot Navigation [article]

Ronja Möller, Antonino Furnari, Sebastiano Battiato, Aki Härmä, Giovanni Maria Farinella
2021 arXiv   pre-print
This paper is concerned with the navigation aspect of a socially-compliant robot and provides a survey of existing solutions for the relevant areas of research as well as an outlook on possible future  ...  Given the current growth and innovation in the research communities concerned with the topics of robot navigation, human-robot-interaction and human activity recognition, it seems like this might soon  ...  It contains 30 varied photo-realistic indoor scenes and supports interaction with objects in the scene.  ... 
arXiv:2106.11650v1 fatcat:rhutta3sqrhk3po4hmi77h33qy

ReLMoGen: Leveraging Motion Generation in Reinforcement Learning for Mobile Manipulation [article]

Fei Xia, Chengshu Li, Roberto Martín-Martín, Or Litany, Alexander Toshev, Silvio Savarese
2021 arXiv   pre-print
Our method is benchmarked on a diverse set of seven robotics tasks in photo-realistic simulation environments.  ...  To validate our method, we apply ReLMoGen to two types of tasks: 1) Interactive Navigation tasks, navigation problems where interactions with the environment are required to reach the destination, and  ...  for SGP-R, SGP-D, motion generators, and iGibson simulator in Table A.3, Table A.4, Table A.5 and Table A.6.  ... 
arXiv:2008.07792v2 fatcat:p3xmdsvr6vck5m57y2un3pdpvi

SCOD: Active Object Detection for Embodied Agents using Sensory Commutativity of Action Sequences [article]

Hugo Caselles-Dupré, Michael Garcia-Ortiz, David Filliat
2021 arXiv   pre-print
Our experiments on 3D realistic robotic setups (iGibson) demonstrate the accuracy of SCOD and its generalization to unseen environments and objects.  ...  With SCOD, we aim at providing a novel way of approaching the problem of object discovery in the context of a naive embodied agent. We provide code and a supplementary video.  ...  SCOD on 3D realistic robotic setups by using the Fetch robot in the iGibson interactive simulator.  ... 
arXiv:2107.02069v1 fatcat:rjajnn24vbfitdfug5gpcwbrsi

Learning Human Search Behavior from Egocentric Visual Inputs [article]

Maks Sorokin, Wenhao Yu, Sehoon Ha, C. Karen Liu
2020 arXiv   pre-print
"Looking for things" is a mundane but critical task we repeatedly carry out in our daily life.  ...  We introduce a method to develop a human character capable of searching for a randomly located target object in a detailed 3D scene using its locomotion capability and egocentric vision perception represented  ...  Specifically, we used iGibson, which provides a suite of realistic indoor environments [XSL*20].  ... 
arXiv:2011.03618v1 fatcat:a36hggr7sraydknanltivrlq5q

Core Challenges in Embodied Vision-Language Planning [article]

Jonathan Francis, Nariaki Kitamura, Felix Labelle, Xiaopeng Lu, Ingrid Navarro, Jean Oh
2022 arXiv   pre-print
We propose a taxonomy to unify these tasks and provide an in-depth analysis and comparison of the new and current algorithmic approaches, metrics, simulated environments, as well as the datasets used for  ...  In this survey paper, we discuss Embodied Vision-Language Planning (EVLP) tasks, a family of prominent embodied navigation and manipulation problems that jointly use computer vision and natural language  ...  This work was supported, in part, by a doctoral research fellowship from Bosch Research and by the U.S. Air Force Office of Scientific Research, under award number FA2386-17-1-4660.  ... 
arXiv:2106.13948v4 fatcat:esrtfxpun5ae5kaydjnymf3v6u