ARPG Research

Introduction

If mobile robots are to become ubiquitous, we must first solve fundamental problems in perception. Before a mobile robot system can act intelligently, it must be given – or acquire – a representation of the environment that is useful for planning and control. Perception comes before action, and the perception problem is one of the most difficult we face.

We study probabilistic perception algorithms and estimation theory that enable long-term autonomous operation of mobile robotic systems, particularly in unknown environments. We have extensive experience with vision-based, real-time localization and mapping systems, and are interested in a fundamental understanding of the sufficient statistics that can be used to represent the state of the world.

An important goal in mobile robotics is the development of perception algorithms that allow for persistent, long-term autonomous operation in unknown situations (over weeks or more). In our effort to achieve long-term autonomy, we have had to solve problems of both metric and semantic estimation. We use real-time, embodied robot systems equipped with a variety of sensors – including lasers, cameras, and inertial sensors – to advance and validate algorithms and knowledge representations that are useful for enabling long-term autonomous operation.

Current projects


Light source estimation

This effort addresses the problem of determining the location, direction, intensity, and color of the illuminants in a given scene. The problem has a broad range of applications in augmented reality, robust robot perception, and general scene understanding. In our research, we model complex light interactions with a custom path tracer, capturing the effects of both direct and indirect illumination. Using a physically-based light model not only improves our estimation of the light sources, but will also play a critical role in future research on surface property estimation and geometry refinement, ultimately leading to more accurate and complete scene reconstruction systems.
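
For intuition, the sketch below strips the problem down to its simplest form: recovering a single distant light from Lambertian shading by linear least squares, assuming unit albedo, known surface normals, and no shadows or indirect illumination. It is an illustration only, not the path-tracer-based model used in the project.

```python
# Minimal illustration (not the project's path-tracer model): recover a single
# distant light from Lambertian shading, assuming unit albedo, known surface
# normals, and no shadows or indirect illumination.
import numpy as np

def estimate_distant_light(normals, intensities):
    """Solve I_i ~= n_i . L for L = intensity * direction (linear least squares)."""
    L, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    intensity = np.linalg.norm(L)
    return L / intensity, intensity

# Synthetic check: shade points with a known light, then recover it.
rng = np.random.default_rng(0)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
true_dir, true_intensity = np.array([0.0, 0.6, 0.8]), 2.5
shading = np.clip(normals @ (true_intensity * true_dir), 0.0, None)
lit = shading > 0                      # keep only points facing the light
direction, intensity = estimate_distant_light(normals[lit], shading[lit])
print(direction, intensity)            # ~[0, 0.6, 0.8], ~2.5
```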

Publications

  • Mike Kasper, Christoffer Heckman. "Multiple Point Light Estimation from Low-Quality 3D Reconstructions". In International Conference on 3D Vision (3DV) 2019.
  • Mike Kasper, Nima Keivan, Gabe Sibley, Christoffer Heckman. "Light Source Estimation in Synthetic Images". In European Conference on Computer Vision, Virtual/Augmented Reality for Visual Artificial Intelligence Workshop 2016.

Funded by

  • Toyota grant 33643/1/ECNS20952N: Robust Perception

Parkour Cars

This project aims to develop high-fidelity, real-time systems for perception, planning and control of agile vehicles in challenging terrain, including jumps and loop-the-loops. The current research is focused on the local planning and control problem. Because of the difficulty of the maneuvers, the planning and control systems must consider the underlying physical model of the vehicle and terrain. This style of simulation-in-the-loop planning enables very accurate prediction and correction of the vehicle state, as well as the ability to learn precise attributes of the underlying physical model.
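
The sketch below shows the simulation-in-the-loop idea in its most basic form: candidate control sequences are rolled through a forward model and the best one is executed. A flat-ground kinematic bicycle model and random shooting stand in here for the full vehicle/terrain physics simulation and planner used in the project; all parameter values are assumed for illustration.

```python
# Minimal sketch of simulation-in-the-loop planning: roll candidate control
# sequences through a forward model and execute the best one. A kinematic
# bicycle model on flat ground stands in for the full physics simulation.
import numpy as np

WHEELBASE, DT, HORIZON = 0.3, 0.05, 40     # meters, seconds, steps (assumed)

def step(state, velocity, steer):
    """Kinematic bicycle model; state = (x, y, heading)."""
    x, y, th = state
    x += velocity * np.cos(th) * DT
    y += velocity * np.sin(th) * DT
    th += velocity / WHEELBASE * np.tan(steer) * DT
    return np.array([x, y, th])

def plan(state, goal, n_samples=256, rng=np.random.default_rng(0)):
    """Random-shooting MPC: sample control sequences, simulate, keep the best."""
    best_cost, best_controls = np.inf, None
    for _ in range(n_samples):
        controls = np.column_stack([rng.uniform(1.0, 4.0, HORIZON),    # speed
                                    rng.uniform(-0.4, 0.4, HORIZON)])  # steering
        s = state
        for v, d in controls:
            s = step(s, v, d)
        cost = np.linalg.norm(s[:2] - goal)
        if cost < best_cost:
            best_cost, best_controls = cost, controls
    return best_controls[0]                # execute the first action, then replan

print(plan(np.zeros(3), goal=np.array([3.0, 1.0])))
```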

Publications

  • Sina Aghli and Christoffer Heckman. "Terrain Aware Model Predictive Controller for Autonomous Ground Vehicles". In Robotics: Science and Systems, Bridging the Gap in Space Robotics Workshop 2017.
  • Christoffer Heckman, Nima Keivan, and Gabe Sibley. "Simulation-in-the-loop for Planning and Model-Predictive Control". In Robotics: Science and Systems, Realistic, Rapid, and Repeatable Robot Simulation Workshop 2015.

Funded by

  • NSF #1646556. CPS: Synergy: Verified Control of Cooperative Autonomous Vehicles
  • DARPA #N65236-16-1-1000. DSO Seedling: Ninja Cars
  • Toyota grant 33643/1/ECNS20952N: Robust Perception

Referring Expressions for Object Localization

Understanding references to objects based on attributes, spatial relationships, and other descriptive language expands the capability of robots to locate unknown objects (zero-shot learning), find objects in cluttered scenes, and communicate uncertainty to human collaborators. We are collecting a new set of annotations, SUNspot, for the SUN RGB-D scene understanding dataset. Unlike other referring expression datasets, SUNspot will focus on graspable objects in interior scenes, accompanied by the depth sensor data and full semantic segmentation from the SUN RGB-D dataset. Using SUNspot, we hope to develop a novel referring expression system that improves object localization for use in human-robot interaction.
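
As a rough illustration of the grounding task (not the learned model we are developing), the sketch below scores candidate detections against a parsed referring expression using hand-written attribute and spatial-relation cues; all names and heuristics are hypothetical.

```python
# Hypothetical sketch of grounding a referring expression: score candidate
# objects by matching the head noun, attributes, and one spatial relation.
# The project's model will be learned from SUNspot, not rule-based like this.
from dataclasses import dataclass

@dataclass
class Candidate:
    label: str            # semantic class, e.g. "mug"
    attributes: set       # e.g. {"red", "small"}
    centroid: tuple       # (x, y, z) from the depth data

def score(candidate, expression, anchors):
    """Score against a parsed expression such as
    {'head': 'mug', 'attributes': {'red'}, 'relation': ('left_of', 'laptop')}."""
    s = 1.0 if candidate.label == expression["head"] else 0.0
    s += len(candidate.attributes & expression.get("attributes", set()))
    relation = expression.get("relation")
    if relation:
        kind, anchor_label = relation
        anchor = anchors.get(anchor_label)
        if anchor and kind == "left_of" and candidate.centroid[0] < anchor.centroid[0]:
            s += 1.0
    return s

mug_a = Candidate("mug", {"red"}, (0.2, 0.5, 1.1))
mug_b = Candidate("mug", {"blue"}, (0.9, 0.4, 1.0))
laptop = Candidate("laptop", set(), (0.6, 0.5, 1.2))
expr = {"head": "mug", "attributes": {"red"}, "relation": ("left_of", "laptop")}
print(max([mug_a, mug_b], key=lambda c: score(c, expr, {"laptop": laptop})))  # mug_a
```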

RAMFIS: Representations of Abstract Meaning for Information Synthesis

Humans can readily extract complex information from many different modalities, including spoken and written expressions and information from images and videos, and synthesize it into a coherent whole. This project aims to support automated synthesis of diverse multimedia information sources. We are proposing a rich, multi-graph Common Semantic Representation (CSR) based on Abstract Meaning Representations (AMRs), embellished with vision and language vector representations and temporal and causal relations between events, and supported by a rich ontology of event and entity types.
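
The sketch below is a hypothetical illustration of what such a representation might look like as a data structure: a multi-graph whose nodes are AMR-style event and entity concepts carrying vision and language embedding vectors, with typed edges for semantic roles and temporal/causal relations. The node names, embedding sizes, and ontology labels are assumed for illustration.

```python
# Hypothetical illustration of a multi-graph CSR: AMR-style event/entity nodes
# carrying modality embeddings, with typed edges for roles and temporal/causal links.
import numpy as np
import networkx as nx

csr = nx.MultiDiGraph()

# Event and entity nodes, each with an ontology type and per-modality embeddings.
csr.add_node("e1", concept="depart-01", ont_type="Movement.Departure",
             text_vec=np.zeros(300), image_vec=np.zeros(2048))
csr.add_node("e2", concept="arrive-01", ont_type="Movement.Arrival",
             text_vec=np.zeros(300), image_vec=np.zeros(2048))
csr.add_node("x1", concept="person", ont_type="PER",
             text_vec=np.zeros(300), image_vec=np.zeros(2048))

# AMR-style role edges plus temporal and causal relations between events.
csr.add_edge("e1", "x1", relation=":ARG0")     # who departed
csr.add_edge("e2", "x1", relation=":ARG1")     # who arrived
csr.add_edge("e1", "e2", relation="before")    # temporal ordering
csr.add_edge("e1", "e2", relation="cause")     # causal link

print(csr.number_of_nodes(), csr.number_of_edges())   # 3 nodes, 4 edges
```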

Funded by

  • DARPA Award: FA8750-18-2-0016. AIDA

Compass

Compass is a simultaneous localization and mapping (SLAM) pipeline with an extensible frontend and an optimization backend built on the Ceres solver.
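
Compass's backend uses Ceres, a C++ nonlinear least-squares library; the sketch below only illustrates the kind of problem such a backend solves, using SciPy with planar poses, odometry factors, and a single loop-closure factor. It is a toy example, not Compass's implementation.

```python
# Toy pose-graph optimization illustrating the nonlinear least-squares structure a
# SLAM backend solves (Compass itself uses the Ceres solver in C++, not SciPy).
import numpy as np
from scipy.optimize import least_squares

def relative_pose(a, b):
    """Pose of b expressed in the frame of a, for planar poses (x, y, theta)."""
    dx, dy, dth = b - a
    c, s = np.cos(a[2]), np.sin(a[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, dth])

# Factors: odometry between consecutive poses plus one loop closure (0 -> 3)
# that disagrees slightly with the accumulated odometry.
factors = [(0, 1, np.array([1.0, 0.0, 0.0])),
           (1, 2, np.array([1.0, 0.0, 0.0])),
           (2, 3, np.array([1.0, 0.0, 0.0])),
           (0, 3, np.array([2.9, 0.1, 0.0]))]

def residuals(flat):
    poses = flat.reshape(-1, 3)
    res = [poses[0]]                       # prior fixing the first pose at the origin
    for i, j, measurement in factors:
        res.append(relative_pose(poses[i], poses[j]) - measurement)
    return np.concatenate(res)

solution = least_squares(residuals, np.zeros(4 * 3))
print(solution.x.reshape(-1, 3))           # optimized poses spread the loop-closure error
```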

Publications

  • Fernando Nobre, Mike Kasper, Christoffer Heckman. "". In IEEE International Conference on Robotics and Automation 2017.
  • Fernando Nobre, Christoffer Heckman. "". In International Symposium on Experimental Robotics 2016.

Funded by

  • DARPA #N65236-16-1-1000. DSO Seedling: Ninja Cars
  • Toyota grant 33643/1/ECNS20952N: Robust Perception

MARBLE: Multi-agent Autonomy with RADAR-Based Localization for Exploration

ARPG is part of team MARBLE, a funded participant in the DARPA Subterranean Challenge. We are providing the autonomy, perception, and low-level planning algorithms for the project's ground vehicles. The project kicked off in September 2018 and is ongoing, with competition events beginning in September 2019.

Funded by

  • DARPA TTO Subterranean Challenge