Kinect Robot Navigation Contest

http://www.iros2014.org/program/kinect-robot-navigation-contest


IROS 2014, Microsoft, and Adept MobileRobots are proud to announce the 2014 Kinect Autonomous Mobile Robot Navigation Contest, to be held in the Exhibit Hall at IROS on September 18, 2014.


Round 1: Qualification Phase 

The Qualification Phase will start at 12:01 a.m. Pacific Time (PT) on March 21, 2014 and end at 11:59 p.m. PT on May 14, 2014. Entries must be received within this phase to be eligible for potential advancement to Round 2.

 

Round 2: Event Participation Phase 

The Event Participation Phase will take place on-site at the 2014 International Conference on Intelligent Robots and Systems in Chicago, Illinois on September 18 during normal conference hours.



Round 1: Qualification Phase  

During the qualification phase, you must send an email to roboinfo@microsoft.com with the words “IROS 2014 Kinect Navigation Contest Entry” in the subject line. The body of the email should contain your first and last name, your organizational affiliation, and a note that you will be in attendance at IROS and would like to participate in the Contest.

 

Further, within your email you must include a one-page (500 words maximum) description of the mapping and navigation solution you intend to create for the contest, and optional links to a publicly available video (one minute in length or longer) or other publications that demonstrate your ability to implement autonomous navigation. All entries must be presented in the English language.

 

Your email will serve as your entry into Round 1. There is a limit of one (1) email per person. If you are submitting an entry on behalf of your organization, each entry received from your organization must be substantially unique and different. If you send multiple emails, only your most recent email will count as your official entry. We are not responsible for incomplete entries or entries that are not decipherable for any reason.



1. Mapping 

 - We did not use SLAM algorithms for localization; instead we used odometry information obtained from a Pioneer 3DX mobile robot, mainly for the following two reasons. First, there are moving obstacles in the operating environment, and handling dynamic obstacles in the mapping phase is a notoriously difficult task, so we focused our effort on the navigation part. Second, we tried to make the algorithm as light as possible so that it can run in real time on an ordinary laptop.
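The odometry-only approach above amounts to dead reckoning from wheel-encoder readings. A minimal sketch for a differential-drive robot such as the Pioneer 3DX follows; this is an illustration, not the authors' implementation, and the wheel base and step sizes are hypothetical:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Integrate one odometry step for a differential-drive robot.
    d_left/d_right are the distances travelled by each wheel."""
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # Advance along the mid-step heading, then normalize the new heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta + math.pi) % (2 * math.pi) - math.pi
    return x, y, theta

# Straight-line motion: both wheels travel 0.1 m per step for 10 steps.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = update_pose(*pose, 0.1, 0.1, wheel_base=0.38)
print(pose)  # -> approximately (1.0, 0.0, 0.0)
```

Because pure dead reckoning accumulates drift, it only works here because the contest course is short and the waypoints are predefined.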

 

2. Navigation 

 - Our goal is to drive the robot to the desired goal point through predefined waypoints without colliding with obstacles, both static and dynamic, in a dynamic environment.

In our previous work, we focused on avoiding moving pedestrians, so we used the skeleton-tracking API of the Kinect camera to detect and track moving persons. In addition, we used the sonar sensors on the Pioneer 3DX to handle objects that are too close for the Kinect camera to detect. When no obstacles are present in front of the robot, we use a pure pursuit algorithm as our low-level controller to reach the goal point.
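As a minimal illustration of the pure pursuit idea (not the contest code; the lookahead distance and forward speed are hypothetical), the controller steers along an arc toward the first waypoint at least one lookahead away:

```python
import math

def pure_pursuit(x, y, theta, waypoints, lookahead=0.5, v=0.3):
    """Return a (v, omega) command steering toward the first waypoint
    at least `lookahead` metres away.  Gains here are placeholders."""
    goal = waypoints[-1]
    for wx, wy in waypoints:
        if math.hypot(wx - x, wy - y) >= lookahead:
            goal = (wx, wy)
            break
    # Transform the goal point into the robot's body frame.
    dx, dy = goal[0] - x, goal[1] - y
    gx = math.cos(theta) * dx + math.sin(theta) * dy
    gy = -math.sin(theta) * dx + math.cos(theta) * dy
    L = math.hypot(gx, gy)
    # Pure pursuit curvature: kappa = 2 * (lateral offset) / L^2.
    kappa = 2.0 * gy / (L * L) if L > 1e-6 else 0.0
    return v, v * kappa

v, omega = pure_pursuit(0.0, 0.0, 0.0, [(1.0, 0.0), (2.0, 0.0)])
print(v, omega)  # goal straight ahead -> omega == 0
```

The arc-following behaviour makes pure pursuit a natural low-level controller when the path ahead is clear, which is exactly the obstacle-free case described above.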

  In handling dynamic obstacles, predicting the next position of an obstacle is crucial, and we will use an autoregressive Gaussian process motion model (AR-GPMM) to predict future trajectories. AR-GPMM predicts the next position of a moving object from its m most recent positions using an autoregressive Gaussian process. Tracking is typically done with the Kinect skeleton-tracking API, as we assume pedestrians are moving around in our working environment.
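The autoregressive structure can be sketched as plain GP regression where the input is a window of the m most recent positions and the output is the next one. This is only a toy version of the idea, not the published AR-GPMM; the kernel lengthscale, noise level, and trajectory are made up for illustration:

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, Y, x_star, noise=1e-4):
    """Posterior mean of a GP regression at the query point x_star."""
    K = rbf(X, X) + noise * np.eye(len(X))
    return rbf(x_star, X) @ np.linalg.solve(K, Y)

# Build an autoregressive dataset from one observed trajectory:
# input = the m most recent positions (flattened), output = the next one.
m = 3
traj = np.stack([np.linspace(0.0, 2.0, 21),
                 0.1 * np.linspace(0.0, 2.0, 21)], axis=1)
X = np.array([traj[i:i + m].ravel() for i in range(len(traj) - m)])
Y = traj[m:]

# Predict the pedestrian's next position from the latest m observations.
x_star = traj[-m:].ravel()[None, :]
next_pos = gp_predict(X, Y, x_star)[0]
print(next_pos)
```

The same posterior also yields a predictive variance, which is what makes GP models attractive for risk-aware avoidance, though the sketch above only computes the mean.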

 In generating the avoidance motion, we will use an autoregressive Gaussian process motion controller (AR-GPMC). Most existing navigation algorithms are computationally heavy, which makes controlling a real robot in real time difficult. Our proposed AR-GPMC instead learns how to control from such existing motion control algorithms, which can hardly run in real time, using Gaussian process regression.

 The previous results are summarized in our recent paper, "Real-Time Navigation in Crowded Dynamic Environments Using Gaussian Process Motion Control," which was presented at ICRA 2014.

 

3. Video links 

 - Supplementary video for ICRA 2014: http://www.youtube.com/watch?v=wHGymXWdNSY 

 - This video describes our robot platform as well as simulation and actual experimental results. 

 

4. Publications

Sungjoon Choi, Eunwoo Kim, and Songhwai Oh, “Real-Time Navigation in Crowded Dynamic Environments Using Gaussian Process Motion Control,” IEEE International Conference on Robotics and Automation (ICRA), 2014.

Eunwoo Kim, Sungjoon Choi, and Songhwai Oh, “A Robust Autoregressive Gaussian Process Motion Model Using l1-Norm Based Low-Rank Kernel Matrix Approximation,” IEEE International Conference on Intelligent Robots and Systems (IROS), 2014 (to appear).



 

