Master's project 2008. Master's student: Sergei Anatolyevich Matrunich. Supervisor: L.F. Zimyanin. Simultaneous localization and mapping.



Outline: Introduction; SLAM using the Kalman filter; SLAM using the particle filter; Research: the search problem.

Introduction: SLAM. SLAM: Simultaneous Localization and Mapping. A robot explores an unknown, static environment. Given: the robot's control system and visual sensors. To estimate: the robot's position (localization: where am I?) and a detailed map of the surroundings (mapping: what does my environment look like?). Both data sources are noisy.

Introduction: SLAM. The Markov assumption. [Figure: dynamic Bayesian network with hidden states x_0, x_1, …, x_t, observations o_1, …, o_t, actions a_1, …, a_t, and map m.] State transition: p(x_t | x_{t-1}, a_t). Observation function: p(o_t | x_t, m).

Introduction: SLAM. Method: sequentially estimate the probability distribution p(x_t) and update the map. Starting from the prior p(x_0), compute p(x_1) and p(m_1) (or m_1), …, p(x_{t-1}) and p(m_{t-1}) (or m_{t-1}), then p(x_t) and p(m_t) (or m_t). At each step, the prior distribution on x_t is the one obtained after taking action a_t.

Introduction: SLAM. Methods: 1. Parametric method (Kalman filter): represent the distribution of the robot location x_t (and map m_t) by a normal distribution, and sequentially update μ_t and Σ_t. 2. Sample-based method (particle filter): represent the distribution of the robot location x_t (and map m_t) by a large set of simulated samples, and resample x_t (and m_t) at each time step. Estimating the robot's trajectory is a tracking problem.

Introduction: SLAM. Why is SLAM a hard problem? The robot location and the map are both unknown, so location errors and map errors interact, and small errors accumulate quickly over time steps. The errors come from inaccurate measurement of the actual robot motion (noisy actions) and of the distance to obstacles/landmarks (noisy observations). When the robot closes a physical loop in the environment, serious misalignment errors can occur.

SLAM: Kalman filter. Update equation: p(x_t | o_{1..t}, a_{1..t}) ∝ p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_t) p(x_{t-1} | o_{1..t-1}) dx_{t-1}. Assumptions: the prior p(x_0) is a normal distribution, and the observation function p(o | x) is a normal distribution. Then the posteriors p(x_1), …, p(x_t) are all normally distributed, the mean μ_t and covariance matrix Σ_t can be derived analytically, and μ_t and Σ_t are updated sequentially for each time step t.

SLAM: Kalman filter. Assume a linear-Gaussian model: state transition x_t = A x_{t-1} + B a_t + ε_t with ε_t ~ N(0, R); observation function o_t = C x_t + δ_t with δ_t ~ N(0, Q). Kalman filter: Propagation (motion model): μ̄_t = A μ_{t-1} + B a_t, Σ̄_t = A Σ_{t-1} A^T + R. Update (sensor model): K_t = Σ̄_t C^T (C Σ̄_t C^T + Q)^{-1}, μ_t = μ̄_t + K_t (o_t − C μ̄_t), Σ_t = (I − K_t C) Σ̄_t.
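The propagation and update steps above can be sketched for a scalar state; this is a minimal 1-D illustration with model parameters of my own choosing, not the project's actual implementation:

```python
# 1-D Kalman filter step: propagation (motion model) then update (sensor model).
# Model: x_t = a*x_{t-1} + b*u_t + noise(var_r);  o_t = c*x_t + noise(var_q).

def kalman_step(mu, sigma2, u, o, a=1.0, b=1.0, c=1.0, var_r=0.1, var_q=0.2):
    # Propagation: push the previous belief through the motion model.
    mu_bar = a * mu + b * u
    sigma2_bar = a * a * sigma2 + var_r
    # Update: correct the prediction with the observation via the Kalman gain.
    k = sigma2_bar * c / (c * c * sigma2_bar + var_q)
    mu_new = mu_bar + k * (o - c * mu_bar)
    sigma2_new = (1.0 - k * c) * sigma2_bar
    return mu_new, sigma2_new

# One step: move by u=1.0, then observe o=1.1.
mu, s2 = kalman_step(0.0, 1.0, u=1.0, o=1.1)
```

The posterior mean lands between the motion prediction (1.0) and the observation (1.1), and the variance shrinks below the propagated variance.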

SLAM: Kalman filter. The hidden state for landmark-based SLAM combines localization and mapping: the robot pose plus a map with N landmarks gives a (3+2N)-dimensional Gaussian. The state vector x_t grows as new landmarks are discovered.
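A minimal sketch of that growing state vector, assuming the common layout of pose (x, y, θ) followed by 2-D landmark positions (names are mine):

```python
# EKF-SLAM state layout: [x, y, theta, l1_x, l1_y, ..., lN_x, lN_y],
# i.e. 3 + 2N entries.  Discovering a landmark simply extends the vector.

state = [0.0, 0.0, 0.0]              # robot pose: x, y, theta

def add_landmark(state, lx, ly):
    state.extend([lx, ly])           # append the new landmark's (x, y)
    return state

add_landmark(state, 4.0, 2.5)
n_landmarks = (len(state) - 3) // 2  # recover N from the vector length
```

In a full EKF the covariance matrix Σ_t would be enlarged by two rows and columns at the same time.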

SLAM: particle filter. Update equation: p(x_t | o_{1..t}, a_{1..t}) ∝ p(o_t | x_t) ∫ p(x_t | x_{t-1}, a_t) p(x_{t-1} | o_{1..t-1}) dx_{t-1}. Basic idea: the normal-distribution assumption of the Kalman filter is not necessary. A set of samples approximates the posterior distribution and is used at the next iteration. Each sample maintains its own map, or all samples maintain a single map; the map(s) are updated upon observation, assuming that the robot location is given correctly.

SLAM: particle filter. Assume it is difficult to sample directly from the target distribution p(x_t | o_{1..t}), but sampling from another distribution q(x_t) is easy. We sample x_t^i from q, with a normalized importance weight for each x_t^i of w_t^i ∝ p(x_t^i) / q(x_t^i). The weighted sample set {(x_t^i, w_t^i)} (the particles) is an approximation of p(x_t | o_{1..t}). Particle filter: resample x_t from {(x_t^i, w_t^i)}, with replacement, to get a sample set with uniform weights.
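Importance sampling as described above can be sketched with two concrete densities; the target N(1, 1) and proposal N(0, 2) here are illustrative choices only:

```python
import math
import random

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_estimate(n=20000, seed=0):
    # Target p = N(1, 1) is hard to sample (pretend); proposal q = N(0, 2) is easy.
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 2.0) for _ in range(n)]
    # Weights w_i = p(x_i) / q(x_i), normalized below, correct the mismatch.
    ws = [normal_pdf(x, 1.0, 1.0) / normal_pdf(x, 0.0, 2.0) for x in xs]
    total = sum(ws)
    # Weighted average approximates E_p[x] = 1.
    return sum(w * x for w, x in zip(ws, xs)) / total
```

With 20,000 particles the weighted mean comes out close to the target mean of 1.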

SLAM: particle filter. Particle filter (cont'd):

SLAM: particle filter. Choose an appropriate proposal distribution: take q(x_t) = p(x_t | x_{t-1}, a_t), the transition probability. Then the importance weight reduces to the observation function: w_t^i ∝ p(o_t | x_t^i).

SLAM: particle filter. Algorithm: let state x_t represent the robot's location. 1. Propagate each state through the state transition probability; this samples a new state given the previous state. 2. Weight each new state according to the observation function. 3. Normalize the weights. 4. Resampling: sample N_s new states, with replacement, according to the normalized weights; these are the updated robot locations.
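The four steps above can be sketched for a 1-D robot; the Gaussian motion and observation models and their noise levels are my own illustrative choices, not the project's:

```python
import math
import random

def pf_step(particles, action, observation, rng, motion_noise=0.1, obs_noise=0.5):
    # 1. Propagate each particle through the state transition.
    moved = [x + action + rng.gauss(0.0, motion_noise) for x in particles]
    # 2. Weight each new state by the observation function (Gaussian likelihood).
    weights = [math.exp(-0.5 * ((observation - x) / obs_noise) ** 2) for x in moved]
    # 3. Normalize the weights.
    total = sum(weights)
    weights = [w / total for w in weights]
    # 4. Resample with replacement -> uniformly weighted particle set.
    return rng.choices(moved, weights=weights, k=len(moved))

rng = random.Random(1)
particles = [rng.uniform(-1.0, 1.0) for _ in range(1000)]   # vague prior
particles = pf_step(particles, action=1.0, observation=1.0, rng=rng)
```

After one step the particle cloud concentrates near the observed position 1.0.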

SLAM: particle filter. State transition probability: the new location is drawn around the expected robot moving distance (and angle) produced by taking action a_t. Observation probability: for each sensor k, compare the measured distance (observation) with the map distance from location x_t to the obstacle.
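A sketch of the observation probability just described, assuming a Gaussian form: per-sensor likelihoods of measured versus map distance are multiplied together (the function name and σ are mine):

```python
import math

def obs_likelihood(measured, map_dists, sigma=0.3):
    # For each sensor k: Gaussian likelihood of the measured distance given
    # the map distance from the hypothesized location x_t to the obstacle.
    return math.prod(
        math.exp(-0.5 * ((m - d) / sigma) ** 2)
        for m, d in zip(measured, map_dists))
```

A perfect match gives likelihood 1.0, and the likelihood falls off as the measurements disagree with the map.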

SLAM: particle filter. Much of the work on SLAM with particle filters focuses on: reducing the cumulative error; FastSLAM (online); organizing the data structures (saving the robot path and map). Maintaining a single map leads to cumulative error; maintaining multiple maps costs memory and computation time. In Parr's paper: an ancestry tree records particle history; each particle has its own map (multiple maps); an observation tree for each grid square (cell) records the map corresponding to each particle; the ancestry tree and observation trees are updated at each iteration; cell occupancy is represented probabilistically.

SLAM: particle filter.

The search problem. The agent doesn't have a map, doesn't know the underlying model, and doesn't know where the target is. The agent has 2 sensors: a camera, which tells the agent which cells are occupied or empty in 4 orientations (a noisy sensor), and an acoustic sensor, which finds the orientation of the target and is effective only within a certain distance. Noisy observations, noisy actions. Assumption:

The search problem. Differences from SLAM: a rough map is enough, an accurate map is not necessary; the objective is to find the target; the robot needs to actively select actions to find the target as soon as possible. Similarity to SLAM: to find the target, the agent needs to build a map and estimate its own location.

The search problem. Model: the environment is represented by a rough grid; each grid square (state) is either occupied or empty, and the agent moves between the empty grid squares. Actions: walk in any one of the 4 directions, or stay; walking can fail with a certain probability. Observations: observe the 4 neighboring grid squares (occupied or empty); an observation can be wrong with a certain probability. States, actions and observations are all discrete.
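The grid model above can be sketched directly; the example grid, failure probabilities, and function names are my own illustration of the stated rules:

```python
import random

GRID = ["#####",
        "#...#",
        "#.#.#",
        "#...#",
        "#####"]          # '#' = occupied, '.' = empty
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(pos, action, rng, p_fail=0.1):
    # Walking fails with probability p_fail; occupied cells block movement.
    if rng.random() < p_fail:
        return pos
    r, c = pos
    dr, dc = MOVES[action]
    nr, nc = r + dr, c + dc
    return (nr, nc) if GRID[nr][nc] == "." else pos

def observe(pos, rng, p_obs_err=0.05):
    # Observe the 4 neighbors; each reading is flipped with prob p_obs_err.
    r, c = pos
    obs = {}
    for name, (dr, dc) in MOVES.items():
        occupied = GRID[r + dr][c + dc] == "#"
        if rng.random() < p_obs_err:
            occupied = not occupied
        obs[name] = occupied
    return obs
```

With the noise probabilities set to zero, movement and observation become deterministic, which makes the model easy to sanity-check.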

The search problem. At each step, the agent updates its location and map. Belief state: a distribution over all states in the current map, expressing which state the agent believes it is currently in. The map: the agent's estimate of the environment; for each state (grid square), a 2-dimensional Dirichlet distribution represents the probability of being empty or occupied. The hyperparameters of the Dirichlet distribution are updated from the current observation and belief state.
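A 2-parameter Dirichlet is a Beta distribution, so each cell can keep two pseudo-counts; the belief-weighting scheme sketched here is an assumption consistent with the slide, not the project's exact rule:

```python
# Per-cell map representation: Dirichlet(alpha_empty, alpha_occ) over
# {empty, occupied}.  An observation adds pseudo-counts, weighted by how
# strongly the agent believes the observation concerns this cell.

def update_cell(alpha_empty, alpha_occ, observed_occupied, belief_weight=1.0):
    if observed_occupied:
        alpha_occ += belief_weight
    else:
        alpha_empty += belief_weight
    p_occupied = alpha_occ / (alpha_empty + alpha_occ)   # posterior mean
    return alpha_empty, alpha_occ, p_occupied
```

Starting from the uniform prior (1, 1), one "occupied" reading moves the posterior mean occupancy to 2/3.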

The search problem. Belief state update: b_t(s) ∝ p(o_t | s) Σ_{s_j ∈ N(s)} p(s | s_j, a_t) b_{t-1}(s_j), where p(s | s_j, a_t) is the probability of successfully moving from s_j to s when taking action a_t (computed from the map representation), and N(s) is the set of neighbor states of s (with s_j the neighbor of s in orientation j).
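The belief update above is a discrete Bayes filter; a minimal sketch, with a two-state transition/observation example of my own for illustration:

```python
def belief_update(belief, trans, obs_lik):
    # belief: {state: prob}; trans[(s_prev, s)] = P(s | s_prev, a);
    # obs_lik[s] = P(o | s) for the current observation o.
    new = {}
    for s in belief:
        # Prediction: sum transitions from every possible previous state.
        predicted = sum(belief[sp] * trans.get((sp, s), 0.0) for sp in belief)
        # Correction: weight by the observation likelihood.
        new[s] = obs_lik[s] * predicted
    z = sum(new.values())                      # normalization constant
    return {s: p / z for s, p in new.items()}
```

With an identity transition and an observation that favors state A 9:1, a uniform belief sharpens to 0.9 on A.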

The search problem. Map update (Dirichlet distribution): assume at step t-1 the hyperparameters for state s are given; at step t they are updated using the posterior probabilities of "empty" and "occupied" after observing o, given that the agent is in a neighbor of state s. If the probability of a wrong observation in any orientation is p_0, then the likelihood of an "occupied" reading is 1−p_0 for an occupied cell and p_0 for an empty one; the "empty" case is computed similarly.

The search problem. Belief and map-representation updates, illustrated for the action sequence a = right, a = up. [Figure: belief state and map representation after each action.]

The search problem. Choosing an action: each state is assigned a reward R(s) according to the following rules: less-explored grid squares have higher reward; try to walk to empty grid squares; consider the neighbors of s with priority.
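The action-selection rules above can be sketched with a simple reward function; the formula and its constants are my own illustrative choices (less-explored cells score higher, likely-occupied cells are penalized):

```python
def reward(visits, p_occupied):
    # Fewer visits -> higher reward; likely-occupied cells are undesirable.
    return 1.0 / (1.0 + visits) - 2.0 * p_occupied

def choose_action(neighbors, visits, p_occ):
    # neighbors: {action: cell}; pick the action whose target cell scores best.
    return max(neighbors, key=lambda a: reward(visits[neighbors[a]], p_occ[neighbors[a]]))
```

An unexplored empty neighbor then wins over a frequently visited one.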

Thank you for your attention