# Distributed Gaussian Process under Localization Uncertainties

Posted 2013.01.08 15:14

**- All expressions in this article are my own. No copying! -**

**Paper Survey**

**1. D. Culler, Overview of Sensor Networks**

- Importance of the distributed manner

=> In wireless sensor networks, communication usually accounts for the largest share of power consumption.

=> It is common sense that the longer and more often a WSN communicates, the more power is consumed. ... In order to effectively handle physical constraints such as battery capacity, communication range, and limited bandwidth, using a distributed algorithm can be an important, and in some cases the only, solution for implementing a WSN.

**2. D. Estrin, Connecting the Physical World with Pervasive Networks**

- Importance of the distributed manner

=> Energy constraints often play a dominant role in designing small embedded processors. The biggest cause of energy consumption in a WSN is usually wireless communication.

**3. K. M. Lynch, Decentralized Environmental Modeling by Mobile Sensor Networks**

- Distributed manner

=> In this paper, Lynch et al. implemented decentralized Kalman filtering using a PI average consensus estimator and applied this algorithm to environmental modeling.

**4. J. Choi, Distributed learning and cooperative control for multi-agent systems**

- Distributed manner

=> In this paper, Choi et al. assumed each agent in a WSN to be resource-constrained, i.e., to have limited communication range, bandwidth, memory, and computational power, so that distributed learning and control are inevitable. They proposed a distributed control algorithm in which each agent independently estimates an unknown field of interest from noisy measurements, and coordinated multiple agents to discover peaks of the unknown field. An agent can only share data with its neighbors within a limited communication range.

**5. J. Cortes, Distributed Kriged Kalman filter for spatial estimation**

- Distributed manner

=> In this paper, Cortes et al. designed a distributed Kriged Kalman filter for predictive inference of the random field and of its gradient, using the Jacobi over-relaxation (JOR) algorithm and a dynamic average consensus algorithm.

=> Prove JOR separately!!

**6. N. Cressie, Kriging nonstationary data**

- Assumed exact positioning, neglecting localization error

- Gaussian Process regression has been widely and successfully adopted for statistical inference from geostatistical and environmental data.

**7. C. E. Rasmussen, Gaussian processes for machine learning <= the GP book**

- Gaussian Process regression has been widely and successfully adopted for statistical inference from geostatistical and environmental data.

=> Gaussian Process regression, or the kriged Kalman filter, has been successfully applied to various statistical inference tasks on environmental data. [reference for this]

=> Setting appropriate hyperparameters in the kernel function is extremely important. Fortunately, we can obtain optimal hyperparameters by maximum likelihood estimation using a gradient method. [reference for this]

=> __Description of Gaussian processes__

A Gaussian process (GP) can describe a distribution over functions, i.e., the function-space view, and it is completely specified by its mean function and covariance function. As its name implies, a GP assumes a Gaussian distribution between variables and is formally defined as below.

Definition: A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution.

Let us denote the input vector by $x \in \mathbb{R}^n$ and the output (target) by $y \in \mathbb{R}$. The target $y$ may either be continuous for the regression case or discrete for the classification case. We then have a dataset $D$ consisting of $n$ observations, i.e., $D = \{(x_i, y_i) \mid i = 1, \cdots, n\}$. Given this training data, a Gaussian process (GP) makes predictions of the output $y_{\star}$ for a new input vector $x_{\star}$ which is usually not in the training dataset. Throughout this paper, the input $x \in \mathbb{R}^2$ and the output $y \in \mathbb{R}$ indicate a two-dimensional location and a sensor measurement, respectively.

If we denote the mean function of a real process $f(x)$ by $m(x)$ and its covariance function by $k(x, x')$, the GP of $f(x)$ can be written as

$f(x) \sim \mathcal{GP}(m(x), k(x, x'))$

For notational simplicity we will assume a zero-mean GP. Handling a non-zero mean is straightforward: first make $Y = \{y_1, y_2, \cdots, y_n\}$ zero-mean by subtracting $Y_{mean} = mean(Y)$, and then add $Y_{mean}$ back to the output predicted by the zero-mean GP. Several types of covariance functions can be used, and one of the most widely used is the squared exponential (SE) kernel, defined as

$k(x, x') = \sigma_f^2 \exp\left(-\frac{\|x-x'\|^2}{2\sigma_x^2}\right)$,

where $x, x' \in \mathbb{R}^d$. In the SE kernel, $\sigma_f^2$ and $\sigma_x^2$ are referred to as hyperparameters and can be estimated by maximizing the log-likelihood $p(y|x, \theta)$, where $\theta$ denotes the hyperparameters.
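
The SE kernel above can be sketched in a few lines of NumPy. This is my own minimal illustration, not code from the surveyed papers; the function name and the vectorized distance computation are my choices:

```python
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, sigma_x=1.0):
    """Squared exponential kernel: K[i, j] = sigma_f^2 * exp(-||x_i - x_j'||^2 / (2 sigma_x^2))."""
    # Pairwise squared Euclidean distances between the rows of X1 and X2.
    d2 = (np.sum(X1**2, axis=1)[:, None]
          + np.sum(X2**2, axis=1)[None, :]
          - 2.0 * X1 @ X2.T)
    # Guard against tiny negative values from floating-point cancellation.
    return sigma_f**2 * np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma_x**2))
```

Note that the diagonal of `se_kernel(X, X)` is always $\sigma_f^2$, and the matrix is symmetric by construction.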

$\log p(y|x, \theta) = -\frac{1}{2}y^T(K+\sigma_w^2 I)^{-1}y - \frac{1}{2}\log|K+\sigma_w^2 I| - \frac{n}{2}\log 2\pi$

The hyperparameters maximizing the equation above can be obtained using an iterative gradient method.
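
As a sketch of the objective such a gradient method optimizes, the log-likelihood can be evaluated stably with a Cholesky factorization. This is my own minimal implementation, assuming the kernel matrix $K$ has already been computed:

```python
import numpy as np

def log_marginal_likelihood(y, K, sigma_w2):
    """log p(y | x, theta) for a zero-mean GP with i.i.d. Gaussian observation noise."""
    n = y.shape[0]
    Ky = K + sigma_w2 * np.eye(n)          # K + sigma_w^2 * I
    L = np.linalg.cholesky(Ky)             # Ky = L L^T, numerically stable
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # Ky^{-1} y
    return (-0.5 * y @ alpha               # data-fit term
            - np.sum(np.log(np.diag(L)))   # equals 0.5 * log|Ky|
            - 0.5 * n * np.log(2.0 * np.pi))
```

In practice one would feed the negative of this value (and its analytic gradient with respect to $\theta$) to a gradient-based optimizer to estimate the hyperparameters.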

It is more realistic to model the situation in which we do not have access to the observation values $\bar{y}$ themselves, but only to corrupted versions $y$, where $y^{(i)} = \bar{y}^{(i)} + w^{(i)}$ and $w^{(i)}$ is independent and identically distributed (i.i.d.) white Gaussian noise with variance $\sigma_w^2$, i.e., $w \sim N(0, \sigma_w^2)$.
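
Putting the pieces together, zero-mean GP regression with this noise model reduces to standard linear algebra. A minimal self-contained sketch (my own, not code from the surveyed papers):

```python
import numpy as np

def se_k(A, B, sigma_f=1.0, sigma_x=1.0):
    # Squared exponential kernel between the rows of A and B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return sigma_f**2 * np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma_x**2))

def gp_predict(X, y, Xs, sigma_w2):
    """Predictive mean and pointwise variance at test inputs Xs,
    given noisy observations y = f(X) + w with w ~ N(0, sigma_w^2)."""
    K = se_k(X, X) + sigma_w2 * np.eye(X.shape[0])  # noisy training covariance
    Ks = se_k(X, Xs)                                # train-test cross covariance
    mean = Ks.T @ np.linalg.solve(K, y)             # predictive mean
    cov = se_k(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)                       # pointwise predictive variance
```

With a very small noise variance, predicting at the training locations nearly reproduces the observed targets, as expected of an interpolating GP.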

- From here on, this goes directly into the paper!

**8. A. Krause, Near-optimal sensor placements: maximizing information while minimizing communication cost**

- GP has been widely used, and to use it better, near-optimal sensor placements with respect to a mutual information criterion in GP have been proposed.

=> Gaussian Process regression has been widely used in WSNs, and several studies have aimed to aid the performance of GP, such as near-optimal sensor placement maximizing mutual information. [reference for this]

**9. A. Krause, Near-optimal sensor placements: theory, efficient algorithms and empirical studies**

- Assumed exact positioning, neglecting localization error

=> Gaussian Process regression has been widely used in WSNs, and several studies have aimed to aid the performance of GP, such as near-optimal sensor placement maximizing mutual information. [reference for this]

**10. J. Choi, Swarm intelligence for achieving the global maximum using spatio-temporal Gaussian processes**

- Assumed exact positioning, neglecting localization error

=> such as ..., or combining GP with multi-agent control algorithms.

=> However, most of the existing approaches did not consider localization error. [reference for this]

**11. J. Choi, Biologically-inspired navigation strategies for swarm intelligence using spatial Gaussian processes**

- Assumed exact positioning, neglecting localization error

=> such as ..., or combining GP with multi-agent control algorithms.

=> However, most of the existing approaches did not consider localization error. [reference for this]

**12. K. Ho, Alleviating sensor position error in source localization using calibration emitters at inaccurate locations**

=> However, exact localization in mobile sensor networks can hardly be achieved due to their restricted resources. [reference for this]

=> Localization error in mobile sensor networks is inevitable due to their limited resources.

**13. L. Hu, Localization for mobile sensor networks**

=> However, exact localization in mobile sensor networks can hardly be achieved due to their restricted resources. [reference for this]

=> Localization error in mobile sensor networks is inevitable due to their limited resources.

**14. P. Oguz-Ekim, Robust Localization of nodes and time recursive tracking in sensor networks using noisy range measurements**

=> For localizing the positions of sensor nodes in a network, several range-based approaches have been proposed. In this paper, Oguz-Ekim et al. used iterative maximum likelihood estimation of the positions given the measurements.

**15. R. Karlsson, Bayesian surface and underwater navigation**

=> Same as 14, and in this paper, Karlsson et al. used particle filter approaches.

**16. D.Titterton, Strapdown inertial navigation technology**

=> For higher localization precision, a fusion of GPS and an inertial navigation system (INS) can be used. However, due to its high cost and computational burden, it is not likely to be used in mobile sensor networks.

=> Many more references can probably be added here.

**17. M. Mysorewala, Multi-scale adaptive sampling with mobile agents for mapping of forest fires**

=> A similar attempt to GPR under localization uncertainties, i.e., considering both localization error and measurement error.

=> In this paper, Mysorewala et al. used a neural network, an extended Kalman filter (EKF), and greedy search heuristics to implement a distributed multi-scale sampling strategy. In using an EKF, which can handle both localization and measurement uncertainty, this approach can be seen as similar to ours. However, since this paper relies on a parametric model, our non-parametric model, Gaussian process regression considering localization uncertainty, can be more flexible in handling dynamic real-world situations.

**18. A. Girard, Gaussian process priors with uncertain inputs - application to multiple-step ahead time series forecasting**

=>

**19. J. Kocijan, Predictive control with Gaussian process models**

=>

**20. R. Murray-Smith, Adaptive, cautious, predictive control with Gaussian process priors**

=>

**21. M. Deisenroth, Gaussian process dynamic programming**

=>

**22. D. Nash, Using Monte Carlo simulation and Bayesian networks to quantify and demonstrate the impact of fertilizer**

=>

**23. M. Wainwright, Graphical Models, Exponential Families, and Variational Inference**

=>

<Laplace Approximation>

**24. L. Tierney, Accurate approximations for posterior moments and marginal densities**

=>

**25. L. Tierney, Fully exponential Laplace approximations to expectations and variances of non-positive functions**

=>

**26. Y. Miyata, Fully exponential Laplace approximations using asymptotic modes**

=>

**27. Y. Miyata, Laplace approximations to means and variances with asymptotic modes**

=>

**28. Y. Xu, Mobile sensor networks for learning anisotropic Gaussian processes**

=>

<Gaussian Process>

**29. A. J. Smola, Sparse greedy Gaussian process regression**

=> GP Approximation

**30. C. Williams, Using the Nystrom method to speed up kernel machines**

=> GP Approximation

**31. N. Lawrence, Fast sparse Gaussian process methods**

=> GP Approximation

**32. M. Seeger, PAC-Bayesian generalization error bounds and sparse approximation**

=> GP Approximation

**33. V. Tresp, A Bayesian committee machine**

=> GP Approximation

**35. Y. Xu, Mobile sensor network navigation using Gaussian processes with truncated observations**

**36. D. Ehrlich, Applications of NOAA-AVHRR 1 km data for environmental monitoring**

**40. D. Bertsekas, Parallel and distributed computation**

=> Proposed the Jacobi over-relaxation (JOR) algorithm

**41. Y. Xu, Spatial prediction with mobile sensor networks using GP**

=> In this paper, Xu et al.

**42. F.E.Udwadia, Some convergence results related to the JOR iterative method for symmetric, positive-definite matrices**

=> Proved the convergence of JOR
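
For intuition, the JOR iteration for solving $Ax = b$ with a symmetric positive-definite $A$ updates each component using only that component's row of $A$, which is what makes it suitable for per-node distributed computation. A centralized toy sketch (my own, with a hand-picked relaxation factor $\omega$):

```python
import numpy as np

def jor_solve(A, b, omega=0.8, iters=500):
    """Jacobi over-relaxation: x <- x + omega * D^{-1} (b - A x),
    where D is the diagonal of A. Each entry of x could be updated by a
    separate node using only its own row of A and its neighbors' values."""
    d = np.diag(A)
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d
    return x
```

Convergence depends on $\omega$: only a suitable range of relaxation factors yields convergence for symmetric positive-definite matrices, which is exactly the question Udwadia's paper studies.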

**50. M. Jadaliha, Gaussian process regression for sensor networks under localization uncertainty**

=> In this paper, Jadaliha et al. proposed a novel Gaussian process structure which takes into account not only observation noise but also localization noise. Since the proposed predictive mean and variance estimators have no closed-form solution, they suggested two approximation techniques: Laplace approximation and Monte Carlo importance sampling. They also proposed a simple Laplace approximation method, which is a simplified version of the Laplace approximation using $\hat{x}$, obtained by a maximum a posteriori (MAP) estimator, instead of the corrupted $x$. This approximate version can be solved using distributed algorithms, assuming a proper kernel function.
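
To illustrate the Monte Carlo flavor of the second approximation: when a location is uncertain, one can average the exact-location GP predictive mean over samples of the true location drawn from the localization-noise model. This sketch is my own simplification (plain Monte Carlo averaging; the paper's importance-sampling weights and exact estimators are omitted):

```python
import numpy as np

def mc_predictive_mean(pred_mean_fn, x_noisy, sigma_loc, n_samples=2000, seed=0):
    """Approximate E[m(x_true)] where x_true ~ N(x_noisy, sigma_loc^2 * I)
    and pred_mean_fn(x) is the GP predictive mean for an exactly known x."""
    rng = np.random.default_rng(seed)
    # Sample candidate true locations around the noisy measured location.
    samples = x_noisy + sigma_loc * rng.standard_normal((n_samples, x_noisy.shape[0]))
    # Average the exact-location predictive mean over the samples.
    return float(np.mean([pred_mean_fn(s) for s in samples]))
```

For a linear `pred_mean_fn` this average converges to the prediction at the noisy location itself; for a nonlinear predictive mean the two differ, which is precisely why localization uncertainty changes the estimator.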

**51. M. Jadaliha, Gaussian Process Regression Using Laplace Approximations Under Localization Uncertainty**

=>

■
