English edition, high-resolution scan, with bookmarks.

Contents

Preface xvii
Acknowledgments xix

I Basics 1
1 Introduction 3 1.1 Uncertainty in Robotics 3 1.2 Probabilistic Robotics 4 1.3 Implications 9 1.4 Road Map 10 1.5 Teaching Probabilistic Robotics 11 1.6 Bibliographical Remarks 11
2 Recursive State Estimation 13 2.1 Introduction 13 2.2 Basic Concepts in Probability 14 2.3 Robot Environment Interaction 19 2.3.1 State 20 2.3.2 Environment Interaction 22 2.3.3 Probabilistic Generative Laws 24 2.3.4 Belief Distributions 25 2.4 Bayes Filters 26 2.4.1 The Bayes Filter Algorithm 26 2.4.2 Example 28 2.4.3 Mathematical Derivation of the Bayes Filter 31 2.4.4 The Markov Assumption 33 2.5 Representation and Computation 34 2.6 Summary 35 2.7 Bibliographical Remarks 36 2.8 Exercises 36
3 Gaussian Filters 39 3.1 Introduction 39 3.2 The Kalman Filter 40 3.2.1 Linear Gaussian Systems 40 3.2.2 The Kalman Filter Algorithm 43 3.2.3 Illustration 44 3.2.4 Mathematical Derivation of the KF 45 3.3 The Extended Kalman Filter 54 3.3.1 Why Linearize? 54 3.3.2 Linearization Via Taylor Expansion 56 3.3.3 The EKF Algorithm 59 3.3.4 Mathematical Derivation of the EKF 59 3.3.5 Practical Considerations 61 3.4 The Unscented Kalman Filter 65 3.4.1 Linearization Via the Unscented Transform 65 3.4.2 The UKF Algorithm 67 3.5 The Information Filter 71 3.5.1 Canonical Parameterization 71 3.5.2 The Information Filter Algorithm 73 3.5.3 Mathematical Derivation of the Information Filter 74 3.5.4 The Extended Information Filter Algorithm 75 3.5.5 Mathematical Derivation of the Extended Information Filter 76 3.5.6 Practical Considerations 77 3.6 Summary 79 3.7 Bibliographical Remarks 81 3.8 Exercises 81
4 Nonparametric Filters 85 4.1 The Histogram Filter 86 4.1.1 The Discrete Bayes Filter Algorithm 86 4.1.2 Continuous State 87 4.1.3 Mathematical Derivation of the Histogram Approximation 89 4.1.4 Decomposition Techniques 92 4.2 Binary Bayes Filters with Static State 94 4.3 The Particle Filter 96 4.3.1 Basic Algorithm 96 4.3.2 Importance Sampling 100 4.3.3 Mathematical Derivation of the PF 103 4.3.4 Practical Considerations and Properties of Particle Filters 104 4.4 Summary 113 4.5 Bibliographical Remarks 114 4.6 Exercises 115
5 Robot Motion 117 5.1 Introduction 117 5.2 Preliminaries 118 5.2.1 Kinematic Configuration 118 5.2.2 Probabilistic Kinematics 119 5.3 Velocity Motion Model 121 5.3.1 Closed Form Calculation 121 5.3.2 Sampling Algorithm 122 5.3.3 Mathematical Derivation of the Velocity Motion Model 125 5.4 Odometry Motion Model 132 5.4.1 Closed Form Calculation 133 5.4.2 Sampling Algorithm 137 5.4.3 Mathematical Derivation of the Odometry Motion Model 137 5.5 Motion and Maps 140 5.6 Summary 143 5.7 Bibliographical Remarks 145 5.8 Exercises 145
6 Robot Perception 149 6.1 Introduction 149 6.2 Maps 152 6.3 Beam Models of Range Finders 153 6.3.1 The Basic Measurement Algorithm 153 6.3.2 Adjusting the Intrinsic Model Parameters 158 6.3.3 Mathematical Derivation of the Beam Model 162 6.3.4 Practical Considerations 167 6.3.5 Limitations of the Beam Model 168 6.4 Likelihood Fields for Range Finders 169 6.4.1 Basic Algorithm 169 6.4.2 Extensions 172 6.5 Correlation-Based Measurement Models 174 6.6 Feature-Based Measurement Models 176 6.6.1 Feature Extraction 176 6.6.2 Landmark Measurements 177 6.6.3 Sensor Model with Known Correspondence 178 6.6.4 Sampling Poses 179 6.6.5 Further Considerations 180 6.7 Practical Considerations 182 6.8 Summary 183 6.9 Bibliographical Remarks 184 6.10 Exercises 185

II Localization 189
7 Mobile Robot Localization: Markov and Gaussian 191 7.1 A Taxonomy of Localization Problems 193 7.2 Markov Localization 197 7.3 Illustration of Markov Localization 200 7.4 EKF Localization 201 7.4.1 Illustration 201 7.4.2 The EKF Localization Algorithm 203 7.4.3 Mathematical Derivation of EKF Localization 205 7.4.4 Physical Implementation 210 7.5 Estimating Correspondences 215 7.5.1 EKF Localization with Unknown Correspondences 215 7.5.2 Mathematical Derivation of the ML Data Association 216 7.6 Multi-Hypothesis Tracking 218 7.7 UKF Localization 220 7.7.1 Mathematical Derivation of UKF Localization 220 7.7.2 Illustration 223 7.8 Practical Considerations 229 7.9 Summary 232 7.10 Bibliographical Remarks 233 7.11 Exercises 234
8 Mobile Robot Localization: Grid And Monte Carlo 237 8.1 Introduction 237 8.2 Grid Localization 238 8.2.1 Basic Algorithm 238 8.2.2 Grid Resolutions 239 8.2.3 Computational Considerations 243 8.2.4 Illustration 245 8.3 Monte Carlo Localization 250 8.3.1 Illustration 250 8.3.2 The MCL Algorithm 252 8.3.3 Physical Implementations 253 8.3.4 Properties of MCL 253 8.3.5 Random Particle MCL: Recovery from Failures 256 8.3.6 Modifying the Proposal Distribution 261 8.3.7 KLD-Sampling: Adapting the Size of Sample Sets 263 8.4 Localization in Dynamic Environments 267 8.5 Practical Considerations 273 8.6 Summary 274 8.7 Bibliographical Remarks 275 8.8 Exercises 276

III Mapping 279
9 Occupancy Grid Mapping 281 9.1 Introduction 281 9.2 The Occupancy Grid Mapping Algorithm 284 9.2.1 Multi-Sensor Fusion 293 9.3 Learning Inverse Measurement Models 294 9.3.1 Inverting the Measurement Model 294 9.3.2 Sampling from the Forward Model 295 9.3.3 The Error Function 296 9.3.4 Examples and Further Considerations 298 9.4 Maximum A Posteriori Occupancy Mapping 299 9.4.1 The Case for Maintaining Dependencies 299 9.4.2 Occupancy Grid Mapping with Forward Models 301 9.5 Summary 304 9.6 Bibliographical Remarks 305 9.7 Exercises 307
10 Simultaneous Localization and Mapping 309 10.1 Introduction 309 10.2 SLAM with Extended Kalman Filters 312 10.2.1 Setup and Assumptions 312 10.2.2 SLAM with Known Correspondence 313 10.2.3 Mathematical Derivation of EKF SLAM 317 10.3 EKF SLAM with Unknown Correspondences 323 10.3.1 The General EKF SLAM Algorithm 323 10.3.2 Examples 324 10.3.3 Feature Selection and Map Management 328 10.4 Summary 330 10.5 Bibliographical Remarks 332 10.6 Exercises 334
11 The GraphSLAM Algorithm 337 11.1 Introduction 337 11.2 Intuitive Description 340 11.2.1 Building Up the Graph 340 11.2.2 Inference 343 11.3 The GraphSLAM Algorithm 346 11.4 Mathematical Derivation of GraphSLAM 353 11.4.1 The Full SLAM Posterior 353 11.4.2 The Negative Log Posterior 354 11.4.3 Taylor Expansion 355 11.4.4 Constructing the Information Form 357 11.4.5 Reducing the Information Form 360 11.4.6 Recovering the Path and the Map 361 11.5 Data Association in GraphSLAM 362 11.5.1 The GraphSLAM Algorithm with Unknown Correspondence 363 11.5.2 Mathematical Derivation of the Correspondence Test 366 11.6 Efficiency Consideration 368 11.7 Empirical Implementation 370 11.8 Alternative Optimization Techniques 376 11.9 Summary 379 11.10 Bibliographical Remarks 381 11.11 Exercises 382
12 The Sparse Extended Information Filter 385 12.1 Introduction 385 12.2 Intuitive Description 388 12.3 The SEIF SLAM Algorithm 391 12.4 Mathematical Derivation of the SEIF 395 12.4.1 Motion Update 395 12.4.2 Measurement Updates 398 12.5 Sparsification 398 12.5.1 General Idea 398 12.5.2 Sparsification in SEIFs 400 12.5.3 Mathematical Derivation of the Sparsification 401 12.6 Amortized Approximate Map Recovery 402 12.7 How Sparse Should SEIFs Be? 405 12.8 Incremental Data Association 409 12.8.1 Computing Incremental Data Association Probabilities 409 12.8.2 Practical Considerations 411 12.9 Branch-and-Bound Data Association 415 12.9.1 Recursive Search 416 12.9.2 Computing Arbitrary Data Association Probabilities 416 12.9.3 Equivalence Constraints 419 12.10 Practical Considerations 420 12.11 Multi-Robot SLAM 424 12.11.1 Integrating Maps 424 12.11.2 Mathematical Derivation of Map Integration 427 12.11.3 Establishing Correspondence 429 12.11.4 Example 429 12.12 Summary 432 12.13 Bibliographical Remarks 434 12.14 Exercises 435
13 The FastSLAM Algorithm 437 13.1 The Basic Algorithm 439 13.2 Factoring the SLAM Posterior 439 13.2.1 Mathematical Derivation of the Factored SLAM Posterior 442 13.3 FastSLAM with Known Data Association 444 13.4 Improving the Proposal Distribution 451 13.4.1 Extending the Path Posterior by Sampling a New Pose 451 13.4.2 Updating the Observed Feature Estimate 454 13.4.3 Calculating Importance Factors 455 13.5 Unknown Data Association 457 13.6 Map Management 459 13.7 The FastSLAM Algorithms 460 13.8 Efficient Implementation 460 13.9 FastSLAM for Feature-Based Maps 468 13.9.1 Empirical Insights 468 13.9.2 Loop Closure 471 13.10 Grid-based FastSLAM 474 13.10.1 The Algorithm 474 13.10.2 Empirical Insights 475 13.11 Summary 479 13.12 Bibliographical Remarks 481 13.13 Exercises 482

IV Planning and Control 485
14 Markov Decision Processes 487 14.1 Motivation 487 14.2 Uncertainty in Action Selection 490 14.3 Value Iteration 495 14.3.1 Goals and Payoff 495 14.3.2 Finding Optimal Control Policies for the Fully Observable Case 499 14.3.3 Computing the Value Function 501 14.4 Application to Robot Control 503 14.5 Summary 507 14.6 Bibliographical Remarks 509 14.7 Exercises 510
15 Partially Observable Markov Decision Processes 513 15.1 Motivation 513 15.2 An Illustrative Example 515 15.2.1 Setup 515 15.2.2 Control Choice 516 15.2.3 Sensing 519 15.2.4 Prediction 523 15.2.5 Deep Horizons and Pruning 526 15.3 The Finite World POMDP Algorithm 527 15.4 Mathematical Derivation of POMDPs 531 15.4.1 Value Iteration in Belief Space 531 15.4.2 Value Function Representation 532 15.4.3 Calculating the Value Function 533 15.5 Practical Considerations 536 15.6 Summary 541 15.7 Bibliographical Remarks 542 15.8 Exercises 544
16 Approximate POMDP Techniques 547 16.1 Motivation 547 16.2 QMDPs 549 16.3 Augmented Markov Decision Processes 550 16.3.1 The Augmented State Space 550 16.3.2 The AMDP Algorithm 551 16.3.3 Mathematical Derivation of AMDPs 553 16.3.4 Application to Mobile Robot Navigation 556 16.4 Monte Carlo POMDPs 559 16.4.1 Using Particle Sets 559 16.4.2 The MC-POMDP Algorithm 559 16.4.3 Mathematical Derivation of MC-POMDPs 562 16.4.4 Practical Considerations 563 16.5 Summary 565 16.6 Bibliographical Remarks 566 16.7 Exercises 566
17 Exploration 569 17.1 Introduction 569 17.2 Basic Exploration Algorithms 571 17.2.1 Information Gain 571 17.2.2 Greedy Techniques 572 17.2.3 Monte Carlo Exploration 573 17.2.4 Multi-Step Techniques 575 17.3 Active Localization 575 17.4 Exploration for Learning Occupancy Grid Maps 580 17.4.1 Computing Information Gain 580 17.4.2 Propagating Gain 585 17.4.3 Extension to Multi-Robot Systems 587 17.5 Exploration for SLAM 593 17.5.1 Entropy Decomposition in SLAM 593 17.5.2 Exploration in FastSLAM 594 17.5.3 Empirical Characterization 598 17.6 Summary 600 17.7 Bibliographical Remarks 602 17.8 Exercises 604

Bibliography 607
Index 639
2022-07-02 11:56:42 8.87MB Robotics Localization Mapping Path Planning
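Most of the book builds on the Bayes filter recursion listed above (Section 2.4.1): a prediction step with the motion model followed by a correction step with the measurement model. As a minimal sketch of that recursion, here is a hedged Python example over a tiny made-up 1-D grid world; the transition and measurement models are illustrative placeholders, not the book's examples.

```python
# Minimal sketch of one discrete Bayes filter step (cf. Section 2.4.1).
# The grid world, transition model, and sensor model are made-up placeholders.

def bayes_filter_step(belief, control, measurement, transition, likelihood):
    """One prediction + correction step over a discrete state space."""
    n = len(belief)
    # Prediction: bel_bar(x) = sum_x' p(x | u, x') * bel(x')
    bel_bar = [sum(transition(x, control, xp) * belief[xp] for xp in range(n))
               for x in range(n)]
    # Correction: bel(x) is proportional to p(z | x) * bel_bar(x)
    unnorm = [likelihood(measurement, x) * bel_bar[x] for x in range(n)]
    eta = sum(unnorm)
    return [p / eta for p in unnorm]

# Hypothetical models: the robot moves right by u cells with some slip,
# and the sensor reports whether the current cell contains a "door".
def transition(x, u, xp):
    return 0.8 if x == (xp + u) % 5 else 0.05

doors = {1, 3}
def likelihood(z, x):
    return 0.9 if (x in doors) == z else 0.1

belief = [0.2] * 5                      # uniform prior over 5 cells
belief = bayes_filter_step(belief, 1, True, transition, likelihood)
print(belief)
```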
Paper abstract: Meta-learning acquires a prior from previous tasks and experience so that new tasks can be learned from only a small amount of data. A key challenge in few-shot learning, however, is task ambiguity: even when a strong prior can be meta-learned from a large number of prior tasks, the small dataset for a new task can be too ambiguous to determine a single model (e.g., a classifier) for that task. This paper proposes a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution. The approach extends model-agnostic meta-learning, which adapts to new tasks via gradient descent, by incorporating a parameter distribution trained with a variational lower bound. At meta-test time, the algorithm adapts through a simple procedure of injecting noise into gradient descent; at meta-training time, the model is trained so that this stochastic adaptation procedure produces samples from an approximate model posterior. Experimental results show that the method can sample plausible classifiers and regressors in ambiguous few-shot learning problems.
2022-06-13 21:51:35 2.89MB Meta-Learning Probabilistic MAML
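To make the meta-test-time idea in the abstract concrete, here is a hedged NumPy sketch: starting from a meta-learned parameter vector, take a few gradient steps on the new task's support set while injecting Gaussian noise, so repeated adaptation yields different plausible task-specific models. The toy linear-regression task, learning rate, and noise scale are illustrative assumptions; the paper's variational meta-training objective and its particular noise parameterization are not shown.

```python
import numpy as np

# Hypothetical toy: adapt a "meta-learned" linear regressor theta to a new task
# from K support examples, drawing several posterior-like samples by injecting
# Gaussian noise into each gradient step (meta-test adaptation only).
def adapt_with_noise(theta_prior, x_support, y_support, lr=0.1,
                     noise_std=0.05, num_steps=5, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    theta = theta_prior.copy()
    for _ in range(num_steps):
        pred = x_support @ theta
        grad = 2.0 * x_support.T @ (pred - y_support) / len(y_support)
        noise = rng.normal(0.0, noise_std, size=theta.shape)
        theta = theta - lr * (grad + noise)   # noisy gradient-descent adaptation
    return theta

# Sample several plausible task-specific models from an ambiguous support set.
rng = np.random.default_rng(1)
x_support = rng.normal(size=(3, 2))           # K = 3 examples, 2 features
y_support = x_support @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=3)
theta_prior = np.zeros(2)                     # stand-in for the meta-learned prior
samples = [adapt_with_noise(theta_prior, x_support, y_support,
                            rng=np.random.default_rng(s)) for s in range(5)]
print(samples)
```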
I also searched for this for a long time: the English edition of "Probabilistic Programming and Bayesian Methods for Hackers", shared with everyone for 2 points, http://www.cnblogs.com/hxsyl/
2022-06-07 10:44:21 7.61MB Bayesian Probabilistic Programming
Probabilistic Machine Learning (Duke STA561)
2022-05-22 16:11:52 884KB Machine Learning Duke STA561
Learning Probabilistic Graphical Models in R (English edition)
2022-04-29 23:03:10 3.19MB Probabilistic Graphical Models
The author writes very well, the details are clear and easy to follow, and the book is authoritative. A very good book!
2022-04-28 15:54:04 25.69MB Machine Learning Bayesian Probabilistic
It seems attachments previously could not exceed a few MB; files up to 20 MB can now be uploaded. Since some users reported that the file was incomplete or could not be unzipped, I am re-uploading it here.
2022-04-11 20:22:58 13.29MB The Probabilistic Method
MTA: multi-touch attribution. Find out which channels contribute most to user conversion.

Models

The package contains implementations of the following multi-touch attribution models: Shapley, Markov, Shao and Li's so-called simple probabilistic model, Shao and Li's bagged logistic regression, and additive hazard (survival). In addition, several popular heuristic "models" are included, in particular: first touch, linear, last touch, time decay, and position based. A rough sketch of the heuristic rules is shown below.

Included data

The package ships with the same test data set as the R package it references: 10,000 rows containing customer journeys across 12 channels: alpha, beta, delta, epsilon, eta, gamma, iota, kappa, lambda, mi, theta, and zeta. These are conversions aggregated by path. Suppose that for a path (customer journey) a > b > c, total_conversions equals 2 and total_null equals 5. This means we recorded 2 customer journeys a > b > c > (conversion) and 5 customer journeys a > b >
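As an illustration of how the heuristic rules named above operate on path-level aggregates, here is a hedged Python sketch of first-touch, last-touch, and linear attribution computed from rows like the bundled data set (a channel path plus its total_conversions count). The row layout and function name are assumptions for illustration, not the package's actual API.

```python
from collections import defaultdict

# Toy path-level aggregates: each row is a channel path and its conversion count.
rows = [
    {"path": "a > b > c", "total_conversions": 2},
    {"path": "b > c", "total_conversions": 1},
]

def attribute(rows, rule):
    """Assign conversion credit to channels under a simple heuristic rule."""
    credit = defaultdict(float)
    for row in rows:
        channels = [c.strip() for c in row["path"].split(">")]
        conv = row["total_conversions"]
        if rule == "first_touch":
            credit[channels[0]] += conv          # all credit to the first channel
        elif rule == "last_touch":
            credit[channels[-1]] += conv         # all credit to the last channel
        elif rule == "linear":
            for c in channels:
                credit[c] += conv / len(channels)  # credit split evenly along the path
    return dict(credit)

print(attribute(rows, "linear"))   # roughly {'a': 0.67, 'b': 1.17, 'c': 1.17}
```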
Machine Learning: A Probabilistic Perspective (source code)
2022-03-04 22:24:17 47.38MB Machine Learning MATLAB code
count-min-sketch: a count-min sketch implementation in C
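The linked project is written in C; purely as an illustration of what a count-min sketch does (approximate frequency counts that can only overestimate, never undercount), here is a hedged Python sketch. The table dimensions and blake2b-based row hashing are arbitrary choices for the example, not the project's implementation.

```python
import hashlib

# Rough sketch of the count-min sketch idea (illustrative, not the C project).
class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width = width
        self.depth = depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, item, row):
        # Derive one hash per row by salting the item with the row number.
        digest = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._index(item, row)] += count

    def estimate(self, item):
        # Point query: the minimum over rows bounds the inflation from collisions.
        return min(self.table[row][self._index(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for word in ["probabilistic", "robotics", "probabilistic"]:
    cms.add(word)
print(cms.estimate("probabilistic"))   # 2 (estimates never undercount)
```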