MDP (Markov Decision Process) MATLAB toolbox — 55 files, 226KB

MDPtoolbox/
    MDPtoolbox/
        mdp_check_square_stochastic.m    2.21KB
        mdp_computePR.m                  2.82KB
        mdp_eval_policy_iterative.m      5.81KB
        AUTHORS                          63B
        COPYING                          1.53KB
        mdp_Q_learning.m                 5.28KB
        mdp_policy_iteration_modified.m  5.54KB
        mdp_example_forest.m             4.62KB
        mdp_silent.m                     1.70KB
        mdp_computePpolicyPRpolicy.m     2.86KB
        mdp_finite_horizon.m             4.07KB
        mdp_eval_policy_matrix.m         3.47KB
        mdp_check.m                      3.94KB
        mdp_value_iteration_bound_iter.m 4.96KB
        mdp_policy_iteration.m           5.41KB
        mdp_span.m                       1.67KB
        mdp_eval_policy_optimality.m     4.10KB
        mdp_relative_value_iteration.m   5.08KB
        mdp_bellman_operator.m           3.44KB
        mdp_value_iterationGS.m          7.26KB
        README                           2.38KB
        mdp_LP.m                         3.75KB
        mdp_value_iteration.m            6.63KB
        mdp_verbose.m                    1.71KB
        documentation/
            mdp_bellman_operator.html            3.20KB
            mdp_check.html                       2.89KB
            mdp_LP.html                          3.17KB
            DOCUMENTATION.html                   3.04KB
            mdp_eval_policy_optimality.html      3.60KB
            mdp_eval_policy_iterative.html       7.33KB
            mdp_policy_iteration_modified.html   4.85KB
            mdp_relative_value_iteration.html    7.55KB
            BIA.png                              6.71KB
            mdp_computePpolicyPRpolicy.html      3.28KB
            index_alphabetic.html                6.32KB
            mdp_example_forest.html              6.67KB
            index_category.html                  6.86KB
            meandiscrepancy.jpg                  15.90KB
            mdp_eval_policy_TD_0.html            3.28KB
            mdp_computePR.html                   2.82KB
            mdp_check_square_stochastic.html     2.41KB
            mdp_example_rand.html                3.74KB
            mdp_value_iterationGS.html           8.49KB
            mdp_value_iteration.html             6.57KB
            mdp_finite_horizon.html              4.06KB
            arrow.gif                            231B
            mdp_value_iteration_bound_iter.html  3.50KB
            INRA.png                             131.30KB
            mdp_verbose_silent.html              2.39KB
            mdp_Q_learning.html                  4.09KB
            mdp_span.html                        2.03KB
            mdp_policy_iteration.html            4.82KB
            mdp_eval_policy_matrix.html          2.91KB
        mdp_example_rand.m               3.75KB
        mdp_eval_policy_TD_0.m           4.96KB
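Several of the listed files (mdp_value_iteration.m, mdp_bellman_operator.m, mdp_value_iterationGS.m) implement value iteration for discounted MDPs. As a language-neutral illustration of that algorithm, here is a minimal pure-Python sketch; the 2-state, 2-action MDP below is a made-up toy example (not the toolbox's forest example), and this is not the toolbox's own code.

```python
GAMMA = 0.9      # discount factor
EPSILON = 1e-6   # stopping threshold on the max value change

# Toy MDP (illustrative values only).
# P[a][s] = transition probabilities from state s under action a
P = {
    0: [[0.5, 0.5], [0.0, 1.0]],   # action 0 ("wait")
    1: [[1.0, 0.0], [1.0, 0.0]],   # action 1 ("reset")
}
# R[a][s] = immediate reward for taking action a in state s
R = {
    0: [0.0, 1.0],
    1: [0.5, 2.0],
}

def value_iteration(P, R, gamma, epsilon):
    """Return (values, greedy policy) for a discounted MDP."""
    n_states = len(next(iter(P.values())))
    V = [0.0] * n_states
    while True:
        # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = {a: [R[a][s] + gamma * sum(p * v for p, v in zip(P[a][s], V))
                 for s in range(n_states)]
             for a in P}
        V_new = [max(Q[a][s] for a in P) for s in range(n_states)]
        if max(abs(vn - v) for vn, v in zip(V_new, V)) < epsilon:
            # Extract the greedy policy from the converged Q-values.
            policy = [max(P, key=lambda a: Q[a][s]) for s in range(n_states)]
            return V_new, policy
        V = V_new

V, policy = value_iteration(P, R, GAMMA, EPSILON)
print(V, policy)
```

The toolbox's MATLAB version follows the same loop but operates on transition matrices P(:,:,a) and additionally offers a Gauss-Seidel variant (mdp_value_iterationGS.m) and a bound on the number of iterations (mdp_value_iteration_bound_iter.m).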