DeepLabCut is a toolbox for markerless pose estimation of animals performing various tasks. As long as you can see (i.e., label) what you want to track, you can use this toolbox, because it is animal- and object-agnostic. Latest updates: :purple_heart: DeepLabCut now supports multi-animal pose estimation (BETA release, please give us feedback! pip install deeplabcut==2.2b8). :purple_heart: We have a real-time package! Quick start: pip install deeplabcut (you will also need TensorFlow and wxPython). See the overview of the project-management pipeline and workflow. For a step-by-step walkthrough, please read the user guide. For a more in-depth
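For orientation, here is a minimal sketch of a typical single-animal DeepLabCut workflow in Python; the project name, experimenter, and video path are placeholders, and exact arguments and options should be checked against the official DeepLabCut documentation.

    # Sketch of a typical DeepLabCut project workflow (paths and names are placeholders).
    import deeplabcut

    # Create a project from one or more videos; returns the path to the project's config.yaml.
    config_path = deeplabcut.create_new_project(
        "reach-task", "researcher", ["/path/to/video.mp4"], copy_videos=True
    )

    deeplabcut.extract_frames(config_path)            # select frames to annotate
    deeplabcut.label_frames(config_path)              # opens the wxPython labeling GUI
    deeplabcut.create_training_dataset(config_path)   # build the training set
    deeplabcut.train_network(config_path)             # train the pose-estimation network
    deeplabcut.evaluate_network(config_path)          # check train/test errors
    deeplabcut.analyze_videos(config_path, ["/path/to/video.mp4"])  # run inference on videos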
Twenty-five years have passed since the publication of the Russian version of the book Estimation of Dependencies Based on Empirical Data (EDBED for short). Twenty-five years is a long period of time. During these years many things have happened. Looking back, one can see how rapidly life and technology have changed, and how slow and difficult it is to change the theoretical foundation of the technology and its philosophy.

I pursued two goals in writing this Afterword: to update the technical results presented in EDBED (the easy goal) and to describe a general picture of how the new ideas developed over these years (a much more difficult goal). The picture which I would like to present is a very personal (and therefore very biased) account of the development of one particular branch of science, Empirical Inference Science. Such accounts are usually not included in the content of technical publications. I have followed this rule in all of my previous books. But this time I would like to violate it for the following reasons.

First of all, for me EDBED is an important milestone in the development of empirical inference theory, and I would like to explain why. Second, during these years there were a lot of discussions between supporters of the new paradigm (now called the VC theory) and the old one (classical statistics). Being involved in these discussions from the very beginning, I feel that it is my obligation to describe the main events.

The story related to the book, which I would like to tell, is the story of how difficult it is to overcome existing prejudices (both scientific and social), and how careful one should be when evaluating and interpreting new technical concepts. This story can be split into three parts that reflect three main ideas in the development of empirical inference science: from the purely technical (mathematical) elements of the theory to a new paradigm in the philosophy of generalization. The first part of the story, which describes the mai
2021-10-26 15:37:02 1.01MB Machine Learning
NGBoost: Natural Gradient Boosting for probabilistic prediction. ngboost is a Python library implementing Natural Gradient Boosting, as described in the accompanying paper. It is built on top of scikit-learn and is designed to be scalable and modular with respect to the choice of proper scoring rule, distribution, and base learner. A detailed introduction to the basic NGBoost methodology is available. Installation:

    via pip:
    pip install --upgrade ngboost

    via conda-forge:
    conda install -c conda-forge ngboost

Usage: a probabilistic regression example on the Boston housing dataset (note that load_boston has since been removed from recent scikit-learn releases; the split fraction below fills in where the original snippet was cut off):

    from ngboost import NGBRegressor
    from sklearn.datasets import load_boston
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    X, Y = load_boston(True)
    X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
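A hedged continuation of the snippet above, showing how the fitted model is typically used: it assumes NGBRegressor's fit/predict/pred_dist interface and the variables defined above, and the metrics shown are illustrative.

    # Continuation sketch (uses the variables defined in the snippet above).
    ngb = NGBRegressor().fit(X_train, Y_train)

    Y_preds = ngb.predict(X_test)        # point predictions (means of the predictive distributions)
    Y_dists = ngb.pred_dist(X_test)      # full predictive distributions

    print("test MSE:", mean_squared_error(Y_test, Y_preds))
    print("test NLL:", -Y_dists.logpdf(Y_test).mean())   # negative log-likelihood of held-out data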
Estimation of Dependences Based on Empirical Data
2021-10-23 09:10:01 4.87MB Machine Learning
Introduction to Nonparametric Estimation (English edition)
2021-10-17 16:29:01 2.36MB Estimation English-edition Introduction
MATLAB code for computing confidence intervals: parameter estimation. Some quick example code for parameter estimation with an SIR model, together with code for checking identifiability and uncertainty using the Fisher information matrix and profile likelihoods; see the lab-exercise PDF for more information (this code is for Part 2). The code was originally developed for the 2017 NIMBioS/MBI/CAMBAM graduate summer school and the NIMBioS tutorial on uncertainty quantification. Equivalent code is provided in both R and MATLAB, and it walks through the following steps (a Python sketch of the same workflow is shown below):

    - Simulate the model at some initial parameter values.
    - Estimate the model parameters from the (simulated) outbreak data by maximum likelihood (ML), assuming the data are Poisson with the model output as the mean; this can be changed to whatever you prefer, e.g., least squares.
    - Compute a simplified form of the Fisher information matrix (FIM) and test its rank to assess the number of identifiable parameters/combinations.
    - Generate profile likelihoods for each parameter and determine 95% confidence intervals.

Questions? Contact Marisa Eisenberg (). The material is licensed under the MIT License; it is free to use/modify with attribution to the original source (see the license text).
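Since the original R/MATLAB source is not reproduced here, the following is a minimal Python sketch of the steps above; the model settings, "true" parameter values, finite-difference step, and data are illustrative assumptions, not the original lab code.

    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import minimize
    from scipy.stats import chi2

    def sir_rhs(y, t, beta, gamma):
        # Standard SIR right-hand side: beta = transmission rate, gamma = recovery rate.
        S, I, R = y
        N = S + I + R
        return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

    def model_mean(params, t, y0):
        # Model-predicted prevalence I(t), used as the Poisson mean for the observations.
        beta, gamma = params
        sol = odeint(sir_rhs, y0, t, args=(beta, gamma))
        return np.clip(sol[:, 1], 1e-8, None)

    def neg_log_lik(params, t, y0, data):
        # Poisson negative log-likelihood (data-dependent constant dropped).
        mu = model_mean(params, t, y0)
        return np.sum(mu - data * np.log(mu))

    # 1) Simulate outbreak data at "true" parameter values.
    t = np.arange(0.0, 50.0, 1.0)
    y0 = [990.0, 10.0, 0.0]
    true_params = np.array([0.4, 0.25])
    data = np.random.default_rng(0).poisson(model_mean(true_params, t, y0))

    # 2) Maximum-likelihood estimation.
    fit = minimize(neg_log_lik, x0=[0.3, 0.2], args=(t, y0, data), method="Nelder-Mead")
    beta_hat, gamma_hat = fit.x

    # 3) Finite-difference Fisher information matrix and its rank (identifiability check).
    def fim(params, h=1e-4):
        mu = model_mean(params, t, y0)
        sens = []
        for i in range(len(params)):
            hi, lo = params.copy(), params.copy()
            hi[i] += h
            lo[i] -= h
            sens.append((model_mean(hi, t, y0) - model_mean(lo, t, y0)) / (2 * h))
        J = np.stack(sens, axis=1)           # sensitivities d mu / d theta
        return J.T @ (J / mu[:, None])       # FIM for Poisson-distributed observations

    print("FIM rank:", np.linalg.matrix_rank(fim(fit.x)))  # rank < #params suggests non-identifiability

    # 4) Profile likelihood and approximate 95% confidence interval for beta.
    threshold = fit.fun + chi2.ppf(0.95, df=1) / 2
    betas = np.linspace(0.5 * beta_hat, 1.5 * beta_hat, 60)
    profile = [minimize(lambda g, b=b: neg_log_lik([b, g[0]], t, y0, data),
                        x0=[gamma_hat], method="Nelder-Mead").fun for b in betas]
    inside = betas[np.array(profile) <= threshold]
    print("approx. 95% CI for beta:", inside.min(), inside.max())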
2021-10-15 16:47:11 173KB Open-source system
LMS adaptive time-delay estimation: the delay is estimated by LMS adaptive filtering; the optimal weight vector is obtained, and the position of the peak weight corresponds to the delay value. Includes dynamic analysis.
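The MATLAB source itself is not reproduced here, so below is a minimal Python sketch of the same idea under assumed settings (signal length, filter length, step size, and true delay are illustrative): the adaptive filter converges so that its largest weight sits at the tap corresponding to the delay.

    import numpy as np

    rng = np.random.default_rng(1)
    N, true_delay, L, mu = 4000, 12, 32, 0.01   # samples, delay (taps), filter length, step size

    x = rng.standard_normal(N)                                  # reference signal
    d = np.roll(x, true_delay) + 0.1 * rng.standard_normal(N)   # delayed, noisy observation

    w = np.zeros(L)                                  # adaptive FIR weight vector
    for n in range(L, N):
        x_vec = x[n - L + 1:n + 1][::-1]             # last L reference samples, newest first
        e = d[n] - w @ x_vec                         # instantaneous estimation error
        w += mu * e * x_vec                          # LMS weight update

    print("estimated delay (samples):", int(np.argmax(np.abs(w))))  # peak weight index ~ delay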
2021-10-15 14:36:47 544KB LMS adaptive
Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory
2021-10-13 20:25:10 88.56MB Optimum Array