C and C++ implementations of HMMs, covering both the discrete and the continuous (GMM-emission) variants (a sketch of GMM emission likelihoods follows this entry).
2019-12-21 18:56:27 2.97MB CHMM HMM-GMM
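To illustrate the discrete/continuous distinction mentioned in the entry above, here is a minimal Matlab sketch, not code from the package, of how a continuous-observation HMM with diagonal-covariance GMM emissions computes its per-frame emission likelihoods; the function name gmm_obslik and the array layout are assumptions made for this example.

function B = gmm_obslik(obs, mu, sigma2, w)
% Per-frame emission likelihoods for a continuous (GMM-output) HMM with
% diagonal covariances.  Illustrative layout:
%   obs    : D x T      feature vectors (D dims, T frames)
%   mu     : D x M x Q  mixture means (M mixtures per state, Q states)
%   sigma2 : D x M x Q  mixture variances
%   w      : M x Q      mixture weights (each column sums to one)
%   B(i,t) = p(obs(:,t) | state i)
[D, T] = size(obs);
[~, M, Q] = size(mu);
B = zeros(Q, T);
for i = 1:Q
  for m = 1:M
    d = obs - repmat(mu(:,m,i), 1, T);                            % deviations from the mean
    e = exp(-0.5 * sum(d.^2 ./ repmat(sigma2(:,m,i), 1, T), 1));  % 1 x T exponent terms
    c = (2*pi)^(-D/2) / sqrt(prod(sigma2(:,m,i)));                % Gaussian normaliser
    B(i,:) = B(i,:) + w(m,i) * c * e;
  end
end
end

The resulting B(i,t) plays the same role as the obsmat(:, obs(t)) lookup used by the discrete-HMM recursions that appear later in this listing.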
Concrete algorithm code for the hidden Markov model (HMM); an excellent resource for learning HMMs (a forward-pass sketch follows this entry).
2019-12-21 18:56:27 299KB forward-backward algorithm
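As a companion to the entry above, here is a minimal sketch of the scaled forward pass for a discrete-output HMM, written against the prior/transmat/obsmat conventions of the hmm_em.m excerpt further down this listing; the function name dhmm_forward is illustrative and not part of the packaged code.

function [alpha, loglik] = dhmm_forward(prior, transmat, obsmat, obs)
% Scaled forward pass for a discrete-output HMM (illustrative sketch).
%   prior(i)      = P(Q(1)=i)
%   transmat(i,j) = P(Q(t+1)=j | Q(t)=i)
%   obsmat(i,o)   = P(Y(t)=o | Q(t)=i)
%   obs(t)        = observed symbol index at time t
T = length(obs);
Q = length(prior);
alpha = zeros(Q, T);
scale = zeros(1, T);
alpha(:,1) = prior(:) .* obsmat(:, obs(1));
scale(1) = sum(alpha(:,1));
alpha(:,1) = alpha(:,1) / scale(1);
for t = 2:T
  alpha(:,t) = (transmat' * alpha(:,t-1)) .* obsmat(:, obs(t));  % predict then weight by emission
  scale(t) = sum(alpha(:,t));
  alpha(:,t) = alpha(:,t) / scale(t);                            % renormalise to avoid underflow
end
loglik = sum(log(scale));   % log P(obs | model): the "evaluation" problem
end

The scaling factors keep alpha from underflowing on long sequences, and the sum of their logs is the sequence log-likelihood, i.e. the evaluation problem referred to in several entries below.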
A fairly classic HMM speech recognition algorithm implemented in C.
2019-12-21 18:56:15 8KB HMM speech recognition algorithm
Hidden Markov models (HMMs) have become the mainstream technique in speech recognition. This work first introduces the principles and structure of speech recognition, then presents the three basic problems of HMMs and how they are solved, and finally uses Matlab to build an isolated-word speech recognition system that recognises the digits 0-9 (a Viterbi decoding sketch follows this entry).
2019-12-21 18:55:17 192KB HMM speech recognition matlab
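Decoding, one of the three basic problems mentioned above, can be sketched as a log-domain Viterbi search under the same parameter conventions; the function name is illustrative and this is not the code of the packaged digit recogniser.

function [path, logp] = dhmm_viterbi(prior, transmat, obsmat, obs)
% Viterbi decoding: most likely hidden state sequence for a discrete HMM.
% Works in the log domain to avoid underflow on long utterances.
T = length(obs);
Q = length(prior);
logA = log(transmat);
delta = zeros(Q, T);      % best log-probability of any path ending in state j at time t
psi   = zeros(Q, T);      % back-pointers
delta(:,1) = log(prior(:)) + log(obsmat(:, obs(1)));
for t = 2:T
  for j = 1:Q
    [delta(j,t), psi(j,t)] = max(delta(:,t-1) + logA(:,j));
    delta(j,t) = delta(j,t) + log(obsmat(j, obs(t)));
  end
end
[logp, path(T)] = max(delta(:,T));
for t = T-1:-1:1
  path(t) = psi(path(t+1), t+1);   % follow the back-pointers
end
end

An isolated-word digit recogniser typically trains one HMM per digit and picks the model with the best forward or Viterbi score rather than reading off the state path, but the same recursion underlies both uses.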
Matlab implementation of the HMM algorithms, with an example; excerpt from hmm_em.m (an illustrative call follows this entry):

function [LL, prior, transmat, obsmat, nrIterations] = ...
    dhmm_em(data, prior, transmat, obsmat, varargin)
% LEARN_DHMM Find the ML/MAP parameters of an HMM with discrete outputs using EM.
% [ll_trace, prior, transmat, obsmat, iterNr] = learn_dhmm(data, prior0, transmat0, obsmat0, ...)
%
% Notation: Q(t) = hidden state, Y(t) = observation
%
% INPUTS:
% data{ex} or data(ex,:) if all sequences have the same length
% prior(i)
% transmat(i,j)
% obsmat(i,o)
%
% Optional parameters may be passed as 'param_name', param_value pairs.
% Parameter names are shown below; default values in [] - if none, argument is mandatory.
%
% 'max_iter' - max number of EM iterations [10]
% 'thresh' - convergence threshold [1e-4]
% 'verbose' - if 1, print out loglik at every iteration [1]
% 'obs_prior_weight' - weight to apply to uniform dirichlet prior on observation matrix [0]
%
% To clamp some of the parameters, so learning does not change them:
% 'adj_prior' - if 0, do not change prior [1]
% 'adj_trans' - if 0, do not change transmat [1]
% 'adj_obs' - if 0, do not change obsmat [1]
%
% Modified by Herbert Jaeger so xi are not computed individually
% but only their sum (over time) as xi_summed; this is the only way how they are used
% and it saves a lot of memory.

[max_iter, thresh, verbose, obs_prior_weight, adj_prior, adj_trans, adj_obs] = ...
    process_options(varargin, 'max_iter', 10, 'thresh', 1e-4, 'verbose', 1, ...
                    'obs_prior_weight', 0, 'adj_prior', 1, 'adj_trans', 1, 'adj_obs', 1);

previous_loglik = -inf;
loglik = 0;
converged = 0;
num_iter = 1;
LL = [];

if ~iscell(data)
  data = num2cell(data, 2); % each row gets its own cell
end

while (num_iter <= max_iter) & ~converged
  % E step
  [loglik, exp_num_trans, exp_num_visits1, exp_num_emit] = ...
      compute_ess_dhmm(prior, transmat, obsmat, data, obs_prior_weight);

  % M step
  if adj_prior
    prior = normalise(exp_num_visits1);
  end
  if adj_trans & ~isempty(exp_num_trans)
    tran   % (excerpt truncated here)
2019-12-21 18:54:55 24KB hmm
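Going only by the header of the excerpt above, a call to the function might look like the following; it assumes dhmm_em and its toolbox helpers that are not shown in the excerpt (process_options, normalise, compute_ess_dhmm, ...) are on the Matlab path.

Q = 2;  O = 4;                                   % hidden states / output symbols
data = {randi(O, 1, 50), randi(O, 1, 60)};       % two observation sequences of symbol indices
prior0    = ones(Q, 1) / Q;                      % uniform initial state distribution
transmat0 = rand(Q, Q);
transmat0 = transmat0 ./ repmat(sum(transmat0, 2), 1, Q);   % make rows stochastic
obsmat0   = rand(Q, O);
obsmat0   = obsmat0 ./ repmat(sum(obsmat0, 2), 1, O);
[LL, prior, transmat, obsmat] = dhmm_em(data, prior0, transmat0, obsmat0, ...
    'max_iter', 20, 'thresh', 1e-4, 'verbose', 1);
plot(LL);   % per-iteration log-likelihood; EM should make this non-decreasing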
A program written in MATLAB for learning hidden Markov models.
2019-12-21 18:54:54 28KB MATLAB HMM hidden Markov model
Contains matlab source code and a GUI.
2019-12-21 18:54:25 91KB HMM,MATLAB
Concrete algorithm code for the hidden Markov model (HMM), including the forward and backward algorithms, EM parameter re-estimation, and more (a backward-pass/E-step sketch follows this entry).
2019-12-21 18:54:12 304KB matlab
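To complement the forward-pass sketch earlier in this listing, the routine below (illustrative, not taken from the package) runs the scaled backward pass and accumulates the E-step quantities used for EM re-estimation: the state posteriors gamma and the summed transition posteriors xi_summed, the same quantity named in the hmm_em.m excerpt above.

function [gamma, xi_summed] = dhmm_ess(prior, transmat, obsmat, obs)
% Scaled forward-backward pass plus E-step statistics (illustrative sketch):
%   gamma(i,t)     = P(Q(t)=i | obs)                  state posteriors
%   xi_summed(i,j) = sum_t P(Q(t)=i, Q(t+1)=j | obs)  expected transition counts
T = length(obs);
Q = length(prior);
alpha = zeros(Q, T); scale = zeros(1, T);
alpha(:,1) = prior(:) .* obsmat(:, obs(1));
scale(1) = sum(alpha(:,1));
alpha(:,1) = alpha(:,1) / scale(1);
for t = 2:T
  alpha(:,t) = (transmat' * alpha(:,t-1)) .* obsmat(:, obs(t));
  scale(t) = sum(alpha(:,t));
  alpha(:,t) = alpha(:,t) / scale(t);
end
beta = zeros(Q, T); beta(:,T) = 1;     % backward pass, reusing the same scale factors
for t = T-1:-1:1
  beta(:,t) = transmat * (obsmat(:, obs(t+1)) .* beta(:,t+1)) / scale(t+1);
end
gamma = alpha .* beta;                 % with this scaling, already normalised over states
xi_summed = zeros(Q, Q);
for t = 1:T-1
  xi = (alpha(:,t) * (obsmat(:, obs(t+1)) .* beta(:,t+1))') .* transmat / scale(t+1);
  xi_summed = xi_summed + xi;          % each xi already sums to one over (i,j)
end
end

Row-normalising xi_summed re-estimates transmat, and accumulating gamma per observed symbol re-estimates obsmat, mirroring the M step of the excerpt.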
Hidden Markov Models (HMMs) provide a simple and effective framework for modelling time-varying spectral vector sequences. As a consequence, almost all present-day large vocabulary continuous speech recognition (LVCSR) systems are based on HMMs. Whereas the basic principles underlying HMM-based LVCSR are rather straightforward, the approximations and simplifying assumptions involved in a direct implementation of these principles would result in a system which has poor accuracy and unacceptable sensitivity to changes in operating environment. Thus, the practical application of HMMs in modern systems involves considerable sophistication. The aim of this review is first to present the core architecture of an HMM-based LVCSR system and then describe the various refinements which are needed to achieve state-of-the-art performance. These refinements include feature projection, improved covariance modelling, discriminative parameter estimation, adaptation and normalisation, noise compensation and multi-pass system combination. The review concludes with a case study of LVCSR for Broadcast News and Conversation transcription in order to illustrate the techniques described.
2019-12-21 18:51:31 617KB HMM ASR AI
Classic C source code for learning hidden Markov models (HMMs); a great help for studying HMMs. It effectively handles the three basic HMM problems: evaluation, decoding, and learning (training).
2019-12-21 18:51:10 361KB HMM source code C language