New analytic solutions of a generalized negative variable-coefficient eKdV internal-wave model, 刘瑞平, 魏光美. This paper studies the generalized negative variable-coefficient eKdV equation used to model oceanic internal solitary waves. With the aid of symbolic computation, a Painlevé analysis and an auto-Bäcklund transformation are given, and the Hirota bilinear method is used to obtain …
2024-03-02 11:21:28 997KB first-release paper
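For context, one commonly used variable-coefficient extended KdV (Gardner) form for internal solitary waves is sketched below in LaTeX; the coefficient functions α(t), β(t), γ(t) are illustrative placeholders, and the generalized model analyzed in the paper may contain additional dissipative or forcing terms. "Negative" here refers to the sign of the variable cubic-nonlinearity coefficient.

u_t + \alpha(t)\, u\, u_x + \beta(t)\, u^2 u_x + \gamma(t)\, u_{xxx} = 0, \qquad \beta(t) < 0 .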
Analytical study of the two-dimensional negative Gardner-KP equation and its application to oceanic internal solitary waves, 鲁营霖, 魏光美. This paper studies the two-dimensional negative Gardner-KP equation. First, a Painlevé analysis is carried out with the aid of symbolic computation, which shows that the equation is not Painlevé integrable. Based on the Painlevé truncation, …
2024-03-02 11:17:49 1.3MB first-release paper
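For reference, a standard (2+1)-dimensional Gardner-KP form is sketched below; the constants α, β, σ are illustrative placeholders, and the "negative" variant studied in the paper presumably fixes particular signs (e.g. a negative cubic term or σ = −1), which the truncated description does not specify.

\left( u_t + \alpha\, u\, u_x + \beta\, u^2 u_x + u_{xxx} \right)_x + \sigma\, u_{yy} = 0 .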
MATLAB image-processing code — Superior Photoacoustic Non-negative Reconstruction for clinical photoacoustic imaging. This repository contains example MATLAB code for reconstructing photoacoustic images from raw RF data using a fast, advanced conjugate-gradient algorithm. The results were published in our paper "Superior Photoacoustic Non-negative Reconstruction (SPANNER) for Clinical Photoacoustic Imaging". The code is freely available for everyone to use. If you publish any work that uses this code or a variant of it, please remember that it was obtained free of charge, and we kindly ask you to cite our paper in your publication. The code is provided "as is", without any express or implied warranty, including but not limited to implied warranties or conditions of merchantability or fitness for a particular purpose. In no event shall the authors and/or Stanford University be liable for any special, incidental, indirect, or consequential damages of any kind, whether arising from loss of use, data, or profits, whether or not the authors have been advised of the possibility of such damages, or under any theory of liability arising out of the use or performance of this code. MATLAB is a trademark of The MathWorks, Inc. Trademarks of other companies and/or organizations mentioned in this document and on the website are used for identification purposes only and are the property of their respective companies and/or organizations. Contact: please contact Dr. Idan Steinberg () for inquiries related to this code. Abstract: Photoacoustic (PA) imaging could revolutionize medical ultrasound by adding molecular information. However, due to limited
2022-07-01 09:04:33 20KB open-source system
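The SPANNER algorithm itself is not reproduced here. As a minimal sketch of the underlying non-negative reconstruction idea — solving min ||A*x − b||^2 subject to x >= 0 with a projected-gradient iteration rather than the paper's fast conjugate-gradient solver — the MATLAB fragment below uses a randomly generated forward-model matrix A and data vector b as placeholders.

% Minimal sketch: non-negative least-squares reconstruction by projected
% gradient descent. A (forward model) and b (raw RF data) are placeholders;
% this is not the SPANNER implementation, only an illustration of the
% non-negativity constraint it enforces.
A = rand(200, 100);             % placeholder forward-model matrix
x_true = max(randn(100, 1), 0); % non-negative ground truth
b = A * x_true;                 % simulated raw data
x = zeros(100, 1);              % initial image estimate
L = norm(A)^2;                  % Lipschitz constant of the gradient
for k = 1:500
    g = A' * (A * x - b);       % gradient of 0.5*||A*x - b||^2
    x = max(x - g / L, 0);      % gradient step, then projection onto x >= 0
end
fprintf('relative error: %.3e\n', norm(x - x_true) / norm(x_true));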
This file is a translation of the paper "Tag-Aware Personalized Recommendation Using a Deep-Semantic Similarity Model with Negative Sampling", produced with Google Translate together with my own understanding. Many English terms are kept in the translated text so that it can be read alongside the original. Thanks to the original authors for their hard work; if this infringes any rights, please contact me and I will remove it. [Abstract] With the rapid growth of social tagging systems, many efforts have been put on tag-aware personalized recommendation. However, due to uncontrolled vocabularies, social tags are usually redundant, sparse, and ambiguous. In this paper, we propose a deep neural network approach to solve this problem by mapping both the tag-based user and item profiles to an abstract deep feature space, where the deep-semantic similarities between users and their target items (resp., irrelevant items) are maximized (resp., minimized). Due to huge numbers of online items, the training of this model is usually computationally expensive in the real-world context. Therefore, we introduce negative sampling, which significantly increases the model's training efficiency (109.6 times quicker) and ensures the scalability in practice. Experimental results show that our model can significantly outperform the state-of-the-art baselines in tag-aware personalized recommendation: e.g., its mean reciprocal rank is between 5.7 and 16.5 times better than the baselines.
2022-05-15 10:33:23 295KB recommender system negative sampling
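To illustrate the negative-sampling idea described in the abstract — scoring the observed item against only a few sampled irrelevant items instead of the full catalogue — here is a small MATLAB sketch. The random embeddings, the number of negatives K, and the sampled-softmax loss are illustrative assumptions, not the paper's exact model.

% Minimal sketch of negative sampling for a similarity-based recommender.
% u: user embedding, V: item embedding matrix (one row per item).
d = 32; n_items = 10000; K = 5;
u = randn(1, d);                    % user profile embedding (assumed given)
V = randn(n_items, d);              % item embeddings (assumed given)
pos = 42;                           % index of the observed (positive) item
neg = randi(n_items, 1, K);         % K sampled negatives (in practice one would
                                    % exclude the positive item from this draw)
cand = [pos, neg];
% cosine similarities between the user and the candidate items
sims = (V(cand, :) * u') ./ (sqrt(sum(V(cand, :).^2, 2)) * norm(u));
p = exp(sims) / sum(exp(sims));     % softmax over 1 positive + K negatives
loss = -log(p(1));                  % negative log-likelihood of the positive item
fprintf('sampled softmax loss: %.4f\n', loss);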
K-SVD is a classical dictionary-learning algorithm. Following an error-minimization principle, it applies an SVD to the residual (error) term and takes the rank-one component that minimizes the error as the updated dictionary atom together with its corresponding coefficients; iterating this process yields an optimized solution.
2022-05-05 19:13:30 321KB ksvd sparse representation
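As a rough illustration of the atom update described above — forming the residual with atom k removed, restricting it to the signals that actually use that atom, and replacing the atom and its coefficients with the leading SVD pair — the MATLAB fragment below assumes a data matrix X, a dictionary D, a sparse coefficient matrix Gamma, and an atom index k are already available. It shows a single update step, not a full K-SVD implementation.

% One K-SVD atom update (sketch). X: data (m x N), D: dictionary (m x K),
% Gamma: sparse coefficients (K x N), k: index of the atom being updated.
omega = find(Gamma(k, :) ~= 0);              % signals that use atom k
if ~isempty(omega)
    % residual with atom k's contribution removed, restricted to those signals
    Ek = X(:, omega) - D * Gamma(:, omega) + D(:, k) * Gamma(k, omega);
    [U, S, V] = svd(Ek, 'econ');             % best rank-one approximation of the residual
    D(:, k) = U(:, 1);                       % updated atom (unit norm)
    Gamma(k, omega) = S(1, 1) * V(:, 1)';    % updated coefficients for atom k
end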
MahNMF — Manhattan Non-negative Matrix Factorization code
% Manhattan Non-negative Matrix Factorization.
% ManhNMF: Matlab Code for Efficient Robust Manhattan NMF Solver
%
% Reference
%   [1] N. Guan, D. Tao, Z. Luo, and J. Shawe-Taylor, "MahNMF: Manhattan
%       Non-negative Matrix Factorization," arXiv:1207.3438v1, 2012.
%   [2] N. Guan, D. Tao, Z. Luo, and J. Shawe-Taylor, "MahNMF: Manhattan
%       Non-negative Matrix Factorization," Submitted to Journal of Machine
%       Learning Research, 2013.
%
% The model is X \approx W^T H, where X, W, and H are defined as follows:
%   X (m x n): data matrix including n samples in m-dimensional space;
%   W (r x m): basis matrix including r bases in m-dimensional space;
%   H (r x n): coefficients matrix including n encodings in r-dimensional space.
%
% Written by Naiyang Guan (ny.guan@gmail.com)
% Copyright 2012-2014 by Naiyang Guan and Dacheng Tao
% Modified at Jan. 28 2013
%
%   X : Input data matrix (m x n)
%   r : Target low-rank
%
% (Below are optional arguments: can be set by providing name-value pairs)
%   MAX_ITER : Maximum number of iterations. Default is 1,000.
%   MIN_ITER : Minimum number of iterations. Default is 10.
%   MAX_TIME : Maximum amount of time in seconds. Default is 100,000.
%   W_INIT   : (m x r) initial value for W.
%   H_INIT   : (r x n) initial value for H.
%   LAM_INIT : Initial value of smoothness parameter. Default is 1.
%   MDL_TYPE : Model type (Default is 'PLAIN'),
%       'PLAIN' - MahNMF (min{||X-W^T*H||_1, s.t., W >= 0 and H >= 0}),
%       'BXC'   - Box Constrained MahNMF (min{||X-W^T*H||_1, s.t., 1 >= W >= 0 and 1 >= H >= 0}),
%       'MNR'   - Manifold Regularized MahNMF
%                 (min{||X-W^T*H||_1 + .5*beta*TR(H*Lp*H^T), s.t., W >= 0 and H >= 0}),
%       'GSP'   - Group Sparse MahNMF
%                 (min{||X-W^T*H||_1 + .5*beta*\sum_{g\in G}||W^[g]||_{1,p}, s.t., W >= 0 and H >= 0}),
%       'SYM'   - Symmetric MahNMF (min{||X-H*H^T||_1, s.t., H >= 0}).
%   ALG_TYPE : Algorithm type (Default is 'AGD'),
%       'AGD' - Accelerated Gradient Descent,
%       'RRI' - Rank-one Residue Iteration.
%   BETA     : Tradeoff parameter over regularization term. Default is 1e-3.
%   SIM_MTX  : Similarity matrix constructed by 'constructW'.
%   GPP_MTX  : Group pattern for boundary of all groups.
%   TOL_INNR : Stopping tolerance of inner iterations. Default is 1e-2.
%   TOL_OUTR : Stopping tolerance of outer iterations. Default is 1e-3.
%              If you want to obtain a more accurate solution, decrease TOL_INNR
%              or TOL_OUTR and increase MAX_ITER at the same time.
%   VB_OUTR  : 0 (default) - No debugging information is collected.
%              1 (debugging purpose) - History of computation is returned by 'HIS' variable.
%              2 (debugging purpose) - History of computation is additionally printed on screen.
%   VB_INNR  : 0 (default) - No debugging information is collected.
%              1 (debugging purpose) - History of computation is returned by 'HIS' variable.
%              2 (debugging purpose) - History of computation is additionally printed on screen.
%
%   W      : Obtained basis matrix (r x m).
%   H      : Obtained coefficients matrix (r x n).
%   iter   : Number of iterations.
%   elapse : CPU time in seconds.
%   HIS    : (debugging purpose) History of computation,
%            niter - total iteration number spent for Nesterov's optimal gradient method,
%            cpus  - CPU seconds at iteration rounds,
%            objf  - objective function values at iteration rounds,
%            dlta  - stopping criteria of block coordinate descent.
%
%   >> X = rand(1000, 500);
%   >> ManhNMF(X, 10);
%   >> ManhNMF(X, 20, 'verbose', 1);
%   >> ManhNMF(X, 30, 'verbose', 2, 'w_init', rand(r, m));
%   >> ManhNMF(X, 5, 'verbose', 2, 'tol_outr', 1e-5);
% Note: other files 'GetStopCriterion.m', 'ApproxFunC.m', and 'wmedianf.mexw32'
% should be included under the same directory as this code.
2022-03-23 20:35:03 29KB MahNMF MahNMF code
Non-negative matrix factorization (NMF) has proven to be a very effective decomposition method for dimensionality reduction in data analysis and has been widely applied in computer vision, pattern recognition, and information retrieval. However, NMF is essentially an unsupervised method and cannot exploit prior knowledge about the data. In this paper, we propose constrained non-negative matrix factorization with graph Laplacian (CNMF-GL), which enhances NMF by exploiting not only geometric information but also label information in an appropriate way. Specifically, a graph regularization term is expected to preserve the local structure of the original data, while data points sharing the same label and data points with different labels are given corresponding constraints. As a result, the learned representation is more discriminative. Experimental results on image clustering demonstrate the effectiveness of the proposed algorithm.
2022-02-27 19:53:22 282KB Non-negative matrix factorization
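CNMF-GL itself is not reproduced here. As a minimal sketch of the closely related graph-regularized NMF idea — multiplicative updates for ||X − W*H||_F^2 + lambda * tr(H*L*H') with Laplacian L = Dg − A built from a sample-affinity matrix A — the MATLAB fragment below uses randomly generated X, A, and a rank r as placeholders; the label-constraint terms of CNMF-GL are omitted.

% Graph-regularized NMF (sketch): X (m x n) data, A (n x n) sample affinity,
% r target rank, lambda regularization weight.
m = 100; n = 200; r = 10; lambda = 0.1;
X = rand(m, n);                               % placeholder non-negative data
A = double(rand(n) > 0.95); A = max(A, A');   % placeholder symmetric affinity graph
Dg = diag(sum(A, 2));                         % degree matrix (Laplacian L = Dg - A)
W = rand(m, r); H = rand(r, n);
eps0 = 1e-9;                                  % avoid division by zero
for it = 1:200
    W = W .* (X * H') ./ (W * (H * H') + eps0);
    H = H .* (W' * X + lambda * H * A) ./ (W' * W * H + lambda * H * Dg + eps0);
end
fprintf('reconstruction error: %.4f\n', norm(X - W * H, 'fro'));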
Algorithms for Non-negative Matrix Factorization
2022-01-17 16:00:19 1.07MB NMF
Team introduction: Hello everyone, we are the Huluwa (葫芦娃) team on the financial intelligence information track. All team members come from the Human Language Technology (HLT) group at Harbin Institute of Technology, Shenzhen; the members are 刘宇瀚, 李嘉明, 殷熔磾, 刘道兴 and 袁朝发, supervised by Prof. 徐睿峰. Solution overview: we present the following six points. First, the task description and data format; the data download is described at the link, and the data is also available in the data folder. The data as a whole has the following characteristics: Our overall solution pipeline is as follows: For dataset construction, we formulate the problem as an in-text entity-level sentiment classification task. For data preprocessing we apply the following operations: To better capture the information in the corpus, we additionally pre-train the language model on this dataset. We then adopt
"Negative samples" of pre-restoration grass carp images in an aquaculture environment
2021-12-16 18:37:26 157.96MB dataset