Image segmentation is a crucial and fundamental step in geographic object-based image analysis (GEOBIA). Many multi-scale segmentation algorithms have been widely applied to high-resolution (HR) remote sensing images. These segmentation algorithms require a preset parameter, called the scale parameter, to control the average size of each object. However, because of spatial variation, a single scale parameter can hardly describe the boundaries of regions with different land covers. To overcome this limitation, this study proposes an adaptive parameter-optimization method for multi-scale segmentation. To find the optimal object scale, a local spectral heterogeneity measure is applied by computing the intra-object and inter-object spectral angles. Rather than selecting a single globally optimal scale parameter, this study searches for optimal objects directly among the results at all scales and combines them into the final segmentation result. In the experiments, multiresolution segmentation was used to generate segmentation results at different scales, and QuickBird-2 images were used as test data. The optimization results on four HR test images show that the proposed method achieves better segmentation than single-scale segmentation results.
2021-03-30 21:08:51 3.87MB Image segmentation Scale parameter
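The abstract above describes a spectral-angle-based local heterogeneity measure but gives no formulas; the following Python sketch only illustrates the general idea (the function names, the object representation, and the intra/inter aggregation are assumptions, not the paper's exact definitions):

```python
import numpy as np

def spectral_angle(v1, v2):
    """Spectral angle (radians) between two spectral vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def local_heterogeneity(obj_pixels, neighbor_means):
    """Toy local measure for one segment.
    obj_pixels: (N, bands) array of pixel spectra inside the segment.
    neighbor_means: (M, bands) array of mean spectra of adjacent segments.
    Returns (intra, inter): mean spectral angle of pixels to the object mean,
    and mean spectral angle between the object mean and its neighbours."""
    mean = obj_pixels.mean(axis=0)
    intra = np.mean([spectral_angle(p, mean) for p in obj_pixels])
    inter = np.mean([spectral_angle(mean, m) for m in neighbor_means])
    # A well-scaled object is internally homogeneous (small intra)
    # and spectrally distinct from its neighbours (large inter).
    return intra, inter
```

In this spirit, the same region segmented with several scale parameters can be compared through its (intra, inter) pair, keeping the object from the scale with the most homogeneous interior and the most distinct neighbours.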
NanoLog A Nanosecond Scale Logging System.pdf
2021-03-23 17:02:00 547KB LOG
Describes and demonstrates the capability for accurate nonlinear multi-scale material and product modelling using ABAQUS and DIGIMAT (the linear and nonlinear multi-scale material modelling software from e-Xstream engineering).
2021-03-23 15:30:52 489KB DIGIMAT ABAQUS
Forecasting is a common data science task that helps organizations with capacity planning, goal setting, and anomaly detection. Despite its importance, there are serious challenges associated with producing reliable and high quality forecasts - especially when there are a variety of time series and analysts with expertise in time series modeling are relatively rare. To address these challenges, we describe a practical approach to forecasting "at scale" that combines configurable models with analyst-in-the-loop performance analysis. We propose a modular regression model with interpretable parameters that can be intuitively adjusted by analysts with domain knowledge about the time series. We describe performance analyses to compare and evaluate forecasting procedures, and automatically flag forecasts for manual review and adjustment. Tools that help analysts to use their expertise most effectively enable reliable, practical forecasting of business time series.
2021-03-17 20:05:22 1.08MB machine learning
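This abstract describes the "Forecasting at Scale" approach, which is implemented in the open-source Prophet library; a minimal usage sketch under that assumption (the file name and parameter values are illustrative only):

```python
import pandas as pd
from prophet import Prophet  # pip install prophet (older releases: fbprophet)

# Prophet expects a dataframe with a 'ds' (datestamp) column and a 'y' (value) column.
df = pd.read_csv("daily_series.csv")  # hypothetical input file

# Interpretable, analyst-adjustable parameters, e.g. trend flexibility and seasonality.
m = Prophet(changepoint_prior_scale=0.05, yearly_seasonality=True)
m.fit(df)

# Forecast 90 days beyond the observed history and inspect uncertainty intervals.
future = m.make_future_dataframe(periods=90)
forecast = m.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```

The changepoint prior and seasonality switches are examples of the interpretable parameters that a domain analyst can adjust, as the abstract describes.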
ImageNet Large Scale Visual Recognition Challenge.pdf
2021-03-16 17:15:00 8.09MB deep learning
AN-0901: Methods for controlling multilevel converter topologies with SCALE-2 IGBT drivers.pdf
2021-03-16 14:04:18 396KB IGBT, gate driver
Multi-Scale Context Aggregation by Dilated Convolutions.zip
2021-03-16 09:27:44 2.63MB deep learning
In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3×3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16–19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.
2021-03-15 10:55:36 185KB AI machine learning deep learning academic paper
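The central design choice described above, stacking very small 3×3 convolutions until the network reaches 16–19 weight layers, can be illustrated with a short PyTorch sketch (a simplified stand-in for the VGG-16 feature extractor, not the authors' original configuration):

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, n_convs):
    """A VGG-style block: n_convs stacked 3x3 convolutions followed by 2x2 max pooling.
    Two stacked 3x3 convs cover the receptive field of one 5x5 conv with fewer
    parameters and an extra non-linearity in between."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)

# VGG-16-like feature extractor: 13 conv layers here, plus 3 FC layers in the
# classifier head (not shown) give the 16 weight layers of the abstract.
features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
    vgg_block(256, 512, 3),
    vgg_block(512, 512, 3),
)

x = torch.randn(1, 3, 224, 224)
print(features(x).shape)  # torch.Size([1, 512, 7, 7])
```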
Large Scale Distributed Deep Networks
2021-03-12 09:13:55 354KB deep learning
Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods
2021-03-12 09:13:54 894KB deep learning