Matlab code from the paper on a blind deconvolution algorithm with a hyper-Laplacian prior; the code runs.
2019-12-21 22:15:09 2.05MB 去模糊
Object-Oriented Analysis and Design Using UML
2019-12-21 22:13:35 3.89MB uml oo 226
Kaiming He's best-paper winner at CVPR 2009, Single Image Haze Removal Using Dark Channel Prior, in both Chinese and English, together with the slides from Kaiming He's CVPR talk on dark-channel-prior single-image dehazing, and his publicly released guided-filter-code source.
2019-12-21 22:12:03 12.86MB HazeRemoval dehaze defrog code
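The dark channel prior named in the entry above is simple to compute: take the per-pixel minimum over the RGB channels, then a minimum filter over a local patch. Below is a minimal generic sketch (the patch size and a [0, 1] input range are assumptions), not Kaiming He's released code:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an HxWx3 image in [0, 1]: per-pixel channel minimum,
    followed by a min-filter over a patch x patch neighborhood."""
    mins = img.min(axis=2)                 # per-pixel minimum across channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

In haze-free outdoor images this map is close to zero almost everywhere, which is the observation the dehazing method exploits to estimate transmission.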
Radar Systems Analysis and Design Using MATLAB, English third edition, published in 2013 (currently the latest edition); the archive contains the book's PDF and its programs.
2019-12-21 22:11:43 7.04MB radar Matlab
ios-native-popup-using-unity-master.zip
2019-12-21 22:08:46 229KB implementing iOS native popups using Unity
English PDF, published 2018. Master reinforcement learning, a popular area of machine learning, starting with the basics: discover how agents and the environment evolve, and then gain a clear picture of how they are interrelated. You'll then work with theories related to reinforcement learning and see the concepts that build up the reinforcement learning process. Reinforcement Learning discusses algorithm implementations important for reinforcement learning, including the Markov Decision Process and Semi-Markov Decision Process. The next section shows you how to get started with OpenAI before looking at OpenAI Gym. You'll then learn about swarm intelligence with Python in terms of reinforcement learning. The last part of the book starts with the TensorFlow environment and gives an outline of how reinforcement learning can be applied to TensorFlow. There's also coverage of Keras, a framework that can be used with reinforcement learning. Finally, you'll delve into Google's DeepMind and see scenarios where reinforcement learning can be used.
What You'll Learn: Absorb the core concepts of the reinforcement learning process; use advanced topics of deep learning and AI; work with OpenAI Gym, OpenAI, and Python; harness reinforcement learning with TensorFlow and Keras using Python.
Who This Book Is For: Data scientists, machine learning and deep learning professionals, and developers who want to adopt and learn reinforcement learning.
Table of Contents: Chapter 1: Reinforcement Learning Basics; Chapter 2: RL Theory and Algorithms; Chapter 3: OpenAI Basics; Chapter 4: Applying Python to Reinforcement Learning; Chapter 5: Reinforcement Learning with Keras, TensorFlow, and ChainerRL; Chapter 6: Google's DeepMind and the Future of Reinforcement Learning.
2019-12-21 22:07:40 10.99MB Machine Learning TensorFlow Keras
Matlab implementation of the Image Deformation Using Moving Least Squares algorithm: deforms images via moving least squares driven by user-specified control points.
2019-12-21 22:05:49 1.13MB MLS Deformation
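The core of the moving-least-squares deformation in the entry above is a per-point weighted affine fit: each control point is weighted by inverse distance to the query point, and the weighted least-squares affine map is applied. A minimal sketch of the affine variant (control-point layout and the falloff exponent `alpha` are illustrative assumptions):

```python
import numpy as np

def mls_affine(v, p, q, alpha=1.0):
    """Affine moving-least-squares deformation of a single 2D point v.
    p, q: (n, 2) arrays of source and target control points."""
    d2 = np.sum((p - v) ** 2, axis=1)
    if np.any(d2 < 1e-12):                 # v coincides with a control point
        return q[np.argmin(d2)].astype(float)
    w = 1.0 / d2 ** alpha                  # inverse-distance weights
    p_star = w @ p / w.sum()               # weighted centroids
    q_star = w @ q / w.sum()
    ph = p - p_star
    qh = q - q_star
    # Solve the weighted least-squares affine matrix M.
    M = np.linalg.solve(ph.T @ (ph * w[:, None]), ph.T @ (qh * w[:, None]))
    return (v - p_star) @ M + q_star
```

Applying this to every pixel of a sampling grid (and interpolating the image at the deformed coordinates) reproduces the warp; the paper's similarity and rigid variants constrain M further.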
Aircraft-Flight-Dynamics-Control-and-Simulation-Using-Matlab-and-Simulink-Singgih-Satrio-Wibowo-2007.pdf
2019-12-21 22:00:48 1.8MB Aircraft Flight Dynamics Simulation
This resource is the reference paper for the MPC controller used in Baidu Apollo.
2019-12-21 21:58:46 308KB Apollo
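For readers unfamiliar with MPC, the idea is to stack the linear dynamics over a finite horizon, solve for the input sequence minimizing a quadratic cost, and apply only the first input before re-solving. A generic unconstrained textbook sketch (the double-integrator matrices in the test and horizon length are assumptions, not Apollo's actual formulation):

```python
import numpy as np

def mpc_step(x0, A, B, Q, R, N=10):
    """One receding-horizon step of unconstrained linear MPC.
    Stacks x_{k+1} = A x_k + B u_k over horizon N as X = Sx x0 + Su U,
    minimizes sum(x'Qx + u'Ru), and returns the first optimal input."""
    n, m = B.shape
    Sx = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Su = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            Su[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = np.kron(np.eye(N), Q)           # block-diagonal stage costs
    Rbar = np.kron(np.eye(N), R)
    H = Su.T @ Qbar @ Su + Rbar            # quadratic term of the QP
    f = Su.T @ Qbar @ Sx @ x0              # linear term
    U = np.linalg.solve(H, -f)             # unconstrained minimizer
    return U[:m]                           # apply only the first input
```

A production controller such as Apollo's adds input/state constraints, turning the problem into a constrained QP solved by an iterative solver rather than a single linear solve.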
I am from the days when computer engineers and scientists had to write assembly language on IBM mainframes to develop high-performance programs. Programs were written on punch cards and compilation was a one-day process; you dropped off your punch-code written program and picked up the results the next day. If there was an error, you did it again. In those days, a good programmer had to understand the underlying machine hardware to produce good code. I get a little nervous when I see computer science students being taught only at a high abstraction level and languages like Ruby. Although abstraction is a beautiful thing to develop things without getting bogged down with unnecessary details, it is a bad thing when you are trying to develop super high performance code. Since the introduction of the first CPU, computer architects added incredible features into CPU hardware to "forgive" bad programming skills; while you had to order the sequence of machine code instructions by hand two decades ago, CPUs do that in hardware for you today (e.g., out of order processing). A similar trend is clearly visible in the GPU world. Most of the techniques that were taught as performance improvement techniques in GPU programming five years ago (e.g., thread divergence, shared memory bank conflicts, and reduced usage of atomics) are becoming less relevant with the improved GPU architectures because GPU architects are adding hardware features that are improving these previous inefficiencies so much that it won't even matter if a programmer is sloppy about it within another 5-10 years. However, this is just a guess. What GPU architects can do depends on their (i) transistor budget, as well as (ii) their customers' demands. When I say transistor budget, I am referring to how many transistors the GPU manufacturers can cram into an Integrated Circuit (IC), aka a "chip." When I say customer demands, I mean that even if they can implement a feature, the applications that their customers are using might not
2019-12-21 21:54:28 4.97MB GPU CUDA Parallel