Mastering Machine Learning with scikit-learn - Second Edition by Gavin Hackeling
English | 24 July 2017 | ASIN: B06ZYRPFMZ | ISBN: 1783988363 | 254 Pages | AZW3 | 5.17 MB
Key Features
Master popular machine learning models including k-nearest neighbors, random forests, logistic regression, k-means, naive Bayes, and artificial neural networks
Learn how to build and evaluate performance of efficient models using scikit-learn
A practical guide to mastering the basics and learning from real-life applications of machine learning
Book Description
Machine learning brings computer science and statistics together to build smart, efficient models. Using the algorithms and techniques it offers, you can automate many analytical tasks.
This book examines a variety of machine learning models, including popular algorithms such as k-nearest neighbors, logistic regression, naive Bayes, k-means, decision trees, and artificial neural networks. It discusses data preprocessing, hyperparameter optimization, and ensemble methods. You will build systems that classify documents, recognize images, detect ads, and more. You will learn to use scikit-learn's API to extract features from categorical variables, text, and images; evaluate model performance; and develop an intuition for how to improve your models.
By the end of this book, you will have mastered the scikit-learn concepts needed to build efficient models at work and to carry out advanced tasks with a practical approach.
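As a minimal sketch of the workflow the description outlines, here is how scikit-learn's API can extract features from raw text and evaluate a document classifier (the corpus, labels, and parameters below are illustrative, not taken from the book):

```python
# Illustrative sketch: classify tiny toy documents with scikit-learn.
# The corpus and labels are made up for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

docs = ["free money now", "win a prize today",
        "meeting at noon", "lunch with the team"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = ham

vectorizer = TfidfVectorizer()        # extract numeric features from raw text
X = vectorizer.fit_transform(docs)

clf = LogisticRegression()
clf.fit(X, labels)

preds = clf.predict(X)
print(accuracy_score(labels, preds))  # training accuracy on the toy data
```

On real data you would hold out a test set (for example with `train_test_split`) rather than score on the training documents.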
What you will learn
Review fundamental concepts such as bias and variance
Extract features from categorical variables, text, and images
Predict the values of continuous variables using linear regression and k-nearest neighbors
Classify documents and images using logistic regression and support vector machines
Create ensembles of estimators using bagging and boosting techniques
Discover hidden structures in data using k-means clustering
Evaluate the performance of your models
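One of the listed techniques, discovering structure with k-means clustering, can be sketched in a few lines of scikit-learn (the sample points here are made up for illustration):

```python
# Illustrative sketch: k-means clustering on made-up 2-D points.
import numpy as np
from sklearn.cluster import KMeans

# Two visually obvious groups of points (toy data)
X = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
              [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)           # cluster assignment for each point
print(kmeans.cluster_centers_)  # learned centroids
```

With well-separated groups like these, k-means assigns the first three points to one cluster and the last three to the other; on real data you would also inspect inertia or silhouette scores to choose `n_clusters`.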