Machine Learning: An Algorithmic Perspective, Second Edition

Stephen Marsland

Reference – 457 Pages – 205 B/W Illustrations
ISBN 9781466583283 – CAT# K18981
Series: Chapman & Hall/CRC Machine Learning & Pattern Recognition

  • Reflects recent developments in machine learning, including the rise of deep belief networks
  • Presents the necessary preliminaries, including basic probability and statistics
  • Discusses supervised learning using neural networks
  • Covers dimensionality reduction, the EM algorithm, nearest neighbor methods, optimal decision boundaries, kernel methods, and optimization
  • Describes evolutionary learning, reinforcement learning, tree-based learners, and methods to combine the predictions of many learners
  • Examines the importance of unsupervised learning, with a focus on the self-organizing feature map
  • Explores modern, statistically based approaches to machine learning
  • Provides working Python code for all the algorithms on the author’s website, enabling students to experiment with the code

Summary

A Proven, Hands-On Approach for Students without a Strong Statistical Foundation

Since the best-selling first edition was published, there have been several prominent developments in the field of machine learning, including increasing work on statistical interpretations of machine learning algorithms. Unfortunately, computer science students without a strong statistical background often find it hard to get started in this area.

Remedying this deficiency, Machine Learning: An Algorithmic Perspective, Second Edition helps students understand the algorithms of machine learning. It puts them on a path toward mastering the relevant mathematics and statistics as well as the necessary programming and experimentation.

New to the Second Edition

  • Two new chapters on deep belief networks and Gaussian processes
  • Reorganization of the chapters to create a more natural flow of content
  • Revision of the support vector machine material, including a simple implementation for experiments
  • New material on random forests, the perceptron convergence theorem, accuracy methods, and conjugate gradient optimization for the multi-layer perceptron
  • Additional discussions of the Kalman and particle filters
  • Improved code, including better use of naming conventions in Python

Suitable for both an introductory one-semester course and more advanced courses, the text strongly encourages students to practice with the code. Each chapter includes detailed examples along with further reading and problems. All of the code used to create the examples is available on the author’s website.

Table of Contents

Introduction. Linear Discriminants. The Multi-Layer Perceptron. Radial Basis Functions and Splines. Support Vector Machines. Learning with Trees. Decision by Committee: Ensemble Learning. Probability and Learning. Unsupervised Learning. Dimensionality Reduction. Optimization and Search. Evolutionary Learning. Reinforcement Learning. Markov Chain Monte Carlo (MCMC) Methods. Graphical Models. Python.

Author Bio

Stephen Marsland is a professor of scientific computing and the postgraduate director of the School of Engineering and Advanced Technology (SEAT) at Massey University. His research interests in mathematical computing include shape spaces, Euler equations, machine learning, and algorithms. He received a PhD from Manchester University.
