Machine Learning: An Algorithmic Perspective, Second Edition

Author: Stephen Marsland

VitalSource eBook access code and instructions will be provided within the print book.

October 8, 2014 by Chapman and Hall/CRC
Textbook – 457 Pages – 205 B/W Illustrations
ISBN 9781466583283 – CAT# K18981
Series: Chapman & Hall/CRC Machine Learning & Pattern Recognition

Features

  • Reflects recent developments in machine learning, including the rise of deep belief networks
  • Presents the necessary preliminaries, including basic probability and statistics
  • Discusses supervised learning using neural networks
  • Covers dimensionality reduction, the EM algorithm, nearest neighbor methods, optimal decision boundaries, kernel methods, and optimization
  • Describes evolutionary learning, reinforcement learning, tree-based learners, and methods to combine the predictions of many learners
  • Examines the importance of unsupervised learning, with a focus on the self-organizing feature map
  • Explores modern, statistically based approaches to machine learning
  • Provides working Python code for all the algorithms, available on the author’s website, enabling students to experiment with it

Summary

A Proven, Hands-On Approach for Students without a Strong Statistical Foundation

Since the best-selling first edition was published, there have been several prominent developments in the field of machine learning, including a growing emphasis on statistical interpretations of machine learning algorithms. Unfortunately, computer science students without a strong statistical background often find it hard to get started in this area.

Remedying this deficiency, Machine Learning: An Algorithmic Perspective, Second Edition helps students understand the algorithms of machine learning. It puts them on a path toward mastering the relevant mathematics and statistics as well as the necessary programming and experimentation.

New to the Second Edition

  • Two new chapters on deep belief networks and Gaussian processes
  • Reorganization of the chapters to produce a more natural flow of content
  • Revision of the support vector machine material, including a simple implementation for experiments
  • New material on random forests, the perceptron convergence theorem, accuracy methods, and conjugate gradient optimization for the multi-layer perceptron
  • Additional discussions of the Kalman and particle filters
  • Improved code, including better use of naming conventions in Python

Suitable for both an introductory one-semester course and more advanced courses, the text strongly encourages students to practice with the code. Each chapter includes detailed examples along with further reading and problems. All of the code used to create the examples is available on the author’s website.

Instructors

We provide complimentary e-inspection copies of primary textbooks to instructors considering our books for course adoption.
