Artificial Intelligence: With an Introduction to Machine Learning, Second Edition

Authors: Richard E. Neapolitan, Xia Jiang

Published May 16, 2018 by Chapman and Hall/CRC
Textbook – 466 Pages – 211 B/W Illustrations
ISBN 9781138502383 – CAT# K39475
Series: Chapman & Hall/CRC Artificial Intelligence and Robotics Series

Features

  • Focuses on AI-based algorithms that are currently used to solve diverse problems
  • Enables students to solve problems and improve their computer science skills
  • Introduces difficult concepts with simple, accessible examples
  • Covers large-scale applications of probability-based methods
  • Includes new material on neural networks and deep learning
  • Provides PowerPoint presentations and videos covering the most fundamental material with this edition

Summary

The first edition of this popular textbook, Contemporary Artificial Intelligence, provided an accessible and student-friendly introduction to AI. This fully revised and expanded update, Artificial Intelligence: With an Introduction to Machine Learning, Second Edition, retains the same accessibility and problem-solving approach, while providing new material and methods.

The book is divided into five sections that focus on the most useful techniques that have emerged from AI. The first section covers logic-based methods, and the second focuses on probability-based methods. The third section, on emergent intelligence, explores evolutionary computation and methods based on swarm intelligence. The fourth section, new to this edition, provides a detailed overview of neural networks and deep learning. The final section focuses on natural language understanding.

Suitable for undergraduate and beginning graduate students, this class-tested textbook provides students and other readers with key AI methods and algorithms for solving challenging problems involving systems that behave intelligently in specialized domains such as medical and software diagnostics, financial decision making, speech and text recognition, genetic analysis, and more.

Table of Contents

1. Introduction to Artificial Intelligence
1.1 History of Artificial Intelligence
1.2 Outline of this Book

PART I LOGICAL INTELLIGENCE

2. Propositional Logic
2.1 Basics of Propositional Logic
2.2 Resolution
2.3 Artificial Intelligence Applications
2.4 Discussion and Further Reading

3. First-Order Logic
3.1 Basics of First-Order Logic
3.2 Artificial Intelligence Applications
3.3 Discussion and Further Reading

4. Certain Knowledge Representation
4.1 Taxonomic Knowledge
4.2 Frames
4.3 Nonmonotonic Logic
4.4 Discussion and Further Reading

5. Learning Deterministic Models
5.1 Supervised Learning
5.2 Regression
5.3 Parameter Estimation
5.4 Learning a Decision Tree

PART II PROBABILISTIC INTELLIGENCE

6. Probability
6.1 Probability Basics
6.2 Random Variables
6.3 Meaning of Probability
6.4 Random Variables in Applications
6.5 Probability in the Wumpus World

7. Uncertain Knowledge Representation
7.1 Intuitive Introduction to Bayesian Networks
7.2 Properties of Bayesian Networks
7.3 Causal Networks as Bayesian Networks
7.4 Inference in Bayesian Networks
7.5 Networks with Continuous Variables
7.6 Obtaining the Probabilities
7.7 Large-Scale Application: Promedas

8. Advanced Properties of Bayesian Networks
8.1 Entailed Conditional Independencies
8.2 Faithfulness
8.3 Markov Equivalence
8.4 Markov Blankets and Boundaries

9. Decision Analysis
9.1 Decision Trees
9.2 Influence Diagrams
9.3 Modeling Risk Preferences
9.4 Analyzing Risk Directly
9.5 Good Decision versus Good Outcome
9.6 Sensitivity Analysis
9.7 Value of Information
9.8 Discussion and Further Reading

10. Learning Probabilistic Model Parameters
10.1 Learning a Single Parameter
10.2 Learning Parameters in a Bayesian Network
10.3 Learning Parameters with Missing Data

11. Learning Probabilistic Model Structure
11.1 Structure Learning Problem
11.2 Score-Based Structure Learning
11.3 Constraint-Based Structure Learning
11.4 Application: MENTOR
11.5 Software Packages for Learning
11.6 Causal Learning
11.7 Class Probability Trees
11.8 Discussion and Further Reading

12. Unsupervised Learning and Reinforcement Learning
12.1 Unsupervised Learning
12.2 Reinforcement Learning
12.3 Discussion and Further Reading

PART III EMERGENT INTELLIGENCE

13. Evolutionary Computation
13.1 Genetics Review
13.2 Genetic Algorithms
13.3 Genetic Programming
13.4 Discussion and Further Reading

14. Swarm Intelligence
14.1 Ant System
14.2 Flocks
14.3 Discussion and Further Reading

PART IV NEURAL INTELLIGENCE

15. Neural Networks and Deep Learning
15.1 The Perceptron
15.2 Feedforward Neural Networks
15.3 Activation Functions
15.4 Application to Image Recognition
15.5 Discussion and Further Reading

PART V LANGUAGE UNDERSTANDING

16. Natural Language Understanding
16.1 Parsing
16.2 Semantic Interpretation
16.3 Concept/Knowledge Interpretation
16.4 Information Extraction
16.5 Discussion and Further Reading

Author Bios

Richard E. Neapolitan is professor emeritus of computer science at Northeastern Illinois University and a former professor of bioinformatics at Northwestern University. He is currently president of Bayesian Network Solutions. His research interests include probability and statistics, decision support systems, cognitive science, and applications of probabilistic modeling to fields such as medicine, biology, and finance. Dr. Neapolitan is a prolific author and has published in the most prestigious journals in the broad area of reasoning under uncertainty. He has previously written five books, including the seminal 1989 Bayesian network text Probabilistic Reasoning in Expert Systems; Learning Bayesian Networks (2004); Foundations of Algorithms (1996, 1998, 2003, 2010, 2015), which has been translated into three languages; Probabilistic Methods for Financial and Marketing Informatics (2007); and Probabilistic Methods for Bioinformatics (2009). His approach to textbook writing is distinct in that he introduces a concept or methodology with simple examples and then provides the theoretical underpinning. As a result, his books have a reputation for making difficult material easy to understand without sacrificing scientific rigor.

Xia Jiang is an associate professor in the Department of Biomedical Informatics at the University of Pittsburgh School of Medicine. She has over 16 years of teaching and research experience using artificial intelligence, machine learning, Bayesian networks, and causal learning to model and solve problems in biology, medicine, and translational science. Dr. Jiang pioneered the application of Bayesian networks and information theory to the task of learning causal interactions such as genetic epistasis from data, and she has conducted innovative research in the areas of cancer informatics, probabilistic medical decision support, and biosurveillance. She is the coauthor of the book Probabilistic Methods for Financial and Marketing Informatics (2007).

Downloads/Updates

Instructor and Student Resources – updated March 22, 2018.
