Book Details
Machine Learning: From the Classics to Deep Networks, Transformers and Diffusion Models, Third Edition starts with the basics, including least squares regression and maximum likelihood methods, Bayesian decision theory, logistic regression, and decision trees. It then progresses to more recent techniques, covering sparse modelling methods, learning in reproducing kernel Hilbert spaces, and support vector machines. Bayesian learning is treated in detail, with emphasis on the EM algorithm and its approximate variational versions, and with a focus on mixture modelling, regression, and classification. Nonparametric Bayesian learning, including Gaussian, Chinese restaurant, and Indian buffet processes, is also presented. Monte Carlo methods, particle filtering, and probabilistic graphical models, with emphasis on Bayesian networks and hidden Markov models, are treated in detail. Dimensionality reduction and latent variable modelling are considered in depth. Neural networks and deep learning are thoroughly presented, starting from the perceptron rule and multilayer perceptrons and moving on to convolutional and recurrent neural networks, adversarial learning, capsule networks, deep belief networks, GANs, and VAEs. The book also covers the fundamentals of statistical parameter estimation and optimization algorithms.
Focusing on the physical reasoning behind the mathematics, without sacrificing rigor, all methods and techniques are explained in depth, supported by examples and problems, providing an invaluable resource to the student and researcher for understanding and applying machine learning concepts.
Key Features
- Provides a number of case studies and applications on a variety of topics, such as target localization, channel equalization, image denoising, audio characterization, text authorship identification, visual tracking, change point detection, hyperspectral image unmixing, fMRI data analysis, machine translation, and text-to-image generation
- Most chapters include a number of computer exercises in both MATLAB and Python, and the chapters dedicated to deep learning include exercises in PyTorch
- The new material includes extended coverage of attention and transformers, large language models, self-supervised learning, and diffusion models
About the author
By Sergios Theodoridis, Professor of Machine Learning and Signal Processing, National and Kapodistrian University of Athens, Athens, Greece
Table of Contents
2. Probability and Stochastic Processes
3. Learning in Parametric Modelling: Basic Concepts and Directions
4. Mean-Square Error Linear Estimation
5. Stochastic Gradient Descent: the LMS Algorithm and its Family
6. The Least-Squares Family
7. Classification: A Tour of the Classics
8. Parameter Learning: A Convex Analytic Path
9. Sparsity-Aware Learning: Concepts and Theoretical Foundations
10. Sparsity-Aware Learning: Algorithms and Applications
11. Learning in Reproducing Kernel Hilbert Spaces
12. Bayesian Learning: Inference and the EM Algorithm
13. Bayesian Learning: Approximate Inference and Nonparametric Models
14. Monte Carlo Methods
15. Probabilistic Graphical Models: Part 1
16. Probabilistic Graphical Models: Part 2
17. Particle Filtering
18. Neural Networks and Deep Learning: Part 1
19. Neural Networks and Deep Learning: Part 2
20. Dimensionality Reduction and Latent Variables Modeling
Title Reviews
Machine Learning distinguishes itself by dedicating a full chapter to recent breakthroughs such as Transformers, self-supervision, and diffusion models, while also providing a solid foundation in classical topics including regression, classification, sparse modeling, kernel methods, Bayesian learning, and graphical models. Neural networks are introduced through their historical evolution, beginning with the perceptron and progressing through convolutional and recurrent architectures, generative adversarial networks, and variational autoencoders. The exposition balances conceptual insight with analytical precision and is enriched with case studies, examples, problems, and computational exercises that illuminate both theory and practice. Written by a distinguished author with deep expertise in the field, this book is a timely and indispensable resource for educators, students, and researchers who seek more than a black box treatment and want to understand the principles that drive advances in Machine Learning. - Georgios Giannakis, McKnight Presidential Chair, ECE Dept., University of Minnesota
Machine Learning (Third Edition) by Theodoridis provides a rigorous and conceptually unified treatment that situates modern ML methods within the broader framework of statistical inference, optimization, and probabilistic modeling. The text excels in its integration of classical pattern-recognition foundations with contemporary advances, including variational inference, generative models, kernelized learning, and deep learning. It is a rare text that can serve simultaneously as a research companion, a teaching resource, and a bridge between statistical ML theory and practical algorithmic design. If you are a student looking for a machine-learning textbook that is clear, friendly, and genuinely helpful, Theodoridis's Machine Learning (Third Edition) is an excellent choice. - Rama Chellappa, Bloomberg Distinguished Professor, Johns Hopkins University
The book offers a comprehensive treatment of Machine Learning, ranging from the classics of classification and regression to modern deep learning. There is a detailed exposition of convex analysis, compressed sensing, and sparsity-aware learning. Subsequently, the book gets into Bayesian analysis and a detailed coverage of graphical models, providing an excellent exposition of classical unsupervised learning. The last part covers neural networks, including modern research topics like GANs, diffusion models, and Transformers. Overall, this is a comprehensive and insightful resource for anyone seeking depth and breadth in Machine Learning. - Alexandros G. Dimakis, EECS, UC Berkeley