Pattern Recognition - Theodoridis S., Koutroumbas K., 4th ed., Academic Press, 2009
File size: 13431k
Resource description:
Chapter 1. Introduction
1.1 Is Pattern Recognition Important?
1.2 Features, Feature Vectors, and Classifiers
1.3 Supervised, Unsupervised, and Semi-Supervised Learning
1.4 MATLAB Programs
1.5 Outline of the Book
Chapter 2. Classifiers Based on Bayes Decision Theory
2.1 Introduction
2.2 Bayes Decision Theory
2.3 Discriminant Functions and Decision Surfaces
2.4 Bayesian Classification for Normal Distributions
2.5 Estimation of Unknown Probability Density Functions
2.6 The Nearest Neighbor Rule
2.7 Bayesian Networks
2.8 Problems
References
Chapter 3. Linear Classifiers
3.1 Introduction
3.2 Linear Discriminant Functions and Decision Hyperplanes
3.3 The Perceptron Algorithm
3.4 Least Squares Methods
3.5 Mean Square Estimation Revisited
3.6 Logistic Discrimination
3.7 Support Vector Machines
3.8 Problems
MATLAB Programs and Exercises
References
Chapter 4. Nonlinear Classifiers
4.1 Introduction
4.2 The XOR Problem
4.3 The Two-Layer Perceptron
4.4 Three-Layer Perceptrons
4.5 Algorithms Based on Exact Classification of the Training Set
4.6 The Backpropagation Algorithm
4.7 Variations on the Backpropagation Theme
4.8 The Cost Function Choice
4.9 Choice of the Network Size
4.10 A Simulation Example
4.11 Networks with Weight Sharing
4.12 Generalized Linear Classifiers
4.13 Capacity of the l-Dimensional Space in Linear Dichotomies
4.14 Polynomial Classifiers
4.15 Radial Basis Function Networks
4.16 Universal Approximators
4.17 Probabilistic Neural Networks
4.18 Support Vector Machines: The Nonlinear Case
4.19 Beyond the SVM Paradigm
4.20 Decision Trees
4.21 Combining Classifiers
4.22 The Boosting Approach to Combine Classifiers
4.23 The Class Imbalance Problem
4.24 Discussion
4.25 Problems
References
Chapter 5. Feature Selection
5.1 Introduction
5.2 Preprocessing
5.3 The Peaking Phenomenon
5.4 Feature Selection Based on Statistical Hypothesis Testing
5.5 The Receiver Operating Characteristics (ROC) Curve
5.6 Class Separability Measures
5.7 Feature Subset Selection
5.8 Optimal Feature Generation
5.9 Neural Networks and Feature Generation/Selection
5.10 A Hint on Generalization Theory
5.11 The Bayesian Information Criterion
5.12 Problems
MATLAB Programs and Exercises
References
Chapter 6. Feature Generation I: Data Transformation and Dimensionality Reduction
6.1 Introduction
6.2 Basis Vectors and Images
6.3 The Karhunen–Loève Transform
6.4 The Singular Value Decomposition
6.5 Independent Component Analysis
6.6 Nonnegative Matrix Factorization
6.7 Nonlinear Dimensionality Reduction
6.8 The Discrete Fourier Transform (DFT)
6.9 The Discrete Cosine and Sine Transforms
6.10 The Hadamard Transform
6.11 The Haar Transform
6.12 The Haar Expansion Revisited
6.13 Discrete Time Wavelet Transform (DTWT)
6.14 The Multiresolution Interpretation
6.15 Wavelet Packets
6.16 A Look at Two-Dimensional Generalizations
6.17 Applications
6.18 Problems
MATLAB Programs and Exercises
References
Chapter 7. Feature Generation II
7.1 Introduction
7.2 Regional Features
7.3 Features for Shape and Size Characterization
7.4 A Glimpse at Fractals
7.5 Typical Features for Speech and Audio Classification
7.6 Problems
MATLAB Programs and Exercises
References
Chapter 8. Template Matching
8.1 Introduction
8.2 Measures Based on Optimal Path Searching Techniques
8.3 Measures Based on Correlations
8.4 Deformable Template Models
8.5 Content-Based Information Retrieval: Relevance Feedback
8.6 Problems
MATLAB Programs and Exercises
References
Chapter 9. Context-Dependent Classification
9.1 Introduction
9.2 The Bayes Classifier
9.3 Markov Chain Models
9.4 The Viterbi Algorithm
9.5 Channel Equalization
9.6 Hidden Markov Models
9.7 HMM with State Duration Modeling
9.8 Training Markov Models via Neural Networks
9.9 A Discussion of Markov Random Fields
9.10 Problems
MATLAB Programs and Exercises
References
Chapter 10. Supervised Learning: The Epilogue
10.1 Introduction
10.2 Error-Counting Approach
10.3 Exploiting the Finite Size of the Data Set
10.4 A Case Study from Medical Imaging
10.5 Semi-Supervised Learning
10.6 Problems
References
Chapter 11. Clustering: Basic Concepts
11.1 Introduction
11.2 Proximity Measures
11.3 Problems
References
Chapter 12. Clustering Algorithms I: Sequential Algorithms
12.1 Introduction
12.2 Categories of Clustering Algorithms
12.3 Sequential Clustering Algorithms
12.4 A Modification of BSAS
12.5 A Two-Threshold Sequential Scheme
12.6 Refinement Stages
12.7 Neural Network Implementation
12.8 Problems
MATLAB Programs and Exercises
References
Chapter 13. Clustering Algorithms II: Hierarchical Algorithms
13.1 Introduction
13.2 Agglomerative Algorithms
13.3 The Cophenetic Matrix
13.4 Divisive Algorithms
13.5 Hierarchical Algorithms for Large Data Sets
13.6 Choice of the Best Number of Clusters
13.7 Problems
MATLAB Programs and Exercises
References
Chapter 14. Clustering Algorithms III: Schemes Based on Function Optimization
14.1 Introduction
14.2 Mixture Decomposition Schemes
14.3 Fuzzy Clustering Algorithms
14.4 Possibilistic Clustering
14.5 Hard Clustering Algorithms
14.6 Vector Quantization
Appendix
14.7 Problems
MATLAB Programs and Exercises
References
Chapter 15. Clustering Algorithms IV
15.1 Introduction
15.2 Clustering Algorithms Based on Graph Theory
15.3 Competitive Learning Algorithms
15.4 Binary Morphology Clustering Algorithms (BMCAs)
15.5 Boundary Detection Algorithms
15.6 Valley-Seeking Clustering Algorithms
15.7 Clustering via Cost Optimization (Revisited)
15.8 Kernel Clustering Methods
15.9 Density-Based Algorithms for Large Data Sets
15.10 Clustering Algorithms for High-Dimensional Data Sets
15.11 Other Clustering Algorithms
15.12 Combination of Clusterings
15.13 Problems
MATLAB Programs and Exercises
References
Chapter 16. Cluster Validity
16.1 Introduction
16.2 Hypothesis Testing Revisited
16.3 Hypothesis Testing in Cluster Validity
16.4 Relative Criteria
16.5 Validity of Individual Clusters
16.6 Clustering Tendency
16.7 Problems
References
Appendix A. Hints from Probability and Statistics
A.1 Total Probability and the Bayes Rule
A.2 Mean and Variance
A.3 Statistical Independence
A.4 Marginalization
A.5 Characteristic Functions
A.6 Moments and Cumulants
A.7 Edgeworth Expansion of a Pdf
A.8 Kullback–Leibler Distance
A.9 Multivariate Gaussian or Normal Probability Density Function
A.10 Transformation of Random Variables
A.11 The Cramer–Rao Lower Bound
A.12 Central Limit Theorem
A.13 Chi-Square Distribution
A.14 t-Distribution
A.15 Beta Distribution
A.16 Poisson Distribution
A.17 Gamma Function
Appendix B. Linear Algebra Basics
B.1 Positive Definite and Symmetric Matrices
B.2 Correlation Matrix Diagonalization
Appendix C. Cost Function Optimization
C.1 Gradient Descent Algorithm
C.2 Newton’s Algorithm
C.3 Conjugate-Gradient Method
C.4 Optimization for Constrained Problems
Appendix D. Basic Definitions from Linear Systems Theory
D.1 Linear Time Invariant (LTI) Systems
D.2 Transfer Function
D.3 Serial and Parallel Connection
D.4 Two-Dimensional Generalizations