Machine Learning using Neural Networks and Support Vector Machines
Highlights
Course participants will gain:
- A solid understanding, both intuitive and mathematical, of classification using neural networks and support vector machines
- A grasp of the fundamental capabilities and limitations of the two classification methods
- An understanding of the classes of problems to which each method is best suited
Attendee Profile
Participants should be comfortable with college-level algebra (e.g., notions such as the Euclidean norm of a vector or systems of linear equations). No prior knowledge of machine learning is necessary.
Outline
- Introduction to Neural Networks
  - History
  - What is a Neural Network?
  - Examples
  - Elements of a Neural Network
- Single-Layer Perceptrons
  - Introduction
  - Capabilities
  - Bias
  - Training
  - Algorithm
  - Summary
- Multi-Layer Perceptrons
  - Introduction
  - Terminology misunderstanding
  - Workings
  - Capabilities
  - Training prerequisite
  - Output activation
  - The backpropagation algorithm
    - Task
    - Delta rule
    - Gradient locality
    - Regularization
    - Local minima
- Accommodating Discrete Inputs
  - One-Hot encoding
  - Optimizing One-Hot encoding
- Interesting Tidbits
  - Outputs
  - Multi-label classification
  - Soft training
  - NLP applications
- Conclusions on Neural Networks
- Introduction to Support Vector Machines
  - Three metaphors
- Background: Structural Risk Minimization
  - Capacity vs. Generalization
  - Empirical Risk
  - Risk Bound
- The VC Dimension
  - Example: Hyperplanes
  - Corollary
  - The SVM Connection
- Linear SVMs
  - Computation
  - The Lagrangian
  - The support vectors
  - Testing
  - Non-separable data
- Nonlinear SVMs
  - Space Transformation: Reproducing Kernel Hilbert Spaces (RKHS)
  - Kernel example
  - Using kernels
- Loose Ends and Conclusions
  - Multiple classes
  - Soft outputs
  - Conclusions
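To give a taste of the single-layer perceptron training covered in the outline, here is a minimal sketch of the classic perceptron learning rule on a toy linearly separable problem (the logical AND). This is an illustration only, not course material; the function name and hyperparameters are our own.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # Update only on misclassified (or zero-margin) samples.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Toy data: the AND function, with labels in {-1, +1}. It is linearly
# separable, so the perceptron convergence theorem guarantees a solution.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, -1, -1, 1])
w, b = train_perceptron(X, y)
preds = np.sign(X @ w + b)
```

Note the role of the bias term `b` (a dedicated item in the outline): without it the decision boundary would be forced through the origin.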
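For the "Kernel example" topic under Nonlinear SVMs, a small sketch of the kernel trick: the quadratic kernel K(x, z) = (x·z)² equals an ordinary dot product in an explicit feature space φ(x) = (x₁², √2·x₁x₂, x₂²), yet never constructs φ. This is an illustrative sketch; the function names are our own.

```python
import numpy as np

def quadratic_kernel(x, z):
    # Similarity computed directly in input space.
    return np.dot(x, z) ** 2

def phi(x):
    # Explicit feature map for the quadratic kernel in 2D.
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
# Both routes compute the same value; the kernel avoids building phi.
assert np.isclose(quadratic_kernel(x, z), np.dot(phi(x), phi(z)))
```

This identity is what lets SVMs operate in high-dimensional (even infinite-dimensional) feature spaces at the cost of an input-space computation.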