Machine Learning and Its Application: A Quick Guide for Beginners covers the core topics of the machine learning curricula taught in university and college courses. The textbook introduces readers to central concepts in machine learning and artificial intelligence, including the main types of machine learning algorithms and the statistical knowledge required for devising them. The book also covers advanced topics such as deep learning and feature engineering.
Machine Learning and Its Application: A Quick Guide for Beginners is an essential book for students and learners who want to understand the basics of machine learning and equip themselves with the knowledge to write algorithms for intelligent data processing applications.
Key Features:
- 8 organized chapters on the core concepts of machine learning
- Accessible text for beginners unfamiliar with complex mathematical concepts
- Covers introductory topics, including supervised learning, unsupervised learning, reinforcement learning and predictive statistics
- Presents advanced topics such as deep learning and feature engineering for further study
- Introduces readers to Python programming, with code examples for understanding and practice
- Includes a summary of the text and a dedicated section for references
Table of Contents
Chapter 1 Introduction
1.1. What Is Artificial Intelligence?
1.1.1. Evolution Of AI
1.1.2. Dimensions Of AI (Types Of AI)
1.1.3. Why Is Learning AI Important?
1.2. Need For Machine Learning
1.3. What Is Learning In Machine Learning?
1.4. When Do We Need Machine Learning?
1.5. Types Of Learning
1.5.1. Supervised
1.5.2. Unsupervised
1.5.3. Semi-Supervised
1.5.4. Reinforcement
1.5.5. Self-Supervised
1.6. What Is The Need For This Book On Machine Learning?
1.7. Outline Of The Book
- Concluding Remarks
Chapter 2 Supervised Machine Learning: Classification
2.1. Introduction To Supervised Machine Learning
2.1.1. Supervised Machine Learning
2.1.2. What Is Classification?
2.1.3. Types Of Classification
2.2. Decision Tree
2.2.1. Overview
2.2.2. Algorithmic Framework
2.2.2.1. Different Terminologies Used In Decision Tree
2.2.2.2. Entropy
2.2.2.3. Information Gain
2.2.2.4. Gini Index
2.2.3. Hands-On Example
2.2.3.1. Building A Decision Tree With The Help Of Information Gain
2.2.4. Types Of Decision Tree Algorithms
2.2.5. Advantages And Disadvantages Of The Decision Tree
2.2.6. Programming Approach For Decision Tree
2.3. Random Forest
2.3.1. Overview
2.3.2. Why Use Random Forest?
2.3.3. Algorithmic Framework
2.3.3.1. Assumptions For Random Forest
2.3.4. Advantages And Disadvantages Of Random Forest
2.3.4.1. Advantages
2.3.4.2. Disadvantages
2.3.5. Programming Approach For Random Forest
2.4. K-Nearest Neighbor
2.4.1. Overview
2.4.2. When Do We Use The KNN Algorithm?
2.4.3. How To Select The Value Of K?
2.4.4. Algorithmic Framework
2.4.4.1. How Does KNN Work?
2.4.4.2. Pseudo Code Of KNN
2.4.5. Programming Approach For K-Nearest Neighbor
2.5. Naïve Bayes Classifier
2.5.1. Overview
2.5.1.1. Conditional Probability Model Of Classification
2.5.1.2. Calculating The Prior And Conditional Probabilities
2.5.2. Hands-On Example
2.5.2.1. Make Predictions With Naive Bayes
2.5.3. Advantages And Disadvantages Of Naïve Bayes:
2.5.3.1. Advantages
2.5.3.2. Disadvantages
2.5.4. Tips For Using Naive Bayes Algorithm
2.5.5. Programming Approach For Naïve Bayes Classifier
2.6. Support Vector Machine
2.6.1. Overview
2.6.1.1. Decision Rule
2.6.2. Hands-On Example
2.6.2.1. Working Of Svm
2.6.3. The Kernel Trick
2.6.3.1. Choosing A Kernel Function
2.6.4. Advantages And Disadvantages Of Support Vector Machines
2.6.5. Programming Approach For Support Vector Machine
- Concluding Remarks
Chapter 3 Unsupervised Machine Learning: Clustering
3.1. Introduction To Unsupervised Machine Learning
3.1.1. What Is Clustering?
3.1.2. Types Of Clustering Methods
3.1.3. Real-Life Applications Of Clustering
3.2. K-Means Clustering
3.2.1. Overview
3.2.2. Algorithmic Framework
3.2.2.1. Introduction To K-Means Algorithm
3.2.3. Hands-On Example
3.2.4. Weakness Of K-Means Clustering
3.2.5. Strength And Application Of K-Means Clustering
3.2.6. Programming Approach For K-Means Clustering
3.3. Hierarchical Clustering
3.3.1. Overview
3.3.2. Algorithmic Framework
3.3.3. Hands-On Example
3.3.4. Programming Approach For Hierarchical Clustering
3.4. Self-Organizing Map
3.4.1. Overview
3.4.1.1. How Does SOM Work?
3.4.2. Algorithmic Framework
3.4.3. Advantages And Disadvantages Of SOM
3.4.3.1. Advantages
3.4.3.2. Disadvantages
3.4.3.3. A Different Perspective Of SOM
3.4.4. Programming Approach For Self-Organizing Map
- Concluding Remarks
Chapter 4 Regression: Prediction
4.1. Introduction To Regression
4.1.1. What Is Regression?
4.1.1.1. Linear Regression
4.1.1.2. Advantage
4.1.2. How Is It Different From Classification?
4.1.3. Applications Of Regression
4.2. Linear Regression
4.2.1. Overview
4.2.1.1. Simple Regression
4.2.1.2. Making A Prediction
4.2.1.3. Multi-Variable Regression
4.2.2. Linear Regression Line
4.2.2.1. Positive Linear Relationship
4.2.2.2. Negative Linear Relationship
4.2.2.3. Assumptions In Regression And Its Justification
4.2.3. Regression Algorithms
4.2.4. Programming Approach For Linear Regression
4.3. Logistic Regression
4.3.1. Overview
4.3.1.1. Comparison To Linear Regression
4.3.1.2. Binary Logistic Regression
4.3.1.3. Multiclass Logistic Regression
4.3.2. Algorithmic Framework
4.3.2.1. Predict With Logistic Regression
4.3.2.2. Data Preparation For Logistic Regression
4.3.3. Programming Approach For Logistic Regression
- Concluding Remarks
Chapter 5 Reinforcement Learning
5.1. Introduction To Reinforcement Learning
5.1.1. Overview
5.1.1.1. Data Preparation For Logistic Regression
5.1.1.2. How RL Differs From Supervised Learning
5.1.2. Element Of Reinforcement Learning
5.2. Algorithmic Framework
5.2.1. Basic Steps Of Reinforcement Learning
5.2.2. Types Of Reinforcement Learning
5.2.3. Elements Of Reinforcement Learning
5.2.4. Models Of Reinforcement Learning
5.2.4.1. Markov Decision Process (MDP)
5.2.4.2. Q-Learning
5.3. Hands-On Example Of Reinforcement Learning
5.4. Real-World Examples Of A Reinforcement Learning Task
5.4.1. Advantages
5.4.2. Disadvantages
5.5. Programming Approach For Reinforcement Learning
- Concluding Remarks
Chapter 6 Deep Learning: A New Approach To Machine Learning
6.1. Introduction To Deep Learning
6.1.1. Architecture Of Deep Learning
6.1.2. Working Principle Of Deep Learning
6.1.2.1. Training Phase
6.1.3. Types Of Deep Learning
6.1.4. Advantage And Disadvantage Of Deep Learning
6.1.4.1. Advantages
6.1.4.2. Disadvantages
6.1.5. Application Of Deep Learning
6.2. Artificial Neural Network
6.2.1. Overview
6.2.2. Perceptron Learning
6.2.2.1. Threshold Logic Unit (TLU)
6.2.2.2. Single Layer Perceptron Learning
6.2.2.3. Multilayer Perceptron Learning
6.2.3. Working Principle Of Artificial Neural Network
6.2.4. Types Of Neural Network
6.2.5. Applications Of Neural Network
6.2.6. Programming Approach For Artificial Neural Network (ANN)
6.3. Components Of Neural Network
6.3.1. Layers
6.3.1.1. Input Layer
6.3.1.2. Output Layer
6.3.1.3. Hidden Layer
6.3.2. Weights And Bias Of A Neuron
6.3.2.1. Weights
6.3.2.2. Bias
6.3.3. Activation Functions
6.3.3.1. Multi-State Activation Functions
6.3.3.2. Identity Activation Functions
6.3.3.3. Binary Step Activation Functions
6.3.3.4. Sigmoid Activation Function
6.3.3.5. Tanh Activation Function
6.3.3.6. Rectified Linear Unit (ReLU) Activation Function
6.3.3.7. Softmax Activation Function
6.3.3.8. Softplus Activation Function
6.3.3.9. Exponential Linear Unit Activation Function
6.3.4. Forward Propagation
6.3.5. Backpropagation
6.3.6. Learning Rate
6.3.7. Gradient Descent
6.3.7.1. Gradient
6.3.7.2. The General Idea
6.3.7.3. Types Of Gradient Descent
6.4. Convolutional Neural Network
6.4.1. Overview
6.4.2. Algorithmic Framework
6.4.2.1. Layers
6.4.3. Types Of CNN
6.4.4. Programming Approach For Convolutional Neural Network
6.5. Recurrent Neural Network
6.5.1. Overview
6.5.2. Algorithmic Framework
6.5.2.1. Backpropagation Through Time
6.5.2.2. Need For More Than RNN: Vanishing And Exploding Gradient Problem
6.5.2.3. Types Of RNN
6.5.3. Hopfield Network
6.5.4. Long Short-Term Memory (LSTM)
6.5.4.1. A Few Concepts In LSTM
6.5.5. LSTM Hyper-Parameter Tuning
6.5.6. Programming Approach For Recurrent Neural Network
- Concluding Remarks
Chapter 7 Feature Engineering
7.1. Introduction
7.1.1. Types Of Feature Selection
7.2. Filter-Based Approach: Hypothesis Testing
7.2.1. T-Test
7.2.1.1. Hypothesis
7.2.1.2. Tables Of T-Distribution
7.2.1.3. T-Score
7.2.1.4. P-Value
7.2.1.5. Degrees Of Freedom
7.2.2. Z-Test
7.2.2.1. Z-Score Mean
7.2.2.2. What Exactly Is The Central Limit Theorem?
7.2.2.3. When To Perform A Z Test?
7.2.2.4. Difference Between T-Test And Z-Test
7.2.3. ANOVA
7.2.4. MANOVA
7.2.4.1. Assumptions
7.2.4.2. Exceptional Situations
7.2.4.3. MANOVA Vs. ANOVA
7.3. Filter-Based Approach: Correlation
7.3.1. Correlation Analysis
7.3.2. Pearson's Correlation
7.3.2.1. Correlation Coefficient
7.3.2.2. Assumptions
7.3.2.3. Cramér's V Correlation
7.3.3. Chi-Square Test
7.3.3.1. Chi-Square P-Values
7.3.3.2. Algorithm For Chi-Square Test
7.3.3.3. Use Of Chi-Square Test
7.3.4. Spearman's Rank Correlation
7.4. Evolutionary Algorithms
7.4.1. Genetic Algorithm
7.4.1.1. Use Of Chi-Square Test
7.4.1.2. Search Space
7.4.1.3. Genetic Operators
7.4.1.4. Multi-Objective Functions
7.4.1.5. Termination
7.4.1.6. Algorithmic Framework
7.4.2. Particle Swarm Optimization
7.4.2.1. Particles
7.4.2.2. Swarms
7.4.2.3. Optimization
7.4.3. Ant Colony Optimization
7.4.3.1. Algorithmic Framework
7.4.3.2. Application Of Ant Colony Optimization
Author
- Indranath Chatterjee