The main focus of the book is the computational modelling of biological and natural intelligent systems as a basis for developing nature-inspired artificial intelligence. The primary objective of these algorithmic models is to facilitate the implementation of artificially intelligent systems for solving complex real-world problems (e.g. fuzzy systems have been applied successfully to control systems, gear transmissions, and braking systems; swarm intelligence to image classification). This second edition expands on all of these paradigms, providing a more detailed and balanced treatment of each. Recent advances in computational intelligence have been added, namely artificial immune systems, hybrid systems, and a section on how to perform empirical studies.
Table of Contents

Contents
List of Tables
List of Figures
List of Algorithms
Preface

Part I  INTRODUCTION

1 Introduction to Computational Intelligence
  1.1 Computational Intelligence Paradigms
    1.1.1 Artificial Neural Networks
    1.1.2 Evolutionary Computation
    1.1.3 Swarm Intelligence
    1.1.4 Artificial Immune Systems
    1.1.5 Fuzzy Systems
  1.2 Short History
  1.3 Assignments

Part II  ARTIFICIAL NEURAL NETWORKS

2 The Artificial Neuron
  2.1 Calculating the Net Input Signal
  2.2 Activation Functions
  2.3 Artificial Neuron Geometry
  2.4 Artificial Neuron Learning
    2.4.1 Augmented Vectors
    2.4.2 Gradient Descent Learning Rule
    2.4.3 Widrow-Hoff Learning Rule
    2.4.4 Generalized Delta Learning Rule
    2.4.5 Error-Correction Learning Rule
  2.5 Assignments

3 Supervised Learning Neural Networks
  3.1 Neural Network Types
    3.1.1 Feedforward Neural Networks
    3.1.2 Functional Link Neural Networks
    3.1.3 Product Unit Neural Networks
    3.1.4 Simple Recurrent Neural Networks
    3.1.5 Time-Delay Neural Networks
    3.1.6 Cascade Networks
  3.2 Supervised Learning Rules
    3.2.1 The Learning Problem
    3.2.2 Gradient Descent Optimization
    3.2.3 Scaled Conjugate Gradient
    3.2.4 LeapFrog Optimization
    3.2.5 Particle Swarm Optimization
  3.3 Functioning of Hidden Units
  3.4 Ensemble Neural Networks
  3.5 Assignments

4 Unsupervised Learning Neural Networks
  4.1 Background
  4.2 Hebbian Learning Rule
  4.3 Principal Component Learning Rule
  4.4 Learning Vector Quantizer-I
  4.5 Self-Organizing Feature Maps
    4.5.1 Stochastic Training Rule
    4.5.2 Batch Map
    4.5.3 Growing SOM
    4.5.4 Improving Convergence Speed
    4.5.5 Clustering and Visualization
    4.5.6 Using SOM
  4.6 Assignments

5 Radial Basis Function Networks
  5.1 Learning Vector Quantizer-II
  5.2 Radial Basis Function Neural Networks
    5.2.1 Radial Basis Function Network Architecture
    5.2.2 Radial Basis Functions
    5.2.3 Training Algorithms
    5.2.4 Radial Basis Function Network Variations
  5.3 Assignments

6 Reinforcement Learning
  6.1 Learning through Rewards
  6.2 Model-Free Reinforcement Learning Model
    6.2.1 Temporal Difference Learning
    6.2.2 Q-Learning
  6.3 Neural Networks and Reinforcement Learning
    6.3.1 RPROP
    6.3.2 Gradient Descent Reinforcement Learning
    6.3.3 Connectionist Q-Learning
  6.4 Assignments

7 Performance Issues (Supervised Learning)
  7.1 Performance Measures
    7.1.1 Accuracy
    7.1.2 Complexity
    7.1.3 Convergence
  7.2 Analysis of Performance
  7.3 Performance Factors
    7.3.1 Data Preparation
    7.3.2 Weight Initialization
    7.3.3 Learning Rate and Momentum
    7.3.4 Optimization Method
    7.3.5 Architecture Selection
    7.3.6 Adaptive Activation Functions
    7.3.7 Active Learning
  7.4 Assignments

Part III  EVOLUTIONARY COMPUTATION

8 Introduction to Evolutionary Computation
  8.1 Generic Evolutionary Algorithm
  8.2 Representation - The Chromosome
  8.3 Initial Population
  8.4 Fitness Function
  8.5 Selection
    8.5.1 Selective Pressure
    8.5.2 Random Selection
    8.5.3 Proportional Selection
    8.5.4 Tournament Selection
    8.5.5 Rank-Based Selection
    8.5.6 Boltzmann Selection
    8.5.7 (μ +, λ)-Selection
    8.5.8 Elitism
    8.5.9 Hall of Fame
  8.6 Reproduction Operators
  8.7 Stopping Conditions
  8.8 Evolutionary Computation versus Classical Optimization
  8.9 Assignments

9 Genetic Algorithms
  9.1 Canonical Genetic Algorithm
  9.2 Crossover
    9.2.1 Binary Representations
    9.2.2 Floating-Point Representation
  9.3 Mutation
    9.3.1 Binary Representations
    9.3.2 Floating-Point Representations
    9.3.3 Macromutation Operator - Headless Chicken
  9.4 Control Parameters
  9.5 Genetic Algorithm Variants
    9.5.1 Generation Gap Methods
    9.5.2 Messy Genetic Algorithms
    9.5.3 Interactive Evolution
    9.5.4 Island Genetic Algorithms
  9.6 Advanced Topics
    9.6.1 Niching Genetic Algorithms
    9.6.2 Constraint Handling
    9.6.3 Multi-Objective Optimization
    9.6.4 Dynamic Environments