
Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications
Author(s)/Editor(s): Ming Zhang (Christopher Newport University, USA)
Copyright: ©2010
DOI: 10.4018/978-1-61520-711-4
ISBN13: 9781615207114
ISBN10: 1615207112
EISBN13: 9781615207121



Description

Artificial neural network research is one of the promising new directions for the next generation of computers, and open box artificial Higher Order Neural Networks (HONNs) play an important role in this future.

Artificial Higher Order Neural Networks for Computer Science and Engineering: Trends for Emerging Applications introduces Higher Order Neural Networks (HONNs) to computer scientists and computer engineers as an open box neural network tool, in contrast to traditional artificial neural networks. Since HONNs are open box models, they can be easily used in information science, information technology, management, economics, and business. This book details the techniques, theory, and applications essential to engaging and capitalizing on this developing technology.



Table of Contents


Preface

This is the first book to introduce Higher Order Neural Networks (HONNs) to people working in the fields of computer science and computer engineering, and to show them that HONNs are an open box neural network tool compared with traditional artificial neural networks. It is also the first book to include details of the most popular HONN models, providing opportunities for the millions of people working in computer science and computer engineering to learn what HONNs are and how to use them.

Artificial Neural Networks (ANNs) are known to excel in pattern recognition, pattern matching, and mathematical function approximation. However, they suffer from several well-known limitations: they can often become stuck in local, rather than global, minima, and they can take unacceptably long to converge in practice. Of particular concern, especially from the perspective of data simulation and prediction, is their inability to handle non-smooth, discontinuous training data and complex mappings (associations). Another limitation of ANNs is their ‘black box’ nature, meaning that explanations (reasons) for their decisions are not immediately obvious, unlike in techniques such as Decision Trees. This, then, is the motivation for developing artificial Higher Order Neural Networks (HONNs): HONNs are ‘open box’ models in which each neuron and weight maps to a function variable and coefficient.
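
To make the ‘open box’ point concrete, here is a minimal sketch (an illustration for this preface, not code from the book) of a second-order HONN output in Python: the network is an explicit polynomial, so each trained weight reads off directly as the coefficient of a named term.

    # A second-order HONN over two inputs, written out as the explicit
    # polynomial it computes. The dictionary keys name the monomials, so
    # each weight is directly interpretable as a model coefficient.
    def honn_second_order(x1, x2, w):
        return (w["1"]                           # constant (bias) term
                + w["x1"] * x1 + w["x2"] * x2    # first order terms
                + w["x1*x2"] * x1 * x2           # second order cross term
                + w["x1^2"] * x1 ** 2
                + w["x2^2"] * x2 ** 2)

    # After training, the weights themselves are the explanation:
    w = {"1": 0.5, "x1": 1.2, "x2": -0.7, "x1*x2": 0.3, "x1^2": 0.0, "x2^2": 0.1}
    print(honn_second_order(2.0, 3.0, w))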

In recent years, researchers have used HONNs for pattern recognition, nonlinear simulation, classification, and prediction in computer science and computer engineering. The results show that HONNs are consistently faster, more accurate, and easier to explain. This is the second motivation for using HONNs in these areas: HONNs can automatically select the initial coefficients, and even automatically select the model, for a given application.

This book introduces HONN group models and adaptive HONNs, and ensures that people working in computer science and computer engineering can understand HONN group models and adaptive HONN models, which can simulate not only nonlinear data but also discontinuous and unsmooth nonlinear data. The HONN knowledge in this book can be applied in many different areas. The book explains why HONNs can approximate any nonlinear data to any degree of accuracy, so that readers in computer science and computer engineering can understand why HONNs are much easier to use and can achieve better nonlinear data simulation accuracy.

The goal is to let the millions of people working in computer science and computer engineering know that HONNs are much easier to use and can deliver better simulation results, and to show them how to successfully use HONN models and hardware designs for nonlinear data simulation and prediction. HONNs will challenge traditional artificial neural network products and change the research methodology currently used in computer science and computer engineering for pattern recognition, nonlinear simulation, classification, and prediction. Artificial neural network research is one of the new directions for new-generation computers, and current research suggests that open box artificial HONNs will play an important role in this new direction. Since HONNs are open box models, they can also be easily accepted and used by people working in information science, information technology, management, economics, and business.

The book is organized into four sections with a total of twenty-two chapters. Section 1, Artificial Higher Order Neural Networks for Computer Science, comprises Chapters 1 through 6. Section 2, Artificial Higher Order Neural Networks for Simulation and Modeling, covers Chapters 7 through 11. Section 3, Artificial Higher Order Neural Networks for Computer Engineering, contains Chapters 12 through 16. Section 4, Artificial Higher Order Neural Network Models and Applications, consists of Chapters 17 through 22. Brief descriptions of the chapters follow:

Chapter 1, Higher Order Neural Network Group-based Adaptive Tolerance Trees, presents the artificial Higher Order Neural Network Group-based Adaptive Tolerance (HONNGAT) Tree model for translation-invariant face recognition. Face perception classification, detection of front faces with glasses and/or beards, and face recognition results using HONNGAT Trees are also presented. When 10% random noise is added, the face recognition accuracy of the HONNGAT Tree is 1% higher than that of the artificial neural network Group-based Adaptive Tolerance (GAT) Tree, and 6% higher than that of a general tree. When the gamma value of the Gaussian noise exceeds 0.3, the accuracy of the HONNGAT Tree is 2% higher than that of the GAT Tree, and about 9% higher than that of a general tree. The HONNGAT Tree is an open box model and can be used to describe complex systems.

Chapter 2, Higher Order Neural Networks for Symbolic, Sub-symbolic and Chaotic Computations, deals with discrete and recurrent artificial neural networks with a homogeneous type of neuron. With this architecture, the chapter shows how to perform symbolic computations by executing high-level programs within the network dynamics. Next, using higher order synaptic connections, it integrates common sub-symbolic learning algorithms into the same architecture. Thirdly, taking advantage of the chaotic properties of dynamical systems, the chapter presents some uses of chaotic computations with the same neurons and synapses, thus creating a hybrid system of three computation types.

Chapter 3, Evolutionary Algorithm Training of Higher Order Neural Networks, aims to further explore the capabilities of the Higher Order Neural Network class, and especially of Pi-Sigma Neural Networks. The performance of Pi-Sigma Networks is evaluated on several well-known neural network training benchmarks. In the experiments reported here, Distributed Evolutionary Algorithms are implemented for Pi-Sigma neural network training; more specifically, distributed versions of the Differential Evolution and Particle Swarm Optimization algorithms are employed. Each processor of a distributed computing environment is assigned a subpopulation of potential solutions. The subpopulations evolve independently in parallel, and occasional migration is allowed to facilitate cooperation between them. The novelty of the proposed approach is that it trains Pi-Sigma networks with threshold activation functions, while the weights and biases are confined to a narrow band of integers (constrained to the range [-32, 32]), so the trained Pi-Sigma neural networks can be represented using only 6 bits per weight. Such networks are better suited to hardware implementation than real-weight ones, a property that is very important in real-life applications. Experimental results suggest that the proposed training process is fast, stable, and reliable, and that the distributed trained Pi-Sigma networks exhibit good generalization capabilities.
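
The following is a minimal, single-population sketch (assumed details, not the chapter's code) of one Differential Evolution generation over integer genomes clamped to [-32, 32]; the distributed version described in the chapter evolves several such populations on separate processors and occasionally migrates individuals between them.

    import random

    LOW, HIGH = -32, 32   # the integer weight band used in the chapter

    def de_generation(pop, fitness, F=0.7, CR=0.9):
        """One Differential Evolution generation over integer genomes.
        Mutant components are rounded and clamped back into [LOW, HIGH],
        so every individual stays representable in a few bits."""
        new_pop = []
        for i, target in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [max(LOW, min(HIGH, round(a[k] + F * (b[k] - c[k]))))
                     if random.random() < CR else target[k]
                     for k in range(len(target))]
            new_pop.append(trial if fitness(trial) <= fitness(target) else target)
        return new_pop

    # Toy usage with an illustrative fitness (sum of squared genes).
    fitness = lambda g: sum(v * v for v in g)
    pop = [[random.randint(LOW, HIGH) for _ in range(6)] for _ in range(20)]
    for _ in range(30):
        pop = de_generation(pop, fitness)
    print(min(fitness(p) for p in pop))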

Chapter 4, Adaptive Higher Order Neural Network Models for Data Mining, discusses data mining: the extraction of hidden patterns and valuable information from large databases, a powerful technology with great potential to help companies survive competition. Data mining tools search databases for hidden patterns, finding predictive information that business experts may overlook because it lies outside their expectations. This chapter addresses using Artificial Neural Networks (ANNs) for data mining because ANNs are a natural technology that may hold superior predictive capability compared with other data mining approaches. The chapter proposes Adaptive HONN models that hold potential for effectively dealing with discontinuous data and business data with high-order nonlinearity. The proposed adaptive models demonstrate advantages in handling several benchmark data mining problems.

Chapter 5, Robust Adaptive Control Using Higher Order Neural Networks and Projection, presents a novel robust adaptive approach for a class of unknown nonlinear systems. First, neural networks are designed to identify the nonlinear systems; dead-zone and projection techniques are applied during weight training in order to avoid singular cases. Second, a linearization controller is proposed based on the neuro identifier. Since the approximation capability of the neural networks is limited, four types of compensators are addressed. The chapter also proposes a robust neuro-observer with an extended Luenberger structure, whose weights are learned on-line by a new adaptive gradient-like technique. The control scheme is based on the proposed neuro-observer, and the final structure is composed of two parts: the neuro-observer and the tracking controller. Simulations of a two-link robot show the effectiveness of the proposed algorithm.
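
As a rough illustration only (the structure and parameter values below are assumptions, not the chapter's design), the two safeguards just named can be sketched as follows: the dead-zone suppresses adaptation when the identification error is already small, so measurement noise is not learned, and the projection rescales the weight vector back onto a bounded ball, so the adaptive law cannot drift toward singular values.

    import math

    def dead_zone_projected_update(w, grad, error,
                                   lr=0.01, dead_zone=0.05, max_norm=10.0):
        """Gradient-like weight update with a dead-zone and a projection.
        dead_zone and max_norm are illustrative values, not the chapter's."""
        if abs(error) <= dead_zone:      # dead-zone: skip updates on small errors
            return w
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > max_norm:              # projection onto the ball ||w|| <= max_norm
            w = [wi * max_norm / norm for wi in w]
        return w

    print(dead_zone_projected_update([1.0, -2.0], [0.5, 0.1], error=0.2))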

Chapter 6, On the Equivalence between Ordinary Neural Networks and Higher Order Neural Networks, studies the equivalence between multilayer feedforward neural networks that contain only summation (Sigma) activation units, referred to as Ordinary Neural Networks (ONNs), and multilayer feedforward Higher Order Neural Networks (HONNs) that contain both Sigma and product (Pi) activation units. Since they were introduced by Giles and Maxwell (1987), HONNs have been used in many supervised classification and function approximation applications. In this chapter, HONNs are given in a form in which the weights are adjustable real-valued numbers (in contrast to most previous work, where HONN weights are non-negative integers). This gives HONNs more expressive power and reduces the possibility of being trapped in a local minimum. This real-valued-weight notion would not be possible without introducing a proper normalization of the input data and a modification to the neuron activation function. Using simple mathematics and the proposed input normalization, it is easy to show that HONNs are equivalent to ONNs. The converted ONN possesses the features of the HONN, with exactly the same functionality and output. The proposed conversion of a HONN to an ONN permits the use of the large body of existing optimization algorithms to speed up HONN convergence and/or to find better topologies. Recurrent HONNs and cascade-correlation HONNs can simply be defined via their equivalent ONNs and then trained with backpropagation, scaled conjugate gradient, the Levenberg-Marquardt algorithm, or brain damage algorithms. Using the developed equivalence model, the chapter also gives an easy method to convert a HONN to its equivalent ONN. Results on XOR and function approximation problems show that ONNs obtained from their corresponding HONNs converge well to a solution. Different optimization training algorithms were tested on feedforward and cascade-correlation structures, with the latter showing excellent function approximation results.
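
One standard way to see the Pi/Sigma equivalence, sketched below under the assumption of positive (normalized) inputs, is that a product unit is a summation unit acting on log-transformed inputs followed by an exponential activation. This is illustrative and not necessarily the chapter's exact construction.

    import math

    def pi_unit(x, w):
        """Higher order (Pi) unit: product of inputs raised to real weights."""
        out = 1.0
        for xi, wi in zip(x, w):
            out *= xi ** wi
        return out

    def sigma_unit_on_logs(x, w):
        """Ordinary (Sigma) unit on log-transformed inputs with an exp
        activation: exp(sum_i w_i * ln x_i). Identical to the Pi unit for
        x_i > 0, which is what normalizing the input data guarantees."""
        return math.exp(sum(wi * math.log(xi) for wi, xi in zip(w, x)))

    x, w = [2.0, 3.0, 0.5], [1.5, -0.3, 2.0]
    print(pi_unit(x, w), sigma_unit_on_logs(x, w))  # same value twice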

Chapter 7, Rainfall Estimation Using Neuron-Adaptive Higher Order Neural Networks, studies rainfall estimation. Real world data is often nonlinear and discontinuous, and may comprise high frequency, multi-polynomial components; not surprisingly, it is hard to find the best models for such data. Classical neural network models are unable to automatically determine the optimum model and appropriate order for data approximation. To solve this problem, Neuron-Adaptive Higher Order Neural Network (NAHONN) models have been introduced. Definitions of one-dimensional, two-dimensional, and n-dimensional NAHONN models are studied, and specialized NAHONN models are also described. NAHONN models are shown to be "open box" and capable of automatically finding not only the optimum model but also the appropriate order for high frequency, multi-polynomial, discontinuous data. Rainfall estimation experiments confirm model convergence. The chapter further demonstrates that NAHONN models are capable of modeling satellite data. When the Xie and Scofield (1989) technique was used, the average error of the operator-computed IFFA rainfall estimates was 30.41%. For the Artificial Neural Network (ANN) reasoning network, the training error was 6.55% and the test error was 16.91%. When the neural network group was used on the same fifteen cases, the average training error of rainfall estimation was 1.43% and the average test error was 3.89%. When the neuron-adaptive artificial neural network group model was used on these cases, the average training error was 1.31% and the average test error was 3.40%. When the artificial neuron-adaptive higher order neural network model was used, the average training error was 1.20% and the average test error was 3.12%.

Chapter 8, Analysis of Quantization Effects on Higher Order Function and Multilayer Feedforward Neural Networks, investigates the combined effects of quantization and clipping on Higher Order Function Neural Networks (HOFNNs) and Multilayer Feedforward Neural Networks (MLFNNs). Statistical models are used to analyze the effects of quantization in a digital implementation. The chapter analyzes the performance degradation as a function of the number of fixed-point and floating-point quantization bits, under different assumed probability distributions for the quantized variables; compares training performance with and without weight clipping; and derives in detail the effect of the quantization error on forward and backward propagation. No matter what distribution the initial weights follow, the weight distribution approximates a normal distribution during training with floating-point or high-precision fixed-point quantization; only when the number of quantization bits is very low may the weights cluster toward ±1 during training with fixed-point quantization. The chapter establishes and analyzes the relationships, for a true nonlinear neuron, between input and output bit resolution, training and quantization methods, the number of network layers, network order, and performance degradation, all based on statistical models, for both on-chip and off-chip training. Experimental simulation results verify the theoretical analysis.
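
For orientation, here is a minimal sketch (illustrative scheme and values, not the chapter's statistical model) of the two effects studied jointly: a weight is first clipped to a bounded range and then rounded to one of a fixed number of uniformly spaced fixed-point levels.

    def quantize(w, bits=8, clip=1.0):
        """Clip a weight to [-clip, clip], then round it to the nearest of
        2**bits uniformly spaced fixed-point levels (illustrative scheme)."""
        levels = 2 ** bits - 1                  # number of steps across the range
        step = 2.0 * clip / levels              # quantization step size
        w = max(-clip, min(clip, w))            # clipping
        idx = min(levels, max(0, round((w + clip) / step)))
        return -clip + idx * step               # uniform fixed-point rounding

    print([quantize(w, bits=4) for w in (0.03, -0.5, 1.7)])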

Chapter 9, Improving Sparsity in Kernelized Nonlinear Feature Extraction Algorithms by Polynomial Kernel Higher Order Neural Networks, studies polynomial kernel higher order neural networks. As a general framework for representing data, the kernel method can be used whenever the interactions between elements of the domain occur only through inner products. As a major stride towards nonlinear feature extraction and dimension reduction, two important kernel-based feature extraction algorithms, kernel principal component analysis and kernel Fisher discriminant analysis, have been proposed. Both are used to create a projection of multivariate data onto a space of lower dimensionality, while attempting to preserve as much of the structural nature of the data as possible. However, both methods suffer from a complete loss of sparsity and from redundancy in the nonlinear feature representation. To mitigate these drawbacks, this chapter focuses on applying the newly developed polynomial kernel higher order neural networks to improve sparsity and thereby obtain a succinct representation for kernel-based nonlinear feature extraction algorithms. In particular, the learning algorithm is based on linear programming support vector regression, which outperforms conventional quadratic programming support vector regression in model sparsity and computational efficiency.
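
The connection between polynomial kernels and higher order networks can be seen in one line, sketched below (an illustration, not the chapter's code): the kernel's implicit feature map contains all monomials of the inputs up to degree d, the same higher order terms a HONN builds explicitly.

    def poly_kernel(x, y, d=2, c=1.0):
        """Polynomial kernel K(x, y) = (x . y + c)**d. Its implicit feature
        map contains all monomials of the inputs up to degree d, i.e. the
        higher order terms a HONN forms explicitly."""
        return (sum(xi * yi for xi, yi in zip(x, y)) + c) ** d

    print(poly_kernel([1.0, 2.0], [3.0, 0.5], d=2))  # (4 + 1)**2 = 25.0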

Chapter 10, Analysis and Improvement of Function Approximation Capabilities of Pi-Sigma Higher Order Neural Networks, considers the Pi-Sigma higher order neural network (Pi-Sigma HONN), a type of higher order neural network in which, as the name implies, weighted sums of inputs are calculated first and the sums are then multiplied by each other to produce the higher order terms that constitute the network outputs. This type of higher order neural network has good function approximation capabilities. In this chapter, the structural features of Pi-Sigma HONNs are discussed in contrast to other types of neural networks, and the reason for their good function approximation capabilities is given based on pseudo-theoretical analysis together with empirical illustrations. Based on this analysis, an improved version of the Pi-Sigma HONN is then proposed which has even better function approximation capabilities.
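
A minimal sketch of the structure just described (an illustration for this preface, not the chapter's code): K weighted sums of the inputs are formed first, and their product yields polynomial terms of the inputs up to degree K.

    def pi_sigma_forward(x, W, biases):
        """Pi-Sigma forward pass: first K weighted sums of the inputs (the
        Sigma layer), then the product of those sums (the Pi output unit).
        Multiplying K linear forms yields polynomial terms of the inputs
        up to degree K, i.e. the higher order terms."""
        sums = [sum(wi * xi for wi, xi in zip(w, x)) + b
                for w, b in zip(W, biases)]
        out = 1.0
        for s in sums:
            out *= s
        return out

    # Degree-2 example in two inputs: (x1 + 2*x2 + 1) * (3*x1 - x2)
    print(pi_sigma_forward([1.0, 1.0], [[1, 2], [3, -1]], [1, 0]))  # 4 * 2 = 8.0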

Chapter 11, Dynamic Ridge Polynomial Higher Order Neural Network, proposes a novel Dynamic Ridge Polynomial Higher Order Neural Network (DRPHONN). The architecture of the new DRPHONN incorporates recurrent links into the structure of the ordinary Ridge Polynomial Higher Order Neural Network (RPHONN), a type of feedforward Higher Order Neural Network (HONN) (Giles & Maxwell, 1987) that implements a static mapping of the input vectors. In order to model dynamical functions of the brain, it is essential to use a system that is capable of storing internal states and can implement complex dynamics. Neural networks with recurrent connections are dynamical systems with temporal state representations. The dynamic structure approach has been used successfully for solving a variety of problems, such as time series forecasting, approximating a dynamical system (Kimura & Nakano, 2000), forecasting a stream flow, and system control. Motivated by the ability of recurrent dynamic systems to address real world applications, the proposed DRPHONN architecture is presented in this chapter.
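
As a hedged illustration of the recurrent idea (the names and the placeholder static map below are illustrative, not the chapter's), the previous output can be fed back as an extra input to an otherwise static forward map, giving the network internal state:

    def drphonn_step(x, y_prev, static_map, alpha=0.5):
        """Feed the previous output back as an extra input to an otherwise
        static forward map, turning a static mapping into a dynamic one."""
        return static_map(x + [alpha * y_prev])

    # Usage with a placeholder static map standing in for the ridge
    # polynomial network (illustrative only):
    static_map = lambda z: sum(z) ** 2
    y = 0.0
    for x in ([0.1], [0.2], [0.3]):      # a short input sequence
        y = drphonn_step(x, y, static_map)
    print(y)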

Chapter 12, Fifty Years of Electronic Hardware Implementations of First and Higher Order Neural Networks, celebrates 50 years of first and higher order neural network (HONN) implementations in terms of the physical layout and structure of electronic hardware, which offers high speed and low latency in compact, low-cost, low-power, mass-produced systems. Low latency is essential for practical applications in real-time control, for which software implementations running on CPUs are too slow. The chapter traces the chronological development of electronic neural networks (ENNs), discussing selected papers in detail, from analog electronic hardware through probabilistic RAM, generalizing RAM, custom silicon Very Large Scale Integrated (VLSI) circuits, neuromorphic chips, and pulse-stream interconnected neurons to Application Specific Integrated Circuits (ASICs) and Zero Instruction Set Chips (ZISCs). Reconfigurable Field Programmable Gate Arrays (FPGAs) are given particular attention, as the most recent generation incorporates Digital Signal Processing (DSP) units to provide full System on Chip (SoC) capability, offering the possibility of real-time, on-line, and on-chip learning.

Chapter 13, Recurrent Higher Order Neural Network Control for Output Trajectory Tracking with Neural Observers and Constrained Inputs, presents the design of an adaptive recurrent neural observer-controller scheme for nonlinear systems whose model is assumed to be unknown and whose inputs are constrained. The control scheme is composed of a neural observer based on Recurrent Higher Order Neural Networks, which builds the state vector of the unknown plant dynamics, and a learning adaptation law for the neural network weights of both the observer and the identifier. These laws are obtained via control Lyapunov functions. A control law that stabilizes the tracking error dynamics is then developed using the Lyapunov and inverse optimal control methodologies. Tracking error boundedness is established as a function of the design parameters.

Chapter 14, Artificial Higher Order Neural Network Training on Limited Precision Processors, investigates the training of networks in limited precision using the Back Propagation (BP) and Levenberg-Marquardt algorithms, aiming at high overall calculation accuracy with on-line training. Using a new type of HONN known as the Correlation HONN (CHONN), together with a discrete XOR dataset and a continuous optical waveguide sidewall roughness dataset, simulations are run to find the precision at which training and operation are feasible. The BP algorithm converged to a precision beyond which performance did not improve. The results support previous findings in the literature on Artificial Neural Network operation, namely that discrete datasets require lower precision than continuous datasets. The importance of these findings is that they demonstrate the feasibility of on-line, real-time, low-latency training on limited precision electronic hardware.

Chapter 15, Recurrent Higher Order Neural Observers for Anaerobic Processes, proposes the design of a discrete-time neural observer that requires no prior knowledge of the model of an anaerobic process, in order to estimate biomass, substrate, and inorganic carbon: variables that are difficult to measure and very important for anaerobic process control in a completely stirred tank reactor (CSTR) with a biomass filter. The observer is based on a recurrent higher order neural network trained with an extended Kalman filter based algorithm.

Chapter 16, Electric Machines Excitation Control via Higher Order Neural Networks, demonstrates a practical design of an intelligent controller using higher order neural network (HONN) concepts for the excitation control of a practical power generating system. This type of controller is suitable for real-time operation and aims to improve the dynamic characteristics of the generating unit by acting properly on its original excitation system. The power system under study is modeled as a synchronous generator connected via a transformer and a transmission line to an infinite bus. For comparison purposes, and also to produce data for training the demonstrated neural network controllers, digital simulations of the above system are performed using fuzzy logic control (FLC) techniques based on previous work. Two neural network controllers are then designed and applied by adopting HONN architectures. The first utilizes a single Pi-Sigma neural network (PSNN), and its significant advantages over the standard multilayer perceptron (MLP) are discussed. The second, enhanced controller leads to a ridge polynomial neural network (RPNN) by combining multiple PSNNs where needed. Both controllers can be pre-trained rapidly from the corresponding FLC output signal and act as model dynamics capturers. The dynamic performance of the FLC and of the two demonstrated controllers is compared using the well-known integral square error (ISE) criterion. The HONN controllers show excellent convergence properties and accuracy for function approximation. Typical transient responses of the system are shown for comparison, to demonstrate the effectiveness of the designed controllers. The computer simulation results show clearly that the developed controllers offer competitive damping of the synchronous generator's oscillations, with respect to the FLC, over a wider range of operating conditions, while their hardware implementation is considerably easier and the computational time needed for real-time applications is drastically reduced.

Chapter 17, Higher Order Neural Networks: Fundamental Theory and Applications, provides fundamental principles of higher order neural units (HONUs) and higher order neural networks (HONNs). An essential core of HONNs can be found in higher order weighted combinations or correlations between the input variables. By using some typical examples, this chapter describes how and why higher order combinations or correlations can be effective.
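
A typical example of such a unit is the second order (quadratic) neural unit, sketched below (an illustration of the general HONU idea, not a construction taken verbatim from the chapter): the output combines a constant, a linear term, and a weighted sum over all pairwise input correlations x_i * x_j.

    def quadratic_neural_unit(x, w0, w1, w2):
        """Second order neural unit (QNU): constant + linear term + weighted
        sum over all pairwise input correlations x_i * x_j. Only the upper
        triangle of w2 is used, since x_i * x_j is symmetric."""
        n = len(x)
        y = w0 + sum(w1[i] * x[i] for i in range(n))
        y += sum(w2[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))
        return y

    print(quadratic_neural_unit([1.0, 2.0], 0.5, [1.0, -1.0],
                                [[0.1, 0.2], [0.0, 0.3]]))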

Chapter 18, Identification of Nonlinear Systems Using a New Neuro-Fuzzy Dynamical System Definition Based on High Order Neural Network Function Approximators, studies nonlinear systems. A new definition of Adaptive Dynamic Fuzzy Systems (ADFS) is presented for the identification of unknown nonlinear dynamical systems. The proposed scheme uses the concept of Adaptive Fuzzy Systems operating in conjunction with High Order Neural Network Functions (HONNFs). Since the plant is considered unknown, the chapter first proposes its approximation by a special form of an adaptive fuzzy system; the fuzzy rules are then approximated by appropriate HONNFs. The identification scheme thus leads to a Recurrent High Order Neural Network which nevertheless takes into account the fuzzy output partitions of the initial ADFS. Weight updating laws for the involved HONNFs are provided which guarantee that the identification error reaches zero exponentially fast. Simulations illustrate the potency of the method, and comparisons on well-known benchmarks are given.

Chapter 19, Neuro-Fuzzy Control Schemes Based on High Order Neural Network Function Approximators, studies control schemes for the indirect or direct adaptive regulation of unknown nonlinear dynamical systems. Since the plant is considered unknown, the chapter first proposes its approximation by a special form of a fuzzy dynamical system (FDS); the fuzzy rules are then approximated by appropriate HONNFs. The system is adaptively regulated to zero by providing weight updating laws for the involved HONNFs which guarantee that both the identification error and the system states reach zero exponentially fast, while all signals in the closed loop are kept bounded. The existence of the control signal is always assured by introducing a novel method of parameter hopping, which is incorporated in the weight updating laws. The indirect control scheme is developed for square systems (number of inputs equal to the number of states) as well as for systems in Brunovsky canonical form; the direct control scheme is developed for systems in square form. Simulations illustrate the potency of the method, and comparisons with conventional approaches on benchmark systems are given.

Chapter 20, Back-Stepping Control of Quadrotor: A Dynamically Tuned Higher Order Like Neural Network Approach, studies the control of a quadrotor. The dynamics of a quadrotor are a simplified form of helicopter dynamics that exhibit the same basic problems of strong coupling, multi-input/multi-output design, and unknown nonlinearities. The Lagrangian model of a typical quadrotor, which involves four inputs and six outputs, results in an underactuated system. Several design techniques are available for nonlinear control of mechanically underactuated systems; one of the most popular among them is backstepping. Backstepping is a well-known recursive procedure in which the underactuation of the system is resolved by defining 'desired' virtual control and virtual state variables. Virtual control variables are determined in each recursive step, assuming the corresponding subsystem is Lyapunov stable, and virtual states are typically the errors between the actual and desired virtual control variables. The application of backstepping is even more interesting when a virtual control law is applied to a Lagrangian subsystem. The information needed to select virtual control and state variables for these systems can be obtained through model identification methods, one of which is Neural Network approximation of the unknown parameters of the system. The unknown parameters may include uncertain aerodynamic force and moment coefficients or unmodeled dynamics, and these aerodynamic coefficients are generally functions of higher order state polynomials. This chapter discusses how to implement linear-in-parameter first order neural network approximation methods to identify these unknown higher order state polynomials in every recursive step of the backstepping. The first order neural network thus eventually estimates the higher order state polynomials, yielding what is in fact a higher order like neural network (HOLNN). Moreover, when these artificial neural networks are placed into a control loop, they become dynamic artificial neural networks whose weights are the only tuned quantities. Due to the inherent characteristics of the quadrotor, the Lagrangian form of the position dynamics is bilinear in the controls, which is confronted using a bilinear inverse kinematics solution. The result is a controller of intuitively appealing structure, with an outer kinematics loop for position control and an inner dynamics loop for attitude control. The stability of the control law is guaranteed by a Lyapunov proof. The control approach described in this chapter is robust, since it explicitly deals with unmodeled state-dependent disturbances without needing any prior knowledge of them. A simulation study validates the results, such as decoupling and tracking, obtained in the chapter.

Chapter 21, Artificial Tactile Sensing and Robotic Surgery Using Higher Order Neural Networks, introduces a new medical instrument, the Tactile Tumor Detector (TTD), able to simulate the sense of touch in clinical and surgical applications. All theoretical and experimental work on its construction is presented. The theoretical analyses are mostly based on the finite element method (FEM), artificial neural networks (ANNs), and higher order neural networks (HONNs). The TTD is used for detecting abnormal masses in biological tissue, specifically for breast examinations. The chapter presents research on ANNs and HONNs applied to the theoretical results of the TTD to reduce the subjectivity of estimation in diagnosing tumor characteristics, using HONNs as a stronger open box intelligent unit than traditional black box neural networks for estimating the characteristics of tumor and tissue. The results show that an HONN model of the nonlinear input-output mapping has many advantages over an ANN model, including faster running on new data, lower RMS error, and better fitting properties.

Chapter 22, A Theoretical and Empirical Study of Functional Link Neural Networks (FLNNs) for Classification, focuses on a theoretical and empirical study of functional link neural networks (FLNNs) for classification. The chapter presents a hybrid Chebyshev functional link neural network (cFLNN), without a hidden layer, with evolvable particle swarm optimization (ePSO) for classification. The resulting classifier is used to assign the proper class label to an unknown sample. The hybrid cFLNN is a type of feed-forward neural network that can transform the nonlinear input space into a higher dimensional space where linear separability is possible. In particular, the proposed hybrid cFLNN combines the best attributes of evolvable particle swarm optimization (ePSO), back-propagation learning (BP-Learning), and Chebyshev functional link neural networks (CFLNN). The chapter shows the effectiveness of the approach in classifying unknown patterns on benchmark datasets. The computational results are compared with those of other higher order neural networks (HONNs), such as the functional link neural network with generic basis functions, the Pi-Sigma neural network (PSNN), the radial basis function neural network (RBFNN), and the ridge polynomial neural network (RPNN).
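
The Chebyshev expansion that gives the cFLNN its higher dimensional input space can be sketched as follows (a minimal illustration, assuming inputs scaled into [-1, 1]; the chapter's exact basis handling may differ):

    def chebyshev_expand(x, degree=3):
        """Chebyshev functional link expansion: map each input feature x_i
        (assumed scaled into [-1, 1]) to T_1(x_i)..T_degree(x_i) via the
        recurrence T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x). The classifier then
        only needs a linear layer on this enlarged, more separable space."""
        out = []
        for xi in x:
            t_prev, t_curr = 1.0, xi            # T_0 and T_1
            for _ in range(degree):
                out.append(t_curr)
                t_prev, t_curr = t_curr, 2.0 * xi * t_curr - t_prev
        return out

    print(chebyshev_expand([0.5], degree=3))  # T1..T3 at 0.5: [0.5, -0.5, -1.0]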


Reviews and Testimonials

This is the first book to introduce Higher Order Neural Networks (HONNs) to people working in the fields of computer science and computer engineering, and to show them that HONNs are an open box neural network tool compared with traditional artificial neural networks. It is also the first book to include details of the most popular HONN models, providing opportunities for the millions of people working in computer science and computer engineering to learn what HONNs are and how to use them.

– Ming Zhang, Christopher Newport University, USA

Author's/Editor's Biography

Ming Zhang (Ed.)
Ming Zhang was born in Shanghai, China. He received an MS degree in information processing and a PhD in the research area of computer vision from East China Normal University, Shanghai, China, in 1982 and 1989, respectively. He held postdoctoral fellowships in artificial neural networks with the Chinese Academy of Sciences in 1989 and the USA National Research Council in 1991. He was a face recognition airport security system project manager and PhD co-supervisor at the University of Wollongong, Australia, in 1992. From 1994, he was a lecturer at Monash University, Australia, researching artificial neural network financial information systems. From 1995 to 1999, he was a senior lecturer and PhD supervisor at the University of Western Sydney, Australia, with a research interest in artificial neural networks. He also held a Senior Research Associate Fellowship in artificial neural networks with the USA National Research Council in 1999. He is currently a full professor and graduate student supervisor in computer science at Christopher Newport University, VA, USA. With more than 100 papers published, his current research covers artificial neural network models for face recognition, weather forecasting, financial data simulation, and management.

