Open Collections
UBC Theses and Dissertations
Application of learning theory in neural modeling of dynamic systems
Najarian, Kayvan
Abstract
Neural networks have been successfully used to model a number of complex nonlinear systems. Although neural networks can create successful models of some nonlinear systems, they are known to overfit the data in other applications. Therefore, in order to use neural networks reliably, it is necessary to explore the conditions under which neural models perform equally well on the testing and training data sets. This calls for the design of neural models that strike a balance between testing and training performance. The newly introduced Probably Approximately Correct (PAC) learning theory addresses the issue of testing-training balance. However, conventional PAC learning allows only static modeling and cannot be applied to dynamic models. In this thesis, PAC learning is extended to more general learning schemes that handle dynamic modeling tasks. The resulting PAC paradigms are then applied to assess the learning properties of several families of dynamic neural networks, including Radial Basis Function Networks, single-hidden-layer Sigmoid Neural Networks, and Volterra Networks. Another concern with the use of neural networks for some dynamic modeling tasks is stochastic stability. Little is known about the stochastic stability of many neural models used in practical applications, and this lack of knowledge further limits the use of such models. In this thesis, sufficient conditions for the stochastic stability of different families of neural networks are presented, addressing this concern. Based on the resulting learning frameworks, evolutionary algorithms are then presented that search for a suitable dynamic neural model that performs equally well on testing and training data. The evolutionary algorithms are then applied to two modeling problems. The first deals with next-scan estimation of a two-dimensional paper basis weight measurement on a paper machine.
In the second, a neural model for a neuromuscular blockade system is developed. The results indicate that accurate and reliable dynamic neural models can be obtained, provided that the learning complexity of such models is controlled during the training procedure.
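The thesis's own constructions are not reproduced in this record, but the testing-training imbalance the abstract describes can be illustrated with a generic one-dimensional Radial Basis Function fit. Everything below (the sine-wave data, the Gaussian width, the center counts) is an illustrative assumption, not material from the thesis: as the number of basis functions grows, training error keeps shrinking while test error reflects overfitting, which is exactly the gap that complexity control is meant to close.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: a noisy sine wave, split into train and test halves.
x = rng.uniform(-3, 3, 80)
y = np.sin(x) + 0.2 * rng.standard_normal(80)
x_tr, y_tr, x_te, y_te = x[:40], y[:40], x[40:], y[40:]

def rbf_features(x, centers, width=0.5):
    """Gaussian radial-basis features of x evaluated at each center."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x_tr, y_tr, n_centers):
    """Least-squares fit of an RBF expansion with n_centers basis functions."""
    centers = np.linspace(-3, 3, n_centers)
    Phi = rbf_features(x_tr, centers)
    w, *_ = np.linalg.lstsq(Phi, y_tr, rcond=None)
    return centers, w

def mse(x, y, centers, w):
    """Mean squared error of the fitted expansion on (x, y)."""
    return float(np.mean((rbf_features(x, centers) @ w - y) ** 2))

# Sweep model complexity: training error falls as centers are added,
# while test error exposes the overfitting of the largest model.
for n in (3, 10, 40):
    centers, w = fit_rbf(x_tr, y_tr, n)
    print(n, mse(x_tr, y_tr, centers, w), mse(x_te, y_te, centers, w))
```

With 40 centers and only 40 training points the expansion nearly interpolates the noise, so the training error alone says nothing about performance on fresh data; restricting the number of centers (one crude form of complexity control) is what keeps the two errors comparable.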
Item Metadata
Title: Application of learning theory in neural modeling of dynamic systems
Creator: Najarian, Kayvan
Publisher: University of British Columbia
Date Issued: 2000
Extent: 7552910 bytes
File Format: application/pdf
Language: eng
Date Available: 2009-07-27
Provider: Vancouver : University of British Columbia Library
Rights: For non-commercial purposes only, such as research, private study and education. Additional conditions apply, see Terms of Use https://open.library.ubc.ca/terms_of_use.
DOI: 10.14288/1.0065167
Degree Grantor: University of British Columbia
Graduation Date: 2000-11
Scholarly Level: Graduate
Aggregated Source Repository: DSpace