Networks and the best approximation property

@article{Girosi1990NetworksAT,
  title={Networks and the best approximation property},
  author={Federico Girosi and Tommaso Poggio},
  journal={Biological Cybernetics},
  year={1990},
  volume={63},
  pages={169-176},
  url={https://api.semanticscholar.org/CorpusID:18824241}
}
The main result of this paper is that multilayer perceptron networks, of the type used in backpropagation, do not have the best approximation property, whereas networks derived from regularization theory, including Radial Basis Functions, are proved to have it.
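
For reference, the property at issue is the standard one from approximation theory (this formulation is a textbook definition, not a quotation from the paper): a subset $A$ of a normed space $(X, \|\cdot\|)$ has the best approximation property if for every $f \in X$ there is an $a^* \in A$ attaining the distance,

$$\|f - a^*\| = \inf_{a \in A} \|f - a\|.$$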

Sigmoidal FFANN’s and the best approximation property

It is established that the sets of functions represented by networks of both fixed and arbitrary size are open, and hence that these networks lack the best approximation property: if a minimizer existed in an open set, a small step from it toward the target function would remain in the set and strictly reduce the error.

Best Approximation of Gaussian Neural Networks With Nodes Uniformly Spaced

This analysis gives a theoretical proof of the existence of best approximations, and also addresses the problem of architectural selection by providing practitioners with guidance for choosing the variance and the oversampling parameters.
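
As an illustration of the architecture analyzed (generic notation, not taken from the paper), a Gaussian network with nodes uniformly spaced at interval $\Delta$ on the line computes

$$g(x) = \sum_{k} c_k \exp\!\left(-\frac{(x - k\Delta)^2}{2\sigma^2}\right),$$

so the variance $\sigma^2$ and the oversampling (roughly, the ratio of $\sigma$ to the spacing $\Delta$) are exactly the design parameters on which the paper offers guidance.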

Using Radial Basis Function Networks for Function Approximation and Classification

Many aspects of the RBF network, such as network structure, universal approximation capability, radial basis functions, RBF network learning, structure optimization, normalized RBF networks, application to dynamic system modeling, and nonlinear complex-valued signal processing, are described.
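
As a concrete sketch of the basic structure described above, here is a minimal RBF network with Gaussian basis functions, fixed centers, and least-squares training of the output weights (all names and parameter choices are illustrative, not taken from the survey):

import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian RBFs: phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq_dists / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    # The output layer is linear, so training reduces to least squares.
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

# Illustrative example: approximate sin on [0, 2*pi] with 10 uniform centers.
X = np.linspace(0.0, 2.0 * np.pi, 200)[:, None]
y = np.sin(X).ravel()
centers = np.linspace(0.0, 2.0 * np.pi, 10)[:, None]
w = fit_rbf(X, y, centers, width=0.7)
print(np.abs(rbf_design_matrix(X, centers, 0.7) @ w - y).max())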

Approximation Capabilities of Neural Networks

It is shown that neural networks can approximate arbitrary continuous functions, and a practical neural network is constructed to approximate a given continuous function.

A compact network with improved generalization using wavelet basis function network for static non-linear functions

A learning procedure for the proposed wavelet neural network, with guaranteed convergence to the global minimum error in the parameter space, is developed, and simulation results indicate the efficiency of the proposed approach.
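
In the generic wavelet-network form that this line of work builds on (standard notation, not the paper's own), the static nonlinearity is modeled as

$$f(x) = \sum_{k=1}^{K} w_k \, \psi\!\left(\frac{x - b_k}{a_k}\right),$$

with mother wavelet $\psi$ and dilation/translation parameters $a_k, b_k$; if the $a_k, b_k$ are fixed on a lattice, the error is quadratic in the weights $w_k$, which is one route to the guaranteed convergence to the global minimum claimed above.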

Universal Approximation using Incremental Constructive Feedforward Networks with Random Hidden Nodes

This paper proves, via an incremental constructive method, that for SLFNs to work as universal approximators one may simply choose the hidden nodes at random and then adjust only the output weights linking the hidden layer to the output layer.
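
A minimal sketch in the spirit of this result (the hidden nodes are drawn all at once here rather than incrementally, and all names are illustrative):

import numpy as np

def train_slfn(X, y, n_hidden, rng):
    # Hidden-layer weights and biases are random and never adjusted.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    # Only the output weights are learned, by linear least squares.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

# Illustrative example; target function and sizes are arbitrary.
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 200)[:, None]
y = np.sinc(3.0 * X).ravel()
W, b, beta = train_slfn(X, y, 50, rng)
print(np.abs(np.tanh(X @ W + b) @ beta - y).max())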

Learning Algorithms for RBF Functions and Subspace Based Functions

L. Xu (2012)
The renaissance of neural networks and then machine learning since the 1980s has been marked by two streams of extensive studies, one on the multilayer perceptron and the other on the radial basis function…

Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions

Multilayer feedforward networks possess universal approximation capabilities by virtue of the presence of intermediate layers with sufficiently many parallel processors; the properties of the intermediate-layer activation function are not so crucial.

Approximation by superpositions of a sigmoidal function

G. Cybenko (1989)
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real…
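
In the notation that has become standard for this theorem, the approximants are sums of the form

$$G(x) = \sum_{j=1}^{N} \alpha_j \, \sigma\!\left(y_j^{\mathsf T} x + \theta_j\right),$$

where $\sigma$ is a fixed sigmoidal function and the $y_j^{\mathsf T} x + \theta_j$ are the affine functionals; such sums are shown to be dense in $C([0,1]^n)$ under the uniform norm.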

Extensions of a Theory of Networks for Approximation and Learning

The theory of regularization is extended by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples or outliers, and learning from positive and negative examples.

Nonlinear Approximation Theory

The first investigations of nonlinear approximation problems were made by P.L. Chebyshev in the last century, and the entire theory of uniform approximation is strongly connected with his name. By…

Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks

The relationship between 'learning' in adaptive layered networks and the fitting of data with high dimensional surfaces is discussed. This leads naturally to a picture of 'generalization…

Fast Learning in Networks of Locally-Tuned Processing Units

We propose a network architecture which uses a single internal layer of locally-tuned processing units to learn both classification tasks and real-valued function approximations (Moody and Darken…
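
In the usual formulation of this architecture (generic notation, not quoted from the paper), each locally-tuned unit responds only near its own center, and the network output is the weighted sum

$$f(x) = \sum_{j} w_j \exp\!\left(-\frac{\|x - c_j\|^2}{\sigma_j^2}\right),$$

with centers $c_j$ and widths $\sigma_j$ set by fast unsupervised heuristics (e.g. clustering and nearest-neighbor spacing) and only the linear weights $w_j$ trained supervised, which is the source of the fast learning in the title.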

Approximation from a curve of functions

for all ?-polynomials $P_n(A, T)$ of order $n$. Here $\|\cdot\|_p$ denotes the norm in $L_p[0, 1]$. A solution is a best $L_p$ approximation to $f$. Not unexpectedly, this problem does not always have a solution. The…

Introduction to approximation theory

Introduction: 1. Examples and prospectus; 2. Metric spaces; 3. Normed linear spaces; 4. Inner-product spaces; 5. Convexity; 6. Existence and unicity of best approximations; 7. Convex functions; The Tchebycheff…

Ill-posed problems in early vision

Mathematical results on ill-posed and ill-conditioned problems are reviewed and the formal aspects of regularization theory in the linear case are introduced. Specific topics in early vision and…
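
The standard regularization scheme reviewed here restores well-posedness by trading data fidelity against a smoothness prior: an ill-posed linear problem $Az = y$ is replaced by the variational problem

$$\min_{z} \; \|Az - y\|^2 + \lambda \|Pz\|^2,$$

where $P$ is a stabilizing operator and $\lambda > 0$ controls the trade-off (this is the common Tikhonov formulation; the paper's own notation may differ).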