Finite linear combinations of ridge functions (neural networks) are known to approximate certain kinds of response surfaces well, but it is unknown in general how to obtain such approximations. In addition, very few quantitative approximation rates are known. The purpose of this presentation is to show that a new family of functions, the ridgelets, provides an elegant answer to these issues.
First, I will briefly present the continuous and discrete ridgelet transforms. Both transforms represent quite general functions f as a superposition of ridge functions in a stable and concrete way.
Second, I will show how to use the ridgelet transform to derive new approximation bounds. Specifically, I introduce a new family of functional classes and show that, in some sense, ridgelet expansions are optimal for approximating functions from these classes. I explain how these classes model "real-life" signals. As a surprising and remarkable example, I discuss the case of approximating radial functions. I will also explain why ridgelets offer decisive improvements over traditional neural networks.
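Not part of the abstract, but the finite-superposition idea above can be sketched numerically. The snippet below (a minimal illustration, not the ridgelet construction itself) builds a finite sum of ridge functions sum_j c_j sigma(a_j . x - b_j) with randomly chosen directions a_j, offsets b_j, and a tanh profile (all illustrative choices), and fits the coefficients c_j by least squares to a radial target:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a radial function on [-1, 1]^2 (illustrative choice),
# sampled at random points.
X = rng.uniform(-1, 1, size=(2000, 2))
y = np.exp(-4 * np.sum(X**2, axis=1))

# A ridge function is constant along hyperplanes: r(x) = sigma(a . x - b).
# Build m such functions with random unit directions and offsets.
m = 50
A = rng.normal(size=(2, m))
A /= np.linalg.norm(A, axis=0)          # unit directions a_j
B = rng.uniform(-1, 1, size=m)          # offsets b_j

# Feature matrix: column j is sigma(a_j . x - b_j) evaluated at all samples
# (the factor 3.0 sets an arbitrary slope for the tanh profile).
Phi = np.tanh(3.0 * (X @ A) - B)

# Solve for the coefficients c_j in the finite superposition by least squares.
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

resid = np.linalg.norm(Phi @ c - y) / np.linalg.norm(y)
print(f"relative L2 error with {m} ridge functions: {resid:.3f}")
```

This only demonstrates that such superpositions exist and can be fitted in a concrete instance; the point of the talk is that ridgelets give a stable, non-random way to choose the directions, offsets, and coefficients, with provable rates.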
Finally, I will present an application of ridgelets to the problem of image compression. There exist adaptations of ridgelet-like decompositions that, in principle, attain optimal compression.