Why Do We Even Have Neural Networks? | by Egor Howell | Dec, 2023


Alternatives to Neural Networks: Taylor Series & Fourier Series

Neural network icons created by imaginationlol, Flaticon: https://www.flaticon.com/free-icons/neural-network

I have recently been writing a series of articles explaining the key concepts behind modern-day neural networks:

Neural Networks, a series by Egor Howell.

One reason why neural networks are so powerful and popular is that they satisfy the universal approximation theorem. This means that, given enough neurons, a neural network can "learn" (approximate to arbitrary accuracy) essentially any continuous function, no matter how complex.

“Functions describe the world.”

A function, f(x), takes some input, x, and gives an output y:

How a mathematical function works. Diagram by author.

This function defines the relationship between the input and the output. In most cases, we have the inputs and their corresponding outputs, and the goal of the neural network is to learn, or approximate, the function that maps between them.
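To make that idea concrete, here is a minimal sketch of a tiny network learning a mapping from samples. It assumes plain NumPy and a single hidden layer, and uses sin(x) as a stand-in for the unknown function; the layer width, learning rate, and step count are arbitrary illustrative choices, not anything from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Unknown" target function, observed only through (x, y) samples.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer with tanh activation.
hidden = 32
W1 = rng.normal(0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2

    # Mean squared error gradients (backpropagation by hand).
    err = y_hat - y
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    grad_h = err @ W2.T * (1 - h**2)
    grad_W1 = x.T @ grad_h / len(x)
    grad_b1 = grad_h.mean(axis=0)

    # Gradient descent update.
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final MSE:", float((err**2).mean()))
```

After training, y_hat tracks sin(x) closely on the sampled interval: the network has approximated the mapping purely from input–output pairs.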

Neural networks were invented in the 1950s and 1960s. Yet, at that time, there were other known universal approximators out there. So, why do we even have neural networks …

The Taylor Series represents a function as an infinite sum of terms calculated from the values of its derivatives at a single point. In other words, it’s an infinite sum of polynomial terms used to approximate a function.
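Written out, the expansion of f about a point a is:

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!} \, (x - a)^n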

The above expression represents a function f(x) as an infinite sum, where f^(n)(a) is the n-th derivative of f evaluated at the point a, and n! denotes the factorial of n.

See here if you are interested in learning why we use Taylor Series. Long story short, they are used to make ugly functions nice to work with!
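As a quick numerical illustration (my own example, not from the article): truncating the Taylor expansion of e^x about a = 0 after only a handful of terms already gives a good approximation near that point. The helper name taylor_exp is hypothetical.

```python
import math

def taylor_exp(x, n_terms):
    # Truncated Taylor expansion of e^x about a = 0:
    # e^x ≈ sum_{n=0}^{n_terms-1} x^n / n!
    return sum(x**n / math.factorial(n) for n in range(n_terms))

x = 2.0
for n_terms in (2, 4, 8, 16):
    print(f"{n_terms:>2} terms: {taylor_exp(x, n_terms):.6f}   exact: {math.exp(x):.6f}")
```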

There exists a special case of the Taylor Series, called the Maclaurin series, where a = 0.
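In that case the expansion becomes:

f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!} \, x^n = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots, \quad \text{with } a_n = \frac{f^{(n)}(0)}{n!}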

Here, a_0, a_1, etc. are the coefficients of the corresponding polynomial terms. The goal of the…


