Limitations

 

# Network architecture

We'll consider the following single-layer network architecture, with two inputs (\( a \) and \( b \)) and one output (\( y \)).

*(Figure: single-layer network with inputs \( a \), \( b \) and output \( y \))*
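As a minimal sketch of this architecture (assuming a Heaviside step activation, two weights \( w_1, w_2 \) and a bias \( w_3 \), none of which are specified by the figure itself), the forward pass can be written as:

```python
def step(x):
    """Heaviside step activation: fires (1) when the weighted sum is >= 0."""
    return 1 if x >= 0 else 0

def network(a, b, w1, w2, w3):
    """Single-layer network: a weighted sum of the two inputs plus a bias,
    passed through the step activation."""
    return step(w1 * a + w2 * b + w3)
```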

# Logic OR function

Let's assume we want to train a single-layer artificial neural network to learn logic functions. Let's start with the OR logic function:

| \( a \) | \( b \) | \( y = a + b \) |
|:---:|:---:|:---:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |

The input space of the OR function can be drawn: the X-axis and Y-axis correspond to the \( a \) and \( b \) inputs respectively, and the green line is the separation line (\( y=0 \)). As illustrated below, the network can find an optimal solution:

*(Figure: OR input space with the green separation line between ones and zeros)*
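Below is a minimal training sketch using the classic perceptron learning rule (the article doesn't specify a training algorithm; the learning rate, initialization and epoch budget are assumptions). It finds a separating line for OR within a few epochs:

```python
# Perceptron learning rule on the OR truth table (learning rate and
# initialization are assumptions, not values from the article).
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, w3 = 0.0, 0.0, 0.0  # two weights and a bias
lr = 0.1                    # learning rate

for epoch in range(20):
    errors = 0
    for (a, b), target in samples:
        y = 1 if w1 * a + w2 * b + w3 >= 0 else 0  # step activation
        delta = target - y
        w1 += lr * delta * a  # nudge each weight towards the target
        w2 += lr * delta * b
        w3 += lr * delta
        errors += delta != 0
    if errors == 0:  # every sample classified correctly: done
        break

print(w1, w2, w3)  # with these settings: 0.1, 0.1, -0.1, i.e. the line a + b = 1
```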

# Logic XOR function

Assume we now want to train the network on the XOR logic function:

| \( a \) | \( b \) | \( y = a \oplus b \) |
|:---:|:---:|:---:|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |

As for the OR function, the input space can be drawn. Unfortunately, this time the network isn't able to discriminate the ones from the zeros.

*(Figure: XOR input space; no single line can separate the ones from the zeros)*
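Running the same perceptron rule on the XOR table makes the failure concrete: since no line separates the ones from the zeros, the error count never reaches zero and the weights keep oscillating (same assumed settings as in the OR sketch):

```python
# Same perceptron rule, XOR truth table: training never converges.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w1, w2, w3 = 0.0, 0.0, 0.0
lr = 0.1

for epoch in range(100):
    errors = 0
    for (a, b), target in samples:
        y = 1 if w1 * a + w2 * b + w3 >= 0 else 0
        delta = target - y
        w1 += lr * delta * a
        w2 += lr * delta * b
        w3 += lr * delta
        errors += delta != 0
    if errors == 0:
        break

print(errors)  # still > 0 after 100 epochs: XOR is not linearly separable
```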

# Conclusion

The transfer function of this single-layer network is given by:

\begin{equation} y = w_1 a + w_2 b + w_3 \label{eq:transfert-function} \end{equation}

Equation \eqref{eq:transfert-function} describes a linear model, which explains why the frontier between the ones and the zeros is necessarily a line. The XOR function is a non-linear problem that can't be classified with a linear model. Fortunately, multilayer perceptrons (MLPs) can deal with non-linear problems.
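As an illustration (the weights below are hand-picked for the sketch, not learned), a two-layer network with step activations implements XOR by combining two linear separations: one hidden unit computes OR, the other AND, and the output unit fires when OR is true but AND is not:

```python
def step(x):
    return 1 if x >= 0 else 0

def xor_mlp(a, b):
    """2-2-1 network with hand-picked weights (an assumption for the sketch)."""
    h1 = step(a + b - 0.5)      # hidden unit 1: a OR b
    h2 = step(a + b - 1.5)      # hidden unit 2: a AND b
    return step(h1 - h2 - 0.5)  # output: h1 AND NOT h2 = a XOR b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_mlp(a, b))  # reproduces the XOR truth table
```

Each hidden unit still draws a single line through the input space; it is the combination of the two lines at the output layer that produces the non-linear frontier XOR requires.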