A history of Lisp and its use in neural networks - Part II

Monday, July 30, 2018


Written by Sergio Sancho Azcoitia, Security Researcher for ElevenPaths

Last month, we started a two-article series about Lisp. Part I covered its history, its beginnings, and its use in creating neural networks. Today, we will show you how Lisp works and how you can create a simple neural network.

Before we begin, let’s refresh our memory on how neural networks work. A neural network is built from a series of nodes or “neurons” connected to one another and organized into layers. Their operation is often compared to the behavior of the human brain when it solves a problem: as a signal progresses through the network, it takes one path or another according to a series of pre-established parameters.

Figure 1. Layers are an important characteristic of neural networks 

Our neural network will be simple, consisting of only three layers, and its job will be to guess which of our team members we are thinking of. In the image below, the nodes or neurons correspond to the names of our teammates, and the layers correspond to the conditions that lead us to one answer or another.

Figure 2. Even though this neural network consists of 3 layers, you can always increase the complexity of a neural network by adding more layers and sublayers (Source: Gengiskanhg, Wikipedia Spain)

To create the different layers in the network, we will only use questions with “yes” or “no” answers. The questions are chosen carefully so that each answer either produces a response or moves us on to the next layer. This process is repeated as many times as necessary until a final response is obtained. As the user answers questions, the number of possible candidates decreases: in this brief example, each answered question either yields the final answer or discards one possible candidate.
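The question-and-answer process described above can be sketched in just a few lines of Common Lisp. The tree below is a minimal illustration, not the network from the figures: the questions and teammate names are hypothetical placeholders. Each node is either a final answer (a string) or a list of the form `(question yes-branch no-branch)`.

```lisp
;; Hypothetical question tree: each interior node is
;; (question yes-branch no-branch); each leaf is a final answer.
(defparameter *tree*
  '("Does the person wear glasses?"
    ("Is the person on the security team?"
     "Sergio"
     "Ana")
    ("Is the person a data scientist?"
     "Luis"
     "Marta")))

(defun guess (node)
  "Walk the tree recursively, asking yes/no questions
until a leaf (a final answer) is reached."
  (if (stringp node)
      (format t "You are thinking about ~a!~%" node)
      (destructuring-bind (question yes-branch no-branch) node
        (format t "~a (y/n): " question)
        (finish-output)
        (if (string-equal (read-line) "y")
            (guess yes-branch)
            (guess no-branch)))))

;; Start the game:
;; (guess *tree*)
```

Each call to `guess` corresponds to descending one layer: a “yes” follows one branch, a “no” follows the other, and the recursion ends when only one candidate remains.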

Figure 3. Our example of a neural network that tells us who we are thinking about

As we have said before, this example is simple, but taking it as a base, you can create much more complex neural networks by adding layers and sublayers. A clear example of this is the expert system Akinator, which we have mentioned before on the blog. Akinator relies on an immense network that keeps growing as new characters (nodes) are added to the game.

The Lisp language is one of the best vehicles for understanding some of the concepts surrounding Artificial Intelligence and functional programming (such as recursion). This is one of the reasons it became the favorite of many MIT researchers for developing their projects in the years following its appearance.
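To see why recursion and Lisp fit together so naturally, consider counting the leaves of a nested list such as the question tree above. In Lisp, a tree is just a list of lists, and the recursive definition reads almost like the mathematical one (this is a standard textbook example, not code from the original article):

```lisp
;; Count the atoms (leaves) in an arbitrarily nested list.
(defun count-leaves (tree)
  (cond ((null tree) 0)                       ; empty list: no leaves
        ((atom tree) 1)                       ; a single atom is one leaf
        (t (+ (count-leaves (car tree))       ; leaves in the first element
              (count-leaves (cdr tree))))))   ; plus leaves in the rest

(count-leaves '((a b) (c (d e)))) ; => 5
```

The function handles a tree of any depth with no loops or explicit bookkeeping, which is exactly the style of reasoning that made Lisp attractive for early AI work.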

Don't miss out on a single post. Subscribe to LUCA Data Speaks.

You can also follow us on Twitter, YouTube and LinkedIn.
