
The Ultimate, Holy Sh** MegaGuide to Neural Networks

Table of Contents

  1. Introduction
  2. Deep Dive
    • High level
    • Low Level
  3. Implementation in Python with Keras
  4. Concluding Remarks

 

  1. Introduction

If you have read about artificial intelligence, you have almost definitely heard the terms deep learning or neural networks. The technology is beginning to be applied to just about every facet of our lives.

  2. Deep Dive
    • High level

To explain the very basic concept of a neural network, we can consider the following example:

Imagine you are an NBA scout and there is far too much talent to survey before this upcoming season. You need to think fast, so naturally you resort to building a neural network to help you out.

Instead of needing to review each player in depth, a network can predict which players are likely to be the best picks (and note that this is actually being done in practice).

You realize that you have 10 years of historical college data for current NBA players: height, weight, points, rebounds, wins, losses, assists, free throws made, blocks, and games played.

Since these players are now in the NBA, you know how well they performed too. You pick 1000 random players from the NBA and determine if they ended up being a “good” pick or a “bad” pick out of college.

For each one of these players you look at those 10 features (height, weight, points, and so on) and see how well they did. As you study all these examples, you start to see patterns emerge from the data.

You want to double-check that you did a good job determining what patterns make a good NBA player, so you pull 100 different NBA players and verify your work. You find that, for these 100 players, you were right about 90% of the time. Nice job!!

Now that you know the patterns to look for, and have even verified that they work for evaluating college players, let's go crunch some numbers instead of watching every player's games.

You pull together the stats for this year's college players and analyze them for patterns. Boom: you tell your manager that you are convinced these are the best players to pick, and they are drafted into the NBA.

 

What I just described is a very rudimentary example of how a neural network works. In practice, reliably picking college players to draft would require something far more complicated, but the analogy holds: even though I talked about basketball, the components of a neural network were all there. Studying the examples of how college players performed in the NBA is called the training phase. Double-checking the patterns you found against a new set of NBA players is called the validation phase. Lastly, applying the patterns to this year's college players is called the testing phase.
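
In code, that split might look something like the minimal sketch below. It assumes the player stats live in a NumPy array X (one row per player, one column per stat) and the "good"/"bad" labels in an array y; the random stand-in data and scikit-learn's train_test_split are purely illustrative.

import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in data: 1100 players with 10 stats each (illustrative numbers).
rng = np.random.default_rng(0)
X = rng.random((1100, 10))
y = rng.integers(0, 2, size=1100)   # 1 = "good" pick, 0 = "bad" pick

# Hold out 100 players to double-check the patterns, as in the story above.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=100, random_state=0
)
# The testing phase then uses this year's college players, which are
# gathered separately and never seen during training or validation.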

For the brave of heart, I will go into more technical detail below about each of the exact components of the neural network.

  • Low Level

The low-level explanation of a neural network requires going into the different pieces that make up the network. In order to do that, we need to first understand the perceptron:

[Figure: a perceptron, with inputs x1, x2, and x3 feeding a single output]

  • Perceptron

The components of the perceptron are x1, x2, and x3, which can be thought of as features or attributes of the input data. This corresponds to a single row of a table, where x1, x2, and x3 are the columns. In our NBA example, x1, x2, and x3 might be a player's height, weight, and points.

  • Weights

Weights relate the inputs to the output. They are responsible for making an input feature important or unimportant: weights with larger magnitude correspond to more important features.

  • Activation function

There needs to be a "bar" that must be cleared in order to classify an NBA player as "good" or "bad." When the model multiplies the input stats by the weights and adds the bias, it comes up with a numeric value. How can this be translated into "good" or "bad"? That is the activation function's job: it maps the raw number to an output, for example a step function that outputs "good" when the value clears the bar and "bad" when it does not.

  • Bias

The bias, or offset, shifts the weighted sum of the inputs so that it lands where it should relative to the activation function. In effect, it moves the "bar" up or down independently of the input values.
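
To make these pieces concrete, here is a minimal sketch of a single perceptron in plain Python. The three features, the weights, and the bias are all made-up numbers; only the structure (weighted sum plus bias, passed through a step activation) is the point.

import numpy as np

def perceptron(x, w, b):
    z = np.dot(w, x) + b        # weighted sum of the inputs plus the bias
    return 1 if z > 0 else 0    # step activation: the "bar" to clear

# Illustrative features: height (cm), weight (kg), points per game.
x = np.array([198.0, 95.0, 21.3])
w = np.array([0.01, -0.005, 0.1])   # made-up weights
b = -3.0                            # made-up bias (moves the bar)
print(perceptron(x, w, b))          # 1 -> "good" pick, 0 -> "bad" pick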

  • Non-linearity and why it's important in neural networks

The activation function is also essential because it introduces non-linearity. Without it, the network would only learn linear relationships between inputs and outputs, and that's no fun. That is analogous to memorizing what will be on the test instead of being able to reason through new problems.
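
One quick way to see why: stack two layers with no activation function between them and you still have a single linear map, no matter how many layers you add. A small sketch with made-up weight matrices:

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.random((4, 3))   # first "layer" of weights (made-up)
W2 = rng.random((2, 4))   # second "layer" of weights (made-up)
x = rng.random(3)

two_layers = W2 @ (W1 @ x)     # two layers, no activation in between...
one_layer = (W2 @ W1) @ x      # ...collapse into one linear layer
print(np.allclose(two_layers, one_layer))   # True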

  • Deep learning and adding more layers

What the field is truly interested in when it comes to neural networks is deep learning. Deep learning refers to stacking layers of these neural units on top of each other. At each layer of the network, new relationships and patterns can be found, since each layer introduces its own non-linearity.
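
Here is a sketch of what stacking looks like as a forward pass, with a ReLU activation between the layers so the second layer can build on patterns the first layer cannot express alone. The layer sizes and random weights are illustrative:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)   # the non-linearity between layers

rng = np.random.default_rng(1)
x = rng.random(10)                            # 10 input stats for one player
W1, b1 = rng.random((8, 10)), rng.random(8)   # hidden layer (made-up size)
W2, b2 = rng.random((1, 8)), rng.random(1)    # output layer

h = relu(W1 @ x + b1)    # layer 1: weighted sums, then non-linearity
out = W2 @ h + b2        # layer 2 combines layer 1's patterns
print(out)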

  • Backpropagation

In the perceptron model, the weights are crucial to determining the relationships between the inputs and the outputs. For the weights to update iteratively, a process called backpropagation is needed. On each pass through the training data (an epoch), the neural network measures how close its predictions are to the target variable. If they are not close enough, it goes back and tweaks the weights and biases to bring the predictions closer to the target.

For backpropagation to work, there needs to be a function to minimize. This is called a loss function, meaning: let's try to minimize the total difference between our predicted values and the real values.

Backpropagation is paired with gradient descent. Gradient descent computes how steeply the loss changes with respect to each weight in the network. Another way of saying this: if I change a weight by a lot, will it have a large or small impact on the output? I don't want to adjust all the weights uniformly; that would produce wild predictions, since some input features are better at predicting performance than others.

With the discussion of gradient descent, there needs to be a discussion of the learning rate. I like to think of it as how conservative we want to be during the learning process: small steps are safe but slow, while large steps are fast but can overshoot.
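
Here is a minimal sketch of gradient descent updating a single weight on a squared-error loss. The data point, starting weight, and learning rate are all made-up numbers chosen to show the mechanics:

# Minimize L(w) = (w*x - y)**2 for one illustrative data point.
x, y = 2.0, 10.0          # input and target (made-up)
w = 0.0                   # initial weight
learning_rate = 0.05      # how conservative each step is

for epoch in range(20):
    prediction = w * x
    grad = 2 * (prediction - y) * x   # dL/dw: steepness of the loss in w
    w -= learning_rate * grad         # step downhill, scaled by the rate

print(w)   # approaches the perfect weight y / x = 5.0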

  3. Implementation in Python with Keras

Now that we have covered the background of neural networks, we will move on to how they can be implemented in Python. Fortunately, neural network libraries have made significant progress in the last several years, both in terms of performance and user accessibility. Keras is one of my favorite libraries for quick implementation of deep learning techniques. With Keras, one can use several different backends, such as TensorFlow, for computation. The sketch below puts together a simple neural network with Keras and makes predictions, all in just a few lines of code.
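
It assumes the NBA example from earlier: 10 input stats per player and a binary good/bad label. The layer sizes, optimizer settings, and random stand-in data are illustrative choices, not the only reasonable ones.

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Stand-in data: 1000 players, 10 stats each, 0/1 "good pick" labels.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 10))
y_train = rng.integers(0, 2, size=1000)

# Stacked Dense layers: ReLU supplies the non-linearity, and the sigmoid
# output squashes the final value into a 0-1 "good pick" probability.
model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(10,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Binary cross-entropy is the loss function; Adam is a flavor of gradient
# descent, with the learning rate controlling the step size.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.001),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.1)
print(model.predict(X_train[:5]))   # probabilities of being a "good" pick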

  4. Concluding Remarks

Neural networks are the foundation of a field called deep learning, and they scale remarkably well with increased computing power. As demonstrated in this guide, implementing a neural network is not nearly as daunting as it seems, and libraries such as Keras make it a breeze. Deep learning is the future, and having a deep understanding of it, as well as the skills to implement it, is extremely valuable in today's world.


Nick Allyn

Hello, my name is Nick Allyn. I am extremely passionate about the field of artificial intelligence. I believe that artificial intelligence will save millions of lives in the coming years through higher cancer survival rates, cleaner air, and autonomous cars.
