See the code below:
```python
from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(hidden_layer_sizes=(10,), solver='sgd', learning_rate_init=0.01, max_iter=500)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```
That’s right, those 4 lines of code can create a Neural Net with one hidden layer! 😐
Scikit-learn just released stable version 0.18. One of the new features is `MLPClassifier`, and as you can see in the code above, it’s powerful enough to create a simple neural net program.
That code is just a snippet of my Iris classifier program, which you can see on Github. Of course, in practice you still need to create a loader, pre-processing, pre-training, or other modules. But compared with other Python libraries like Keras, Lasagne, or Theano, I think this is the easiest way to create a simple neural net. I said “simple” because when you need to create a more complex model that needs a more complex algorithm or many add-ons, I think `MLPClassifier` will become difficult to use. But for simple projects, I think I’ll choose this scikit tool 🙂
Line 1: you need to import `MLPClassifier`
Line 2: we create an object called ‘mlp’, which is an `MLPClassifier`. We set hidden_layer_sizes to (10,), which means we add one hidden layer with 10 neurons. Then we set solver to ‘sgd’ because we will use Stochastic Gradient Descent as the optimizer. Then we set learning_rate_init to 0.01; this is the learning rate value (be careful, don’t confuse it with the alpha parameter in `MLPClassifier`, which is the L2 regularization term). Last, we set 500 as the maximum number of training iterations.
Line 3: Train the model
Line 4: Test the model
That’s it! And on my first run I got more than 90% accuracy on the Iris dataset 🙂 try it!
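For reference, here is a minimal, self-contained sketch of the whole workflow on the Iris dataset. The train/test split ratio, the `StandardScaler` step, and the `random_state` values are my own choices for reproducibility, not from the original snippet:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Load the Iris dataset and hold out 30% of it for testing
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Standardize features; SGD converges much better on scaled data
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Same model as above: one hidden layer of 10 neurons, SGD optimizer
mlp = MLPClassifier(hidden_layer_sizes=(10,), solver='sgd',
                    learning_rate_init=0.01, max_iter=500, random_state=42)
mlp.fit(X_train, y_train)
print(mlp.score(X_test, y_test))
```

With scaled inputs, this setup should comfortably clear 90% test accuracy on Iris.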
And actually, you can easily create a “deeper” neural net by adding more layers; just change `hidden_layer_sizes`. For example, `hidden_layer_sizes=(10, 10, 10)` will create 3 hidden layers with 10 neurons each.
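As a quick check of that claim (reusing Iris purely as an illustration), the fitted model’s `n_layers_` attribute counts the input and output layers plus the hidden ones:

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# Three hidden layers with 10 neurons each
deep_mlp = MLPClassifier(hidden_layer_sizes=(10, 10, 10), solver='sgd',
                         learning_rate_init=0.01, max_iter=500,
                         random_state=42)
deep_mlp.fit(X, y)

# n_layers_ = input layer + 3 hidden layers + output layer
print(deep_mlp.n_layers_)  # → 5
```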
Image source: Flickr