I am trying to learn neural networks using the scikit-neuralnetwork framework. I know the basics of neural networks and am now trying to implement one with scikit-neuralnetwork, but I am confused on two points.
1- What is the structure of the NN given below? In some examples it seems that people don't count the input layer as a layer. Otherwise, I am thinking of this as a 2-layer NN with an input layer of 100 nodes and 1 node at the output layer.
2- Does scikit-neuralnetwork do backpropagation within the code that I put below, or how does it train the system and change the weights in the neural network?
Thank you!
from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Maxout", units=100, pieces=2),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)
nn.fit(X_train, y_train)
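For reference, here is how I currently read the layer list, with an extra hypothetical "Rectifier" hidden layer and an X_test array added purely for illustration; please correct me if this reading is wrong:

from sknn.mlp import Classifier, Layer

# My reading: the input layer is implied by the number of columns in X_train,
# every Layer before the last is a hidden layer, and the final Layer is the
# output layer, sized from the number of classes in y_train.
nn = Classifier(
    layers=[
        Layer("Maxout", units=100, pieces=2),  # hidden layer, 100 units
        Layer("Rectifier", units=50),          # hypothetical extra hidden layer
        Layer("Softmax")],                     # output layer, one unit per class
    learning_rate=0.001,
    n_iter=25)

# My assumption: fit() runs the whole training loop internally, i.e. the
# weights are updated by backpropagation / gradient descent without the user
# writing any explicit backprop code.
nn.fit(X_train, y_train)
y_pred = nn.predict(X_test)  # X_test: hypothetical held-out data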