Implements:
- neural network regression
- binary and multinomial classification
- autoencoders
Several features are available for more efficient gradient descent (see the sketch after this list):
- mini-batch GD
- optimization algorithms: 'Adam', 'RMSProp', 'Momentum'
- Batch normalization
- Ridge regularization
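As an illustration of how the optimizer and the ridge penalty fit together, here is a minimal R sketch of a single Adam update with weight decay on one weight matrix. The function name, hyper-parameter defaults and values are assumptions made for illustration and do not describe the library's internals.

    # One Adam update on a weight matrix W, with a ridge (weight decay) coefficient lambda.
    # Illustrative only; names and defaults are not taken from the library.
    adam_step <- function(W, grad, m, v, t, rate = 0.001,
                          beta1 = 0.9, beta2 = 0.999, eps = 1e-8, lambda = 0) {
      grad  <- grad + lambda * W                   # ridge term adds lambda * W to the gradient
      m     <- beta1 * m + (1 - beta1) * grad      # first moment (Momentum-style average)
      v     <- beta2 * v + (1 - beta2) * grad^2    # second moment (RMSProp-style average)
      m_hat <- m / (1 - beta1^t)                   # bias corrections for step t
      v_hat <- v / (1 - beta2^t)
      W     <- W - rate * m_hat / (sqrt(v_hat) + eps)
      list(W = W, m = m, v = v)
    }

    # Example: one update on a 2x2 weight matrix.
    W <- matrix(0.1, 2, 2)
    state <- adam_step(W, grad = matrix(0.05, 2, 2), m = 0 * W, v = 0 * W, t = 1, lambda = 1e-4)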
Calling function: "neuralNet"
Graphics: the library also comes with a visualizer (in the neuralNet_visualizer sub-folder):
- visualize the model summary (error rate, gradient check values, ...)
- plot the objective function for training and validation
- plot the evolution of the weights across the network layers
NN parameters are very flexible (an illustrative call follows the list):
- Type = "Classification" or "Regression"
- DW (depth/width of the network) = a vector where each value gives the number of neurons in the corresponding layer
- loss (loss function) = "RSS" (residual sum of squares) or "Deviance" (logistic loss)
- outputFunc (output function) = "Sigmoid", "Softmax", "Identity"
- activationFunc (activation function) = "tanh", "sigmoid", "linear" (ReLU on the way)
- rate = learning rate
- weightDecay = TRUE/FALSE
- lambda = coefficient of the weight decay
- a host of NN checking tools such as a gradient checker, traceObjective function, trace loss function, ...
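For example, a classification run might be configured along the following lines. Only the parameter names above come from the library; the data arguments (X, Y), the layer sizes and the exact call signature are assumptions made for illustration.

    # Illustrative only: a two-hidden-layer binary classifier.
    X <- matrix(runif(200), ncol = 10)   # placeholder inputs (20 observations, 10 features)
    Y <- rbinom(20, 1, 0.5)              # placeholder binary labels
    fit <- neuralNet(X, Y,
                     Type = "Classification",
                     DW = c(10, 5, 1),            # hidden layers of 10 and 5 neurons, 1 output
                     loss = "Deviance",
                     outputFunc = "Sigmoid",
                     activationFunc = "tanh",
                     rate = 0.01,
                     weightDecay = TRUE,
                     lambda = 1e-4)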
For autoencoders (an illustrative call follows the list):
- set Type="Regression"
- set loss="RSS"
- if the input lies in 0...1, set the output function to "Softmax" (more than one parameter) or "Sigmoid" (exactly one parameter)
- other parameters are tuned exactly as for any other neural network
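A configuration in that spirit might look as follows; as above, the data argument, the layer sizes and the call signature are assumptions for illustration only.

    # Illustrative only: an autoencoder reconstructing 20-dimensional inputs through a 5-neuron bottleneck.
    X <- matrix(runif(400), ncol = 20)   # placeholder inputs scaled to 0...1
    ae <- neuralNet(X, X,                # target equals input for an autoencoder
                    Type = "Regression",
                    loss = "RSS",
                    DW = c(20, 5, 20),
                    outputFunc = "Sigmoid",
                    activationFunc = "sigmoid",
                    rate = 0.01)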
Remark: this implementation is not optimized for runtime; it is intended for experimentation rather than use in a production environment.
Please send all your remarks to [email protected]