.. EpyNN documentation master file, created by
   sphinx-quickstart on Tue Jul 6 18:46:11 2021.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.
Welcome to EpyNN's documentation!
=================================
**EpyNN is written in pure Python/NumPy.**
If you use EpyNN in academia, please cite:
Malard F., Danner L., Rouzies E., Meyer J. G., Lescop E., Olivier-Van Stichelen S. **EpyNN: Educational python for Neural Networks**, *SoftwareX* 19 (2022).
Please email fmalard@epynn.net or solivier@mcw.edu with any comments.
What is EpyNN?
--------------------
EpyNN is first and foremost an **Educational python resource for Neural Networks**, although it is production-ready. EpyNN is designed for Supervised Machine Learning (SML) approaches based on Neural Networks.
EpyNN includes **scalable**, **minimalistic** and **homogeneous** implementations of major Neural Network architectures in **pure Python/NumPy**. Because EpyNN is meant to be **educational**, it contains several dummy and real-world example problems, including data processing schemes and pipelines for training and prediction.
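To give a concrete feel for what such an implementation looks like, below is a minimal sketch of a dense layer forward pass in pure Python/NumPy. It illustrates the general technique only and is not code taken from EpyNN; the names ``sigmoid`` and ``dense_forward`` are hypothetical.

.. code-block:: python

    import numpy as np

    def sigmoid(x):
        """Sigmoid activation, mapping inputs into (0, 1)."""
        return 1.0 / (1.0 + np.exp(-x))

    def dense_forward(A_prev, W, b):
        """Dense layer forward pass: linear product, then activation."""
        Z = A_prev @ W + b    # Linear activation product
        return sigmoid(Z)     # Non-linear activation

    rng = np.random.default_rng(0)
    A_prev = rng.normal(size=(4, 8))     # 4 samples, 8 input features
    W = rng.normal(size=(8, 2)) * 0.1    # Weights for 2 output units
    b = np.zeros(2)                      # Biases
    print(dense_forward(A_prev, W, b).shape)   # -> (4, 2)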
Do I need EpyNN?
--------------------
EpyNN is intended for **teachers**, **students**, **scientists**, and more generally anyone with minimal Python programming skills **who wishes to understand** and build on basic implementations of NN architectures.
Although EpyNN can be used in production, it is meant as a library of **homogeneous architecture templates** and **practical examples**, expected to save a significant amount of time for people who wish to learn, teach, or **develop from scratch**.
Is EpyNN reliable?
--------------------
EpyNN has been cross-validated against the TensorFlow/Keras API and provides identical results for identical configurations, within the limits of float64 precision.
Herein we report an evaluation of **264** distinct combinations of:
* Architecture schemes.
* Number of units in layers.
* Activation functions.
* Loss functions.
The comparison was achieved by computing the Root Mean Square Deviation (RMSD) between:
* EpyNN and TensorFlow/Keras per-sample losses, averaged over samples and computed from output probabilities using the TensorFlow/Keras loss function.
* EpyNN and TensorFlow/Keras per-sample, per-output probabilities, averaged over samples.
The RMSD distribution over all experiments, for two training epochs with a learning rate of 0.01, is shown below.
.. image:: _static/keras_epynn/keras_epynn.svg
Note that the smallest non-zero deviation measurable with float64 precision and an RMS operation is about 1e-08, i.e. the square root of the float64 machine epsilon. For the sake of rigor, RMSD = 0.0 was counted as RMSD < 1e-08.
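The RMSD metric and the 1e-08 floor can be reproduced in a few lines of NumPy. The sketch below follows the procedure described above but is not the evaluation code itself; the probability arrays are made up for illustration.

.. code-block:: python

    import numpy as np

    def rmsd(a, b):
        """Root Mean Square Deviation between two equally shaped arrays."""
        return np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

    # Hypothetical per-sample output probabilities from EpyNN and Keras.
    p_epynn = np.array([0.10, 0.85, 0.40])
    p_keras = p_epynn + 1e-9   # Deviation below the measurable floor

    print(rmsd(p_epynn, p_keras))              # ~1e-09

    # The square root halves the exponent of the float64 machine epsilon,
    # which is where the ~1e-08 floor comes from.
    print(np.sqrt(np.finfo(np.float64).eps))   # ~1.49e-08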
Architecture schemes:
* **(Dense)n**: Perceptron (n = 1) to Deep Feed-Forward (n > 2).
* **(RNN)n_Dense**: With one (n = 1) and two (n = 2) *RNN* layers.
* **(LSTM)n_Dense**: With one (n = 1) and two (n = 2) *LSTM* layers.
* **(GRU)n_Dense**: With one (n = 1) and two (n = 2) *GRU* layers.
* **(Convolution_Pooling)n_Dense**: With one (n = 1) and two (n = 2) Convolution_Pooling blocks.
Note that combinations of these schemes are likely to be just as reliable, but the exponential number of possible combinations makes them impractical to report here.
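As an illustration of the naming above, a **(RNN)1_Dense** scheme is an embedding followed by one *RNN* layer and an output *Dense* layer. The sketch below shows how such a stack may be assembled; the module paths, layer signatures, and dummy data are assumptions modeled on the Quickstart, not verified calls, so refer to the API documentation before running.

.. code-block:: python

    # Assumed imports and signatures, modeled on the Quickstart.
    import numpy as np

    from epynn.network.models import EpyNN
    from epynn.embedding.models import Embedding
    from epynn.rnn.models import RNN
    from epynn.dense.models import Dense

    X = np.random.uniform(size=(128, 10, 4))    # 128 sequences of 10 steps
    Y = np.random.randint(0, 2, size=(128, 2))  # Dummy one-hot-like labels

    embedding = Embedding(X_data=X, Y_data=Y)   # Data-holding input layer
    model = EpyNN(layers=[embedding, RNN(8), Dense(2)], name='RNN_Dense')
    model.train(epochs=2)                       # Two epochs, as in the report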
Activation functions:
* **Sigmoid**: For output layer.
* **Hyperbolic tangent**: For output layer and hidden layers.
* **Softmax**: For output layer.
* **ReLU**: For *Convolution* and hidden *Dense* layer.
* **ELU**: For hidden *Dense* layer.
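For reference, each of the listed activation functions reduces to a few lines of NumPy. The sketch below follows the textbook definitions; it is not taken from EpyNN's ``activation`` module.

.. code-block:: python

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def tanh(x):
        return np.tanh(x)

    def softmax(x):
        e = np.exp(x - np.max(x, axis=-1, keepdims=True))  # Stability shift
        return e / np.sum(e, axis=-1, keepdims=True)

    def relu(x):
        return np.maximum(0.0, x)

    def elu(x, alpha=1.0):
        return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(relu(x))             # [0.  0.  0.  0.5 2. ]
    print(softmax(x).sum())    # 1.0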
Loss functions:
* **MSE**: With sigmoid, tanh and softmax in the output *Dense* layer.
* **MAE**: With sigmoid, tanh and softmax in the output *Dense* layer.
* **MSLE**: With sigmoid and softmax in the output *Dense* layer.
* **BCE**: With sigmoid and softmax in the output *Dense* layer.
* **CCE**: With softmax in the output *Dense* layer.
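Likewise, the listed loss functions can be written per sample in a few lines of NumPy. This sketch uses the textbook definitions, with clipping for numerical stability, and is not taken from EpyNN's ``loss`` module.

.. code-block:: python

    import numpy as np

    def mse(y_true, y_pred):
        """Mean Squared Error per sample."""
        return np.mean((y_true - y_pred) ** 2, axis=-1)

    def mae(y_true, y_pred):
        """Mean Absolute Error per sample."""
        return np.mean(np.abs(y_true - y_pred), axis=-1)

    def msle(y_true, y_pred):
        """Mean Squared Logarithmic Error (non-negative inputs)."""
        return np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2, axis=-1)

    def bce(y_true, y_pred, eps=1e-12):
        """Binary Cross-Entropy, clipped to avoid log(0)."""
        p = np.clip(y_pred, eps, 1.0 - eps)
        return -np.mean(y_true * np.log(p)
                        + (1 - y_true) * np.log(1 - p), axis=-1)

    def cce(y_true, y_pred, eps=1e-12):
        """Categorical Cross-Entropy over one-hot targets."""
        return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)), axis=-1)

    y_true = np.array([[1.0, 0.0]])
    y_pred = np.array([[0.9, 0.1]])
    print(float(bce(y_true, y_pred)))   # ~0.105
    print(float(cce(y_true, y_pred)))   # ~0.105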
These experiments, including all executable code, can be downloaded below:
.. only:: builder_html or readthedocs

   :download:`nncheck `
.. toctree::
   :maxdepth: 3
   :hidden:

   quickstart
   Introduction

.. toctree::
   :maxdepth: 3
   :caption: Models & Functions
   :hidden:

   EpyNN_Model
   Layer_Model
   Data_Model
   activation
   loss

.. toctree::
   :maxdepth: 3
   :caption: Layers
   :hidden:

   Embedding
   Dense
   RNN
   LSTM
   GRU
   Convolution
   Pooling
   Dropout
   Flatten

.. toctree::
   :maxdepth: 3
   :caption: Examples and more
   :hidden:

   data_examples
   run_examples
   Details
   glossary