.. EpyNN documentation master file, created by
   sphinx-quickstart on Tue Jul 6 18:46:11 2021.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

.. toctree::

Neural Network - Model
===============================

Source files in ``EpyNN/epynn/network/``.

In most Python implementations of neural network frameworks, the top-level object is the network itself. The EpyNN model (:py:class:`epynn.network.models.EpyNN`) is the top-level object which represents the neural network designed by the end user, as a whole. The design of a neural network is described by a list of second-level objects called *layers*, each type of layer having its own architecture.

EpyNN Model
------------------------------

.. image:: _static/model-01.svg
   :alt: Neural Network

The EpyNN model of a neural network is defined by:

* An *embedding* - or input - layer in first position in the list of layers.
* An *output* layer in last position in the list of layers.
* Optional *hidden* layers.
* A loss function, which is the error function to be minimized during the training phase.

The training phase of a neural network can be briefly summarized as:

* *Forward* propagation: input data *X* in the embedding layer are propagated *forward* and transformed through each layer of the network to output the predicted values of interest *A*.
* *Loss* evaluation: predicted values *A* are compared with target values *Y* through the *loss* function (e.g., Mean Squared Error).
* *Backward* propagation: the error is propagated backward through the network and each layer internally computes *gradients* which are further used to adjust the *trainable* parameters.

See below for more details.

.. autoclass:: epynn.network.models.EpyNN

   .. automethod:: __init__

Forward
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automethod:: epynn.network.models.EpyNN.forward

.. literalinclude:: ./../epynn/network/forward.py
   :pyobject: model_forward

The forward propagation for a neural network model is straightforward. Given the ``layers`` attribute representing the list of layers contained in the model, a ``for`` loop statement is used to iterate over ``model.layers``. For each layer, the ``forward()`` method is called. Therefore, the forward propagation for one model is defined by successive forward propagations through each layer within the model.

Backward
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automethod:: epynn.network.models.EpyNN.backward

.. literalinclude:: ./../epynn/network/backward.py
   :pyobject: model_backward

For the backward propagation, a ``for`` loop statement is used to iterate backward over ``model.layers``. For each layer, the ``backward()`` method is called, followed by the ``compute_gradients()`` and ``update_parameters()`` methods.

Initialize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automethod:: epynn.network.models.EpyNN.initialize

See :ref:`se_hPars` for details about the ``se_hPars`` argument. A usage sketch is given after the notes below.

Note that:

* Each layer is assigned an *independent pseudo-random number generator*, such as ``layer.np_rng = np.random.default_rng(seed=seed_layer)``. When the seed parameter is set to an ``int`` value, it is incremented by one after each generator initialization.
* There is a *single loss function* that can be assigned to one model to actually drive the regression.
* The ``metrics`` argument can be provided with a list of metrics, but also with loss functions, to compute the associated cost in a purely indicative manner. The regression is never driven by the value of the ``metrics`` argument, whether or not it is provided with a loss function.
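Putting the above together, below is a minimal sketch of model construction and initialization. The layer import paths, constructor arguments and the ``learning_rate`` key are illustrative assumptions; refer to :py:meth:`epynn.network.models.EpyNN.initialize` and :ref:`se_hPars` for the authoritative signatures.

.. code-block:: python

    import numpy as np

    # Illustrative imports - exact module paths are assumptions.
    from epynn.network.models import EpyNN
    from epynn.embedding.models import Embedding
    from epynn.dense.models import Dense
    from epynn.settings import se_hPars

    # Toy dataset: 128 samples of 10 features, with one-hot encoded binary labels.
    X = np.random.uniform(size=(128, 10))
    Y = np.eye(2)[np.random.randint(2, size=128)]

    # Embedding (input) layer in first position, output layer in last position.
    embedding = Embedding(X_data=X, Y_data=Y)
    model = EpyNN(layers=[embedding, Dense(2)], name='Sketch_Model')

    # Optionally adjust global hyperparameters before initialization
    # (assuming 'learning_rate' is among the keys of epynn.settings.se_hPars).
    se_hPars['learning_rate'] = 0.005

    # A single loss function drives the regression; metrics are indicative only.
    model.initialize(loss='MSE', seed=1, metrics=['accuracy'], se_hPars=se_hPars)

Once initialized, the model can be trained with :py:meth:`epynn.network.models.EpyNN.train`, described next.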
Training
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automethod:: epynn.network.models.EpyNN.train

.. literalinclude:: ./../epynn/network/training.py
   :pyobject: model_training

The full training procedure for one model includes:

* A ``for`` loop statement iterating over a user-defined number of training epochs.
* For each epoch, training batches are built from the training set.
* For each epoch, a ``for`` loop statement iterates over training batches.
* For each batch, ``model.forward()`` is computed. The training loss derivative is then computed by comparing the true values *Y* with the predicted values *A*. The ``model.backward()`` method is then called. Accuracy and training loss are then reported for the current batch.
* For each epoch, once all training batches have been processed, the model is evaluated using the ``model.evaluate()`` method and results are reported using the ``model.report()`` method.

.. automethod:: epynn.network.models.EpyNN.evaluate

Compute metrics, including cost, for dsets. Metrics of interest are provided as an argument when calling the ``model.initialize()`` method.

.. automethod:: epynn.network.models.EpyNN.report

Report selected metrics for dsets at the current epoch.

.. automethod:: epynn.network.models.EpyNN.plot

.. automethod:: epynn.network.models.EpyNN.write

If the model you trained fits your requirements, write it to disk to use it later.

Prediction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. automethod:: epynn.network.models.EpyNN.predict

.. literalinclude:: ./../epynn/network/models.py
   :pyobject: EpyNN.predict
   :lines: 1,2,15-

A model previously written to disk can be read back using the :py:func:`epynn.commons.library.read_model` function.

Prediction using a pre-trained model includes the following steps (a sketch is given after this list):

* Data pre-processing: if the data used for training were `one-hot encoded`_, they must also be encoded for prediction. Similarly, if a global scaling within \[0, 1\] was applied during the training phase, it should be applied to the data before prediction. Note that scaling should be used with caution, because the data range may be unrelated between training data and samples for prediction.
* Data embedding into the dataSet object: this object is convenient because it has multiple attributes to store data provided by the user, as well as relevant outputs from the prediction protocol.
* Prediction: this is equal to a single ``model.forward()`` pass. Probabilities are stored in ``dset.A`` while decisions are stored in ``dset.P``.

.. _one-hot encoded: epynnlive/dummy_string/prepare_dataset.html#One-hot-encoding-of-string-features
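Below is a minimal sketch of this workflow, assuming a model previously saved with ``model.write()``; the file path and input shape are illustrative assumptions, and any required encoding or scaling of new samples is left to the user.

.. code-block:: python

    import numpy as np

    from epynn.commons.library import read_model

    # Read a pre-trained model back from disk - the path is an assumption.
    model = read_model('model/Sketch_Model.pickle')

    # New samples must be pre-processed exactly as the training data were
    # (same one-hot encoding and, if applicable, the same [0, 1] scaling).
    X_features = np.random.uniform(size=(5, 10))

    # A single forward pass, returning a dataSet object.
    dset = model.predict(X_features)

    print(dset.A)    # output probabilities
    print(dset.P)    # decisions (predicted labels)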
.. _se_hPars:

Model Hyperparameters
------------------------------

Default hyperparameter settings applied upon model initialization. When loaded from :py:meth:`epynn.network.models.EpyNN.initialize`, these are **global hyperparameters** which apply to every layer **with no prior assignment**. See :ref:`layer_se_hPars` for details on local hyperparameter assignment and a more in-depth description. Also note that active hyperparameters can be accessed from ``model.layers[i].se_hPars``.

.. autoclass:: epynn.settings.se_hPars

.. literalinclude:: ../epynn/settings.py
   :language: python
   :start-after: HYPERPARAMETERS SETTINGS

Because EpyNN is meant to be friendly for people less experienced in programming and/or neural networks, we made the choice to feed these hyperparameters as a ``dict`` rather than unfolding them into a long sequence of arguments. Still, this setup allows the following for the advanced user:

* Save your best ``se_hPars`` at the location of your choice.
* Implement specific hyperparameters, such as ``se_hPars_Dense``, in a separate file and import it to provide the ``se_hPars`` argument upon layer instantiation.
* Implement program-wide new hyperparameters by adding the corresponding ``key: value`` pair in :class:`epynn.settings.se_hPars` and using it for hyperparameter assignment.

Live Examples
------------------------------

* `Practical description of the EpyNN object`_.
* `Network training - Examples`_.

.. _Practical description of the EpyNN object: epynnlive/dummy_boolean/train.html#The-EpyNN-Network-object
.. _Network training - Examples: run_examples.html