Expert Documentation
Every object contained in EpyNN is documented below.
Commons
General functions
- epynn.commons.library.configure_directory(clear=False)[source]
Configure working directory.
- Parameters
clear (bool, optional) – Remove and make directories, defaults to False.
- epynn.commons.library.read_model(model_path=None)[source]
Read EpyNN model from disk.
- Parameters
model_path (str or NoneType, optional) – Where to read model from, defaults to None which reads the last saved model in models directory.
- epynn.commons.library.read_pickle(f)[source]
Read pickle binary file.
- Parameters
f (str) – Filename.
- Returns
File content.
- Return type
Object
- epynn.commons.library.settings_verification()[source]
Import default epynn.settings.se_hPars if not present in working directory.
- epynn.commons.library.write_model(model, model_path=None)[source]
Write EpyNN model on disk.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network object.
model_path (str or NoneType, optional) – Where to write model, defaults to None which sets path in models directory.
Shared models
- class epynn.commons.models.Layer(se_hPars=None)[source]
Definition of a parent base layer prototype. Any given layer prototype inherits from this class and is defined with respect to a specific architecture (Dense, RNN, Convolution…). The parent base layer defines instance attributes common to any child layer prototype.
I/O operations
- epynn.commons.io.encode_dataset(X_data, element_to_idx, elements_size)[source]
One-hot encode a set of sequences.
- epynn.commons.io.index_elements_auto(X_data)[source]
Determine the number of distinct elements and generate dictionaries for one-hot encoding of features or labels.
- epynn.commons.io.one_hot_decode_sequence(sequence, idx_to_element)[source]
One-hot decode sequence.
- epynn.commons.io.one_hot_encode_sequence(sequence, element_to_idx, elements_size)[source]
One-hot encode sequence.
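The I/O helpers above can be pictured with a minimal plain-numpy sketch; function names and the DNA alphabet below are illustrative, not EpyNN's actual implementation:

```python
import numpy as np

def one_hot_encode(sequence, element_to_idx, elements_size):
    """One-hot encode a sequence of elements (equivalent sketch)."""
    encoded = np.zeros((len(sequence), elements_size))
    for i, element in enumerate(sequence):
        encoded[i, element_to_idx[element]] = 1.0
    return encoded

def one_hot_decode(encoded, idx_to_element):
    """Recover the original sequence from its one-hot encoding."""
    return [idx_to_element[int(idx)] for idx in np.argmax(encoded, axis=1)]

element_to_idx = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
idx_to_element = {v: k for k, v in element_to_idx.items()}

encoded = one_hot_encode('ACGT', element_to_idx, 4)
```

Each row of the encoded array holds a single 1 at the index of the corresponding element, so decoding is a row-wise argmax.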
Activation and weight initialization
- epynn.commons.maths.activation_tune(se_hPars)[source]
Set layer’s hyperparameters as temporary globals.
The function is called from within the layer for each forward and backward pass.
- epynn.commons.maths.clip_gradient(layer, max_norm=0.25)[source]
Clip gradients to avoid vanishing or exploding gradients.
- Parameters
layer (Object) – An instance of trainable layer.
max_norm (float, optional) – Maximal clipping coefficient allowed, defaults to 0.25.
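clip_gradient operates on a trainable layer instance; the standalone sketch below illustrates the same norm-based clipping idea (the dict-of-gradients interface and the global-norm strategy are assumptions, not EpyNN's exact behavior):

```python
import numpy as np

def clip_gradient(gradients, max_norm=0.25):
    """Rescale gradients so their global L2 norm does not exceed max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in gradients.values()))
    clip_coef = max_norm / (total_norm + 1e-6)   # small epsilon avoids division by zero
    if clip_coef < 1:
        gradients = {k: g * clip_coef for k, g in gradients.items()}
    return gradients

grads = {'dW': np.ones((2, 2)), 'db': np.ones(2)}
clipped = clip_gradient(grads, max_norm=0.25)
```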
- epynn.commons.maths.elu(x, deriv=False)[source]
Compute ELU activation or derivative.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
- epynn.commons.maths.hadamard(dA, dLinear)[source]
Element-wise matrix multiplication with support for the softmax derivative.
This is implemented for the Dense layer and is compatible with other layers that satisfy the same shape requirements.
- Parameters
dA (numpy.ndarray) – Input of backward propagation of shape (m, n).
dLinear (numpy.ndarray) – Linear activation product passed through the derivative of the non-linear activation function, with shape (m, n) or (m, n, n).
- epynn.commons.maths.identity(x, deriv=False)[source]
Compute identity activation or derivative.
Note: this is for testing purposes and cannot be used with backpropagation.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
- epynn.commons.maths.lrelu(x, deriv=False)[source]
Compute LReLU activation or derivative.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
- epynn.commons.maths.orthogonal(shape, rng=numpy.random)[source]
Orthogonal initialization for weight array.
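A common way to build an orthogonal weight array is QR decomposition of a random matrix; the sketch below illustrates this idea and is not necessarily EpyNN's exact implementation:

```python
import numpy as np

def orthogonal(shape, rng=np.random):
    """Orthogonal weight initialization via QR decomposition (sketch)."""
    a = rng.standard_normal(shape)
    # QR of a random Gaussian matrix yields a matrix with orthonormal columns.
    q, r = np.linalg.qr(a if shape[0] >= shape[1] else a.T)
    q = q if shape[0] >= shape[1] else q.T
    return q[:shape[0], :shape[1]]

W = orthogonal((4, 4))
```

For a square shape the result satisfies W @ W.T = I, which keeps the norm of recurrent activations stable across time steps.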
- epynn.commons.maths.relu(x, deriv=False)[source]
Compute ReLU activation or derivative.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
- epynn.commons.maths.sigmoid(x, deriv=False)[source]
Compute Sigmoid activation or derivative.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
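For reference, a minimal standalone implementation of the sigmoid and its derivative, following the signature documented above:

```python
import numpy as np

def sigmoid(x, deriv=False):
    """Compute sigmoid activation, or its derivative when deriv=True."""
    s = 1.0 / (1.0 + np.exp(-x))
    if deriv:
        return s * (1.0 - s)   # sigma'(x) = sigma(x) * (1 - sigma(x))
    return s

x = np.array([0.0, 2.0])
s = sigmoid(x)
```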
- epynn.commons.maths.softmax(x, deriv=False)[source]
Compute softmax activation or derivative.
For the Dense layer only.
For other layers, replace the element-wise multiplication operator ‘*’ with epynn.commons.maths.hadamard(), which handles the Jacobian matrix of the softmax derivative.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
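A standalone sketch of a numerically stable softmax, together with the per-sample Jacobian that hadamard() is designed to handle (helper names here are illustrative):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis (rows are samples)."""
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))   # shift for stability
    return e / np.sum(e, axis=-1, keepdims=True)

def softmax_jacobian(s):
    """Jacobian of softmax for one sample: J_ij = s_i * (delta_ij - s_j)."""
    return np.diag(s) - np.outer(s, s)

probs = softmax(np.array([[1.0, 2.0, 3.0]]))
J = softmax_jacobian(probs[0])
```

Because each output depends on every input, the derivative is an (n, n) Jacobian per sample rather than an element-wise array, which is why the plain ‘*’ operator is insufficient outside the Dense layer.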
- epynn.commons.maths.tanh(x, deriv=False)[source]
Compute tanh activation or derivative.
- Parameters
x (numpy.ndarray) – Input array to pass in function.
deriv (bool, optional) – To compute derivative, defaults to False.
- Returns
Output array passed in function.
- Return type
numpy.ndarray
Loss functions
- epynn.commons.loss.BCE(Y, A, deriv=False)[source]
Binary Cross-Entropy.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
deriv (bool, optional) – To compute the derivative.
- Returns
Loss.
- Return type
numpy.ndarray
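The binary cross-entropy and its derivative can be sketched in plain numpy as follows (the eps clipping is an assumption to avoid log(0), not necessarily EpyNN's exact behavior):

```python
import numpy as np

def BCE(Y, A, deriv=False, eps=1e-12):
    """Binary cross-entropy loss per output, or its derivative w.r.t. A."""
    A = np.clip(A, eps, 1 - eps)   # keep log() and division well-defined
    if deriv:
        return (A - Y) / (A * (1 - A))
    return -(Y * np.log(A) + (1 - Y) * np.log(1 - A))

Y = np.array([1.0, 0.0])
A = np.array([0.9, 0.1])
loss = BCE(Y, A)
```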
- epynn.commons.loss.CCE(Y, A, deriv=False)[source]
Categorical Cross-Entropy.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
deriv (bool, optional) – To compute the derivative.
- Returns
Loss.
- Return type
numpy.ndarray
- epynn.commons.loss.MAE(Y, A, deriv=False)[source]
Mean Absolute Error.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
deriv (bool, optional) – To compute the derivative.
- Returns
Loss.
- Return type
numpy.ndarray
- epynn.commons.loss.MSE(Y, A, deriv=False)[source]
Mean Squared Error.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
deriv (bool, optional) – To compute the derivative.
- Returns
Loss.
- Return type
numpy.ndarray
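A minimal sketch of MSE and its derivative with respect to A (the axis convention is an assumption):

```python
import numpy as np

def MSE(Y, A, deriv=False):
    """Mean squared error along the output axis, or its derivative w.r.t. A."""
    if deriv:
        return 2.0 * (A - Y) / Y.shape[-1]
    return np.mean((Y - A) ** 2, axis=-1)

Y = np.array([[1.0, 0.0]])
A = np.array([[0.8, 0.2]])
```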
- epynn.commons.loss.MSLE(Y, A, deriv=False)[source]
Mean Squared Logarithmic Error.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
deriv (bool, optional) – To compute the derivative.
- Returns
Loss.
- Return type
numpy.ndarray
Metrics functions
- epynn.commons.metrics.NPV(Y, A)[source]
Fraction of negative samples among excluded instances.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
- Returns
Negative Predictive Value.
- Return type
numpy.ndarray
- epynn.commons.metrics.accuracy(Y, A)[source]
Accuracy of prediction.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
- Returns
Accuracy for each sample.
- Return type
numpy.ndarray
- epynn.commons.metrics.fscore(Y, A)[source]
F-score, the harmonic mean of precision and recall.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
- Returns
F-score.
- Return type
numpy.ndarray
- epynn.commons.metrics.metrics_functions(key=None)[source]
Callback function for metrics.
- Parameters
key (str, optional) – Name of the metrics function, defaults to None which returns all functions.
- Returns
Metrics functions or computed metrics.
- Return type
dict[str: function] or
numpy.ndarray
- epynn.commons.metrics.precision(Y, A)[source]
Fraction of positive samples among retrieved instances.
- Parameters
Y (numpy.ndarray) – True labels for a set of samples.
A (numpy.ndarray) – Output of forward propagation.
- Returns
Precision.
- Return type
numpy.ndarray
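The precision, recall, and F-score relationships can be illustrated on already-thresholded binary predictions (EpyNN's metrics take the raw output A; the thresholding step here is an assumption):

```python
import numpy as np

def precision(Y, P):
    """TP / (TP + FP): fraction of true positives among retrieved instances."""
    tp = np.sum((P == 1) & (Y == 1))
    fp = np.sum((P == 1) & (Y == 0))
    return tp / (tp + fp)

def recall(Y, P):
    """TP / (TP + FN): fraction of true positives among actual positives."""
    tp = np.sum((P == 1) & (Y == 1))
    fn = np.sum((P == 0) & (Y == 1))
    return tp / (tp + fn)

def fscore(Y, P):
    """Harmonic mean of precision and recall."""
    p, r = precision(Y, P), recall(Y, P)
    return 2 * p * r / (p + r)

Y = np.array([1, 1, 0, 0])   # true labels
P = np.array([1, 0, 1, 0])   # thresholded predictions
```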
Learning rate schedulers
Logs
- epynn.commons.logs.current_logs(model, colors)[source]
Build logs with respect to headers for the current epoch, including the epoch number, learning rates, training metrics, costs and experiment name.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network object.
- Returns
Logs for current epoch.
- Return type
- epynn.commons.logs.dsets_labels_logs(dsets)[source]
Build tabular logs describing datasets Y dimension.
- Parameters
dsets (list[epynn.commons.models.dataSet]) – Attribute of an instance of embedding layer object. Contains active (non-empty) sets.
- Returns
Logs describing datasets Y dimension.
- Return type
texttable.Texttable
- epynn.commons.logs.dsets_samples_logs(dsets, se_dataset)[source]
Build tabular logs describing datasets.
- Parameters
dsets (list[epynn.commons.models.dataSet]) – Attribute of an instance of embedding layer object. Contains active (non-empty) sets.
se_dataset (dict[str, int or bool]) – Attribute of an instance of embedding layer object.
- Returns
Logs describing datasets.
- Return type
texttable.Texttable
- epynn.commons.logs.headers_logs(model, colors)[source]
Generate headers to log epochs, learning rates, training metrics, costs and experiment name.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network object.
- Returns
Headers with respect to training setup.
- Return type
- epynn.commons.logs.initialize_logs_print(model)[source]
Print model initialization logs, including information about datasets, model architecture and shapes, as well as layer hyperparameters.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network object.
- epynn.commons.logs.layers_lrate_logs(layers)[source]
Build tabular logs for layers hyperparameters related to learning rate.
- Parameters
layers (list[Object]) – Attribute of an instance of EpyNN network object.
- Returns
Logs for layers hyperparameters related to learning rate.
- Return type
texttable.Texttable
- epynn.commons.logs.layers_others_logs(layers)[source]
Build tabular logs for layers hyperparameters related to activation functions.
- Parameters
layers (list[Object]) – Attribute of an instance of EpyNN network object.
- Returns
Logs for layers hyperparameters related to activation functions.
- Return type
texttable.Texttable
- epynn.commons.logs.network_logs(network)[source]
Build tabular logs of current network architecture and shapes.
- epynn.commons.logs.pretty_json(network)[source]
Pretty json print for traceback during model initialization.
Plots
Convolution
Model
- class epynn.convolution.models.Convolution(unit_filters=1, filter_size=(3, 3), strides=None, padding=0, activate=<function relu>, initialization=<function xavier>, use_bias=True, se_hPars=None)[source]
Definition of a convolution layer prototype.
- Parameters
unit_filters (int, optional) – Number of unit filters in convolution layer, defaults to 1.
filter_size (int or tuple[int], optional) – Height and width for convolution window, defaults to (3, 3).
strides (int or tuple[int], optional) – Height and width to shift the convolution window by, defaults to None which equals filter_size.
padding (int, optional) – Number of zeros to pad each features plane with, defaults to 0.
activate (function, optional) – Non-linear activation of unit filters, defaults to relu.
initialization (function, optional) – Weight initialization function for convolution layer, defaults to xavier.
use_bias (bool, optional) – Whether the layer uses bias, defaults to True.
se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
- backward(dX)[source]
Wrapper for epynn.convolution.backward.convolution_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.convolution.parameters.convolution_compute_gradients().
- compute_shapes(A)[source]
Wrapper for epynn.convolution.parameters.convolution_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.convolution.forward.convolution_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.convolution.parameters.convolution_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.convolution.parameters.convolution_update_parameters().
Forward
- epynn.convolution.forward.convolution_forward(layer, A)[source]
Forward propagate signal to next layer.
- epynn.convolution.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.convolution.models.Convolution) – An instance of convolution layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Input blocks of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.convolution.backward.convolution_backward(layer, dX)[source]
Backward propagate error gradients to previous layer.
- epynn.convolution.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.convolution.models.Convolution) – An instance of convolution layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.convolution.parameters.convolution_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.convolution.parameters.convolution_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions for layer.
Dense
Model
- class epynn.dense.models.Dense(units=1, activate=<function sigmoid>, initialization=<function xavier>, se_hPars=None)[source]
Definition of a dense layer prototype.
- Parameters
units (int, optional) – Number of units in dense layer, defaults to 1.
activate (function, optional) – Non-linear activation of units, defaults to sigmoid.
initialization (function, optional) – Weight initialization function for dense layer, defaults to xavier.
se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
- backward(dX)[source]
Wrapper for epynn.dense.backward.dense_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.dense.parameters.dense_compute_gradients().
- compute_shapes(A)[source]
Wrapper for epynn.dense.parameters.dense_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.dense.forward.dense_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.dense.parameters.dense_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.dense.parameters.dense_update_parameters().
Forward
- epynn.dense.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.dense.models.Dense) – An instance of dense layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.dense.backward.dense_backward(layer, dX)[source]
Backward propagate error gradients to previous layer.
- epynn.dense.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.dense.models.Dense) – An instance of dense layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.dense.parameters.dense_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.dense.parameters.dense_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
Dropout
Model
- class epynn.dropout.models.Dropout(drop_prob=0.5, axis=())[source]
Definition of a dropout layer prototype.
- Parameters
drop_prob (float, optional) – Probability to drop units from previous layer, defaults to 0.5.
axis (int or tuple[int], optional) – Axis along which to generate the dropout mask, defaults to () which applies over all axes.
- backward(dX)[source]
Wrapper for epynn.dropout.backward.dropout_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.dropout.parameters.dropout_compute_gradients(). Dummy method, there are no gradients to compute in layer.
- compute_shapes(A)[source]
Wrapper for epynn.dropout.parameters.dropout_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.dropout.forward.dropout_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.dropout.parameters.dropout_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.dropout.parameters.dropout_update_parameters(). Dummy method, there are no parameters to update in layer.
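A dropout forward pass can be sketched as masking with an inverted-dropout rescale (whether EpyNN rescales this way is an assumption; the sketch shows the common inverted variant):

```python
import numpy as np

def dropout_forward(A, drop_prob=0.5, rng=None):
    """Apply an inverted-dropout mask: zero units, rescale the survivors."""
    if rng is None:
        rng = np.random.default_rng(0)
    D = rng.uniform(size=A.shape) > drop_prob   # boolean keep mask
    A = A * D / (1 - drop_prob)                 # rescale to preserve E[A]
    return A, D

A = np.ones((4, 8))
A_dropped, mask = dropout_forward(A)
```

With drop_prob=0.5, surviving unit values of 1.0 become 2.0, so the expected activation is unchanged and no rescaling is needed at inference time.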
Forward
- epynn.dropout.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.dropout.models.Dropout) – An instance of dropout layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.dropout.backward.dropout_backward(layer, dX)[source]
Backward propagate error gradients to previous layer.
- epynn.dropout.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.dropout.models.Dropout) – An instance of dropout layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.dropout.parameters.dropout_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.dropout.parameters.dropout_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
Embedding
Model
- class epynn.embedding.models.Embedding(X_data=None, Y_data=None, relative_size=(2, 1, 0), batch_size=None, X_encode=False, Y_encode=False, X_scale=False)[source]
Definition of an embedding layer prototype.
- Parameters
X_data (list[list[float or str or list[float or str]]] or NoneType, optional) – Dataset containing samples features, defaults to None which returns an empty layer.
Y_data (list[int or list[int]] or NoneType, optional) – Dataset containing samples label, defaults to None.
relative_size (tuple[int], optional) – Relative sizes of training, validation and testing sets, defaults to (2, 1, 0).
batch_size (int or NoneType, optional) – For training batches, defaults to None which makes a single batch out of the training data.
X_encode (bool, optional) – Set to True to one-hot encode features, defaults to False.
Y_encode (bool, optional) – Set to True to one-hot encode labels, defaults to False.
X_scale (bool, optional) – Normalize sample features within [0, 1], defaults to False.
- backward(dX)[source]
Wrapper for epynn.embedding.backward.embedding_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.embedding.parameters.embedding_compute_gradients(). Dummy method, there are no gradients to compute in layer.
- compute_shapes(A)[source]
Wrapper for epynn.embedding.parameters.embedding_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.embedding.forward.embedding_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.embedding.parameters.embedding_initialize_parameters().
- training_batches(init=False)[source]
Wrapper for epynn.embedding.dataset.mini_batches().
- Parameters
init (bool, optional) – Whether to prepare a zip of X and Y data, defaults to False.
- update_parameters()[source]
Wrapper for epynn.embedding.parameters.embedding_update_parameters(). Dummy method, there are no parameters to update in layer.
Data processing
- epynn.embedding.dataset.embedding_check(X_data, Y_data=None, X_scale=False)[source]
Pre-processing.
- Parameters
X_data – Set of sample features.
Y_data – Set of samples label.
X_scale (bool, optional) – Set to True to normalize sample features within [0, 1].
- Returns
Sample features and label.
- Return type
tuple[
numpy.ndarray
]
- epynn.embedding.dataset.embedding_encode(layer, X_data, Y_data, X_encode, Y_encode)[source]
One-hot encoding for samples features and label.
- Parameters
layer (epynn.embedding.models.Embedding) – An instance of the embedding layer.
X_data – Set of sample features.
Y_data – Set of samples label.
X_encode – Set to True to one-hot encode features.
Y_encode – Set to True to one-hot encode labels.
- Returns
Encoded set of sample features, if applicable.
- Return type
numpy.ndarray
- Returns
Encoded set of sample label, if applicable.
- Return type
numpy.ndarray
- epynn.embedding.dataset.embedding_prepare(layer, X_data, Y_data)[source]
Prepare dataset for Embedding layer object.
- Parameters
layer (epynn.embedding.models.Embedding) – An instance of the embedding layer.
X_data – Set of sample features.
Y_data – Set of samples label.
- Returns
All training, validation and testing sets along with batched training set
- Return type
tuple[
epynn.commons.models.dataSet
]
- epynn.embedding.dataset.mini_batches(layer)[source]
Shuffle and divide dataset in batches for each training epoch.
- Parameters
layer (epynn.embedding.models.Embedding) – An instance of the embedding layer.
- Returns
Batches made from dataset with respect to batch_size
- Return type
list[Object]
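The shuffle-and-split behavior of mini_batches() can be sketched standalone (the (X, Y) tuple interface is an assumption; EpyNN operates on the layer's dataSet objects):

```python
import numpy as np

def mini_batches(X, Y, batch_size, rng=None):
    """Shuffle the training set and split it into batches of batch_size."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.permutation(len(X))   # new shuffle every epoch
    X, Y = X[idx], Y[idx]
    return [(X[i:i + batch_size], Y[i:i + batch_size])
            for i in range(0, len(X), batch_size)]

X = np.arange(10).reshape(10, 1)
Y = np.arange(10)
batches = mini_batches(X, Y, batch_size=4)
```

The final batch may be smaller than batch_size when the training set size is not a multiple of it.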
Forward
- epynn.embedding.forward.embedding_forward(layer, A)[source]
Forward propagate signal to next layer.
- epynn.embedding.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.embedding.models.Embedding) – An instance of embedding layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.embedding.backward.embedding_backward(layer, dX)[source]
Backward propagate error gradients to previous layer.
- epynn.embedding.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.embedding.models.Embedding) – An instance of embedding layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.embedding.parameters.embedding_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.embedding.parameters.embedding_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
Flatten
Model
- class epynn.flatten.models.Flatten[source]
Definition of a flatten layer prototype.
- backward(dX)[source]
Wrapper for epynn.flatten.backward.flatten_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.flatten.parameters.flatten_compute_gradients(). Dummy method, there are no gradients to compute in layer.
- compute_shapes(A)[source]
Wrapper for epynn.flatten.parameters.flatten_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.flatten.forward.flatten_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.flatten.parameters.flatten_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.flatten.parameters.flatten_update_parameters(). Dummy method, there are no parameters to update in layer.
Forward
- epynn.flatten.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.flatten.models.Flatten) – An instance of flatten layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.flatten.backward.flatten_backward(layer, dX)[source]
Backward propagate error gradients to previous layer.
- epynn.flatten.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.flatten.models.Flatten) – An instance of flatten layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.flatten.parameters.flatten_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.flatten.parameters.flatten_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
GRU
Model
- class epynn.gru.models.GRU(unit_cells=1, activate=<function tanh>, activate_update=<function sigmoid>, activate_reset=<function sigmoid>, initialization=<function orthogonal>, clip_gradients=False, sequences=False, se_hPars=None)[source]
Definition of a GRU layer prototype.
- Parameters
unit_cells (int, optional) – Number of unit cells in GRU layer, defaults to 1.
activate (function, optional) – Non-linear activation of hidden hat (hh) state, defaults to tanh.
activate_update (function, optional) – Non-linear activation of update gate, defaults to sigmoid.
activate_reset (function, optional) – Non-linear activation of reset gate, defaults to sigmoid.
initialization (function, optional) – Weight initialization function for GRU layer, defaults to orthogonal.
clip_gradients (bool, optional) – May prevent exploding/vanishing gradients, defaults to False.
sequences (bool, optional) – Whether to return only the last hidden state or the full sequence, defaults to False.
se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
- backward(dX)[source]
Wrapper for epynn.gru.backward.gru_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.gru.parameters.gru_compute_gradients().
- compute_shapes(A)[source]
Wrapper for epynn.gru.parameters.gru_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.gru.forward.gru_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.gru.parameters.gru_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.gru.parameters.gru_update_parameters().
Forward
- epynn.gru.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.gru.models.GRU) – An instance of GRU layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Previous hidden state initialized with zeros.
- Return type
numpy.ndarray
Backward
- epynn.gru.backward.gru_backward(layer, dX)[source]
Backward propagate error gradients to previous layer.
- epynn.gru.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.gru.models.GRU) – An instance of GRU layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Next hidden state initialized with zeros.
- Return type
numpy.ndarray
Parameters
- epynn.gru.parameters.gru_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.gru.parameters.gru_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
LSTM
Model
- class epynn.lstm.models.LSTM(unit_cells=1, activate=<function tanh>, activate_output=<function sigmoid>, activate_candidate=<function tanh>, activate_input=<function sigmoid>, activate_forget=<function sigmoid>, initialization=<function orthogonal>, clip_gradients=False, sequences=False, se_hPars=None)[source]
Definition of an LSTM layer prototype.
- Parameters
unit_cells (int, optional) – Number of unit cells in LSTM layer, defaults to 1.
activate (function, optional) – Non-linear activation of hidden and memory states, defaults to tanh.
activate_output (function, optional) – Non-linear activation of output gate, defaults to sigmoid.
activate_candidate (function, optional) – Non-linear activation of candidate, defaults to tanh.
activate_input (function, optional) – Non-linear activation of input gate, defaults to sigmoid.
activate_forget (function, optional) – Non-linear activation of forget gate, defaults to sigmoid.
initialization (function, optional) – Weight initialization function for LSTM layer, defaults to orthogonal.
clip_gradients (bool, optional) – May prevent exploding/vanishing gradients, defaults to False.
sequences (bool, optional) – Whether to return only the last hidden state or the full sequence, defaults to False.
se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
- backward(dX)[source]
Wrapper for epynn.lstm.backward.lstm_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.lstm.parameters.lstm_compute_gradients().
- compute_shapes(A)[source]
Wrapper for epynn.lstm.parameters.lstm_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.lstm.forward.lstm_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.lstm.parameters.lstm_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.lstm.parameters.lstm_update_parameters().
Forward
- epynn.lstm.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.lstm.models.LSTM) – An instance of LSTM layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Previous hidden state initialized with zeros.
- Return type
numpy.ndarray
- Returns
Previous memory state initialized with zeros.
- Return type
numpy.ndarray
Backward
- epynn.lstm.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.lstm.models.LSTM) – An instance of LSTM layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Next hidden state initialized with zeros.
- Return type
numpy.ndarray
- Returns
Next memory state initialized with zeros.
- Return type
numpy.ndarray
Parameters
- epynn.lstm.parameters.lstm_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.lstm.parameters.lstm_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
Network
Model
- class epynn.network.models.EpyNN(layers, name='EpyNN')[source]
Definition of a Neural Network prototype following the EpyNN scheme.
- Parameters
- backward(dA)[source]
Wrapper for epynn.network.backward.model_backward().
- Parameters
dA (numpy.ndarray) – Derivative of the loss function with respect to the output of forward propagation.
- batch_report(batch, A)[source]
Wrapper for epynn.network.report.single_batch_report().
- evaluate()[source]
Wrapper for epynn.network.evaluate.model_evaluate(). Good spot for further implementation of early stopping procedures.
- forward(X)[source]
Wrapper for epynn.network.forward.model_forward().
- Parameters
X (numpy.ndarray) – Set of sample features.
- Returns
Output of forward propagation through all layers in the Network.
- Return type
numpy.ndarray
- initialize(loss='MSE', se_hPars={'ELU_alpha': 1, 'LRELU_alpha': 0.3, 'cycle_descent': 0, 'cycle_epochs': 0, 'decay_k': 0, 'learning_rate': 0.1, 'schedule': 'steady', 'softmax_temperature': 1}, metrics=['accuracy'], seed=None, params=True, end='\n')[source]
Wrapper for epynn.network.initialize.model_initialize(). Performs a dry epoch including every step except the parameters update.
- Parameters
loss (str, optional) – Loss function to use for training, defaults to ‘MSE’. See epynn.commons.loss for built-in functions.
se_hPars (dict[str: float or str], optional) – Global hyperparameters, defaults to epynn.settings.se_hPars. If local hyperparameters were assigned to a layer, these remain unchanged.
metrics (list[str], optional) – Metrics to monitor and print on terminal report or plot, defaults to [‘accuracy’]. See epynn.commons.metrics for built-in metrics. Note that it also accepts loss function string identifiers.
seed (int or NoneType, optional) – Seed for reproducibility in pseudo-random procedures.
params (bool, optional) – Layer parameters initialization, defaults to True.
end (str in ['\n', '\r'], optional) – Whether to print every line for initialization steps ('\n') or overwrite the current line ('\r'), defaults to '\n'.
- plot(pyplot=True, path=None)[source]
Wrapper for epynn.commons.plot.pyplot_metrics(). Plots metrics from model training.
- Parameters
pyplot (bool, optional) – Whether to display results in a GUI using matplotlib.
path (str or bool or NoneType, optional) – Where to write the matplotlib plot, defaults to None, which writes in the plots subdirectory created from epynn.commons.library.configure_directory(). Set to False to not write the plot at all.
- predict(X_data, X_encode=False, X_scale=False)[source]
Perform prediction of labels from unlabeled samples in a dataset.
- Parameters
- Returns
Data embedding and output of forward propagation.
- Return type
- report()[source]
Wrapper for epynn.network.report.model_report().
- train(epochs, verbose=None, init_logs=True)[source]
Wrapper for epynn.network.training.model_training(). It additionally computes the learning rate schedule across training epochs.
- write(path=None)[source]
Write model on disk.
- Parameters
path (str or NoneType, optional) – Path to write the model on disk, defaults to None, which writes in the models subdirectory created from epynn.commons.library.configure_directory().
Forward
Backward
Initialization
- epynn.network.initialize.model_assign_seeds(model)[source]
Seed model and layers with independent pseudo-random number generators.
The model is seeded from user input. Layers are seeded by incrementing that input by one so that objects do not all generate the same numbers.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
- epynn.network.initialize.model_initialize(model, params=True, end='\n')[source]
Initialize EpyNN network.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
params (bool, optional) – Layer parameters initialization, defaults to True.
end (str in ['\n', '\r']) – Whether to print every line for steps ('\n') or overwrite the current line ('\r'), defaults to '\n'.
- Raises
Exception – If any layer other than Dense was provided with softmax activation. See epynn.commons.maths.softmax().
- epynn.network.initialize.model_initialize_exceptions(model, trace)[source]
Handle error in model initialization and show logs.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
trace (traceback object) – Traceback of fatal error.
Training
- epynn.network.training.model_training(model)[source]
Perform the training of the Neural Network.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
Evaluation
- epynn.network.evaluate.batch_evaluate(model, Y, A)[source]
Compute metrics for current batch.
Will evaluate current batch against accuracy and training loss.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
Y (numpy.ndarray) – True labels for batch samples.
A (numpy.ndarray) – Output of forward propagation for batch.
- epynn.network.evaluate.model_evaluate(model)[source]
Compute metrics including cost for model.
Will evaluate training, testing and validation sets against metrics set in model.se_config.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
Report
- epynn.network.report.initialize_model_report(model, timeout)[source]
Report exhaustive initialization logs for datasets, model architecture and shapes, and layer hyperparameters.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
timeout (int) – Time to hold on initialization logs.
- epynn.network.report.model_report(model)[source]
Report selected metrics for datasets at current epoch.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network object.
- epynn.network.report.single_batch_report(model, batch, A)[source]
Report accuracy and cost for current batch.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
batch (epynn.commons.models.dataSet) – An instance of batch dataSet.
A (numpy.ndarray) – Output of forward propagation for batch.
Hyperparameters
- epynn.network.hyperparameters.model_hyperparameters(model)[source]
Set hyperparameters for each layer in model.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
- epynn.network.hyperparameters.model_learning_rate(model)[source]
Schedule learning rate for each layer in model.
- Parameters
model (epynn.network.models.EpyNN) – An instance of EpyNN network.
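To illustrate what a learning rate schedule derived from se_hPars-style settings could look like, here is a hedged sketch. The 'exp_decay' identifier and the exact decay formula are assumptions for illustration; only 'steady' (constant rate) and the field names are taken from the documented defaults.

```python
import math

def schedule_lr(epoch, se_hPars):
    """Hypothetical per-epoch learning rate from se_hPars-style settings."""
    lr0 = se_hPars["learning_rate"]
    if se_hPars["schedule"] == "exp_decay":          # hypothetical identifier
        return lr0 * math.exp(-se_hPars["decay_k"] * epoch)
    return lr0                                       # 'steady': constant rate

se_hPars = {"learning_rate": 0.1, "schedule": "steady", "decay_k": 0.05}
lrs = [schedule_lr(e, se_hPars) for e in range(3)]
# 'steady' keeps the rate constant: [0.1, 0.1, 0.1]
```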
Pooling
Model
- class epynn.pooling.models.Pooling(pool_size=(2, 2), strides=None, pool=<function amax>)[source]
Definition of a pooling layer prototype.
- Parameters
pool_size (int or tuple[int], optional) – Height and width of the pooling window, defaults to (2, 2).
strides (int or tuple[int], optional) – Height and width to shift the pooling window by, defaults to None, which equals pool_size.
pool (function, optional) – Pooling activation of units, defaults to numpy.amax(). Use numpy.amax for max pooling or numpy.amin for min pooling.
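To make the pool_size, strides, and pool semantics concrete, here is a minimal 2D sketch assuming strides equal to pool_size. EpyNN's pooling layer operates on batched 4D inputs, so this stands only as an illustration of the windowed reduction.

```python
import numpy as np

def pool2d(X, pool_size=(2, 2), pool=np.amax):
    """Non-overlapping 2D pooling (strides assumed equal to pool_size)."""
    ph, pw = pool_size
    h, w = X.shape[0] // ph, X.shape[1] // pw
    # Group the input into (h, ph, w, pw) blocks, then reduce each block.
    blocks = X[:h * ph, :w * pw].reshape(h, ph, w, pw)
    return pool(blocks, axis=(1, 3))

X = np.arange(16).reshape(4, 4)
out = pool2d(X)                      # max pooling: [[ 5  7] [13 15]]
out_min = pool2d(X, pool=np.amin)    # min pooling: [[ 0  2] [ 8 10]]
```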
- backward(dX)[source]
Wrapper for epynn.pooling.backward.pooling_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.pooling.parameters.pooling_compute_gradients(). Dummy method, there are no gradients to compute in layer.
- compute_shapes(A)[source]
Wrapper for epynn.pooling.parameters.pooling_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.pooling.forward.pooling_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- update_parameters()[source]
Wrapper for epynn.pooling.parameters.pooling_update_parameters(). Dummy method, there are no parameters to update in layer.
Forward
- epynn.pooling.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.pooling.models.Pooling) – An instance of pooling layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Input blocks of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.pooling.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.pooling.models.Pooling) – An instance of pooling layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.pooling.parameters.pooling_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.pooling.parameters.pooling_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
RNN
Model
- class epynn.rnn.models.RNN(unit_cells=1, activate=<function tanh>, initialization=<function xavier>, clip_gradients=True, sequences=False, se_hPars=None)[source]
Definition of an RNN layer prototype.
- Parameters
unit_cells (int, optional) – Number of unit cells in RNN layer, defaults to 1.
activate (function, optional) – Non-linear activation of hidden state, defaults to tanh.
initialization (function, optional) – Weight initialization function for RNN layer, defaults to xavier.
clip_gradients (bool, optional) – May prevent exploding/vanishing gradients, defaults to True.
sequences (bool, optional) – Whether to return only the last hidden state or the full sequence, defaults to False.
se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyperparameters, defaults to None, which inherits them from the model.
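The recurrence configured by the parameters above can be sketched as a single hidden-state update unrolled across a sequence. The weight names Wx, Wh, and b are hypothetical, not EpyNN attribute names; this is a minimal NumPy illustration, not the library's implementation.

```python
import numpy as np

def rnn_cell_step(x, h_prev, Wx, Wh, b, activate=np.tanh):
    """One RNN time step: h_t = activate(Wx x_t + Wh h_{t-1} + b)."""
    return activate(Wx @ x + Wh @ h_prev + b)

rng = np.random.default_rng(1)
units, features = 4, 3
Wx = rng.normal(size=(units, features)) * 0.1
Wh = rng.normal(size=(units, units)) * 0.1
b = np.zeros(units)

h = np.zeros(units)                  # previous hidden state, zeros at t=0
seq = rng.normal(size=(5, features))  # a sequence of 5 time steps
hs = []
for x in seq:                         # unroll across the sequence
    h = rnn_cell_step(x, h, Wx, Wh, b)
    hs.append(h)
# sequences=True would correspond to np.stack(hs); sequences=False to hs[-1]
```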
- backward(dX)[source]
Wrapper for epynn.rnn.backward.rnn_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Wrapper for epynn.rnn.parameters.rnn_compute_gradients().
- compute_shapes(A)[source]
Wrapper for epynn.rnn.parameters.rnn_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Wrapper for epynn.rnn.forward.rnn_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Wrapper for epynn.rnn.parameters.rnn_initialize_parameters().
- update_parameters()[source]
Wrapper for epynn.rnn.parameters.rnn_update_parameters().
Forward
- epynn.rnn.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.rnn.models.RNN) – An instance of RNN layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Previous hidden state initialized with zeros.
- Return type
numpy.ndarray
Backward
- epynn.rnn.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.rnn.models.RNN) – An instance of RNN layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
- Returns
Next hidden state initialized with zeros.
- Return type
numpy.ndarray
Parameters
- epynn.rnn.parameters.rnn_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.rnn.parameters.rnn_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
Template
Model
- class epynn.template.models.Template[source]
Definition of a template layer prototype. This is a pass-through or inactive layer prototype which contains method definitions used for all active layers. For all layer prototypes, methods are wrappers of functions which contain the specific implementations.
- backward(dX)[source]
Is a wrapper for epynn.template.backward.template_backward().
- Parameters
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Output of backward propagation for current layer.
- Return type
numpy.ndarray
- compute_gradients()[source]
Is a wrapper for epynn.template.parameters.template_compute_gradients(). Dummy method, there are no gradients to compute in layer.
- compute_shapes(A)[source]
Is a wrapper for epynn.template.parameters.template_compute_shapes().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- forward(A)[source]
Is a wrapper for epynn.template.forward.template_forward().
- Parameters
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Output of forward propagation for current layer.
- Return type
numpy.ndarray
- initialize_parameters()[source]
Is a wrapper for epynn.template.parameters.template_initialize_parameters().
- update_parameters()[source]
Is a wrapper for epynn.template.parameters.template_update_parameters(). Dummy method, there are no parameters to update in layer.
Forward
- epynn.template.forward.initialize_forward(layer, A)[source]
Forward cache initialization.
- Parameters
layer (epynn.template.models.Template) – An instance of template layer.
A (numpy.ndarray) – Output of forward propagation from previous layer.
- Returns
Input of forward propagation for current layer.
- Return type
numpy.ndarray
Backward
- epynn.template.backward.initialize_backward(layer, dX)[source]
Backward cache initialization.
- Parameters
layer (epynn.template.models.Template) – An instance of template layer.
dX (numpy.ndarray) – Output of backward propagation from next layer.
- Returns
Input of backward propagation for current layer.
- Return type
numpy.ndarray
Parameters
- epynn.template.parameters.template_compute_gradients(layer)[source]
Compute gradients with respect to weight and bias for layer.
- epynn.template.parameters.template_compute_shapes(layer, A)[source]
Compute forward shapes and dimensions from input for layer.
Settings
- epynn.settings.se_hPars = {'ELU_alpha': 1, 'LRELU_alpha': 0.3, 'cycle_descent': 0, 'cycle_epochs': 0, 'decay_k': 0, 'learning_rate': 0.1, 'schedule': 'steady', 'softmax_temperature': 1}
Hyperparameters dictionary settings.
Set hyperparameters for model and layer.
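For reference, the default dictionary written out with per-key comments. The comments are interpretations of each field, not text taken verbatim from EpyNN.

```python
# Default hyperparameter settings, mirroring epynn.settings.se_hPars.
# The per-key comments are interpretations and may not match EpyNN's
# documentation exactly.
se_hPars = {
    "learning_rate": 0.1,        # initial learning rate
    "schedule": "steady",        # learning rate schedule identifier
    "decay_k": 0,                # decay constant for decaying schedules
    "cycle_epochs": 0,           # epochs per cycle for cyclical schedules
    "cycle_descent": 0,          # descent factor applied between cycles
    "ELU_alpha": 1,              # alpha for the ELU activation
    "LRELU_alpha": 0.3,          # slope alpha for the leaky ReLU activation
    "softmax_temperature": 1,    # temperature for the softmax activation
}
```

Passing this dictionary to EpyNN.initialize(se_hPars=...) sets global hyperparameters; layers given their own se_hPars keep their local values.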