Expert Documentation

Every object contained in EpyNN is documented below.

Commons

General functions

epynn.commons.library.configure_directory(clear=False)[source]

Configure working directory.

Parameters

clear (bool, optional) – Remove and make directories, defaults to False.

epynn.commons.library.read_file(f)[source]

Read text file.

Parameters

f (str) – Filename.

Returns

File content.

Return type

str

epynn.commons.library.read_model(model_path=None)[source]

Read EpyNN model from disk.

Parameters

model_path (str or NoneType, optional) – Where to read model from, defaults to None which reads the last saved model in models directory.

epynn.commons.library.read_pickle(f)[source]

Read pickle binary file.

Parameters

f (str) – Filename.

Returns

File content.

Return type

Object

epynn.commons.library.settings_verification()[source]

Import default epynn.settings.se_hPars if not present in working directory.

epynn.commons.library.write_model(model, model_path=None)[source]

Write EpyNN model on disk.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network object.

  • model_path (str or NoneType, optional) – Where to write model, defaults to None which sets path in models directory.

epynn.commons.library.write_pickle(f, c)[source]

Write pickle binary file.

Parameters
  • f (str) – Filename.

  • c (Object) – Content to write.

Shared models

class epynn.commons.models.Layer(se_hPars=None)[source]

Definition of a parent base layer prototype. Any given layer prototype inherits from this class and is defined with respect to a specific architecture (Dense, RNN, Convolution…). The parent base layer defines instance attributes common to any child layer prototype.

update_shapes(cache, shapes)[source]

Update shapes from cache.

Parameters
  • cache (dict[str, numpy.ndarray]) – Cache from forward or backward propagation.

  • shapes (dict[str, tuple[int]]) – Corresponding shapes.

class epynn.commons.models.dataSet(X_data, Y_data=None, name='dummy')[source]

Definition of a dataSet object prototype.

Parameters
  • X_data (list[list[int or float]]) – Set of sample features.

  • Y_data (list[list[int] or int] or NoneType, optional) – Set of sample labels, defaults to None.

  • name (str, optional) – Name of set, defaults to ‘dummy’.

I/O operations

epynn.commons.io.encode_dataset(X_data, element_to_idx, elements_size)[source]

One-hot encode a set of sequences.

Parameters
  • X_data (numpy.ndarray) – Contains sequences.

  • element_to_idx (dict[str or int or float, int]) – Converter with word as key and index as value.

  • elements_size (int) – Number of keys in converter.

Returns

One-hot encoded dataset.

Return type

list[numpy.ndarray]

epynn.commons.io.index_elements_auto(X_data)[source]

Determine elements size and generate dictionaries for one-hot encoding of features or labels.

Parameters

X_data (numpy.ndarray) – Dataset containing samples features or samples label.

Returns

One-hot encoding converter.

Return type

dict[str or int or float, int]

Returns

One-hot decoding converter.

Return type

dict[int, str or int or float]

Returns

Vocabulary size.

Return type

int
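The behavior documented above can be sketched in plain Python (an illustrative re-implementation, not EpyNN's actual code, which operates on numpy arrays):

```python
# Illustrative sketch of index_elements_auto(): build the encoding and
# decoding converters plus the vocabulary size from a set of sequences.
def index_elements(X_data):
    elements = sorted(set(e for seq in X_data for e in seq))
    element_to_idx = {e: i for i, e in enumerate(elements)}
    idx_to_element = {i: e for e, i in element_to_idx.items()}
    return element_to_idx, idx_to_element, len(elements)

e2i, i2e, size = index_elements([['A', 'T'], ['G', 'A']])
# e2i == {'A': 0, 'G': 1, 'T': 2}, i2e[1] == 'G', size == 3
```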

epynn.commons.io.one_hot_decode_sequence(sequence, idx_to_element)[source]

One-hot decode sequence.

Parameters
  • sequence (list or numpy.ndarray) – One-hot encoded sequence.

  • idx_to_element (dict[int, str or int or float]) – Converter with index as key and word as value.

Returns

One-hot decoded sequence.

Return type

list[str or int or float]

epynn.commons.io.one_hot_encode(i, elements_size)[source]

Generate one-hot encoding array.

Parameters
  • i (int) – One-hot index for current word.

  • elements_size (int) – Number of keys in the word to index encoder.

Returns

One-hot encoding array for current word.

Return type

numpy.ndarray

epynn.commons.io.one_hot_encode_sequence(sequence, element_to_idx, elements_size)[source]

One-hot encode sequence.

Parameters
  • sequence (list or numpy.ndarray) – Sequential data.

  • element_to_idx (dict[str or int or float, int]) – Converter with word as key and index as value.

  • elements_size (int) – Number of keys in converter.

Returns

One-hot encoded sequence.

Return type

numpy.ndarray
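The two encoders above compose as follows; this sketch uses plain lists of lists where EpyNN returns numpy arrays:

```python
# Illustrative sketch of one_hot_encode() and one_hot_encode_sequence().
def one_hot_encode(i, elements_size):
    row = [0] * elements_size
    row[i] = 1
    return row

def one_hot_encode_sequence(sequence, element_to_idx, elements_size):
    return [one_hot_encode(element_to_idx[e], elements_size)
            for e in sequence]

encoded = one_hot_encode_sequence('TAG', {'A': 0, 'G': 1, 'T': 2}, 3)
# encoded == [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
```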

epynn.commons.io.padding(X_data, padding, forward=True)[source]

Image padding.

Parameters
  • X_data (numpy.ndarray) – Array representing a set of images.

  • padding (int) – Number of zeros to add on each side of the image.

  • forward (bool, optional) – Set to False to remove padding, defaults to True.

epynn.commons.io.scale_features(X_data)[source]

Scale input array within [0, 1].

Parameters

X_data (numpy.ndarray) – Raw data.

Returns

Normalized data.

Return type

numpy.ndarray
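A minimal sketch of scaling to [0, 1], assuming min-max normalization (the exact normalization EpyNN applies is not spelled out above):

```python
# Min-max scaling to [0, 1]; assumes the input is not constant.
def scale_features(X_data):
    lo, hi = min(X_data), max(X_data)
    return [(x - lo) / (hi - lo) for x in X_data]

scaled = scale_features([2.0, 4.0, 6.0])
# scaled == [0.0, 0.5, 1.0]
```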

Activation and weight initialization

epynn.commons.maths.activation_tune(se_hPars)[source]

Set layer’s hyperparameters as temporary globals.

For each forward and backward pass the function is called from within the layer.

Parameters

se_hPars (dict[str, str or float]) – Local hyperparameters for layers.

epynn.commons.maths.clip_gradient(layer, max_norm=0.25)[source]

Clip to avoid vanishing or exploding gradients.

Parameters
  • layer (Object) – An instance of trainable layer.

  • max_norm (float, optional) – Maximal clipping coefficient allowed, defaults to 0.25.
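One common clipping scheme consistent with a max_norm parameter is norm-based rescaling; this is a sketch of that idea, not necessarily EpyNN's exact rule:

```python
import math

# Rescale each gradient so its L2 norm does not exceed max_norm.
def clip_gradient(grads, max_norm=0.25):
    clipped = {}
    for name, g in grads.items():
        norm = math.sqrt(sum(v * v for v in g))
        coef = min(1.0, max_norm / (norm + 1e-8))  # avoid division by zero
        clipped[name] = [v * coef for v in g]
    return clipped

g = clip_gradient({'dW': [3.0, 4.0]})  # norm 5.0, clipped down to 0.25
```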

epynn.commons.maths.elu(x, deriv=False)[source]

Compute ELU activation or derivative.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray

epynn.commons.maths.hadamard(dA, dLinear)[source]

Element-wise matrix multiplication with support for softmax derivative.

This is implemented for Dense layer and is compatible with other layers satisfying requirements.

Parameters
  • dA (numpy.ndarray) – Input of backward propagation of shape (m, n).

  • dLinear (numpy.ndarray) – Linear activation product passed through the derivative of the non-linear activation function with shape (m, n) or (m, n, n).

epynn.commons.maths.identity(x, deriv=False)[source]

Compute identity activation or derivative.

Note: this is for testing purposes and cannot be used with backpropagation.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray

epynn.commons.maths.lrelu(x, deriv=False)[source]

Compute LReLU activation or derivative.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray

epynn.commons.maths.orthogonal(shape, rng=numpy.random)[source]

Orthogonal initialization for weight array.

Parameters
  • shape (tuple[int]) – Shape of weight array.

  • rng (numpy.random) – Pseudo-random number generator, defaults to np.random.

Returns

Initialized weight array.

Return type

numpy.ndarray

epynn.commons.maths.relu(x, deriv=False)[source]

Compute ReLU activation or derivative.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray

epynn.commons.maths.sigmoid(x, deriv=False)[source]

Compute Sigmoid activation or derivative.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray
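The deriv flag convention shared by the activations above can be sketched with sigmoid, whose derivative has the closed form s * (1 - s):

```python
import math

# Element-wise sigmoid; with deriv=True, returns s * (1 - s).
def sigmoid(x, deriv=False):
    s = [1.0 / (1.0 + math.exp(-v)) for v in x]
    if deriv:
        return [si * (1.0 - si) for si in s]
    return s

assert sigmoid([0.0]) == [0.5]
assert sigmoid([0.0], deriv=True) == [0.25]
```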

epynn.commons.maths.softmax(x, deriv=False)[source]

Compute softmax activation or derivative.

For Dense layer only.

For other layers, you can replace the element-wise matrix multiplication operator ‘*’ with epynn.commons.maths.hadamard(), which handles the softmax derivative Jacobian matrix.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray
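A numerically stable softmax (forward only) can be sketched as follows; subtracting max(x) before exponentiation avoids overflow without changing the result, though EpyNN's internals may differ:

```python
import math

# Stable softmax over a single vector of scores.
def softmax(x):
    m = max(x)
    exps = [math.exp(v - m) for v in x]
    total = sum(exps)
    return [e / total for e in exps]

p = softmax([1.0, 2.0, 3.0])
assert abs(sum(p) - 1.0) < 1e-9 and p[2] > p[1] > p[0]
```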

epynn.commons.maths.tanh(x, deriv=False)[source]

Compute tanh activation or derivative.

Parameters
  • x (numpy.ndarray) – Input array to pass in function.

  • deriv (bool, optional) – To compute derivative, defaults to False.

Returns

Output array passed in function.

Return type

numpy.ndarray

epynn.commons.maths.xavier(shape, rng=numpy.random)[source]

Xavier Normal Distribution initialization for weight array.

Parameters
  • shape (tuple[int]) – Shape of weight array.

  • rng (numpy.random) – Pseudo-random number generator, defaults to np.random.

Returns

Initialized weight array.

Return type

numpy.ndarray

Loss functions

epynn.commons.loss.BCE(Y, A, deriv=False)[source]

Binary Cross-Entropy.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

  • deriv (bool, optional) – To compute the derivative.

Returns

Loss.

Return type

numpy.ndarray

epynn.commons.loss.CCE(Y, A, deriv=False)[source]

Categorical Cross-Entropy.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

  • deriv (bool, optional) – To compute the derivative.

Returns

Loss.

Return type

numpy.ndarray

epynn.commons.loss.MAE(Y, A, deriv=False)[source]

Mean Absolute Error.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

  • deriv (bool, optional) – To compute the derivative.

Returns

Loss.

Return type

numpy.ndarray

epynn.commons.loss.MSE(Y, A, deriv=False)[source]

Mean Squared Error.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

  • deriv (bool, optional) – To compute the derivative.

Returns

Loss.

Return type

numpy.ndarray

epynn.commons.loss.MSLE(Y, A, deriv=False)[source]

Mean Squared Logarithmic Error.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

  • deriv (bool, optional) – To compute the derivative.

Returns

Loss.

Return type

numpy.ndarray

epynn.commons.loss.loss_functions(key=None, output_activation=None)[source]

Callback function for loss.

Parameters
  • key (str, optional) – Name of the loss function, defaults to None which returns all functions.

  • output_activation (str, optional) – Name of the activation function for output layer.

Raises
  • Exception – If key is CCE and output activation is different from softmax.

  • Exception – If key is either CCE, BCE or MSLE and output activation is tanh.

Returns

Loss functions or computed loss.

Return type

dict[str, function] or numpy.ndarray
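The key=None callback pattern described above can be sketched as a small registry that returns the whole function table when no key is given, or applies the selected function otherwise (the MSE/MAE bodies are simplified stand-ins for the EpyNN versions):

```python
# Registry-style dispatch mirroring loss_functions(key=None, ...).
def MSE(Y, A):
    return sum((y - a) ** 2 for y, a in zip(Y, A)) / len(Y)

def MAE(Y, A):
    return sum(abs(y - a) for y, a in zip(Y, A)) / len(Y)

def loss_functions(key=None, Y=None, A=None):
    functions = {'MSE': MSE, 'MAE': MAE}
    if key is None:
        return functions              # all registered loss functions
    return functions[key](Y, A)       # computed loss

assert set(loss_functions()) == {'MSE', 'MAE'}
assert loss_functions('MSE', [1.0, 0.0], [0.5, 0.0]) == 0.125
```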

Metrics functions

epynn.commons.metrics.NPV(Y, A)[source]

Fraction of true negative samples among instances predicted negative.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

Returns

Negative Predictive Value.

Return type

numpy.ndarray

epynn.commons.metrics.accuracy(Y, A)[source]

Accuracy of prediction.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

Returns

Accuracy for each sample.

Return type

numpy.ndarray

epynn.commons.metrics.fscore(Y, A)[source]

F-score, the harmonic mean of recall and precision.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

Returns

F-score.

Return type

numpy.ndarray

epynn.commons.metrics.metrics_functions(key=None)[source]

Callback function for metrics.

Parameters

key (str, optional) – Name of the metrics function, defaults to None which returns all functions.

Returns

Metrics functions or computed metrics.

Return type

dict[str, function] or numpy.ndarray

epynn.commons.metrics.precision(Y, A)[source]

Fraction of positive samples among retrieved instances.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

Returns

Precision.

Return type

numpy.ndarray

epynn.commons.metrics.recall(Y, A)[source]

Fraction of actual positive instances that are retrieved.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

Returns

Recall.

Return type

numpy.ndarray

epynn.commons.metrics.specificity(Y, A)[source]

Fraction of actual negative samples correctly excluded.

Parameters
  • Y (numpy.ndarray) – True labels for a set of samples.

  • A (numpy.ndarray) – Output of forward propagation.

Returns

Specificity.

Return type

numpy.ndarray
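The binary metrics above reduce to ratios of confusion counts; this sketch works with labels in {0, 1} and hard predictions, whereas EpyNN computes them on numpy arrays:

```python
# Precision, recall and F-score from true/false positive/negative counts.
def binary_metrics(Y, P):
    tp = sum(1 for y, p in zip(Y, P) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(Y, P) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(Y, P) if y == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fscore = 2 * precision * recall / (precision + recall)
    return precision, recall, fscore

p, r, f = binary_metrics([1, 1, 0, 0], [1, 0, 1, 0])
# tp=1, fp=1, fn=1 -> precision = recall = fscore = 0.5
```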

Learning rate schedulers

epynn.commons.schedule.exp_decay(hPars)[source]

Exponential decay schedule for learning rate.

Parameters

hPars (tuple[int or float]) – Contains hyperparameters.

Returns

Scheduled learning rate.

Return type

list[float]
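A hedged sketch of what an exponential decay schedule computes; the exact contents of hPars (initial rate, decay factor, decay interval, epoch count) are assumptions about what the tuple carries:

```python
# Decay the learning rate by a factor decay_k every decay_every epochs.
def exp_decay(lrate, decay_k, epochs, decay_every=1):
    return [lrate * decay_k ** (e // decay_every) for e in range(epochs)]

rates = exp_decay(0.1, 0.5, epochs=4, decay_every=2)
# rates ~= [0.1, 0.1, 0.05, 0.05]
```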

epynn.commons.schedule.lin_decay(hPars)[source]

Linear decay schedule for learning rate.

Parameters

hPars (tuple[int or float]) – Contains hyperparameters.

Returns

Scheduled learning rate.

Return type

list[float]

epynn.commons.schedule.schedule_functions(schedule, hPars)[source]

Routes hyperparameters to the relevant scheduler.

Parameters
  • schedule (str) – Schedule mode.

  • hPars (tuple[int or float]) – Contains hyperparameters.

Returns

Scheduled learning rate.

Return type

list[float]

epynn.commons.schedule.steady(hPars)[source]

Steady schedule for learning rate.

Parameters

hPars (tuple[int or float]) – Contains hyperparameters.

Returns

Scheduled learning rate.

Return type

list[float]

Logs

epynn.commons.logs.current_logs(model, colors)[source]

Build logs with respect to headers for the current epoch, including epoch number, learning rates, training metrics, costs and experiment name.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network object.

  • colors – Colors for terminal rendering of logs.

Returns

Logs for current epoch.

Return type

list[str]

epynn.commons.logs.dsets_labels_logs(dsets)[source]

Build tabular logs describing datasets Y dimension.

Parameters

dsets (list[epynn.commons.models.dataSet]) – Attribute of an instance of embedding layer object. Contains active (non-empty) sets.

Returns

Logs describing datasets Y dimension.

Return type

texttable.Texttable

epynn.commons.logs.dsets_samples_logs(dsets, se_dataset)[source]

Build tabular logs describing datasets.

Parameters
  • dsets (list[epynn.commons.models.dataSet]) – Attribute of an instance of embedding layer object. Contains active (non-empty) sets.

  • se_dataset (dict[str, int or bool]) – Attribute of an instance of embedding layer object.

Returns

Logs describing datasets.

Return type

texttable.Texttable

epynn.commons.logs.headers_logs(model, colors)[source]

Generate headers to log epochs, learning rates, training metrics, costs and experiment name.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network object.

  • colors – Colors for terminal rendering of logs.

Returns

Headers with respect to training setup.

Return type

list[str]

epynn.commons.logs.initialize_logs_print(model)[source]

Print model initialization logs, which include information about datasets, model architecture and shapes, as well as layer hyperparameters.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network object.

epynn.commons.logs.layers_lrate_logs(layers)[source]

Build tabular logs for layers hyperparameters related to learning rate.

Parameters

layers (list[Object]) – Attribute of an instance of EpyNN network object.

Returns

Logs for layers hyperparameters related to learning rate.

Return type

texttable.Texttable

epynn.commons.logs.layers_others_logs(layers)[source]

Build tabular logs for layers hyperparameters related to activation functions.

Parameters

layers (list[Object]) – Attribute of an instance of EpyNN network object.

Returns

Logs for layers hyperparameters related to activation functions.

Return type

texttable.Texttable

epynn.commons.logs.network_logs(network)[source]

Build tabular logs of current network architecture and shapes.

Parameters

network (dict[str, dict[str, str or tuple[int]]]) – Attribute of an instance of EpyNN network object.

Returns

Logs for network architecture and shapes.

Return type

texttable.Texttable

epynn.commons.logs.pretty_json(network)[source]

Pretty json print for traceback during model initialization.

Parameters

network (dict[str, dict[str, str or tuple[int]]]) – Attribute of an instance of EpyNN network object.

Returns

Formatted input.

Return type

json

epynn.commons.logs.process_logs(msg, level=0)[source]

Pretty print of EpyNN events.

Parameters
  • msg (str) – Message to print on terminal.

  • level (int, optional) – Set color for print, defaults to 0 which renders white.

epynn.commons.logs.set_highlighted_excepthook()[source]

Lexer to pretty print tracebacks.

epynn.commons.logs.start_counter(timeout=3)[source]

Timeout between print of initialization logs and beginning of run.

Parameters

timeout (int, optional) – Time in seconds, defaults to 3.

Plots

epynn.commons.plot.pyplot_metrics(model, path)[source]

Plot metrics/costs from training with matplotlib.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network object.

  • path (bool or NoneType) – Write matplotlib plot.

Convolution

Model

class epynn.convolution.models.Convolution(unit_filters=1, filter_size=(3, 3), strides=None, padding=0, activate=<function relu>, initialization=<function xavier>, use_bias=True, se_hPars=None)[source]

Definition of a convolution layer prototype.

Parameters
  • unit_filters (int, optional) – Number of unit filters in convolution layer, defaults to 1.

  • filter_size (int or tuple[int], optional) – Height and width for convolution window, defaults to (3, 3).

  • strides (int or tuple[int], optional) – Height and width to shift the convolution window by, defaults to None which equals filter_size.

  • padding (int, optional) – Number of zeros to pad each features plane with, defaults to 0.

  • activate (function, optional) – Non-linear activation of unit filters, defaults to relu.

  • initialization (function, optional) – Weight initialization function for convolution layer, defaults to xavier.

  • use_bias (bool, optional) – Whether the layer uses bias, defaults to True.

  • se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
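The spatial output size implied by filter_size, strides and padding follows the usual convolution arithmetic; a sketch for one dimension (not EpyNN code — note that in EpyNN, strides defaults to filter_size):

```python
# Output spatial size of a convolution along one dimension.
def conv_output_size(in_size, filter_size, stride, padding=0):
    return (in_size + 2 * padding - filter_size) // stride + 1

assert conv_output_size(28, filter_size=3, stride=1) == 26
assert conv_output_size(28, filter_size=3, stride=3, padding=1) == 10
```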

backward(dX)[source]

Wrapper for epynn.convolution.backward.convolution_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.convolution.parameters.convolution_compute_gradients().

compute_shapes(A)[source]

Wrapper for epynn.convolution.parameters.convolution_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.convolution.forward.convolution_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.convolution.parameters.convolution_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.convolution.parameters.convolution_update_parameters().

Forward

epynn.convolution.forward.convolution_forward(layer, A)[source]

Forward propagate signal to next layer.

epynn.convolution.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.convolution.models.Convolution) – An instance of convolution layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray


Returns

Input blocks of forward propagation for current layer.

Return type

numpy.ndarray

Backward

epynn.convolution.backward.convolution_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

epynn.convolution.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.convolution.models.Convolution) – An instance of convolution layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Parameters

epynn.convolution.parameters.convolution_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.convolution.parameters.convolution_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions for layer.

epynn.convolution.parameters.convolution_initialize_parameters(layer)[source]

Initialize parameters for layer.

epynn.convolution.parameters.convolution_update_parameters(layer)[source]

Update parameters for layer.

Dense

Model

class epynn.dense.models.Dense(units=1, activate=<function sigmoid>, initialization=<function xavier>, se_hPars=None)[source]

Definition of a dense layer prototype.

Parameters
  • units (int, optional) – Number of units in dense layer, defaults to 1.

  • activate (function, optional) – Non-linear activation of units, defaults to sigmoid.

  • initialization (function, optional) – Weight initialization function for dense layer, defaults to xavier.

  • se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
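Conceptually, a dense forward pass computes A_out = activate(A W + b). A plain-Python sketch with W stored as one weight vector per unit and sigmoid (the default activation above); EpyNN's implementation is vectorized with numpy:

```python
import math

# One dense layer on a single sample: linear step, then sigmoid.
def dense_forward(A, W, b):
    Z = [sum(a * w for a, w in zip(A, unit)) + bi
         for unit, bi in zip(W, b)]
    return [1.0 / (1.0 + math.exp(-z)) for z in Z]

out = dense_forward([1.0, 2.0], W=[[0.0, 0.0]], b=[0.0])
assert out == [0.5]  # a zero-weight unit outputs sigmoid(0)
```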

backward(dX)[source]

Wrapper for epynn.dense.backward.dense_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.dense.parameters.dense_compute_gradients().

compute_shapes(A)[source]

Wrapper for epynn.dense.parameters.dense_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.dense.forward.dense_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.dense.parameters.dense_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.dense.parameters.dense_update_parameters().

Forward

epynn.dense.forward.dense_forward(layer, A)[source]

Forward propagate signal to next layer.

epynn.dense.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.dense.models.Dense) – An instance of dense layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Backward

epynn.dense.backward.dense_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

epynn.dense.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.dense.models.Dense) – An instance of dense layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Parameters

epynn.dense.parameters.dense_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.dense.parameters.dense_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.dense.parameters.dense_initialize_parameters(layer)[source]

Initialize trainable parameters from shapes for layer.

epynn.dense.parameters.dense_update_parameters(layer)[source]

Update parameters from gradients for layer.

Dropout

Model

class epynn.dropout.models.Dropout(drop_prob=0.5, axis=())[source]

Definition of a dropout layer prototype.

Parameters
  • drop_prob (float, optional) – Probability to drop one data point from previous layer to next layer, defaults to 0.5.

  • axis (int or tuple[int], optional) – Compute and apply dropout mask along defined axes, defaults to all axes.
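An inverted-dropout sketch of the forward pass (the 1 / (1 - drop_prob) rescaling is an assumption about EpyNN's exact scheme): surviving activations are scaled up so the expected activation is unchanged.

```python
import random

# Draw a binary mask, zero out dropped entries, rescale survivors.
def dropout_forward(A, drop_prob=0.5, rng=random):
    mask = [1 if rng.random() >= drop_prob else 0 for _ in A]
    keep = 1.0 - drop_prob
    return [a * m / keep for a, m in zip(A, mask)], mask

random.seed(0)
out, mask = dropout_forward([1.0, 1.0, 1.0, 1.0])
# each entry is either dropped (0.0) or rescaled (2.0)
```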

backward(dX)[source]

Wrapper for epynn.dropout.backward.dropout_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.dropout.parameters.dropout_compute_gradients(). Dummy method, there are no gradients to compute in layer.

compute_shapes(A)[source]

Wrapper for epynn.dropout.parameters.dropout_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.dropout.forward.dropout_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.dropout.parameters.dropout_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.dropout.parameters.dropout_update_parameters(). Dummy method, there are no parameters to update in layer.

Forward

epynn.dropout.forward.dropout_forward(layer, A)[source]

Forward propagate signal to next layer.

epynn.dropout.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.dropout.models.Dropout) – An instance of dropout layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Backward

epynn.dropout.backward.dropout_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

epynn.dropout.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.dropout.models.Dropout) – An instance of dropout layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Parameters

epynn.dropout.parameters.dropout_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.dropout.parameters.dropout_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.dropout.parameters.dropout_initialize_parameters(layer)[source]

Initialize trainable parameters from shapes for layer.

epynn.dropout.parameters.dropout_update_parameters(layer)[source]

Update parameters from gradients for layer.

Embedding

Model

class epynn.embedding.models.Embedding(X_data=None, Y_data=None, relative_size=(2, 1, 0), batch_size=None, X_encode=False, Y_encode=False, X_scale=False)[source]

Definition of an embedding layer prototype.

Parameters
  • X_data (list[list[float or str or list[float or str]]] or NoneType, optional) – Dataset containing samples features, defaults to None which returns an empty layer.

  • Y_data (list[int or list[int]] or NoneType, optional) – Dataset containing samples label, defaults to None.

  • relative_size (tuple[int], optional) – Relative sizes of training, validation and testing sets, defaults to (2, 1, 0).

  • batch_size (int or NoneType, optional) – For training batches, defaults to None which makes a single batch out of the training data.

  • X_encode (bool, optional) – Set to True to one-hot encode features, defaults to False.

  • Y_encode (bool, optional) – Set to True to one-hot encode labels, defaults to False.

  • X_scale (bool, optional) – Normalize sample features within [0, 1], defaults to False.

backward(dX)[source]

Wrapper for epynn.embedding.backward.embedding_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.embedding.parameters.embedding_compute_gradients(). Dummy method, there are no gradients to compute in layer.

compute_shapes(A)[source]

Wrapper for epynn.embedding.parameters.embedding_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.embedding.forward.embedding_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.embedding.parameters.embedding_initialize_parameters().

training_batches(init=False)[source]

Wrapper for epynn.embedding.dataset.mini_batches().

Parameters

init (bool, optional) – Whether to prepare a zip of X and Y data, defaults to False.

update_parameters()[source]

Wrapper for epynn.embedding.parameters.embedding_update_parameters(). Dummy method, there are no parameters to update in layer.

Data processing

epynn.embedding.dataset.embedding_check(X_data, Y_data=None, X_scale=False)[source]

Pre-processing.

Parameters
  • X_data – Set of sample features.

  • Y_data – Set of sample labels.

  • X_scale (bool, optional) – Set to True to normalize sample features within [0, 1].

Returns

Sample features and label.

Return type

tuple[numpy.ndarray]

epynn.embedding.dataset.embedding_encode(layer, X_data, Y_data, X_encode, Y_encode)[source]

One-hot encoding for samples features and label.

Parameters
  • layer (epynn.embedding.models.Embedding) – An instance of embedding layer.

  • X_data – Set of sample features.

  • Y_data – Set of sample labels.

  • X_encode (bool) – Set to True to one-hot encode features.

  • Y_encode (bool) – Set to True to one-hot encode labels.

Returns

Encoded set of sample features, if applicable.

Return type

numpy.ndarray

Returns

Encoded set of sample labels, if applicable.

Return type

numpy.ndarray

epynn.embedding.dataset.embedding_prepare(layer, X_data, Y_data)[source]

Prepare dataset for Embedding layer object.

Parameters
  • layer (epynn.embedding.models.Embedding) – An instance of embedding layer.

  • X_data – Set of sample features.

  • Y_data – Set of sample labels.

Returns

All training, validation and testing sets along with the batched training set.

Return type

tuple[epynn.commons.models.dataSet]

epynn.embedding.dataset.mini_batches(layer)[source]

Shuffle and divide dataset in batches for each training epoch.

Parameters

layer (epynn.embedding.models.Embedding) – An instance of embedding layer.

Returns

Batches made from dataset with respect to batch_size.

Return type

list[Object]
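The per-epoch batching documented above can be sketched as shuffle-then-slice, keeping any remainder as a final smaller batch (an illustration, not EpyNN's actual code):

```python
import random

# Shuffle samples, then slice into consecutive batches of batch_size.
def mini_batches(samples, batch_size, rng=random):
    shuffled = list(samples)
    rng.shuffle(shuffled)
    return [shuffled[i:i + batch_size]
            for i in range(0, len(shuffled), batch_size)]

batches = mini_batches(range(10), batch_size=4)
# sizes: [4, 4, 2]; every sample appears exactly once
```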

epynn.embedding.dataset.split_dataset(dataset, se_dataset)[source]

Split dataset in training, testing and validation sets.

Parameters
  • dataset (tuple[list or numpy.ndarray]) – Dataset containing sample features and labels.

  • se_dataset (dict[str, int]) – Settings for sets preparation.

Returns

Training, testing and validation sets.

Return type

tuple[list]

Forward

epynn.embedding.forward.embedding_forward(layer, A)[source]

Forward propagate signal to next layer.

epynn.embedding.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.embedding.models.Embedding) – An instance of embedding layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Backward

epynn.embedding.backward.embedding_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

epynn.embedding.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.embedding.models.Embedding) – An instance of embedding layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Parameters

epynn.embedding.parameters.embedding_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.embedding.parameters.embedding_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.embedding.parameters.embedding_initialize_parameters(layer)[source]

Initialize parameters from shapes for layer.

epynn.embedding.parameters.embedding_update_parameters(layer)[source]

Update parameters from gradients for layer.

Flatten

Model

class epynn.flatten.models.Flatten[source]

Definition of a flatten layer prototype.

backward(dX)[source]

Wrapper for epynn.flatten.backward.flatten_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.flatten.parameters.flatten_compute_gradients(). Dummy method, there are no gradients to compute in layer.

compute_shapes(A)[source]

Wrapper for epynn.flatten.parameters.flatten_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.flatten.forward.flatten_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.flatten.parameters.flatten_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.flatten.parameters.flatten_update_parameters(). Dummy method, there are no parameters to update in layer.

Forward

epynn.flatten.forward.flatten_forward(layer, A)[source]

Forward propagate signal to next layer.

epynn.flatten.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.flatten.models.Flatten) – An instance of flatten layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray
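The flatten operation itself reduces to a reshape that preserves the sample axis; a minimal NumPy sketch:

```python
import numpy as np

# Flatten collapses all non-sample dimensions into one, so that a
# (m, h, w, d) input becomes (m, h * w * d) for e.g. a dense layer.
A = np.ones((4, 3, 3, 2))              # 4 samples of shape (3, 3, 2)
flat = A.reshape(A.shape[0], -1)       # forward pass -> shape (4, 18)

# The backward pass restores the cached input shape unchanged.
dA = flat.reshape(A.shape)             # -> shape (4, 3, 3, 2)
```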

Backward

epynn.flatten.backward.flatten_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

epynn.flatten.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.flatten.models.Flatten) – An instance of flatten layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Parameters

epynn.flatten.parameters.flatten_compute_gradients(layer)[source]

Dummy function, there are no gradients to compute in layer.

epynn.flatten.parameters.flatten_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.flatten.parameters.flatten_initialize_parameters(layer)[source]

Initialize parameters from shapes for layer.

epynn.flatten.parameters.flatten_update_parameters(layer)[source]

Dummy function, there are no parameters to update in layer.

GRU

Model

class epynn.gru.models.GRU(unit_cells=1, activate=<function tanh>, activate_update=<function sigmoid>, activate_reset=<function sigmoid>, initialization=<function orthogonal>, clip_gradients=False, sequences=False, se_hPars=None)[source]

Definition of a GRU layer prototype.

Parameters
  • unit_cells (int, optional) – Number of unit cells in GRU layer, defaults to 1.

  • activate (function, optional) – Non-linear activation of hidden hat (hh) state, defaults to tanh.

  • activate_update (function, optional) – Non-linear activation of update gate, defaults to sigmoid.

  • activate_reset (function, optional) – Non-linear activation of reset gate, defaults to sigmoid.

  • initialization (function, optional) – Weight initialization function for GRU layer, defaults to orthogonal.

  • clip_gradients (bool, optional) – May prevent exploding/vanishing gradients, defaults to False.

  • sequences (bool, optional) – Whether to return only the last hidden state or the full sequence, defaults to False.

  • se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
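A single GRU time step can be sketched in NumPy as follows. Weight names (Uz, Wz, …) are illustrative and need not match the layer's internal parameter keys; biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, p):
    """One GRU time step: update gate z, reset gate r, candidate hh."""
    z = sigmoid(x @ p['Uz'] + h_prev @ p['Wz'])         # update gate
    r = sigmoid(x @ p['Ur'] + h_prev @ p['Wr'])         # reset gate
    hh = np.tanh(x @ p['Uh'] + (r * h_prev) @ p['Wh'])  # candidate state
    return z * hh + (1 - z) * h_prev                    # new hidden state

rng = np.random.default_rng(0)
e, u = 5, 3                                 # input features, unit cells
p = {k: rng.standard_normal((e, u)) * 0.1 for k in ('Uz', 'Ur', 'Uh')}
p.update({k: rng.standard_normal((u, u)) * 0.1 for k in ('Wz', 'Wr', 'Wh')})
h = gru_step(rng.standard_normal((2, e)), np.zeros((2, u)), p)  # (2, 3)
```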

backward(dX)[source]

Wrapper for epynn.gru.backward.gru_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.gru.parameters.gru_compute_gradients().

compute_shapes(A)[source]

Wrapper for epynn.gru.parameters.gru_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.gru.forward.gru_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.gru.parameters.gru_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.gru.parameters.gru_update_parameters().

Forward

epynn.gru.forward.gru_forward(layer, A)[source]

Forward propagate signal to next layer.

epynn.gru.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.gru.models.GRU) – An instance of GRU layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Returns

Previous hidden state initialized with zeros.

Return type

numpy.ndarray

Backward

epynn.gru.backward.gru_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

epynn.gru.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.gru.models.GRU) – An instance of GRU layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Returns

Next hidden state initialized with zeros.

Return type

numpy.ndarray

Parameters

epynn.gru.parameters.gru_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.gru.parameters.gru_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.gru.parameters.gru_initialize_parameters(layer)[source]

Initialize trainable parameters from shapes for layer.

epynn.gru.parameters.gru_update_parameters(layer)[source]

Update parameters from gradients for layer.

LSTM

Model

class epynn.lstm.models.LSTM(unit_cells=1, activate=<function tanh>, activate_output=<function sigmoid>, activate_candidate=<function tanh>, activate_input=<function sigmoid>, activate_forget=<function sigmoid>, initialization=<function orthogonal>, clip_gradients=False, sequences=False, se_hPars=None)[source]

Definition of a LSTM layer prototype.

Parameters
  • unit_cells (int, optional) – Number of unit cells in LSTM layer, defaults to 1.

  • activate (function, optional) – Non-linear activation of hidden and memory states, defaults to tanh.

  • activate_output (function, optional) – Non-linear activation of output gate, defaults to sigmoid.

  • activate_candidate (function, optional) – Non-linear activation of candidate, defaults to tanh.

  • activate_input (function, optional) – Non-linear activation of input gate, defaults to sigmoid.

  • activate_forget (function, optional) – Non-linear activation of forget gate, defaults to sigmoid.

  • initialization (function, optional) – Weight initialization function for LSTM layer, defaults to orthogonal.

  • clip_gradients (bool, optional) – May prevent exploding/vanishing gradients, defaults to False.

  • sequences (bool, optional) – Whether to return only the last hidden state or the full sequence, defaults to False.

  • se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
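A single LSTM time step can be sketched in NumPy as follows. Weight names are illustrative and need not match the layer's internal parameter keys; biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, C_prev, p):
    """One LSTM time step: forget (f), input (i), output (o) gates and
    candidate (g) update the memory state C."""
    f = sigmoid(x @ p['Uf'] + h_prev @ p['Wf'])   # forget gate
    i = sigmoid(x @ p['Ui'] + h_prev @ p['Wi'])   # input gate
    o = sigmoid(x @ p['Uo'] + h_prev @ p['Wo'])   # output gate
    g = np.tanh(x @ p['Ug'] + h_prev @ p['Wg'])   # candidate
    C = f * C_prev + i * g                        # new memory state
    h = o * np.tanh(C)                            # new hidden state
    return h, C

rng = np.random.default_rng(1)
e, u = 4, 3                                 # input features, unit cells
p = {'U' + k: rng.standard_normal((e, u)) * 0.1 for k in 'fiog'}
p.update({'W' + k: rng.standard_normal((u, u)) * 0.1 for k in 'fiog'})
h, C = lstm_step(rng.standard_normal((2, e)), np.zeros((2, u)),
                 np.zeros((2, u)), p)       # h, C each of shape (2, 3)
```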

backward(dX)[source]

Wrapper for epynn.lstm.backward.lstm_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.lstm.parameters.lstm_compute_gradients().

compute_shapes(A)[source]

Wrapper for epynn.lstm.parameters.lstm_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.lstm.forward.lstm_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.lstm.parameters.lstm_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.lstm.parameters.lstm_update_parameters().

Forward

epynn.lstm.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.lstm.models.LSTM) – An instance of LSTM layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Returns

Previous hidden state initialized with zeros.

Return type

numpy.ndarray

Returns

Previous memory state initialized with zeros.

Return type

numpy.ndarray

epynn.lstm.forward.lstm_forward(layer, A)[source]

Forward propagate signal to next layer.

Backward

epynn.lstm.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.lstm.models.LSTM) – An instance of LSTM layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Returns

Next hidden state initialized with zeros.

Return type

numpy.ndarray

Returns

Next memory state initialized with zeros.

Return type

numpy.ndarray

epynn.lstm.backward.lstm_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

Parameters

epynn.lstm.parameters.lstm_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.lstm.parameters.lstm_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.lstm.parameters.lstm_initialize_parameters(layer)[source]

Initialize trainable parameters from shapes for layer.

epynn.lstm.parameters.lstm_update_parameters(layer)[source]

Update parameters from gradients for layer.

Network

Model

class epynn.network.models.EpyNN(layers, name='EpyNN')[source]

Definition of a Neural Network prototype following the EpyNN scheme.

Parameters
  • layers (list[Object]) – Network architecture.

  • name (str, optional) – Name of network, defaults to ‘EpyNN’.

backward(dA)[source]

Wrapper for epynn.network.backward.model_backward().

Parameters

dA (numpy.ndarray) – Derivative of the loss function with respect to the output of forward propagation.

batch_report(batch, A)[source]

Wrapper for epynn.network.report.single_batch_report().

evaluate()[source]

Wrapper for epynn.network.evaluate.model_evaluate(). A good place to implement early stopping procedures.

forward(X)[source]

Wrapper for epynn.network.forward.model_forward().

Parameters

X (numpy.ndarray) – Set of sample features.

Returns

Output of forward propagation through all layers in the Network.

Return type

numpy.ndarray

initialize(loss='MSE', se_hPars={'ELU_alpha': 1, 'LRELU_alpha': 0.3, 'cycle_descent': 0, 'cycle_epochs': 0, 'decay_k': 0, 'learning_rate': 0.1, 'schedule': 'steady', 'softmax_temperature': 1}, metrics=['accuracy'], seed=None, params=True, end='\n')[source]

Wrapper for epynn.network.initialize.model_initialize(). Performs a dry epoch that includes every step except the parameters update.

Parameters
  • loss (str, optional) – Loss function to use for training, defaults to ‘MSE’. See epynn.commons.loss for built-in functions.

  • se_hPars (dict[str, float or str], optional) – Global hyperparameters, defaults to epynn.settings.se_hPars. If local hyperparameters were assigned to one layer, these remain unchanged.

  • metrics (list[str], optional) – Metrics to monitor and print on terminal report or plot, defaults to ['accuracy']. See epynn.commons.metrics for built-in metrics. Note that it also accepts loss function string identifiers.

  • seed (int or NoneType, optional) – Seed for reproducibility in pseudo-random procedures, defaults to None.

  • params (bool, optional) – Layer parameters initialization, defaults to True.

  • end (str in ['\n', '\r'], optional) – Whether to print each initialization step on a new line or overwrite the previous one, defaults to '\n'.

plot(pyplot=True, path=None)[source]

Wrapper for epynn.commons.plot.pyplot_metrics(). Plot metrics from model training.

Parameters
  • pyplot (bool, optional) – Whether to display the plot in a GUI using matplotlib, defaults to True.

  • path (str or bool or NoneType, optional) – Write matplotlib plot, defaults to None which writes in the plots subdirectory created from epynn.commons.library.configure_directory(). To not write the plot at all, set to False.

predict(X_data, X_encode=False, X_scale=False)[source]

Perform prediction of label from unlabeled samples in dataset.

Parameters
  • X_data (list[list[int or float or str]] or numpy.ndarray) – Set of sample features.

  • X_encode (bool, optional) – One-hot encode sample features, defaults to False.

  • X_scale (bool, optional) – Normalize sample features within [0, 1] along all axes, defaults to False.

Returns

Data embedding and output of forward propagation.

Return type

epynn.commons.models.dataSet

report()[source]

Wrapper for epynn.network.report.model_report().

train(epochs, verbose=None, init_logs=True)[source]

Wrapper for epynn.network.training.model_training(). It also computes the learning rate schedule over the training epochs.

Parameters
  • epochs (int) – Number of training iterations.

  • verbose (int or NoneType, optional) – Print logs every Nth epoch, defaults to None which sets it to every tenth of the total epochs.

  • init_logs (bool, optional) – Print data, architecture and hyperparameters logs, defaults to True.

write(path=None)[source]

Write model on disk.

Parameters

path (str or NoneType, optional) – Path to write the model on disk, defaults to None which writes in the models subdirectory created from epynn.commons.library.configure_directory().

Forward

epynn.network.forward.model_forward(model, X)[source]

Forward propagate input data from input to output layer.
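The chaining performed by model_forward() can be sketched with stand-in layers; the Scale class below is purely illustrative and only mimics the forward() interface of EpyNN layers.

```python
# Stand-in layers with the same forward() interface as EpyNN layers.
class Scale:
    def __init__(self, k):
        self.k = k

    def forward(self, A):
        return A * self.k   # output of this layer feeds the next one

layers = [Scale(2.0), Scale(3.0)]

A = 1.0
for layer in layers:        # same loop structure as a model forward pass
    A = layer.forward(A)    # 1.0 -> 2.0 -> 6.0
```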

Backward

epynn.network.backward.model_backward(model, dA)[source]

Backward propagate error gradients from output to input layer.

Initialization

epynn.network.initialize.model_assign_seeds(model)[source]

Seed model and layers with independent pseudo-random number generators.

Model is seeded from user input. Each layer is then seeded by incrementing the seed by one, so that all objects do not generate the same numbers.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network.

epynn.network.initialize.model_initialize(model, params=True, end='\n')[source]

Initialize EpyNN network.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network.

  • params (bool, optional) – Layer parameters initialization, defaults to True.

  • end (str in ['\n', '\r'], optional) – Whether to print each step on a new line or overwrite the previous one, defaults to '\n'.

Raises

Exception – If any layer other than Dense was provided with softmax activation. See epynn.maths.softmax().

epynn.network.initialize.model_initialize_exceptions(model, trace)[source]

Handle error in model initialization and show logs.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network.

  • trace – Traceback of the exception raised during initialization.

Training

epynn.network.training.model_training(model)[source]

Perform the training of the Neural Network.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network.

Evaluation

epynn.network.evaluate.batch_evaluate(model, Y, A)[source]

Compute metrics for current batch.

Will evaluate current batch against accuracy and training loss.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network.

  • Y (numpy.ndarray) – True labels for batch samples.

  • A (numpy.ndarray) – Output of forward propagation for batch.

epynn.network.evaluate.model_evaluate(model)[source]

Compute metrics including cost for model.

Will evaluate training, testing and validation sets against metrics set in model.se_config.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network.

Report

epynn.network.report.initialize_model_report(model, timeout)[source]

Report exhaustive initialization logs for datasets, model architecture and shapes, layers hyperparameters.

Parameters
epynn.network.report.model_report(model)[source]

Report selected metrics for datasets at current epoch.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network object.

epynn.network.report.single_batch_report(model, batch, A)[source]

Report accuracy and cost for current batch.

Parameters
  • model (epynn.network.models.EpyNN) – An instance of EpyNN network.

  • batch (epynn.commons.models.dataSet) – An instance of batch dataSet.

  • A (numpy.ndarray) – Output of forward propagation for batch.

Hyperparameters

epynn.network.hyperparameters.model_hyperparameters(model)[source]

Set hyperparameters for each layer in model.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network.

epynn.network.hyperparameters.model_learning_rate(model)[source]

Schedule learning rate for each layer in model.

Parameters

model (epynn.network.models.EpyNN) – An instance of EpyNN network.

epynn.network.hyperparameters.schedule_lrate(se_hPars, training_epochs)[source]

Learning rate schedule.

Parameters
  • se_hPars (dict) – Hyperparameters settings for layer.

  • training_epochs (int) – Number of training epochs for model.

Returns

Updated settings for layer hyperparameters.

Return type

dict

Returns

Scheduled learning rate for layer.

Return type

list
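The idea of returning one learning rate per epoch can be sketched as follows; the exp_decay helper and its formula are assumptions for illustration and need not match EpyNN's exact built-in schedules.

```python
import numpy as np

def steady(lr, epochs):
    """'steady' schedule: the same learning rate for every epoch."""
    return [lr] * epochs

def exp_decay(lr, epochs, decay_k):
    """Exponential decay: the rate shrinks by a factor exp(-decay_k)
    at each epoch (hypothetical formula, for illustration only)."""
    return [lr * np.exp(-decay_k * epoch) for epoch in range(epochs)]

rates = exp_decay(0.1, epochs=5, decay_k=0.5)   # one rate per epoch
```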

Pooling

Model

class epynn.pooling.models.Pooling(pool_size=(2, 2), strides=None, pool=<function amax>)[source]

Definition of a pooling layer prototype.

Parameters
  • pool_size (int or tuple[int], optional) – Height and width for pooling window, defaults to (2, 2).

  • strides (int or tuple[int], optional) – Height and width to shift the pooling window by, defaults to None which equals pool_size.

  • pool (function, optional) – Pooling activation of units, defaults to numpy.amax(). Use either max or min pooling.
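Non-overlapping max pooling (strides equal to pool_size) can be sketched with a single reshape and reduction; the max_pool helper is an illustrative stand-in for the layer's implementation.

```python
import numpy as np

def max_pool(A, pool_size=(2, 2)):
    """Non-overlapping max pooling over the height and width axes
    of a (m, h, w, d) input; assumes h, w divide evenly."""
    m, h, w, d = A.shape
    ph, pw = pool_size
    # Group each (ph, pw) window into its own pair of axes...
    blocks = A.reshape(m, h // ph, ph, w // pw, pw, d)
    # ...then reduce over the window axes.
    return blocks.max(axis=(2, 4))

A = np.arange(16, dtype=float).reshape(1, 4, 4, 1)
P = max_pool(A)                    # shape (1, 2, 2, 1)
```

Min pooling follows by swapping the max reduction for min.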

backward(dX)[source]

Wrapper for epynn.pooling.backward.pooling_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.pooling.parameters.pooling_compute_gradients(). Dummy method, there are no gradients to compute in layer.

compute_shapes(A)[source]

Wrapper for epynn.pooling.parameters.pooling_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.pooling.forward.pooling_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.pooling.parameters.pooling_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.pooling.parameters.pooling_update_parameters(). Dummy method, there are no parameters to update in layer.

Forward

epynn.pooling.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.pooling.models.Pooling) – An instance of pooling layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Returns

Input blocks of forward propagation for current layer.

Return type

numpy.ndarray

epynn.pooling.forward.pooling_forward(layer, A)[source]

Forward propagate signal to next layer.

Backward

epynn.pooling.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.pooling.models.Pooling) – An instance of pooling layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

epynn.pooling.backward.pooling_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

Parameters

epynn.pooling.parameters.pooling_compute_gradients(layer)[source]

Dummy function, there are no gradients to compute in layer.

epynn.pooling.parameters.pooling_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.pooling.parameters.pooling_initialize_parameters(layer)[source]

Initialize parameters from shapes for layer.

epynn.pooling.parameters.pooling_update_parameters(layer)[source]

Dummy function, there are no parameters to update in layer.

RNN

Model

class epynn.rnn.models.RNN(unit_cells=1, activate=<function tanh>, initialization=<function xavier>, clip_gradients=True, sequences=False, se_hPars=None)[source]

Definition of a RNN layer prototype.

Parameters
  • unit_cells (int, optional) – Number of unit cells in RNN layer, defaults to 1.

  • activate (function, optional) – Non-linear activation of hidden state, defaults to tanh.

  • initialization (function, optional) – Weight initialization function for RNN layer, defaults to xavier.

  • clip_gradients (bool, optional) – May prevent exploding/vanishing gradients, defaults to True.

  • sequences (bool, optional) – Whether to return only the last hidden state or the full sequence, defaults to False.

  • se_hPars (dict[str, str or float] or NoneType, optional) – Layer hyper-parameters, defaults to None and inherits from model.
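The recurrence and the effect of the sequences flag can be sketched in NumPy; weight names U, W and b are illustrative stand-ins for the layer's parameter keys.

```python
import numpy as np

def rnn_forward(X, U, W, b, sequences=False):
    """Minimal recurrent pass over X of shape (m, s, e):
    h_t = tanh(x_t U + h_{t-1} W + b)."""
    m, s, e = X.shape
    h = np.zeros((m, W.shape[0]))          # previous hidden state (zeros)
    hs = []
    for t in range(s):
        h = np.tanh(X[:, t] @ U + h @ W + b)
        hs.append(h)
    # sequences=True returns every hidden state, else only the last one.
    return np.stack(hs, axis=1) if sequences else h

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 6, 4))         # 2 samples, 6 steps, 4 features
U = rng.standard_normal((4, 3)) * 0.1
W = rng.standard_normal((3, 3)) * 0.1
b = np.zeros(3)
last = rnn_forward(X, U, W, b)                  # shape (2, 3)
full = rnn_forward(X, U, W, b, sequences=True)  # shape (2, 6, 3)
```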

backward(dX)[source]

Wrapper for epynn.rnn.backward.rnn_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.rnn.parameters.rnn_compute_gradients().

compute_shapes(A)[source]

Wrapper for epynn.rnn.parameters.rnn_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.rnn.forward.rnn_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.rnn.parameters.rnn_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.rnn.parameters.rnn_update_parameters().

Forward

epynn.rnn.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.rnn.models.RNN) – An instance of RNN layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

Returns

Previous hidden state initialized with zeros.

Return type

numpy.ndarray

epynn.rnn.forward.rnn_forward(layer, A)[source]

Forward propagate signal to next layer.

Backward

epynn.rnn.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.rnn.models.RNN) – An instance of RNN layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

Returns

Next hidden state initialized with zeros.

Return type

numpy.ndarray

epynn.rnn.backward.rnn_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

Parameters

epynn.rnn.parameters.rnn_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.rnn.parameters.rnn_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.rnn.parameters.rnn_initialize_parameters(layer)[source]

Initialize trainable parameters from shapes for layer.

epynn.rnn.parameters.rnn_update_parameters(layer)[source]

Update parameters from gradients for layer.

Template

Model

class epynn.template.models.Template[source]

Definition of a template layer prototype. This is a pass-through or inactive layer prototype which contains method definitions used for all active layers. For all layer prototypes, methods are wrappers of functions which contain the specific implementations.
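The wrapper scheme can be sketched with a minimal stand-alone example; the fc cache attribute and function names below are illustrative, not the exact EpyNN internals.

```python
# Module-level function holding the actual (pass-through) implementation.
def template_forward(layer, A):
    layer.fc = {'X': A}     # cache the input, as active layers do
    return A                # inactive layer: output equals input

class Template:
    """Minimal prototype whose methods only delegate."""

    def forward(self, A):
        # Wrapper: all logic lives in the module-level function.
        return template_forward(self, A)

layer = Template()
out = layer.forward([1, 2, 3])   # pass-through: returns [1, 2, 3]
```

Active layers follow the same pattern, with the module-level function carrying the architecture-specific mathematics.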

backward(dX)[source]

Wrapper for epynn.template.backward.template_backward().

Parameters

dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Output of backward propagation for current layer.

Return type

numpy.ndarray

compute_gradients()[source]

Wrapper for epynn.template.parameters.template_compute_gradients(). Dummy method, there are no gradients to compute in layer.

compute_shapes(A)[source]

Wrapper for epynn.template.parameters.template_compute_shapes().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

forward(A)[source]

Wrapper for epynn.template.forward.template_forward().

Parameters

A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Output of forward propagation for current layer.

Return type

numpy.ndarray

initialize_parameters()[source]

Wrapper for epynn.template.parameters.template_initialize_parameters().

update_parameters()[source]

Wrapper for epynn.template.parameters.template_update_parameters(). Dummy method, there are no parameters to update in layer.

Forward

epynn.template.forward.initialize_forward(layer, A)[source]

Forward cache initialization.

Parameters
  • layer (epynn.template.models.Template) – An instance of template layer.

  • A (numpy.ndarray) – Output of forward propagation from previous layer.

Returns

Input of forward propagation for current layer.

Return type

numpy.ndarray

epynn.template.forward.template_forward(layer, A)[source]

Forward propagate signal to next layer.

Backward

epynn.template.backward.initialize_backward(layer, dX)[source]

Backward cache initialization.

Parameters
  • layer (epynn.template.models.Template) – An instance of template layer.

  • dX (numpy.ndarray) – Output of backward propagation from next layer.

Returns

Input of backward propagation for current layer.

Return type

numpy.ndarray

epynn.template.backward.template_backward(layer, dX)[source]

Backward propagate error gradients to previous layer.

Parameters

epynn.template.parameters.template_compute_gradients(layer)[source]

Compute gradients with respect to weight and bias for layer.

epynn.template.parameters.template_compute_shapes(layer, A)[source]

Compute forward shapes and dimensions from input for layer.

epynn.template.parameters.template_initialize_parameters(layer)[source]

Initialize parameters from shapes for layer.

epynn.template.parameters.template_update_parameters(layer)[source]

Update parameters from gradients for layer.

Settings

epynn.settings.se_hPars = {'ELU_alpha': 1, 'LRELU_alpha': 0.3, 'cycle_descent': 0, 'cycle_epochs': 0, 'decay_k': 0, 'learning_rate': 0.1, 'schedule': 'steady', 'softmax_temperature': 1}

Hyperparameters dictionary settings.

Set hyperparameters for model and layer.
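A common pattern is to copy the defaults and override selected values before passing the dict as se_hPars to a layer or to EpyNN.initialize(); the overridden values below are illustrative choices, not recommendations.

```python
# Default hyperparameters, mirroring epynn.settings.se_hPars.
se_hPars = {
    'learning_rate': 0.1, 'schedule': 'steady', 'decay_k': 0,
    'cycle_epochs': 0, 'cycle_descent': 0,
    'ELU_alpha': 1, 'LRELU_alpha': 0.3, 'softmax_temperature': 1,
}

# Copy the defaults and override selected values for one layer;
# dict() leaves the global defaults untouched.
custom = dict(se_hPars, learning_rate=0.005, decay_k=0.001)
```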