Quick Start

EpyNN requires python>=3.7 and the matching pip3 package manager to install its dependencies.

EpyNN is available from GitHub and can be cloned with git from the command line.

EpyNN is also available from PyPI, although we recommend the GitHub method.

Requirements

Python

You need python>=3.7 to run EpyNN.

To check your python3 version on Linux, MacOS or Windows, open a command prompt and type:

python3 --version
# e.g., Python 3.7.3

If Python is not installed on your system or the version does not satisfy the requirement, please refer to the documentation available at: https://wiki.python.org/moin/BeginnersGuide/Download

Pip

You need the Python package manager pip3 to install EpyNN dependencies on your system.

To check if pip3 is installed on Linux, MacOS or Windows, open a command prompt and enter:

pip3 --version
# e.g., pip 21.1.2 from [...] (python 3.7)

If pip3 is not installed on your system or does not match your Python version, please refer to the documentation available at: https://pip.pypa.io/en/stable/installing/

Git

You need git, a free and open-source distributed version control system, to install EpyNN from the command line.

  • Install git on Linux

Debian based distributions

sudo apt install git

Red-Hat based distributions

sudo yum install git

  • Install git on MacOS

brew install git

If brew is not installed on your system, please see the official Homebrew documentation.

  • Install git on Windows

The latest 64-bit version of git for Windows can be downloaded from:

https://git-scm.com/download/win

Next, run the executable and follow the instructions.

EpyNN Install

Linux/MacOS

Open a terminal and proceed with:

# Use bash shell
bash

# Clone git repository
git clone https://github.com/synthaze/EpyNN

# Change directory to EpyNN
cd EpyNN

# Install EpyNN dependencies
pip3 install -r requirements.txt

# Export EpyNN path in $PYTHONPATH for current session
export PYTHONPATH=$PYTHONPATH:$PWD

# Alternatively with pip, not recommended
# pip3 install EpyNN
# epynn

Permanent export of EpyNN path in $PYTHONPATH.

In the same terminal session, proceed with:

  • Linux

# Append export instruction to the end of .bashrc file
echo "export PYTHONPATH=$PYTHONPATH:$PWD" >> ~/.bashrc

# Source .bashrc to refresh $PYTHONPATH
source ~/.bashrc

  • MacOS

# Append export instruction to the end of .bash_profile file
echo "export PYTHONPATH=$PYTHONPATH:$PWD" >> ~/.bash_profile

# Source .bash_profile to refresh $PYTHONPATH
source ~/.bash_profile

Windows

Open a command-prompt and proceed with:

# Clone git repository
git clone https://github.com/synthaze/EpyNN

# Change directory to EpyNN
chdir EpyNN

# Install EpyNN dependencies
pip3 install -r requirements.txt

# Show full path of EpyNN directory
echo %cd%

# Alternatively with pip, not recommended
# pip3 install EpyNN
# epynn

Copy the full path of the EpyNN directory, then go to: Control Panel > System > Advanced > Environment Variables

If you already have PYTHONPATH in the User variables section, select it and click Edit, otherwise click New to add it.

Paste the full path of the EpyNN directory in the input field; keep in mind that paths in PYTHONPATH are semicolon-separated on Windows.
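
To confirm that the change was picked up, you can open a new command prompt and check that the EpyNN directory appears on Python's module search path. A minimal check, assuming the path was added as described above:

# PYTHONPATH entries are merged into sys.path, so the EpyNN directory
# should show up here (open a new command prompt first so the updated
# environment variable is loaded).
import sys

print([p for p in sys.path if 'EpyNN' in p])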

How to use EpyNN?

Whether EpyNN was installed with git (recommended) or pip, the EpyNN sources directory should be located within your working directory, e.g., /home/username/working_dir/EpyNN or C:\Users\username\working_dir\EpyNN.

Once the EpyNN path is exported in your system-specific PYTHONPATH environment variable (see EpyNN Install), you can use EpyNN like a traditional library/framework. See Data preparation - Examples and Network training - Examples for the notebook series.
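
For example, once the path is exported, the EpyNN modules mentioned throughout this documentation can be imported like those of any installed package. A minimal sketch, assuming the cloned EpyNN directory is on PYTHONPATH (the module names are those listed in the Interface and Console sections below):

# Import a few EpyNN modules to confirm the library is usable from the
# cloned sources.
from epynn.commons import maths     # activation functions
from epynn.commons import metrics   # metrics functions
from epynn.commons import loss      # loss functions

print(metrics.__file__)             # path to the cloned EpyNN sources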

Purpose

While EpyNN’s educational API is fully functional, the specific value of the project lies in the source code and the web documentation.

EpyNN is written in pure Python/NumPy, which makes it possible to understand every line of code supporting its educational API.

As shown in the table below, the EpyNN sources are concise, organized within a limited number of files and directories, and exhaustively commented.

EpyNN epynn library tree

Subdirectory    Files   Lines   Docstring   Code   Inline   Block   Comment/Code
network             8     601         151    325        4     125           0.4
embedding           5     316         106    176        6      34           0.23
convolution         4     274          58    158       19      58           0.49
dense               4     181          54    108       16      19           0.32
dropout             4     153          54     80        6      19           0.31
flatten             4     135          54     66        6      15           0.32
gru                 4     316          58    219       46      39           0.39
lstm                4     377          62    269       64      46           0.41
pooling             4     223          56    113        9      54           0.56
rnn                 4     241          58    155       30      28           0.37
template            4     134          56     66        6      12           0.27
commons             9    1107         429    571       44     107           0.26
Total              58    4058        1196   2306      256     556           0.35

Inline and Block count comment lines; Comment/Code is the ratio of comment lines (Inline + Block) to Code lines.

Interface

For anyone willing to understand and build from minimal implementations of Neural Network architectures, below is a suggested interface.

[Figure: _images/howto-01.svg]

New layers or variants may be easily implemented from the template layer or by copying and modifying an existing layer’s subdirectory.

New metrics, activation functions or loss functions can easily be added by editing the corresponding epynn.commons.metrics, epynn.commons.maths and epynn.commons.loss modules.
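
As a rough illustration, a new metric can be a small NumPy function added to epynn.commons.metrics. The name and signature below are hypothetical; mirror the existing functions in that module when adding a real one:

import numpy as np

# Hypothetical metric for epynn.commons.metrics; the (Y, A) argument
# order is an assumption, follow the convention of the existing functions.
def balanced_accuracy(Y, A):
    """Toy balanced accuracy from true labels Y and predicted probabilities A."""
    P = np.around(A)                        # threshold probabilities at 0.5
    recall_pos = np.mean(P[Y == 1] == 1)    # fraction of positives recovered
    recall_neg = np.mean(P[Y == 0] == 0)    # fraction of negatives recovered
    return 0.5 * (recall_pos + recall_neg)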

Overall, you will get the most from EpyNN by making it your own, down to the very last line of code.

Console

With all defaults, such as in Basics with Perceptron (P), training a Neural Network - Model with EpyNN reports the following:

  • Initialization logs

This is an exhaustive report of the Neural Network properties, including sections for datasets, model architecture and layer hyperparameters.

[Figure: _images/init_logs-01.svg]

In the model architecture section, we may highlight:

  • (1): The model object was created by providing a list of layers, stored as model.layers. Therefore, ID and Layer relate to the list index and the layer type. For instance, the output layer can be retrieved from the model with output_layer = model.layers[1], where 1 is the ID of the Dense layer.

  • (2): When learning or developing Neural Networks, the most difficult part may be to comprehend layers’ dimensions and array shapes. EpyNN provides a report on dimensions and shapes for each layer, including input, output and all processing intermediates. The information is retrieved from the layers’ cache, as introduced in Architecture Layers - Model.

Because a layer’s cache is a Python dictionary (e.g., layer.d), it works with key: value pairs. In (2.1), for the Dense layer, the value 33 assigned to key m would then be retrieved with value = output_layer.d['m'], as shown in the sketch after this list.

See Glossary - Notations for key definitions, or browse the code.
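
A minimal sketch of these two access patterns, assuming model is the trained network from Basics with Perceptron (P):

# The list passed at model creation is kept as model.layers, so the ID
# column in the logs is simply the index in that list.
output_layer = model.layers[1]      # ID 1: the Dense (output) layer

# Each layer caches its dimensions and shapes in a plain dictionary, layer.d.
value = output_layer.d['m']         # e.g., 33 in the initialization logs above
print(value)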

  • Training logs

Tabular logs are printed to the terminal during training, monitoring all non-empty datasets.

[Figure: _images/console-01.svg]

We may comment:

  • Training iteration: shows the current training iteration or epoch.

  • Learning rate: one column for each trainable layer, showing the learning rate with respect to the epoch.

  • Metrics: one column per metric, for each dataset. Metrics are defined in epynn.commons.metrics.

  • Cost: one column for each dataset. This is the cost computed from the Loss - Functions.

Now, you may want to read the documentation and proceed with Data preparation - Examples and Network training - Examples.