{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Basics with Perceptron (P)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "* Find this notebook at `EpyNN/epynnlive/dummy_boolean/train.ipynb`.\n", "* Regular python code at `EpyNN/epynnlive/dummy_boolean/train.py`.\n", "\n", "Run the notebook online with [Google Colab](https://colab.research.google.com/github/Synthaze/EpyNN/blob/main/epynnlive/dummy_boolean/train.ipynb).\n", "\n", "**Level: Beginner**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook we will review:\n", "\n", "* Handling Boolean data.\n", "* Designing and training a simple perceptron using EpyNN objects.\n", "* Basics and general concepts relevant to the context." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**This notebook does not enhance, extend or replace EpyNN's documentation.**\n", "\n", "**Relevant documentation pages for the current notebook:**\n", "\n", "* [Neural Network - Model](https://epynn.net/EpyNN_Model.html)\n", "* [Architecture layers - Model](https://epynn.net/Layer_Model.html)\n", "* [Data - Model](https://epynn.net/Data_Model.html)\n", "* [Data Embedding (Input)](https://epynn.net/Embedding.html)\n", "* [Fully Connected (Dense)](https://epynn.net/Dense.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Import, configure and retrieve data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Follow [this link](prepare_dataset.ipynb) for details about data preparation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will import all libraries and configure seeding, behaviors and directory. " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Imports" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# EpyNN/epynnlive/dummy_boolean/train.ipynb\n", "# Install dependencies\n", "!pip3 install --upgrade-strategy only-if-needed epynn\n", "\n", "# Standard library imports\n", "import random\n", "\n", "# Related third party imports\n", "import numpy as np\n", "\n", "# Local application/library specific imports\n", "import epynn.initialize\n", "from epynn.commons.library import (\n", " configure_directory,\n", " read_model,\n", ")\n", "from epynn.network.models import EpyNN\n", "from epynn.embedding.models import Embedding\n", "from epynn.dense.models import Dense\n", "from epynnlive.dummy_boolean.prepare_dataset import prepare_dataset\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we imported all libraries we will use, at once and on top of the script. Even though we are going through a notebook, we should pay attention to follow good practices for imports as stated in [PEP 8 -- Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/#imports).\n", "\n", "You may have also noted that ``# Related third party imports`` are limited to ``numpy``. \n", "\n", "We developed an educational resource for which computations rely on pure Python/NumPy, nothing else." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Configuration" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now proceed with the configuration and preferences for the current script." 
] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m\u001b[31mRemove: /pylibs/EpyNN/epynnlive/dummy_boolean/datasets\u001b[0m\n", "\u001b[1m\u001b[32mMake: /pylibs/EpyNN/epynnlive/dummy_boolean/datasets\u001b[0m\n", "\u001b[1m\u001b[31mRemove: /pylibs/EpyNN/epynnlive/dummy_boolean/models\u001b[0m\n", "\u001b[1m\u001b[32mMake: /pylibs/EpyNN/epynnlive/dummy_boolean/models\u001b[0m\n", "\u001b[1m\u001b[31mRemove: /pylibs/EpyNN/epynnlive/dummy_boolean/plots\u001b[0m\n", "\u001b[1m\u001b[32mMake: /pylibs/EpyNN/epynnlive/dummy_boolean/plots\u001b[0m\n" ] } ], "source": [ "random.seed(1)\n", "\n", "np.set_printoptions(threshold=10)\n", "\n", "np.seterr(all='warn')\n", "\n", "configure_directory(clear=True) # This is a dummy example" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We already explained the [reason for seeding](prepare_dataset.ipynb#Seeding).\n", "\n", "The call to ``np.set_printoptions()`` set the printing behavior of NumPy arrays. To not overfill this notebook, we instructed that beyond ``threshold=10`` NumPy will trigger summarization rather than full representation.\n", "\n", "The call to ``np.seterr()`` is **very important** if you want to be aware of what's happening in your Network. Follow the [numpy.seterr](https://numpy.org/doc/stable/reference/generated/numpy.seterr.html) official documentation for details. Herein, we make sure that floating-point errors will always ``warn`` on the terminal session and thus we will always be aware of them.\n", "\n", "Finally, the call to ``configure_directory()`` is purely facultative but creates the default EpyNN subdirectories in the working directory." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Retrieve Boolean features and label " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "From [prepare_dataset](prepare_dataset.ipynb#Preparedataset) we imported the function ``prepare_dataset()``." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "X_features, Y_label = prepare_dataset(N_SAMPLES=50)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect, as always." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1 False [False, True, False, True, True, False, True, False, True, False, False]\n", "0 True [False, True, False, True, True, False, True, False, True, False, True]\n", "0 True [True, True, False, True, False, True, True, False, False, True, False]\n", "1 False [False, False, False, False, False, False, True, True, False, False, True]\n", "1 False [False, False, False, False, False, True, True, False, True, True, True]\n" ] } ], "source": [ "for sample in list(zip(X_features, Y_label))[:5]:\n", " features, label = sample\n", " print(label, (features.count(True) > features.count(False)), features)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We have what we expect. Remember that the conditional expression is the dummy law we used to assign a label to dummy sample Boolean features." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Perceptron - Single layer Neural Network" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Herein, we are going to build the most simple Neural Network and train it in the most simple way we can." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Embedding layer object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In EpyNN, data must be passed as arguments upon call of the ``Embedding()`` layer class constructor.\n", "\n", "The instantiated object - the embedding or input layer - is always the first layer in Neural Networks made with EpyNN." ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "embedding = Embedding(X_data=X_features,\n", " Y_data=Y_label,\n", " relative_size=(2, 1, 0)) # Training, validation, testing set" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The arguments ``X_features`` and ``Y_label`` passed in the class constructor have been split with respect to ``relative_size`` for training, validation and testing sets.\n", "\n", "Let’s take a look at what ``relative_size`` means." ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[False, True, False, True, True, False, True, False, True, False, False] 1\n", "[False, True, False, True, True, False, True, False, True, False, True] 0\n", "[True, True, False, True, False, True, True, False, False, True, False] 0\n", "[False, False, False, False, False, False, True, True, False, False, True] 1\n", "[False, False, False, False, False, True, True, False, True, True, True] 1\n", "[False, False, True, True, False, True, True, True, False, True, True] 0\n", "[True, False, False, True, False, False, True, True, False, True, True] 0\n", "[True, True, False, False, True, False, False, True, False, True, True] 0\n", "[True, False, False, False, False, True, True, True, False, True, True] 0\n", "[True, True, False, True, True, True, True, True, False, False, True] 0\n" ] } ], "source": [ "dataset = list(zip(X_features, Y_label)) # Pair-wise X-Y data\n", "\n", "# We print the 10 first only just to not overfill the notebook\n", "for features, label in dataset[:10]:\n", " print(features, label)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now, we can apply ``relative_size`` to split the whole dataset into parts." ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "relative_size=(2, 1, 0)\n", "\n", "dtrain_relative, dval_relative, dtest_relative = relative_size\n", "\n", "# Compute absolute sizes with respect to full dataset\n", "sum_relative = sum([dtrain_relative, dval_relative, dtest_relative])\n", "\n", "dtrain_length = round(dtrain_relative / sum_relative * len(dataset))\n", "dval_length = round(dval_relative / sum_relative * len(dataset))\n", "dtest_length = round(dtest_relative / sum_relative * len(dataset))\n", "\n", "# Slice full dataset\n", "dtrain = dataset[:dtrain_length]\n", "dval = dataset[dtrain_length:dtrain_length + dval_length]\n", "dtest = dataset[dtrain_length + dval_length:]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This is the procedure. Let’s see what happened." 
] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "50\n", "33 2\n", "17 1\n", "0 0\n" ] } ], "source": [ "print(len(dataset))\n", "\n", "print(len(dtrain), dtrain_relative)\n", "print(len(dval), dval_relative)\n", "print(len(dtest), dtest_relative)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With ``relative_size=(2, 1, 0)`` we asked to prepare a training set ``dtrain`` two times bigger than the validation set ``dval``.\n", "\n", "We have 50 samples." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "33.333333333333336\n", "16.666666666666668\n" ] } ], "source": [ "print(50 / 3 * 2)\n", "print(50 / 3 * 1)" ] }, { "cell_type": "raw", "metadata": {}, "source": [ "We can not have fraction of samples. We need to round to the nearest integer." ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "33\n", "17\n", "True\n", "True\n" ] } ], "source": [ "print(round(50 / 3 * 2))\n", "print(round(50 / 3 * 1))\n", "\n", "print((round(50 / 3 * 2)) == len(dtrain))\n", "print((round(50 / 3 * 1)) == len(dval))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that since we have set ``relative_size=(2, 1, 0)`` the testing set is empty." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's explore some properties and attributes of the instantiated ``embedding`` layer object." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "0 d {}\n", "1 fs {}\n", "2 p {}\n", "3 fc {}\n", "4 bs {}\n", "5 g {}\n", "6 bc {}\n", "7 o {}\n", "8 activation {}\n", "9 se_hPars None\n", "10 se_dataset {'dtrain_relative': 2, 'dval_relative': 1, 'dtest_relative': 0, 'batch_size': None, 'X_scale': False, 'X_encode': False, 'Y_encode': False}\n", "11 dtrain \n", "12 dval \n", "13 dtest \n", "14 dsets [, ]\n", "15 trainable False\n" ] } ], "source": [ "# Type of object\n", "print(type(embedding))\n", "\n", "# Attributes and values of embedding layer\n", "for i, (attr, value) in enumerate(vars(embedding).items()):\n", " print(i, attr, value)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**Lines 0-9**: Inherited from ``epynn.commons.models.Layer`` which is the [Base Layer](https://epynn.net/Layer_Model.html#base-layer). These instance attributes exist for any layers in EpyNN.\n", "\n", "**Lines 10-15**: Instance attributes specific to ``epynn.embedding.models.Embedding`` layer. \n", "\n", "* (10) se_dataset: Contains data-related settings applied upon layer instantiation\n", "* (11-13) dtrain, dval, dtest: Training, validation and testing sets in EpyNN ``dataSet`` object.\n", "* (14) batch_dtrain: Training mini-batches. Contains the data actually used for training.\n", "* (15) dsets: Contains the active datasets that will be evaluated during training. It contains only two ``dataSet`` objects because dval was set to empty." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The dataSet object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's examine one ``dataSet`` object the same way." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "0 name dtrain\n", "1 active True\n", "2 X [[False True False ... 
True False False]\n", " [False True False ... True False True]\n", " [ True True False ... False True False]\n", " ...\n", " [ True False False ... True False False]\n", " [False False False ... True False True]\n", " [False False True ... True False True]]\n", "3 Y [[1]\n", " [0]\n", " [0]\n", " ...\n", " [1]\n", " [1]\n", " [1]]\n", "4 y [1 0 0 ... 1 1 1]\n", "5 b {1: 15, 0: 18}\n", "6 ids [ 0 1 2 ... 30 31 32]\n", "7 A []\n", "8 P []\n" ] } ], "source": [ "# Type of object\n", "print(type(embedding.dtrain))\n", "\n", "# Attributes and values of the dtrain dataSet object\n", "for i, (attr, value) in enumerate(vars(embedding.dtrain).items()):\n", " print(i, attr, value)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* (0) name: Self-explanatory.\n", "* (1) active: Whether the set contains data.\n", "* (2) X: Set of sample features.\n", "* (3) Y: Set of sample labels.\n", "* (4) y: Set of single-digit sample labels.\n", "* (5) b: Balance of labels in the set.\n", "* (6) ids: Sample identifiers.\n", "* (7) A: Output of forward propagation.\n", "* (8) P: Label predictions.\n", "\n", "For full documentation of the ``epynn.commons.models.dataSet`` object, you can refer to [Data - Model](https://epynn.net/Data_Model.html).\n", "\n", "Note that in the present example we use single-digit labels and therefore the only difference between (3) and (4) is the shape. The reason for this apparent duplicate will appear in a subsequent notebook." ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(33, 1)\n", "(33,)\n", "True\n" ] } ], "source": [ "print(embedding.dtrain.Y.shape)\n", "print(embedding.dtrain.y.shape)\n", "\n", "print(all(embedding.dtrain.Y.flatten() == embedding.dtrain.y))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We described how to instantiate the embedding - or input - layer with EpyNN and we browsed the attached attribute-value pairs.\n", "\n", "We observed that it contains ``dataSet`` objects and we browsed the corresponding attribute-value pairs for the training set.\n", "\n", "We introduced all we needed to know.\n", "\n", "We are ready." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For more information about these concepts, please follow this link:\n", "\n", "* [Data - Model](https://epynn.net/Data_Model.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Dense layer object" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "dense = Dense() # Defaults to a single node with sigmoid activation" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This one was easy. Let's inspect." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "0 d {'u': 1}\n", "1 fs {}\n", "2 p {}\n", "3 fc {}\n", "4 bs {}\n", "5 g {}\n", "6 bc {}\n", "7 o {}\n", "8 activation {'activate': 'sigmoid'}\n", "9 se_hPars None\n", "10 activate \n", "11 initialization \n", "12 trainable True\n" ] } ], "source": [ "# Type of object\n", "print(type(dense))\n", "\n", "# Attributes and values of dense layer\n", "for i, (attr, value) in enumerate(vars(dense).items()):\n", " print(i, attr, value)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that (0) indicates the number of nodes in the Dense layer (``'u': 1``), and (8, 10) the activation function for this layer. See [Activation - Functions](https://epynn.net/activation.html) for details."
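] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make this concrete, below is a minimal, stand-alone NumPy sketch of the computation such a single-node sigmoid Dense layer performs. This is not EpyNN's internal code and the variable names are illustrative; the 11 columns match our Boolean features, while the weight and bias shapes match those reported in the model architecture table printed at training time (W: (11, 1), b: (1, 1))." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustrative sketch only - not EpyNN's implementation\n", "X_sketch = np.random.uniform(0, 1, (4, 11)) < 0.5 # 4 dummy samples of 11 Boolean features\n", "W_sketch = np.random.randn(11, 1) * 0.1 # weights, shape (11, 1)\n", "b_sketch = np.zeros((1, 1)) # bias, shape (1, 1)\n", "\n", "Z_sketch = np.dot(X_sketch, W_sketch) + b_sketch # linear product, shape (4, 1)\n", "A_sketch = 1.0 / (1.0 + np.exp(-Z_sketch)) # sigmoid, values strictly between 0 and 1\n", "\n", "print(A_sketch.shape)\n", "print(A_sketch)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With a single node and a sigmoid activation, each value of ``A_sketch`` can be read as a probability, which is exactly how we will interpret the output of the trained model below."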
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For code, maths and pictures behind layers in general and the dense layer in particular, follow these links:\n", "\n", "* [Base layer](https://epynn.net/Layer_Model.html#base-layer)\n", "* [Template layer](https://epynn.net/Layer_Model.html#template-layer)\n", "* [Fully Connected (Dense)](https://epynn.net/Dense.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The EpyNN Network object" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Instantiate your Perceptron" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now that we have an embedding (input) layer and a Dense (output) layer, we can instantiate the EpyNN object which represents the Neural Network." ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "layers = [embedding, dense]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The list ``layers`` is the architecture of a Perceptron." ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [], "source": [ "model = EpyNN(layers=layers, name='Perceptron_Dense-1-sigmoid')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The object ``EpyNN`` is the Perceptron itself.\n", "\n", "Let's prove it." ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "0 layers [, ]\n", "1 embedding \n", "2 ts 1651394171\n", "3 uname 1651394171_Perceptron_Dense-1-sigmoid\n", "4 initialized False\n" ] } ], "source": [ "# Type of object\n", "print(type(model))\n", "\n", "# Attributes and values of EpyNN model for a Perceptron\n", "for i, (attr, value) in enumerate(vars(model).items()):\n", " print(i, attr, value)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Does it really contain the layers we instantiated before?" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n", "True\n" ] } ], "source": [ "print((model.embedding == embedding))\n", "print((model.layers[-1] == dense))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Perceptron training" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "It does seem so, yes.\n", "\n", "We are going to start the training of this Perceptron with all defaults, the very most simple form." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m--- EpyNN Check --- \u001b[0m\n", "\u001b[1mLayer: Embedding\u001b[0m\n", "\u001b[1m\u001b[32mcompute_shapes: Embedding\u001b[0m\n", "\u001b[1m\u001b[32minitialize_parameters: Embedding\u001b[0m\n", "\u001b[1m\u001b[32mforward: Embedding\u001b[0m\n", "shape: (33, 11)\n", "\u001b[1mLayer: Dense\u001b[0m\n", "\u001b[1m\u001b[32mcompute_shapes: Dense\u001b[0m\n", "\u001b[1m\u001b[32minitialize_parameters: Dense\u001b[0m\n", "\u001b[1m\u001b[32mforward: Dense\u001b[0m\n", "shape: (33, 1)\n", "\u001b[1mLayer: Dense\u001b[0m\n", "\u001b[1m\u001b[36mbackward: Dense\u001b[0m\n", "shape: (33, 11)\n", "\u001b[1m\u001b[36mcompute_gradients: Dense\u001b[0m\n", "\u001b[1mLayer: Embedding\u001b[0m\n", "\u001b[1m\u001b[36mbackward: Embedding\u001b[0m\n", "shape: (33, 11)\n", "\u001b[1m\u001b[36mcompute_gradients: Embedding\u001b[0m\n", "\u001b[1m--- EpyNN Check OK! 
--- \u001b[0m\n", "\u001b[1m----------------------- 1651394171_Perceptron_Dense-1-sigmoid -------------------------\n", "\u001b[0m\n", "\n", "\u001b[1m-------------------------------- Datasets ------------------------------------\n", "\u001b[0m\n", "+--------+------+-------+-------+\n", "| dtrain | dval | dtest | batch |\n", "| | | | size |\n", "+--------+------+-------+-------+\n", "| 33 | 17 | None | None |\n", "+--------+------+-------+-------+\n", "\n", "+----------+--------+------+-------+\n", "| N_LABELS | dtrain | dval | dtest |\n", "| | | | |\n", "+----------+--------+------+-------+\n", "| 2 | 0: 18 | 0: 9 | None |\n", "| | 1: 15 | 1: 8 | |\n", "+----------+--------+------+-------+\n", "\n", "\u001b[1m----------------------- Model Architecture -------------------------\n", "\u001b[0m\n", "+----+-----------+------------+-------------------+-------------+--------------+\n", "| ID | Layer | Dimensions | Activation | FW_Shapes | BW_Shapes |\n", "+----+-----------+------------+-------------------+-------------+--------------+\n", "| 0 | Embedding | m: 33 | | X: (33, 11) | dA: (33, 11) |\n", "| | | n: 11 | | A: (33, 11) | dX: (33, 11) |\n", "+----+-----------+------------+-------------------+-------------+--------------+\n", "| 1 | Dense | u: 1 | activate: sigmoid | X: (33, 11) | dA: (33, 1) |\n", "| | | m: 33 | | W: (11, 1) | dZ: (33, 1) |\n", "| | | n: 11 | | b: (1, 1) | dX: (33, 11) |\n", "| | | | | Z: (33, 1) | |\n", "| | | | | A: (33, 1) | |\n", "+----+-----------+------------+-------------------+-------------+--------------+\n", "\n", "\u001b[1m------------------------------------------- Layers ---------------------------------------------\n", "\u001b[0m\n", "+-------+--------+----------+---------+--------+---------+--------+----------+----------+-----+\n", "| Layer | epochs | schedule | decay_k | cycle | cycle | cycle | learning | learning | end |\n", "| | | | | epochs | descent | number | rate | rate | (%) |\n", "| | | | | | | | (start) | (end) | |\n", "+-------+--------+----------+---------+--------+---------+--------+----------+----------+-----+\n", "| Dense | 100 | steady | 0.050 | 0 | 0 | 1 | 0.100 | 0.100 | 100 |\n", "+-------+--------+----------+---------+--------+---------+--------+----------+----------+-----+\n", "\n", "+-------+-------+-------+-------------+\n", "| Layer | LRELU | ELU | softmax |\n", "| | alpha | alpha | temperature |\n", "+-------+-------+-------+-------------+\n", "| Dense | 0.300 | 1 | 1 |\n", "+-------+-------+-------+-------------+\n", "\n", "\u001b[1m----------------------- 1651394171_Perceptron_Dense-1-sigmoid -------------------------\n", "\u001b[0m\n", "\n", "\u001b[1m\u001b[37mEpoch 99 - Batch 0/0 - Accuracy: 1.0 Cost: 0.05833 - TIME: 5.39s RATE: 1.86e+01e/s TTC: 0s \u001b[0m\n", "\n", "+-------+----------+----------+-------+--------+-------+---------------------------------------+\n", "| \u001b[1m\u001b[37mepoch\u001b[0m | \u001b[1m\u001b[37mlrate\u001b[0m | \u001b[1m\u001b[32maccuracy\u001b[0m | | \u001b[1m\u001b[31mMSE\u001b[0m | | \u001b[37mExperiment\u001b[0m |\n", "| | \u001b[37mDense\u001b[0m | \u001b[1m\u001b[32mdtrain\u001b[0m | \u001b[1m\u001b[32mdval\u001b[0m | \u001b[1m\u001b[31mdtrain\u001b[0m | \u001b[1m\u001b[31mdval\u001b[0m | |\n", "+-------+----------+----------+-------+--------+-------+---------------------------------------+\n", "| \u001b[1m\u001b[37m0\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m0.636\u001b[0m | \u001b[1m\u001b[32m0.588\u001b[0m | \u001b[1m\u001b[31m0.242\u001b[0m | 
\u001b[1m\u001b[31m0.244\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m10\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m0.818\u001b[0m | \u001b[1m\u001b[32m0.765\u001b[0m | \u001b[1m\u001b[31m0.153\u001b[0m | \u001b[1m\u001b[31m0.197\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m20\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m0.818\u001b[0m | \u001b[1m\u001b[32m0.765\u001b[0m | \u001b[1m\u001b[31m0.123\u001b[0m | \u001b[1m\u001b[31m0.179\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m30\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m0.848\u001b[0m | \u001b[1m\u001b[32m0.765\u001b[0m | \u001b[1m\u001b[31m0.105\u001b[0m | \u001b[1m\u001b[31m0.168\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m40\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m0.970\u001b[0m | \u001b[1m\u001b[32m0.765\u001b[0m | \u001b[1m\u001b[31m0.093\u001b[0m | \u001b[1m\u001b[31m0.159\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m50\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m1.000\u001b[0m | \u001b[1m\u001b[32m0.824\u001b[0m | \u001b[1m\u001b[31m0.084\u001b[0m | \u001b[1m\u001b[31m0.152\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m60\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m1.000\u001b[0m | \u001b[1m\u001b[32m0.824\u001b[0m | \u001b[1m\u001b[31m0.077\u001b[0m | \u001b[1m\u001b[31m0.147\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m70\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m1.000\u001b[0m | \u001b[1m\u001b[32m0.824\u001b[0m | \u001b[1m\u001b[31m0.071\u001b[0m | \u001b[1m\u001b[31m0.144\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m80\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m1.000\u001b[0m | \u001b[1m\u001b[32m0.824\u001b[0m | \u001b[1m\u001b[31m0.066\u001b[0m | \u001b[1m\u001b[31m0.141\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m90\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m1.000\u001b[0m | \u001b[1m\u001b[32m0.824\u001b[0m | \u001b[1m\u001b[31m0.061\u001b[0m | \u001b[1m\u001b[31m0.138\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "| \u001b[1m\u001b[37m99\u001b[0m | \u001b[1m\u001b[37m1.00e-01\u001b[0m | \u001b[1m\u001b[32m1.000\u001b[0m | \u001b[1m\u001b[32m0.824\u001b[0m | \u001b[1m\u001b[31m0.058\u001b[0m | \u001b[1m\u001b[31m0.137\u001b[0m | \u001b[37m1651394171_Perceptron_Dense-1-sigmoid\u001b[0m |\n", "+-------+----------+----------+-------+--------+-------+---------------------------------------+\n" ] } ], "source": [ "model.train(epochs=100)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "With all defaults, EpyNN returns:\n", "\n", "* **EpyNN Check**: Result of a blank epoch to make sure the network is functional. 
For each layer, output shapes are returned for both the forward and backward propagation.\n", "* **init_logs**: Extended report about datasets, network architecture and shapes.\n", "* **Evaluation**: Real-time evaluation of non-empty datasets (may include dtrain, dval, dtest) with the default metric (accuracy) and the default loss function (MSE).\n", "\n", "See [How to use EpyNN - Console](https://epynn.net/quickstart.html#console) for more details.\n", "\n", "You may also like to review the [Loss - Functions](https://epynn.net/loss.html).\n", "\n", "For **Evaluation**, put simply, we want:\n", "\n", "* Metrics (e.g., accuracy) as high as possible.\n", "* Cost (e.g., MSE) as low as possible.\n", "* Differences in metrics and cost between training and validation data **as low as possible**; otherwise, the model is **overfitting**.\n", "\n", "**Overfitting** happens when the model corresponds too closely to a particular set of data and therefore generalizes poorly.\n", "\n", "The terminal report indicates:\n", "\n", "* Accuracy is 1 or 100% for the training set and 0.824 or 82.4% for the validation set.\n", "* MSE is 0.058 and 0.137 for the training and validation sets, respectively.\n", "* Differences between training and validation data are significant, but the model still reproduces the validation data with acceptable accuracy. Still, we are in the presence of **overfitting**.\n", "\n", "In the next notebooks, we will see in practice how to reduce **overfitting**.\n", "\n", "Now, we can take the opportunity to define what accuracy means:" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0] [0.10853502] [0.]\n", "[0] [0.36626986] [0.]\n", "[1] [0.56927243] [1.]\n", "[1] [0.71776795] [1.]\n", "[1] [0.25847613] [0.]\n", "[0] [0.14270835] [0.]\n", "[1] [0.74599602] [1.]\n", "[0] [0.08445828] [0.]\n", "[1] [0.97949576] [1.]\n", "[1] [0.24478587] [0.]\n" ] } ], "source": [ "Y_dval = model.embedding.dval.Y # True labels (i.e. target values)\n", "A_dval = model.embedding.dval.A # Probabilities - Output of model\n", "P_dval = model.embedding.dval.P # Decision from probabilities\n", "\n", "for y, a, p in list(zip(Y_dval, A_dval, P_dval))[:10]:\n", " print(y, a, p)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For each row, this output shows, by column:\n", "\n", "* The true label or target value: It is ``0`` or ``1`` herein.\n", "* The output probability: The value is within \[0, 1\]. Therefore, the higher the probability the more confident the model is to predict a label of ``1``. 
Conversely, the lower the probability the more confident the model is to predict a label of ``0``.\n", "* The predicted label or output decision: This is the output probability rounded to the nearest integer.\n", "\n", "When the number of output nodes is equal to one, the decision is obtained by rounding the output probability:" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "True\n" ] } ], "source": [ "print(all(np.around(A_dval) == P_dval))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Then, the accuracy for each sample is defined as:" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[[ True]\n", " [ True]\n", " [ True]\n", " ...\n", " [ True]\n", " [False]\n", " [ True]]\n" ] } ], "source": [ "print((Y_dval == P_dval))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the accuracy for each sample is `True` or `False`, which evaluate to `1` and `0`, respectively.\n", "\n", "To compute the accuracy with respect to the whole dataset, we take the mean:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0.8235294117647058\n", "0.824\n" ] } ], "source": [ "print((Y_dval == P_dval).mean())\n", "print(np.around((Y_dval == P_dval).mean(), 3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that the rounded value is, as expected, identical to the accuracy reported for the last epoch on the validation set.\n", "\n", "We can finally plot the results:" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "image/png": 
"iVBORw0KGgoAAAANSUhEUgAAAYIAAAEWCAYAAABrDZDcAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMywgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/MnkTPAAAACXBIWXMAAAsTAAALEwEAmpwYAABA10lEQVR4nO3deXwU9f348dd7j1wkIYRwX0EOOQ1yfxUERbxaUUGkFq1oK1Wrrbbf1qPUL1497a/Vam2xKlqtWrGtFK1VBEQExIAHhyAIgYQzhJzk2M3u5/fHzC6b+yCbTbLv5+Oxj52Zz2dm3rOzyXtnPjOfEWMMSimlopcj0gEopZSKLE0ESikV5TQRKKVUlNNEoJRSUU4TgVJKRTlNBEopFeU0ESil2jwRmS8i77S19YrIGhH5TmvGFA6aCMJERG4XkUwRqRCRpdXKEkTkjyJyXEQKRWRtSNliEfGKSEnI64yQ8iUisktE/CKyoNpyv2GXFYrIMRF5XkSSQ8qHi8gqu3yPiFxVR+z3i4gRkQtDpl0jIutFpFRE1lSrP7VavCX2/HPs8lEi8l97ext944qIZIlImb28oyKyVEQSGzt/uNn76sUIrDfd/nxLQj6bFSIys7VjaUh935umMMa8ZIy5qAVDa9PrbW2aCMLnEPAw8GwtZUuAVGC4/X5XtfJXjTGJIa+9IWWfAbcBW2pZ7ofAucaYzsAZgMuOARFxAW8AK+x1LgReFJGhoQsQkUHAXOBwtWWfAH4P/LL6So0xH4TGC3wdKAHetqt4gb8D364l5oZcbi9zLDAeWNSUmcUSke95K6w7xf5sMoB3gX9W/3HQBtT5vVFthyaCMDHG/MMY8y8gL3S6iAwDZgELjTG5xhifMWZzE5b7pDHmPaC8lrJsY8zxkEk+YLA9PAzoDfzOXucqrMRxfbXFPAncDXiqLXulMebvWAmuITcAy4wxJ+15dxljngG2N2LeWhljDgL/AUYBiMhk+5dmgYh8JiLTA3Xtw/VHRORDoBQ4Q0RGisi7InLC/gV9n13XISL3iMhXIpInIn8XkVS7LPDLe6GIHBKRwyLyv3bZJcB9wDz7V/ln9az7HBH52D4S+1hEzqkW60Mi8qGIFIvIOyKS1sTP5ogx5jFgMfCrQPIRkd4i8rqI5IrIPhH5fsh6F9vb+oK93u0iMj6k/G4ROWiX7RKRGQ19XnXE1pTvDSKyQET22uvdJyLzQ6avC6l3kZw6+v2jiLwv9ikau+6HIvI7+/ux194HC0QkW6yj5RtCltXZ/hxyRWS/iCwK+Qyrr3emiOy01/sEII3ZrrZOE0HrmwjsBx4Q61TJVrFPoYS43P6HtV1Ebm3KwkVkiogUAsXAHKxfY3VWx/7Has87F6gwxrzVlHVWW38n4Grg+eYuo47l9gMuAz4RkT7Am1hHO6nA/wKvi0i3kFmuxzrqSQKOAiuxjlB6YyXH9+x6dwBXAtPssnysZBjqfGAIcBFwt4hcaIx5G/g5p47eMupYd7Ed6+NAV+D/AW+KSNeQ+t8EbgS6AzH29jTHP+xlnGn/I/s31hFkH2AGcKeIXBxSfxbwCpACLAeeABCRM4HbgQnGmCTgYiDLnqcxn1ez2N+dx4FL7fWeA3xaS700YBlwL9ZnusuuG2oS8Lld/jes7ZyAte+vA56QU6cZ/wAEjqKnAd/C2h+1rfcfWEelacBXwLnN3d42xRijrzC+sP5ZLQ0Zvw8wWL/eYrC+eCXAcLt8BNYfmBPry30YuLaW5a4DFtSz3j72Ooba425gL/ATe/girF/9/7XLk4DdQLo9ngVcWMtyvwOsqWe91wP7AKmlbLD1lWv0Z5dlfzYFWMnzj0A81hHLX6vV/S9wgz28BngwpOxa4JM61vEFMCNkvBfWqSwXkG7vq2Eh5b8GnrGHFwMvVlte9XVfD2yqVmdDYN/Z9ReFlN0GvN3A5xKIy1Vtepw9/Vysf4QHqpXfCzwXEvvKkLIRQFnIfjoGXAi4G/t5NRBzvd8bu04ne1/PAeKrlS0A1tnD3wI2hJQJkA18J6Tu7pDy0fbn0iNkWh4wBuvvzAOMCCn7biDWWta7sdp6cwLrbc8vPSJofWVYfzgPG2M8xpj3gdVY/5gxxuwwxhwy1umb9cBjWL+wm8RYp1LexvolhDHGi/VL7mvAEeBHWOftc+xZFmP9c81q9pZZbgBeMPZfSgu40hiTYowZYIy5zRhTBgwA5tqH/QUiUgBMwfqnFJAdMtwP69dbbQZgnVsPLOcLrFNqPepY1n6sRF2f0Pq97XlC7cdK1AFHQoZLgeY2iAeWeQJru3pX+4zuo+p2VV9vnIi4jDF7gDuxvhPHROQVEQlsc52fl4j8SU41YN/XULDV6xvrVOI84BbgsIi8Kdap1Op6E/IZ29+1nGp1joYMl9n1qk9LxPpl76bqPqq+f+pbb3Yt9dodTQSt7/NaptX3T9PQ/POQLmBQcEHGfG6MmWaM6WqMuRjrUHiTXTwD+L6IHBGRI1j/PP8uInc3dmX26ZvpwAvNjLexsrGSVkrIq5MxJrRB0lSrfwa1y8Y6FRG6rDg7kQb0Cxnuz6nz3XXtt9Dph7D+eYbqDxyk5V2F9Ut+F9Z27au2XUnGmMsasyBjzN+MMVOwYjfAr+yiOj8vY8wt5tRFAz9vxDpq1DfG/NcYMxMrqe8Enq5l1sNA38CIiEjoeBMdx/phFrqP6to/hwn5Ltjr7VdLvXZHE0GYiIhLROKwDj2dIhIn1pU7a4EDwL12nXOxzkH/157vChHpIpaJwPexrvYJLDfGXq4Abnu5gYat+SLS3x4eADzCqXPhiMhZdv0EsRo9ewFL7eIZWO0FY+zXIaxD5CfteZ32el2Aw16Ou9pmXw+sN8ZU+fVtb0sc1qkw7Hljm/O52l7Eake5OBCXiEwXkbr+GawAeonInSISKyJJIjLJLvsT8Ij9eSEi3UTkimrz/8z+zEZinTt+1Z5+FEiX+q8MegsYKiLftPf3PKzTMCuavtm1E5EeInI78H/AvcYYP1aCLxar0Tfe/pxGiciERizvTBG5wN5H5Vi/nv12cWM+r9BlNeZ7E7odV9htBRVYpwX9tVR9ExgtIlfaf1PfA3o2tF21Mcb4sI6MH7G/FwOAH2J9x2pb70gRmW2v9/vNXW9bo4kgfBZh/QHdg9U4VYZ1LtgLXIHV8FmI9YvnW8aYnfZ83wD2YDUyvgD8yhgT2vD6jr2sc7AuQy0DzrPLRgDrReQk1hVBu4CbQ+a9HutXzTGsf/wzjTEVAMaYPGNdfXLEGHME63A/3xhTEjJvGfAUMNUerv5r7VvU3kg8wK4fuGqozI6tWYwx2Vif4X1ALtav1B9Tx/fZGFMMzAQuxzodshsr+YJ16m058I6IFAMbsc6vh3ofa5+8BzxqjAncYPSa/Z4nIrVdzosxJg/rctofYZ2X/gnwdVP16q7mKrD39Vas79NcY8yz9np99nrHYLXZHAf+gtUo2pBYrMs9j2N9Xt2x2hegcZ9XqMZ8bwIcWP+ED2Gd3poG1LhYwv7s5mK11+Rhfe8zsZJHc9wBnMRqQ1
uH1bhc47LvkPX+0l7vEKy/s3ZPWu5UrlIdi4ikY/0TdRtjKiMcjqqDfUSWA8w3xqyOdDztkR4RKKXaHfu0YIp9+uo+rFOlGyMcVruliUBFjIj0l5pdUwRe/SMdXyTZ7T21fS7Nvimvg/kfrCvBjmOd8rvSvqJMNYOeGlJKqSinRwRKKRXlXJEOoKnS0tJMenp6pMNQSql2ZfPmzceNMd1qK2t3iSA9PZ3MzMxIh6GUUu2KiFS/wz1ITw0ppVSU00SglFJRThOBUkpFOU0ESikV5TQRKKVUlAtbIhCRZ8V6JNy2OspFRB4X6yHqn4vI2HDFopRSqm7hPCJYClxST/mlWL33DcF6rN9TYYxFKaVUHcJ2H4ExZq3de2NdruDUk6w22h1I9TLGHA5XTEoFlHl8PLd+H+UeX6RDUarRZgzvQUa/lBZfbiRvKOtD1ce85djTaiQCEVmIddRA//5R3ReZaiHPrd/Hr9/ehTT32W9KRUD35LgOlwgazRizBOshLIwfP157yVOnpczj45kP9jFtaDeev2lipMNRKuIiedXQQao+77Mv4XmOq1JVvLzpAHknPdxxweBIh6JUmxDJRLAc+JZ99dBkoFDbB1S4VVT6+PPar5g0MJXx6amRDkepNiFsp4ZE5GVgOpAmIjlYD9Z2Axhj/oT1UO/LsJ4FW4r1UHClwmrZ5hyOFlXw27ljIh2KUm1GOK8auraBcgN8L1zrVx2fp9JPTn5po+sb4Kk1XzGmXwrnDu4avsCUamfaRWOxUrX539c+Y/lnh5o83+LLRyJ6uZBSQZoIVLu051gx//78ELPP7sO0M2t91katkuPcTG9CfaWigSYC1S79cfVXxLmcLPr6CFI7xUQ6HKXaNe10TrU7B/JKeeOzQ8yf1F+TgFItQBOBaneeev8rnA7h5vPOiHQoSnUImghUu3K4sIxlm7OZN74fPZLjIh2OUh2CJgLVrvz5/b0YA9+dpkcDSrUUTQSq3cgtruDlTQe46uw+9O2SEOlwlOowNBGoduMv6/bi9fm57XztI0iplqSJQLULBaUeXtywn6+f1ZuBaZ0iHY5SHYomAtUuPPdhFic9Pr6nRwNKtThNBKrNKy738tyH+7h4ZA/O7JkU6XCU6nD0zmIVUe/uONpgx3GfZRdQVF7J7ecPaaWolIoumghUxHxxuIibX8hsVN2ZI3owum/nMEekVHTSRKAi5snVe0iMdfH2nVNJjK3/q5gc526lqJSKPpoIVER8lVvCm1sPc8u0QXpPgFIRpo3FKiL+uPorYl0Ovj1lYKRDUSrqaSJQrS77RCn/+vQg35w4gLTE2EiHo1TU00SgWt1T73+FU4SF2nuoUm2CJgLVqg4XlrEsM4e54/vSs7P2HqpUW6CJQLWqJWv34jOGW6YNinQoSimbJgLVao6XWL2HXjGmN/1S9UohpdoKTQSq1Tyzbh8VlX7tL0ipNkYTgWoVBaUeXlifxddG92JQt8RIh6OUCqGJQLWKpeu191Cl2iq9s7iNyMw6wZOr9+A39ddLiHHy8JWj6NoGr78/WFDG4uXb8VT6a5Rt3p/PzBE9GN4rOQKRKaXqo4mgDTDG8MC/d7A/7yQDGzht8sHuXPqnJnDvZcNbKbrGe3zlbt7flcvw3jX/2Q/rmcT/XnRmBKJSSjVEE0Eb8P6XuWw9WMiv55zFNRP61Vv3B698wl837ueWaYPo0immlSJs2MGCMl7fksP8Sf154IpRkQ5HKdUE2kYQYcYY/rBqD31S4rny7D4N1v/e+YMp9fh47sN9rRBd4/35/a8QgYV6f4BS7Y4mggjbuPcEm/fn891pZxDjanh3DO2RxMUje7B0fRZF5d5WiLBhx4rLeeXjbGaf3Zc+KfGRDkcp1UR6aijCnli9m25JsVwz1Anrfgd+n1WQ1AvOnl/rPLefP4T/bj/KXzfsbxNX4fzlg31U+vzcOj2KjgZ2r4TDn0Y6ChVtBl8Ivce0+GI1EbSCA3mlbDmQX2P68ZIKPtyTx08vG07cR3+ATX+uWqHPOOg+rMZ8o/t2ZvqZ3Xhm3T56p8QhSLhCb5DPb3hx435mZfQmPa1TxOJoVT4vvHYDeEoiHYmKNvFdNBG0Rz6/4calm/gq92St5WmJMXxzUn94Zh0MnAbzl0HBfnhiPOxfV2siALjjgiFc/af13PXqZ+EMv1FcDmkTRyat5vBnVhKY8wwMnxXpaFQ0cTjDstiwJgIRuQR4DHACfzHG/LJaeX/geSDFrnOPMeatcMbU2t7edoSvck/yyFWjOGdQWo3y1IQYOlUWwrHtcMHPwBUDXQdDch/IWgcTvlPrcscN6MLGe2dQ6vGFexMalBjroltS27uvIWyyPrDeB06z9pdS7VzYEoGIOIEngZlADvCxiCw3xuwIqbYI+Lsx5ikRGQG8BaSHK6bWZozhidV7OKNbJ74xoT9ORx2ncHZ8aL2nT7HeRazhr1aBMdZ4LXokazfOEZG1DroNg8RukY5EqRYRzquGJgJ7jDF7jTEe4BXgimp1DBC4+6gzcCiM8bS6VTuP8cXhIr43fXDdSQBg/4fgiofeY09NS58CJ3Ph+JfhD1Q1nq8SDmw8lbSV6gDCmQj6ANkh4zn2tFCLgetEJAfraOCO2hYkIgtFJFNEMnNzc8MRa4sL3B/QLzWeWWN61185ax30n1T1NEPgH03WuvAFqZou0D6giUB1IJG+j+BaYKkxpi9wGfBXEakRkzFmiTFmvDFmfLdu7eNw/MM9eXyaXcCt0wbjdtbzMZeegKPbav5j6TIQknprImhrAu0DAzQRqI4jnI3FB4HQ/hL62tNCfRu4BMAYs0FE4oA04FgY4woLv98wb8kGPs46dZloz+Q45oxr4G7h/eut9/SpVacH2gn2rqm3nUC1Mm0fUB1QOBPBx8AQERmIlQC+AXyzWp0DwAxgqYgMB+KA9nHup5r3dh7j46x85oztS58u1t2104amEetq4HKvrHU12wcC0qfA1r/D8d3QbWgYolZN4quEAxvgrHmRjkSpFhW2RGCMqRSR24H/Yl0a+qwxZruIPAhkGmOWAz8CnhaRu7AajhcYYxroiLntMcbwxKrd9E9N4FdzRuOq71RQdVnroN/E2i9DDLYTfKCJoC3Q9gHVQYX1PgL7noC3qk27P2R4B3BuOGNoDR/sPs5nOYX8YnYTk0CgfeD8n9ZennqG1dXE/g9hwrdbJljVfPvt9hpNBKqD0TuLAYqPgLes2bP//Z1PGJdcxuyBXjjRhF5Bsz4ATN3/WALtBPvWNm25VZbhgM79wNHIBOXzQmFO89bV0e15D9LOhMTukY5EqRaliSDrQ1h62Wkt4onAwJPNmNmdAH1qaR8IGHgebH0NHh/TjIXbzl8E037cuLrL74DPXm7+ujq6Ou70Vqo900RwdJv1/rXfgrvpnab9ae1XHC4o46dfG0FMU04LBXQdDK56umc4ax7EdIJKT9OXDfDhY7Dn3cYlAmNg97tW8smo3q6vEIfV+6NSHYwmgoID1lU747/d5Es0PzmQzy8PreeeS4cRMy5MXTC7YmHUnObPf/xLW
P84eE5aCaU+ubug9LiVfMZc2/x1KqXalUjfUBZ5BQcgpV+zrtN/cvUeUhLcXDd5QBgCayHp54K/ErI/arhu8Gapdt9+r5RqAk0EBQcgpX+TZ9txqIiVXxzjxnMGkhjbhg+s+k0GcTbuDuWsdZDcF7qkhz0spVTboYmgmYngyTV7SIx1seCc9JaPqSXFJlqN0Q0lAmOsy1TTp+hdzEpFmehOBBXFUHaiyYlgz7ES3tp6mG/9zwA6J7jDFFwLSp8CB7dY7QR1Of6l1dupXiOvVNSJ7kRQYHeO2sRE8Mc1e4h1Ofj2lIFhCCoM0qeA3wvZm+quE2gf0ESgVNSJ8kRwwHpPaXxjb/aJUt749BDfnDiAront5Klc/SY13E6Qtc56Kpq2DygVdTQRQJOOCJ56/yucInx32hlhCioMYpOg99l1JwJjrDJtH1AqKkV5ItgPrjjo1LguhQ8XlrEsM4e54/u2v8dEpk+Bg5vBU1qz7PhubR9QKopFeSI4YPXD08hfwUvW7sVnDLdMC9PNY+GUPtVqJ8ippZ1A2weUimpt+AL4VtCES0ePl1Tw8qYDXHV2H/qlJoQ5sDDob7cTvHF7zSOgwmy7faCdNH4rpVpUdCeCwmzr3HkjPLNuHxWVfm6d3g6PBsBqJ5h+T+1XDiV0heGXa/uAUlEqehNBRQmU5jXqiKCw1MtfN+zna6N7MahbYisEFybTfhLpCJRSbVD0thEUNv4egufW76OkopLvnT84zEEppVTri95E0Mh7CEoqKnnuwyxmjujB8F7JrRCYUkq1Lk0EDRwRvLhxP4VlXm7XowGlVAcVxYnAvoegnscOlnl8/OWDvUwdkkZGv5TWi00ppVpR9DYWN+Ieglc+PsDxEg93XDCkFQNTKnK8Xi85OTmUl5dHOhTVTHFxcfTt2xe3u/EdYkZ3IkjpR0Gph6fe/wpPpb9GlRWfH2biwFQmDkyNQIBKtb6cnBySkpJIT09H9HLidscYQ15eHjk5OQwc2Pj7gqI7EfTK4I9rvmLJ2r0kxdX8KGKcDn40c2gEglMqMsrLyzUJtGMiQteuXcnNzW3SfNGZCDwnoTSP0oQ+vPj+fq4c05vff6NxN5Yp1dFpEmjfmrP/orOx2H4OweojcZR6fHp/gFJt1OLFi3n00UcBWLp0KYcOHWryMv70pz/xwgsvtHRoHUp0HhHYl46+vFu4dFRPhvRIinBASqmGLF26lFGjRtG7d+8aZT6fD6fTWet8t9xyS7hDa7bKykpcrsj/G47OI4JjOwDYUZ6mRwNKtTGPPPIIQ4cOZcqUKezatQuAZcuWkZmZyfz58xkzZgxlZWWkp6dz9913M3bsWF577TWefvppJkyYQEZGBnPmzKG01OpyPfSoYvr06dx9991MnDiRoUOH8sEHH9RYf0lJCTNmzGDs2LGMHj2aN954I1j2wgsvcNZZZ5GRkcH1118PwNGjR7nqqqvIyMggIyOD9evXk5WVxahRo4LzPfrooyxevDgYw5133sn48eN57LHH+Pe//82kSZM4++yzufDCCzl69GgwjhtvvJHRo0dz1lln8frrr/Pss89y5513Bpf79NNPc9ddd532Zx75VNRKjhWXc6TQuiRuwM415NGHjDMHMapP5whHplTb9MC/t7PjUFGLLnNE72T+7/KRdZZv3ryZV155hU8//ZTKykrGjh3LuHHjuPrqq3niiSd49NFHGT9+fLB+165d2bJlCwB5eXncfPPNACxatIhnnnmGO+64o8Y6Kisr2bRpE2+99RYPPPAAK1eurFIeFxfHP//5T5KTkzl+/DiTJ09m1qxZ7Nixg4cffpj169eTlpbGiRMnAPj+97/PtGnT+Oc//4nP56OkpIT8/Px6PwePx0NmZiYA+fn5bNy4ERHhL3/5C7/+9a/57W9/y0MPPUTnzp3ZunVrsJ7b7eaRRx7hN7/5DW63m+eee44///nPDX3sDYqaRPDPLQf5xX924sTHp7EbWe87h9sv0KMBpdqSDz74gKuuuoqEBKur91mzZtVbf968ecHhbdu2sWjRIgoKCigpKeHiiy+udZ7Zs2cDMG7cOLKysmqUG2O47777WLt2LQ6Hg4MHD3L06FFWrVrF3LlzSUtLAyA11bqsfNWqVcE2CKfTSefOnRtMBKFx5+TkMG/ePA4fPozH4wle9rly5UpeeeWVYL0uXboAcMEFF7BixQqGDx+O1+tl9OjR9a6rMaImEVwyqieDuyeSdGIrSe+Wcc6MKzljgN4foFRd6vvl3lZ06tQpOLxgwQL+9a9/kZGRwdKlS1mzZk2t88TGWs8adzqdVFZW1ih/6aWXyM3NZfPmzbjdbtLT05t8g53L5cLvP3VvUvX5Q+O+4447+OEPf8isWbNYs2ZN8BRSXb7zne/w85//nGHDhnHjjTc2Ka66RE0bwYCunZgxvAcTsdoHzhhf+68FpVTknHfeefzrX/+irKyM4uJi/v3vfwfLkpKSKC4urnPe4uJievXqhdfr5aWXXmp2DIWFhXTv3h23283q1avZv38/YP0Sf+2118jLywMInhqaMWMGTz31FGA1WhcWFtKjRw+OHTtGXl4eFRUVrFixot719enTB4Dnn38+OH3mzJk8+eSTwfHAUcakSZPIzs7mb3/7G9dee22ztzNU1CSCoKx10HUIJPWIdCRKqWrGjh3LvHnzyMjI4NJLL2XChAnBsgULFnDLLbcEG4ure+ihh5g0aRLnnnsuw4YNa3YM8+fPJzMzk9GjR/PCCy8ElzVy5Eh++tOfMm3aNDIyMvjhD38IwGOPPcbq1asZPXo048aNY8eOHbjdbu6//34mTpzIzJkz641n8eLFzJ07l3HjxgVPO4HVzpGfn8+oUaPIyMhg9erVwbJrrrmGc889N3i66HSJMaZFFtRaxo8fbwKNLE3mq4RfD4RRc+Dy37doXEp1BF988QXDhw+PdBiqAV//+te56667mDFjRq3lte1HEdlsjBlfW/2wHhGIyCUisktE9ojIPXXUuUZEdojIdhH5Wzjj4cjnUFGkD2lXSrVLBQUFDB06lPj4+DqTQHOErbFYRJzAk8BMIAf4WESWG2N2hNQZAtwLnGuMyReRuvuEbgn7P7TeB5wb1tUopVQ4pKSk8OWXX7b4csN5RDAR2GOM2WuM8QCvAFdUq3Mz8KQxJh/AGHMsjPHY7QODIblXWFejlFLtSTgTQR8gO2Q8x54WaigwVEQ+FJGNInJJbQsSkYUikikimU3tVS/I74P96/W0kFJKVRPpq4ZcwBBgOnAt8LSIpFSvZIxZYowZb4wZ361bt+atKdA+MEATgVJKhQpnIjgI9AsZ72tPC5UDLDfGeI0x+4AvsRJDy8taZ72na/uAUkqFCmci+BgYIiIDRSQG+AawvFqdf2EdDSAiaVinivaGJZrBM+GyRyG5Zs+FSqm2KbTDuKZasGABy5Yta+GIOqZGXzUkIgnGmNLG1jfGVIrI7cB/ASfwrDFmu4g8CGQaY5bbZReJyA7AB/zYGJPXtE1opO7DrJdSSrWCttLFdGM0eEQgIufY/6h32uMZIvLH
xizcGPOWMWaoMWaQMeYRe9r9dhLAWH5ojBlhjBltjHml/iUqpTq62rqh3rlzJxMnTgzWycrKCna29uCDDzJhwgRGjRrFwoULaegm2bq6q66tO2movevp6kcbiYmJAKxZs4apU6cya9YsRowYAcCVV17JuHHjGDlyJEuWLAnO8/bbbzN27FgyMjKYMWMGfr+fIUOGBB8z6ff7GTx4cJMfO9kcjUlXvwMuxj6tY4z5TETOC2tUSqnI+889cGRryy6z52i49Jd1FtfVDfWwYcPweDzs27ePgQMH8uqrrwZ78Lz99tu5//77Abj++utZsWIFl19+eZ3rmD17dq3dVdfWnfT27dtr7Xq6Plu2bGHbtm3BXkSfffZZUlNTKSsrY8KECcyZMwe/38/NN9/M2rVrGThwICdOnMDhcHDdddfx0ksvceedd7Jy5UoyMjJo9gUyTdCoNgJjTHa1Sb4wxKKUinKh3VAnJydX6Yb6mmuu4dVXXwWokghWr17NpEmTGD16NKtWrWL79u31rmPbtm1MnTqV0aNH89JLLwXrr1q1iltvvRU41Z10XV1P12fixInBJADw+OOPk5GRweTJk8nOzmb37t1s3LiR8847L1gvsNybbrop2KX1s88+22K9izakMUcE2SJyDmBExA38APgivGEppSKunl/ukTBv3jzmzp3L7NmzERGGDBlCeXk5t912G5mZmfTr14/Fixc32GV0Y7urrk9oN9N+vx+PxxMsC+1ies2aNaxcuZINGzaQkJDA9OnT642vX79+9OjRg1WrVrFp06bT6kW1KRpzRHAL8D2sm8EOAmPscaWUalH1dUM9aNAgnE4nDz30UPBoIPBPNS0tjZKSkkZdJVRXd9W1dSddV9fT6enpbN68GYDly5fj9XprXVdhYSFdunQhISGBnTt3snHjRgAmT57M2rVr2bdvX5XlgvW8geuuu465c+fW+RzmltZgIjDGHDfGzDfG9DDGdDfGXBe2K3uUUlGtvm6owToqePHFF7nmmmsAq++dm2++mVGjRnHxxRfXqF+burqrrq076bq6nr755pt5//33ycjIYMOGDVWOAkJdcsklVFZWMnz4cO655x4mT54MQLdu3ViyZAmzZ88mIyOjyhPLZs2aFXxecWtpsBtqEXkOqFHJGHNTuIKqz2l1Q62Uqpd2Qx15mZmZ3HXXXXzwwQfNXkZTu6FuTBtB6KN14oCrgEPNjlAppVStfvnLX/LUU0+1WttAQIOJwBjzeui4iLwMrAtbREopFaXuuece7rmn1ke3hFVzupgYAoT3uQFKKaVaTYNHBCJSjNVGIPb7EeDuMMellFKqlTTm1FBSawSilFIqMupMBCIytr4ZjTFbWj4cpZRSra2+NoLf1vNqXr+wSinVBKHdUC9dupRDh5p+weKf/vSnYLcNjZGVlYWIsGjRouC048eP43a7uf322wHYtWsX06dPZ8yYMQwfPpyFCxcC1p3EnTt3ZsyYMcHXypUrmxxza6vziMAYc35rBqKUUvVZunQpo0aNonfvms8U8fl8dd6Fe8sttzR5XQMHDuTNN9/k4YcfBuC1115j5MiRwfLvf//73HXXXVxxhfUY9q1bT3XON3XqVFasWEF70qirhkRklIhcIyLfCrzCHZhSKjrV1g31smXLyMzMZP78+YwZM4aysjLS09O5++67GTt2LK+99lqd3UuHHlVMnz6du+++m4kTJzJ06NA6b9pKSEhg+PDhBG5effXVV4N3MwMcPnyYvn37BscDXWK3V425auj/sJ4iNgJ4C7gU6z6Cxh9rKaXanV9t+hU7T+xs0WUOSx3G3RPrvuiwrm6or776ap544gkeffRRxo8/dXNs165d2bLFaq7My8urtXvp6iorK9m0aRNvvfUWDzzwQJ2nbr7xjW/wyiuv0KNHD5xOJ7179w6emrrrrru44IILOOecc7jooou48cYbSUlJAaweVMeMGRNczuuvv86gQYOa9Dm1tsbcWXw1kAF8Yoy5UUR6AC+GNyylVDQK7YYaqNINdW1C++jZtm0bixYtoqCggJKSEi6++OJa55k9ezYA48aNIysrq85lX3LJJfzsZz+jR48eVdYDcOONN3LxxRfz9ttv88Ybb/DnP/+Zzz77DGifp4YakwjKjTF+EakUkWTgGFUfSq+U6oDq++XeVoR29tbY7qVjY2MB65kDlZWVdS47JiaGcePG8dvf/pYdO3awfHnVR6737t2bm266iZtuuolRo0axbdu209+gCKmzjUBEnhSRKcAmEUkBngY2A1uADa0TnlIqmtTXDXVSUhLFxcV1zltX99Kn40c/+hG/+tWvajyQ5u233w52PX3kyBHy8vLo06dPi6wzEuo7IvgS+A3QGzgJvAzMBJKNMZ+3QmxKqSgT2g119+7dq3QrvWDBAm655Rbi4+PZsKHmb9FA99LdunVj0qRJ9SaNxho5cmSVq4UC3nnnHX7wgx8QFxcHwG9+8xt69uzJzp07a7QRLFq0iKuvvvq0YwmnxnRDPQD4hv2Kx0oIfzPG7A5/eDVpN9RKhY92Q90xNLUb6sY8mGa/MeZXxpizgWuBK4GWvZRAKaVUxDSYCETEJSKXi8hLwH+AXcDssEemlFKqVdTX19BMrCOAy4BNwCvAQmPMyVaKTSmlVCuor7H4XuBvwI+MMfmtFI9SSqlWVl9fQxe0ZiBKKaUiozlPKFNKKdWBaCJQSrVZoR3GNdWCBQtYtmxZrdMTEhKq3Gdw5513IiIcP34csDq+GzlyJGeddRZjxozho48+AqxO684888xgF9Nt/f6AxmpMFxNKKdWhDB48mDfeeIPrrrsOv9/PqlWrgncGb9iwgRUrVrBlyxZiY2M5fvw4Ho8nOO9LL71UpeO7jkCPCJRSbUpt3VDv3LmTiRMnButkZWUFu35+8MEHmTBhAqNGjWLhwoU0dJMsWD2Lvvrqq4D1MJlzzz0Xl8v6XXz48GHS0tKCfRKlpaXV+gyEjkSPCJRStTry859T8UXL3jsaO3wYPe+7r87yurqhHjZsGB6Ph3379jFw4EBeffXVYI+gt99+O/fffz8A119/PStWrODyyy+vN46hQ4eyfPly8vPzefnll7nuuuv4z3/+A8BFF13Egw8+yNChQ7nwwguZN28e06ZNC847f/584uPjAZg5cya/+c1vTuszaQv0iEAp1WaEdkOdnJxcpRvqa665JvgrPjQRrF69mkmTJjF69GhWrVrF9u3bG7Wu2bNn88orr/DRRx8xderU4PTExEQ2b97MkiVL6NatG/PmzWPp0qXB8pdeeolPP/2UTz/9tEMkAdAjAqVUHer75R4J8+bNY+7cucyePRsRYciQIZSXl3PbbbeRmZlJv379WLx4MeXl5Y1e3rhx47jhhhtwOKr+JnY6nUyfPp3p06czevRonn/+eRYsWBCGrWobwnpEICKXiMguEdkjIvfUU2+OiBgR6VgtMEqpJqmvG+pBgwbhdDp56KGHgkcDgX/6aWlplJSU1HqVUF0GDBjAI488wm233VZl+q5du9i9+1Sfmp9++ikDBgw4nc1
q88J2RCAiTuBJrK6rc4CPRWS5MWZHtXpJwA+Aj8IVi1KqfaivG2qwfsX/+Mc/Zt++fQCkpKRw8803M2rUKHr27FmjfkO++93v1phWUlLCHXfcQUFBAS6Xi8GDB7NkyZJgeWgbQVpaWp2PumxPGuyGutkLFvkfYLEx5mJ7/F4AY8wvqtX7PfAu8GPgf40x9fYxrd1QKxU+2g11x9Di3VCfhj5Adsh4jj0tNLCxQD9jzJv1LUhEFopIpohk5ubmtnykSikVxSJ21ZCIOID/B/yoobrGmCXGmPHGmPHdunULf3BKKRVFwpkIDlL1Ifd97WkBScAoYI2IZAGTgeXaYKyUUq0rnIngY2CIiAwUkRisR10uDxQaYwqNMWnGmHRjTDqwEZjVUBuBUiq8wtVuqFpHc/Zf2BKBMaYSuB34L/AF8HdjzHYReVBEZtU/t1IqEuLi4sjLy9Nk0E4ZY8jLyyMuLq5J84XtqqFw0auGlAofr9dLTk5Oo2/KUm1PXFwcffv2xe12V5le31VDemexUirI7XYzcODASIehWpn2NaSUUlFOE4FSSkU5TQRKKRXlNBEopVSU00SglFJRThOBUkpFOU0ESikV5TQRKKVUlNNEoJRSUU4TgVJKRTlNBEopFeU0ESilVJTTRKCUUlFOE4FSSkU5TQRKKRXlNBEopVSU00SglFJRThOBUkpFOU0ESikV5TQRKKVUlNNEoJRSUU4TgVJKRTlNBEopFeU0ESilVJSLmkRgjIl0CEop1SZFTSI4+OY/2HTVTIpXrcb4/ZEORyml2oyoSQQbD62n/GAOObfdxr4rrqDgH//E7/FEOiyllIq4qEkEs2/+DasfncvjlzvIq8jn8H33sWfGDHL/+Ecq8/IiHZ5SSkVM1CQChzj42dTFpF5xJTd9s4Dti+YQN3w4xx//A3vOv4CDP/kJpZs3a1uCUirqRE0iACsZPHjOg3xt0Nd5wPcG/7plFAPfXEHK1VdTsmo1++dfx75Zs8h75lm8R49FOlyllGoV0t5+AY8fP95kZmae1jJ8fh8PbXyI13e/zpwhc/jZ5J8h5RUUvfUW+a+9Rvlnn4PDQafJk0m+7FISZ8zA1aVLC22BUkq1PhHZbIwZX2tZNCYCsC4n/cMnf+DprU9zYf8L+cXUXxDnigOgYt8+Cpcvp+jfK/Dm5IDTScLECSSdfwGJ500lJj39tNevlFKtKWKJQEQuAR4DnMBfjDG/rFb+Q+A7QCWQC9xkjNlf3zJbKhEEvLjjRX798a8ZljqMx85/jF6JvYJlxhjKd+yg+J13KX73XTx79wLgHtCfTuecQ6dJk0mYNFGPFpRSbV5EEoGIOIEvgZlADvAxcK0xZkdInfOBj4wxpSJyKzDdGDOvvuW2dCIAeD/7fe754B7cDje/nf5bJvScUGs9T3Y2JWvXUrJ2LWUfZ+IvLQUgZvAgEs4+m/izxxKfcRYx6emI09miMSql1OmIVCL4H2CxMeZie/xeAGPML+qofzbwhDHm3PqWG45EALCvcB8/WP0DDhQd4LsZ3+Xm0TfjcrjqrG+8Xsq2baP0o02UfrKFsk8/w19YCIAjIYG4ESOIGzmC2DOHETfsTGIGD8YRE9PicSulVGNEKhFcDVxijPmOPX49MMkYc3sd9Z8AjhhjHq6lbCGwEKB///7j9u+v9+xRs5V4Snho40O8te8tzko7i0emPEJ65/RGzWv8fjx791K2dRvl27ZRtm0rFbu+xJSXWxUcDmL69ydm8CBiBw0mZmA6sQMHEjNwIM7k5LBsj1JKBbT5RCAi1wG3A9OMMRX1LTdcRwSh3t73Ng9tfAiv38stGbdw/fDrcTvdTV6O8fnw7D9Axa6dVOzeTcWer6jYswfPgQNQWRms50xJwT2gPzH9+uPu24eYvn1x9+2Lu08f3D16IHokoZQ6TW361JCIXAj8ASsJNHjxfmskAoCjJ4/y8EcPsyZ7DenJ6dw76V7O6X1OiyzbeL14snPwZO3Dsy8LT/YBvAcO4Nl/AO+RI+Dznaosgqt7d9y9euHu3QtXz164e/bA1aMn7h7dcfXogSstDXE3PVEppaJHpBKBC6uxeAZwEKux+JvGmO0hdc4GlmEdOexuzHJbKxEErM1Zyy83/ZLs4mym9JnCHWffwYiuI8K2PlNZiffIEbw5OXgPHsJ78CDeQ4esaYcPUXn4CKZ6H0kiOLt2xdWtG67u3az3bt1wpaWdeu/aFWfXNBydEhCRsMWvlGqbInn56GXA77EuH33WGPOIiDwIZBpjlovISmA0cNie5YAxZlZ9y2ztRABQ4avgxR0v8uy2ZynyFHFh/wtZeNZChncd3qpxgHVJq6+ggMqjR/EePkzlsVxr+NhRKnNzgy9f3gmopZdViYuzk0JXXKmpOLum4krtijM1FVdqF5ypqTi7pOLqkoKzSxckPl4Th1IdgN5Q1kKKPcW8uONFnt/xPCe9J5nYcyI3jLyBKX2m4JC21VuH8fnw5edbieF4HpXHj+PLO24Nn8jDdzyPyvx8fHnWO15vrcuR2FicXbpYr5TOOFNScHXpgjMlBWdnazww7EjubNVJTkZcdV9xpZRqfZoIWliRp4jXv3ydF794kWOlx+iX1I85Q+ZwxeArSItPi2hszWGMwV9cjO/ECSpP5OPLP2EddZw4gS+/AF9BAb78fOsVGC4qgnq+O45Onazk0NlKDM7kZBzJSTiTO+PsnIwjKenU9KQknElJOJKTcSYmIgl6+kqplqaJIEy8Pi/v7H+H1758jc1HN+MSF1P6TuFrZ3yNaX2nEe+Kj3SIYWP8fvxFRfgKC63kUFiIr7DIGi4qxFdYiD8wragIX1Eh/sIifMXFpy6prYvTiSMxEWdiopUk7HdHUiLOxCQciYn2q5NVlpiII6GT9d4pwUpCiYl6WkupEJoIWsHewr3848t/8Na+t8gtyyXBlcD0ftOZ0X8GU/pMIcGdEOkQ2wy/x2MdgRQVWcmkuAR/cRG+wiL8JcWnxotLrHolxfhLTuIvKcFfUoKvpKTK5bd1cjhwJCTYicJKEI6EhJrDCQlWArGHJT7eSiwJ8TjirZfEJwTH9bSXao80EbQin9/H5qObeXPfm6w6sIqCigJinbFM6jWJqX2mMqXPFPom9Y10mO2aMQZTUWElheJi/CdLrSRxsgT/yZP4T57EVxIYLrXeS0uDZf7S0lPjZWUYu6uQxpKYGCshJCTgiItD4uNwxIcMx8XjiI9D4uKrTJO4WKssLhaJi0NiY63y2LjgNEfsqTJxu/WIRrUYTQQRUumv5JNjn/Degfd4P/t9ckpyAEhPTmdSr0lM7DmRiT0nkhKXEtlAo5zx+/GXlmHKSk8libIy/KVl+EtPYsrKTo2XlVrjpWX4y8ut8cBweZk1XFFh1Skvx19eXmdDfIMcDitZxMScShyxMUhMrJ1EYoPDEhODxMbgCIzHxlgJK8Z6l5gYxB1j13XXLIuJsRJP9WG323o52tbFEKrpNBG0AcYY9hftZ93BdWw4vIHMI5mUVlq/RAenDGZcj3GM7T6WjO4Z9O7UW38JdiCmsh
J/eQWmvMx6ryjHX1ZuvZeXY+yEYSo89rSKU2UVHuvop6LasMcbMuwJlhmPB7/HGq5yY+LpcrmqJoaGXi6XPWy9Exh3BcpCpgemuVxWknK5rOlOu15gPFjPaU0LlDuddrld1+m01uV0nipzOq3pUfx3pYmgDfL6vWw/vp1NRzax5egWPjn2STAxpMWncVbaWYxKG8XItJGM7DqSzrGdIxyxam+Mz2cniQorOXi8GI+VLEJf/uCwF+P1Yrwhw4Eyb8h4ZWXV4UCdykpMpdeat9IL3sqQ6fY8IcPNPlI6HdWSQ63DDge4nIjTBU6HlZACicTptKY5nFYdx6lxcTnB4UScDuu9+ni19+BynA5rndXriONUHYcgTifxZ59N7BlnNGvT60sE2uoVIW6HmzHdxzCm+xjAOo20O383n+V+xme5n7H1+FZWZa8K1u+T2IfhqcMZljqMM1PPZGiXofTq1Cuqf+Go+onTicTHQ3w8bbFTdGMM+HxVE4XXC4HxSp+VUILj1qtKua+yZn2fL2TYH0xQ+PwYnw98lRif/9S8gWFvJcbvt8orfRi/Dyp99jzWu1XPiz9Q5vdbZTXG/cF5qoxXe6/vEuza9Fz8f81OBPXRI4I2rLCikB15O9iet52dJ3ay88RO9hed6nk10Z3I4JTBDO4ymMEpgzmj8xmc0fkMuid01wShVDtgjLF6AKieJPz+GuP4/dZNm4mdmrUuPTXUgZz0nmR3/m6+zP+SL/O/ZE/BHvYU7KGwojBYJ9GdSHpyOumd00lPTmdA5wH0T+rPgOQBdHI370uklGrfNBF0cMYYjpcdZ2/hXutVsJesoiyyirI4cvJIlbqpcan0S+oXfPVN6kufxD70SexD94Tuba6rDKVUy9BEEMVKvaVkF2dzoPgA+4v2k1Ocw4HiAxwoOsCx0mMYTu1/l8NF70696ZXYK/jeq5P16tmpJz079STWGRvBrVFKNZc2FkexBHcCZ6aeyZmpZ9Yo8/g8HCo5xMGSg1Veh08eZt3BdeSW5daYp0tsF3p06kH3hO70SKj63i2hG93ju9M5trO2USjVjmgiiGIxzhirHaGOx3F6fB6OnjzKkdIjHD55mCMnj3D05FGOlh7lyMkjbM3dSn5Ffo353A43afFpdEvoRlqc9d41vitp8Wl0jetK1/iuwfeO3B+TUu2FJgJVpxhnDP2S+9EvuV+ddTw+D8dKj5Fblmu9l+aSW5YbfD9QfIAtx7ZQUFFQ6/wJrgS6xnclNS41+OoS14UusV3oEtclOJ4al0pKbApxrrgwba1S0UsTgTotMc4Y+ib1bbD/JK/PS155nvUqs1/28InyE5woP0FOSQ5bj2+loLyASlN7p3JxzjhS4lLoEtuFzrGdSYlNCb4HhkOnJ8ckkxSThMuhX3Wl6qJ/HapVuJ3uYINzQ/zGT7GnmPzyfPIr8jlRfoLCikJOlJ8gvzyfgooCCisKya/I5/DJwxRUFFBUUVSl4bu6RHciyTHJdI7tTFJMEskxySTHJlvvdrIIvFd/xTnjtM1DdWiaCFSb4xBH8Jd9OumNmsfn91HsKabQUxhMFIUVhRR5ioLvRRVF1runiH2F+yjyFFHsKabcV//zEVziIikmicSYRBLdidawO5HEGGu4k7sTSe4kOsV0ItGdSCf3qffgcEwn3A53C3w6SrU8TQSqQ3A6nKTEpZASl8IABjRpXo/PE0wQxZ5iiiqKKPGWWMOeIko8JcHxEm8JJZ4Sskuyg9NPek/iNzWfD11djCOGxJhEElwJwSQR+op3xQeHE1wJJLjtlz3cydUpOB7vjifGEaNHKqpFaCJQUS/GGUNafFqzHzPqN37KK8uDieKk9+Spd4/1Hvoq8ZZQ6i3lZOVJ8srzyC7ODo6XekvrPcUVyinOYFJIcCUQ74on3hVPgvvUcLyralm8K544V1yV8eplca44PR0WZTQRKHWaHOII/nrvQY/TWlYgqZRWlnLSayWG4HBlKWXeMuu9ssxKHt6TlFWWWeP29ILyAg5VHgpOL6sso8JX0eRY4l3xxDnjqiSHeFc8sc7Y4LQq43bdWGfsqekh04LvzjhiXbHBaXpkE3maCJRqQ0KTSnOPUGrj8/so95VbicFOJoHx8spyyivLqySOQPIIHQ7UCRzJlFeWU+4rD87v8XuaHV+sM7ZGkghOc8UR44ypMi3WGUuMM4Y4Z0iZPV+MIyY4LfQ9dBmBYbfDrd2qoIlAqajgdDjp5LDaHwjTPXw+v89KGL5yKiorgkmi+rRAUqnwVdQYDox7fB7Kfdb7Se9JTpSfCE4Prev1n/4zDdwOd5XEEEgUVaY73cQ6TiWUQFkg6bid7uBwXdPcTjduh7vu6fYyndL6D9DRRKCUahFOh5MEh3U001r8xk+FrwKPzxNMHl6fN5hEQhNHYNjj8+DxV50efPmrDlf4KvD6vJR6S4NlXr83uKzAsM+03NPgBKmSbFwOV3D41oxbuXTgpS22rgBNBEqpdsshjmBDdyT5/L4aiSKQTLw+76lp9nigbqW/8tR0vzdYFhy2lxd4hetJhZoIlFLqNDkdTuIdkU9IzaWtJEopFeU0ESilVJTTRKCUUlFOE4FSSkU5TQRKKRXlNBEopVSU00SglFJRThOBUkpFOTGmcV3ethUikgvsb+bsacDxFgynvYjG7Y7GbYbo3O5o3GZo+nYPMMZ0q62g3SWC0yEimcaY8ZGOo7VF43ZH4zZDdG53NG4ztOx266khpZSKcpoIlFIqykVbIlgS6QAiJBq3Oxq3GaJzu6Nxm6EFtzuq2giUUkrVFG1HBEopparRRKCUUlEuahKBiFwiIrtEZI+I3BPpeMJBRPqJyGoR2SEi20XkB/b0VBF5V0R22+9dIh1rSxMRp4h8IiIr7PGBIvKRvb9fFZGYSMfY0kQkRUSWichOEflCRP4nSvb1Xfb3e5uIvCwicR1tf4vIsyJyTES2hUyrdd+K5XF72z8XkbFNXV9UJAIRcQJPApcCI4BrRWREZKMKi0rgR8aYEcBk4Hv2dt4DvGeMGQK8Z493ND8AvggZ/xXwO2PMYCAf+HZEogqvx4C3jTHDgAys7e/Q+1pE+gDfB8YbY0YBTuAbdLz9vRS4pNq0uvbtpcAQ+7UQeKqpK4uKRABMBPYYY/YaYzzAK8AVEY6pxRljDhtjttjDxVj/GPpgbevzdrXngSsjEmCYiEhf4GvAX+xxAS4AltlVOuI2dwbOA54BMMZ4jDEFdPB9bXMB8SLiAhKAw3Sw/W2MWQucqDa5rn17BfCCsWwEUkSkV1PWFy2JoA+QHTKeY0/rsEQkHTgb+AjoYYw5bBcdAXpEKq4w+T3wE8Bvj3cFCowxlfZ4R9zfA4Fc4Dn7lNhfRKQTHXxfG2MOAo8CB7ASQCGwmY6/v6HufXva/9+iJRFEFRFJBF4H7jTGFIWWGet64Q5zzbCIfB04ZozZHOlYWpkLGAs8ZYw5GzhJtdNAHW1fA9jnxa/ASoS9gU7UPIXS4bX0vo2WRHAQ6Bcy3tee1uGIiBsrCbxkjPmHPflo4FDRfj8WqfjC4FxglohkY
Z3yuwDr3HmKfeoAOub+zgFyjDEf2ePLsBJDR97XABcC+4wxucYYL/APrO9AR9/fUPe+Pe3/b9GSCD4GhthXFsRgNS4tj3BMLc4+N/4M8IUx5v+FFC0HbrCHbwDeaO3YwsUYc68xpq8xJh1rv64yxswHVgNX29U61DYDGGOOANkicqY9aQawgw68r20HgMkikmB/3wPb3aH3t62ufbsc+JZ99dBkoDDkFFLjGGOi4gVcBnwJfAX8NNLxhGkbp2AdLn4OfGq/LsM6Z/4esBtYCaRGOtYwbf90YIU9fAawCdgDvAbERjq+MGzvGCDT3t//ArpEw74GHgB2AtuAvwKxHW1/Ay9jtYF4sY7+vl3XvgUE66rIr4CtWFdUNWl92sWEUkpFuWg5NaSUUqoOmgiUUirKaSJQSqkop4lAKaWinCYCpZSKcpoIlKpGRHwi8mnIq8U6bhOR9NAeJZVqC1wNV1Eq6pQZY8ZEOgilWoseESjVSCKSJSK/FpGtIrJJRAbb09NFZJXdF/x7ItLfnt5DRP4pIp/Zr3PsRTlF5Gm7T/13RCQ+YhulFJoIlKpNfLVTQ/NCygqNMaOBJ7B6PQX4A/C8MeYs4CXgcXv648D7xpgMrH6AttvThwBPGmNGAgXAnLBujVIN0DuLlapGREqMMYm1TM8CLjDG7LU79ztijOkqIseBXsYYrz39sDEmTURygb7GmIqQZaQD7xrr4SKIyN2A2xjzcCtsmlK10iMCpZrG1DHcFBUhwz60rU5FmCYCpZpmXsj7Bnt4PVbPpwDzgQ/s4feAWyH4TOXOrRWkUk2hv0SUqileRD4NGX/bGBO4hLSLiHyO9av+WnvaHVhPCvsx1lPDbrSn/wBYIiLfxvrlfytWj5JKtSnaRqBUI9ltBOONMccjHYtSLUlPDSmlVJTTIwKllIpyekSglFJRThOBUkpFOU0ESikV5TQRKKVUlNNEoJRSUe7/A2EYRDeDEJhdAAAAAElFTkSuQmCC\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "model.plot(path=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this plot, *dtrain* and *dval* refer to the training and validation set and accuracy/MSE values are identical to those printed on the terminal. This is simply a graphical representation of the tabular log report.\n", "\n", "For code, maths and pictures behind the EpyNN model, follow this link:\n", "\n", "* [Neural Network - Model](https://epynn.net/EpyNN_Model.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Write, read & Predict" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A trained model can be written on disk such as:" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m\u001b[32mMake: /pylibs/EpyNN/epynnlive/dummy_boolean/models/1651394171_Perceptron_Dense-1-sigmoid.pickle\u001b[0m\n" ] } ], "source": [ "model.write()\n", "\n", "# model.write(path=/your/custom/path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A model can be read from disk such as:" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [], "source": [ "model = read_model()\n", "\n", "# model = read_model(path=/your/custom/path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can retrieve new features and predict on them." ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [], "source": [ "X_features, _ = prepare_dataset(N_SAMPLES=10)\n", "\n", "dset = model.predict(X_features)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Results can be extracted such as:" ] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 [0.] [0.30108818] [ True False False ... True True True]\n", "1 [0.] [0.14158315] [ True True True ... True False True]\n", "2 [0.] [0.41840971] [False True False ... True True True]\n", "3 [0.] [0.08498417] [False True True ... False True True]\n", "4 [1.] [0.8993867] [ True False True ... True False False]\n", "5 [0.] [0.38729994] [False True False ... True True True]\n", "6 [0.] [0.09610285] [False False True ... False True True]\n", "7 [0.] [0.17566746] [False False True ... False True False]\n", "8 [0.] [0.13229161] [ True False True ... True True True]\n", "9 [0.] [0.15331008] [ True True True ... True True True]\n" ] } ], "source": [ "for n, pred, probs, features in zip(dset.ids, dset.P, dset.A, dset.X):\n", " print(n, pred, probs, features)\n", " # pred = output (decision); probs = output (probability)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" } }, "nbformat": 4, "nbformat_minor": 4 }