{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Distinguish author-specific patterns in music" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "* Find this notebook at `EpyNN/epynnlive/author_music/train.ipynb`.\n", "* Regular python code at `EpyNN/epynnlive/author_music/train.py`.\n", "\n", "Run the notebook online with [Google Colab](https://colab.research.google.com/github/Synthaze/EpyNN/blob/main/epynnlive/author_music/train.ipynb).\n", "\n", "**Level: Advanced**" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this notebook we will review:\n", "\n", "* Handling univariate time series that represent a **huge amount of data points**.\n", "* Taking advantage of recurrent architectures (RNN, GRU) over Feed-Forward architectures.\n", "* Introducing recall and precision along with accuracy when dealing with unbalanced datasets.\n", "\n", "**It is assumed that all *basics* notebooks were already reviewed:**\n", "\n", "* [Basics with Perceptron (P)](../dummy_boolean/train.ipynb)\n", "* [Basics with string sequence](../dummy_string/train.ipynb)\n", "* [Basics with numerical time-series](../dummy_time/train.ipynb)\n", "* [Basics with images](../dummy_image/train.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**This notebook does not enhance, extend or replace EpyNN's documentation.**\n", "\n", "**Relevant documentation pages for the current notebook:**\n", "\n", "* [Fully Connected (Dense)](https://epynn.net/Dense.html)\n", "* [Recurrent Neural Network (RNN)](https://epynn.net/RNN.html)\n", "* [Gated Recurrent Unit (GRU)](https://epynn.net/GRU.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Environment and data" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Follow [this link](prepare_dataset.ipynb) for details about data preparation." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Briefly, raw data are instrumental guitar music from the *True* author and the *False* author. These are raw ``.wav`` files that were normalized, digitalized using a 4-bits encoder and clipped.\n", "\n", "Commonly, music ``.wav`` files have a sampling rate of 44100 Hz. This means that each second of music represents a numerical time series of length 44100." 
] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "# EpyNN/epynnlive/author_music/train.ipynb\n", "# Install dependencies\n", "!pip3 install --upgrade-strategy only-if-needed epynn\n", "\n", "# Standard library imports\n", "import random\n", "\n", "# Related third party imports\n", "import numpy as np\n", "\n", "# Local application/library specific imports\n", "import epynn.initialize\n", "from epynn.commons.maths import relu, softmax\n", "from epynn.commons.library import (\n", " configure_directory,\n", " read_model,\n", ")\n", "from epynn.network.models import EpyNN\n", "from epynn.embedding.models import Embedding\n", "from epynn.rnn.models import RNN\n", "from epynn.gru.models import GRU\n", "from epynn.flatten.models import Flatten\n", "from epynn.dropout.models import Dropout\n", "from epynn.dense.models import Dense\n", "from epynnlive.author_music.prepare_dataset import (\n", " prepare_dataset,\n", " download_music,\n", ")\n", "from epynnlive.author_music.settings import se_hPars\n", "\n", "\n", "########################## CONFIGURE ##########################\n", "random.seed(1)\n", "\n", "np.set_printoptions(threshold=10)\n", "\n", "np.seterr(all='warn')\n", "np.seterr(under='ignore')\n", "\n", "configure_directory()\n", "\n", "\n", "############################ DATASET ##########################\n", "download_music()\n", "\n", "X_features, Y_label = prepare_dataset(N_SAMPLES=256)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "256\n", "(10000,)\n", "[10 7 7 ... 9 9 9]\n", "1 15\n" ] } ], "source": [ "print(len(X_features))\n", "print(X_features[0].shape)\n", "print(X_features[0])\n", "print(np.min(X_features[0]), np.max(X_features[0]))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We clipped the original ``.wav`` files in 1 second clips and thus we could retrieve ``256`` samples. We did that because we do not have an infinite number of data. Since we want more training examples, we need to split the data.\n", "\n", "Below other problems are discussed:\n", "\n", "* **Arrays size in memory**: One second represents 44100 data points for each clip and thus ``44100 * 256 = 11.2896e6`` data points in total. More than ten million of these is more likely to overload your RAM or to raise a memory allocation error on most laptops. This is why we resampled the original ``.wav`` files content to 10000 Hz. When doing that, we lose the patterns associated with frequencies greater than 5000 Hz. Alternatively, we could have made clips of shorther duration but then we would miss patterns associated with lower frequencies. Because the guitar emission spectrum is essentially entirely below 5000 Hz, we preferred to apply the resampling method.\n", "* **Signal normalization**: Original signals were sequences of 16-bits integers ranging from ``-32768`` to ``32767``. Feeding a neural network which such big values will most likely result in floating point errors. This is why we normalized the original data from each ``.wav`` file within the range \\[0, 1\\].\n", "* **Signal digitalization**: While the original signal was a digital signal encoded over 16-bits integers, this results in ``3e-5`` difference between each digit after normalization within the range \\[0, 1\\]. 
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Feed-Forward (FF)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We start with our reference: a Feed-Forward network with dropout regularization." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Embedding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We already scaled the input data for each ``.wav`` file, so we do not need to pass the ``X_scale`` argument to the class constructor of the *embedding* layer. Note that ``X_scale=True`` would apply a global scaling over the whole training set, whereas here we work with independent ``.wav`` files which should be normalized separately.\n", "\n", "For the embedding, we will one-hot encode the time series. See [One-hot encoding of string features](https://epynn.net/epynnlive/dummy_string/prepare_dataset.html#One-hot-encoding-of-string-features) for details about the process. Note that while one-hot encoding is mandatory when dealing with string input data, it can also be applied to digitized numerical data, as is the case here." ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "embedding = Embedding(X_data=X_features,\n", " Y_data=Y_label,\n", " X_encode=True,\n", " Y_encode=True,\n", " batch_size=16,\n", " relative_size=(2, 1, 0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect the shape of the data." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(171, 10000, 16)\n", "{0: 71, 1: 100}\n" ] } ], "source": [ "print(embedding.dtrain.X.shape)\n", "print(embedding.dtrain.b)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We note that we have an unbalanced dataset, with close to 60% of negative samples." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Flatten-(Dense)n with Dropout" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's proceed with the network design and training." ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "tags": [] }, "outputs": [], "source": [ "name = 'Flatten_Dense-64-relu_Dropout05_Dense-2-softmax'\n", "\n", "se_hPars['learning_rate'] = 0.01\n", "se_hPars['softmax_temperature'] = 1\n", "\n", "layers = [\n", " embedding,\n", " Flatten(),\n", " Dense(64, relu),\n", " Dropout(0.5),\n", " Dense(2, softmax),\n", "]\n", "\n", "model = EpyNN(layers=layers, name=name)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We can initialize the model." ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m--- EpyNN Check OK! --- \u001b[0m\r" ] } ], "source": [ "model.initialize(loss='MSE', seed=1, metrics=['accuracy', 'recall', 'precision'], se_hPars=se_hPars.copy(), end='\\r')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Train it for 10 epochs."
] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m\u001b[37mEpoch 1 - Batch 0/9 - Accuracy: 0.5 Cost: 0.49991 - TIME: 1.97s RATE: 1.01e+00e/s TTC: 10s \u001b[0m\r" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/media/synthase/beta/EpyNN/epynn/commons/metrics.py:100: RuntimeWarning: invalid value encountered in long_scalars\n", " precision = (tp / (tp+fp))\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m\u001b[37mEpoch 9 - Batch 9/9 - Accuracy: 0.625 Cost: 0.37297 - TIME: 13.26s RATE: 7.54e-01e/s TTC: 3s \u001b[0m\n", "\n", "+-------+----------+----------+----------+-------+--------+-------+-----------+-------+--------+-------+------------------------------------------------------------+\n", "| \u001b[1m\u001b[37mepoch\u001b[0m | \u001b[1m\u001b[37mlrate\u001b[0m | \u001b[1m\u001b[37mlrate\u001b[0m | \u001b[1m\u001b[32maccuracy\u001b[0m | | \u001b[1m\u001b[31mrecall\u001b[0m | | \u001b[1m\u001b[35mprecision\u001b[0m | | \u001b[1m\u001b[36mMSE\u001b[0m | | \u001b[37mExperiment\u001b[0m |\n", "| | \u001b[37mDense\u001b[0m | \u001b[37mDense\u001b[0m | \u001b[1m\u001b[32mdtrain\u001b[0m | \u001b[1m\u001b[32mdval\u001b[0m | \u001b[1m\u001b[31mdtrain\u001b[0m | \u001b[1m\u001b[31mdval\u001b[0m | \u001b[1m\u001b[35mdtrain\u001b[0m | \u001b[1m\u001b[35mdval\u001b[0m | \u001b[1m\u001b[36mdtrain\u001b[0m | \u001b[1m\u001b[36mdval\u001b[0m | |\n", "+-------+----------+----------+----------+-------+--------+-------+-----------+-------+--------+-------+------------------------------------------------------------+\n", "| \u001b[1m\u001b[37m0\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.591\u001b[0m | \u001b[1m\u001b[32m0.541\u001b[0m | \u001b[1m\u001b[31m0.014\u001b[0m | \u001b[1m\u001b[31m0.000\u001b[0m | \u001b[1m\u001b[35m1.000\u001b[0m | \u001b[1m\u001b[35mnan\u001b[0m | \u001b[1m\u001b[36m0.409\u001b[0m | \u001b[1m\u001b[36m0.458\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m1\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.544\u001b[0m | \u001b[1m\u001b[32m0.588\u001b[0m | \u001b[1m\u001b[31m0.704\u001b[0m | \u001b[1m\u001b[31m0.615\u001b[0m | \u001b[1m\u001b[35m0.467\u001b[0m | \u001b[1m\u001b[35m0.545\u001b[0m | \u001b[1m\u001b[36m0.434\u001b[0m | \u001b[1m\u001b[36m0.381\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m2\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.643\u001b[0m | \u001b[1m\u001b[32m0.565\u001b[0m | \u001b[1m\u001b[31m0.268\u001b[0m | \u001b[1m\u001b[31m0.256\u001b[0m | \u001b[1m\u001b[35m0.679\u001b[0m | \u001b[1m\u001b[35m0.556\u001b[0m | \u001b[1m\u001b[36m0.339\u001b[0m | \u001b[1m\u001b[36m0.428\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m3\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.637\u001b[0m | \u001b[1m\u001b[32m0.541\u001b[0m | \u001b[1m\u001b[31m0.141\u001b[0m | \u001b[1m\u001b[31m0.128\u001b[0m | \u001b[1m\u001b[35m0.909\u001b[0m | \u001b[1m\u001b[35m0.500\u001b[0m | \u001b[1m\u001b[36m0.358\u001b[0m | \u001b[1m\u001b[36m0.447\u001b[0m | 
\u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m4\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.673\u001b[0m | \u001b[1m\u001b[32m0.600\u001b[0m | \u001b[1m\u001b[31m0.225\u001b[0m | \u001b[1m\u001b[31m0.154\u001b[0m | \u001b[1m\u001b[35m0.941\u001b[0m | \u001b[1m\u001b[35m0.857\u001b[0m | \u001b[1m\u001b[36m0.317\u001b[0m | \u001b[1m\u001b[36m0.390\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m5\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.690\u001b[0m | \u001b[1m\u001b[32m0.624\u001b[0m | \u001b[1m\u001b[31m0.408\u001b[0m | \u001b[1m\u001b[31m0.359\u001b[0m | \u001b[1m\u001b[35m0.725\u001b[0m | \u001b[1m\u001b[35m0.667\u001b[0m | \u001b[1m\u001b[36m0.303\u001b[0m | \u001b[1m\u001b[36m0.354\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m6\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.585\u001b[0m | \u001b[1m\u001b[32m0.459\u001b[0m | \u001b[1m\u001b[31m0.915\u001b[0m | \u001b[1m\u001b[31m0.795\u001b[0m | \u001b[1m\u001b[35m0.500\u001b[0m | \u001b[1m\u001b[35m0.449\u001b[0m | \u001b[1m\u001b[36m0.400\u001b[0m | \u001b[1m\u001b[36m0.495\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m7\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.550\u001b[0m | \u001b[1m\u001b[32m0.494\u001b[0m | \u001b[1m\u001b[31m0.986\u001b[0m | \u001b[1m\u001b[31m0.974\u001b[0m | \u001b[1m\u001b[35m0.479\u001b[0m | \u001b[1m\u001b[35m0.475\u001b[0m | \u001b[1m\u001b[36m0.445\u001b[0m | \u001b[1m\u001b[36m0.493\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m8\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.702\u001b[0m | \u001b[1m\u001b[32m0.635\u001b[0m | \u001b[1m\u001b[31m0.761\u001b[0m | \u001b[1m\u001b[31m0.615\u001b[0m | \u001b[1m\u001b[35m0.614\u001b[0m | \u001b[1m\u001b[35m0.600\u001b[0m | \u001b[1m\u001b[36m0.289\u001b[0m | \u001b[1m\u001b[36m0.366\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m9\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.772\u001b[0m | \u001b[1m\u001b[32m0.694\u001b[0m | \u001b[1m\u001b[31m0.549\u001b[0m | \u001b[1m\u001b[31m0.462\u001b[0m | \u001b[1m\u001b[35m0.848\u001b[0m | \u001b[1m\u001b[35m0.783\u001b[0m | \u001b[1m\u001b[36m0.219\u001b[0m | \u001b[1m\u001b[36m0.280\u001b[0m | \u001b[37m1635107656_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "+-------+----------+----------+----------+-------+--------+-------+-----------+-------+--------+-------+------------------------------------------------------------+\n" ] } ], "source": [ "model.train(epochs=10, init_logs=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The model does not reproduce the training data very well, and it is hardly representative of the validation data at all.\n", "\n", "We can still comment on the **recall** and **precision** metrics:\n", "\n", "* **Recall**: This represents *the fraction of positive instances retrieved by the model*.\n", "* **Precision**: This represents *the fraction of positive instances within the labels predicted as positive*.\n", "\n", "Said differently:\n", "\n", "* Given **tp** the *true positive* samples.\n", "* Given **tn** the *true negative* samples.\n", "* Given **fp** the *false positive* samples.\n", "* Given **fn** the *false negative* samples.\n", "* Then **recall** = ``tp / (tp+fn)`` and **precision** = ``tp / (tp+fp)``, as sketched below.\n", "\n", "For code, maths and pictures behind the *Dense* layer, follow this link:\n", "\n", "* [Fully Connected (Dense)](https://epynn.net/Dense.html)" ] }
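, { "cell_type": "markdown", "metadata": {}, "source": [ "These definitions translate directly into code. Below is a minimal sketch on toy binary labels; it is not EpyNN's internal implementation." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch of recall and precision on toy binary labels\n", "import numpy as np\n", "\n", "y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1])   # ground truth\n", "y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 1])   # model predictions\n", "\n", "tp = np.sum((y_pred == 1) & (y_true == 1))    # true positives\n", "fp = np.sum((y_pred == 1) & (y_true == 0))    # false positives\n", "fn = np.sum((y_pred == 0) & (y_true == 1))    # false negatives\n", "\n", "recall = tp / (tp + fn)      # fraction of positive instances retrieved\n", "precision = tp / (tp + fp)   # fraction of predicted positives that are correct\n", "\n", "print(recall, precision)     # 0.75 0.75" ] }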
, { "cell_type": "markdown", "metadata": {}, "source": [ "## Recurrent Architectures" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Recurrent architectures can make a difference here because they process time series one measurement at a time.\n", "\n", "Importantly, the number of time steps **does not define the size of the parameter (weight/bias) arrays**, while in the Feed-Forward network it does.\n", "\n", "For a *dense* layer, the shape of W is ``(n, u)``, given ``n`` the number of nodes in the previous layer and ``u`` the number of units in the current layer. So when a *dense* layer follows the *embedding* layer, the number of nodes in the *embedding* layer is equal to the number of features, here the number of time steps ``10000``.\n", "\n", "By contrast, the shape of the *RNN* layer parameters depends on the number of units and on the uni/multivariate nature of each measurement, but not on the number of time steps. In the Feed-Forward situation above, there are likely too many parameters and the computation does not converge.\n", "\n", "Of note, since recurrent layer parameters are not defined with respect to sequence length, such layers can handle data of variable length." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Embedding" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For the embedding, we will one-hot encode the time series. See [One-hot encoding of string features](https://epynn.net/epynnlive/dummy_string/prepare_dataset.html#One-hot-encoding-of-string-features) for details about the process, which follows the same logic and requirements regardless of the data type." ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "embedding = Embedding(X_data=X_features,\n", " Y_data=Y_label,\n", " X_encode=True,\n", " Y_encode=True,\n", " batch_size=16,\n", " relative_size=(2, 1, 0))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's inspect the data shape." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "(171, 10000, 16)\n" ] } ], "source": [ "print(embedding.dtrain.X.shape)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### RNN(sequences=True)-Flatten-(Dense)n with Dropout" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Time to clarify a point:\n", "\n", "* We have a multivariate-like time series (a one-hot encoded univariate series) with 10000 time steps.\n", "* The 10000, or **length of the sequence, is unrelated to the number of units in the RNN layer**. The number of units may be anything; the whole sequence will be processed in its entirety.\n", "* In recurrent layers, parameter shapes relate to the number of units and the vocabulary size, not to the length of the sequence. That's why such architectures can handle input sequences of variable length (see the sketch below).\n" ] }
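, { "cell_type": "markdown", "metadata": {}, "source": [ "To make the parameter-count argument concrete, here is a minimal sketch comparing array shapes with plain NumPy. These are illustrative shapes under the conventions above, not EpyNN internals." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Minimal sketch - parameter counts for Dense on flattened input vs. a recurrent cell\n", "import numpy as np\n", "\n", "steps, vocab, units = 10000, 16, 64\n", "\n", "# Dense on the flattened sequence: weights scale with sequence length\n", "W_dense = np.zeros((steps * vocab, units))\n", "print(W_dense.size)        # 10240000 parameters\n", "\n", "# Recurrent cell with u units: weights scale with vocabulary size and units only\n", "u = 1\n", "Wx = np.zeros((vocab, u))  # input-to-hidden weights\n", "Wh = np.zeros((u, u))      # hidden-to-hidden weights\n", "print(Wx.size + Wh.size)   # 17 parameters, whatever the number of time steps" ] }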
, { "cell_type": "code", "execution_count": 10, "metadata": { "tags": [] }, "outputs": [], "source": [ "name = 'RNN-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax'\n", "\n", "se_hPars['learning_rate'] = 0.01\n", "se_hPars['softmax_temperature'] = 1\n", "\n", "layers = [\n", " embedding,\n", " RNN(1, sequences=True),\n", " Flatten(),\n", " Dense(64, relu),\n", " Dropout(0.5),\n", " Dense(2, softmax),\n", "]\n", "\n", "model = EpyNN(layers=layers, name=name)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "We initialize the model." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m--- EpyNN Check OK! --- \u001b[0m\r" ] } ], "source": [ "model.initialize(loss='MSE', seed=1, metrics=['accuracy', 'recall', 'precision'], se_hPars=se_hPars.copy(), end='\\r')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We will only train for 3 epochs." ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m\u001b[37mEpoch 2 - Batch 9/9 - Accuracy: 1.0 Cost: 0.01575 - TIME: 20.68s RATE: 1.45e-01e/s TTC: 14s \u001b[0m\n", "\n", "+-------+----------+----------+----------+----------+-------+--------+-------+-----------+-------+--------+-------+----------------------------------------------------------------------+\n", "| \u001b[1m\u001b[37mepoch\u001b[0m | \u001b[1m\u001b[37mlrate\u001b[0m | \u001b[1m\u001b[37mlrate\u001b[0m | \u001b[1m\u001b[37mlrate\u001b[0m | \u001b[1m\u001b[32maccuracy\u001b[0m | | \u001b[1m\u001b[31mrecall\u001b[0m | | \u001b[1m\u001b[35mprecision\u001b[0m | | \u001b[1m\u001b[36mMSE\u001b[0m | | \u001b[37mExperiment\u001b[0m |\n", "| | \u001b[37mRNN\u001b[0m | \u001b[37mDense\u001b[0m | \u001b[37mDense\u001b[0m | \u001b[1m\u001b[32mdtrain\u001b[0m | \u001b[1m\u001b[32mdval\u001b[0m | \u001b[1m\u001b[31mdtrain\u001b[0m | \u001b[1m\u001b[31mdval\u001b[0m | \u001b[1m\u001b[35mdtrain\u001b[0m | \u001b[1m\u001b[35mdval\u001b[0m | \u001b[1m\u001b[36mdtrain\u001b[0m | \u001b[1m\u001b[36mdval\u001b[0m | |\n", "+-------+----------+----------+----------+----------+-------+--------+-------+-----------+-------+--------+-------+----------------------------------------------------------------------+\n", "| \u001b[1m\u001b[37m0\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.906\u001b[0m | \u001b[1m\u001b[32m0.753\u001b[0m | \u001b[1m\u001b[31m0.831\u001b[0m | \u001b[1m\u001b[31m0.744\u001b[0m | \u001b[1m\u001b[35m0.937\u001b[0m | \u001b[1m\u001b[35m0.725\u001b[0m | \u001b[1m\u001b[36m0.079\u001b[0m | \u001b[1m\u001b[36m0.167\u001b[0m | \u001b[37m1635107675_RNN-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m1\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.953\u001b[0m | \u001b[1m\u001b[32m0.729\u001b[0m | \u001b[1m\u001b[31m0.944\u001b[0m | \u001b[1m\u001b[31m0.641\u001b[0m | \u001b[1m\u001b[35m0.944\u001b[0m | \u001b[1m\u001b[35m0.735\u001b[0m | \u001b[1m\u001b[36m0.037\u001b[0m | \u001b[1m\u001b[36m0.168\u001b[0m | \u001b[37m1635107675_RNN-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "| \u001b[1m\u001b[37m2\u001b[0m | 
\u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[37m1.00e-02\u001b[0m | \u001b[1m\u001b[32m0.971\u001b[0m | \u001b[1m\u001b[32m0.741\u001b[0m | \u001b[1m\u001b[31m0.972\u001b[0m | \u001b[1m\u001b[31m0.744\u001b[0m | \u001b[1m\u001b[35m0.958\u001b[0m | \u001b[1m\u001b[35m0.707\u001b[0m | \u001b[1m\u001b[36m0.020\u001b[0m | \u001b[1m\u001b[36m0.189\u001b[0m | \u001b[37m1635107675_RNN-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax\u001b[0m |\n", "+-------+----------+----------+----------+----------+-------+--------+-------+-----------+-------+--------+-------+----------------------------------------------------------------------+\n" ] } ], "source": [ "model.train(epochs=3, init_logs=False)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "While we still observe overfitting, it is reduced compared to the Feed-Forward network, and the accuracy on the validation set is higher. So far, this model seems better suited to the problem.\n", "\n", "For code, maths and pictures behind the *RNN* layer, follow this link:\n", "\n", "* [Recurrent Neural Network (RNN)](https://epynn.net/RNN.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### GRU(sequences=True)-Flatten-(Dense)n with Dropout" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's now try a more sophisticated recurrent architecture." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\u001b[1m\u001b[37mEpoch 0 - Batch 2/9 - Accuracy: 0.438 Cost: 0.24173 - TIME: 9.35s RATE: 1.07e-01e/s TTC: 37s \u001b[0m\r" ] } ], "source": [ "name = 'GRU-1-Seq_Flatten_Dense-64-relu_Dropout05_Dense-2-softmax'\n", "\n", "se_hPars['learning_rate'] = 0.01\n", "se_hPars['softmax_temperature'] = 1\n", "\n", "layers = [\n", " embedding,\n", " GRU(1, sequences=True),\n", " Flatten(),\n", " Dense(64, relu),\n", " Dropout(0.5),\n", " Dense(2, softmax),\n", "]\n", "\n", "model = EpyNN(layers=layers, name=name)\n", "\n", "model.initialize(loss='MSE', seed=1, metrics=['accuracy', 'recall', 'precision'], se_hPars=se_hPars.copy(), end='\\r')\n", "\n", "model.train(epochs=3, init_logs=False)" ] }, { "cell_type": "markdown", "metadata": { "tags": [] }, "source": [ "In the context of this example, the GRU-based network performs poorly compared to both the Feed-Forward and RNN-based networks. While we could attempt to optimize using more samples, a different batch size, a decaying learning rate and many other things, this teaches us that a more complex architecture is not necessarily the most appropriate one. 
The choice always depends on the context, and most importantly on the computational and human time required relative to the improvements one can reasonably anticipate.\n", "\n", "For code, maths and pictures behind the *GRU* layer, follow this link:\n", "\n", "* [Gated Recurrent Unit (GRU)](https://epynn.net/GRU.html)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Write, read & Predict" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A trained model can be written to disk as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model.write()\n", "\n", "# model.write(path=/your/custom/path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "A model can be read back from disk as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "model = read_model()\n", "\n", "# model = read_model(path=/your/custom/path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can retrieve new features and predict on them." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "X_features, _ = prepare_dataset(N_SAMPLES=10)\n", "\n", "dset = model.predict(X_features, X_encode=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Results can be extracted as follows:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "for n, pred, probs in zip(dset.ids, dset.P, dset.A):\n", " print(n, pred, probs)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Note that we wrote to disk the last model we trained, which was the poorest in terms of performance. Predictions achieved here are therefore not expected to be reliable. The RNN-based network should be saved and used instead, as it would provide more accurate results." ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.2" } }, "nbformat": 4, "nbformat_minor": 4 }