Example 1: A simulated time series#

Introduction#

Time-series data introduce a hard dependency on previous time steps, so the usual assumption of independent observations does not hold. What are some of the properties a time series can have?

Seasonality and autocorrelation are some of the time-series properties you may be interested in.

A time series is said to be stationary when its mean and variance remain constant over time.

A time series has a trend if its mean changes over time. You can often remove the trend and make the series stationary by applying a suitable transformation, such as taking logarithms and/or differencing the data.

Seasonality refers to variations that recur at specific intervals, for example people buying more Christmas trees at Christmas (who would have thought). A common approach to removing seasonality is differencing.

Autocorrelation refers to the correlation of the current value with a copy of the series from an earlier time (a lag).
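As a quick illustration (a sketch with synthetic data, not taken from this notebook's code), differencing and the lag-1 autocorrelation can be computed with pandas:

import numpy as np
import pandas as pd

# toy series: linear trend plus Gaussian noise
rng = np.random.default_rng(0)
t = np.arange(200)
y = pd.Series(0.1 * t + rng.normal(scale=0.5, size=t.size))

diff = y.diff().dropna()        # first differences remove the linear trend
print(y.autocorr(lag=1))        # close to 1: strong dependence on the previous step
print(diff.autocorr(lag=1))     # much weaker after differencing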

We will use neural networks to model these series.

Time series and deep learning#

A univariate time series is a sequence of values over time, say

\[ x_1, x_2, x_3, \ldots x_T. \]

For example, consider the series given by the following values:

\[\{3.4, 5.2, 4.6, 6.2, 5.5, 4.0, 7.2, 8.1, 6.9, 9.2, 9.5, 9.8, 8.9, 9.6, 9.7, 9.9 \} \]

The problem of forecasting from a time series can be cast as a deep learning problem if the data are first organized as a regression or classification problem. The following image illustrates the most common procedure.

Several interesting things can be seen in the image.

  1. The series data have been organized as in a regression problem. The design matrix \(X\) is made up of rows with 8 columns. The number of columns corresponds to the lag (the number of steps back in the series used to predict the next value).

  2. The target variable \(Y\) is the variable we want to predict.

Note that the first row takes the first 8 values of the series, and the value to predict is the ninth observation. For the second row we have shifted one position to the right along the series.

Arranged this way, each row of the matrix \(X\) is independent of the others and constitutes a 1-dimensional tensor at the input of a neural network. The target variable is the tensor \(Y\). The complete input to the network is a 2-dimensional tensor.

Seen this way, we have a deep learning problem, so, for example, a perceptron could be used to build a predictive model.
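A minimal sketch of this windowing, using the example series above with a lag of 8:

import numpy as np

serie = np.array([3.4, 5.2, 4.6, 6.2, 5.5, 4.0, 7.2, 8.1,
                  6.9, 9.2, 9.5, 9.8, 8.9, 9.6, 9.7, 9.9])
lag = 8
X = np.array([serie[i:i + lag] for i in range(len(serie) - lag)])
Y = serie[lag:]
print(X.shape, Y.shape)   # (8, 8) (8,): each row holds 8 lagged values
print(X[0], Y[0])         # first 8 values -> target is the 9th observation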

Forecasting more than one step ahead#

If what you want is to make predictions several steps ahead rather than just the next step, the only change is in the target variable. The image shows the case of predictions three steps ahead. Note that in this case the target variable is taken three steps ahead of the last element of each row of the matrix \(X\).

Also note that the last two rows of the matrix can no longer be used, because we have no data beyond the last value of the series.
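The same sketch, adapted to a horizon of three steps:

import numpy as np

serie = np.array([3.4, 5.2, 4.6, 6.2, 5.5, 4.0, 7.2, 8.1,
                  6.9, 9.2, 9.5, 9.8, 8.9, 9.6, 9.7, 9.9])
lag, horizon = 8, 3
# the target is taken three steps ahead of the last element of each window
X3 = np.array([serie[i:i + lag] for i in range(len(serie) - lag - horizon + 1)])
Y3 = serie[lag + horizon - 1:]
print(X3.shape, Y3.shape)   # (6, 8) (6,): two fewer usable rows than one step ahead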

Multivariate series#

In some situations we have more than one series. The following image shows the new situation.

As can be seen, the input data tensors now have one more dimension, which, in the language of convolutional models (which we will study later), we will call channels. The example shows how data from three time series are prepared to be treated as a deep learning problem.

The image clearly illustrates the similarity with image data, where there are several color channels. This way of organizing data for deep learning led the designers of neural layers for these problems, both in computer vision and in recurrent networks, to require that input tensors always have 3 dimensions: length, width, and channels.

In the case of series, one will always have

  1. the batch size, for example 32,

  2. the lag length, for example 8,

  3. the number of channels, for example 3.

In addition, the target variable has at least the batch-size dimension. This tensor may have a second dimension, possibly corresponding to the number of channels, but this is not required. For example, the target could be the average of the predicted values across channels, as in the sketch below.
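A sketch of this organization with three synthetic series stacked as channels (the averaged target is just one of the possibilities mentioned above):

import numpy as np

rng = np.random.default_rng(0)
data = np.stack(rng.normal(size=(3, 100)), axis=-1)   # (time, channels) = (100, 3)

lag = 8
X = np.array([data[i:i + lag] for i in range(len(data) - lag)])
y = data[lag:].mean(axis=1)     # target: channel average at the next step
print(X.shape, y.shape)         # (92, 8, 3) (92,): (samples, lag, channels)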

Multi-head multivariate models#

Once time series are presented as a deep learning problem, it is possible to consider different network architectures. The following image presents an alternative in which each series enters the network along a different path, follows its own path for a while, and the resulting representations are then concatenated to produce the final prediction.
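A sketch of such a multi-head architecture in Keras (the layer sizes and the use of one LSTM per head are assumptions for illustration; the image may differ in detail):

from tensorflow.keras.layers import Input, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

lag = 8
inputs, heads = [], []
for i in range(3):                          # one head per series (channel)
    inp = Input(shape=(lag, 1), name=f'serie_{i}')
    inputs.append(inp)
    heads.append(LSTM(16, name=f'lstm_{i}')(inp))

x = Concatenate()(heads)                    # merge the per-series representations
outputs = Dense(1)(x)                       # joint final prediction
multi_head_model = Model(inputs=inputs, outputs=outputs, name='multi_head_model')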

Recurrent model#

The following image shows the general structure of a recurrent layer. LSTM is a particular case, and the diagram illustrates the processing scheme well.
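To make the recurrence explicit, here is a small sketch that processes a sequence step by step with tf.keras.layers.LSTMCell (the layer versions used later run this loop internally):

import tensorflow as tf

cell = tf.keras.layers.LSTMCell(units=4)
x_seq = tf.random.normal((1, 10, 1))               # (batch, time, features)
state = cell.get_initial_state(batch_size=1, dtype=tf.float32)
for t in range(10):
    out, state = cell(x_seq[:, t, :], state)       # one time step at a time
print(out.shape)                                   # (1, 4): the last hidden state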

We will begin with a simple example: forecasting the values of the sine function using a simple LSTM network. This code can be used as a basis for implementing more complex models. For a theoretical introduction to LSTM networks, see the notebook Redes LSTM (Long Short Term Memory Networks).

Import required modules#

import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import rcParams

from tensorflow.keras.layers import Input, LSTM, Dense, GRU
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import plot_model

from sklearn.preprocessing import MinMaxScaler

print("TensorFlow version: ", tf.__version__)
TensorFlow version:  2.9.1

Basic general settings#

%matplotlib inline
%config InlineBackend.figure_format='retina'
rcParams['figure.figsize'] = 16, 10

Synthetic data#

We will generate data that follow a sinusoidal pattern with an upward trend. Gaussian noise is added.

Modify the following two lines and run your own simulation.

RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
trend = .1

# 1,000 points on [0, 100): linear trend + sine + Gaussian noise
time = np.arange(0, 100, 0.1)
sin = trend*time + np.sin(time) + np.random.normal(scale=0.5, size=len(time))

First plot: the simulated data#

plt.plot(time, sin, label='Sine function with upward trend and Gaussian noise');
plt.title("Simulated time series: sine function plus trend", size = 20)
plt.legend();
plt.show()
../../_images/rnr_Times_series_Intro-lstm_29_0.png
print('Number of data points: ', sin.shape[0])
Number of data points:  1000

Data preprocessing#

Scale the data#

We create an instance of the MinMaxScaler class to bring the data to the \([0,1]\) scale.

# build a DataFrame from the simulated series
df1 = pd.DataFrame(sin, index=time, columns=['serie'])
# create the scaler object and scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
scaled_data = scaler.fit_transform(df1.values)

dataset = pd.DataFrame(scaled_data, index=df1.index, columns=['serie'])

dataset.shape
(1000, 1)

Plot of the scaled data#

plt.plot(time, dataset, label='Data scaled to the [0,1] range');
plt.title("Simulated time series: sine function plus trend", size = 20)
plt.legend();
plt.show()
../../_images/rnr_Times_series_Intro-lstm_37_0.png

Splitting the data into training and validation sets#

In the case of time series, the training data are taken from the beginning of the series and the validation data from the end, working backwards.

Let's see.

train_size = int(len(dataset) * 0.8)
test_size = len(dataset) - train_size
train, test = dataset.iloc[0:train_size], dataset.iloc[train_size:len(df1)]
len_train = len(train)
len_test = len(test)
print(len_train, len_test)
800 200
train.shape
(800, 1)

Plot showing the training and validation data#

plt.plot(train, label='Training set: ' + str(len_train) + ' points (80%)')
plt.plot(test, label='Validation set: ' + str(len_test) + ' points (20%)')
plt.title("Simulated time series: sine function plus trend", size = 20)
plt.legend()
plt.show()
../../_images/rnr_Times_series_Intro-lstm_43_0.png

Preparing the data for time-series prediction (LSTM in particular) can be tricky.

Intuitively, we need to predict the value at the current time step using the history (\(n\) time steps back from it).

Here is a generic function that does the job:

def create_dataset(X, y, time_steps=1):
    # two empty lists to collect the windows and targets
    Xs, ys = [], []
    # the first window starts at the first observation and takes
    # time_steps values; the loop then slides forward one step at a time
    for i in range(len(X) - time_steps):
        v = X.iloc[i:(i + time_steps)].values
        Xs.append(v)
        ys.append(y.iloc[i + time_steps])
    return np.array(Xs), np.array(ys)

The beauty of this function is that it works with both univariate (single-feature) and multivariate (multi-feature) time-series data; a sketch of the multivariate case follows the example below. Let's use a history of 50 time steps to build our sequences. This means we keep 50 steps of history to predict the current value.

time_steps = 50

# reshape to [samples, time_steps, n_features]

X_train, y_train = create_dataset(train, train, time_steps)
X_test, y_test = create_dataset(test, test, time_steps)

print(X_train.shape, y_train.shape)

print([X_train[0:2,], y_train[0:2]])
(750, 50, 1) (750, 1)
[array([[[0.12866229],
        [0.1121937 ],
        [0.15199099],
        [0.19517542],
        [0.13373672],
        [0.14166865],
        [0.2211258 ],
        [0.19603986],
        [0.15358548],
        [0.19974127],
        [0.16525499],
        [0.16990083],
        [0.20199465],
        [0.11982438],
        [0.12982285],
        [0.17767362],
        [0.16076588],
        [0.21355511],
        [0.16446774],
        [0.14307962],
        [0.25505916],
        [0.1851261 ],
        [0.19321062],
        [0.12984983],
        [0.15998183],
        [0.18065415],
        [0.12482759],
        [0.17917075],
        [0.13392128],
        [0.13937234],
        [0.12008995],
        [0.21029807],
        [0.12917938],
        [0.0806848 ],
        [0.14828083],
        [0.0604929 ],
        [0.11070353],
        [0.01858177],
        [0.03791053],
        [0.09315708],
        [0.10995165],
        [0.08338189],
        [0.06856677],
        [0.05846956],
        [0.00976347],
        [0.03858453],
        [0.04837435],
        [0.10885778],
        [0.08165597],
        [0.        ]],

       [[0.1121937 ],
        [0.15199099],
        [0.19517542],
        [0.13373672],
        [0.14166865],
        [0.2211258 ],
        [0.19603986],
        [0.15358548],
        [0.19974127],
        [0.16525499],
        [0.16990083],
        [0.20199465],
        [0.11982438],
        [0.12982285],
        [0.17767362],
        [0.16076588],
        [0.21355511],
        [0.16446774],
        [0.14307962],
        [0.25505916],
        [0.1851261 ],
        [0.19321062],
        [0.12984983],
        [0.15998183],
        [0.18065415],
        [0.12482759],
        [0.17917075],
        [0.13392128],
        [0.13937234],
        [0.12008995],
        [0.21029807],
        [0.12917938],
        [0.0806848 ],
        [0.14828083],
        [0.0604929 ],
        [0.11070353],
        [0.01858177],
        [0.03791053],
        [0.09315708],
        [0.10995165],
        [0.08338189],
        [0.06856677],
        [0.05846956],
        [0.00976347],
        [0.03858453],
        [0.04837435],
        [0.10885778],
        [0.08165597],
        [0.        ],
        [0.08542093]]]), array([[0.08542093],
       [0.06071886]])]
print(X_train.shape)
print(y_train.shape)
(750, 50, 1)
(750, 1)
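As claimed above, the same function handles the multivariate case; a quick sketch with a hypothetical three-column DataFrame:

df_multi = pd.DataFrame(np.random.randn(100, 3), columns=['s1', 's2', 's3'])
# the same function returns a 3-dimensional tensor: (samples, time_steps, channels)
X_multi, y_multi = create_dataset(df_multi, df_multi['s1'], time_steps=10)
print(X_multi.shape, y_multi.shape)   # (90, 10, 3) (90,)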

LSTM model#

Training an LSTM model in Keras is easy. We will use an LSTM layer in a Keras functional model to make our predictions:

Create the model#

# shapes
inputs_shape = (X_train.shape[1], X_train.shape[2])
lstm_output = 60

# layers
inputs = Input(inputs_shape)
x = LSTM(units=lstm_output, name='LSTM_layer')(inputs)
outputs = Dense(1)(x)

# model
serie_0_1_model = Model(inputs=inputs, outputs=outputs, name='series_LSTM_model')
inputs_shape
(50, 1)

Model summary#

serie_0_1_model.summary()

plot_model(serie_0_1_model, to_file='../Imagenes/series_LSTM_model.png', 
           show_shapes=True)
Model: "series_LSTM_model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 50, 1)]           0         
                                                                 
 LSTM_layer (LSTM)           (None, 60)                14880     
                                                                 
 dense (Dense)               (None, 1)                 61        
                                                                 
=================================================================
Total params: 14,941
Trainable params: 14,941
Non-trainable params: 0
_________________________________________________________________
You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) for plot_model/model_to_dot to work.

This means that the model's first layer is of type LSTM, with an input of size \((50,1)\) and an output of 60 units.

The second is a dense layer with an input of 60 units and an output of one unit.

The number of parameters of the LSTM layer is computed as follows:

\[ \text{Number of parameters in the LSTM layer} = 4(p^2 + pn + p) \]

where \(p\) is the output size and \(n\) the input size. The factor 4 comes from the four weight blocks of the LSTM cell: the input, forget and output gates plus the cell candidate.

lstm_output = 60
input_size = 1  # input size seen by the LSTM cell at each time step
#
# each window of length 50 is fed to the LSTM cell one value at a time,
# so the window length does not affect the parameter count
#
num_params = 4*(lstm_output*lstm_output + lstm_output*input_size + lstm_output)
num_params
14880

Compile#

serie_0_1_model.compile(loss='mean_squared_error',
  optimizer=Adam(0.001)
)

Training#

The most important thing to remember when training time-series models is not to shuffle the raw series: the temporal order matters. Once the windows have been built, however, each row already carries its own history, so Keras may shuffle the rows during training (shuffle=True) without leaking future information; note that validation_split takes the last 10% of the rows before shuffling. The rest is fairly standard:

history = serie_0_1_model.fit(
    X_train, y_train,
    epochs=30,
    batch_size=16,
    validation_split=0.1,
    verbose=1,
    shuffle=True
)
Epoch 1/30
43/43 [==============================] - 6s 58ms/step - loss: 0.0256 - val_loss: 0.0041
Epoch 2/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0037 - val_loss: 0.0038
Epoch 3/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0029 - val_loss: 0.0048
Epoch 4/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0030 - val_loss: 0.0031
Epoch 5/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0028 - val_loss: 0.0031
Epoch 6/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0026 - val_loss: 0.0027
Epoch 7/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0025 - val_loss: 0.0027
Epoch 8/30
43/43 [==============================] - 1s 25ms/step - loss: 0.0024 - val_loss: 0.0024
Epoch 9/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0023 - val_loss: 0.0021
Epoch 10/30
43/43 [==============================] - 1s 25ms/step - loss: 0.0022 - val_loss: 0.0026
Epoch 11/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0023 - val_loss: 0.0020
Epoch 12/30
43/43 [==============================] - 1s 25ms/step - loss: 0.0024 - val_loss: 0.0021
Epoch 13/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0021 - val_loss: 0.0028
Epoch 14/30
43/43 [==============================] - 1s 25ms/step - loss: 0.0020 - val_loss: 0.0022
Epoch 15/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0020 - val_loss: 0.0025
Epoch 16/30
43/43 [==============================] - 1s 26ms/step - loss: 0.0020 - val_loss: 0.0020
Epoch 17/30
43/43 [==============================] - 1s 27ms/step - loss: 0.0021 - val_loss: 0.0018
Epoch 18/30
43/43 [==============================] - 1s 27ms/step - loss: 0.0019 - val_loss: 0.0019
Epoch 19/30
43/43 [==============================] - 1s 27ms/step - loss: 0.0020 - val_loss: 0.0038
Epoch 20/30
43/43 [==============================] - 1s 27ms/step - loss: 0.0021 - val_loss: 0.0022
Epoch 21/30
43/43 [==============================] - 1s 33ms/step - loss: 0.0019 - val_loss: 0.0021
Epoch 22/30
...

Model evaluation#

plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend();
../../_images/rnr_Times_series_Intro-lstm_64_0.png

Predictions#

y_pred = serie_0_1_model.predict(X_test)
5/5 [==============================] - 1s 11ms/step
y_pred.shape
(150, 1)

Full series#

plt.plot(np.arange(0, len(y_train)), y_train, 'g', label="history")
plt.plot(np.arange(len(y_train), len(y_train) + len(y_test)), y_test, marker='.', label="true value")
plt.plot(np.arange(len(y_train), len(y_train) + len(y_test)), y_pred, 'r', label="prediction")
plt.ylabel('Value')
plt.xlabel('Time')
plt.legend()
plt.show();
../../_images/rnr_Times_series_Intro-lstm_69_0.png

Test period#

plt.plot(y_test, marker='.', label="true")
plt.plot(y_pred, 'r', label="prediction")
plt.ylabel('Value')
plt.xlabel('Time Step')
plt.legend()
plt.show();
../../_images/rnr_Times_series_Intro-lstm_71_0.png

Transform back to the original scale#

# the scaler was already fit on the full series, so we can map
# predictions and test values back to the original scale
y_pred = scaler.inverse_transform(y_pred)
y_test = scaler.inverse_transform(y_test.reshape(-1,1))
y_test.shape
(150, 1)
plt.plot(y_test, marker='.', label="true")
plt.plot(y_pred, 'r', label="prediction")
plt.ylabel('Value')
plt.xlabel('Time')
plt.legend()
plt.show();
../../_images/rnr_Times_series_Intro-lstm_77_0.png
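As a small added check (not in the original notebook), the fit on the test period can be summarized with the RMSE in the original scale:

rmse = np.sqrt(np.mean((y_test - y_pred) ** 2))
print('Test RMSE:', rmse)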

GRU model#

# shapes
inputs_shape = (X_train.shape[1], X_train.shape[2])
gru_output = 60

# layers
inputs = Input(inputs_shape)
x = GRU(units=gru_output, name='GRU_layer')(inputs)
outputs = Dense(1)(x)

# model
serie_0_1_model_gru = Model(inputs=inputs, outputs=outputs, name='series_GRU_model')


# summary
serie_0_1_model_gru.summary()

plot_model(serie_0_1_model_gru, to_file='../Imagenes/series_GRU_model.png', 
           show_shapes=True)
Model: "series_LSTM_model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_4 (InputLayer)        [(None, 50, 1)]           0         
                                                                 
 GRU_layer (GRU)             (None, 60)                11340     
                                                                 
 dense_3 (Dense)             (None, 1)                 61        
                                                                 
=================================================================
Total params: 11,401
Trainable params: 11,401
Non-trainable params: 0
_________________________________________________________________
You must install pydot (`pip install pydot`) and install graphviz (see instructions at https://graphviz.gitlab.io/download/) for plot_model/model_to_dot to work.
# compile
serie_0_1_model_gru.compile(loss='mean_squared_error',
  optimizer=Adam(0.001)
)
# train
history = serie_0_1_model_gru.fit(
    X_train, y_train,
    epochs=30,
    batch_size=16,
    validation_split=0.1,
    verbose=1,
    shuffle=False
)
Epoch 1/30
43/43 [==============================] - 7s 58ms/step - loss: 0.0064 - val_loss: 0.0027
Epoch 2/30
43/43 [==============================] - 1s 35ms/step - loss: 0.0072 - val_loss: 0.0028
Epoch 3/30
43/43 [==============================] - 1s 34ms/step - loss: 0.0049 - val_loss: 0.0025
Epoch 4/30
43/43 [==============================] - 1s 32ms/step - loss: 0.0032 - val_loss: 0.0020
Epoch 5/30
43/43 [==============================] - 2s 35ms/step - loss: 0.0026 - val_loss: 0.0020
Epoch 6/30
43/43 [==============================] - 1s 34ms/step - loss: 0.0023 - val_loss: 0.0020
Epoch 7/30
43/43 [==============================] - 2s 35ms/step - loss: 0.0021 - val_loss: 0.0020
Epoch 8/30
43/43 [==============================] - 2s 38ms/step - loss: 0.0021 - val_loss: 0.0020
Epoch 9/30
43/43 [==============================] - 2s 41ms/step - loss: 0.0021 - val_loss: 0.0020
Epoch 10/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0021 - val_loss: 0.0020
Epoch 11/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0021 - val_loss: 0.0019
Epoch 12/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 13/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 14/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 15/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 16/30
43/43 [==============================] - 1s 30ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 17/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 18/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 19/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 20/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 21/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 22/30
43/43 [==============================] - 1s 28ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 23/30
43/43 [==============================] - 1s 30ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 24/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 25/30
43/43 [==============================] - 1s 30ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 26/30
43/43 [==============================] - 1s 29ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 27/30
43/43 [==============================] - 1s 31ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 28/30
43/43 [==============================] - 1s 34ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 29/30
43/43 [==============================] - 2s 39ms/step - loss: 0.0022 - val_loss: 0.0019
Epoch 30/30
43/43 [==============================] - 2s 40ms/step - loss: 0.0022 - val_loss: 0.0019
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend();
../../_images/rnr_Times_series_Intro-lstm_82_0.png

Predictions#

y_pred_gru = serie_0_1_model_gru.predict(X_test)
# back to the original scale
y_pred_gru = scaler.inverse_transform(y_pred_gru)
5/5 [==============================] - 1s 10ms/step
plt.plot(y_test, marker='.', label="true")
plt.plot(y_pred, 'r', label="prediction lstm")
plt.plot(y_pred_gru, 'g', label="prediction gru")
plt.ylabel('Value')
plt.xlabel('Time Step')
plt.legend()
plt.show();
../../_images/rnr_Times_series_Intro-lstm_85_0.png

Number of parameters#

In Keras, the GRU layer uses reset_after=True by default, which keeps two bias vectors per gate; the parameter count is therefore \(3(p^2 + pn + 2p)\) rather than the classical \(3(p^2 + pn + p)\):

p = 60   # output size
n = 1    # input size
3*(p*p + p*n + 2*p)
11340

This matches the 11,340 parameters reported in the model summary above.

References#

  1. Introducción a Redes LSTM

  2. Time Series Forecasting with LSTMs using TensorFlow 2 and Keras in Python

  3. Dive into Deep Learning