Predicting a sequence of future values of a stock. Apple (LSTM)#

Import the required libraries#

#
from __future__ import absolute_import, division, print_function, unicode_literals

import tensorflow as tf

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline
from sklearn.preprocessing import MinMaxScaler

# set the default figure size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 20,10

# import Keras objects
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Dropout, LSTM
print("TensorFlow version: ", tf.__version__)

# optimizer
from tensorflow.keras.optimizers import Adam
TensorFlow version:  2.9.1

Reading the data#

These data correspond to the Apple company. There are 3019 observations of the stock price and of the number of share transactions (buy-sell). The data are daily (business/trading days), covering January 3, 2006 through January 1, 2018.

The Date column is the date; Open is the stock price at market open; High is the highest price reached during the day; Low is the lowest price of the day; Close is the closing price; Volume is the number of shares traded during the day; and Name is the company's ticker symbol, Apple (AAPL) in this case.

The data can be downloaded directly from Kaggle.

# read the data directly from the repository URL

df = pd.read_csv('https://raw.githubusercontent.com/AprendizajeProfundo/Libro-Fundamentos/main/Redes_Recurrentes/Datos/AAPL_2006-01-01_to_2018-01-01.csv')
# looking at the first five rows of the data
print('\n Shape of the data:')
print(df.shape)
df.head()
 Shape of the data:
(3019, 7)
Date Open High Low Close Volume Name
0 2006-01-03 10.34 10.68 10.32 10.68 201853036 AAPL
1 2006-01-04 10.73 10.85 10.64 10.71 155225609 AAPL
2 2006-01-05 10.69 10.70 10.54 10.63 112396081 AAPL
3 2006-01-06 10.75 10.96 10.65 10.90 176139334 AAPL
4 2006-01-09 10.96 11.03 10.82 10.86 168861224 AAPL

We now change the index of the data, taking the date as the index: df.index. If the file listed the most recent observations first, the rows would need to be sorted in ascending date order; in this file the data are already in chronological order, so the sorting line below is left commented out.

Extract the data for the series to be predicted: Close#

#creating dataframe with date and the target variable

df['Date'] = pd.to_datetime(df.Date,format='%Y-%m-%d')
df.index = df['Date']
# df = df.sort_index(ascending=True, axis=0)
data = pd.DataFrame(df[['Date', 'Close']])
#
#setting index
data.index = data.Date
data.drop('Date', axis=1, inplace=True)
data.head()
Close
Date
2006-01-03 10.68
2006-01-04 10.71
2006-01-05 10.63
2006-01-06 10.90
2006-01-09 10.86

Visualization of the closing-price series#

# train/test split sizes
len_data = len(data)
len_train = int(len_data*0.8)   # 80% = 2415
len_test = len_data - len_train # 20% = 604
print (len_data, '=', len_train, '+', len_test)
3019 = 2415 + 604
plt.figure(figsize=(16,8))
plt.plot(data[:len_train], label='Training set: {} points (80%)'.format(len_train))
plt.plot(data['Close'][len_train:], label='Validation set: {} points (20%)'.format(len_test))
plt.title("Apple: history of the closing price (Close)", size = 20)
plt.legend()
plt.show()
../../_images/rnr_accion_Apple_Prediccion_tres_dias-multiple_12_0.png

Preparing the data to train the LSTM network#

To avoid problems with trends and to improve estimation (training), the data are rescaled to the interval \([0,1]\). For the predictions, the inverse transformation is applied.

First extract the values and create the MinMaxScaler object#

# extract the raw values
dataset = data.values

# create the scaler object and scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = np.squeeze(np.array(scaler.fit_transform(dataset)), axis=1)
dataset.shape
(3019,)
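The inverse transformation mentioned above is what later converts the network's predictions back to the price scale. A minimal sketch of the round trip, using a few hypothetical prices:

```python
from sklearn.preprocessing import MinMaxScaler
import numpy as np

# hypothetical closing prices, just to illustrate the round trip
prices = np.array([[10.68], [10.71], [10.63], [10.90], [10.86]])

scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)        # values rescaled to [0, 1]
restored = scaler.inverse_transform(scaled)  # back to the original price scale
```

The same `scaler` object fitted on the series must be kept, since its learned minimum and maximum are what `inverse_transform` uses.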

Create the training data#

The LSTM network will take «time_step» consecutive values as input and produce 10 values as output (from those «time_step» values it predicts the next 10 days). The training set is built accordingly:

  1. Number of consecutive input values: time_step = 60.

  2. Days to predict: days = 10

Function to create the training data#

def multipaso_data(dataset, target, start_index, end_index, history_size,
                      target_size,  single_step=False):
    ''' dataset: data used to build the input sequences
        target:  data used to build the output sequences
        start_index: first index from which to start taking data
        end_index: last index up to which data is taken. None to take them all
        history_size: window size used to create the input sequences
        target_size: how many future observations to forecast
        single_step: predict a single future value (=True),
                     or predict all values up to target_size (=False)
    '''  
    data = []
    labels = []

    start_index = start_index + history_size
    if end_index is None:
        end_index = len(dataset) - target_size

    for i in range(start_index, end_index):
        indices = range(i-history_size, i)
        data.append(dataset[indices])

        if single_step:
            labels.append(target[i+target_size])
        else:
            labels.append(target[i:i+target_size])

    return np.array(data), np.array(labels)
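To see what this windowing produces, here is a minimal, self-contained equivalent of the function's core loop (the `start_index=0`, `end_index=None`, `single_step=False` case) applied to a toy series; `make_windows` is an illustrative name, not part of the notebook:

```python
import numpy as np

def make_windows(series, history_size, target_size):
    # same sliding-window logic as multipaso_data: each input window of
    # `history_size` values is paired with the next `target_size` values
    X, y = [], []
    for i in range(history_size, len(series) - target_size):
        X.append(series[i - history_size:i])
        y.append(series[i:i + target_size])
    return np.array(X), np.array(y)

series = np.arange(10)  # toy series 0..9
X, y = make_windows(series, history_size=3, target_size=2)
# first input window is [0, 1, 2]; its target is [3, 4]
```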

A seed is set to guarantee reproducibility.

tf.random.set_seed(100)
#
# hyperparameters used to create the sequences
#
# number of future steps to predict
future_target = 10

# length of the input sequences
past_history = 60 

TRAIN_SPLIT = int(len_data*0.8) # 2415: number of training points

# create the datasets
X_train, y_train = multipaso_data(dataset, dataset, 0,
                                                 TRAIN_SPLIT, past_history,
                                                 future_target)
X_test, y_test = multipaso_data(dataset, dataset, TRAIN_SPLIT,
                                                 None, past_history,
                                                 future_target)

print(TRAIN_SPLIT)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
2415
(2355, 60)
(2355, 10)
(534, 60)
(534, 10)
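These shapes follow directly from the window sizes; a quick sketch of the arithmetic:

```python
# reproduce the shape arithmetic from the values above
len_data, past_history, future_target = 3019, 60, 10
TRAIN_SPLIT = int(len_data * 0.8)      # 2415

# training sequences: the first window needs past_history points of history
n_train = TRAIN_SPLIT - past_history   # 2415 - 60 = 2355

# test sequences: end_index=None stops at len_data - future_target = 3009,
# and the first test window starts at TRAIN_SPLIT + past_history = 2475
n_test = (len_data - future_target) - (TRAIN_SPLIT + past_history)  # 534
```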

Create the LSTM model#

We reuse the architecture of the model trained for one-day prediction; only the output layer changes, so that it predicts future_target values.

# shapes
input_shape = (X_train.shape[1], 1)
units = 50

# layers
inputs = Input(input_shape)
x = Dropout(0.0, name= 'Dropout_01')(inputs)
x = LSTM(units=units, name='LSTM_layer')(x)
#x = LSTM(units=units, return_sequences=True,name='LSTM_layer')(inputs)
#x = Dropout(0.4)
#x = LSTM(units=units//2, name='LSTM_layer_2')(x)
#x = Dropout(0.4)
x = Dropout(0.0, name= 'Dropout_02')(x)
outputs = Dense(future_target)(x)

# model
model_01 = Model(inputs=inputs, outputs=outputs, name='series_LSTM_model')
model_01.summary()
Model: "series_LSTM_model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 60, 1)]           0         
                                                                 
 Dropout_01 (Dropout)        (None, 60, 1)             0         
                                                                 
 LSTM_layer (LSTM)           (None, 50)                10400     
                                                                 
 Dropout_02 (Dropout)        (None, 50)                0         
                                                                 
 dense (Dense)               (None, 10)                510       
                                                                 
=================================================================
Total params: 10,910
Trainable params: 10,910
Non-trainable params: 0
_________________________________________________________________
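The parameter counts in the summary can be checked by hand: an LSTM layer has four gates, each with one weight per input feature, one weight per recurrent unit, and one bias, per unit:

```python
# verify the parameter counts reported by model.summary()
units, n_features = 50, 1

# LSTM: 4 gates x (input weights + recurrent weights + bias) x units
lstm_params = 4 * ((n_features + units + 1) * units)  # 10400

# Dense: (units + bias) x 10 outputs
dense_params = (units + 1) * 10                       # 510

total = lstm_params + dense_params                    # 10910
```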

Compile#

The Adam optimizer and the MSE loss function will be used.

model_01.compile(loss='mean_squared_error',
  optimizer=Adam(0.001))

Load the model pretrained for one day#

Train the model#

#history = model_01.fit(X_train,y_train,epochs=20,batch_size=32)
history = model_01.fit(
    X_train, y_train,
    epochs=40,
    batch_size=32,
    validation_split=0.1,
    verbose=1,
    shuffle=False
)
Epoch 1/40
67/67 [==============================] - 6s 43ms/step - loss: 0.0044 - val_loss: 0.0273
Epoch 2/40
67/67 [==============================] - 2s 30ms/step - loss: 0.0079 - val_loss: 0.0122
Epoch 3/40
67/67 [==============================] - 2s 29ms/step - loss: 0.0038 - val_loss: 0.0037
Epoch 4/40
67/67 [==============================] - 2s 27ms/step - loss: 0.0014 - val_loss: 9.5578e-04
Epoch 5/40
67/67 [==============================] - 2s 27ms/step - loss: 4.3329e-04 - val_loss: 9.3428e-04
Epoch 6/40
67/67 [==============================] - 2s 27ms/step - loss: 2.9866e-04 - val_loss: 8.7440e-04
Epoch 7/40
67/67 [==============================] - 2s 27ms/step - loss: 3.3066e-04 - val_loss: 8.5611e-04
Epoch 8/40
67/67 [==============================] - 2s 30ms/step - loss: 2.9848e-04 - val_loss: 8.2353e-04
Epoch 9/40
67/67 [==============================] - 2s 32ms/step - loss: 3.2923e-04 - val_loss: 8.0485e-04
Epoch 10/40
67/67 [==============================] - 2s 34ms/step - loss: 3.2093e-04 - val_loss: 7.8456e-04
Epoch 11/40
67/67 [==============================] - 2s 27ms/step - loss: 3.4252e-04 - val_loss: 7.7259e-04
Epoch 12/40
67/67 [==============================] - 2s 28ms/step - loss: 3.4771e-04 - val_loss: 7.5997e-04
Epoch 13/40
32/67 [=============>................] - ETA: 1s - loss: 1.6693e-04

34/67 [==============>...............] - ETA: 1s - loss: 1.7273e-04

36/67 [===============>..............] - ETA: 0s - loss: 1.7234e-04

38/67 [================>.............] - ETA: 0s - loss: 1.6975e-04

41/67 [=================>............] - ETA: 0s - loss: 1.6926e-04

43/67 [==================>...........] - ETA: 0s - loss: 1.7582e-04

46/67 [===================>..........] - ETA: 0s - loss: 1.8756e-04

48/67 [====================>.........] - ETA: 0s - loss: 2.2628e-04

51/67 [=====================>........] - ETA: 0s - loss: 2.2621e-04

52/67 [======================>.......] - ETA: 0s - loss: 2.7051e-04

54/67 [=======================>......] - ETA: 0s - loss: 3.0324e-04

57/67 [========================>.....] - ETA: 0s - loss: 3.0597e-04

59/67 [=========================>....] - ETA: 0s - loss: 3.3523e-04

62/67 [==========================>...] - ETA: 0s - loss: 3.6389e-04

65/67 [============================>.] - ETA: 0s - loss: 3.6447e-04

67/67 [==============================] - ETA: 0s - loss: 3.6286e-04

67/67 [==============================] - 2s 31ms/step - loss: 3.6286e-04 - val_loss: 7.5029e-04
Epoch 14/40
 1/67 [..............................] - ETA: 1s - loss: 2.0981e-05

 4/67 [>.............................] - ETA: 1s - loss: 2.9686e-05
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='test')
plt.legend();
../../_images/rnr_accion_Apple_Prediccion_tres_dias-multiple_34_0.png

Predictions#

Prepare the validation data#

X_test.shape
(534, 60)
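The (534, 60) shape comes from the sliding-window construction used earlier in the notebook: each validation sample is a window of 60 scaled closing prices, and its target is the following 10 values. A minimal numpy sketch of that windowing on a hypothetical series (`make_windows` is an illustrative helper, not a function from the notebook):

```python
import numpy as np

def make_windows(series, n_in=60, n_out=10):
    """Slide a window over a 1-D series: each sample uses n_in past
    values as input and the following n_out values as target."""
    X, y = [], []
    for i in range(len(series) - n_in - n_out + 1):
        X.append(series[i:i + n_in])
        y.append(series[i + n_in:i + n_in + n_out])
    return np.array(X), np.array(y)

series = np.arange(100, dtype=float)  # hypothetical scaled price series
X, y = make_windows(series)
print(X.shape, y.shape)  # (31, 60) (31, 10)
```

With 534 windows over the test portion of the series, the same construction yields the (534, 60) inputs and (534, 10) targets used below.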

Compute the predictions#

# predictions
prediction = model_01.predict(X_test)
#prediction = scaler.inverse_transform(prediction)
17/17 [==============================] - 1s 15ms/step
print(X_test.shape)
print(prediction.shape)
print(y_test.shape)
(534, 60)
(534, 10)
(534, 10)
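The predictions are still on the (0, 1) scale; the commented-out `scaler.inverse_transform` line hints at mapping them back to prices. A sketch of that inverse mapping done explicitly with the fitted scaler's attributes (`prices` here is a hypothetical stand-in for the Close column):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Hypothetical prices standing in for the Close column.
prices = np.linspace(10.0, 180.0, 200).reshape(-1, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)

# A (n_samples, 10) block of scaled predictions can be mapped back to
# prices column-wise with the fitted min/max (numpy broadcasting):
pred_scaled = scaled[:10].reshape(1, 10)  # fake (1, 10) "prediction"
pred_prices = pred_scaled * (scaler.data_max_ - scaler.data_min_) + scaler.data_min_
print(np.allclose(pred_prices.ravel(), prices[:10].ravel()))  # True
```

Because the scaler was fitted on a single column, the same min/max pair applies to every forecast horizon, so the formula broadcasts cleanly over the 10 output columns.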

Prepare data to display the predictions#

pred = 0  # index of the sample to plot
y_train_p = X_test[pred, :]      # input window (history)
y_test_p = y_test[pred, :]       # true future values
y_pred_p = prediction[pred, :]   # predicted future values

#print(y_train_p.shape)
#print(y_test_p.shape)
#print(y_pred_p.shape)

Plot of the predictions#

plt.plot(np.arange(0, len(y_train_p)), y_train_p, 'g', label="history")
plt.plot(np.arange(len(y_train_p), len(y_train_p) + len(y_test_p)), y_test_p, marker='.', label="true")
plt.plot(np.arange(len(y_train_p), len(y_train_p) + len(y_test_p)), y_pred_p, 'r', label="prediction")
#plt.ylabel('Value')
plt.xlabel('Time Step')
plt.title("Apple: history of the closing price. Scale (0,1)", size=20)
plt.legend()
plt.show();
../../_images/rnr_accion_Apple_Prediccion_tres_dias-multiple_44_0.png

Save the trained model#

model_01.save('../Datos/modelo_Apple_3_dia_multipl3.h5')

Confidence intervals. TO DO#

model_01.get_config()
{'name': 'series_LSTM_model',
 'layers': [{'class_name': 'InputLayer',
   'config': {'batch_input_shape': (None, 60, 1),
    'dtype': 'float32',
    'sparse': False,
    'ragged': False,
    'name': 'input_1'},
   'name': 'input_1',
   'inbound_nodes': []},
  {'class_name': 'Dropout',
   'config': {'name': 'Dropout_01',
    'trainable': True,
    'dtype': 'float32',
    'rate': 0.0,
    'noise_shape': None,
    'seed': None},
   'name': 'Dropout_01',
   'inbound_nodes': [[['input_1', 0, 0, {}]]]},
  {'class_name': 'LSTM',
   'config': {'name': 'LSTM_layer',
    'trainable': True,
    'dtype': 'float32',
    'return_sequences': False,
    'return_state': False,
    'go_backwards': False,
    'stateful': False,
    'unroll': False,
    'time_major': False,
    'units': 50,
    'activation': 'tanh',
    'recurrent_activation': 'sigmoid',
    'use_bias': True,
    'kernel_initializer': {'class_name': 'GlorotUniform',
     'config': {'seed': None},
     'shared_object_id': 2},
    'recurrent_initializer': {'class_name': 'Orthogonal',
     'config': {'gain': 1.0, 'seed': None},
     'shared_object_id': 3},
    'bias_initializer': {'class_name': 'Zeros',
     'config': {},
     'shared_object_id': 4},
    'unit_forget_bias': True,
    'kernel_regularizer': None,
    'recurrent_regularizer': None,
    'bias_regularizer': None,
    'activity_regularizer': None,
    'kernel_constraint': None,
    'recurrent_constraint': None,
    'bias_constraint': None,
    'dropout': 0.0,
    'recurrent_dropout': 0.0,
    'implementation': 2},
   'name': 'LSTM_layer',
   'inbound_nodes': [[['Dropout_01', 0, 0, {}]]]},
  {'class_name': 'Dropout',
   'config': {'name': 'Dropout_02',
    'trainable': True,
    'dtype': 'float32',
    'rate': 0.0,
    'noise_shape': None,
    'seed': None},
   'name': 'Dropout_02',
   'inbound_nodes': [[['LSTM_layer', 0, 0, {}]]]},
  {'class_name': 'Dense',
   'config': {'name': 'dense',
    'trainable': True,
    'dtype': 'float32',
    'units': 10,
    'activation': 'linear',
    'use_bias': True,
    'kernel_initializer': {'class_name': 'GlorotUniform',
     'config': {'seed': None}},
    'bias_initializer': {'class_name': 'Zeros', 'config': {}},
    'kernel_regularizer': None,
    'bias_regularizer': None,
    'activity_regularizer': None,
    'kernel_constraint': None,
    'bias_constraint': None},
   'name': 'dense',
   'inbound_nodes': [[['Dropout_02', 0, 0, {}]]]}],
 'input_layers': [['input_1', 0, 0]],
 'output_layers': [['dense', 0, 0]]}
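One simple way to attach uncertainty to the multi-step forecasts is an empirical prediction interval built from validation residuals. This is a sketch of that approach, not the notebook's implementation; `y_val` and `y_hat` are hypothetical stand-ins for the validation targets and model predictions:

```python
import numpy as np

# Hypothetical (n_samples, horizon) arrays of true and predicted scaled prices.
rng = np.random.default_rng(0)
y_val = rng.random((534, 10))
y_hat = y_val + rng.normal(0.0, 0.02, size=(534, 10))

resid = y_val - y_hat                  # per-sample, per-horizon errors
lo = np.quantile(resid, 0.05, axis=0)  # 5th percentile of the error, per step
hi = np.quantile(resid, 0.95, axis=0)  # 95th percentile of the error, per step

# An approximate 90% interval around a new 10-step prediction y_new:
y_new = y_hat[0]
band = (y_new + lo, y_new + hi)
```

The band typically widens with the forecast horizon, since errors accumulate over the 10 predicted steps; model-based alternatives such as Monte Carlo dropout would require a nonzero dropout rate, which this model does not use.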

References#

  1. Introducción a Redes LSTM

  2. Time Series Forecasting with LSTMs using TensorFlow 2 and Keras in Python

  3. Dive into Deep Learning

  4. Understanding LSTM Networks

  5. Ralf C. Staudemeyer and Eric Rothstein Morris, Understanding LSTM: a tutorial into Long Short-Term Memory Recurrent Neural Networks, arXiv, September 2019

  6. Karpathy, The Unreasonable Effectiveness of Recurrent Neural Networks

  7. Anton Lucanus, Making Automation More Efficient by Learning from Historical Trade Data, January 7, 2020

  8. https://www.analyticsvidhya.com/blog/2018/10/predicting-stock-price-machine-learningnd-deep-learning-techniques-python/

  9. https://www.youtube.com/watch?v=2BrpKpWwT2A&list=PLQVvvaa0QuDcOdF96TBtRtuQksErCEBYZ&index=1

  10. https://towardsdatascience.com/using-lstms-for-stock-market-predictions-tensorflow-9e83999d4653

  11. https://github.com/llSourcell/Reinforcement_Learning_for_Stock_Prediction/blob/master/README.md