Predicting a stock's next-day price: Apple (LSTM)#
With dropout
Import the required libraries#
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib inline
from sklearn.preprocessing import MinMaxScaler
# set the default figure size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 20, 10
# Keras objects
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Dropout, LSTM, Bidirectional
# optimizer
from tensorflow.keras.optimizers import Adam
print("TensorFlow version: ", tf.__version__)
TensorFlow version:  2.9.1
Reading the data#
These data correspond to Apple. There are 3019 daily observations (business days) of the stock price and of the number of shares traded, covering January 3, 2006 through January 1, 2018.
The Date column is the date; Open is the price at market open; High and Low are the day's highest and lowest prices; Close is the closing price; Volume is the number of shares traded during the day; and Name is the company's ticker symbol, AAPL in this case.
The data can be downloaded directly from Kaggle.
# read the CSV directly from the book's GitHub repository
df = pd.read_csv('https://raw.githubusercontent.com/AprendizajeProfundo/Libro-Fundamentos/main/Redes_Recurrentes/Datos/AAPL_2006-01-01_to_2018-01-01.csv')
# looking at the first five rows of the data
print('\n Shape of the data:')
print(df.shape)
df.head()
Shape of the data:
(3019, 7)
Date | Open | High | Low | Close | Volume | Name | |
---|---|---|---|---|---|---|---|
0 | 2006-01-03 | 10.34 | 10.68 | 10.32 | 10.68 | 201853036 | AAPL |
1 | 2006-01-04 | 10.73 | 10.85 | 10.64 | 10.71 | 155225609 | AAPL |
2 | 2006-01-05 | 10.69 | 10.70 | 10.54 | 10.63 | 112396081 | AAPL |
3 | 2006-01-06 | 10.75 | 10.96 | 10.65 | 10.90 | 176139334 | AAPL |
4 | 2006-01-09 | 10.96 | 11.03 | 10.82 | 10.86 | 168861224 | AAPL |
We change the index of the DataFrame, taking the date as index: df.index. In this file the observations are already ordered from oldest (top) to most recent (bottom), so no reordering is needed; the sort_index call below is left commented out.
Extract the series to be predicted: Close#
# create a DataFrame with the date and the target variable
df['Date'] = pd.to_datetime(df.Date, format='%Y-%m-%d')
df.index = df['Date']
# df = df.sort_index(ascending=True, axis=0)
data = pd.DataFrame(df[['Date', 'Close']])
#
# set the date as index
data.index = data.Date
data.drop('Date', axis=1, inplace=True)
data.head()
Close | |
---|---|
Date | |
2006-01-03 | 10.68 |
2006-01-04 | 10.71 |
2006-01-05 | 10.63 |
2006-01-06 | 10.90 |
2006-01-09 | 10.86 |
Visualization of the closing-price series#
# train/validation split sizes
len_data = len(data)
len_train = int(len_data * 0.8)   # 80% = 2415
len_test = len_data - len_train   # 20% = 604
print(len_data, '=', len_train, '+', len_test)
3019 = 2415 + 604
plt.figure(figsize=(16,8))
plt.plot(data[:len_train], label='Training set: {} points (80%)'.format(len_train))
plt.plot(data['Close'][len_train:], label='Validation set: {} points (20%)'.format(len_test))
plt.title("Apple: closing price (Close) history", size=20)
plt.legend()
plt.show()
Preparing the data to train the LSTM network#
To avoid problems with trends and to improve training, the data are rescaled to \([0,1]\). Predictions are mapped back to the original scale with the inverse transformation.
First extract the values and create the MinMaxScaler object#
# extract the raw values
dataset = data.values
# create the scaler object and scale the data
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = np.squeeze(np.array(scaler.fit_transform(dataset)), axis=1)
dataset.shape
(3019,)
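The scaling step relies on scaler.inverse_transform exactly undoing fit_transform. A minimal round-trip sketch on toy prices (the values are illustrative, not from the Apple series):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy prices: map to [0, 1], then recover the original scale.
prices = np.array([[10.0], [15.0], [20.0]])
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(prices)        # min -> 0.0, max -> 1.0
restored = scaler.inverse_transform(scaled)  # back to dollars
print(scaled.ravel())    # [0.  0.5 1. ]
print(restored.ravel())  # [10. 15. 20.]
```

Note that the scaler remembers the min and max seen during fit, which is why the same scaler object must be reused later to invert the predictions.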
Create the training data#
The LSTM network takes «time_step» consecutive values as input and outputs a single value: the prediction for the next day. The training set is built accordingly:
Number of consecutive values per input window: time_step = 60.
Days ahead to predict: days = 1
Function to create the training data#
def univariate_data(dataset, start_index, end_index, history_size, target_size):
    '''dataset: the data set
    start_index: first index from which to take data
    end_index: last index from which to take data; None to take them all
    history_size: window size used to build the input sequences
    target_size: how many observations ahead to forecast
    '''
    data = []
    labels = []
    start_index = start_index + history_size
    if end_index is None:
        end_index = len(dataset) - target_size
    for i in range(start_index, end_index):
        indices = range(i - history_size, i)
        # Reshape data from (history_size,) to (history_size, 1)
        data.append(np.reshape(dataset[indices], (history_size, 1)))
        labels.append(dataset[i + target_size])
    return np.array(data), np.array(labels)
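A quick sanity check of the windowing logic on a toy series (the function is reproduced here so the snippet runs on its own). With history_size=3 and target_size=1, the first window is [0, 1, 2] and its label is the value at index 4, i.e. the label sits target_size positions after index i, where the window covers indices i-history_size through i-1:

```python
import numpy as np

def univariate_data(dataset, start_index, end_index, history_size, target_size):
    # Same logic as the function defined above.
    data, labels = [], []
    start_index = start_index + history_size
    if end_index is None:
        end_index = len(dataset) - target_size
    for i in range(start_index, end_index):
        data.append(np.reshape(dataset[i - history_size:i], (history_size, 1)))
        labels.append(dataset[i + target_size])
    return np.array(data), np.array(labels)

series = np.arange(10.0)  # 0, 1, ..., 9
X, y = univariate_data(series, 0, None, history_size=3, target_size=1)
print(X.shape, y.shape)    # (6, 3, 1) (6,)
print(X[0].ravel(), y[0])  # [0. 1. 2.] 4.0
```

This indexing also explains the shapes printed further down: the 2415 training points yield 2415 - 60 = 2355 windows.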
A seed is set to guarantee reproducibility within TensorFlow.
tf.random.set_seed(500)
#
# hyperparameters for building the sequences
past_history = 60   # input sequence length
future_target = 1   # days ahead
TRAIN_SPLIT = int(len_data * 0.8)  # 2415: number of training points
# training sequences
X_train, y_train = univariate_data(dataset, 0, TRAIN_SPLIT,
                                   past_history,
                                   future_target)
#
# validation sequences
# none of these were seen during training
X_test, y_test = univariate_data(dataset, TRAIN_SPLIT, None,
                                 past_history,
                                 future_target)
print(TRAIN_SPLIT)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
2415
(2355, 60, 1)
(2355,)
(543, 60, 1)
(543,)
print('Window of past history')
print(X_train[1])
print('\n Stock value to predict')
print(y_train[1])
Window of past history
[[0.0205107 ]
[0.02003783]
[0.02163376]
[0.02139733]
[0.02547582]
[0.0280766 ]
[0.02837215]
[0.02949521]
[0.0287268 ]
[0.02683532]
[0.023939 ]
[0.02145644]
[0.02281594]
[0.02139733]
[0.0198605 ]
[0.01826457]
[0.01802814]
[0.0205107 ]
[0.02098357]
[0.02086535]
[0.01808724]
[0.01785081]
[0.01400875]
[0.01430429]
[0.01530914]
[0.01205816]
[0.01406786]
[0.01182173]
[0.01430429]
[0.01566379]
[0.01678685]
[0.01655042]
[0.01554557]
[0.01743705]
[0.0177917 ]
[0.01755527]
[0.01714151]
[0.01501359]
[0.01554557]
[0.01595933]
[0.0143634 ]
[0.01247192]
[0.01318123]
[0.01264925]
[0.01117153]
[0.01058045]
[0.01264925]
[0.01406786]
[0.01312212]
[0.01152619]
[0.01182173]
[0.01123064]
[0.00939827]
[0.00928006]
[0.00797967]
[0.00786145]
[0.00744769]
[0.00679749]
[0.00981203]
[0.01016669]]
Stock value to predict
0.010107577727863803
Create the LSTM model#
# shapes
input_shape = (X_train.shape[1], X_train.shape[2])
units = 50
# layers
inputs = Input(input_shape)
x = Dropout(0.2, name='Dropout_01')(inputs)
x = LSTM(units=units, name='LSTM_layer')(x)
# alternative: two stacked LSTM layers
#x = LSTM(units=units, return_sequences=True, name='LSTM_layer')(inputs)
#x = Dropout(0.4)(x)
#x = LSTM(units=units//2, name='LSTM_layer_2')(x)
#x = Dropout(0.4)(x)
x = Dropout(0.2, name='Dropout_02')(x)
outputs = Dense(1)(x)
# model
model_01 = Model(inputs=inputs, outputs=outputs, name='series_LSTM_model')
model_01.summary()
Model: "series_LSTM_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 60, 1)] 0
Dropout_01 (Dropout) (None, 60, 1) 0
LSTM_layer (LSTM) (None, 50) 10400
Dropout_02 (Dropout) (None, 50) 0
dense (Dense) (None, 1) 51
=================================================================
Total params: 10,451
Trainable params: 10,451
Non-trainable params: 0
_________________________________________________________________
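The parameter counts in the summary can be verified by hand: an LSTM layer has four gates, each with an input kernel, a recurrent kernel, and a bias vector. A quick check in pure Python, no Keras needed:

```python
# LSTM: 4 gates x (input kernel + recurrent kernel + bias)
units, input_dim = 50, 1
lstm_params = 4 * (input_dim * units + units * units + units)
# Dense(1): one weight per LSTM unit plus one bias
dense_params = units * 1 + 1
print(lstm_params, dense_params)  # 10400 51
```

The two Dropout layers contribute no parameters, so the total is 10400 + 51 = 10451, matching the summary.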
Compile#
We use the Adam optimizer with the mean squared error (MSE) loss.
model_01.compile(loss='mean_squared_error',
optimizer=Adam(0.001))
Train the model#
#history = model_01.fit(X_train, y_train, epochs=20, batch_size=32)
history = model_01.fit(
    X_train, y_train,
    epochs=10,
    batch_size=32,
    validation_split=0.1,
    verbose=1,
    shuffle=False
)
Epoch 1/10
67/67 [==============================] - 5s 39ms/step - loss: 0.0036 - val_loss: 0.0123
Epoch 2/10
67/67 [==============================] - 2s 27ms/step - loss: 0.0050 - val_loss: 0.0027
Epoch 3/10
67/67 [==============================] - 2s 31ms/step - loss: 0.0035 - val_loss: 0.0011
Epoch 4/10
67/67 [==============================] - 2s 33ms/step - loss: 0.0025 - val_loss: 6.0377e-04
Epoch 5/10
67/67 [==============================] - 2s 32ms/step - loss: 0.0029 - val_loss: 6.3398e-04
Epoch 6/10
67/67 [==============================] - 2s 37ms/step - loss: 0.0026 - val_loss: 0.0011
Epoch 7/10
67/67 [==============================] - 2s 37ms/step - loss: 0.0024 - val_loss: 0.0013
Epoch 8/10
67/67 [==============================] - 2s 30ms/step - loss: 0.0029 - val_loss: 0.0012
Epoch 9/10
67/67 [==============================] - 2s 33ms/step - loss: 0.0028 - val_loss: 8.1421e-04
Epoch 10/10
67/67 [==============================] - 2s 29ms/step - loss: 0.0024 - val_loss: 8.3660e-04
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend();
Predictions#
Prepare the validation data#
X_test.shape
(543, 60, 1)
Compute the predictions#
# predictions
prediction1 = model_01.predict(X_test)
#prediction = scaler.inverse_transform(prediction)
17/17 [==============================] - 1s 8ms/step
print(prediction1.shape)
print(y_test.shape)
(543, 1)
(543,)
Remove extra dimensions for the plots#
y_train_p1 = y_train  # already one-dimensional
y_test_p1 = y_test    # already one-dimensional
y_pred_p1 = np.squeeze(prediction1, axis=-1)
print(y_train_p1.shape)
print(y_test_p1.shape)
print(y_pred_p1.shape)
k = 0
for i, j in zip(y_test_p1, y_pred_p1):
    print(i, j, i - j)
    k += 1
    if k == 10:
        break
(2355,)
(543,)
(543,)
0.6783307719588605 0.637711 0.04061978343728456
0.6719470386570517 0.64086896 0.03107807684598607
0.6727745596406196 0.64351714 0.029257423066828103
0.6698782361981322 0.64548534 0.02439289464921257
0.6474169523584349 0.64730024 0.000116708980749336
0.6435157820073296 0.6487008 -0.005184991708643283
0.6412105449816764 0.6483837 -0.007173132024091422
0.6212318240926825 0.6477511 -0.026519268818084085
0.6321078141624307 0.64690495 -0.014797131211104486
0.6292114907199433 0.64466983 -0.015458340079159472
Plot of the predictions#
plt.plot(np.arange(0, len(y_train_p1)), y_train_p1, 'g', label="history")
plt.plot(np.arange(len(y_train_p1), len(y_train_p1) + len(y_test_p1)), y_test_p1, marker='.', label="actual")
plt.plot(np.arange(len(y_train_p1), len(y_train_p1) + len(y_test_p1)), y_pred_p1, 'r', label="prediction")
#plt.ylabel('Value')
plt.xlabel('Time Step')
plt.title("Apple: closing price history. Scale (0,1)", size=20)
plt.legend()
plt.show();
Back to the original scale#
y_pred_or1 = scaler.inverse_transform(y_pred_p1.reshape(-1,1))
y_test_or1 = scaler.inverse_transform(y_test_p1.reshape(-1,1))
k = 0
for i, j in zip(y_test_or1, y_pred_or1):
    print(i, j, i - j)
    k += 1
    if k == 10:
        break
[122.] [115.127945] [6.87205505]
[120.92] [115.66221] [5.25779144]
[121.06] [116.11023] [4.94977051]
[120.57] [116.44321] [4.12679321]
[116.77] [116.75025] [0.01974823]
[116.11] [116.9872] [-0.87719788]
[115.72] [116.93355] [-1.21354797]
[112.34] [116.82653] [-4.48653046]
[114.18] [116.68338] [-2.50338013]
[113.69] [116.30524] [-2.61523682]
rmsLSTM = np.sqrt(np.mean(np.power(y_pred_or1 - y_test_or1, 2)))
print(rmsLSTM)
12.17391516580725
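To judge whether an RMSE of roughly 12 dollars is good, it helps to compare against the persistence (naive) baseline, which simply predicts that tomorrow's close equals today's. A self-contained sketch on a toy series; on this notebook's data one would compare y_test_or1[1:] against y_test_or1[:-1]:

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

series = np.array([1.0, 2.0, 4.0, 7.0])  # toy closing prices
naive_pred = series[:-1]  # prediction for day t is the close of day t-1
actual = series[1:]
print(rmse(actual, naive_pred))
```

A model is only adding value to the extent that its RMSE beats this baseline.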
plt.plot(np.arange(0, len(y_test_or1)), y_test_or1, marker='.', label="true")
plt.plot(np.arange(0, len(y_test_or1)), y_pred_or1, marker='+', label="predicted")
plt.xlabel('Time Step')
plt.annotate("rms = " + str(round(rmsLSTM, 2)), xy=(100, 140), size=15)
plt.annotate("model = LSTM(50), timestep=60", xy=(100, 146), size=15)
plt.annotate("epochs=40", xy=(100, 143), size=15)
plt.title("Apple: One-day-ahead prediction. Original scale", size=20)
plt.legend()
plt.show();
Save the trained model#
model_01.save('../Datos/modelo_Apple_1_dia.h5')
Confidence intervals. TO DO#
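Since the network already uses Dropout layers, one candidate approach (a sketch of an idea, not implemented in this chapter) is Monte Carlo dropout: keep dropout active at prediction time and run T stochastic forward passes, then take percentiles of the resulting predictions as an interval. The NumPy sketch below fakes the T stochastic passes with Gaussian noise around a hypothetical series, only to show the interval computation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for T stochastic forward passes with dropout active at test time;
# samples[t, i] is the t-th prediction for test point i.
T, n = 200, 543
point_truth = np.linspace(0.6, 0.7, n)                 # hypothetical true series
samples = point_truth + rng.normal(0.0, 0.02, (T, n))

# Point prediction and a 95% prediction interval from the MC samples
y_hat = samples.mean(axis=0)
lower = np.percentile(samples, 2.5, axis=0)
upper = np.percentile(samples, 97.5, axis=0)

coverage = np.mean((point_truth >= lower) & (point_truth <= upper))
```

In Keras the stochastic passes would be obtained with something like `np.stack([model_01(X_test, training=True).numpy() for _ in range(T)])`, since `training=True` keeps the Dropout layers active at inference.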
Retrieve the model configuration#
model_01.get_config()
{'name': 'series_LSTM_model',
'layers': [{'class_name': 'InputLayer',
'config': {'batch_input_shape': (None, 60, 1),
'dtype': 'float32',
'sparse': False,
'ragged': False,
'name': 'input_1'},
'name': 'input_1',
'inbound_nodes': []},
{'class_name': 'Dropout',
'config': {'name': 'Dropout_01',
'trainable': True,
'dtype': 'float32',
'rate': 0.2,
'noise_shape': None,
'seed': None},
'name': 'Dropout_01',
'inbound_nodes': [[['input_1', 0, 0, {}]]]},
{'class_name': 'LSTM',
'config': {'name': 'LSTM_layer',
'trainable': True,
'dtype': 'float32',
'return_sequences': False,
'return_state': False,
'go_backwards': False,
'stateful': False,
'unroll': False,
'time_major': False,
'units': 50,
'activation': 'tanh',
'recurrent_activation': 'sigmoid',
'use_bias': True,
'kernel_initializer': {'class_name': 'GlorotUniform',
'config': {'seed': None},
'shared_object_id': 2},
'recurrent_initializer': {'class_name': 'Orthogonal',
'config': {'gain': 1.0, 'seed': None},
'shared_object_id': 3},
'bias_initializer': {'class_name': 'Zeros',
'config': {},
'shared_object_id': 4},
'unit_forget_bias': True,
'kernel_regularizer': None,
'recurrent_regularizer': None,
'bias_regularizer': None,
'activity_regularizer': None,
'kernel_constraint': None,
'recurrent_constraint': None,
'bias_constraint': None,
'dropout': 0.0,
'recurrent_dropout': 0.0,
'implementation': 2},
'name': 'LSTM_layer',
'inbound_nodes': [[['Dropout_01', 0, 0, {}]]]},
{'class_name': 'Dropout',
'config': {'name': 'Dropout_02',
'trainable': True,
'dtype': 'float32',
'rate': 0.2,
'noise_shape': None,
'seed': None},
'name': 'Dropout_02',
'inbound_nodes': [[['LSTM_layer', 0, 0, {}]]]},
{'class_name': 'Dense',
'config': {'name': 'dense',
'trainable': True,
'dtype': 'float32',
'units': 1,
'activation': 'linear',
'use_bias': True,
'kernel_initializer': {'class_name': 'GlorotUniform',
'config': {'seed': None}},
'bias_initializer': {'class_name': 'Zeros', 'config': {}},
'kernel_regularizer': None,
'bias_regularizer': None,
'activity_regularizer': None,
'kernel_constraint': None,
'bias_constraint': None},
'name': 'dense',
'inbound_nodes': [[['Dropout_02', 0, 0, {}]]]}],
'input_layers': [['input_1', 0, 0]],
'output_layers': [['dense', 0, 0]]}
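Because `get_config()` returns an ordinary nested dict, the architecture can be inspected programmatically. A small sketch over a pared-down copy of the structure printed above (most fields omitted for brevity):

```python
# Pared-down copy of the printed configuration (only a few fields kept)
config = {
    "name": "series_LSTM_model",
    "layers": [
        {"class_name": "InputLayer", "config": {"name": "input_1"}},
        {"class_name": "Dropout",    "config": {"name": "Dropout_01", "rate": 0.2}},
        {"class_name": "LSTM",       "config": {"name": "LSTM_layer", "units": 50}},
        {"class_name": "Dropout",    "config": {"name": "Dropout_02", "rate": 0.2}},
        {"class_name": "Dense",      "config": {"name": "dense", "units": 1}},
    ],
}

layer_classes = [layer["class_name"] for layer in config["layers"]]
dropout_rates = [layer["config"]["rate"]
                 for layer in config["layers"]
                 if layer["class_name"] == "Dropout"]
```

In Keras, `Model.from_config(config)` rebuilds the same architecture from the full dict, without the trained weights.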
Create the bidirectional LSTM model#
# shapes
input_shape = (X_train.shape[1], X_train.shape[2])
units_2 = 50   # the summary below was produced with 50 units per direction
dropout = 0.2
# layers
inputs = Input(input_shape)
x = Dropout(dropout, name='Dropout_01')(inputs)
x = Bidirectional(LSTM(units_2, return_sequences=True, dropout=dropout,
                       recurrent_dropout=dropout))(x)
x = Bidirectional(LSTM(units_2 // 4, dropout=dropout,
                       recurrent_dropout=dropout))(x)
x = Dropout(dropout, name='Dropout_02')(x)
outputs = Dense(1)(x)
# model
model_02 = Model(inputs=inputs, outputs=outputs, name='series_LSTM_model')
model_02.summary()
Model: "series_LSTM_model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 60, 1)] 0
Dropout_01 (Dropout) (None, 60, 1) 0
bidirectional (Bidirectiona (None, 60, 100) 20800
l)
bidirectional_1 (Bidirectio (None, 24) 10848
nal)
Dropout_02 (Dropout) (None, 24) 0
dense_1 (Dense) (None, 1) 25
=================================================================
Total params: 31,673
Trainable params: 31,673
Non-trainable params: 0
_________________________________________________________________
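The parameter counts in the summary can be verified by hand: one LSTM direction has four gates, each with an input kernel, a recurrent kernel and a bias, i.e. 4·((n_in + n_units)·n_units + n_units) weights, and `Bidirectional` doubles that. A quick check:

```python
# Weights of one LSTM direction: 4 gates, each with input kernel,
# recurrent kernel and bias: 4 * ((n_in + n_units) * n_units + n_units)
def lstm_params(n_in, n_units):
    return 4 * ((n_in + n_units) * n_units + n_units)

first = 2 * lstm_params(1, 50)      # first Bidirectional layer: input dim 1, 50 units
second = 2 * lstm_params(100, 12)   # input is 2 * 50 = 100, units 50 // 4 = 12
dense = 24 * 1 + 1                  # Dense(1) on the 24-dim bidirectional output
```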
Compile#
The model is compiled with the Adam optimizer (learning rate 0.001), the MSE loss function, and MAE as an additional metric.
model_02.compile(loss='mean_squared_error',
                 optimizer=Adam(0.001), metrics=["mae"])
tf.random.set_seed(500)
history = model_02.fit(
    X_train, y_train,
    epochs=10,
    batch_size=64,
    validation_split=0.1,
    verbose=1,
    shuffle=False
)
Epoch 1/10
34/34 [==============================] - 20s 179ms/step - loss: 0.0047 - mae: 0.0517 - val_loss: 0.0167 - val_mae: 0.1245
Epoch 2/10
34/34 [==============================] - 4s 133ms/step - loss: 0.0077 - mae: 0.0727 - val_loss: 0.0091 - val_mae: 0.0920
Epoch 3/10
34/34 [==============================] - 5s 151ms/step - loss: 0.0030 - mae: 0.0386 - val_loss: 0.0019 - val_mae: 0.0379
Epoch 4/10
...
plt.plot(history.history['loss'], label='loss_train')
plt.plot(history.history['val_loss'], label='loss_val')
plt.plot(history.history['mae'], label='mae_train')
plt.plot(history.history['val_mae'], label='mae_val')
plt.legend();
Predictions#
Prepare the validation data#
X_test.shape
(543, 60, 1)
Compute the predictions#
# predictions (still in the (0, 1) scale)
prediction2 = model_02.predict(X_test)
17/17 [==============================] - 2s 26ms/step
print(prediction2.shape)
print(y_test.shape)
(543, 1)
(543,)
Remove extra dimensions for the plots#
y_train_p2 = y_train                          # already 1-D
y_test_p2 = y_test                            # already 1-D
y_pred_p2 = np.squeeze(prediction2, axis=-1)  # (543, 1) -> (543,)
print(y_train_p2.shape)
print(y_test_p2.shape)
print(y_pred_p2.shape)
k = 0
for i, j in zip(y_test_p2, y_pred_p2):
    print(i, j, i - j)
    k += 1
    if k == 10:
        break
(2355,)
(543,)
(543,)
0.6783307719588605 0.61679226 0.061538510358366105
0.6719470386570517 0.61782646 0.054120576865059555
0.6727745596406196 0.6188961 0.05387843289348826
0.6698782361981322 0.6195038 0.050374440097515794
0.6474169523584349 0.62001574 0.027401211963842598
0.6435157820073296 0.62032115 0.02319462741290823
0.6412105449816764 0.61981595 0.021394599356371224
0.6212318240926825 0.61928403 0.0019477903173529265
0.6321078141624307 0.6189245 0.013183315604386237
0.6292114907199433 0.61821944 0.010992055504946974
Plot of the predictions#
plt.plot(np.arange(0, len(y_train_p2)), y_train_p2, 'g', label="history")
plt.plot(np.arange(len(y_train_p2), len(y_train_p2) + len(y_test_p2)), y_test_p2, marker='.', label="true")
plt.plot(np.arange(len(y_train_p2), len(y_train_p2) + len(y_test_p2)), y_pred_p2, 'r', label="predicted")
# plt.ylabel('Value')
plt.xlabel('Time Step')
plt.title("Apple: Closing price history. Scale (0,1)", size=20)
plt.legend()
plt.show();
Back to the original scale#
y_pred_or2 = scaler.inverse_transform(y_pred_p2.reshape(-1,1))
y_test_or2 = scaler.inverse_transform(y_test_p2.reshape(-1,1))
k = 0
for i, j in zip(y_test_or2, y_pred_or2):
    print(i, j, i - j)
    k += 1
    if k == 10:
        break
[122.] [111.58891] [10.41108704]
[120.92] [111.76388] [9.15612213]
[121.06] [111.94485] [9.11515289]
[120.57] [112.04765] [8.5223468]
[116.77] [112.13426] [4.63573792]
[116.11] [112.18593] [3.92407166]
[115.72] [112.10046] [3.61954376]
[112.34] [112.01047] [0.32953247]
[114.18] [111.949646] [2.230354]
[113.69] [111.83036] [1.85963959]
rmsLSTM = np.sqrt(np.mean(np.power(y_pred_or2 - y_test_or2, 2)))
print(rmsLSTM)
16.754417648439286
plt.plot(np.arange(0, len(y_test_or2)), y_test_or2, marker='.', label="true")
plt.plot(np.arange(0, len(y_test_or2)), y_pred_or2, marker='+', label="predicted")
plt.xlabel('Time Step')
plt.annotate("rms = " + str(round(rmsLSTM, 2)), xy=(100, 140), size=15)
plt.annotate("model = BiLSTM, timestep=60", xy=(100, 146), size=15)
plt.annotate("epochs=10", xy=(100, 143), size=15)
plt.title("Apple: One-day-ahead prediction. Original scale", size=20)
plt.legend()
plt.show();
References#
Introducción a Redes LSTM
Time Series Forecasting with LSTMs using TensorFlow 2 and Keras in Python
Ralf C. Staudemeyer and Eric Rothstein Morris, Understanding LSTM: a tutorial into Long Short-Term Memory Recurrent Neural Networks, arXiv, September 2019
Andrej Karpathy, The Unreasonable Effectiveness of Recurrent Neural Networks
Anton Lucanus, Making Automation More Efficient by Learning from Historical Trade Data, January 7, 2020
https://www.youtube.com/watch?v=2BrpKpWwT2A&list=PLQVvvaa0QuDcOdF96TBtRtuQksErCEBYZ&index=1
https://towardsdatascience.com/using-lstms-for-stock-market-predictions-tensorflow-9e83999d4653
https://github.com/llSourcell/Reinforcement_Learning_for_Stock_Prediction/blob/master/README.md