Learn which R packages and data sets you need by reviewing Part I, Part II, Part III, Part IV, Part V, Part VI, Part VII and Part VIII of this series.
Here is how the training X (input variables) data looks (the unnamed first column is the row index):
. | ac_9_ac_9 | acf_features_x_acf1 | acf_features_x_acf10 | acf_features_diff1_acf1 | acf_features_diff1_acf10 | acf_features_diff2_acf1 | acf_features_diff2_acf10 | ARCH.LM | autocorr_features_embed2_incircle_1 | autocorr_features_embed2_incircle_2 | autocorr_features_ac_9 | autocorr_features_firstmin_ac | autocorr_features_trev_num | autocorr_features_motiftwo_entro3 | autocorr_features_walker_propcross | binarize_mean_binarize_mean | binarize_mean_NA | compengine_embed2_incircle_1 | compengine_embed2_incircle_2 | compengine_ac_9 | compengine_firstmin_ac | compengine_trev_num | compengine_motiftwo_entro3 | compengine_walker_propcross | compengine_localsimple_mean1 | compengine_localsimple_lfitac | compengine_sampen_first | compengine_std1st_der | compengine_spreadrandomlocal_meantaul_50 | compengine_spreadrandomlocal_meantaul_ac2 | compengine_histogram_mode_10 | compengine_outlierinclude_mdrmd | compengine_fluctanal_prop_r1 | crossing_points | dist_features_histogram_mode_10 | dist_features_outlierinclude_mdrmd | embed2_incircle | entropy | firstmin_ac | firstzero_ac | flat_spots | fluctanal_prop_r1_fluctanal_prop_r1 | arch_acf | garch_acf | arch_r2 | garch_r2 | histogram_mode | alpha | beta | hurst | hw_parameters_hw_parameters | hw_parameters_NA | localsimple_taures | lumpiness | max_kl_shift | time_kl_shift | max_level_shift | time_level_shift | max_var_shift | time_var_shift | motiftwo_entro3 | nonlinearity | outlierinclude_mdrmd | x_pacf5 | diff1x_pacf5 | diff2x_pacf5 | pred_features_localsimple_mean1 | pred_features_localsimple_lfitac | pred_features_sampen_first | sampen_first_sampen_first | sampenc | scal_features_fluctanal_prop_r1 | spreadrandomlocal_meantaul | stability | station_features_std1st_der | station_features_spreadrandomlocal_meantaul_50 | station_features_spreadrandomlocal_meantaul_ac2 | std1st_der_std1st_der | nperiods | seasonal_period | trend | spike | linearity | curvature | e_acf1 | e_acf10 | trev_num | tsfeatures_frequency | tsfeatures_nperiods | tsfeatures_seasonal_period | tsfeatures_trend | tsfeatures_spike | tsfeatures_linearity | tsfeatures_curvature | tsfeatures_e_acf1 | tsfeatures_e_acf10 | tsfeatures_entropy | tsfeatures_x_acf1 | tsfeatures_x_acf10 | tsfeatures_diff1_acf1 | tsfeatures_diff1_acf10 | tsfeatures_diff2_acf1 | tsfeatures_diff2_acf10 | unitroot_kpss | unitroot_pp | walker_propcross |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6801 | 0.0498492 | -0.0642025 | 0.0542648 | -0.4423482 | 0.2575236 | -0.5981303 | 0.4149592 | 0.0271444 | 0.4710425 | 0.7181467 | 0.0498492 | 2 | 0.8754566 | 2.057333 | 0.5598456 | 0 | 1 | 0.4710425 | 0.7181467 | 0.0498492 | 2 | 0.8754566 | 2.057333 | 0.5598456 | 1 | 1 | 1.704503 | 1.460466 | 1.33 | 1.00 | -0.50 | 0.1115385 | 0.8604651 | 139 | -0.50 | 0.1115385 | 0.4710425 | 0.9888208 | 2 | 1 | 3 | 0.8604651 | 0.0332257 | 0.0244434 | 0.0370423 | 0.0287773 | -0.50 | 0.0001000 | 0.0001000 | 0.5000458 | NA | NA | 1 | 0.7769640 | 3.827223 | 209 | 1.027671 | 131 | 3.254518 | 195 | 2.057333 | 0.0695918 | 0.1115385 | 0.0474059 | 0.5669070 | 1.0663179 | 1 | 1 | 1.704503 | 1.704503 | 1.704503 | 0.8604651 | 1.41 | 0.0639649 | 1.460466 | 1.42 | 1.00 | 1.460466 | 0 | 1 | 0.0069481 | 0.0000643 | -0.8628963 | 0.2636951 | -0.0719026 | 0.0587799 | 0.8754566 | 1 | 0 | 1 | 0.0069481 | 0.0000643 | -0.8628963 | 0.2636951 | -0.0719026 | 0.0587799 | 0.9888208 | -0.0642025 | 0.0542648 | -0.4423482 | 0.2575236 | -0.5981303 | 0.4149592 | 0.1777957 | -246.9618 | 0.5598456 |
4209 | -0.0037257 | -0.0166400 | 0.0302609 | -0.5444182 | 0.3391695 | -0.7025401 | 0.5898760 | 0.0369855 | 0.3976834 | 0.6409266 | -0.0037257 | 1 | 0.0772589 | 2.065480 | 0.5598456 | 1 | 1 | 0.3976834 | 0.6409266 | -0.0037257 | 1 | 0.0772589 | 2.065480 | 0.5598456 | 1 | 1 | 1.752028 | 1.427591 | 1.39 | 1.00 | -0.25 | -0.1000000 | 0.4651163 | 137 | -0.25 | -0.1000000 | 0.3976834 | 0.9866480 | 1 | 1 | 4 | 0.4651163 | 0.0328564 | 0.0286941 | 0.0369855 | 0.0347972 | -0.25 | 0.0008843 | 0.0008843 | 0.5000458 | NA | NA | 1 | 0.2267605 | 3.549229 | 215 | 1.390319 | 3 | 2.017745 | 143 | 2.065480 | 0.0236440 | -0.1000000 | 0.0060988 | 0.4859730 | 1.0685267 | 1 | 1 | 1.752028 | 1.752028 | 1.752028 | 0.4651163 | 1.49 | 0.0831999 | 1.427591 | 1.53 | 1.00 | 1.427591 | 0 | 1 | 0.0431696 | 0.0000288 | -0.6356332 | 1.0362897 | -0.0608160 | 0.0358936 | 0.0772589 | 1 | 0 | 1 | 0.0431696 | 0.0000288 | -0.6356332 | 1.0362897 | -0.0608160 | 0.0358936 | 0.9866480 | -0.0166400 | 0.0302609 | -0.5444182 | 0.3391695 | -0.7025401 | 0.5898760 | 0.0372919 | -268.4757 | 0.5598456 |
11168 | 0.0236704 | -0.0269749 | 0.0299079 | -0.4943006 | 0.2640054 | -0.6626027 | 0.4906038 | 0.1265569 | 0.4401544 | 0.6640927 | 0.0236704 | 2 | -0.4569401 | 2.075666 | 0.4633205 | 1 | 1 | 0.4401544 | 0.6640927 | 0.0236704 | 2 | -0.4569401 | 2.075666 | 0.4633205 | 1 | 1 | 1.709466 | 1.431144 | 1.52 | 1.00 | 0.25 | -0.0961538 | 0.1627907 | 122 | 0.25 | -0.0961538 | 0.4401544 | 0.9882937 | 2 | 1 | 4 | 0.1627907 | 0.1453674 | 0.1490540 | 0.1265569 | 0.1247021 | 0.25 | 0.0411075 | 0.0001000 | 0.5000458 | NA | NA | 1 | 0.3863291 | 2.834691 | 227 | 1.096209 | 123 | 2.760158 | 197 | 2.075666 | 0.1218026 | -0.0961538 | 0.0088598 | 0.4643608 | 1.0505751 | 1 | 1 | 1.709466 | 1.709466 | 1.709466 | 0.1627907 | 1.61 | 0.0691848 | 1.431144 | 1.50 | 1.00 | 1.431144 | 0 | 1 | 0.0134781 | 0.0000342 | -0.6468298 | -1.1770328 | -0.0419291 | 0.0376999 | -0.4569401 | 1 | 0 | 1 | 0.0134781 | 0.0000342 | -0.6468298 | -1.1770328 | -0.0419291 | 0.0376999 | 0.9882937 | -0.0269749 | 0.0299079 | -0.4943006 | 0.2640054 | -0.6626027 | 0.4906038 | 0.1743418 | -260.0758 | 0.4633205 |
5794 | -0.0007087 | 0.1194830 | 0.0616705 | -0.4062897 | 0.2206195 | -0.6016700 | 0.4137913 | 0.1556551 | 0.4806202 | 0.6782946 | -0.0007087 | 2 | -0.5797405 | 2.066637 | 0.4787645 | 1 | 0 | 0.4806202 | 0.6782946 | -0.0007087 | 2 | -0.5797405 | 2.066637 | 0.4787645 | 1 | 1 | 1.558307 | 1.328565 | 2.03 | 1.18 | -0.25 | -0.3000000 | 0.2325581 | 120 | -0.25 | -0.3000000 | 0.4806202 | 0.9815963 | 2 | 2 | 5 | 0.2325581 | 0.2198692 | 0.0941053 | 0.1406280 | 0.0756639 | -0.25 | 0.0125856 | 0.0001000 | 0.5477543 | NA | NA | 1 | 0.7772726 | 8.411092 | 48 | 1.573682 | 146 | 3.802986 | 149 | 2.066637 | 0.1381103 | -0.3000000 | 0.0193037 | 0.3959500 | 0.9255264 | 1 | 1 | 1.558307 | 1.558307 | 1.558307 | 0.2325581 | 1.98 | 0.1331827 | 1.328565 | 2.01 | 1.27 | 1.328565 | 0 | 1 | 0.0139233 | 0.0000358 | -0.8988748 | 0.9389128 | 0.1079346 | 0.0661260 | -0.5797405 | 1 | 0 | 1 | 0.0139233 | 0.0000358 | -0.8988748 | 0.9389128 | 0.1079346 | 0.0661260 | 0.9815963 | 0.1194830 | 0.0616705 | -0.4062897 | 0.2206195 | -0.6016700 | 0.4137913 | 0.1182423 | -224.0670 | 0.4787645 |
8693 | -0.0814496 | -0.0984498 | 0.1142883 | -0.4688008 | 0.3181153 | -0.6166136 | 0.4555893 | 0.1508792 | 0.4054054 | 0.6602317 | -0.0814496 | 2 | 0.3988370 | 2.060571 | 0.5250965 | 0 | 1 | 0.4054054 | 0.6602317 | -0.0814496 | 2 | 0.3988370 | 2.060571 | 0.5250965 | 1 | 1 | 1.651243 | 1.484233 | 1.19 | 1.00 | -0.50 | -0.0576923 | 0.3488372 | 136 | -0.50 | -0.0576923 | 0.4054054 | 0.9745764 | 2 | 1 | 6 | 0.3488372 | 0.0946062 | 0.0937635 | 0.1057152 | 0.1052409 | -0.50 | 0.0269522 | 0.0001000 | 0.5000458 | NA | NA | 1 | 0.5495742 | 7.853783 | 195 | 1.039641 | 191 | 4.458772 | 187 | 2.060571 | 0.1164590 | -0.0576923 | 0.0467339 | 0.5896074 | 1.1095330 | 1 | 1 | 1.651243 | 1.651243 | 1.651243 | 0.3488372 | 1.24 | 0.0998210 | 1.484233 | 1.35 | 1.00 | 1.484233 | 0 | 1 | 0.0033231 | 0.0000574 | 0.1887497 | 0.4564879 | -0.1022983 | 0.1171558 | 0.3988370 | 1 | 0 | 1 | 0.0033231 | 0.0000574 | 0.1887497 | 0.4564879 | -0.1022983 | 0.1171558 | 0.9745764 | -0.0984498 | 0.1142883 | -0.4688008 | 0.3181153 | -0.6166136 | 0.4555893 | 0.0391658 | -262.9010 | 0.5250965 |
1073 | -0.1253873 | 0.1511912 | 0.0608605 | -0.3832523 | 0.2048003 | -0.5832067 | 0.3861283 | 0.0876692 | 0.4031008 | 0.6356589 | -0.1253873 | 2 | 0.2463431 | 2.061698 | 0.4594595 | 1 | 1 | 0.4031008 | 0.6356589 | -0.1253873 | 2 | 0.2463431 | 2.061698 | 0.4594595 | 1 | 1 | 1.763381 | 1.304792 | 2.44 | 1.13 | -0.25 | 0.1230769 | 0.1395349 | 121 | -0.25 | 0.1230769 | 0.4031008 | 0.9867903 | 2 | 2 | 4 | 0.1395349 | 0.0779468 | 0.0618625 | 0.0695878 | 0.0601294 | -0.25 | 0.0778294 | 0.0001000 | 0.5663347 | NA | NA | 1 | 0.3151884 | 7.528904 | 185 | 2.069230 | 177 | 2.340804 | 169 | 2.061698 | 0.0279574 | 0.1230769 | 0.0310540 | 0.3527793 | 0.8978003 | 1 | 1 | 1.763381 | 1.763381 | 1.763381 | 0.1395349 | 2.45 | 0.0816322 | 1.304792 | 2.35 | 1.23 | 1.304792 | 0 | 1 | 0.0213244 | 0.0000306 | -0.5577693 | 0.6111726 | 0.1329904 | 0.0758345 | 0.2463431 | 1 | 0 | 1 | 0.0213244 | 0.0000306 | -0.5577693 | 0.6111726 | 0.1329904 | 0.0758345 | 0.9867903 | 0.1511912 | 0.0608605 | -0.3832523 | 0.2048003 | -0.5832067 | 0.3861283 | 0.0849681 | -208.4546 | 0.4594595 |
Here is how the training Y (target variable) data looks:
y |
---|
1 |
0 |
1 |
0 |
0 |
1 |
I set the data up for an XGBoost model:
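The setup code itself isn't reproduced here, but a minimal sketch looks like this (assuming `train_x`/`val_x` hold the feature columns previewed above and `train_y`/`val_y` hold the 0/1 labels; these names are my placeholders, while `dtrain`/`dval` are the objects the grid search below expects):

library(xgboost)

# Wrap the training and validation sets in xgb.DMatrix objects; these are
# the dtrain and dval objects referenced by the grid search below.
# (train_x/val_x: feature matrices as previewed above; train_y/val_y: 0/1 labels.)
dtrain <- xgb.DMatrix(data = as.matrix(train_x), label = train_y)
dval   <- xgb.DMatrix(data = as.matrix(val_x),   label = val_y)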
I create a grid search in order to search over a parameter space and locate the optimal parameters for the data set. It needs a little more work, but it's a pretty good starting point. I can just add values to the `expand.grid` function. That is, say I want to search over more tree depths: I can extend `max_depth = c(5, 8, 14)` with more values, such as `max_depth = c(5, 8, 14, 1, 2, 3, 4, 6, 7)`.

Note: adding values to the grid search increases computational time multiplicatively (exponentially in the number of parameters tuned), because the grid is the Cartesian product of all the parameter values and the model has to evaluate every possible combination. That is, setting `eta = c(0.1)` and `max_depth = c(5)` gives the optimal parameters from one iteration/loop through the training model, i.e. `eta = 0.1` mapped onto `max_depth = 5`. Setting `eta = c(0.1, 0.3)` with `max_depth = c(5)` would map `eta = 0.1` onto `max_depth = 5` and `eta = 0.3` onto `max_depth = 5`. If I add another value, such that `eta = c(0.1, 0.3, 0.4)`, then all 3 of these values are mapped onto `max_depth = c(5)`. Adding values to the `max_depth` parameter then multiplies the number of combinations again (see the toy example below). Given how many parameters there are to optimise in an XGBoost model, this can drastically increase computational cost. Thus, understanding the statistics behind the models in Machine Learning is important, both for deciding which parameters are worth tuning and for avoiding getting stuck in a local minimum, a risk for any greedy algorithm using gradient descent optimisation.
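As a quick illustration of how the grid grows (a toy example of my own, not part of the original notebook):

# expand.grid builds the Cartesian product, so each added value multiplies
# the number of parameter combinations the cross-validation must evaluate.
nrow(expand.grid(eta = c(0.1), max_depth = c(5)))                   # 1 combination
nrow(expand.grid(eta = c(0.1, 0.3), max_depth = c(5)))              # 2 combinations
nrow(expand.grid(eta = c(0.1, 0.3, 0.4), max_depth = c(5, 8, 14))) # 9 combinations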
######################################################################
################# XGBoost Grid Search to locate Optimal Parameters ###
##############################################################################################################################
# NOTE: This section was taken from the first chapter of my PhD, where I needed to search over a parameter space to locate
# the optimal parameters - I have just adapted it for this problem of Time Series Classification.
# It's simple enough to add parameters and different values - I just optimise a few important parameters based on domain
# knowledge of the XGBoost model for this task, i.e. depth and eta are quite important in gradient boosting.
# 1) I create a "grid" with different parameter values or combinations of parameter values.
# 2) I apply cross-validation over the parameter space to find the optimal values for the XGBoost model.
# 3) I print the model parameters which give the best train / (in-sample test) results in a data table.
##############################################################################################################################
# Grid Search Parameters:
# 1)
searchGridSubCol <- expand.grid(subsample = c(1),            # Range (0,1], default = 1; setting it to 0.5 will help prevent overfitting
                                colsample_bytree = c(1),     # Range (0,1], default = 1
                                max_depth = c(5, 8, 14),     # Range (0, inf], default = 6
                                min_child = c(1),            # Range (0, inf], default = 1
                                eta = c(0.1, 0.05, 0.3),     # Range (0,1], default = 0.3
                                gamma = c(0),                # Range (0, inf], default = 0
                                lambda = c(1),               # Default = 1; L2 regularisation on weights, the higher the more conservative the model
                                alpha = c(0),                # Default = 0; L1 regularisation on weights, the higher the more conservative the model
                                max_delta_step = c(0),       # Range (0, inf], default = 0 (helpful for logistic regression when the class is extremely imbalanced; setting it to a value of 1-10 may help control the update)
                                colsample_bylevel = c(1)     # Range (0,1], default = 1
)
ntrees <- 200
nfold <- 10 # I use nfold = 10, which is probably too many folds; 5 should be sufficient.
watchlist <- list(train = dtrain, test = dval)
# 2)
system.time(
  AUCHyperparameters <- apply(searchGridSubCol, 1, function(parameterList){
    # Extract the parameters to test (note: the grid column is "subsample")
    currentSubsampleRate <- parameterList[["subsample"]]
    currentColsampleRate <- parameterList[["colsample_bytree"]]
    currentDepth <- parameterList[["max_depth"]]
    currentEta <- parameterList[["eta"]]
    currentMinChild <- parameterList[["min_child"]]
    gamma <- parameterList[["gamma"]]
    lambda <- parameterList[["lambda"]]
    alpha <- parameterList[["alpha"]]
    max_delta_step <- parameterList[["max_delta_step"]]
    colsample_bylevel <- parameterList[["colsample_bylevel"]]
    xgboostModelCV <- xgb.cv(data = dtrain,
                             nrounds = ntrees,
                             nfold = nfold,
                             showsd = TRUE,
                             metrics = c("auc", "logloss", "error"),
                             verbose = TRUE,
                             "eval_metric" = c("auc", "logloss", "error"),
                             "objective" = "binary:logistic", # outputs a probability; "binary:logitraw" outputs the score before the logistic transformation
                             "max_depth" = currentDepth,
                             "eta" = currentEta,
                             "gamma" = gamma,
                             "lambda" = lambda,
                             "alpha" = alpha,
                             "subsample" = currentSubsampleRate,
                             "colsample_bytree" = currentColsampleRate,
                             print_every_n = 50, # print every 50 trees to reduce the output printed
                             "min_child_weight" = currentMinChild,
                             booster = "gbtree", # booster = "dart" can help improve accuracy
                             early_stopping_rounds = 10,
                             watchlist = watchlist,
                             seed = 1234)
    xvalidationScores <- as.data.frame(xgboostModelCV$evaluation_log)
    train_auc_mean <- tail(xvalidationScores$train_auc_mean, 1)
    test_auc_mean <- tail(xvalidationScores$test_auc_mean, 1)
    train_logloss_mean <- tail(xvalidationScores$train_logloss_mean, 1)
    test_logloss_mean <- tail(xvalidationScores$test_logloss_mean, 1)
    train_error_mean <- tail(xvalidationScores$train_error_mean, 1)
    test_error_mean <- tail(xvalidationScores$test_error_mean, 1)
    # Return the final CV metrics, the evaluation log and the parameter values tested.
    c(train_auc_mean, test_auc_mean, train_logloss_mean, test_logloss_mean, train_error_mean, test_error_mean, xvalidationScores, currentSubsampleRate, currentColsampleRate, currentDepth, currentEta, gamma, lambda, alpha, max_delta_step, colsample_bylevel, currentMinChild)
  })
)
The output of the grid search can be assembled into a tidy data frame using the following code. However, I did not save this output to file and therefore cannot read it back in; you can view the output in the original Jupyter Notebook, cell In [49].
# 3)
output <- as.data.frame(t(sapply(AUCHyperparameters, '[', c(1:6, 20:29)))) # keep the 6 CV metrics (1:6) and the 10 parameter values (20:29), skipping the evaluation-log columns in between
varnames <- c("TrainAUC", "TestAUC", "TrainLogloss", "TestLogloss", "TrainError", "TestError", "SubSampRate", "ColSampRate", "Depth", "eta", "gamma", "lambda", "alpha", "max_delta_step", "col_sample_bylevel", "currentMinChild")
colnames(output) <- varnames
data.table(output)
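With the table built, the best combination can be read off by sorting on the out-of-fold AUC (a small usage example of my own, not from the original notebook):

# The columns come back as lists because the evaluation log was part of the
# returned vector, so unlist them before ranking by test AUC, best first.
library(data.table)
results <- as.data.table(lapply(output, unlist))
results[order(-TestAUC)]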
According to the results at the time, the optimal parameters were:

- ntrees = 95
- eta = 0.1
- max_depth = 5

with the other parameters left at their default settings for simplicity.
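To make this concrete, here is a minimal sketch of fitting a final model with those parameters (my own illustration, assuming the `dtrain` and `dval` objects from the setup above):

# Fit the final model using the parameters selected by the grid search.
best_model <- xgb.train(params = list(objective = "binary:logistic",
                                      eval_metric = "auc",
                                      eta = 0.1,
                                      max_depth = 5),
                        data = dtrain,
                        nrounds = 95,
                        watchlist = list(train = dtrain, test = dval),
                        print_every_n = 10)
# Predicted class probabilities for the validation set:
val_pred <- predict(best_model, dval)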
Visit Matthew Smith – R Blog to download the complete R code and see additional details featured in this tutorial: https://lf0.com/post/synth-real-time-series/financial-time-series/
Disclosure: Interactive Brokers
Information posted on IBKR Campus that is provided by third-parties does NOT constitute a recommendation that you should contract for the services of that third party. Third-party participants who contribute to IBKR Campus are independent of Interactive Brokers and Interactive Brokers does not make any representations or warranties concerning the services offered, their past or future performance, or the accuracy of the information provided by the third party. Past performance is no guarantee of future results.
This material is from Matthew Smith - R Blog and is being posted with its permission. The views expressed in this material are solely those of the author and/or Matthew Smith - R Blog and Interactive Brokers is not endorsing or recommending any investment or trading discussed in the material. This material is not and should not be construed as an offer to buy or sell any security. It should not be construed as research or investment advice or a recommendation to buy, sell or hold any security or commodity. This material does not and is not intended to take into account the particular financial conditions, investment objectives or requirements of individual customers. Before acting on this material, you should consider whether it is suitable for your particular circumstances and, as necessary, seek professional advice.