
Understanding DeepAR

The DeepAR forecasting algorithm is a built-in SageMaker algorithm that forecasts one-dimensional (scalar) time series using a Recurrent Neural Network (RNN).

Traditional time series algorithms, such as ARIMA and ETS, are designed to fit one model per time series. For example, if you want to forecast sales per region, you might have to create one model per region, since each region may have its own sales behaviors. DeepAR, on the other hand, allows you to train a single model across many related time series, which is a huge advantage for more complex use cases.

The input data for DeepAR, as expected, is one or more time series. Each of these time series can be associated with the following (see the sketch after this list):

  • A vector of static (time-independent) categorical features, controlled by the cat field
  • A vector of dynamic (time-dependent) time series, controlled by dynamic_feat
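To make this concrete, here is a minimal Python sketch that writes two training records in DeepAR's JSON Lines format. The dates, values, and feature meanings are hypothetical; only the start and target fields are required, while cat and dynamic_feat are optional:

```python
import json

# Two hypothetical daily series (e.g., sales for two regions). Only "start"
# and "target" are required; "cat" and "dynamic_feat" are optional.
records = [
    {
        "start": "2024-01-01 00:00:00",
        "target": [12.0, 15.0, "NaN", 14.0, 18.0],  # "NaN" encodes a missing value
        "cat": [0],                                 # static feature, e.g., region ID
        "dynamic_feat": [[1, 0, 0, 1, 0]],          # dynamic feature, e.g., promotion flag per day
    },
    {
        "start": "2024-01-01 00:00:00",
        "target": [7.0, 9.0, 8.0, 11.0, 10.0],
        "cat": [1],
        "dynamic_feat": [[0, 0, 1, 0, 0]],
    },
]

# DeepAR expects one JSON object per line (JSON Lines)
with open("train.json", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```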

Important note

Note that the ability to train and make predictions on top of multiple time series is closely tied to the vector of static categorical features. When defining the time series that DeepAR will train on, you can set categorical variables to specify which group each time series belongs to.

Two of the main hyperparameters of DeepAR are context_length, which controls how far into the past the model can see during the training process, and prediction_length, which controls how far into the future the model will output predictions.
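As an illustration, the following Python sketch configures a DeepAR training job through the SageMaker Python SDK. The IAM role, S3 paths, instance type, and hyperparameter values are assumptions made for the example:

```python
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Hypothetical IAM role and S3 locations -- replace with your own
role = "arn:aws:iam::123456789012:role/MySageMakerRole"
train_path = "s3://my-bucket/deepar/train/"
output_path = "s3://my-bucket/deepar/output/"

# Retrieve the DeepAR container image for the current region
image = image_uris.retrieve("forecasting-deepar", session.boto_region_name)

estimator = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.c5.2xlarge",
    output_path=output_path,
    sagemaker_session=session,
)

# context_length: how far into the past the model looks during training
# prediction_length: how far into the future the model forecasts
estimator.set_hyperparameters(
    time_freq="D",           # daily time series
    context_length="30",     # look back 30 days
    prediction_length="14",  # forecast 14 days ahead
    epochs="100",
)

# estimator.fit({"train": train_path})  # launches the training job
```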

DeepAR can also handle missing values – that is, gaps in the time series (encoded as "NaN" in the target field, as shown earlier). A very interesting functionality of DeepAR is its ability to create derived features from time series. These derived features, which are created from basic time frequencies, help the algorithm learn time-dependent patterns. Table 6.7 shows all the derived features created by DeepAR, according to the frequency of the time series it is trained on.

| Frequency of the time series | Derived features |
| --- | --- |
| Minute | Minute of hour, hour of day, day of week, day of month, day of year |
| Hour | Hour of day, day of week, day of month, day of year |
| Day | Day of week, day of month, day of year |
| Week | Day of month, week of year |
| Month | Month of year |

Table 6.7 – DeepAR derived features per frequency of time series

You have now completed this section about forecasting models. Next, you will take a look at the last supervised learning algorithm covered here – the Object2Vec algorithm.

Object2Vec

Object2Vec is a built-in SageMaker algorithm that generalizes the well-known Word2Vec algorithm. Object2Vec is used to create embedding spaces for high-dimensional objects. These embedding spaces are, by definition, compressed representations of the original objects and can be used for multiple purposes, such as feature engineering or object comparison.

Figure 6.12 – A visual example of an embedding space

Figure 6.12 illustrates what is meant by an embedding space. The first and last layers of the neural network have the same size as the input vector, so the model learns to map the input data back to itself.

As you move on to the internal layers of the model, the data is compressed more and more until it reaches the layer in the middle of this architecture, known as the embedding layer. On that particular layer, you have a smaller vector, which aims to be an accurate and compressed representation of the high-dimensional original vector from the first layer. The sketch below illustrates this idea.
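The following Keras sketch is a generic autoencoder built to mirror the structure in Figure 6.12: it compresses a hypothetical 512-dimensional input down to a 16-dimensional embedding and reconstructs it. It only illustrates the embedding concept – it is not the actual Object2Vec implementation – and all layer sizes are assumptions:

```python
import numpy as np
from tensorflow import keras

input_dim = 512     # hypothetical size of the original object vector
embedding_dim = 16  # size of the compressed embedding layer

# Encoder: progressively compress the input down to the embedding layer
inputs = keras.Input(shape=(input_dim,))
x = keras.layers.Dense(128, activation="relu")(inputs)
embedding = keras.layers.Dense(embedding_dim, activation="relu", name="embedding")(x)

# Decoder: expand the embedding back to the original dimensionality
x = keras.layers.Dense(128, activation="relu")(embedding)
outputs = keras.layers.Dense(input_dim)(x)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train the network to reconstruct its own input
data = np.random.rand(1000, input_dim).astype("float32")
autoencoder.fit(data, data, epochs=5, batch_size=32, verbose=0)

# The middle layer now yields compressed 16-dimensional representations
encoder = keras.Model(inputs, embedding)
embeddings = encoder.predict(data, verbose=0)
print(embeddings.shape)  # (1000, 16)
```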

With this, you just completed the first section about machine learning algorithms in AWS. Coming up next, you will take a look at some unsupervised algorithms.