DeepARModel

class DeepARModel(batch_size: int = 64, context_length: Optional[int] = None, max_epochs: int = 10, gpus: Union[int, List[int]] = 0, gradient_clip_val: float = 0.1, learning_rate: Optional[List[float]] = None, cell_type: str = 'LSTM', hidden_size: int = 10, rnn_layers: int = 2, dropout: float = 0.1, trainer_kwargs: Optional[Dict[str, Any]] = None)[source]

Bases: etna.models.base.Model

Wrapper for pytorch_forecasting.models.deepar.DeepAR.

Notes

We save pytorch_forecasting.data.timeseries.TimeSeriesDataSet in the instance to use it in the model. This is not the intended pattern for using Transforms and TSDataset.

Initialize DeepAR wrapper.

Parameters
  • batch_size (int) – Batch size.

  • context_length (Optional[int]) – Max encoder length; if None, the max encoder length is set to two horizons.

  • max_epochs (int) – Max epochs.

  • gpus (Union[int, List[int]]) – 0 to train on CPU, or a list of device indices (e.g. [0]) to select specific GPUs.

  • gradient_clip_val (float) – Gradient clipping by norm is applied; set to 0 to disable clipping.

  • learning_rate (Optional[List[float]]) – Learning rate.

  • cell_type (str) – One of ‘LSTM’, ‘GRU’.

  • hidden_size (int) – Hidden size of the network, typically in the range 8 to 512.

  • rnn_layers (int) – Number of RNN layers.

  • dropout (float) – Dropout rate.

  • trainer_kwargs (Optional[Dict[str, Any]]) – Additional arguments for pytorch_lightning Trainer.
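As a hedged illustration of the context_length default described above (the helper below is hypothetical and not part of the library): when context_length is None, the encoder length becomes two horizons.

```python
from typing import Optional


def resolve_context_length(context_length: Optional[int], horizon: int) -> int:
    """Mirror the documented default: None -> 2 * horizon (illustrative helper)."""
    return 2 * horizon if context_length is None else context_length


# With a 7-step horizon and no explicit context_length, the encoder sees 14 steps.
print(resolve_context_length(None, 7))  # -> 14
print(resolve_context_length(30, 7))    # -> 30
```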


Methods

fit(ts)

Fit model.

forecast(ts)

Predict future.

fit(ts: etna.datasets.tsdataset.TSDataset) → etna.models.nn.deepar.DeepARModel[source]

Fit model.

Parameters

ts (etna.datasets.tsdataset.TSDataset) – TSDataset to fit.

Return type

DeepARModel

forecast(ts: etna.datasets.tsdataset.TSDataset) → etna.datasets.tsdataset.TSDataset[source]

Predict future.

Parameters

ts (etna.datasets.tsdataset.TSDataset) – TSDataset to forecast.

Returns

TSDataset with predictions.

Return type

TSDataset