TFTModel¶
- class TFTModel(max_epochs: int = 10, gpus: Union[int, List[int]] = 0, gradient_clip_val: float = 0.1, learning_rate: Optional[List[float]] = None, batch_size: int = 64, context_length: Optional[int] = None, hidden_size: int = 16, lstm_layers: int = 1, attention_head_size: int = 4, dropout: float = 0.1, hidden_continuous_size: int = 8, trainer_kwargs: Optional[Dict[str, Any]] = None, *args, **kwargs)[source]¶
Bases:
etna.models.base.Model
Wrapper for pytorch_forecasting.models.temporal_fusion_transformer.TemporalFusionTransformer.

Notes

We save pytorch_forecasting.data.timeseries.TimeSeriesDataSet in the instance to use it in the model. This is not the intended pattern for using Transforms and TSDataset.

Initialize TFT wrapper.
- Parameters
batch_size (int) – Batch size.
context_length (Optional[int]) – Maximum encoder length; if None, the encoder length is set to two horizons.
max_epochs (int) – Max epochs.
gpus (Union[int, List[int]]) – 0 to run on CPU, or a list of GPU indices [n_i] to select specific GPUs.
gradient_clip_val (float) – Value for gradient clipping by norm; set to 0 to disable clipping.
learning_rate (Optional[List[float]]) – Learning rate.
hidden_size (int) – Hidden size of the network; typically ranges from 8 to 512.
lstm_layers (int) – Number of LSTM layers.
attention_head_size (int) – Number of attention heads.
dropout (float) – Dropout rate.
hidden_continuous_size (int) – Hidden size for processing continuous variables.
trainer_kwargs (Optional[Dict[str, Any]]) – Additional arguments for pytorch_lightning Trainer.
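A minimal construction sketch is shown below. It uses only parameters from the signature above; the hyperparameter values are illustrative, not tuned recommendations, and the import path assumes the etna.models.nn layout referenced on this page.

```python
from etna.models.nn import TFTModel

# Hyperparameter values below are illustrative, not tuned recommendations.
model = TFTModel(
    max_epochs=30,            # upper bound on training epochs
    gpus=0,                   # 0 runs on CPU; a list like [0] selects GPUs by index
    gradient_clip_val=0.1,    # gradient clipping by norm; 0 disables clipping
    learning_rate=[0.001],    # note: passed as a list, per the signature above
    batch_size=64,
    context_length=None,      # None -> encoder length of two horizons
    hidden_size=16,
    lstm_layers=1,
    attention_head_size=4,
    dropout=0.1,
    hidden_continuous_size=8,
)
```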
Methods

fit(ts) – Fit model.

forecast(ts) – Predict future.
- fit(ts: etna.datasets.tsdataset.TSDataset) → etna.models.nn.tft.TFTModel [source]¶
Fit model.
- Parameters
ts (etna.datasets.tsdataset.TSDataset) – TSDataset to fit.
- Return type
etna.models.nn.tft.TFTModel
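A minimal fitting sketch, assuming a long-format dataframe with timestamp, segment, and target columns and etna's usual TSDataset constructor; any version-specific transforms this wrapper may require (see the Notes above) are omitted here.

```python
import pandas as pd
from etna.datasets import TSDataset

# Assumed input: long-format dataframe with "timestamp", "segment", "target" columns.
df = pd.read_csv("data.csv", parse_dates=["timestamp"])
ts = TSDataset(TSDataset.to_dataset(df), freq="D")  # freq="D" assumes daily data

model.fit(ts)  # returns the fitted TFTModel, so the call can be chained
```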
- forecast(ts: etna.datasets.tsdataset.TSDataset) → etna.datasets.tsdataset.TSDataset [source]¶
Predict future.
- Parameters
ts (etna.datasets.tsdataset.TSDataset) – TSDataset to forecast.
- Returns
TSDataset with predictions.
- Return type
etna.datasets.tsdataset.TSDataset
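A minimal forecasting sketch, continuing from the fitted model above. The make_future step is the common etna workflow and is assumed to apply here; the horizon value is illustrative.

```python
HORIZON = 14  # illustrative forecast horizon

# make_future builds a TSDataset covering the next HORIZON timestamps;
# forecast fills it with the model's predictions.
future_ts = ts.make_future(HORIZON)
forecast_ts = model.forecast(future_ts)
print(forecast_ts.df.head())
```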