This article first appeared in Data Science Briefings, the DataMiningApps newsletter. Subscribe now for free if you want to be the first to receive our feature articles, or follow us @DataMiningApps. Do you also wish to contribute to Data Science Briefings? Shoot us an e-mail over at briefings@dataminingapps.com and let’s get in touch!
Contributed by Boje Deforce, based on this recent paper – accepted at KDD ’24 Fragile Earth.
Key Takeaways
- Foundation models are becoming popular for time-series forecasting
- Early results show usefulness in agriculture, a domain plagued by limited data
Introduction
Recent years have witnessed a shift in artificial intelligence (AI) research, driven by the emergence of foundation models in natural language processing (NLP) [1] and computer vision (CV) [2]. Such foundation models, trained on massive datasets and capable of complex tasks, have spurred a wave of innovation across various domains. More recently, there has also been a rise of foundation models for time-series forecasting [3, 4]. Based on these recent developments, we explore the potential of foundation models for time-series forecasting in agriculture, specifically focusing on predicting soil water potential, a key indicator of field water status.
Why Time-Series Foundation Models Make Sense
The evolution of foundation models for time-series forecasting has paralleled the advances seen in NLP and CV, though with a small lag. A notable example is Chronos [3], developed by Amazon, which leverages existing large language models (LLMs) for forecasting tasks. Chronos takes a distinctive approach by converting time-series data into discrete tokens (i.e., quantization), allowing it to use the powerful capabilities of LLMs to generate predictions. This method capitalizes on the extensive knowledge and sequential understanding embedded in LLMs, offering the potential to interpret and forecast complex sequential patterns in various domains [3].
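To make the quantization idea concrete, here is a minimal Python sketch of mean-scaling a series and binning the scaled values into token IDs. This is an illustration of the general technique, not Chronos itself; the bin count and clipping range below are illustrative placeholders rather than the model's actual settings.

```python
import numpy as np

def tokenize_series(y: np.ndarray, n_bins: int = 1024, limit: float = 15.0) -> np.ndarray:
    """Map a real-valued series to discrete token IDs:
    mean-scale the series, then uniformly bin the scaled values.
    (n_bins and limit are illustrative, not Chronos's exact values.)"""
    # Mean scaling: divide by the mean absolute value of the context window.
    scale = np.mean(np.abs(y)) or 1.0
    y_scaled = y / scale
    # Uniform bin edges on a fixed interval; out-of-range values are clipped.
    edges = np.linspace(-limit, limit, n_bins - 1)
    return np.digitize(np.clip(y_scaled, -limit, limit), edges)

tokens = tokenize_series(np.array([0.3, 0.5, 0.8, 1.2, 0.9]))
print(tokens)  # discrete token IDs that an LLM-style model can consume
```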
In contrast, TimeGPT [4] represents a different strand of innovation within time-series foundation models. Unlike Chronos, TimeGPT relies on a transformer architecture designed specifically for time-series data. The model is trained directly on a diverse range of time-series datasets, including financial, weather, and IoT data. By learning from this vast array of temporal data, TimeGPT can capture intricate temporal relationships and provide accurate forecasts without the need for extensive domain-specific data. Notably, TimeGPT's architecture is one of the few that include mechanisms for incorporating exogenous variables, allowing for a more nuanced understanding of the factors influencing the target variable, such as weather variables in agricultural settings.
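As an illustration, a TimeGPT forecast with an exogenous covariate might look as follows using Nixtla's Python client (`pip install nixtla`). The file names, column names, and the rainfall covariate are hypothetical, and the exact client arguments may differ across versions, so treat this as a sketch rather than a definitive recipe.

```python
import pandas as pd
from nixtla import NixtlaClient

client = NixtlaClient(api_key="YOUR_API_KEY")  # placeholder key

# Historical target series plus an exogenous weather column (illustrative schema).
df = pd.read_csv("soil_history.csv")          # columns: ds, y, rainfall
future_exog = pd.read_csv("rain_forecast.csv")  # columns: ds, rainfall over the horizon

fcst = client.forecast(
    df=df,             # past target and exogenous values
    X_df=future_exog,  # known/forecasted exogenous values for the horizon
    h=7,               # forecast horizon, e.g. 7 days ahead
    time_col="ds",
    target_col="y",
)
```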
Despite the promising capabilities of both Chronos and TimeGPT, it remains unclear which model type (LLM-based or time-series transformer-based) is best suited for time-series forecasting. This open question has led to the creation of a foundation time-series arena, with the current standings shown below (although the authors of Chronos might disagree).
Figure: Overview of the time-series arena, hosted by Nixtla.
Useful in Agriculture?
Agriculture often struggles with limited data and varying environmental conditions, making it difficult to develop accurate predictive models. Foundation models could offer a solution by generalizing from diverse datasets and learning intricate temporal patterns, even with minimal data.
Early results in our work show that TimeGPT, for example, performs on par with advanced forecasting models when forecasting soil moisture levels, while requiring significantly less data than models such as the Temporal Fusion Transformer (TFT) [6]. This is especially valuable in scenarios where the high cost of data collection restricts the available dataset size.
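A straightforward way to test such claims on your own field data is a simple holdout evaluation. The sketch below uses a synthetic series and a naive last-value baseline as stand-ins for the actual soil data and the models compared in our work; in practice, the foundation model's zero-shot forecast and a trained TFT would replace the baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
h = 7  # forecast horizon
# Synthetic stand-in for a soil moisture series.
y = np.sin(np.linspace(0, 8, 120)) + rng.normal(0, 0.1, 120)
y_train, y_test = y[:-h], y[-h:]

def rmse(y_true, y_pred):
    """Root mean squared error over the holdout window."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Naive last-value forecast as a placeholder; swap in the zero-shot
# foundation-model forecast and a trained TFT forecast to compare them.
naive_forecast = np.repeat(y_train[-1], h)
print("naive RMSE:", rmse(y_test, naive_forecast))
```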
However, adding exogenous variables such as weather and irrigation did not consistently enhance TimeGPT’s performance, suggesting that the pre-training data might not fully capture the complexities of agricultural systems. The opaque nature of these models can also make it difficult to interpret predictions. Thus, while foundation models have great potential in agriculture, there is a need for careful application and possibly the development of more specialized models to better address specific agricultural challenges.
References:
- [1] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, and others, “Language models are few-shot learners,” Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
- [2] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, and N. Houlsby, “An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale,” in Proc. International Conference on Learning Representations, 2021.
- [3] A. F. Ansari, L. Stella, C. Turkmen, X. Zhang, P. Mercado, H. Shen, O. Shchur, S. S. Rangapuram, S. P. Arango, S. Kapoor, and others, “Chronos: Learning the language of time series,” arXiv preprint arXiv:2403.07815, 2024.
- [4] A. Garza and M. Mergenthaler-Canseco, “TimeGPT-1,” arXiv preprint arXiv:2310.03589, 2023.
- [5] Y. Liang, H. Wen, Y. Nie, Y. Jiang, M. Jin, D. Song, S. Pan, and Q. Wen, “Foundation models for time series analysis: A tutorial and survey,” arXiv preprint arXiv:2403.14735, 2024.
- [6] B. Lim, S. Ö. Arık, N. Loeff, and T. Pfister, “Temporal fusion transformers for interpretable multi-horizon time series forecasting,” International Journal of Forecasting, vol. 37, no. 4, pp. 1748–1764, 2021.