Every so often, we find the most interesting data science links from around the web and collect them in Data Science Briefings, the DataMiningApps newsletter. Subscribe now for free if you want to be the first to get up to speed on interesting resources.
- TextGrad: Automatic “Differentiation” via Text
TextGrad (from Stanford) is a framework that performs automatic “differentiation” via text: it backpropagates textual feedback provided by LLMs to improve the individual components of a compound AI system. The package itself is available here.
- The Winds of AI Winter
“People are raising doubts about AI Summer.” An interesting summary of recent events in the AI space.
- TinyML: Why the Future of Machine Learning is Tiny and Bright
TinyML sits at the intersection of machine learning and embedded systems. It is the application of ML algorithms on small, low-power devices.
- Time Series Are Not That Different for LLMs
Harnessing the power of LLMs for time series modeling.
- TeVAE: A Variational Autoencoder Approach for Discrete Online Anomaly Detection in Variable-state Multivariate Time-series Data
From Mercedes: “we propose a temporal variational autoencoder (TeVAE) that can detect anomalies with minimal false positives when trained on unlabelled data”
- We need superknowledge before superintelligence
The most important technical problem of our time?
- Want to spot a deepfake? Look for the stars in their eyes
Reflections in the eyeballs are consistent for a real person, but incorrect (from a physics point of view) for a fake one.
- SpreadsheetLLM: Encoding Spreadsheets for Large Language Models
SheetCompressor is an innovative encoding framework that compresses spreadsheets effectively for LLMs.
- xLSTMTime: Long-term Time Series Forecasting With xLSTM
This paper presents an adaptation of the extended LSTM (xLSTM) architecture, called xLSTMTime, for long-term time series forecasting (LTSF) tasks. xLSTMTime incorporates exponential gating and a revised memory structure to improve performance on LTSF.
- Vision language models are blind
“Surprisingly, four state-of-the-art VLMs are, on average, only 58.12% accurate on our benchmark where the human expected accuracy is 100%.”
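The TextGrad idea above — backpropagating textual feedback to improve a component — can be illustrated with a toy loop. This is a minimal sketch, not TextGrad's actual API: the `critic` and `apply_feedback` functions below are invented stubs standing in for the LLM calls a real system would make.

```python
def critic(output: str) -> str:
    """Stub 'backward pass': a real system would ask an LLM for feedback."""
    if "concise" not in output:
        return "Make the answer concise."
    return "OK"

def apply_feedback(prompt: str, feedback: str) -> str:
    """Stub 'optimizer step': fold the textual feedback into the prompt."""
    if feedback == "OK":
        return prompt
    return prompt + " " + feedback

prompt = "Answer the question."
for _ in range(3):
    output = prompt            # stub forward pass: echo the prompt
    feedback = critic(output)  # textual "gradient" on the output
    prompt = apply_feedback(prompt, feedback)

print(prompt)  # → Answer the question. Make the answer concise.
```

The loop mirrors gradient descent: forward pass, feedback ("gradient") from a critic, then an update to the component — except every quantity is text rather than a number.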