tags:
- agent
- scaling-law
- in-progress
Texts, images, videos, and audio all encode information; these formats have been with us since the early days of multimedia, when media was something played on a Nokia. We know what to expect when we encounter one of these media, regardless of modality. We also get a feeling from it, yet we don't immediately and fully know how to transcribe that feeling into text. While language and text are lossy compressions of the information in our brains, one can train oneself, through vast amounts of reading and writing and by iteratively improving both skills, to minimize that compression loss by describing one's feelings in more effective ways.

Neural scaling laws relate a deep learning model's capability to compute, model size, and data size. Under the Chinchilla-optimal scaling law, we will run out of human-generated data soon; a back-of-the-envelope calculation is sketched after the list below. Many theses have been proposed for breaking this scaling law, in several directions:

- prune data and train on less to achieve the same performance
- generate synthetic data
- explore the limits of repeated training on the same datasets
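For a rough sense of the numbers, here is a minimal sketch of the Chinchilla-style compute-optimal allocation, assuming the common approximations C ≈ 6·N·D FLOPs and roughly 20 training tokens per parameter; the constants are rules of thumb, not the paper's exact fits.

```python
# Back-of-the-envelope for Chinchilla-optimal training, assuming the common
# ~20 tokens-per-parameter rule of thumb and C ≈ 6 * N * D FLOPs.
# The numbers are illustrative, not the paper's fitted coefficients.

def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    """Return (params, tokens) that roughly exhaust a FLOP budget."""
    # C ≈ 6 * N * D and D ≈ tokens_per_param * N  =>  N ≈ sqrt(C / (6 * r))
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    for budget in (1e21, 1e23, 1e25):  # FLOPs
        n, d = chinchilla_optimal(budget)
        print(f"C={budget:.0e} FLOPs -> ~{n:.2e} params, ~{d:.2e} tokens")
```

Token demand grows with the square root of compute, so every large jump in training budget pushes further into (and past) the stock of usable human-written text.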
A lot of directions to take here! Sadly I have no time to figure out which one I can meaningfully work on, so I'm just going to write them all out.
1. Architecture Design: Efficient model architectures are key to reducing computational demands. Beyond simple data pruning, designing models that inherently require less compute is probably the way to go. A promising direction is hybrid models like StripedHyena, which interleaves attention with subquadratic gated-convolution operators to maintain performance while cutting resource usage. Another prolific researcher, Albert Gu, has already shipped text-to-speech models built on state space models. This actually flipped my view on API-based products, as I was eager to try it (and found it didn't work 100% the way I wanted, more like 92%). The implication is that these new architectures will pave the way for more sustainable AI systems, given that at least one major domain has already demonstrated a new architecture with a better scaling law than transformers. A minimal sketch of the state-space recurrence behind these models follows.
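To make the idea concrete, here is a toy diagonal linear state-space recurrence, the core building block behind S4/Mamba-style layers. It is an illustrative PyTorch sketch, not the actual StripedHyena or Mamba implementation, and the names and shapes are my own assumptions.

```python
import torch

class DiagonalSSM(torch.nn.Module):
    """Toy diagonal state-space layer: a per-channel linear recurrence over the sequence."""

    def __init__(self, d_model: int, d_state: int = 16):
        super().__init__()
        # Per-channel decay (kept in (0, 1) via sigmoid) and input/output maps.
        self.log_a = torch.nn.Parameter(torch.randn(d_model, d_state))
        self.B = torch.nn.Parameter(torch.randn(d_model, d_state) * 0.1)
        self.C = torch.nn.Parameter(torch.randn(d_model, d_state) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); a fixed-size hidden state is carried along.
        batch, seq_len, d_model = x.shape
        a = torch.sigmoid(self.log_a)                 # decay factors in (0, 1)
        h = x.new_zeros(batch, d_model, self.B.shape[1])
        outputs = []
        for t in range(seq_len):                      # O(seq_len) recurrence, no attention matrix
            h = a * h + self.B * x[:, t, :, None]     # h_t = a * h_{t-1} + B * x_t
            outputs.append((h * self.C).sum(dim=-1))  # y_t = C * h_t, back to d_model
        return torch.stack(outputs, dim=1)            # (batch, seq_len, d_model)

y = DiagonalSSM(d_model=8)(torch.randn(2, 32, 8))     # sanity check: shape (2, 32, 8)
```

The appeal of the design is that each step updates a fixed-size state in linear time, instead of attention's quadratic pairwise interactions over the whole context.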
2. Cross-Modal Learning: Leveraging cross-modal learning, where models learn from diverse data types such as text, images, and audio, can enhance a model's ability to generalize across tasks. This holistic approach not only improves performance but also facilitates the generation of richer and more varied synthetic data. By harnessing information from multiple modalities, we can build more comprehensive and robust AI systems; a contrastive-objective sketch follows.
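One common way to tie modalities together is a CLIP-style contrastive objective over a shared embedding space. The sketch below assumes paired embeddings from two arbitrary encoders (the image/text/audio backbones are omitted); only the loss structure is shown, and the temperature value is an illustrative default.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, temperature: float = 0.07):
    """CLIP-style symmetric contrastive loss for paired embeddings from two modalities.

    emb_a, emb_b: (batch, dim) tensors where row i of each describes the same item.
    """
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    logits = emb_a @ emb_b.t() / temperature   # pairwise similarity matrix
    targets = torch.arange(emb_a.size(0))      # the i-th pair is the positive match
    # Cross-entropy in both directions (a -> b and b -> a), averaged.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(16, 512), torch.randn(16, 512))
```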
3. Knowledge Distillation: Knowledge distillation trains smaller, efficient models (students) to replicate the capabilities of larger models (teachers). This technique helps deploy high-performance AI with reduced computational overhead. By transferring knowledge from complex models to simpler ones, we can maintain accuracy while significantly cutting resource requirements; the standard soft-label loss is sketched below.
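The classic formulation (Hinton et al.) mixes a temperature-softened KL term against the teacher's distribution with the usual hard-label cross-entropy. The sketch shows only the loss; model definitions are omitted, and the temperature and mixing weight are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T: float = 2.0, alpha: float = 0.5):
    """Soft-label distillation: match the teacher's softened distribution plus the hard labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale so gradients stay comparable across T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

loss = distillation_loss(torch.randn(8, 100), torch.randn(8, 100), torch.randint(0, 100, (8,)))
```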
4. Synthetic Data Generation: Generating high-quality synthetic data is a viable answer to the impending data scarcity problem. Advanced generative models can produce realistic and diverse datasets, enabling continuous training and improvement of AI models. Synthetic data can augment human-generated data, ensuring that models have ample training material without relying solely on real-world data; a minimal generation-and-filter loop is sketched below.
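A minimal self-instruct-style loop looks like the sketch below: seed examples prompt a generator model, and cheap filters keep only plausible outputs. `generate()` is a hypothetical placeholder for whatever model or API you would actually call, and the filters are deliberately simplistic.

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to an LLM or other generative model."""
    raise NotImplementedError("plug in your model / API call here")

def make_synthetic_dataset(seed_examples: list[str], n_samples: int, min_len: int = 20) -> list[str]:
    """Grow a dataset by prompting the generator with a few random seed examples at a time."""
    synthetic: list[str] = []
    while len(synthetic) < n_samples:
        few_shot = "\n\n".join(random.sample(seed_examples, k=min(3, len(seed_examples))))
        candidate = generate(f"{few_shot}\n\nWrite one more example in the same style:")
        # Cheap quality gates: minimum length and exact-duplicate rejection.
        # Real pipelines add deduplication, model-based scoring, and decontamination.
        if len(candidate) >= min_len and candidate not in synthetic:
            synthetic.append(candidate)
    return synthetic
```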
5. Repeated Training on Core Datasets: Re-evaluating the impact of repeated training on core datasets can reveal how far models improve without expanding data volumes. By iteratively refining models on essential datasets, we can enhance their performance and stability. This approach focuses on deepening a model's understanding through repeated exposure to critical data rather than sheer data expansion, though each additional pass over the same tokens is worth less than the last; a toy model of that diminishing return is sketched below.
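As a toy illustration of diminishing returns from repetition, the sketch below counts "effective" tokens with a geometric decay per epoch; the decay value is an arbitrary assumption for illustration, not a fitted constant from data-constrained scaling studies.

```python
def effective_tokens(unique_tokens: float, epochs: int, repeat_value: float = 0.7) -> float:
    """Toy model: the k-th pass over the same data is worth repeat_value**(k-1) fresh tokens."""
    return unique_tokens * sum(repeat_value ** k for k in range(epochs))

# With 1B unique tokens, extra epochs help, but the gains flatten out quickly.
for epochs in (1, 2, 4, 8, 16):
    print(f"{epochs:2d} epochs -> ~{effective_tokens(1e9, epochs):.2e} effective tokens")
```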