A new approach to AI model training could disrupt the centralized power structure that has dominated artificial intelligence. By distributing computation across ordinary GPUs connected over the internet, two startups have demonstrated an alternative to the resource-intensive, datacenter-bound methods that currently give tech giants their competitive edge.
The big picture: Researchers from Flower AI and Vana have trained a language model called Collective-1 using GPUs spread across the globe rather than concentrated in a single datacenter.
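The article doesn't detail Collective-1's training recipe, but distributed training of this kind typically resembles federated averaging: each node trains locally on its own data, and only weight updates (not raw data) cross the network. Below is a minimal, hypothetical sketch of that pattern in plain Python; the model, data shards, and function names are illustrative assumptions, not Flower AI's actual code.

```python
import numpy as np

# Hypothetical sketch of federated averaging, the general family of
# techniques behind internet-scale distributed training. Not the
# actual Collective-1 pipeline; all names and shapes are illustrative.

def local_update(weights, shard, lr=0.01):
    """One node's local training step: a single gradient-descent
    step on a toy linear regression over its private data shard."""
    X, y = shard
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)  # mean-squared-error gradient
    return weights - lr * grad

def federated_average(weight_list):
    """Server-side aggregation: average the weights from all nodes."""
    return np.mean(weight_list, axis=0)

# Simulate a few geographically scattered GPUs, each holding its own shard.
rng = np.random.default_rng(0)
num_nodes, dim = 4, 8
global_weights = np.zeros(dim)
shards = [(rng.normal(size=(32, dim)), rng.normal(size=32))
          for _ in range(num_nodes)]

for _round in range(10):
    # Each node trains locally on data that never leaves the node...
    local_weights = [local_update(global_weights.copy(), shard)
                     for shard in shards]
    # ...then only the resulting weights are sent over the network
    # and merged into a new global model.
    global_weights = federated_average(local_weights)
```

The key design point this illustrates is bandwidth economy: synchronizing occasional weight updates is feasible over ordinary internet links, whereas the constant gradient exchange of conventional datacenter training requires specialized high-speed interconnects.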
Why this matters: The current AI landscape is dominated by companies with access to massive computing resources in centralized datacenters, creating high barriers to entry.
What’s next: Flower AI is already scaling up its distributed approach with ambitious plans for larger models.
The bottom line: While still early in development, this distributed training methodology could mark an inflection point in how AI systems are built, reshaping industry power dynamics by lowering the technical and financial barriers to advanced AI development.