Anadyr Horizon is using predictive AI to forecast and prevent global conflicts through its “peace tech” platform called North Star. The startup, founded in 2024, creates digital twins of world leaders and runs thousands of simulations to predict how they might react to real-world scenarios like economic sanctions or military blockades, with clients already including government agencies and corporate risk managers.
Why this matters: Violent conflict cost the global economy an estimated $19 trillion in 2023, and Anadyr’s approach represents a departure from traditional defense tech by focusing on conflict prevention rather than warfare capabilities.
• The company emerges as AI firms like OpenAI and Meta increasingly compete for high-profile defense contracts, positioning “peace tech” as a potential billion-dollar market opportunity.
• Co-founder Arvid Bell, a former Harvard lecturer who taught conflict de-escalation, emphasizes the distinction: unlike defense tech, Anadyr’s technology is designed to prevent wars, not fight them.
How it works: North Star simulates global leaders’ decision-making processes by creating digital twins that account for factors including sleep deprivation and other human variables.
• The AI runs thousands of simulations with slight variations to calculate probabilities of different outcomes in geopolitical scenarios.
• While Anadyr acknowledges it cannot predict the future with certainty, it aims to provide diplomats and politicians with probability assessments to inform strategic decisions that promote conflict resolution.
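The simulate-many-variations approach described above is essentially Monte Carlo estimation. The following is a minimal, purely illustrative sketch of that idea; the decision model, variable names (`aggression`, `fatigue`, `sanction_severity`), and thresholds are all hypothetical stand-ins, since North Star's actual model is proprietary and far more complex.

```python
import random
from collections import Counter

def simulate_response(aggression: float, fatigue: float, severity: float) -> str:
    """Toy decision rule for one 'digital twin' run (hypothetical model)."""
    # Combine human variables with scenario pressure, plus random noise
    score = aggression + 0.5 * fatigue + severity + random.gauss(0, 0.2)
    if score > 1.8:
        return "escalate"
    if score > 1.2:
        return "negotiate"
    return "de-escalate"

def estimate_outcomes(n_runs: int = 10_000) -> dict[str, float]:
    """Run many simulations with slightly varied inputs and return
    the observed frequency of each outcome as a probability."""
    counts = Counter()
    for _ in range(n_runs):
        aggression = random.uniform(0.4, 0.8)  # leader disposition
        fatigue = random.uniform(0.0, 1.0)     # proxy for sleep deprivation
        severity = random.uniform(0.3, 0.9)    # e.g. harshness of sanctions
        counts[simulate_response(aggression, fatigue, severity)] += 1
    return {outcome: c / n_runs for outcome, c in counts.items()}

print(estimate_outcomes())
```

The output is a probability distribution over outcomes rather than a single prediction, which mirrors the article's point that the tool offers probability assessments, not certainties.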
What could go wrong: Experts warn the technology may not be sophisticated enough for such high-stakes applications, with several concerning limitations.
• AI researcher Timnit Gebru cautions that AI trained on open-source information will reflect the biases of the loudest voices online, which tend to be Western or European perspectives.
• A 2024 study on the use of large language models in diplomatic decision-making found that AI systems tend to favor warmongering, escalatory approaches.
• Corporate applications pose additional risks: if Anadyr’s AI incorrectly predicts unfavorable conditions in a country and investors withdraw in response, the consequences could include mass unemployment and currency depreciation.
What they’re saying: Bell acknowledges the challenges while emphasizing careful development and client selection.
• “I want to simulate what breaks the world. I don’t want to break the world,” Bell told Business Insider.
• The company has shared limited details about its AI training data, raising questions about transparency in such critical applications.