In this conversation, Vladyslav Larin, Co-founder and CTO of Fortytwo, traces his journey from childhood fascination with AI to pioneering an AI research lab aimed at achieving AGI through networked intelligence.
Starting with sci-fi influences and early academic explorations, Larin shares how his skepticism of centralized AI models led to innovative solutions in distributed computing.
This interview explores how Fortytwo’s unique approach to AI mirrors natural systems and could reshape the future of artificial intelligence.
Inspirations and Influences
What sparked your interest in AI, and who were your role models shaping your thinking?
AI captured my attention early on in school. I viewed programming and even game development as stepping stones to AI. The film The Matrix was also an influence; it introduced the idea that AI could not only reach human-level intelligence but eventually become far more powerful.
My role models were mainly academic texts. I recall reading an article about the perceptron, the first artificial neuron model. Understanding the perceptron and the backpropagation learning algorithm was a defining moment: it showed me how networks of artificial neurons could implement reasoning and logic, even fuzzy logic, and compute solutions to complex problems.
How did your PhD in Applied Mathematics shape your approach to AI?
The road to my PhD was directly related to AI. When I first read about backprop, I saw a huge bottleneck in having one centralized entity governing the learning. Even back then, I couldn’t accept the idea of a single ‘God model.’ It felt ineffective, even unnatural.
That drove me toward decentralized algorithms. During my PhD, I worked on distributed agent systems that utilize one hundred percent of available resources through decentralized coordination. With centralization, we sacrifice robustness: the algorithm gains efficiency but loses adaptability and flexibility. While centralization is easier in practice, for open-ended problems it's better to have concurrency, with multiple independent agents working on the problem at once.
The Birth of Swarm Intelligence
How does Fortytwo’s swarm inference fundamentally differ from traditional AI scaling?
The key difference is that we don't split a single model across nodes or replicate data between them. Instead, we treat every AI node as a black box that independently produces its inference. Each node can run custom tools. After participants generate answers, a subset of nodes performs peer review to rank these responses, helping us find the best completions.
With this setup, thousands of unique models can coexist and reach consensus. This architecture means everyone can participate in the AI swarm. Fortytwo functions as an AI research lab, developing protocols where decentralized nodes contribute to a collective intelligence, enhancing overall AI capabilities.
Consider building an app: some nodes generate art, others write code, some lay out requirements, while others verify by running results in virtual machines. We have a super heterogeneous architecture finding consensus at different reasoning points, uniting strong elements to deliver final answers.
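The generate-then-peer-review flow described above can be sketched in a few lines. This is a minimal illustration, not Fortytwo's actual protocol: the node structure, the `infer`/`review` interfaces, and the length-based review score are all assumptions made for the example.

```python
import random

def swarm_inference(nodes, query, panel_size=3):
    """Each node answers independently as a black box; a subset of
    peers then scores every answer, and the top-ranked answer wins."""
    # 1. Every node produces its own completion independently.
    answers = [(node["id"], node["infer"](query)) for node in nodes]

    # 2. A panel of peers (excluding the author) reviews each answer.
    best_answer, best_score = None, float("-inf")
    for author_id, answer in answers:
        peers = [n for n in nodes if n["id"] != author_id]
        panel = random.sample(peers, min(panel_size, len(peers)))
        score = sum(peer["review"](query, answer) for peer in panel)
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer

# Toy heterogeneous nodes: each "model" answers in its own style.
# The review function is a stand-in for a real quality judgment.
def make_node(node_id, style):
    return {
        "id": node_id,
        "infer": lambda q, s=style: f"{s} answer to: {q}",
        "review": lambda q, a: len(a),  # toy score: prefer detail
    }

nodes = [make_node(i, s) for i, s in enumerate(["short", "detailed", "verbose"])]
print(swarm_inference(nodes, "What is 42?"))
```

In a real heterogeneous swarm, the reviewers would be models themselves and the scoring far richer, but the shape is the same: independent inference first, peer ranking second, consensus last.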
What challenges drove you to create this swarm-based approach?
My earlier projects brought scaling challenges into sharp focus. With conversational AI, costs escalated quickly when trying to make interactions natural. With multimodal systems, adding dimensions meant exponentially more compute. It became clear that for AI to fulfill its potential, we needed a fundamentally different approach to organizing and distributing AI compute. That’s where Fortytwo’s ideas began taking shape.
The Future of Decentralized AI
Do you think decentralized inference provides a superior path to AGI?
Decentralized inference unlocks nearly unlimited compute by distributing the load across all available resources, including consumer devices that opt in. By leveraging user-contributed compute, Fortytwo’s decentralized approach aims to unlock scalable and efficient pathways toward AGI, utilizing a network of interconnected models.
A decentralized solution provides algorithmic security, alongside scalability and pricing far lower than centralized data centers. For most tasks, decentralized inference is the better path. Centralized approaches may still serve niche users, but their relevance will likely diminish as we address decentralized coordination challenges, which we are working on at Fortytwo.
Where do you see decentralized AI in five years?
Five years is enough to kickstart widespread adoption of decentralized models and inference. Hopefully, decentralized AI will outperform centralized approaches by then. It will tap into latent compute everywhere, whether in everyday users' hands or in data centers not optimized for AI workloads.
By utilizing this hidden compute power, we can make AI more affordable and accurate, deploying more compute in peer review for every query. It’s realistic to expect decentralized AI to eventually capture the majority of the AI inference market.
What’s Fortytwo’s ultimate contribution to AI’s future?
Fortytwo’s research lab is dedicated to developing decentralized AI architectures that harness user-contributed compute, aiming to achieve AGI as an emergent property of a network of models. It provides nearly unlimited compute by leveraging underutilized global resources, improves accuracy through peer review to address hallucination, and democratizes AI ownership, moving away from corporate control to a system where thousands or millions can contribute.
This approach scales economically while enhancing accuracy and participation. As AI becomes more embedded in our lives, centralized models’ limitations will become clearer. Fortytwo’s architecture is an evolutionary step, transforming AI into a globally distributed intelligence that can truly serve humanity’s diverse needs.