DeepSeek’s R1 AI model might be the first time an open-source release crashed the NASDAQ. The markets reacted strongly for a simple reason. DeepSeek appears to have created a much cheaper way to deliver high-quality AI applications, on par with the efforts of industry giants like OpenAI and Anthropic. DeepSeek was able to produce its high-quality model with far fewer computing resources than comparable models. For NVIDIA and the entire bull market on AI data infrastructure, DeepSeek implied that massive data centers and $500 billion projects like Stargate would not be required to train and deliver industrial-grade AI.
Here’s the kicker. DeepSeek was offering access to its models to enterprises wishing to build AI applications for 1/50th of the cost of OpenAI models. Because DeepSeek was open source, its models could be enhanced by any other company or AI model builder. This unleashes a different form of innovation from that of the closed proprietary models. Within a few days, dozens of versions of DeepSeek’s R1 model flooded Hugging Face, the model and tool repository where open-source AI products are most frequently posted for download and sharing.
Many view this as another example of the Jevons Paradox: when technological advancements make a resource less expensive, demand for the resource increases. Given the concerns about the potential cost of deploying AI-based applications, the upshot of all this is that AI could be quickly democratized by plunging prices. Ultra-low prices could make it economical to sprinkle AI into every technology application and business process touched by a computer. Over time, AI can become just another component of a robust tech stack.
What does all this mean for the average CEO, Private Equity investor, or Board Member trying to figure out an AI strategy for their company? Let’s pick through the hype and pull out some nuggets. But first….
What Does Open Source AI Mean?
Open-source AI means artificial intelligence models available for reuse and modification, free of charge. This generally includes reuse and modification for commercial applications, although license terms may impose significant limitations. (For example, Meta’s Llama family of open-source AI models cannot be used for applications with more than 700 million monthly active users or to improve competing AI models, and any derivative models must include the Llama name.) Open-source AI also means that anyone is free to study the components and inner workings of an AI model. More transparent and completely open-source AI models include intermediate snapshots saved during training (so-called checkpoints) and the learned parameters that determine a model’s behavior (so-called model weights). The most aggressively open-sourced AI models also include the data on which the models were trained. These completely open-source models, which include training data, are rare. Llama and DeepSeek are open source, but their creators did not disclose training data.
In some ways, the battle over open-source AI is history repeating itself, albeit on a much faster timescale. The history of Linux and open-source software began in the early days of computing, when sharing software was the norm, but by the 1970s and 80s, proprietary operating systems had come to dominate. In 1991, Linus Torvalds, a Finnish student, created the Linux kernel and released it under the GNU General Public License, which allowed anyone to modify, distribute, and improve it. Combined with existing open-source tools from the GNU project, this led to a powerful, free alternative to expensive proprietary systems. Throughout the 1990s and early 2000s, Linux gained traction as companies like Red Hat and community projects like Debian built user-friendly distributions, while the rise of Apache, MySQL, and PHP (together with Linux, the LAMP stack) made Linux the foundation of the internet.
Linux became a dominant server operating system during the Dot-Com Boom and cemented its dominance as the platform underpinning cloud computing. Major companies like Google, Amazon, and Facebook built their infrastructures on Linux, while Microsoft struggled with scalability and licensing fees. The shift toward open standards, virtualization, and later containerization (Docker, Kubernetes) further solidified Linux’s dominance, as proprietary systems rapidly ceded market share. By the 2010s, even Microsoft embraced Linux, integrating it into its cloud services, effectively conceding that open-source had won the server wars. Today, Linux powers the majority of web servers, supercomputers, and cloud platforms, proving that open collaboration and free software can outcompete closed, proprietary models.
Looking at the history of open source, advocates for open-source AI models say they accelerate innovation and democratize AI by giving access to those who might not otherwise be able to afford it. Meta’s open-source AI approach is particularly interesting; the firm has decided to open source and give away its AI, which it views as a complement rather than a profit center. Meta appears to view its AI efforts as a tool to help creators and advertisers become more efficient and creative for Meta posts and ads. In contrast, OpenAI and Google have chosen not to open source their top-level foundational models, believing they are better off protecting their technology advantage and trying to recoup their costs by charging for access and usage (both for enterprises accessing the API and consumers accessing via desktop and mobile apps).
Critics of open-source AI say it holds greater security risks because bad guys can see it and manipulate it with greater ease. Researchers who tested DeepSeek after its release found it failed to protect against many types of common attacks used to manipulate outputs. Italy blocked DeepSeek. New York State banned DeepSeek from government devices and systems. Many expressed fear that DeepSeek could harbor malicious software benefitting China (to be fair, China also has concerns about U.S. open-source AI models and other U.S. software).
Open-source advocates countered that open models are more airtight because they are more accessible and battle-tested at scale by crowds of users, researchers, and others. And a host of very large companies, including Dell Technologies, Microsoft, and Amazon, rushed to launch their own tuned versions of DeepSeek’s R1 model, either in the cloud or for on-premise solutions. The companies launching DeepSeek versions said they had tested it and secured it further, thanks to its open-source nature.
Why Was DeepSeek Such a Big Deal?
There’s the hype and there’s the reality. The hype held that DeepSeek created a model for roughly $5 million that rivaled OpenAI’s GPT-4o model, which cost perhaps $100 million to create. The reality was more nuanced. DeepSeek reported only the cost of the final training run, not the total cost of development (they were not being shifty; DeepSeek was explicit about this fact in its paper). DeepSeek also had access to a relatively large fleet of GPUs for pre-training and tuning. This fleet was smaller than those of OpenAI or Google, but it was not small, and it would be too expensive for even most large companies to purchase. DeepSeek did offer similar AI capabilities at a much lower price than OpenAI, but its prices were actually close to those of comparable Google Gemini AI models for API access and usage.
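To see why per-token pricing differences matter at enterprise scale, here is a back-of-the-envelope sketch. The prices and token volumes below are illustrative placeholders chosen for the example, not quotes from any provider's actual price sheet.

```python
# Hypothetical per-million-token API prices (illustrative only).
PRICE_PER_MILLION_TOKENS = {
    "incumbent_model": {"input": 15.00, "output": 60.00},   # premium closed model
    "challenger_model": {"input": 0.55, "output": 2.19},    # low-cost challenger
}

def monthly_api_cost(model, input_tokens, output_tokens):
    """Estimate monthly API spend given raw token volumes."""
    p = PRICE_PER_MILLION_TOKENS[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

# A hypothetical workload: 200M input tokens and 50M output tokens per month.
incumbent = monthly_api_cost("incumbent_model", 200e6, 50e6)
challenger = monthly_api_cost("challenger_model", 200e6, 50e6)

print(f"incumbent:  ${incumbent:,.2f}/month")    # $6,000.00/month
print(f"challenger: ${challenger:,.2f}/month")   # $219.50/month
```

At these assumed prices the same workload differs by more than an order of magnitude, which is why even modest per-token price cuts change which applications are economical to build.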
There is also a lot of speculation that DeepSeek trained using outputs from OpenAI or other advanced AI models, a way to accelerate the process and benefit from previous developments. OpenAI is considering legal action, but all the major cloud providers and software companies that offer DeepSeek models also indemnify customers against legal action (Microsoft, for example, offers such a guarantee). There is also a question of whether the output of an AI can be copyrighted, given that it is distilled from training data. Courts have yet to rule definitively on whether AI outputs (particularly text or code) are subject to copyright protection.
All of that said, DeepSeek was surprising because it clearly demonstrated a smaller entity could build and train a world-class frontier foundational AI system at a much lower cost and with a smaller fleet of GPUs requiring less capital. DeepSeek’s team used a number of innovative methods to improve efficiency and results. R1’s training was significantly more efficient due to smarter data selection, improved training methods, and optimized infrastructure. Instead of relying on sheer data volume, they focused on high-quality, relevant data to reduce wasted computation. They also refined knowledge from earlier stages rather than constantly starting from scratch, making learning more efficient. Additionally, they wrote custom low-level code to turbocharge NVIDIA’s CUDA stack, allowing for better hardware and software integration. This sped up training and lowered costs. None of the methods were by themselves revolutionary. Collectively, the methods added up to significant efficiency and performance gains.
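For readers curious what "distillation" (refining knowledge from an earlier model rather than learning from scratch) means mechanically, here is a toy sketch. A "student" is nudged to match a "teacher's" output probability distribution. All numbers are made up for illustration; this is not DeepSeek's actual training code.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    # How far the student's distribution q is from the teacher's p.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher_logits = [4.0, 1.0, 0.5]   # hypothetical teacher outputs for one input
student_logits = [1.0, 1.0, 1.0]   # student starts with no preference

teacher_probs = softmax(teacher_logits)

# Gradient descent on the KL loss; the gradient with respect to the student's
# logits is simply (student_probs - teacher_probs).
lr = 1.0
for _ in range(500):
    student_probs = softmax(student_logits)
    student_logits = [z - lr * (q - p)
                      for z, q, p in zip(student_logits, student_probs, teacher_probs)]

final_kl = kl_divergence(teacher_probs, softmax(student_logits))
print(f"KL after distillation: {final_kl:.6f}")
```

The student never sees the original training data, only the teacher's outputs, which is why distillation is so much cheaper than training from scratch, and also why it raises the legal questions discussed above.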
How Does DeepSeek Change the AI Game for Enterprises?
First, let’s make a few assumptions. DeepSeek and ongoing open-source AI competition with closed AI providers like OpenAI will continue to drive costs down rapidly. That was already happening across OpenAI, Anthropic, Google, and Meta (with their Llama models), not to mention IBM with its open-source Granite family of models and large models from Databricks and Snowflake. But DeepSeek appears to have put the cost-decline process into overdrive, with OpenAI quickly responding and Meta researchers reportedly looking to incorporate some of DeepSeek’s methods into their own Llama models. Second, we can now assume that the capital requirements for training and running AI inference at scale will also decline.
So what does this mean?
First, in technology, cost declines in fiercely competitive spaces tend to accelerate over time, not slow down. For a brief period in 2024, it looked like massive computing resources would be required to train and operate powerful, enterprise-ready AI. DeepSeek appears to have blown up that assumption. Now CEOs and firms investing in AI can more realistically forecast that the costs of running AI applications will continue dropping quickly as the technology becomes better and more efficient. This is the same historical price/performance curve that other major tech innovations have gone through.
Second, correspondingly, the cost of developing AI applications and adding AI to existing applications will plummet. This means that AI will, in the very near future, move from a specialty feature to a standard requirement of every product. Translation? Everything will have AI.
Third, this shift might accelerate an ongoing movement towards pricing per action or result, as opposed to pricing per seat. When a SaaS product enabled a human being to, say, work in a call center, then it made sense to charge per month per human. Now, with AI agents, it is much easier to charge per call resolution and tie cost to performance. Prior to DeepSeek, this new performance-based software pricing model appeared overly costly. With DeepSeek and the resulting rush to deliver cheaper and cheaper AI, cost per resolution or action or result will be far more economical and make agentic (autonomous) AI much more affordable. This matches how cloud-based infrastructure platforms like AWS, Azure, and Google Cloud are priced.
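The per-seat versus per-resolution comparison above can be sketched in a few lines. The seat prices, resolution prices, and call volumes below are hypothetical, chosen only to illustrate how outcome-based pricing ties spend to work actually done.

```python
def per_seat_cost(agents, price_per_seat_per_month):
    """Traditional SaaS: pay per human agent per month, regardless of volume."""
    return agents * price_per_seat_per_month

def per_resolution_cost(resolutions, price_per_resolution):
    """Agentic AI: pay only for calls actually resolved."""
    return resolutions * price_per_resolution

# Hypothetical call center: 20 agents, each resolving ~500 calls a month.
seat_based = per_seat_cost(agents=20, price_per_seat_per_month=150.0)
outcome_based = per_resolution_cost(resolutions=20 * 500, price_per_resolution=0.25)

print(f"per-seat:       ${seat_based:,.2f}/month")
print(f"per-resolution: ${outcome_based:,.2f}/month")
```

The point is not the specific numbers but the structure: the per-resolution column scales with outcomes, and it only becomes viable for vendors when the underlying inference cost per call drops well below the price charged, which is exactly the effect cheaper models produce.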
Conclusion: Cleared for AI Liftoff in 2025
AI technology could yet hit a capability wall that slows down growth. There are questions about the efficacy of current large language model architectures for more complex tasks, and problems remain with hallucinations and reliability. That said, the costs of entering the AI game have now dropped to the point where the cost of experimentation is significantly lower, while the potential upside has never been greater. Thus, the risks of remaining on the sidelines have increased. In 2025, many businesses that have been hesitant or stuck in proof-of-concept mode are likely to accelerate their AI deployments, whether for internal productivity tools or customer-facing applications. More importantly, the rapid decline in AI costs is poised to usher in levels of experimentation and adoption previously thought impossible.
With DeepSeek R1 and other lower-priced AI alternatives now on the market, companies building AI applications need to consider whether their efforts will be “AI-first” or “AI-enhanced”. AI-first applications are more novel and would not be possible without AI. AI-enhanced applications are AI improvements made to existing applications that will make the experience and output better but will be less revolutionary. (Our colleague Andrew Tahvildary wrote about that in a previous Substack).
Real opportunities for enterprises may not lie in simply embedding AI-driven features like chatbots or summarization tools into their products but rather in leveraging AI to automate complex business processes that traditionally required skilled human effort. While some of these processes, such as customer support, may be user-facing, the bigger transformation is happening in back-end operations, such as medical forms processing or litigation discovery, where AI can significantly reduce costs and improve efficiency.
There are already domains, such as machine translation, where AI is now a normal part of the workflows and sometimes even runs autonomously. For example, The Economist uses AI-powered translation to translate content on its Espresso app into multiple languages without human intervention. This opens up the potential addressable market of the paid product and further extends the brand, at minimal cost.
This shift is not just an opportunity—it is an imperative. Companies that fail to embrace AI-driven automation will find themselves at a competitive disadvantage as rivals adopt these technologies to streamline operations and gain an edge. Enterprise leaders and decision-makers should focus on identifying mission-critical workflows that can benefit from AI and prioritize implementation strategies that maximize efficiency and scalability.
DeepSeek is a significant milestone in the broader movement toward ubiquitous, affordable AI. Its emergence has created market dynamics that favor the commoditization of AI models and much lower operational costs. CEOs, private equity investors, and board members should ensure their teams move aggressively on AI adoption—because the industry has now entered the rocketship phase. Those who hesitate risk being left behind.