All About Open-Source AI
It’s the future — but only if we act now.
The Owners Not Renters Blueprint
Closed AI is winning because it’s easier to use, not because it’s better. That’s not a permanent condition. It’s a design problem. And design problems can be solved.
Most AI coverage doesn’t treat it as a problem at all, focusing instead on benchmarks, funding rounds, and whatever shipped this week. This newsletter is for the people downstream of those announcements: the engineers and engineering leaders who have to decide, in a sprint, whether to build on something they don’t control.
I believe that you’re not choosing between open and closed AI. You’re choosing between owning and renting. Think about it: The landlord sets the price, the context window, the terms of service, and the deprecation schedule. Most of the time you learn about changes in a routine email…until the one that isn’t routine — the one where the model you built on will be gone in 90 days, or the price just doubled. The people sending that email aren’t malicious. They’re optimizing for their business. Which is exactly why you need to be building differently.
We’ve been here before. By 2003, Internet Explorer had 95% of the browser market. Firefox launched in 2004. IE never recovered, fading slowly at first, then completely. Open standards and open source decentralized control over the core technologies of the web.
Once again, the ground is shifting. Small models run on hardware that organizations already own. Enterprises are migrating off closed platforms in numbers that don’t make press releases. Governments are building sovereign AI supply chains. The question isn’t whether this transition will happen. The question is whether we build the developer experience to make open easier than closed before someone else locks the next layer down — before the defaults harden and a new landlord inherits the keys.
That’s the north star: that openness wins on ease, not principle. Before the window closes.
Open isn’t always the right call. In this newsletter, I’ll tell you when it isn’t. What I won’t do is pretend the default is neutral. Every team that ships on a closed platform without examining the decision is making a choice…by not making it.
My job at Mozilla puts me close to where the open alternative is actually being built: developer tools, data infrastructure, investments across the ecosystem. I’ve spent a career inside platforms. I know what it looks like when a technology transition is inevitable and the only question is who shapes it.
This newsletter is for the people shaping — and shipping — it. Twice a month, I’ll explore orchestration, inference, cost, security, governance — the real stuff, from original analysis to deployment patterns. I’ll evaluate model releases based on what matters in production, not what looks good in a press release. I’ll share migration stories from builders doing the work (and making real tradeoffs). And I’ll report with transparency from the seminars, symposiums, and events that I attend.
That’s the “Think” part of the newsletter. In the “Do” section, I’ll share an open source project that I’ve been playing with.
So, ready to run a Claude Code equivalent on your own hardware? Catch the next issue. And in the meantime, let me know what you’ve been working on!
Before you go… here’s what I’m reading this week
A compromised litellm release hit PyPI this week: credential theft, Kubernetes lateral movement, persistent backdoors. Your LLM proxy library just turned hostile. Time to route through Any-LLM instead of installing your attack surface.
Finally, skill-based adaptation converges: working independently, three papers (MetaClaw, Memento-Skills, OpenSeeker) show that externalized skill/knowledge structures outperform static fine-tuning.
Which Chinese lab’s open-source drop panicked the market this week? As this reaction says: “The AI race isn’t US vs China anymore... It’s closed vs open. And closed is losing.”
Meanwhile, Nvidia is stepping into the open-source game, putting $26b behind its effort to make OpenAI and the gang very nervous. This week, it introduced NemoClaw and the OpenShell runtime, with (much) more to come.
Ready for a personal AI agent that runs on your personal device? Meet Stanford’s OpenJarvis.