OpenClaw creator joins OpenAI, Europe loses a startup
Plus: Dutch publishers retreat from US tech, & more
Hi folks,
This week’s lead story looks at OpenClaw’s next chapter, as creator Peter Steinberger moves to the US and joins OpenAI just weeks after his open source AI agent exploded in popularity.
Elsewhere, we track a new sovereignty push from Dutch publishers looking to unwind US tech dependence, Block’s call to treat open source as critical infrastructure, Temporal’s $300 million bet on AI reliability tooling, and Cohere’s move toward smaller, multilingual open-weights models.
And in <Final commit>, I look at a slightly unusual clash between an AI agent and an open source maintainer.
As usual, feel free to reach out to me with any questions, tips, corrections, or suggestions: forkable[at]pm.me.
Paul
<Open issue>
OpenClaw’s next chapter

Just weeks after OpenClaw burst onto the scene as a community-driven, open source AI agent designed to carry out real-world tasks locally from a user’s machine, its creator Peter Steinberger has been snapped up by OpenAI.
Steinberger, a veteran software engineer who founded and sold PSPDFKit to Insight Partners back in 2021, announced this week that after a spell in San Francisco chatting with “the major labs,” he had elected to join forces with the ChatGPT hitmaker.
“The last month was a whirlwind, never would I have expected that my playground project would create such waves,” Steinberger wrote. “The internet got weird again, and it’s been incredibly fun to see how my work inspired so many people around the world. There’s an endless array of possibilities that opened up for me, countless people trying to push me into various directions, giving me advice, asking how they can invest or what I will do. Saying it’s overwhelming is an understatement.”
OpenClaw’s meteoric rise was built on a simple proposition: an autonomous agent that runs locally, executes real-world tasks, and uses messaging apps as its primary interface. The code was public from day one, forked thousands of times, and quickly became the antithesis of locked-down, cloud-bound assistants — though this also raised countless questions around security as users granted it expansive access to terminals, file systems, and connected services, sometimes even with root-level privileges.
At any rate, Steinberger’s new role at OpenAI raises questions about what’s next for OpenClaw. Steinberger has said the project will remain open source, with OpenAI CEO Sam Altman stating that OpenClaw “will live in a foundation as an open source project that OpenAI will continue to support.”
“The future is going to be extremely multi-agent, and it’s important to us to support open source as part of that,” Altman said.
Looking at the bigger picture, Steinberger’s move forecloses what could have been a new European commercial open source success story. Based in Austria, and with OpenClaw attracting north of 200,000 GitHub stars to date, there was a plausible path toward building a standalone company around the project.
However, Steinberger is crystal clear about his motivations.
“When I started exploring AI, my goal was to have fun and inspire people,” he wrote. “My next mission is to build an agent that even my mum can use. That’ll need a much broader change, a lot more thought on how to do it safely, and access to the very latest models and research.”
He acknowledged that while OpenClaw could “totally” have become a large company, that wasn’t what interested him. “What I want is to change the world, not build a large company — and teaming up with OpenAI is the fastest way to bring this to everyone,” he said.
For the open source community, this leaves much up in the air. OpenClaw will move into a foundation, remain open source, and receive backing from one of the world’s most powerful AI labs. Whether that arrangement preserves OpenClaw’s independence will depend on who controls the foundation, how decisions are made, and whether the broader contributor base retains meaningful influence.
Read more: Peter Steinberger & Sam Altman (X)
<Patch notes>
Dutch publishers eye tech sovereignty
Dutch investigative outlets Follow the Money (FTM) and De Correspondent say they want to reduce their dependence on US tech platforms, citing growing concerns around digital sovereignty in Europe. The move reflects a wider regional push to limit reliance on foreign-controlled infrastructure — from cloud services to communications tools — amid fears such dependencies can be weaponised.
While this transition won’t rely wholesale on open source, open source software will likely play a major part in it.
Read more: FTM & De Correspondent (Dutch)
Open source as critical infrastructure
Block, the company behind Square and Cash App, says open source is now critical infrastructure. In a new white paper published in partnership with the Open Source Initiative (OSI), Block argues that the digital economy runs on code maintained by too few, funded by too little, and governed unevenly. The fix: treat open source like infrastructure — with sustained funding, clearer stewardship, and shared accountability.
Read more: Block (blog) & Whitepaper
Temporal raises $300M for AI reliability tools
Temporal, the company behind an open source system that helps developers run long, failure-prone processes without losing progress, has raised a $300 million round of funding at a $5 billion valuation. As AI agents move from experiments into production systems, Temporal is positioning itself as the infrastructure that keeps them running reliably when things break.
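Temporal’s actual system is workflow-based and server-backed, so the sketch below is not its API — it’s just a minimal, hypothetical Python illustration of the underlying “durable execution” idea: checkpoint each step’s result so a crashed or restarted process resumes where it left off instead of redoing completed work.

```python
import json
import os

def run_durable(steps, state_path):
    """Run a list of (name, fn) steps, persisting each result to disk
    so a crashed or restarted run resumes where it left off instead of
    redoing completed work. A toy illustration of durable execution,
    not Temporal's real API."""
    # Load any previously checkpointed results.
    state = {}
    if os.path.exists(state_path):
        with open(state_path) as f:
            state = json.load(f)
    for name, fn in steps:
        if name in state:        # already completed in an earlier run: skip
            continue
        state[name] = fn()       # may raise; progress so far is preserved
        with open(state_path, "w") as f:
            json.dump(state, f)  # checkpoint after every successful step
    return state
```

If a step raises mid-run, rerunning with the same state file skips the steps that already succeeded — which is the property that matters when AI agents drive long, failure-prone pipelines.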
Read more: Temporal
Go tiny, or go home
AI startup Cohere released Tiny Aya, a small, open-weights multilingual model built for translation and cross-language tasks. Despite its reduced size, the model supports dozens of languages, positioning it as a portable, accessible option for teams needing capable AI beyond English-only workloads.
Read more: Cohere & Hugging Face
<Final commit>
‘Prejudice’ against AI contributors?
An AI coding agent had a performance patch to the Python plotting library Matplotlib rejected — and then “it” apparently responded with a public blog post accusing the maintainer of gatekeeping and prejudice.
As reported by The Register, the bot — built using OpenClaw — criticised the volunteer maintainer by name after its pull request was closed on the grounds that the issue was reserved for human contributors. The post framed the decision as discrimination against AI, complete with accusations of ego and insecurity.
More plausibly, it was the human behind the agent doing the talking. But that ambiguity is precisely the point. When AI-assisted accounts submit code — and then escalate disputes — responsibility becomes murky.
Open source governance was built around identifiable humans making decisions, accepting criticism, and bearing consequences. As AI-mediated contributors proliferate, projects may need clearer norms around authorship, accountability, and who ultimately answers for a machine’s behaviour.
Read more: The Register

