Did OpenClaw + Skills change the architecture and math of AI in a fundamental way? Some interesting questions to ponder.

1. Can perpetually running agents, which train themselves on new capabilities and self-learn from detailed SKILL.md files, reach the same level of expertise as the hand-crafted, fine-tuned models that AI startups have built over the last few years?
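For context: a SKILL.md in this paradigm is essentially a markdown playbook the agent loads on demand - metadata up top, instructions below. A purely illustrative sketch (the skill name, files, and steps here are hypothetical, not from any real product):

```markdown
---
name: contract-review
description: Reviews NDAs and flags non-standard clauses. Use when the user shares a contract.
---

# Contract Review

1. Extract every clause and compare it against the firm's standard checklist.
2. Flag indemnification or liability terms that deviate from standard language.
3. Produce a summary table of risks, ordered by severity.
```

The bet behind the questions below is that files like this, accumulated over time, can substitute for a lot of expensive fine-tuning.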

2. Will this self-learning agent capability stay restricted to consumers, or will it make its way to enterprises too? @howietl’s and @airtable’s project Hyperagent is a bet that the latter will be true. What does this mean for horizontal AI agent builders (Glean, ServiceNow, Sierra, etc.)?

3. How about verticals - legal, finance, healthcare, etc. - and the practitioners in these domains? Can they benefit from this paradigm? Could an “OpenClaw for Legal” agent, trained on a contract-drafting SKILL file, be as capable as today’s top legal AI agents? What does this mean for the hundreds of Vertical AI startups?

4. AI post-training has become increasingly expensive. Will the new paradigm result in more sustainable pricing? What’s the downstream impact on the pricing models of AI data companies like Mercor, Surge, Handshake, and others?

The OpenClaw architecture poses some fundamental questions for the AI startup ecosystem, and I’m as curious as anyone what the answers will be. Founders, if you are tackling some of these problems and rethinking the fundamentals and math of training and building AI agents - please message me!