Yesterday at a canal museum, I found myself staring at a placard: "The engine caused a seismic shift in working practices bringing an end to the era of the tow horse, and changing the pace and sounds of life on the waterways dramatically, forever."

As we close out 2025, the parallel to AI is impossible to miss.

Canal infrastructure had been designed for horses. When diesel engines arrived, expert horse handlers had to learn entirely new skills. The stables and blacksmith shops along the towpaths began their slow decline. The very rhythm of work changed forever.

We are at a similar inflection point now.

As [Satya Nadella noted](https://snscratchpad.com/posts/looking-ahead-2026/) recently, we've moved past discovery into diffusion. We're distinguishing spectacle from substance. But the canal story illuminates something crucial: the questions we need to answer are fundamentally human ones, not technical ones.

What does it mean to be an expert when AI can generate sophisticated analysis in seconds? How do we define craft and mastery in a world of cognitive scaffolding? These aren't questions about model capabilities. They're questions about human identity and purpose.

The canal companies didn't just bolt engines onto horse-drawn boats. They redesigned everything, and entire adjacent industries transformed with them. Blacksmiths, stable keepers, and feed suppliers all had to find new purposes or fade away.

This is where we must focus in 2026 and beyond. Technological progress needs to be underpinned by strong philosophical foundations. Just as frameworks like NICE's cost-effectiveness process guide healthcare decisions with rigor, we need equivalent frameworks for AI deployment, grounded in clear principles about human flourishing and societal benefit.

If AI advancement is constrained by energy, compute, and material resources (as it appears to be), then humanity faces unprecedented choices about resource allocation. Where do we direct this transformative capability? Who decides? On what basis? The methodology for making these decisions is, at best, in its infancy.

As we enter 2026, the most meaningful measure of progress won't be the benchmarks our models achieve. It will be whether we develop the capacity to make wise choices about deployment, and whether we can build the new mental models and social contracts this moment demands.

The engine replaced the horse, but the canal industry, its workers, and its organizations had to decide how to fundamentally reorganize themselves around this new paradigm. That decision couldn't be made by the technology itself.

This is the work that matters most as we move forward: developing the scaffolding not just for AI capability, but for human wisdom in an age of AI.