Software is the mechanism for turning human intention into machine execution.

This is the definition of software that I’ve developed after five decades of software development. At the 2026 Shading Languages Symposium in San Diego, I had the opportunity to share this definition with attendees and expound on it. In my opening keynote on the origins, observations, and opportunities for shading languages (watch the presentation, slides here), I used this definition to frame broader thoughts on how we translate human ideas to machine action.

Software is the critical translator that amplifies human intentions, allowing complex tasks to be automated reliably on machines. We’ve invested ENORMOUS effort in the “machine execution” side. Hardware vendors—GPU makers especially—optimize relentlessly so that software translates efficiently into their architecture. Compilers, drivers, instruction sets, and ecosystems are tuned to make useful code run fast and power-efficiently. Why? Because without software, hardware is just an expensive collection of tiny little space heaters (credit: Roger Chandler). Their success depends on software thriving on their platforms.

But is this the correct emphasis for software development? Is our current software “stack” overly fixated on what hardware can do and how it works, rather than on helping software developers express their intentions?

We’ve made far less progress toward making hardware programmability truly accessible. Programming languages and APIs serve developers well, but they’re foreign to most people. Only a dedicated few master them to create meaningful software. And when I asked software developers at the Shading Languages Symposium “Who thinks software development is easy?” not a single hand was raised.

Low-code/no-code tools help broaden access with visual interfaces and templates, yet they often feel cumbersome, limited in expressiveness, and still demand learning a new paradigm. For many, they’re not intuitive enough to feel like natural expression.

So, what will it take to let normal people “speak their software into existence?”

Current AI advancements in code generation are promising. Tools now assist developers by producing code from natural language descriptions, handling full projects, bug fixes, and iterations through conversation. This lowers barriers for those already in the loop.

I’m not yet convinced, however, that today’s large language models—or even near-future ones—can reliably take any arbitrary human intention and directly produce robust, optimized executables for any hardware type. The gap is too wide: intentions can be ambiguous, incomplete, or domain-specific; execution requires precise semantics, error handling, performance tuning, security, and hardware-specific optimizations. Hallucinations, edge cases, and verification remain real hurdles for production use.

I believe this transformation will likely occur in steps. I helped pursue one compelling path at Intel as part of an internal startup called Inteon, where we explored a more useful intermediate representation (IR)—a structured, high-level format that captures human intention more faithfully than today’s code or bytecode. (Credit: Adam Herr and the Inteon team)

This IR could be:

  • Something that starts simply (e.g., with a specific and limited domain focus) and grows over time.
  • Expressive enough for natural language to map into it reliably (perhaps with AI agents refining ambiguous inputs through dialogue).
  • Abstracted from specific hardware, focusing on intent, logic, data flows, constraints, and goals.
  • Verifiable and composable, allowing validation before lowering.
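To make the idea concrete, here is a minimal sketch of what an intent-level IR node might look like. This is purely illustrative—the `Intent` class, its fields, and the `validate` check are my own hypothetical shapes, not Inteon’s actual format—but it shows the properties above: hardware-agnostic, focused on goals and data flows, and checkable before lowering.

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    """A hypothetical intent-level IR node: what to achieve, not how."""
    goal: str                # e.g. "gaussian blur"
    inputs: list[str]        # named data the intent consumes
    outputs: list[str]       # named data the intent produces
    constraints: dict[str, str] = field(default_factory=dict)  # e.g. {"latency": "<16ms"}

    def validate(self) -> bool:
        """Minimal check before lowering: an intent must name a goal and its data flow."""
        return bool(self.goal) and bool(self.inputs) and bool(self.outputs)

# Composable: a program is an ordered list of intents whose data flows connect.
blur = Intent("gaussian blur", ["image"], ["blurred"], {"quality": "high"})
tonemap = Intent("tone map", ["blurred"], ["final"], {"latency": "<16ms"})
program = [blur, tonemap]

assert all(i.validate() for i in program)
```

Note that nothing here mentions a GPU, a thread, or a memory layout—only intent, data, and constraints, which is exactly what makes validation possible before any hardware target is chosen.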

Then, specialized AI agents could translate this IR into optimized code for particular hardware—GPUs for shaders, CPUs for general apps, edge devices for IoT, etc. Different backends could compete on efficiency, just as compilers do today. This layered approach mirrors how shading languages evolved: high-level intent (e.g., GLSL) compiled to intermediate forms (SPIR-V) then to device-specific code.
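The lowering step can be sketched the same way. In this toy example—again, illustrative names only, not a real compiler API—two competing backends take the same abstract intent and emit different target code, mirroring how a single SPIR-V module can be lowered to different devices:

```python
# A hypothetical sketch of hardware-specific lowering: the same abstract
# intent is handed to competing backends, each emitting code for its target.

intent = {"goal": "elementwise add", "inputs": ["a", "b"], "output": "c"}

def lower_to_gpu(intent: dict) -> str:
    """Emit a (toy) compute-shader-style kernel body for the intent."""
    a, b = intent["inputs"]
    return f"{intent['output']}[gid] = {a}[gid] + {b}[gid];"

def lower_to_cpu(intent: dict) -> str:
    """Emit a (toy) scalar loop for the same intent."""
    a, b = intent["inputs"]
    c = intent["output"]
    return f"for (i = 0; i < n; i++) {c}[i] = {a}[i] + {b}[i];"

# Different backends compete on efficiency; the intent itself never changes.
gpu_code = lower_to_gpu(intent)
cpu_code = lower_to_cpu(intent)
```

The point of the sketch is the division of labor: the intent stays stable while backends—human-written today, AI agents tomorrow—compete on how well they realize it.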

Such an IR might draw from ideas in shading (portable representations like SPIR-V), functional programming, or even emerging AI planning formats. It could enable iterative refinement: describe your app, get an IR scaffold, converse to adjust, then target hardware.

This stepwise path—intention → expressive IR → hardware-specific execution—feels more achievable and reliable than end-to-end magic. It builds on existing compiler wisdom while leveraging AI for the hardest parts: intent capture and optimization.

When we get there, software creation could become as natural as describing a problem to an expert colleague: teachers crafting custom tools, entrepreneurs prototyping in minutes, non-profits automating without developers.

Creativity unlocked at scale.

Hardware has perfected execution.

The next leap is perfecting intention capture.

The future of software isn’t just written—it’s spoken, refined, compiled, and reused through natural language conversations.

If you are working on anything along these lines, let me know!