Indurotek Achieves First Revenue in Less Than 4 Months

When Indurotek was still just a nameless idea in December 2025, founder Roger Pilant reached out for help turning the concept into a real, revenue-generating business.

We hit the ground running.

On December 12, 2025, we kicked off with an intensive brainstorming session to develop the company name. From that point forward, we moved at full speed.

On April 1, 2026 — less than four months later — Indurotek recorded its first revenue.

In that short time, we successfully delivered:

  • A strong, memorable company name and brand identity
  • A professional website with clean design and clear messaging
  • A focused business plan and company overview
  • High-quality product photos
  • Brand assets and positioning ready for market
  • Product listings (nearly complete) on the Amazon web store

This rapid progress from pure idea to first sale demonstrates what’s possible when strategic guidance, clear execution, and focused momentum come together.

At Randi Rost Consulting LLC, we specialize in helping early-stage tech and hardware companies move quickly from concept to market traction. Whether you need help with naming, branding, business strategy, go-to-market planning, or simply accelerating your launch, we deliver results — fast.

Congratulations to the Indurotek team on this impressive achievement. The foundation is now set for strong growth ahead.

If you’re an entrepreneur with a promising idea but need help turning it into a real business, feel free to reach out. I’d be happy to discuss how we can help you move just as quickly.

— Randi Rost
Founder, Randi Rost Consulting LLC

Indurotek logo

What is Software, Really?

Software is the mechanism for turning human intention into machine execution.

This is the definition of software that I’ve developed after five decades of software development. At the 2026 Shading Languages Symposium in San Diego, I had the opportunity to share this definition with attendees and expound on it. In my opening keynote on the origins, observations, and opportunities for shading languages (video and slides of the presentation are available), I used this definition to frame broader thoughts on how we translate human ideas into machine action.

Software is the critical translator that amplifies human intentions, allowing complex tasks to be automated reliably on machines. We’ve invested ENORMOUS effort in the “machine execution” side. Hardware vendors—GPU makers especially—optimize relentlessly so that software translates efficiently into their architecture. Compilers, drivers, instruction sets, and ecosystems are tuned to make useful code run fast and power-efficiently. Why? Because without software, hardware is just an expensive collection of tiny little space heaters (credit: Roger Chandler). Their success depends on software thriving on their platforms.

But is this the correct emphasis for software development? Is our current software “stack” overly fixated on what hardware can do and how it works, rather than on helping software developers express their intentions?

We’ve made far less progress toward making hardware programmability truly accessible. Programming languages and APIs serve developers well, but they’re foreign to most people. Only a dedicated few master them to create meaningful software. And when I asked software developers at the Shading Languages Symposium “Who thinks software development is easy?” not a single hand was raised.

Low-code/no-code tools help broaden access with visual interfaces and templates, yet they often feel cumbersome, limited in expressiveness, and still demand learning a new paradigm. For many, they’re not intuitive enough to feel like natural expression.

So, what will it take to let normal people “speak their software into existence?”

Current AI advancements in code generation are promising. Tools now assist developers by producing code from natural language descriptions, handling full projects, bug fixes, and iterations through conversation. This lowers barriers for those already in the loop.

I’m not yet convinced, however, that today’s large language models—or even near-future ones—can reliably take any arbitrary human intention and directly produce robust, optimized executables for any hardware type. The gap is too wide: intentions can be ambiguous, incomplete, or domain-specific; execution requires precise semantics, error handling, performance tuning, security, and hardware-specific optimizations. Hallucinations, edge cases, and verification remain real hurdles for production use.

I believe this transformation will likely occur in steps. I helped explore one compelling path at Intel as part of an internal startup called Inteon. We explored the development of a more useful intermediate representation (IR)—a structured, high-level format that captures human intention more faithfully than today’s code or bytecode. (Credit: Adam Herr and the Inteon team)

This IR could be:

  • Something that starts simply (e.g., with a specific and limited domain focus) and grows over time.
  • Expressive enough for natural language to map into it reliably (perhaps with AI agents refining ambiguous inputs through dialogue).
  • Abstracted from specific hardware, focusing on intent, logic, data flows, constraints, and goals.
  • Verifiable and composable, allowing validation before lowering.

Then, specialized AI agents could translate this IR into optimized code for particular hardware—GPUs for shaders, CPUs for general apps, edge devices for IoT, etc. Different backends could compete on efficiency, just as compilers do today. This layered approach mirrors how shading languages evolved: high-level intent (e.g., GLSL) compiled to intermediate forms (SPIR-V) then to device-specific code.
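The layered flow described above—intent captured in an abstract IR, then lowered by competing hardware-specific backends—can be sketched in toy form. Everything below (the `Intent` structure, the field names, `lower_to_backend`) is a hypothetical illustration for this post, not the Inteon IR or any existing system:

```python
# Toy sketch of an intent-level IR: a structured description of WHAT the
# user wants, independent of any target hardware. All names are
# hypothetical illustrations, not part of a real system.
from dataclasses import dataclass, field

@dataclass
class Intent:
    goal: str                                        # plain-language statement of intent
    inputs: dict = field(default_factory=dict)       # named data flows in
    outputs: list = field(default_factory=list)      # expected results out
    constraints: list = field(default_factory=list)  # e.g. latency, memory budgets

def lower_to_backend(intent: Intent, backend: str) -> str:
    """Stand-in for a hardware-specific AI agent: turns the abstract
    intent into target-specific output (here, just a descriptive string)."""
    return f"[{backend}] plan for: {intent.goal} (constraints: {intent.constraints})"

# Describe the intent once...
blur = Intent(
    goal="apply a gaussian blur to each frame",
    inputs={"frame": "image<rgba8>"},
    outputs=["blurred_frame"],
    constraints=["latency < 16ms"],
)

# ...then let different backends compete on how to realize it.
print(lower_to_backend(blur, "gpu"))
print(lower_to_backend(blur, "cpu"))
```

The point of the sketch is the separation of concerns: the `Intent` never mentions a GPU or CPU, so the same description can be lowered to many targets—just as a single SPIR-V module can be consumed by different drivers today.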

Such an IR might draw from ideas in shading (portable representations like SPIR-V), functional programming, or even emerging AI planning formats. It could enable iterative refinement: describe your app, get an IR scaffold, converse to adjust, then target hardware.

This stepwise path—intention → expressive IR → hardware-specific execution—feels more achievable and reliable than end-to-end magic. It builds on existing compiler wisdom while leveraging AI for the hardest parts: intent capture and optimization.

When we get there, software creation could become as natural as describing a problem to an expert colleague. Teachers crafting custom tools, entrepreneurs prototyping in minutes, non-profits automating without devs.

Creativity unlocked at scale.

Hardware perfected execution.

The next leap is perfecting intention capture.

The future of software isn’t just written—it’s spoken, refined, compiled, and reused through natural language conversations.

If you are working on anything along these lines, let me know!

Read More

When did we achieve photorealism in computer graphics?

For the first twenty years of my career in computer graphics, the big question everyone kept asking was: “How do we create a CG image that’s completely indistinguishable from a photograph?”

The real world is insanely complex, and our eyes plus brain are tuned to spot even the tiniest flaws. So that quest dragged on for decades.

Back in 1982, Tron hit theaters as one of the first major films to lean heavily on computer graphics. The whole visual style—bare geometric shapes, glowing lines, no textures—was shaped by what the tech could actually do at the time. I loved that movie, and having spent most of my career doing software at hardware companies, I identified with the line, “That’s Tron. He fights for the users.”

That same year, Star Trek II: The Wrath of Khan showed off the amazing Genesis Effect sequence—the first fully computer-generated scene in a feature film. It looked incredible, but it was framed as a computer simulation, so it didn’t have to pass as photoreal.

Movie VFX advanced fast after that, but CGI was still easy to spot. Films like The Last Starfighter (1984), Young Sherlock Holmes (1985), and The Abyss (1989) pushed boundaries with great effects, yet nothing quite fooled you into thinking it was real.

The 1990s brought bigger leaps: Terminator 2 (1991) with its liquid metal T-1000, Jurassic Park (1993) and its groundbreaking dinosaurs from ILM, Forrest Gump (1994), and then Pixar’s Toy Story (1995) proving full CG animation could carry a feature.

By Titanic in 1997, things looked more polished—3Dlabs even had a small hand in it, as some of our graphics boards powered parts of the animation pipeline. Still, the ship and water didn’t quite feel 100% real on screen.

For me, the moment I finally thought, “We’ve done it—we’ve achieved photorealism!” was Panic Room in 2002. I saw it in the theater and was blown away; nothing screamed “computer-generated.” Later that year at SIGGRAPH, the Electronic Theater showed a production reel with that famous impossible camera move gliding up through the house’s interior—walls, floors, keyholes, the works. It was physically impossible to film for real, so it had to be CG… and it looked seamless. For me, that was the tipping point.

Of course, nailing photorealism in movies was just the start. The next frontier was doing it in real time, on affordable hardware, in stereo, at 4K, and beyond. Our work on OpenGL and especially the OpenGL Shading Language (GLSL) benefited directly from those early film milestones. Our efforts focused on bringing programmable shading power to interactive graphics so that real-time rendering could chase the same dream.

Here are some clips and breakdowns that capture these moments:

  • The iconic Genesis Effect from Star Trek II: The Wrath of Khan (1982)
  • Early CGI showcase from Tron (1982)
  • Dinosaurs brought to life in Jurassic Park (1993) – ILM VFX breakdown
  • BUF’s making-of for Panic Room VFX

Our work in graphics is never truly “done”—there’s always the next level of realism, speed, or immersion to chase. What film moment made you think we’d crossed into photoreal territory?

Read More

New client – Indurotek

Over the past several months I’ve had the pleasure of working as a consultant with Roger Pilant and the team at Indurotek, a new startup building high-tech composite trailer bunks that eliminate the rot, warp, and maintenance headaches of traditional wood bunks.

My role has been broad and hands-on as they moved from idea to launch-ready business. Key contributions include:

  • Brainstorming and finalizing the company name Indurotek (a blend that conveys industrial-strength durability and advanced technology)
  • Securing their domain name and beginning to populate their website at indurotek.com (more content and features rolling out soon)
  • Developing complete brand identity: logo design, primary/secondary color palette, and brand typography guidelines
  • Synthesizing a clear business plan from numerous strategy conversations
  • Creating a polished company overview slide deck for investor and partner meetings
  • Designing business cards, a one-page sales flyer for trade shows and dealer outreach, and other launch collateral tailored to the marine products industry

It’s been rewarding to see the project evolve from early sketches and discussions into a professional, market-ready brand with a compelling value proposition: bunks that last a lifetime and protect boats better than wood ever could.

Indurotek is now actively taking orders and building partnerships. If you’re a boat owner tired of rotten bunks, a trailer dealer looking for a premium upgrade, or a marine OEM interested in white-label or co-branded options, check out indurotek.com or reach out to Roger directly.

Excited to watch this one grow—congratulations to the Indurotek team on a strong start!

Indurotek Company Overview presentation
Indurotek sales flyer
Indurotek go-to-market plan

Read More

Upcoming: The Road to GLSL

I’m thrilled to be delivering the opening keynote at the inaugural Shading Languages Symposium in February 2026! As someone who’s spent decades immersed in the world of computer graphics, I’m excited to take the audience on a journey through the rich history that led to one of the most transformative innovations in our field: the OpenGL Shading Language (GLSL). My talk will weave together three technical threads that converged with the creation of GLSL:

  • The Quest for Photorealism – The relentless drive to make rendered images indistinguishable from photographs.
  • Efforts to Tame Software Complexity in Rendering – Managing the exploding sophistication of software used to interact with graphics hardware.
  • Hardware Advancements – The evolution of GPUs that finally made programmable shading practical and performant.

From there, I’ll dive into the behind-the-scenes story of how GLSL was conceived, implemented, and standardized – a collaborative effort that forever changed how we program graphics hardware. Along the way, I’ll share some fascinating historical nuggets to bring the story to life, including:

  • Some surprising facts about the order of key computer graphics innovations (e.g., can you place these four innovations in the correct chronological order: Phong shading, environment mapping, texture mapping, fractals?)
  • The profound truth about computer graphics uttered by Ken Perlin when he first demonstrated procedural texturing and someone asked, “But isn’t this just fake?”
  • The “aha!” moment for ray tracing: Turner Whitted’s breakthrough, inspired by a specific SIGGRAPH paper – with a surprising involvement from a milkshake!
  • The dramatic day when 3Dlabs first presented their initial GLSL work to the OpenGL Architecture Review Board.
  • The very first GLSL shader ever executed on real hardware.

These stories aren’t just trivia – they illustrate the creativity, serendipity, and sheer persistence that propelled shading languages from academic curiosities to the cornerstone of modern real-time graphics. If you’re attending the symposium, I can’t wait to share this history with you in person. And if not, be on the lookout for the slides and (hopefully) the video recording of the presentation!

One last thing: After several decades in the field of software development, I finally realized what software really is. I will share that profound insight with you as well!

See you in February!

Read More

Creating the LunarG “Software Cube”

Hey everyone! As LunarG’s Marketing Manager, I recently dove into prototyping a fresh visual identity that captures what LunarG’s all about—low-level GPU software like Vulkan and WebGPU. I started brainstorming and thought, “How cool would it be to show advanced computing with a circuit board and some orange energy flow?” So, I tossed that idea into Grok and Midjourney—back when Grok was just starting to whip up images from text.

After churning out 40–50 images, I fell in love with the vibrant colors and vibe from Grok’s creations, especially ones with a microprocessor chip on a circuit board. But wait—slapping the LunarG logo on the chip made us look like a hardware company, and LunarG is all about software! So, I switched gears and came up with the idea of a cube to represent low-level software, running on top of hardware and powering a variety of sophisticated graphics applications. I tweaked the prompt to: “a dark cyan circuit board with orange energy flow, and a transparent cyan cube floating above with 3D sim software screenshots on the sides.” After another 40–50 tries, I hit on an image the whole team loved.

Sure, the cube’s side images don’t hold up under a magnifying glass, but that circuit? Super cool! We rocked it on LunarG’s homepage and at Vulkanised 2025 in Cambridge for a few months.

Then, LunarG wanted a promo video for AWE 2025 in Long Beach this June. My boss, David Desormeaux, and I brainstormed bringing the AI image to life with the cube as the star. We needed a 3D model, so I tried a Fiverr freelancer—meh, not great.

Then I chatted with a colleague from my time at Intel, Alexander Oshiro, shared the AI pic, and explained the vision. Alex got it and said he would love to do the whole project. He connected with his friend Eric Stenmark about doing the 3D modeling work. Get this—after describing the project and sharing the concept image on Thursday, Eric had a perfect Blender model of the cube and circuit board by Friday night! He had to texture map the application screenshots from the original AI pic, but it was clear that we’d be able to swap in real video footage when we rendered the final animation. As for the model itself, no revisions were needed—NOT ONE! The colors and details were spot-on!

That proof shot was enough to green-light the video project. Eric then crafted a “cube construction” animation, like software coming alive on hardware. He and Alex even figured out a slick way to animate the 3D cube with stock video textures in a 2D video. Check out a frame from the final video below, along with the video itself—pretty awesome, right?

This whole journey shows how AI, ultra-talented artists, and teamwork can make magic happen!