What CTOs Actually Said When I Asked About Rust and LLMs


March 14, 2026 · By Alex Rezvov

I recently wrote about why my team chose Rust for LLM-assisted development. The short version: the compiler catches what code review shouldn't have to.

So I posted a question to a CTO mailing list: what languages are you using for LLM-assisted development, and how do they hold up?

Seven replies came in within 24 hours. The answers split into three camps.

Camp 1: Rust works, and the strictness pays off

Several respondents confirmed the same thing we see. One team enforces clippy at pedantic level with pre-commit checks and calls the results "fantastic." They also expose rust-analyzer LSP as tooling for their agents, which gives LLMs accurate documentation for the exact dependency version in use.
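For anyone who wants to try the same gate, a minimal sketch of the check itself (the flags are standard clippy options; wiring it into pre-commit or CI is left to your tooling of choice):

```shell
# Run from a pre-commit hook or CI step: warn on the full
# pedantic lint group, then promote all warnings to errors
# so any pedantic finding fails the check.
cargo clippy --all-targets --all-features -- \
    -W clippy::pedantic \
    -D warnings
```

The same policy can live in Cargo.toml under the `[lints.clippy]` table if you prefer configuration over flags.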

Another CTO ships Rust compiled to WASM in the browser and native Rust on the backend, across multiple startups. No regrets.

A third is building an AI Gateway in Rust using Claude Code. Their workflow: specs in Linear, agents write implementation plans, agents build and test, PRs merge automatically. We're heading in the same direction, but they're further along.

Camp 2: Rust is wrong for web backends

The counterargument was direct: Go libraries are more developed, performance is indistinguishable due to network latency, and the borrow checker adds little value when your backend is a set of stateless instances that restart regularly. "Multiple friends regret using Rust for web backends. The slow speed of Rust development kills the startup."

Even a Rust advocate in the thread agreed: "Rust is still painful for production web services doing anything more than bare APIs." His team settled on TypeScript for frontend, Python for backend services, and Rust for accelerating targeted Python code, CLIs, and embedded devices.

Fair point for traditional development. But there's a nuance this argument misses: the main knock against Rust has always been slow development speed, and that was driven by the learning curve. Ownership, lifetimes, the borrow checker. That was Rust's single biggest barrier, and LLMs removed most of it. When the agent fights the borrow checker instead of the developer, the "slow development" argument loses its teeth. What remains is the upside: reliability and performance, now without the traditional cost of getting there.

Camp 3: Rails and the training data advantage

Two respondents chose Ruby on Rails for agentic-first development. Their reasoning: 20+ years of documentation, blog posts, conference videos, and open source projects means LLMs have a mountain of training data. Rails conventions are strong, code is human-readable, and tools like Rubocop provide linting similar to what clippy does for Rust.

One of them raised a valid concern: "I would have thought that Golang has more training data than Rust, and may produce better results due to that." More on this below.

Training data quality

One response reframed the question for me. A developer wrote:

"The code quality feels better, but I do not know if this is language features, the quality of Rust code on which it is trained, or if I am just wrong."

Rust is young compared to most languages. It attracts developers who've already spent years in other ecosystems. The barrier to entry filtered the training data: there is simply less bad Rust code out there for LLMs to learn from.

Compare that to PHP (which I actually like as a language). Decades of WordPress plugins, tutorials, and quick-and-dirty scripts produced a massive corpus of questionable code that LLMs happily absorbed. The same goes for JavaScript, Python, and every other language that made it easy for beginners to publish code.

Rust's relative youth and high entry bar may have given LLMs a cleaner dataset to train on. Add the compiler strictness on top of that: you get better output because the model learned from better input, and the compiler rejects whatever still doesn't pass.

So when people ask "why does LLM-generated Rust feel surprisingly good," the answer might be two things at once: strict compiler plus clean training corpus. Neither alone fully explains it.

What this means for language choice

If we accept that LLMs will write most code, language choice shifts. As one CTO in the thread put it:

"Choice of language is going to depend more on typing and constraint system capabilities than some of our previous concerns like library availability, because we will want to limit the variety of programs we can express and will care much less about rework and writing novel code."
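A tiny sketch of what "limiting the variety of programs we can express" looks like in practice. The names here are illustrative, not from the thread: an enum with an exhaustive match means an LLM-written handler cannot silently skip a state, because adding a variant is a compile error until every match is updated.

```rust
// Illustrative state machine: the compiler, not code review,
// enforces that every state is handled.
#[derive(Debug, PartialEq)]
enum JobState {
    Queued,
    Running,
    Done,
}

fn next(state: JobState) -> JobState {
    // No catch-all arm: if anyone (human or LLM) adds a new
    // JobState variant, this function stops compiling until
    // the new case is handled explicitly.
    match state {
        JobState::Queued => JobState::Running,
        JobState::Running => JobState::Done,
        JobState::Done => JobState::Done,
    }
}

fn main() {
    assert_eq!(next(JobState::Queued), JobState::Running);
    assert_eq!(next(JobState::Running), JobState::Done);
    println!("ok");
}
```

That is the constraint-system argument in miniature: the language narrows the space of programs that type-check, so the supervisor of the agent's output is the compiler rather than a human reviewer.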

This doesn't make Rust the right choice for everything. The thread made that clear. But it does suggest that the criteria for picking a language are changing, and "how well does the compiler supervise an LLM" is now a legitimate question to ask.
