Rust and LLMs: The Compiler Does What Code Review Shouldn't Have To

March 9, 2026, by Alex Rezvov

This is what I learned running a Rust team that builds with AI coding tools daily. It is also, in full transparency, the reasoning behind the Rust internship openings announced at the end of this post.

Rust's biggest barrier to adoption was always the learning curve. Ownership, lifetimes, the borrow checker. Months before a developer became productive. That was the main obstacle, though not the only one: compile times are slow, the async story is complex, and half the crate ecosystem is still pre-1.0.

LLMs changed the equation. The model handles the syntax. The compiler still checks everything it always checked. The learning curve dropped from months to weeks, but the safety guarantees did not weaken.

Rust is no longer "great but expensive to adopt." It is one of the strongest languages you can pair with an AI coding tool today. Here is why I think so, and where the argument has limits.

Where this opinion comes from

I have been writing code for over 18 years. C++ for 6 years. Python for 5 years of daily work and 15 years of reaching for it periodically. JS/TS/Node.js for 2 years full-time and 10 years on and off. Go for 3 years of committed production work. .NET and Java I tried on a couple of projects but never lived in.

When I say Rust replaced Go for me across the board, that comes from direct comparison on real projects, not from reading someone else's benchmarks.

Rust before LLMs

I adopted Rust in production before LLM coding tools existed, so I experienced both eras. Back then, adopting Rust was expensive. Onboarding took months. The hiring pool was small. Every new team member was an investment.

I made that investment anyway. Rust produced code that was fast, safe, and precise. The cost was human time, and I accepted it.

What Rust gets right

Rust is as fast as C++ with fewer footguns, though async Rust has its own complexity (Pin<Box<dyn Future<Output = Result<...>>>> is nobody's idea of elegance). More expressive than Go without the verbosity. Memory-safe without a garbage collector.

Every language I used before solved some problems and introduced others. C++ gave me performance but handed me segfaults. Python gave me speed of development but took away runtime safety. Go was a practical middle ground, and it has improved since (generics arrived in 1.18), but it still lacks enums, exhaustive pattern matching, and enforced error handling by default. Rust addressed all of these.

The main downside

Rust's learning curve is real. Ownership and borrowing are concepts that do not exist in other mainstream languages. Lifetimes confuse experienced developers for weeks. The borrow checker rejects code that would compile fine in any other language.

I used to consider this a feature, not a bug. Someone who mastered Rust could deliver code that teams on other stacks could not match for reliability. But first you had to learn Rust, and that is not a weekend project.

The strategic bet

My reasoning was simple. A Rust team ships reliable, high-performance code. The barrier to entry means you cannot replicate this capability by hiring fast. It requires preparation: months of onboarding, mentoring, pair programming.

The downside of this bet was real too: bus factor risk, a tiny hiring pool, and the constant question of whether the investment would pay off before the team burned out on complexity.

LLMs changed the rules

Then Copilot appeared, then Cursor, then Claude and Windsurf. The way code gets written changed.

But let me be precise about what changed and what did not. The developer's role shifted from typing code to specifying intent, decomposing problems for the AI agent, and verifying the output. The LLM generates. The compiler checks. The developer decides what to build and whether the result is correct.

This is not "Monte Carlo development" where you throw random tokens at a wall. The developer writes specs, reviews every output, and makes architectural decisions. The LLM accelerates the mechanical part. The compiler catches the mechanical errors. The human handles everything the compiler cannot: logic, architecture, business rules.

The main downside got smaller

The complexity that kept people away from Rust became less of a human burden. The LLM handles most of the syntax. The compiler rejects incorrect code until every type is resolved, every borrow is valid, every error case is handled.

The feedback loop works like this: LLM generates code, cargo check rejects it, LLM reads the error message, LLM fixes the code. Rust's compiler errors are detailed, specific, and often include a suggested fix. In my experience, LLMs self-correct after a compiler error on the first or second attempt in most cases. I do not have a rigorous benchmark for this. It is a working observation, not a published metric.
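Here is an illustrative sketch of that loop (a made-up example, not a transcript from a real session; the function name is mine). The commented-out first draft is the kind of code an LLM often produces, and the compiling version below it is the kind of fix the model typically lands on after reading the E0502 error:

```rust
// A first LLM draft often looks like this and fails to compile:
//
//     fn remove_empty(items: &mut Vec<String>) {
//         for item in items.iter() {               // immutable borrow...
//             if item.is_empty() {
//                 items.retain(|s| !s.is_empty()); // ...while borrowing mutably
//             }
//         }
//     }
//
// rustc rejects it with E0502 ("cannot borrow `*items` as mutable because it
// is also borrowed as immutable") and points at both borrows. Fed that error
// back, the model usually converges on the single-pass fix:

fn remove_empty(items: &mut Vec<String>) {
    items.retain(|s| !s.is_empty());
}

fn main() {
    let mut v = vec!["a".to_string(), "".to_string(), "b".to_string()];
    remove_empty(&mut v);
    assert_eq!(v, vec!["a".to_string(), "b".to_string()]);
    println!("{:?}", v);
}
```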

In Python or JavaScript, the equivalent mistake passes silently (there is no compilation step that catches type or null errors), and surfaces at runtime.

The downside did not disappear entirely. Compile times are still slow. The target/ directory still eats disk space. Async Rust is still complex. But the biggest barrier shrank enough to stop being a dealbreaker.

Why Rust specifically

Not every advantage of Rust matters for LLM-assisted development. Some are general language strengths. Below are the ones that directly improve the quality of LLM-generated code, with honest notes on where the argument has limits.

Ownership and borrowing: memory safety without a garbage collector

Managed languages (Go, Python, Java, C#) also prevent use-after-free and double-free. This is not unique to Rust. What is unique: Rust does it without a garbage collector. No GC pauses, no unpredictable memory overhead, deterministic resource cleanup via Drop.

For LLM-generated code this matters because the compiler catches memory-related mistakes at build time, not at runtime through a GC or through crashes. When an LLM generates a function that violates ownership rules, the feedback is immediate and specific.
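A minimal sketch of deterministic cleanup via Drop (the `Tracked` type and the logging setup are mine, for illustration): cleanup runs at a known point, end of scope in reverse declaration order, with no GC deciding when.

```rust
use std::cell::RefCell;

// A value that records the moment it is cleaned up.
struct Tracked<'a> {
    name: &'a str,
    log: &'a RefCell<Vec<String>>,
}

impl Drop for Tracked<'_> {
    fn drop(&mut self) {
        // The compiler inserts this call at end of scope; no GC pause involved.
        self.log.borrow_mut().push(format!("dropped {}", self.name));
    }
}

fn drop_order() -> Vec<String> {
    let log = RefCell::new(Vec::new());
    {
        let _first = Tracked { name: "first", log: &log };
        let _second = Tracked { name: "second", log: &log };
    } // Scope ends here: `_second` is dropped, then `_first`.
    log.into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["dropped second", "dropped first"]);
    println!("{:?}", drop_order());
}
```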

Thread safety at compile time

Rust's Send and Sync traits enforce thread safety during compilation. When an LLM writes concurrent code, the compiler prevents data races. Not at runtime with a race detector that catches some of them (Go's approach), and not with manual synchronization that you hope is correct (Java's approach).

This extends to async code. Rust's async/await carries the same ownership guarantees. An LLM can generate async handlers, and the compiler ensures no two tasks access the same data without synchronization.
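A small sketch of what the compiler enforces here (threaded rather than async, to keep it dependency-free; the function is mine): shared mutable state must go behind a synchronization primitive, or the closure passed to `thread::spawn` simply does not compile.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Replace `Arc<Mutex<usize>>` with a bare `&mut usize` shared across threads
// and this will not compile: the closures would alias mutable state.
fn parallel_count(threads: usize, per_thread: usize) -> usize {
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().expect("lock poisoned") += 1;
            }
        }));
    }
    for h in handles {
        h.join().expect("worker thread panicked");
    }
    let total = *counter.lock().expect("lock poisoned");
    total
}

fn main() {
    assert_eq!(parallel_count(4, 1000), 4000);
    println!("total: {}", parallel_count(4, 1000));
}
```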

A real caveat: an LLM can suggest unsafe impl Send for MyType {} to make the compiler stop complaining. This is dangerous. A crate-level #![forbid(unsafe_code)] in application code prevents it. Reserve unsafe for isolated, audited modules.

Lifetimes: explicit data flow

Lifetimes force every reference to declare how long it lives. When an LLM generates a function that returns a reference to a local variable, the compiler catches it. When it generates a struct that holds a reference longer than the data it points to, the compiler catches that too.

No other mainstream language has this mechanism. It catches a class of bugs that in other languages only surface as intermittent runtime failures.
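The classic two-input case makes the mechanism concrete (a textbook-style sketch; names are mine). The lifetime annotation says "the result borrows from one of the inputs," and the compiler then refuses any use of the result after either input is gone:

```rust
// The annotation is required here: the returned reference may borrow from
// either argument, so both must live at least as long as the result.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let left = String::from("borrow checker");
    let result;
    {
        let right = String::from("GC");
        // Copying to an owned String ends the borrow before `right` dies.
        result = longest(&left, &right).to_string();
        // Keeping `longest(&left, &right)` itself alive past this block would
        // not compile: the reference may point into `right`, dropped here.
    }
    assert_eq!(result, "borrow checker");
    println!("{result}");
}
```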

No null, no unhandled errors

Option<T> replaces null. The LLM cannot forget to handle the absent-value case. There is no NullPointerException, no TypeError: undefined is not a function. The compiler forces explicit handling.

Result<T, E> replaces exceptions. Every function that can fail declares it in its signature. The LLM must handle or propagate the error. There is no hidden throw that a caller does not know about.
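A minimal sketch of both halves (function names are mine): the failure mode is in the signature, and `?` propagates it explicitly instead of an invisible throw.

```rust
use std::num::ParseIntError;

// The signature declares that this can fail; callers must deal with it.
fn parse_port(raw: &str) -> Result<u16, ParseIntError> {
    raw.trim().parse::<u16>()
}

// `?` propagates the error upward; nothing is silently swallowed.
fn parse_pair(a: &str, b: &str) -> Result<(u16, u16), ParseIntError> {
    Ok((parse_port(a)?, parse_port(b)?))
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not a port").is_err());
    assert_eq!(parse_pair("80", "443"), Ok((80, 443)));
    println!("ok");
}
```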

Go has a similar pattern with error, but Go does not force you to handle it. You can write result, _ := doSomething() and ignore the error silently.

Rust has its own escape hatch: .unwrap(). LLMs love generating .unwrap() on every Result, which panics at runtime on failure. This is the Rust equivalent of Go's _ :=. The difference is in the default. Rust forces you to acknowledge the Result type; .unwrap() is a conscious opt-out that is easy to grep for and ban. In Go, ignoring the error is the path of least resistance and looks identical to normal code.

The practical fix: add #![deny(clippy::unwrap_used)] (note the inner-attribute #! syntax) at the top of your lib.rs or main.rs and enforce it in CI. One line of configuration. After that, clippy rejects every .unwrap() and the LLM must handle errors explicitly. Go has no comparable standard mechanism to enforce error handling project-wide.
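A sketch of what the banned and allowed versions look like side by side (the `port_from` helper is mine, for illustration): with the lint active, cargo clippy fails on a bare .unwrap(), while an explicit fallback passes.

```rust
// Crate root of main.rs. The `#!` inner-attribute form applies crate-wide;
// plain rustc accepts the tool lint, and `cargo clippy` enforces it.
#![deny(clippy::unwrap_used)]

// With the lint on, `raw.unwrap()` here would fail clippy. The explicit
// fallback compiles and states the failure behavior in the code.
fn port_from(raw: Option<String>) -> u16 {
    raw.and_then(|p| p.parse::<u16>().ok()).unwrap_or(8080)
}

fn main() {
    assert_eq!(port_from(None), 8080);
    assert_eq!(port_from(Some("9000".to_string())), 9000);
    assert_eq!(port_from(Some("junk".to_string())), 8080);
    println!("ok");
}
```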

Exhaustive pattern matching

Add a variant to an enum. The compiler flags every match block in the codebase that does not handle the new variant. The LLM cannot forget a case.

Swift (exhaustive switch over enums) and Kotlin (sealed classes with when) offer similar checks. Python, Go, and JavaScript do not. In those languages, adding a new case to a union type is a manual search through the codebase. In Rust, the compiler does it for you.
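A minimal sketch (the enum is mine): the key discipline is writing match arms without a wildcard, so that adding a variant turns every stale match into a compile error instead of a silent fall-through.

```rust
enum Status {
    Active,
    Suspended,
    Deleted,
    // Add `Archived` here and the `match` below stops compiling
    // until it handles the new variant.
}

fn label(s: &Status) -> &'static str {
    match s {
        Status::Active => "active",
        Status::Suspended => "suspended",
        Status::Deleted => "deleted",
        // Deliberately no `_` arm: a wildcard would silently absorb
        // future variants and defeat the exhaustiveness check.
    }
}

fn main() {
    assert_eq!(label(&Status::Active), "active");
    assert_eq!(label(&Status::Suspended), "suspended");
    assert_eq!(label(&Status::Deleted), "deleted");
    println!("ok");
}
```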

Algebraic data types for precise domain modeling

Rust enums carry data, not just labels.

```rust
enum PaymentStatus {
    Pending,
    Completed(Receipt),
    Failed(PaymentError),
}
```

You cannot access Receipt without first matching Completed. You cannot ignore Failed. When an LLM models a domain in Rust, the type system prevents entire categories of "forgot to check the status" bugs that are common in languages with stringly-typed or integer-coded statuses.
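To make that concrete, here is a runnable sketch around the same enum (the Receipt and PaymentError definitions are placeholders of my own; the post does not define them): the only path to the receipt data runs through the Completed arm.

```rust
// Placeholder payload types, invented for this sketch.
struct Receipt { total_cents: u64 }
struct PaymentError { reason: String }

enum PaymentStatus {
    Pending,
    Completed(Receipt),
    Failed(PaymentError),
}

// There is no way to touch `Receipt` without matching `Completed` first,
// and the match must also account for `Pending` and `Failed`.
fn summary(status: &PaymentStatus) -> String {
    match status {
        PaymentStatus::Pending => "still pending".to_string(),
        PaymentStatus::Completed(r) => format!("paid {} cents", r.total_cents),
        PaymentStatus::Failed(e) => format!("failed: {}", e.reason),
    }
}

fn main() {
    let ok = PaymentStatus::Completed(Receipt { total_cents: 4200 });
    assert_eq!(summary(&ok), "paid 4200 cents");
    let bad = PaymentStatus::Failed(PaymentError { reason: "card declined".to_string() });
    assert_eq!(summary(&bad), "failed: card declined");
    println!("{}", summary(&PaymentStatus::Pending));
}
```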

Traits as contracts (with limits)

Rust has no class inheritance. Traits define contracts. When an LLM implements a trait, the compiler verifies every required method is present, every type parameter is satisfied.

This is composition over inheritance by language design, not by convention. The LLM cannot produce a God Object in the traditional OOP sense.

But Rust does not prevent bad architecture entirely. An LLM can produce a trait with 47 methods, or a single module with 3000 lines. The compiler enforces type contracts, not design taste. Architecture is still the developer's job.
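A small sketch of a trait as a contract (the `Storage` trait and `MemoryStorage` type are mine, for illustration): the compiler verifies the implementation is complete, and generic code programs against the contract rather than a class hierarchy.

```rust
use std::collections::HashMap;

// The contract: any type claiming `Storage` must provide both methods.
trait Storage {
    fn put(&mut self, key: String, value: String);
    fn get(&self, key: &str) -> Option<&String>;
}

struct MemoryStorage {
    map: HashMap<String, String>,
}

impl Storage for MemoryStorage {
    // Omit either method and compilation fails with E0046
    // ("not all trait items implemented").
    fn put(&mut self, key: String, value: String) {
        self.map.insert(key, value);
    }
    fn get(&self, key: &str) -> Option<&String> {
        self.map.get(key)
    }
}

// Generic code depends only on the trait bound, not a concrete type.
fn roundtrip<S: Storage>(store: &mut S) -> Option<String> {
    store.put("k".to_string(), "v".to_string());
    store.get("k").cloned()
}

fn main() {
    let mut store = MemoryStorage { map: HashMap::new() };
    assert_eq!(roundtrip(&mut store), Some("v".to_string()));
    println!("ok");
}
```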

Compiler error quality

Rust's compiler messages are specific, include the relevant code, and often suggest the fix. This matters for the LLM feedback loop. When cargo check fails, the error message is structured enough for the LLM to parse and correct automatically.

Compare with C++, where a template error can produce 200 lines of unreadable output. Or Go, where the error is correct but terse. Rust hits a useful middle point: enough detail for automated correction, not so much noise that the signal gets buried.

Clippy: a second layer of automated review

cargo clippy runs 700+ lints beyond what the compiler checks. Idiomatic patterns, performance issues, common mistakes. It is integrated into the standard toolchain.

This is not unique to Rust. ESLint and golangci-lint exist. But clippy catches categories of issues that are specific to Rust's type system and ownership model, and it works without configuration out of the box.

Undefined behavior: contained, not eliminated

Safe Rust eliminates undefined behavior in your application code. An LLM writing safe Rust cannot produce UB. This is a real guarantee.

But let me be honest about the boundary. The standard library and many foundational crates contain unsafe internally. That unsafe code is isolated, audited, and tested. Your application code should use a crate-level #![forbid(unsafe_code)] to ensure the LLM does not introduce unsafe blocks. When you need unsafe, it goes into a separate, reviewed module with clear invariants documented.

In C and C++, UB is everywhere and unavoidable. In Rust, it is contained behind explicit boundaries. That is a meaningful difference, not an absolute one.

What the compiler does not catch

The compiler checks types, memory, ownership, error handling, and pattern completeness. It does not check:

  • Business logic correctness
  • Architectural decisions
  • Whether the code actually solves the right problem
  • Performance (this one deserves its own section)

These require tests, code review, and specifications. The compiler handles a large, specific class of bugs. Everything else is still on the developer.

The .clone() problem

When an LLM hits a borrow checker error, its fastest fix is .clone(). The code compiles, but you accumulate unnecessary allocations across the codebase. The compiler will not flag this. Watch for it during review.
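A sketch of the pattern to watch for (function names are mine): the clone-happy signature takes ownership and forces the caller to copy, while the borrowing signature does the same work with no allocation at the call site.

```rust
// Clone-happy draft: takes ownership, so a caller that wants to keep its
// String must clone it first.
fn shout_owned(name: String) -> String {
    format!("{}!", name.to_uppercase())
}

// Borrowing version: same behavior, no ownership transfer, no clone needed.
fn shout(name: &str) -> String {
    format!("{}!", name.to_uppercase())
}

fn main() {
    let name = String::from("ferris");
    // The owned signature forces a clone to keep using `name` afterwards:
    let a = shout_owned(name.clone());
    // The borrowed signature does not:
    let b = shout(&name);
    assert_eq!(a, b);
    assert_eq!(b, "FERRIS!");
    println!("{b}");
}
```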

The distance has shrunk

A Rust team used to be a competitive edge because so few developers knew the language. LLMs narrowed that gap. The advantage of knowing Rust syntax is fading.

What remains is the compiler itself. The same strict, detailed compiler that catches bugs regardless of who wrote the code. That advantage does not fade. It does not depend on whether a human or an LLM typed the characters. It checks everything the same way.

The strategic advantage shifted from "we know a hard language" to "we use a language with the strictest automated checks available." The first is about people. The second is about tooling. Tooling scales.

Not using Rust with LLMs is leaving safety on the table

Rust is not the only compiled language with a strong type system. But among production-ready options, it checks the broadest set of correctness properties at compile time: memory safety, thread safety, null safety, error handling completeness, exhaustive pattern matching.

When the code is written by an LLM, these checks become more valuable, not less. The LLM is fast but does not reason about correctness the way a careful developer does. The Rust compiler catches a specific, large class of mistakes that would otherwise require manual review or runtime monitoring.

The trade-off is real: slower compilation, larger build artifacts, steeper initial setup. Whether that trade-off is worth it depends on your project. For long-lived backend services where reliability matters, I believe it is.

We are hiring

We are looking for interns for our AI-first Rust team at ForEach Partners.

You need to understand Rust at the concept level: what ownership means, what a trait is, why Result exists. Deep language expertise is not the entry requirement, because the compiler teaches you as you go and the LLM handles the syntax while you learn. The internship is where you build that expertise.

What matters more: thinking systematically, breaking tasks down for an AI agent, and verifying whether the output is correct.

Each intern gets a Rust mentor and an AI tooling mentor. You start on sandbox projects with real codebases and real specifications. The path leads from sandbox to paid commercial work.

Details and application: jl.foreachpartners.com/positions/rust-dev-ai
