Next Few Years

rust-analyzer is a new "IDE backend" for the Rust programming language. Support rust-analyzer on Open Collective.

During the past several months, I’ve been swamped with in-the-trenches rust-analyzer work. Today, I spontaneously decided to take a step back and think about a longer-term "road map" for rust-analyzer.

What follows are my (@matklad) personal thoughts on the matter; they do not necessarily reflect the consensus view of the IDE or compiler teams :-)

Unexpected Success

One of the most surprising aspects of rust-analyzer for me is how useful it already is. Today, I write Rust code enjoying fast code completion, mostly correct go-to-definition, and a plethora of assists. Even syntax highlighting inside macros works!

My original plan for rust-analyzer was to write a quick one-to-two-year hack to demonstrate a proof-of-concept IDE support, something to strive for rather than a finished product. Obviously, we have massively overshot this goal: people depend on rust-analyzer for productive Rust programming today. This creates its own opportunities and dangers, which inform this planning document.

Official LSP Server

People write a ton of Rust today, and they deserve at least a baseline level of IDE support. I think our immediate goal is to make rust-analyzer easier to use in its current state, effectively implementing RFC2912.

The amount of programming work on the rust-analyzer side is relatively small here: we need to fix various protocol conformance issues, clean up various defaults to be less experimental, write documentation which doesn’t require a lot of enthusiasm to understand, etc. The amount of org stuff is much bigger — we need to package rust-analyzer with rustup, merge the RLS and rust-analyzer VS Code extensions, figure out the repository structure, etc.

Separately, I want to make sure that rust-analyzer is usable inside large non-Cargo-based monorepos. We have some initial support for this already, but there are a bunch of details we need to iron out.
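For context, the entry point for non-Cargo builds today is a rust-project.json file which describes the crate graph by hand. Here is a minimal, entirely made-up example (the paths are hypothetical and the exact schema is still in flux, so treat this as a sketch rather than documentation):

    {
      "crates": [
        {
          "root_module": "library/foo/lib.rs",
          "edition": "2018",
          "deps": [],
          "cfg": []
        },
        {
          "root_module": "server/main.rs",
          "edition": "2018",
          "deps": [{ "crate": 0, "name": "foo" }],
          "cfg": []
        }
      ]
    }

The idea is that a build system like Bazel or Buck can generate this file, giving rust-analyzer a crate graph without involving Cargo.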

Dangers of Accidental Architecture

The main danger I see is that rust-analyzer could ossify in its present state. This would be bad because, although the current rust-analyzer architecture is right in broad strokes, a lot of important and hard-to-change details are wrong. After we push rust-analyzer to the general public, we should focus on boring implementation & design work, with relatively few shiny gifs and a lot of foundational work for the next decade.

Bringing Chalk to Rustc

rust-analyzer has been using chalk as its trait solver for a long time now. It would be good to finish this work, integrate chalk into rustc as well, and finally give people their GATs.

Single Parser and Syntax Tree

We should share the parser between rustc and rust-analyzer already. Parsing is one of the most interesting bits of the compiler from the IDE point of view. By transitioning rustc to a lossless syntax tree we’ll cross the most important barrier, and it will be a downhill road from that point on. The design space here is, I think, well understood, but the amount of work to do is large. At some point, I should take a break from actively hacking on rust-analyzer and focus on sharing the parser.
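To make "lossless" concrete: every token, including whitespace and comments, is kept in the tree, so the original text can always be reproduced exactly. A toy sketch of the idea (rust-analyzer’s real implementation of this lives in the rowan crate; the names here are made up):

    // A lossless tree keeps all trivia, so `text()` of the root node
    // reproduces the input source byte-for-byte.
    enum Element {
        Token { text: String },
        Node { children: Vec<Element> },
    }

    impl Element {
        fn text(&self) -> String {
            match self {
                Element::Token { text } => text.clone(),
                Element::Node { children } => {
                    children.iter().map(|c| c.text()).collect()
                }
            }
        }
    }

This byte-for-byte property is precisely what an IDE needs for refactorings and precise edits, unlike a typical compiler AST, which drops whitespace and comments.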

Virtual File System

The most fundamental data structure in rust-analyzer, even more fundamental than a syntax tree, is the VFS, or Virtual File System. It serves two dual goals:

  • providing consistent immutable snapshots of the file system,

  • applying transactional changes to the state of the file system.

This abstraction is the boundary between the pure-functional universe of rust-analyzer and the messiness of the external world. It needs to bridge case-insensitive file systems, symlinks, and cycles to the simpler model of a "tree with utf8 paths" that we want on the inside. Additionally, it should work with non-path files: there are use-cases where we want to analyze Rust code which doesn’t really reside on the file system.
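To make the two goals concrete, here is a hypothetical sketch of such an interface. None of these names match the actual rust-analyzer code; it just shows the shape: cheap immutable snapshots on the read side, atomic batched changes on the write side.

    use std::{collections::HashMap, sync::Arc};

    /// The simplified "tree with utf8 paths" model: a VfsPath need not
    /// correspond to a real file on disk.
    #[derive(Clone, PartialEq, Eq, Hash, Debug)]
    pub struct VfsPath(pub String);

    /// A consistent, immutable view of all file contents.
    #[derive(Clone, Default)]
    pub struct Snapshot {
        files: Arc<HashMap<VfsPath, Arc<str>>>,
    }

    impl Snapshot {
        pub fn read(&self, path: &VfsPath) -> Option<Arc<str>> {
            self.files.get(path).cloned()
        }
    }

    /// One element of a transactional batch of changes.
    pub enum Change {
        AddOrUpdate(VfsPath, Arc<str>),
        Remove(VfsPath),
    }

    #[derive(Default)]
    pub struct Vfs {
        current: Snapshot,
    }

    impl Vfs {
        /// Applies a whole batch atomically and returns the new snapshot.
        /// Snapshots handed out earlier are unaffected, so analysis can
        /// keep running on a consistent view while new changes arrive.
        pub fn apply(&mut self, changes: Vec<Change>) -> Snapshot {
            let mut files = (*self.current.files).clone();
            for change in changes {
                match change {
                    Change::AddOrUpdate(path, text) => {
                        files.insert(path, text);
                    }
                    Change::Remove(path) => {
                        files.remove(&path);
                    }
                }
            }
            self.current = Snapshot { files: Arc::new(files) };
            self.current.clone()
        }
    }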

One specific aspect I am struggling with is dynamism. On the one hand, it seems that a good design is to require specifying the set of files in the VFS upfront, as a set of globs. This is important because, to properly register file watchers without losing updates, you need to crawl the file system eagerly. On the other hand, specifying a set of globs up-front makes changing this set later messy.

I would be curious to hear about existing solutions in this area. One specific question I have is: "How does watchman handle dynamic addition/removal of projects?". If you have any experience to share, please comment on the VFS issue in rust-analyzer. Ideally, we turn VFS into just a crates.io crate, as it seems generally useful, and can encapsulate quite a bit of complexity.

The current VFS is … not great; I don’t feel comfortable building rust-analyzer on top of it.

WASM proc macros

At the moment, proc-macros are implemented as dynamic libraries, loaded into the compiler process. This works well enough for the compiler, but is a pretty bad fit for an IDE:

  • if a proc-macro crashes, it brings down the whole process,

  • it’s hard to limit the execution time of a proc-macro,

  • proc-macros can be non-deterministic, which breaks internal IDE invariants.

At the moment, we paper over this by running proc-macros in a separate process and never invalidating proc-macro caches, but this feels like a hack and has high implementation complexity. It would be much better if proc-macros were deterministic and controllable by definition, and WASM can give us exactly that.
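As a sketch of what WASM could buy us, here is how a host might run a hypothetical WASM-compiled proc-macro with the wasmtime crate: providing no imports means no ambient file system or network access, and fuel metering bounds execution time. This is not something rust-analyzer does today, and the exported expand function is made up for illustration:

    use wasmtime::{Config, Engine, Instance, Module, Store};

    fn expand_in_sandbox(wasm_bytes: &[u8]) -> anyhow::Result<()> {
        let mut config = Config::new();
        // Meter execution, so a macro cannot loop forever.
        config.consume_fuel(true);
        let engine = Engine::new(&config)?;
        let module = Module::new(&engine, wasm_bytes)?;

        let mut store = Store::new(&engine, ());
        // Trap once the budget is exhausted. (Older wasmtime releases
        // call this `add_fuel`.)
        store.set_fuel(1_000_000)?;

        // No imports are provided, so the macro cannot reach the
        // outside world: expansion is a pure function of its input.
        let instance = Instance::new(&mut store, &module, &[])?;
        let expand = instance.get_typed_func::<(), ()>(&mut store, "expand")?;
        // A runaway or crashing macro surfaces as an `Err`, not as a
        // crash of the host process.
        expand.call(&mut store, ())?;
        Ok(())
    }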

I am slightly worried that this will get push-back from folks who want to connect to databases over TCP at compile time :) Long term, I believe that guaranteeing deterministic compilation is hugely important, irrespective of IDE story.

Language Design for Locality

There’s a very important language property that an IDE can leverage to massively improve performance:

What happens inside a function, stays inside the function

If it is possible to type-check the body of a function without looking at the bodies of other functions, you can speed up an IDE by drastically reducing the amount of work it needs to do.

Rust mostly conforms to this property, but there are a couple of annoying violations:

  • local inherent impls with publicly visible methods (see the example below).

  • local trait impls for non-local types.

  • #[macro_export] local macros.

  • local out-of-line modules.

If we want to have fast & correct IDE support, we should phase these out of the language via the edition mechanism.
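The first violation is the easiest one to demonstrate: an inherent impl is visible program-wide even when it is written inside a function body, so type checking a call in one function may require looking into the body of a completely unrelated one:

    pub struct S;

    fn f() {
        // This impl is usable everywhere `S` is, despite being
        // buried inside `f`'s body.
        impl S {
            pub fn method(&self) {}
        }
    }

    fn g(s: S) {
        // Resolving this call requires looking inside `f`.
        s.method();
    }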

Note that auto-trait leakage of impl Trait is not nearly as problematic, as you only need to inspect a function’s body if you call the function. Of course, as an IDE author I’d love to require specifying auto-traits, but, as a language user, I much prefer the current design.
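For completeness, here is what that leakage looks like. Whether the opaque type returned by make is Send can only be discovered by inspecting its body:

    use std::rc::Rc;

    fn make() -> impl std::fmt::Debug {
        // `Rc` is not `Send`, and the opaque return type silently
        // "leaks" that fact to callers.
        Rc::new(92)
    }

    fn assert_send<T: Send>(_: T) {}

    fn main() {
        println!("{:?}", make());
        // Uncommenting the next line breaks the build, and nothing in
        // `make`'s *signature* tells you that it would:
        // assert_send(make());
    }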

Compact Data Structures

rust-analyzer uses a novel and rather high-tech query-based architecture for incremental computation. Today, it is clear that this general approach fits an IDE use-case really well. However, I have a lot of doubts about specific details. I feel that today rust-analyzer lacks mechanical sympathy and leaves a ton of performance on the table: a lot of internal data structures are heap-allocated Arc-droplets; we overuse hashing and underuse indexing; we don’t even intern identifiers!

To get a feeling for how blazingly fast compiler front-ends can be, I highly recommend checking out Sorbet, a type checker for Ruby. You can start with these two links:

I am very inspired by this work, but also embarrassed by how far rust-analyzer is from that kind of raw performance and simplicity.

Part of that, I think, is essential complexity — Rust’s name resolution and macro expansion are hard. But I also wonder if we can change salsa to use Vec-based arenas rather than Arcs in HashMaps.
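As one concrete example of the direction I have in mind: interning identifiers replaces hashed, heap-allocated strings with plain indices into an arena. A minimal sketch (the names are made up; this is not salsa or rust-analyzer code):

    use std::collections::HashMap;

    /// An `Ident` is just an index into the arena below: comparing two
    /// identifiers is an integer comparison, and no string is hashed
    /// more than once.
    #[derive(Copy, Clone, PartialEq, Eq, Hash, Debug)]
    pub struct Ident(u32);

    #[derive(Default)]
    pub struct Interner {
        map: HashMap<String, Ident>,
        arena: Vec<String>,
    }

    impl Interner {
        pub fn intern(&mut self, name: &str) -> Ident {
            if let Some(&id) = self.map.get(name) {
                return id;
            }
            let id = Ident(self.arena.len() as u32);
            self.map.insert(name.to_owned(), id);
            self.arena.push(name.to_owned());
            id
        }

        /// Lookup is plain indexing into a `Vec`, not a hash probe.
        pub fn lookup(&self, id: Ident) -> &str {
            &self.arena[id.0 as usize]
        }
    }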

Parallel and Fast > Persistence

One of the current peculiarities of rust-analyzer is that it doesn’t persist caches to disk. Opening a project in rust-analyzer means waiting a dozen seconds while we process the standard library and dependencies.

I think this "limitation" is actually a very valuable asset! It forces us to keep the non-incremental code-path reasonably fast.

I think it is plausible that we don’t actually need persistent caches at all. rust-analyzer is basically text processing, and the size of the input is in the tens of megabytes (and we ignore most of those megabytes anyway). If we just don’t lose performance here and there, and throw the work onto all the cores, we should be able to load projects from scratch within a reasonable time budget.

The first step here would be establishing the culture of continuous benchmarking and performance tuning.
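A hypothetical starting point is one benchmark per interesting workload, tracked continuously in CI so that regressions are caught when they land rather than months later. Sketched with the criterion crate (the workload function is just a placeholder):

    use criterion::{criterion_group, criterion_main, Criterion};

    // Placeholder standing in for "load a project from scratch".
    fn cold_load_workload() -> usize {
        (0..1_000).sum::<usize>()
    }

    fn bench_cold_load(c: &mut Criterion) {
        c.bench_function("cold_load", |b| b.iter(cold_load_workload));
    }

    criterion_group!(benches, bench_cold_load);
    criterion_main!(benches);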

We’ve already successfully used rust-analyzer to forge an architecture which works in an IDE at all. Now it’s time to experiment with an architecture which works fast, just as all Rust code should :-)

Optimizing Build Times

In my opinion, the two most important characteristics determining the long-term success of a project are:

  • How long does it take to execute most of the tests?

  • How long does it take to build a release version of the project for testing?

I am very happy with the testing speed of rust-analyzer. One of my mistakes in IntelliJ was adding a lot of tests that use Rust’s standard library and are slow for that reason. In rust-analyzer, there are only three uber-integrated tests that need the real libstd; all the others work from in-memory fixtures which contain only the relevant bits of std.
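To illustrate the fixture style: a test spells out its entire "project", std included, as a single in-memory string, with `//- /path` headers separating the files. The headers mirror rust-analyzer’s fixture syntax, but the toy parser below is only a stand-in for its real test support:

    /// Splits a fixture string into (path, contents) pairs.
    fn parse_fixture(text: &str) -> Vec<(String, String)> {
        let mut files = Vec::new();
        for chunk in text.split("//- ").skip(1) {
            let (path, body) = chunk.split_once('\n').unwrap();
            files.push((path.trim().to_string(), body.to_string()));
        }
        files
    }

    #[test]
    fn fixture_contains_only_what_the_test_needs() {
        let files = parse_fixture(
            r#"
    //- /std/lib.rs
    pub mod option { pub enum Option<T> { None, Some(T) } }
    //- /main.rs
    fn main() {}
    "#,
        );
        assert_eq!(files.len(), 2);
        assert_eq!(files[0].0, "/std/lib.rs");
    }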

But the build times leave a lot to be desired. And this is hugely important — the faster you can build the code, the faster you can do everything else. Heck, even improving build times requires fast build times! I was recently trying to do some compile-time optimizations in rust-analyzer, and just measuring “is it faster to compile now?” takes so much time that one gets to try only a few different optimizations!

The biggest problem here is that Rust, as a language, is hard to compile fast. One specific issue I hit constantly is that changing a deep dependency recompiles the world. This is in contrast to C/C++ where, if you don’t touch any .h files, changing a dependency requires only re-linking. In theory, we could have something like this in Rust by automatically deriving public headers from crates. Though I fear that, without an explicit, physical “this is ABI” boundary, it will be much less effective at keeping compile times under control.

As an aside, if Rust had stuck with .crate files, implementing IDE support would have been much easier :-)

Optimizing rustc Build

Nonetheless, rust-analyzer is much easier to build than rustc. I believe there’s a lot we can do for rustc build as well.

I’ve written at length about this on irlo. The gist is that I think we can split rustc into a front-end “text processing” part and a back-end “LLVM, linkers, and the real world” part. The front end could then, in theory, be a bog-standard Rust project which doesn’t depend on IO, external tools, or C++ code at all.

One wrinkle here is that the rustc test suite at the moment consists predominantly of UI and run-pass integration tests, which work by building (and running) programs with the whole compiler. Such a test suite is ideal for testing conformance and catching regressions, but is not really well suited for rapid TDD. I think we should make an effort to build a unit test suite à la rust-analyzer, such that it’s easy, for example, to test name resolution without building the type checker, and which doesn’t require the stdlib.

Scaling Maintenance

Finally, all the changes here represent deep cuts into an existing body of software. Pushing such ambitious projects to completion requires people who can dedicate significant amounts of their time and energy. To put it bluntly, we need more dedicated folks working on IDE tooling as a full-time, paid job. I am grateful to my colleagues at Ferrous Systems who put a lot of energy into making this a reality.

If you find rust-analyzer useful and use it professionally, please consider asking your company to sponsor rust-analyzer via our Open Collective. Sponsorships from individuals are also accepted (and greatly appreciated!).