Saturday, June 12, 2021

A few thoughts on Fuchsia security

I want to say a few words about my current adventure. I joined the Fuchsia project at its inception and worked on the daunting task of building and shipping a brand new open-source operating system.

As my colleague Chris noted, pointing to this comparison of a device running a Linux-based OS vs Fuchsia, making Fuchsia invisible was not an easy feat.

Of course, under the hood, a lot is different. We built a brand new message-passing kernel, new connectivity stacks, component model, file-systems, you name it. And yes, there are a few security things I'm excited about.

Message-passing and capabilities

I wrote a few posts on this blog about the sandboxing technologies a few of us were building in Chrome/ChromeOS at the time. A while back, the situation on Linux was challenging, to say the least. We had to build a special setuid binary to sandbox Chrome, and seccomp-bpf was essentially created to improve the state of sandboxing on ChromeOS, and on Linux generally.

With lots of work, we got to a point where the Chrome renderer sandbox was *very* tight with respect to the rest of the system [initial announcement]. Most of the remaining attack surface was in IPC interfaces, and the system interfaces that remained reachable were about as essential as it gets on Linux.

A particularly hard problem was making sure that existing code, not written with sandboxing in mind, would "just" work under a very tight sandbox (I'm talking about zero file-system access, chroot-ed into an empty, deleted directory; separate namespaces; a small subset of syscalls available; etc.). We had to allow for "hooking" into some of the system calls that we would deny, so that we could dynamically rewrite them into IPCs (this is why the SIGSYS mechanism of seccomp was built). It was hard and, I dare say, pretty messy.
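
To make the mechanism concrete, here is a minimal sketch of what that looks like on Linux. This is the generic seccomp-bpf/SIGSYS plumbing, not Chrome's actual sandbox code, and the choice of openat(2) as the denied syscall is just for illustration:

```c
/* Minimal seccomp-bpf + SIGSYS sketch; error checks elided.
 * Not Chrome's sandbox code, just the kernel mechanism it relies on. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <signal.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

/* SIGSYS handler: this is where a sandboxed process can rewrite the
 * denied syscall into an IPC to a more privileged broker process. */
static void sigsys_handler(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)info; (void)ctx;
    const char msg[] = "caught denied syscall via SIGSYS\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    /* info->si_syscall holds the trapped syscall number. */
}

int main(void) {
    struct sigaction sa = { .sa_sigaction = sigsys_handler,
                            .sa_flags = SA_SIGINFO };
    sigaction(SIGSYS, &sa, NULL);

    /* Allow everything except openat(2), which traps to SIGSYS.
     * A real filter would also check seccomp_data.arch first. */
    struct sock_filter filter[] = {
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_openat, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    /* This call never reaches the kernel's file-system code. */
    openat(AT_FDCWD, "/etc/hostname", 0);
    return 0;
}
```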

On Fuchsia, we have solved many of those issues. Sandboxing is trivial. In fact, a new process with access to no capabilities can do exceedingly little (also see David Kaplan's exploration). FIDL, our IPC system, is a joy. I often smile when debating designs, because whether something is in-process or out-of-process can sometimes feel like a small implementation detail to people.

Verified execution

We will eventually write some good documentation about this. I believe that we have meaningfully expanded on ChromeOS' verified boot design.

The gist is that we store immutable code and data on a content-addressed file-system called BlobFS. You access what you want by specifying its hash (really, the root of a Merkle tree, for fast random access). Then we have an abstraction layer on top, which components can use to access files by name and which, under the hood, can verify signatures for those hashes. File-systems are of course in user-land, they layer nicely, and it's easy to create the right environment for any component.
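
To give a flavor of the content-addressing side, here is a simplified sketch of a Merkle root computation over fixed-size blocks, using OpenSSL's SHA-256. It is illustrative only; BlobFS has its own block size and node encoding, but the idea is the same: the root digest names the blob and lets you verify any block independently.

```c
/* Simplified Merkle-root sketch: hash each fixed-size block, then
 * repeatedly hash concatenated child digests until one root remains.
 * Illustrative only, not BlobFS's exact on-disk format. */
#include <openssl/sha.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK_SIZE 8192   /* per-block granularity for random access */

void merkle_root(const unsigned char *data, size_t len,
                 unsigned char out[SHA256_DIGEST_LENGTH]) {
    size_t n = (len + BLOCK_SIZE - 1) / BLOCK_SIZE;
    if (n == 0) n = 1;
    unsigned char *level = malloc(n * SHA256_DIGEST_LENGTH);

    /* Leaf level: one digest per block of the blob. */
    for (size_t i = 0; i < n; i++) {
        size_t off = i * BLOCK_SIZE;
        size_t chunk = (len - off < BLOCK_SIZE) ? len - off : BLOCK_SIZE;
        SHA256(data + off, chunk, level + i * SHA256_DIGEST_LENGTH);
    }

    /* Interior levels: hash pairs of child digests until one remains.
     * That root is the blob's "name" in a content-addressed store. */
    while (n > 1) {
        size_t parents = (n + 1) / 2;
        for (size_t i = 0; i < parents; i++) {
            size_t nchild = (2 * i + 1 < n) ? 2 : 1;
            SHA256(level + 2 * i * SHA256_DIGEST_LENGTH,
                   nchild * SHA256_DIGEST_LENGTH,
                   level + i * SHA256_DIGEST_LENGTH);
        }
        n = parents;
    }
    memcpy(out, level, SHA256_DIGEST_LENGTH);
    free(level);
}
```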

A key element is that we have made the ability to create executable pages a real permission, without disturbing the loading of BlobFS-backed, signed, dynamic libraries. For any process that doesn't need a JIT, this forces attackers to ROP/JOP their way to the next stage of their attack.
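
Roughly, at the Zircon layer this looks like the sketch below (simplified, with error handling mostly elided; how a component does, or does not, get hold of the vmex resource handle is exactly where the policy lives):

```c
/* Rough sketch of why executable memory is a gated permission on Zircon. */
#include <zircon/process.h>
#include <zircon/syscalls.h>

zx_status_t map_jit_region(zx_handle_t vmex, size_t len, zx_vaddr_t *out) {
    zx_handle_t vmo, exec_vmo;
    zx_vmo_create(len, 0, &vmo);

    /* A freshly created VMO has no execute right. Without a valid vmex
     * resource handle this step fails, and so would any attempt to map
     * the VMO with ZX_VM_PERM_EXECUTE. */
    zx_status_t st = zx_vmo_replace_as_executable(vmo, vmex, &exec_vmo);
    if (st != ZX_OK)
        return st;

    return zx_vmar_map(zx_vmar_root_self(),
                       ZX_VM_PERM_READ | ZX_VM_PERM_EXECUTE,
                       0, exec_vmo, 0, len, out);
}
```

Dynamic libraries served from BlobFS, on the other hand, arrive as VMOs that already carry the execute right once verified, which is roughly how normal library loading keeps working without handing every process the vmex resource.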

Rust

For system-level folks, Rust is one of the most exciting security developments of the past few decades. It elegantly solves problems which smart people were saying could not be solved. Fuchsia has a lot of code, and we made sure that much of it (millions of LoC) was in Rust.

Our kernel, Zircon, is not in Rust. Not yet anyway. But it is in a nice, lean subset of C++ which I consider a vast improvement over C.

Various

There is much more, which I may get to at some point. And there is a lot more to do. I am optimistic that we have created a sensible security foundation to iterate on. Time will tell. What did we miss? Fuchsia is covered by the Google VRP, so you can get paid for telling us!