cr0 blog: a blog about IT security and other geek interests

A few thoughts on Fuchsia security (2021-06-12)<p>I want to say a few words about my current adventure. I joined the <a href="https://fuchsia.dev">Fuchsia</a> project at its inception and worked on the daunting task of building and shipping a brand new open-source operating system.</p><p>As my colleague <a href="https://twitter.com/chrismckillop/status/1403494700925939714">Chris noted</a>, pointing to <a href="https://9to5google.com/2021/06/11/google-nest-hub-fuchsia-video-comparison/">this comparison</a> of a device running a Linux-based OS vs Fuchsia, making Fuchsia invisible was not an easy feat.</p><p>Of course, under the hood, a lot is different. We built a brand new message-passing kernel, new connectivity stacks, component model, file-systems, you name it. And yes, there are a few security things I'm excited about.</p><p><b>Message-passing and capabilities</b></p><p>I wrote a few posts on this blog about the sandboxing technologies a few of us were building in Chrome/ChromeOS at the time. A while back, the situation was <a href="https://blog.cr0.org/2009/10/security-in-depth-for-linux-software.html">challenging on Linux</a> to say the least. We had to build a special setuid binary to sandbox Chrome, and <a href="https://www.kernel.org/doc/html/v4.16/userspace-api/seccomp_filter.html">seccomp-bpf</a> was essentially created to improve the state of sandboxing on ChromeOS, and Linux generally.</p><p>With lots of work, we got to a point where the Chrome renderer sandbox was *very* tight with respect to the rest of the system [<a href="https://blog.chromium.org/2012/11/a-safer-playground-for-your-linux-and.html">initial announcement</a>]. 
Most of the remaining attack surface was in IPC interfaces, and the remaining available system interfaces were as essential as they could get on Linux.</p><p>A hard problem in particular was to make sure that existing code, not written with sandboxing in mind, would "just" work under a very tight sandbox (I'm talking about zero file-system access, chroot-ed into an empty, deleted directory; different namespaces; a small subset of syscalls available; etc.). One had to allow for "hooking" into some of the system calls that we would deny, so that we could dynamically rewrite them into IPCs (this is why the SIGSYS mechanism of seccomp was built). It was hard, and I dare say, pretty messy.</p><p>On Fuchsia, we have solved many of those issues. Sandboxing is trivial. In fact, a new process with access to no capabilities <a href="https://twitter.com/adambarth/status/1398308060313968644">can do exceedingly little</a> (<a href="https://www.depletionmode.com/zircon-process.html">also see David Kaplan's exploration</a>). <a href="https://fuchsia.dev/fuchsia-src/development/languages/fidl">FIDL</a>, our IPC system, is a joy. I often smile when debating designs, because whether or not something is in-process or out-of-process can sometimes feel like a small implementation detail to people.</p><p><b>Verified execution</b></p><p>We will eventually write some good documentation about this. I believe that we have meaningfully expanded on <a href="https://www.chromium.org/chromium-os/chromiumos-design-docs/verified-boot">ChromeOS' verified boot design</a>.</p><p>The gist is that we store immutable code and data on a content-addressed file-system called <a href="https://fuchsia.dev/fuchsia-src/concepts/filesystems/blobfs">BlobFS</a>. You access what you want by specifying its hash (really, the root of a Merkle tree, for fast random access). 
Then we have an abstraction layer on top, which components can use to access files by name and which, under the hood, can verify signatures for those hashes. File-systems are of course in user-land, can layer nicely, and <a href="https://mobile.twitter.com/adambarth/status/1400126939340296194">it's easy to create the right environment</a> for any component.</p><p>A key element is that we have made the ability to create executable pages a real permission, without disturbing the loading of BlobFS-backed, signed, dynamic libraries. For any process which doesn't need a JIT, this forces attackers to ROP/JOP their way to the next stage of their attack.</p><p><b>Rust</b></p><p>For system-level folks, <a href="https://www.rust-lang.org/">Rust</a> is one of the most exciting security developments of the past few decades. It elegantly solves problems which smart people were saying could not be solved. Fuchsia has a lot of code, and we made sure that much of it (millions of LoC) was in Rust.</p><p>Our kernel, <a href="https://twitter.com/cpuGoogle/status/1397265884251525122">Zircon, is not in Rust</a>. Not yet anyway. But it is in a <a href="https://fuchsia.dev/fuchsia-src/development/languages/c-cpp/cxx">nice, lean subset of C++</a> which I consider a vast improvement over C.</p><p><b>Various</b></p><p></p><ul style="text-align: left;"><li><a href="https://twitter.com/crypt0ad">Kostya</a> wrote <a href="https://llvm.org/docs/ScudoHardenedAllocator.html">a security-minded memory allocator, Scudo</a>, which has been one of his contributions to Fuchsia (and also now to <a href="https://source.android.com/devices/tech/debug/scudo">Android</a>!)</li><li>We took the opportunity to have a <a href="https://twitter.com/adambarth/status/1397586940841533446">proper PRNG interface</a>. It's backed by <a href="https://twitter.com/hashbreaker">D.J. 
Bernstein</a>'s excellent <a href="https://cr.yp.to/chacha.html">ChaCha20</a> with seeding from hardware (and <a href="https://fuchsia.dev/fuchsia-src/concepts/system/jitterentropy/config-basic">JitterEntropy</a> for security in depth); hardware-backed AES-CTR is too slow because of the context saving/restoring.</li><li>Lots of <a href="https://fuchsia.dev/fuchsia-src/concepts/testing/fuzz_testing">fuzzing</a> and <a href="https://fuchsia.dev/fuchsia-src/concepts/testing/sanitizers">sanitizers</a> of course. In particular, we adapted <a href="https://twitter.com/dvyukov">Dmitry</a>'s <a href="https://github.com/google/syzkaller">SyzKaller</a> to work on Zircon.</li><li>Your favorite exploit mitigations where they make sense, including <a href="https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/docs/concepts/kernel/safestack.md">SafeStack</a>, <a href="https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/docs/concepts/kernel/shadow_call_stack.md">ShadowCallStack</a> and ASLR.</li><li>Enforcement that system calls <a href="https://fuchsia.dev/fuchsia-src/concepts/kernel/vdso#enforcement">go through our vDSO</a> and the potential for <a href="https://fuchsia.dev/fuchsia-src/concepts/kernel/vdso#variants">vDSO variants</a>.</li></ul><p></p><p>There is much more, which I may get to at some point. And there is a lot more to do. I am optimistic that we have created a sensible security foundation to iterate on. Time will tell. What did we miss? 
Fuchsia is covered by the <a href="https://www.google.com/about/appsecurity/reward-program/">Google VRP</a>, so you can get paid by <a href="https://bugs.fuchsia.dev/p/fuchsia/issues/entry?template=Fuchsia+Security+external+bug+report">telling us</a>!</p>Julien

Introducing Chrome's next-generation Linux sandbox (2012-09-06)<span style="font-family: inherit;">Starting with Chrome <span id="internal-source-marker_0.9181853355839849" style="font-size: 12px; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">23.0.1255.0, recently released to the <a href="http://dev.chromium.org/getting-involved/dev-channel#TOC-Linux">Dev Channel</a>, you will see Chrome making use of our next-generation sandbox on Linux and ChromeOS for renderers.</span></span></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">We are using a new facility called <a href="http://lwn.net/Articles/475043/">Seccomp-BPF</a>, introduced in Linux 3.5 and developed by Will Drewry.</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">Seccomp-BPF builds on the ability </span></b><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">to send small BPF (for <a href="http://www.tcpdump.org/papers/bpf-usenix93.pdf">BSD Packet Filter</a>) <a href="http://www.unix.com/man-page/FreeBSD/4/bpf/">programs</a> that can be interpreted by the kernel. This feature was originally designed for tcpdump, so that filters could directly run in the kernel for performance reasons.</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">BPF programs are untrusted by the kernel, so they are limited in a number of ways. Most notably, they can't have loops, which </span></b><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">bounds their execution time by a monotonic function of their size and </span></b><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">allows the kernel to know they will always terminate.</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">With Seccomp-BPF, BPF programs can now be used to evaluate system call numbers and their parameters.</span></b></span><br />
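A minimal sketch of such a filter, assuming an x86-64 Linux 3.5+ system (this is an illustration of the mechanism, not Chrome's actual policy): it inspects both the system call number and the first parameter, making socket(AF_INET, ...) fail with EACCES while allowing everything else.

```c
#include <errno.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/prctl.h>
#include <sys/socket.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
  struct sock_filter filter[] = {
    /* Load the system call number from the seccomp_data block. */
    BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
    /* Not socket()? Jump straight to ALLOW. */
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_socket, 0, 3),
    /* Load the first argument: the protocol family (low 32 bits). */
    BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, args[0])),
    /* socket(AF_INET, ...)? Fail with EACCES instead of entering the kernel. */
    BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AF_INET, 0, 1),
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | EACCES),
    BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
  };
  struct sock_fprog prog = {
    .len = sizeof(filter) / sizeof(filter[0]),
    .filter = filter,
  };

  /* Required so an unprivileged process may install a filter. */
  if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)) return 1;
  if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog)) return 1;

  int fd = socket(AF_INET, SOCK_STREAM, 0);
  printf("socket(AF_INET): %s\n", fd < 0 ? strerror(errno) : "allowed");
  return 0;
}
```

Note that the filter never dereferences the pointer arguments; it can only see the raw 64-bit argument values, which is exactly the limitation discussed below for path-based policies.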
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">This is a huge change for sandboxing code in Linux, which, as you may recall, has been very limited in this area. It's also a change that recognizes and innovates in two important dimensions of sandboxing:</span></b></span><br />
<ul>
<li><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="font-family: inherit; vertical-align: baseline; white-space: pre-wrap;">Mandatory access control versus "discretionary privilege dropping". Something I always felt strongly about and <a href="http://blog.cr0.org/2009/10/security-in-depth-for-linux-software.html">have discussed before</a>.</span></b></li>
<li><span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">Access control semantics, versus attack surface reduction.</span></li>
</ul>
<div>
<span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">Let's talk about the second topic. Having nice, high-level access control semantics is appealing and, one may argue, necessary. When you're designing a sandbox for your application, you may want to say things such as:</span></div>
<div>
<ul>
<li><span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">I want this process to have access to this subset of the file system.</span></li>
<li><span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">I want this process to be able to allocate or de-allocate memory.</span></li>
<li><span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">I want this process to be able to interfere (debug, send signals) with this set of processes.</span></li>
</ul>
<div>
<span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">The capabilities-oriented framework <a href="http://www.cl.cam.ac.uk/research/security/capsicum/">Capsicum</a> takes such an approach. This is very useful.</span></div>
<div>
<span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;"><br /></span></div>
<div>
<span style="font-family: inherit; font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">However, with such an approach it's difficult to assess the kernel's attack surface. When the whole kernel is in your <a href="http://en.wikipedia.org/wiki/Trusted_computing_base">trusted computing base</a> "you're going to have a bad time", as a colleague recently put it.</span></div>
</div>
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">Now, in that same dimension, at the other end of the spectrum, is the "attack surface reduction" oriented approach, the one taken by Seccomp-BPF, where you're close to the ugly guts of implementation details.</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">In that approach, read()+write() and vmsplice() are completely different beasts, because you're not looking at their semantics, but at the attack surface they open in the kernel. They perform similar things, but perhaps <a href="http://www.isec.pl/">ihaquer</a> will have a harder time exploiting read()/write() on pipes than vmsplice(). Semantically, uselib() seems to be a subset of open() + mmap(), but similarly, the attack surface is different.</span></b></span><br />
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="font-family: inherit; vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="font-family: inherit; vertical-align: baseline; white-space: pre-wrap;">The drawback, of course, is that implementing particular sandbox semantics with such a mechanism looks ugly. For instance, let's say you want to allow opening any file in /public from within the sandbox: how would you implement that in seccomp-BPF?</span></b><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">Well, first you need to understand what set of system calls would be concerned by such an operation. That's not just open(), but also openat() (an ugly implementation-level detail: some libc implementations will happily use openat() with AT_FDCWD instead of open()). Then you realize that a BPF program in the kernel will only see a pointer to the file name, so you can't filter on that (even if you could dereference pointers in BPF programs, it wouldn't be safe to do so, because an attacker could create another thread that modifies the file name after it has been evaluated by the BPF program, so the kernel would also need to copy it to a safe location).</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">In the end, what you need to do is have a trusted helper process (or broker) that runs unsandboxed for this particular set of system calls and have it accept requests to open files over an IPC channel, have it make the security decision and send the file descriptor back over an IPC.</span></b></span><br />
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="font-family: inherit; vertical-align: baseline; white-space: pre-wrap;">(If you're interested in that sort of approach, pushed to the extreme, look at Markus Gutschke's <a href="http://code.google.com/p/seccompsandbox/">original seccomp mode 1 sandbox</a>.)</span></b><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">That's tedious but doable. In comparison, Capsicum would make this a breeze.</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;">There are other issues with such a low-level approach. By filtering system calls, you're breaking the kernel API. This means that third-party code (such as libraries) you include in your address space can break. For this reason, I suggested to Will that we implement an "exception" mechanism through signals, so that special handlers can be called when system calls are denied. Such handlers are now in use and can, for instance, "broker out" system calls such as open().</span></b></span><br />
<span style="font-family: inherit;"><b style="font-size: 12px; font-weight: normal; line-height: 15.600000381469727px;"><span style="vertical-align: baseline; white-space: pre-wrap;"><br /></span></b>
<span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">In my opinion, the Capsicum and Seccomp-BPF approaches are trade-offs, each at one end of the spectrum. Having both would be great. We could stack one on top of the other and have the best of both worlds.</span></span><br />
<span style="font-family: inherit;"><span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;"><br /></span>
<span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">In a similar, but very limited, fashion, this is what we have now in Chrome: we stacked the seccomp-bpf sandbox on top of the <a href="http://code.google.com/p/setuid-sandbox/">setuid sandbox</a>. The setuid sandbox gives a few easy-to-understand semantic properties: no file system access, no process access outside of the sandbox, no network access. It makes it much easier to layer a seccomp-bpf sandbox on top.</span></span><br />
<span style="font-family: inherit;"><span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;"><br /></span>
<span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">Several people besides myself have worked on making this possible. In particular: Chris Evans, Jorge Lucangeli Obes, Markus Gutschke, Adam Langley (and others who made Chrome sandboxable under the setuid sandbox in the first place) and of course, for the actual kernel support, Will Drewry and Kees Cook.</span></span><br />
<span style="font-family: inherit;"><span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;"><br /></span>
<span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">We will continue to work on improving and tightening this new sandbox; this is just a start. </span><span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">Please give it a try, and report any bugs to crbug.com (feel free to cc: jln at chromium.org directly).</span></span><br />
<span style="font-family: inherit;"><span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;"><br /></span>
<span style="font-size: 12px; line-height: 15.600000381469727px; white-space: pre-wrap;">PS: to make sure that you have kernel support for seccomp BPF, use Linux 3.5 or Ubuntu 12.04. Check about:sandbox in Chrome 22+ to see if Seccomp-BPF is enabled. Also make sure you're using the 64-bit version of Chrome.</span></span>

Javocalypse (2010-04-09)<span style="font-style: italic;">EDIT: Following its full disclosure, </span><a href="http://blogs.oracle.com/security/2010/04/security_alert_for_cve-2010-08.html" style="font-style: italic;">Sun fixed</a><span style="font-style: italic;"> Tavis' Java deployment toolkit bug (CVE-2010-0886 and CVE-2010-0887)</span><span style="font-style: italic; font-weight: bold;"> </span><span style="font-style: italic;">in a matter of days, wow! No doubt this will be used in the future as an argument for full disclosure.</span><br />
<span style="font-style: italic;">However, this does not bring much security! An attacker can still automatically downgrade your version of Java (<a href="http://twitter.com/taviso/status/11900526653">using installJRE</a>)</span><span style="font-family: monospace;"> </span><span style="font-style: italic;">and exploit this bug or any other he likes!</span><br />
<br />
<a href="http://blog.cr0.org/2009/05/write-once-own-everyone.html">Almost one year ago, I blogged</a> about one of my favorite security bugs, found by <a href="http://slightlyrandombrokenthoughts.blogspot.com/">Sami Koivu</a>.<br />
<br />
More specifically, I blogged about a class of Java bugs exposed by Sami Koivu and I mentioned this was the first instance of it.<br />
<br />
Not only was it interesting from a technical perspective, but it was also high impact, allowing perfectly reliable (and relatively simple) cross-platform exploitation of any system supporting Java applets (and that's a lot of systems). And this, through a widely deployed, but notoriously poorly updated component.<br />
<br />
One year later [1], Sami strikes again. This time should be the final nail in Java applets' coffin for anyone with security expectations:<br />
<ul>
<li>Another instance of the privileged deserialization class of bugs (<a href="http://www.zerodayinitiative.com/advisories/ZDI-10-051/"><span style="text-decoration: underline;">CVE-2010-0094</span></a>)</li>
<li>A new class of bugs: <a href="http://slightlyrandombrokenthoughts.blogspot.com/2010/04/java-trusted-method-chaining-cve-2010.html">Java trusted method chaining</a>. With one instance as a free sample (<a href="http://www.zerodayinitiative.com/advisories/ZDI-10-056/">CVE-2010-0840</a>). (This one is beautiful by the way, be sure to read it!)</li>
<li>Free goodies for web security researchers: a flaw that completely breaks the web security model. The "<a href="http://code.google.com/p/browsersec/wiki/Part2#Same-origin_policy_for_Java">Java-SOP</a>" security was done in the compiler, not the runtime (<a href="http://www.zerodayinitiative.com/advisories/ZDI-10-055/">CVE-2010-0095</a>). Normally this would translate to "really bad", but why would one need your cookies when one can have your computer?</li>
</ul>
But Tavis would not let Sami have his party alone and between two kernel bugs took a quick look at the Java deployment toolkit and found this <a href="http://www.mail-archive.com/full-disclosure@lists.grok.org.uk/msg40571.html">embarrassingly trivially exploitable issue</a>. It's not corrected yet. And it's exploitable even if you have Java disabled in IE or Firefox, you only need to have Java installed.<br />
<br />
It's so simple that it was obvious that many people had found (and were exploiting) this one. And we've already had confirmation of this, which led Tavis to release his advisory with mitigation instructions before a patch was available. Read his advisory for interesting thoughts on disclosure.<br />
<br />
So, dear reader, if you don't want to get owned multiple times:<br />
<ul>
<li>Disable Java in your web browsers</li>
<li>Uninstall Java completely or follow Tavis' mitigation instructions on Windows</li>
</ul>
Updating Java does not work: Sami has already mentioned that he would be very surprised if there weren't 10 other cases of "Java trusted method chaining" bugs. There are probably other deserialization ones too.<br />
And anyway, a lazy attacker can just <a href="http://twitter.com/taviso/status/11900526653"><span style="font-style: italic;">silently downgrade</span> his up-to-date target</a> to whatever vulnerable Java version he wants to exploit, using the aforementioned Java deployment toolkit. Really, it's a feature.<br />
<br />
Moreover, not everyone can update Java. Let's see how long it takes for Apple to patch these ones this time. My bet is that up-to-date default MacOS X installations are going to be vulnerable for a while to even the publicly reported bugs.<br />
<br />
This is Javocalypse.<br />
<br />
[1] Well, technically, only a few months later, but it took 5 months before the public advisory. A delay that I would call "reasonable".

There's a party at Ring0, and you're invited (2010-03-28)Tavis and I have just come back from <a href="http://www.cansecwest.com">CanSecWest</a>. The title of our talk was "There's a party at Ring0, and you're invited".<br /><br />We went through some of the bugs that we have worked on this past year and mentioned some of our thoughts on kernel security in general:<br /><br /><ul><li>We see an increasing attack surface, both locally and remotely (@font-face, <a href="http://en.wikipedia.org/wiki/WebGL">webgl</a>...) </li><li>The recent focus on sandboxes (<a href="http://dev.chromium.org/developers/design-documents/sandbox">Chrome</a>, <a href="http://blogs.technet.com/office2010/archive/2009/08/13/protected-view-in-office-2010.aspx">Office</a>) makes the kernel an even more interesting target</li><li>Modern operating systems still generally lack facilities for discretionary privilege dropping or for reducing the kernel's attack surface (with the notable exception of <a href="http://en.wikipedia.org/wiki/Seccomp">SECCOMP</a> on Linux)</li><li>While most OSes have some degree of userland memory corruption exploitation prevention, kernel exploitation prevention is immature. 
On Linux, <a href="http://www.grsecurity.net/">PaX/grsecurity</a> leads the effort and Microsoft added <a href="http://blogs.technet.com/srd/archive/2009/05/26/safe-unlinking-in-the-kernel-pool.aspx">safe unlinking</a> in the Windows 7 kernel.<br /></li></ul>If you're interested, you can download our slides <a href="http://www.cr0.org/paper/to-jt-party-at-ring0.pdf">here</a>.

CVE-2010-0232: Microsoft Windows NT #GP Trap Handler Allows Users to Switch Kernel Stack (2010-01-21)Two days ago, Tavis Ormandy <a href="http://lists.grok.org.uk/pipermail/full-disclosure/2010-January/072549.html">published</a> one of the most interesting vulnerabilities I've seen so far.<br />
<div>
<br /></div>
<div>
It's one of those rare, but fascinating design-level errors dealing with low-level system internals. Its exploitation requires skills and ingenuity.</div>
<div>
<br /></div>
<div>
The vulnerability lies in Windows' support for Intel's hardware 8086 emulation (virtual-8086, or VM86) and is believed to have been there since Windows NT 3.1 (1993!), making it 17 years old.</div>
<div>
<br /></div>
<div>
It uses two tricks that we have already published on this blog before: the <a href="http://blog.cr0.org/2009/09/cve-2009-2793-iret-gp-on-pre-commit.html">#GP on pre-commit handling failure</a> and the <a href="http://blog.cr0.org/2009/10/cve-2009-2267-mishandled-exception-on.html">forging of cs:eip in VM86 mode</a>.</div>
<div>
<br /></div>
<div>
This was intended to be mentioned in our talk at PacSec about virtualization this past November, but Tavis had agreed with Microsoft to postpone the release of this advisory.</div>
<div>
<br /></div>
<div>
Tavis was kind enough to write a blog post about it, you can read it below:</div>
<div>
<br /></div>
<div>
<i><b>From Tavis Ormandy:</b></i></div>
<div>
<br /></div>
<div>
<div>
I've just published one of the most interesting bugs I've ever encountered, a simple authentication check in Windows NT that can incorrectly let users take control of the system. The bug exists in code hidden deep enough inside the kernel that it's gone unnoticed for as long as NT has existed.</div>
<div>
<br /></div>
<div>
If you've ever tried to run an MS-DOS or Win16 application on a modern NT machine, the chances are it worked. This is an impressive feat, these applications were written for a completely different execution environment and operating system, and yet still work today and run at almost native speed.</div>
<div>
<br /></div>
<div>
The secret that makes this possible behind the scenes is Virtual-8086 mode. Virtual-8086 mode is a hardware emulation facility built into all x86 processors since the i386, and allows modern operating systems to run 16-bit programs designed for real mode with very little overhead. These 16-bit programs run in a simulated real mode environment within a regular protected mode task, allowing them to co-exist in a modern multitasking environment.</div>
<div>
<br /></div>
<div>
Support for Virtual-8086 mode requires a monitor, the collective name for the software that handles any requests the program makes. These requests range from handling sensitive instructions to mapping low-level services onto system calls and are implemented partially in kernel mode and partially in user mode.</div>
<div>
<br /></div>
<div>
In Windows NT, the user mode component is called the NTVDM subsystem, and it interacts with the kernel via a native system service called NtVdmControl. NtVdmControl is unusual because it's authenticated: only authorised programs are permitted to access it. This is enforced using a special process flag called VdmAllowed, which the kernel verifies is present before NtVdmControl will perform any action; if you don't have this flag, the kernel will always return STATUS_ACCESS_DENIED.</div>
<div>
<br /></div>
<div>
The bug we're talking about today involves how BIOS service calls are handled, which are a low-level way of interacting with the system that's needed to support real-mode programs. The kernel implements BIOS service calls in two stages; the second stage begins when the interrupt handler for general protection faults (often shortened to #GP in technical documents) detects that the system has completed the first stage.</div>
<div>
<br /></div>
<div>
The details of how BIOS service calls are implemented are unimportant; what is important is that the two stages must be perfectly synchronised: if the kernel transitions to the second stage incorrectly, a hostile user can take advantage of this confusion to take control of the kernel and compromise the system. In theory, this shouldn't be a problem: Microsoft implemented a check that verifies that the trap occurred at a magic address (actually, a cs:eip pair) that unprivileged users can't reach.</div>
<div>
<br /></div>
<div>
The check seems reasonable at first: the hardware guarantees that unprivileged code can't arbitrarily make itself more privileged without a special request, and even if it could, only authorised programs are permitted to use NtVdmControl() anyway.</div>
<div>
<br /></div>
<div>
Unfortunately, it turns out these assumptions were wrong. The problem I noticed was that although unprivileged code cannot make itself more privileged arbitrarily, Virtual-8086 mode makes testing the privilege level of code more difficult because the segment registers lose their special meaning. In protected mode, the segment registers (particularly ss and cs) can be used to test privilege level; in Virtual-8086 mode, however, they're used to create far pointers, which allow 16-bit programs to access the 20-bit real address space.</div>
<div>
<br /></div>
<div>
However, I still couldn't abuse this fact, because NtVdmControl() can only be accessed by authorised programs, and there's no other way to request pathological operations on Virtual-8086 mode tasks. I was able to solve this problem by invoking the real NTVDM subsystem and then loading my own code inside it using a combination of CreateRemoteThread(), VirtualAllocEx() and WriteProcessMemory().</div>
<div>
<br /></div>
<div>
Finally, I needed to find a way to force the kernel to transition to the vulnerable code while my process appeared to be privileged. My solution to this was to make the kernel fault when returning to user mode from kernel mode, thus creating the appearance of a legitimate trap for the fabricated execution context that I had installed. These steps all fit together perfectly, and can be used to convince the kernel to execute my code, giving me complete control of the system.</div>
<div>
<br /></div>
<div>
<b>Conclusion</b></div>
<div>
<br /></div>
<div>
Could Microsoft have avoided this issue? It's difficult to imagine how: errors like this will generally elude fuzz testing (in order to observe any problem, a fuzzer would need to guess a 46-bit magic number, as well as set up an intricate process state, not to mention the VdmAllowed flag), and any static analysis would need an incredibly accurate model of the Intel architecture.</div>
<div>
<br /></div>
<div>
The code itself was probably resistant to manual audit: it's remained fairly static throughout the history of NT, and is likely considered forgotten lore even inside Microsoft. In cases like this, security researchers are sometimes in a better position than those with the benefit of documentation and source code: all abstraction is stripped away, and we can study what remains without being tainted by how documentation claims something is supposed to work.</div>
<div>
<br /></div>
<div>
If you want to mitigate future problems like this, reducing attack surface is always the key to security. In this particular case, you can use group policy to disable support for Application Compatibility (see the Application Compatibility policy template), which will prevent unprivileged users from accessing NtVdmControl(); certainly a wise move if your users don't need MS-DOS or Windows 3.1 applications.</div>
</div>
Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-8992811497323121233.post-14206479965860049742009-11-28T05:59:00.000-08:002009-11-28T08:54:53.314-08:00Virtualization security and the Intel privilege modelEarlier this month, Tavis and I spoke at <a href="http://www.pacsec.jp/">PacSec 2009</a> in Tokyo about virtualisation security on Intel architectures, with a focus on CPU virtualisation.<br /><br />During this talk, we briefly explained various techniques used for CPU virtualisation, such as dynamic translation (QEmu), VMware-style binary translation or paravirtualisation (Xen), and we went through bugs found by us and others:<br /><br />- We released some details about <a href="http://www.microsoft.com/technet/security/Bulletin/MS09-033.mspx">MS09-033</a> (CVE-2009-1542), a bug we found in VirtualPC's instruction decoding.<br />- We mentioned two of the <a href="http://www.securityfocus.com/archive/1/498150">awesome bugs found by Derek Soeder</a> in VMware, CVE-2008-4915 and CVE-2008-4279.<br />- We explained and demoed the exploitation of the mishandled exception on page fault bug in VMware that I <a href="http://blog.cr0.org/2009/10/cve-2009-2267-mishandled-exception-on.html">previously blogged about</a>.<br />- We released information on CVE-2009-3827, a bug we discovered in Virtual PC's hardware virtualisation.<br />A funny fact is that the exact same bug was independently uncovered and <a href="http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=0a79b009525b160081d75cef5dbf45817956acf2">corrected in KVM later by Avi Kivity</a> (CVE-2009-3722). 
The reason may be that Intel's documentation about the differences between MOV_DR and MOV_CR events in hardware virtualisation is not perfectly clear.<br />This bug has already been addressed by Microsoft in Windows 7 and will get corrected in the next service pack for Virtual PC and Virtual Server.<br /><br />If you are interested, you can download the slides <a href="http://www.cr0.org/paper/jt-to-virtualisation_security.pdf">here</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8992811497323121233.post-60823839844943097342009-10-30T13:43:00.000-07:002009-10-30T14:55:02.832-07:00CVE-2009-2267: Mishandled exception on page fault in VMwareTavis Ormandy and myself have recently released an advisory for CVE-2009-2267.<br /><br />This is a vulnerability in VMware's virtual CPU which can lead to privilege escalation in a guest. All VMware virtualisation products were affected, including in hardware virtualisation mode.<br /><br />In a VMware guest, in the general case, unprivileged (Ring 3) code runs without VMM intervention until an exception or interrupt occurs. An exception to this is Virtual-8086 mode (VM86), where VMware will perform CPU emulation.<br /><br />When VMware was emulating a far call instruction in VM86 mode, it was using supervisory access to push the CS and IP registers. Because of this, if this operation raised a Page Fault (#PF) exception, the resulting exception code would be invalid and would have its user/supervisor flag incorrectly set.<br /><br />This can be used to confuse a guest kernel. Moreover, VM86 mode can be used to further confuse the guest kernel because it allows an attacker to load an arbitrary value in the code segment (CS) register.<br /><br />We wrote a reliable proof of concept to elevate privileges on Linux guests. 
It turned out to be very easy because of the <a href="http://lxr.linux.no/#linux+v2.6.24/arch/x86/mm/extable_32.c">PNP BIOS recovery code</a>.<br /><br />For further details, check our <a href="http://www.cr0.org/misc/CVE-2009-2267.txt">advisory</a>, <a href="http://www.vmware.com/security/advisories/VMSA-2009-0015.html">VMware's advisory</a> and the non-weaponized PoC (<a href="http://www.cr0.org/misc/vmware86.c">vmware86.c</a>, <a href="http://www.cr0.org/misc/vmware86.tar.gz">vmware86.tar.gz</a>), including Tavis' cool CODE32 macro.<br /><br />Note that VMware silently patches their products until all of them are updated and only then releases an advisory. If you updated VMware Workstation a few months ago, you were already protected against this vulnerability.<br /><br />In theory, VMware's virtual CPU flaws could be treated like Intel or AMD errata and worked around in operating systems. In practice, since VMware's software can be updated, this is unlikely to happen. Moreover, VMware doesn't release full details that could be used to produce workarounds.<br /><br />If you like virtual CPU vulnerabilities, I suggest that you have a look at <a href="http://www.securityfocus.com/archive/1/498150">Derek Soeder's awesome advisory</a> from last year.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8992811497323121233.post-87315759971967471062009-10-14T02:49:00.000-07:002010-03-10T08:04:08.624-08:00Security in Depth for Linux Software<a href="http://scarybeastsecurity.blogspot.com/">Chris Evans</a> and myself presented last week at <a href="http://conference.hitb.org/hitbsecconf2009kl/">Hack In The Box Malaysia</a> about "<a href="https://conference.hackinthebox.org/hitbsecconf2009kl/?page_id=486">Security in Depth for Linux software</a>". 
You can find the slides <a href="http://www.cr0.org/paper/jt-ce-sid_linux.pdf">here</a>.<br /><br />The talk was focused on writing good code and <span style="font-style: italic;">sandboxing</span>.<br /><br />The writing good code part used vsftpd as an example, since Chris has <a href="http://vsftpd.beasts.org/#security">got this right</a> for ten years now.<br /><br />In the second part, we defined <span style="font-style: italic;">sandboxing</span>, which we also call <span style="font-style: italic;">discretionary privilege dropping</span>, as the ability to drop privileges programmatically and without administrative authority on the machine.<br /><br />We explained some of the conceptual differences between <span style="font-style: italic;">sandboxing</span> in this sense, where the application writer chooses to make part of his code run without certain privileges, and<span style="font-style: italic;"> Mandatory Access Control</span> systems, where the application itself doesn't make the policy.<br /><br />From an application writer's perspective, <span style="font-style: italic;">sandboxing</span> facilities are desirable since they will allow your code to run with lower privileges on all machines. On the other hand, MAC is desirable from a system administrator's or distribution maintainer's perspective, as it will allow one policy to rule over many applications and to enforce certain security properties on the system.<br /><br />While Linux has a fair number of MAC systems available, <span style="font-style: italic;">sandboxing</span> options are for now very limited. 
There is some hope that the ftrace framework or SELinux bounded types may allow this in the future (see also <a href="http://www.imperialviolet.org/2009/06/07/lsmsb.html">Adam Langley's post</a> on LSMSB), but this will not be widely available anytime soon.<br /><br />We demonstrated different ways of overcoming those limitations on readily available Linux kernels, focusing on three designs experimented or used in vsftpd and Chromium.<br /><ul><li>Using ptrace(), vsftpd experiment<br /></li><li><a href="http://code.google.com/p/setuid-sandbox/">The setuid sandbox design</a> (<span style="font-style: italic;">Julien Tinnes, Tavis Ormandy</span>), Chromium<br /></li><li>The <a href="http://www.imperialviolet.org/2009/08/26/seccomp.html">SECCOMP sandbox</a> design (<em>Markus Gutschke, Adam Langley</em>), Chromium<br /></li></ul>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-8992811497323121233.post-33701166627661049972009-09-16T10:20:00.000-07:002009-09-16T11:38:24.026-07:00CVE-2009-2793: Iret #GP on pre-commit handling failure: the NetBSD caseA few months ago, Tavis Ormandy and myself have used the fact that iret can fail with a General Protection (#GP) exception before the processor "commits" to user-mode (switches privileges by setting CS) on multiple occasions (more on this at upcoming PacSec)<br /><br />It's not necessarily obvious that an inter-privilege iret (typically from kernel mode to user mode) can fail before the privilege switch occurs. It's however the case if the restored EIP is past the code segment limits: a #GP exception will be raised while in kernel mode.<br /><br />When this occurs, an exception is raised from kernel mode with a handler in kernel mode: since there is no privilege level switch, no stack switch occurs and the trap frame will not contain saved stack information.<br /><br />If an operating system's kernel does not expect this to happen, it may assume a full trap frame with saved stack registers. 
This is what happens in NetBSD.<br /><br />An interesting point in the NetBSD case is that due to the lazy handling of the non-executable stack emulation, a legitimate program could trigger the bug:<br /><ol><li>The legitimate program has code on the stack, for instance due to a GCC-generated trampoline for a nested function.</li><li>The stack will be marked as executable but the code segment limit will not be raised yet: on stack execution, the kernel will handle the #GP exception and raise the limit (lazy handling).</li><li>A signal handler gets set to this nested function.</li><li>The kernel delivers a signal to the process and irets to the code on the stack, thus raising #GP pre-commit.</li></ol>You can read our full NetBSD-related advisory <a href="http://www.cr0.org/misc/CVE-2009-2793.txt">here (CVE-2009-2793)</a>.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-8992811497323121233.post-40423180836504537402009-08-28T04:43:00.000-07:002009-09-02T18:27:47.893-07:00CVE-2009-2698: udp_sendmsg() vulnerability<span style="font-style: italic;">EDIT: p0c73n1 </span><a style="font-style: italic;" href="http://milw0rm.com/exploits/9542">has posted an exploit</a><span style="font-style: italic;"> for this to milw0rm</span><span style="font-style: italic;"> as did </span><a style="font-style: italic;" href="http://www.milw0rm.com/exploits/9575">andi@void.at</a><span style="font-style: italic;">, and spender wrote </span><a style="font-style: italic;" href="http://grsecurity.org/%7Espender/therebel.tgz">"the rebel"</a><br /><br />Tavis Ormandy and myself have recently reported CVE-2009-2698, which was disclosed at the beginning of the week.<br /><br />This flaw affects at least Linux 2.6 with a version < 2.6.19.<br /><br />When we ran into this, we realized the newest kernel versions were not affected by the PoC code we had. 
The reason for this was that Herbert Xu had <a href="http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=1e0c14f49d6b393179f423abbac47f85618d3d46">found and corrected a closely related bug</a>. Linux distributions running on 2.6.18 and earlier kernels did not realize the security impact of this fix and did not backport it.<br />This is a good example of how hard it is to backport relevant fixes to maintained stable versions of the kernel.<br /><br />If you look at <a href="http://lxr.linux.no/#linux+v2.6.18.8/net/ipv4/udp.c#L483">udp_sendmsg</a>, you will see that the rt routing table is initialized as NULL and some code paths can lead to <a href="http://lxr.linux.no/#linux+v2.6.18.8/net/ipv4/ip_output.c#L771">ip_append_data</a> being called with a NULL rt. ip_append_data() obviously doesn't handle this case properly and will cause a NULL pointer dereference.<br /><br />Note that this is a data NULL pointer dereference, and mapping code at page zero will not lead to immediate privileged code execution for a local attacker. 
However, controlling the rtable structure seems to give enough control to the attacker to elevate privileges.<br /><br />Since it's hard to guarantee that ip_append_data will never be called with a NULL *rtp, we believe that this function should be made more robust <a href="http://patchwork.kernel.org/patch/44268/">by using this patch</a>.<br /><br />Here's one way to trigger this vulnerability locally:<br /><br />$ cat croissant.c<br />#include &lt;sys/types.h&gt;<br />#include &lt;sys/socket.h&gt;<br />#include &lt;string.h&gt;<br /><br />int main(int argc, char **argv)<br />{<br />int fd = socket(PF_INET, SOCK_DGRAM, 0);<br />char buf[1024] = {0};<br />struct sockaddr to = {<br /> .sa_family = AF_UNSPEC,<br /> .sa_data = "TavisIsAwesome",<br />};<br /><br />sendto(fd, buf, 1024, MSG_PROXY | MSG_MORE, &to, sizeof(to));<br />sendto(fd, buf, 1024, 0, &to, sizeof(to));<br /><br />return 0;<br />}<br /><br />An effective implementation of mmap_min_addr or the UDEREF feature of PaX/GrSecurity would prevent local privilege escalation through this issue.Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-8992811497323121233.post-48845596228680114172009-08-13T10:41:00.000-07:002014-03-04T23:22:03.720-08:00Linux NULL pointer dereference due to incorrect proto_ops initializations (CVE-2009-2692)<span style="font-style: italic;">EDIT2: </span><a href="https://bugzilla.redhat.com/show_bug.cgi?id=516949#c10" style="font-style: italic;">Here</a><span style="font-style: italic;"> is RedHat's official mitigation recommendation</span><br />
<span style="font-style: italic;">EDIT3: </span><a href="http://www.grsecurity.net/" style="font-style: italic;">Brad Spengler</a><span style="font-style: italic;"> also wrote an exploit for this </span><a href="http://grsecurity.net/~spender/wunderbar_emporium.tgz" style="font-style: italic;">and published it</a><span style="font-style: italic;">. The bug triggering is based on our exploit which leaked to Brad though the private vendor-sec mailing list. He implements the </span><a href="http://blog.cr0.org/2009/06/bypassing-linux-null-pointer.html" style="font-style: italic;">personality trick</a><span style="font-style: italic;"> Tavis and I published in June to bypass mmap_min_addr and also makes use of a feature that allows any unconfined user to gain the right to </span><a href="http://danwalsh.livejournal.com/30084.html" style="font-style: italic;">map at address zero</a><span style="font-style: italic;"> in Redhat's default SELinux policy. He wrote a reliable shellcode for this one that should work pretty much anywhere on x86 and x86_64 machines.</span><br />
<span style="font-style: italic;">EDIT4: if you use Debian or Ubuntu on your machine, I have specifically updated the </span><a href="http://kernelsec.cr0.org/" style="font-style: italic;">kernelsec Debian/Ubuntu GrSecurity packages</a><span style="font-style: italic;"> to protect against this bug and others.</span><br />
<span style="font-style: italic;">EDIT5: Zinx wrote an </span><a href="http://milw0rm.com/sploits/android-root-20090816.tar.gz" style="font-style: italic;">ARM Android root exploit</a><br />
<span style="font-style: italic;">EDIT6: Ramon de Carvalho Valle wrote a </span><a href="http://www.risesecurity.org/entry/illustrating-linux-sock_sendpage-null-pointer/" style="font-style: italic;">PPC/PPC64/x86_64/i386 exploit</a><br />
<br />
Tavis Ormandy and myself have recently found and investigated a Linux kernel vulnerability (CVE-2009-2692). It affects all 2.4 and 2.6 kernels since 2001 on all architectures. We believe this is the public vulnerability affecting the greatest number of kernel versions.<br />
<br />
The issue lies in how Linux deals with unavailable operations for some protocols. <a href="http://lxr.linux.no/linux+v2.6.30.4/net/socket.c#L727">sock_sendpage</a> and others don't check for NULL pointers before dereferencing operations in the ops structure. Instead the kernel relies on correct initialization of those proto_ops structures with stubs (such as <a href="http://lxr.linux.no/linux+*/net/core/sock.c#L1651">sock_no_sendpage</a>) instead of NULL pointers.<br />
<br />
At first sight, the <a href="http://lxr.linux.no/linux+v2.6.30.4/net/ipx/af_ipx.c#L1935">code in af_ipx.c</a> looks correct and seems to initialize .sendpage properly. However, due to a bug in the <a href="http://lxr.linux.no/linux+v2.6.30.4/include/linux/net.h#L289">SOCKOPS_WRAP</a> macro, sock_sendpage will not be initialized. This code is very fragile and there are many other protocols where proto_ops are not correctly initialized at all (vulnerable even without the bug in SOCKOPS_WRAP), see <a href="http://lxr.linux.no/linux+v2.6.30.4/net/bluetooth/bnep/sock.c#L169">bluetooth for instance</a>.<br />
<br />
So it was decided that instead of patching all those protocols and continuing to rely on this very fragile code, sock_sendpage would get patched to check against NULL. This was already the way <a href="http://lxr.linux.no/linux+v2.6.30.4/net/socket.c#L742">sock_splice_read</a> and others were handled.<br />
<br />
Since it leads to the kernel executing code at NULL, the vulnerability is as trivial as it can get to exploit (edit: that's for local privilege escalation and on Intel architectures): an attacker can just put code in the first page that will get executed with kernel privileges. Our exploit took a few minutes to adapt from a previous one:<br />
<br />
<span style="font-family: courier new; font-size: 85%;">$ ./leeches</span><span style="font-size: 85%;"><br /></span><span style="font-family: courier new; font-size: 85%;">// ------------------------------------------------------</span><span style="font-size: 85%;"><br /></span><br />
<div class="ii gt" id=":1jy" style="font-family: courier new;">
<span style="font-size: 85%;"> // sendpage linux local ring0<br />// ---------------- <a href="mailto:taviso@sdf.lonestar.org">taviso@sdf.lonestar.org</a>, <a href="mailto:julien@cr0.org">julien@cr0.org</a><br />// leeches.c:Aug 11 2009<br />// GreetZ: LiquidK, lcamtuf, Spoonm, novocainated, asiraP, ScaryBeasts, spender, pipacs, stealth, jagger, redpig, Neel and all the other leeches we forgot to mention!<br />Enjoy some photography while at ring0 @ <a href="http://flickr.com/meder" target="_blank">http://flickr.com/meder</a><br />For our webapp friends, here is an XSS executing at ring 0: javascript:alert(1);<br />shellcode now executing chmod("/bin/sh", 04755), welcome to ring0<br />Killed<br />$ sh<br /># id<br />uid=1000(julien) gid=1000(julien) euid=0(root)</span></div>
On x86/x86_64, this issue could be mitigated by three things:<br />
<ul>
<li>the recent mmap_min_addr feature. Note that this feature has <a href="http://blog.cr0.org/2009/06/bypassing-linux-null-pointer.html">known issues</a> until at least 2.6.30.2. See also this <a href="http://lwn.net/Articles/342330/">LWN article</a>.</li>
<li>on IA32 with <a href="http://www.grsecurity.net/">PaX/GrSecurity</a>, the KERNEXEC feature (x86 only)</li>
<li>not implementing affected protocols (a.k.a., reducing your attack surface by disabling what you don't need): PF_APPLETALK, PF_IPX, PF_IRDA, PF_X25, PF_AX25, PF_BLUETOOTH, PF_IUCV, IPPROTO_SCTP/PF_INET6, PF_PPPOX, PF_ISDN, but there may be more. (Update: See <a href="https://bugzilla.redhat.com/show_bug.cgi?id=516949#c10">RedHat's mitigation</a>)</li>
</ul>
<a href="http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=e694958388c50148389b0e9b9e9e8945cf0f1b98">This patch</a> should be applied to fix this issue.<br />
<br />
You can read our advisory <a href="http://www.cr0.org/misc/CVE-2009-2692.txt">here</a>.<br />
<br />
<span style="font-style: italic;">Note: this has been featured on </span><a href="http://linux.slashdot.org/story/09/08/13/2022212/Local-Privilege-Escalation-On-All-Linux-Kernels" style="font-style: italic;">Slashdot</a><span style="font-style: italic;">, </span><a href="http://www.osnews.com/story/21993/Eight_Years_of_Linux_Kernel_Vulnerable" style="font-style: italic;">OSNews</a><span style="font-style: italic;">, </span><a href="http://www.theregister.co.uk/2009/08/14/critical_linux_bug/" style="font-style: italic;">TheRegister</a><span style="font-style: italic;">, </span><a href="http://news.zdnet.co.uk/security/0,1000000189,39716623,00.htm" style="font-style: italic;">ZDNet</a><span style="font-style: italic;"> and others</span>Unknownnoreply@blogger.com32tag:blogger.com,1999:blog-8992811497323121233.post-5218408805015143972009-07-16T07:05:00.000-07:002014-03-04T23:22:29.006-08:00Old school local root vulnerability in pulseaudio (CVE-2009-1894)Today was chosen as disclosure day for CVE-2009-1894.<br />
<br />
Tavis Ormandy and myself have recently used the fact that <a href="http://www.pulseaudio.org/">pulseaudio</a> was set-uid root to <a href="http://blog.cr0.org/2009/06/bypassing-linux-null-pointer.html">bypass Linux' NULL pointer dereference prevention</a>. This technique relies on a limitation in the Linux kernel, not on a bug in pulseaudio. But we also found one unrelated bug in pulseaudio.<br />
<br />
Since it's set-uid root, we thought we would give pulseaudio a quick look. In the very first lines of main(), you can find the following:<br />
<br />
<span style="font-family: courier new;"> if (!getenv("LD_BIND_NOW")) {<br />char *rp;<br /><br />putenv(pa_xstrdup("LD_BIND_NOW=1"));<br />pa_assert_se(rp = pa_readlink("/proc/self/exe"));<br />pa_assert_se(execv(rp, argv) == 0);<br />}</span><br />
So, pulseaudio re-executes itself through the pathname /proc/self/exe resolves to, so that the dynamic linker performs all relocations immediately at load time.<br />
<br />
There is an obvious race condition here. /proc/self/exe is a symbolic link to the actual pathname of the executed command: by creating a hard link to /usr/bin/pulseaudio, we control this pathname, and consequently the file under this pathname. Knowing this, the exploitation is trivial (note that rename() is atomic; alternatively, note how <a href="http://lxr.linux.no/linux+v2.6.30/fs/dcache.c#L1891">__d_path()</a> works with deleted entries).<br />
<br />
It's also interesting to note that any operation performed on /proc/self/exe is guaranteed by the kernel to be performed on the same inode as the one that got executed (see <a href="http://lxr.linux.no/linux+v2.6.24/fs/proc/task_mmu.c#L75">proc_exe_link</a>), something that two of my colleagues recently pointed out to me. So if they had re-executed themselves by using /proc/self/exe directly, without going through readlink() first, they would not have been vulnerable. And actually, they weren't before: if you read the Changelog, you'll find:<br />
<br />
<span style="font-family: arial;">2007-10-29 15:33 lennart</span> <span style="font-family: arial;"> * : use real path of binary instead of /proc/self/exe to execute ourselves</span><br />
<br />
Oops! (Thanks to my colleague Mike Mammarella for digging this)<br />
<br />
Like the <a href="http://blog.cr0.org/2009/04/interesting-vulnerability-in-udevd.html">vulnerability in udevd</a>, this is a very good example of a non-memory-corruption vulnerability which is trivial to exploit very reliably and in a cross-architecture way.<br />
<br />
So, why does pulseaudio have the set-uid bit set, you may ask? For real-time performance reasons: it wants to keep CAP_SYS_NICE but will drop all other privileges.<br />
<br />
This vulnerability could have been avoided if the principle of least privilege had been followed: since privileges are not required to re-exec yourself, dropping privileges should have been the first thing pulseaudio did. Here it's only the second thing it does, and that was enough to make most Linux desktops vulnerable.<br />
<br />
If your distribution of choice did not patch this yet, or if you want to reduce your attack surface, you're advised to <span style="font-style: italic;">chmod u-s /usr/bin/pulseaudio</span>. Also note that as with every setuid binary update, you should also check that your users didn't create "backup" vulnerable copies (hardlink), waiting to own your box with known vulnerabilities while you think you are safe from those.<br />
<span style="font-size: 85%;"><br />PS: Here are two brain teasers for you:<br /><br />1. Find a cool way to perform an action after execve() has succeeded in another process, but before main() executes. First, I've used a FD_CLOEXEC read descriptor in a pipe and a SIGPIPE handler, but while it gives good results in practice, there is not guarantee as to when the signal will get delivered. I've finally found (with a hint from Tavis) a 100% reliable way to do it that is always guaranteed to work at first try. Of course, such a level of sophistication is absolutely not needed for this exploit.<br /><br />2. Since pulseaudio allows you to load arbitrary libraries, it allows you to run arbitrary code with CAP_SYS_NICE as a feature. In the light of <a href="http://en.wikipedia.org/wiki/Non-Uniform_Memory_Architecture" style="font-family: arial;">NUMA</a> coming to the desktop through <a href="http://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect" style="font-family: arial;">QPI</a>, can you do something more interesting than what you would first expect with this?</span>Unknownnoreply@blogger.com5tag:blogger.com,1999:blog-8992811497323121233.post-53823345997478325802009-06-26T11:37:00.000-07:002014-03-04T23:21:43.565-08:00Bypassing Linux' NULL pointer dereference exploit prevention (mmap_min_addr)EDIT3: <a href="http://it.slashdot.org/story/09/07/18/0136224/New-Linux-Kernel-Flaw-Allows-Null-Pointer-Exploits">Slashdot</a>, the <a href="http://isc.sans.org/diary.html?storyid=6820">SANS Institute</a>, <a href="http://threatpost.com/blogs/researcher-uses-new-linux-kernel-flaw-bypass-selinux-other-protections">Threatpost</a> and others have a story about <a href="http://www.grsecurity.net/~spender/exploit.txt">an exploit by Bradley Spengler</a> which uses our technique to exploit a null pointer dereference in the Linux kernel.<br />
EDIT2: As of July 13th 2009, the Linux kernel <a href="http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=f9fabcb58a6d26d6efde842d1703ac7cfa9427b6">integrates our patch</a> (2.6.31-rc3). Our patch also made it into -stable.<br />
EDIT1: This is now referenced as a vulnerability and tracked as CVE-2009-1895<br />
<br />
NULL pointer dereferences are a <a href="http://www.google.com/search?hl=en&q=linux+null+pointer+dereference">common</a> security issue in the Linux kernel.<br />
<br />
In the realm of userland applications, <a href="http://cansecwest.com/core05/memory_vulns_delalleau.pdf">exploiting them</a> usually requires being able to somehow control the target's allocations until you get page zero mapped, and this can be very hard.<br />
<br />
In the paradigm of locally exploiting the Linux kernel however, nothing (before Linux 2.6.23) prevented you from mapping page zero with mmap() and crafting it to suit your needs before triggering the bug in your process' context. Since the kernel's data and code segment both have a base of zero, a null pointer dereference would make the kernel access page zero, a page filled with bytes in your control. Easy.<br />
<br />
This used to not be the case back in Linux 2.0, when the kernel's data segment's base was above PAGE_OFFSET and the kernel had to explicitly use a segment override (with the fs selector) to access data in userland. The same rough idea is now used in <a href="http://www.grsecurity.net/">PaX/GRSecurity</a>'s UDEREF to prevent exploitation of "unexpected to userland kernel accesses" (it actually makes use of an expand-down segment instead of a PAGE_OFFSET segment base, but that's a detail).<br />
<br />
Kernel developers tried to solve this issue too, but without resorting to segmentation (which is considered deprecated and is mostly not available on x86_64) and in a portable (cross-architecture) way. In 2.6.23, they introduced a new sysctl, called vm.mmap_min_addr, that defines the minimum address that you can request a mapping at. Of course, this doesn't solve the complete issue of "to userland pointer dereferences" and it also breaks the somewhat useful feature of being able to map the first pages (this breaks <a href="http://dosemu.sourceforge.net/">Dosemu</a> for instance), but in practice this has been effective enough to make exploitation of many vulnerabilities harder or impossible.<br />
<br />
Recently, <a href="http://taviso.decsystem.org/">Tavis Ormandy</a> and I had to exploit such a condition in the Linux kernel. We investigated a few ideas, such as:<br />
<ul>
<li>using brk()</li>
<li>creating a MAP_GROWSDOWN mapping just above the forbidden region (usually 64K) and segfaulting the last page of the forbidden region</li>
<li>obscure system calls such as remap_file_pages</li>
<li>putting memory pressure in the address space to let the kernel allocate in this region</li>
<li>using the MAP_PAGE_ZERO personality</li>
</ul>
None of them worked at first: the LSM hook responsible for this security check was correctly called every time.<br />
<br />
So what does the default security module do in cap_file_mmap? This is the relevant code (in <a href="http://lxr.linux.no/linux+v2.6.30/security/capability.c#L333">security/capability.c</a> on recent versions of the Linux kernel):<br />
<pre><span style="font-family: courier new; font-size: 130%;">if ((addr < mmap_min_addr) && !capable(CAP_SYS_RAWIO))
        return -EACCES;
return 0;</span></pre>
This means that a process with CAP_SYS_RAWIO can bypass the check. How can we get our process to have this capability? By executing a setuid binary, of course! So we set the <a href="http://lxr.linux.no/linux+v2.6.30/fs/binfmt_elf.c#L976">MMAP_PAGE_ZERO personality</a> and execute a setuid binary. Page zero will get mapped, but the setuid binary is now executing and we don't have control anymore.<br />
So, how do we get control back? Using something such as "/bin/su our_user_name" could be tempting: it would indeed give us control back, but su drops privileges before doing so (it'd be a vulnerability otherwise!), so the Linux kernel will make that exec fail in the cap_file_mmap check (due to the MMAP_PAGE_ZERO personality).<br />
<br />
So what we need is a setuid binary that will give us control back without going through exec. We found such a setuid binary that is installed on many desktop Linux machines by default: pulseaudio. pulseaudio will drop privileges and let you specify a library to load through its -L argument. Exactly what we needed!<br />
<br />
Once we have <span style="font-style: italic;">one page</span> mapped in the forbidden area, it's game over. Nothing will prevent us from using mremap to grow the area and mprotect to change our access rights to <span style="font-family: courier new;">PROT_READ|PROT_WRITE|PROT_EXEC</span>. So this completely bypasses the Linux kernel's protection.<br />
<br />
Note that apart from this problem, the mere fact that MMAP_PAGE_ZERO is not in the <a href="http://lxr.linux.no/linux+v2.6.30/include/linux/personality.h#L43"><span style="font-family: courier new;">PER_CLEAR_ON_SETID</span></a> mask, and is thus allowed when executing setuid binaries, can be a security issue: being able to map page zero in a process with euid=0, even without controlling its content, could be useful when exploiting a NULL pointer vulnerability in a setuid application.<br />
<br />
We believe that the <a href="http://patchwork.kernel.org/patch/32598/">correct fix for this issue</a> is to add <span style="font-family: courier new;">MMAP_PAGE_ZERO</span> to the <span style="font-family: courier new;"><a href="http://lxr.linux.no/linux+v2.6.30/include/linux/personality.h#L43">PER_CLEAR_ON_SETID</a></span> mask.<br />
PS: Thanks to Robert Swiecki for some help while investigating this.Unknownnoreply@blogger.com7tag:blogger.com,1999:blog-8992811497323121233.post-26348560518359191272009-05-28T04:47:00.000-07:002009-05-28T14:58:42.154-07:00Time-stamp counter disabling oddities in the Linux kernelThe time-stamp counter (TSC) is part of the performance monitoring facilities provided on Intel processors. It's stored in a 64-bits MSR. Except for 64-bit wraparound (and of course reset), the TSC is guaranteed to be monotonically increasing by Intel, but not necessarily at a constant rate.<br />Historically, the TSC increased with every internal processor clock cycle, but now the rate is usually constant (even if the processor changes frequency) and usually equals the maximum processor frequency.<br /><br />There are multiple ways of reading the value of the TSC MSR, a popular one is the RDTSC instruction. This instruction will load the value into EDX:EAX and is not privileged unless the Time-Stamp Disable (TSD) bit is set in CR4. Most operating systems will not set CR4.TSD on any thread, so programmers are free to use RDTSC in their Ring3 code.<br /><br />The problem is that the TSC has been used as a tool in the past to mount side channel attacks. Two examples are <a href="http://people.csail.mit.edu/tromer/papers/cache.pdf">"Cache Attacks and Countermeasures: the Case of AES"</a> by Osvik, Shamir and Tromer and <a href="http://www.daemonology.net/papers/htt.pdf">"Cache missing for fun and profit"</a> by Colin Percival. 
(Less importantly, it has also been used to create exploits against race conditions in the Linux kernel <a href="http://downloads.securityfocus.com/vulnerabilities/exploits/expand_stack.c">such as this one</a>.)<br /><br />In an attempt to kill RDTSC as a tool for conducting various mischiefs, Andrea Arcangeli, author of the SECCOMP <a href="http://www.kernel.org/doc/man-pages/online/pages/man2/prctl.2.html">prctl</a> (which allows a thread to enter a sandboxed "computing mode" where only the read, write, exit and sigreturn syscalls are allowed), tried to disable RDTSC by setting CR4.TSD in any thread that runs under seccomp (in 2.6.12).<br /><br />That's where the oddities begin: I was recently surprised to see that a process I ran under seccomp actually had access to rdtsc. A quick look at the source code of my kernel revealed this:<br /><br /><span style="font-family:courier new;">#ifdef TIF_NOTSC</span><br /><span style="font-family:courier new;">disable_TSC();</span><br /><span style="font-family:courier new;">#endif</span><br /><br />Note that TIF_NOTSC is not a config option! So I took a look at both thread_info_64.h and thread_info_32.h and discovered that in the 64-bit version, TIF_NOTSC was not defined. As a consequence, a 32-bit kernel will disable the TSC in seccomp threads but a 64-bit kernel will not (even for 32-bit processes). Chris Evans <a href="http://scarybeastsecurity.blogspot.com/2009/02/linux-kernel-minor-seccomp.html">blogged previously</a> about how a seemingly simple security technology such as seccomp could still have bugs. "Here's another one," I thought.<br /><br />While tracking this bug, I found out that it wasn't a bug but a conscious decision by Andi Kleen to <a href="http://kernel.org/hg/linux-2.6/?cs=2fd4e5f089df">not disable TSC, but only on x86_64</a> (patch applied in 2.6.14), for performance reasons. I consider this a really odd decision: seccomp behaving differently on 64-bit and 32-bit kernels is nonsense! 
If you consider TSC disabling a security feature, it has to behave consistently or you should just remove it altogether. Here is <a href="http://lkml.org/lkml/2005/11/5/73">a thread</a>, started in November 2005 by Andrea Arcangeli, who also regretted the lack of consistency.<br /><br />But then, in Linux 2.6.23, this feature became impact-free, performance-wise. So at that point, I really consider not having it on x86_64 kernels a bug, not only a strange decision. As I mentioned previously, the bug is due to TIF_NOTSC not being defined for 64-bit kernels.<br />I wondered if this bug would still be there in recent Linux kernels despite the ongoing <a href="http://lwn.net/Articles/243704/">i386 and x86_64 merge</a>. It wasn't there in 2.6.27, where thread_info_64.h and thread_info_32.h were merged into one thread_info.h file. But in fact, it had already been corrected in 2.6.26, at the same time as a new feature, prctl(PR_SET_TSC), was introduced.<br /><br />PR_SET_TSC lets you control the CR4.TSD flag for your thread: you can make your thread SIGSEGV on rdtsc. And this feature is another big oddity for me: if you consider rdtsc harmful, it would make sense to let a process drop the privilege to use RDTSC, but the weird thing here is that nothing forbids you from calling prctl(PR_SET_TSC) again to clear the TSD flag and restore your privilege to use rdtsc! So I can't imagine what this is for; the only use case I can see would be in <a href="http://scarybeastsecurity.blogspot.com/2009/02/vsftpd-210-and-ptrace-sandboxing.html">a ptrace sandbox</a>.<br /><br />Another use case would have been SECCOMP of course. By removing the automatic TSC disabling from seccomp, a thread could use PR_SET_TSC prior to using PR_SET_SECCOMP if it wanted to disable rdtsc, thus making this behavior configurable. Since a thread under seccomp cannot call prctl(), the thread wouldn't have been able to re-enable it. 
But the problem would have been that existing code relying on SECCOMP might be expecting to drop TSC access without having to use PR_SET_TSC. But wait! This feature had never worked in the first place, so it was the perfect time to change the behavior and finally fix this bug. Another oddity!<br /><br />I should also discuss the whole idea of forbidding access to the useful rdtsc instruction in the first place. Could an attacker emulate this with a thread on another processor incrementing a counter manually anyway? Are <a href="http://en.wikipedia.org/wiki/Real-time_clock">the RTC</a>, <a href="http://en.wikipedia.org/wiki/HPET">HPET</a> or the gtod_data counter in the vsyscall page usable? How realistic are those side channel attacks in the first place? If they are, when could they become the easiest attack you can perform on a system? That will be for another post.Unknownnoreply@blogger.com2tag:blogger.com,1999:blog-8992811497323121233.post-77586593160845744702009-05-19T07:45:00.000-07:002009-07-25T12:06:24.405-07:00Write once, own everyone, Java deserialization issues<span style="font-style: italic;">EDIT3: This vulnerability has been <a href="http://pwnie-awards.org/2009/nominees.html#bestclientbug">nominated for a Pwnie Award</a> for "best client-side bug"!<br />EDIT 2: On June 15th 2009, <a href="http://support.apple.com/downloads/Java_for_Mac_OS_X_10_5_Update_4">Apple updated Java</a> on MacOS X with a version that fixes this issue.<br />EDIT 1: this has been featured on </span><a style="font-style: italic;" href="http://it.slashdot.org/article.pl?sid=09/05/19/2344239">Slashdot</a><span style="font-style: italic;">, </span><a style="font-style: italic;" href="http://arstechnica.com/apple/news/2009/05/apple-has-yet-to-patch-critical-java-vulnerabilitya-vulnerability-in-the-java-virtual-machine-which.ars">Ars Technica</a><span style="font-style: italic;">, </span><a 
style="font-style: italic;" href="http://news.zdnet.co.uk/security/0,1000000189,39654392,00.htm">ZDNet</a><span style="font-style: italic;">, </span><a style="font-style: italic;" href="http://www.osnews.com/story/21522">OSnews</a><span style="font-style: italic;"> and many others. The focus was on the fact that this is still not fixed on MacOS X. However, keep in mind that you may still be at risk if you use another operating system (Windows, Linux), especially with an outdated version of Java, which is very common. You should disable Java applets in your browser if you can, or at least consider using </span><a style="font-style: italic;" href="http://noscript.net/">NoScript</a><span style="font-style: italic;">.</span><br /><br />It is time to talk about my favorite client-side vulnerability ever. Surprisingly (if you know me), this is a Java vulnerability, or rather a class of Java vulnerabilities, that allows an attacker to completely bypass the Java sandbox and execute arbitrary code remotely in Java-enabled web browsers.<br />This was found by <a href="http://slightlyrandombrokenthoughts.blogspot.com/">Sami Koivu</a>. He reported <a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-5353">the first instance of it (CVE-2008-5353)</a> to Sun on August 1st 2008 and this instance was <a href="http://sunsolve.sun.com/search/document.do?assetkey=1-66-244991-1">fixed by Sun</a> on December 3rd 2008. These vulnerabilities are both technically interesting and have a lot of impact.<br /><br />Since they share core classes, OpenJDK, GIJ, icedtea and Sun's JRE were all vulnerable at some point. And unfortunately, this vulnerability is still not fixed everywhere yet.<br /><br />I've been wanting to talk about this for a while. I was holding off while Apple was working to patch this vulnerability. Unfortunately, it is still not patched in <a href="http://support.apple.com/kb/HT3549">their latest security update</a> from just a few days ago. 
I believe that since this vulnerability has already been public for almost 6 months, making MacOS X users aware that Java needs to be disabled in their browser is the right thing to do.<br /><br />As a side note, Sami Koivu and I paired at the latest Pwn2own (his vulnerability, my exploit) and owned both Firefox and Safari on MacOS X on day one (Java is there and enabled by default on MacOS X). Unfortunately it fell out of the challenge criteria because the vulnerability had already been reported to Sun and I had already pinged Apple in January about it.<br /><br />So let's talk about the first reported instance of this class of vulnerabilities, the Calendar deserialization vulnerability.<br /><br />For legacy reasons, the deserialization of the sun.util.calendar.ZoneInfo object in a java.util.Calendar has to be fine-tuned, so <a href="http://www.docjar.com/html/api/java/util/Calendar.java.html">the readObject() method in the Calendar class</a> will handle it. However, an applet cannot access sun.util.calendar.ZoneInfo because it is inside "sun" and anything in "sun" has to be trusted for the Java Applet security model to hold.<br />For this reason the code responsible for the ZoneInfo deserialization has to run with privileges. The code in java.util is trusted and can get more privileges by using a doPrivileged block:<br /><pre style="font-family: courier new;">try {
    ZoneInfo zi = (ZoneInfo) AccessController.doPrivileged(
        new PrivilegedExceptionAction() {
            public Object run() throws Exception {
                return input.readObject();
            }
        });
    if (zi != null) {
        zone = zi;
    }
} catch (Exception e) {}
</pre>So what does this buy us? We can craft an input and deserialize objects from it. By deserializing a Calendar, we can get a ZoneInfo object deserialized in a privileged context. Wait! How do they check this is a ZoneInfo object? 
They let Java's type checking do this for them. So if we carefully craft our input, we can get an arbitrary object deserialized, but it won't get assigned to zi unless it's a valid ZoneInfo.<br /><br />To exploit this, let's find a class that we would be forbidden to instantiate in an Applet because it would allow us to escape from the Java sandbox. The <a href="http://java.sun.com/j2se/1.5.0/docs/api/java/lang/RuntimePermission.html">RuntimePermission</a> class is a great source of inspiration. A ClassLoader seems to be exactly what we are looking for! Let's make our own ClassLoader subclass and override the readObject() method. This method will be called during deserialization. In this method we can assign ourselves (this) to a static variable so that our shiny new ClassLoader doesn't get garbage collected and so that we can use it later.<br /><br /><div style="text-align: left;"><div style="text-align: left;">With our own ClassLoader we can <a href="http://java.sun.com/j2se/1.3/docs/api/java/lang/ClassLoader.html#defineClass%28java.lang.String,%20byte%5B%5D,%20int,%20int,%20java.security.ProtectionDomain%29">define classes</a> with our own ProtectionDomain (with arbitrary privileges). That's it!<br /></div><div style="text-align: left;"><br />There is more work to do. The overall exploit can be quite complex (mine is over 500 lines, but you can make a simpler version) but you get the basic idea.<br />Also, there is the problem of manually crafting the malicious serialized file. In a first version I did this manually by re-implementing the <a href="http://java.sun.com/javase/6/docs/platform/serialization/spec/protocol.html">Serialization protocol</a>. 
Later I found a nice trick: by overriding replaceObject(), you can let Java do all the work for you.<br /><br />I've mentioned that this was a class of vulnerabilities: the reason is that with this design, every time Java code deserializes an attacker-controlled input in a privileged context, it's a security vulnerability. Sun fixed the Calendar vulnerability (<a href="http://www.cr0.org/misc/calendarpatch.patch">see this patch</a>) by creating a new <code>accessClassInPackage.sun.util.calendar</code> privilege and restricting the doPrivileged block to this, so they didn't fix the whole class of them (more on this in a later post).<br /><br />That's it for the technical part.<br /><br />Now why do I think this client-side arbitrary remote code execution vulnerability is more interesting than most others?<br /><br />First, according to Adobe and Sun, Java is available in <a href="http://www.adobe.com/products/player_census/flashplayer/">80%</a> to <a href="http://dobbscodetalk.com/index.php?option=com_myblog&show=JavaOne-The-Keynote-Summary.html&Itemid=29">90%</a> of all web browsers, which makes it a nice target.<br /><br />Secondly, for various reasons, Java is usually poorly updated:<ul><li>The Sun Java update mechanism isn't tied to the operating system update system on the Windows platform. Personal users and companies don't update it often; some of them do have processes in place to deal with Microsoft's patch Tuesdays but don't for other software updates.<br /></li><li>Many companies are using web applications or Java software that rely on a specific Java version. It may be tedious to update Java because it would break many things. This may be the reason why Apple's Java updates are so infrequent.<br /></li><li>Some Linux distributions don't support Sun's JRE (proprietary software) despite making it available. 
When I asked Ubuntu to fix this vulnerability, they fixed OpenJDK quickly but told me the Sun JRE was not supported (despite being available by default on the latest LTS Ubuntu release).<br /></li></ul>Third, and this is the important point: most other client-side vulnerabilities that can lead to arbitrary code execution, including other Java vulnerabilities, are memory corruption vulnerabilities in a component written in native code. Exploiting those reliably can be hard, especially if you have to deal with multiple operating system versions or with PaX-like protections such as DEP and ASLR.<br />This one is a pure Java vulnerability. This means you can write a 100% reliable exploit in pure Java. This exploit will work on all the platforms, all the architectures and all the browsers! Mine has been tested on Firefox, IE6, IE7, IE8 and Safari, on MacOS X, Windows, Linux and OpenBSD, and should work anywhere.<br /><br />This is close to the holy grail of client-side vulnerabilities.<br /><br />So MacOS X users, please disable Java in your web browser.<br />Others: make sure you have updated Java and still disable it in your web browser: it's a huge attack surface and it suffers from many other security vulnerabilities.<br /></div><div style="text-align: left;">Moreover, even without taking into consideration Java vulnerabilities themselves, since the Java plugin allocates all memory as RWX and doesn't opt in to randomization, a Java applet can be used to bypass ASLR and non-executability (DEP on Windows) in browser exploits.<br /><br />You can also get some information about this vulnerability on Sami Koivu's blog, <a href="http://slightlyrandombrokenthoughts.blogspot.com/2008/12/calendar-bug.html">here</a> and <a href="http://slightlyrandombrokenthoughts.blogspot.com/2009/02/correction-on-how-sun-fixed-calendar.html">here</a>, and a timeline for some of the bugs he reported to Sun <a 
href="http://slightlyrandombrokenthoughts.blogspot.com/2009/04/timeline-of-sun-microsystems-fixing.html">here</a>.<br /></div></div>Unknownnoreply@blogger.com30tag:blogger.com,1999:blog-8992811497323121233.post-72253196565955243862009-04-22T16:21:00.000-07:002009-07-15T08:14:08.832-07:00Local bypass of Linux ASLR through /proc information leaks<span style="font-style: italic;">EDIT2: Thanks to the efforts of Jake Edge, who noticed our presentation, the /proc/pid/stat information leak is now at least partially </span><a style="font-style: italic;" href="http://patchwork.kernel.org/patch/21766/">patched in the mainline kernel</a>, since <a href="http://www.kernel.org/pub/linux/kernel/v2.6/ChangeLog-2.6.27.23">2.6.27.23</a><br /><span style="font-style: italic;">EDIT1: This is featured in an </span><a style="font-style: italic;" href="http://lwn.net/Articles/329787/">LWN article</a><span style="font-style: italic;"> by Jake Edge</span><br /><br /><a href="http://taviso.decsystem.org/">Tavis Ormandy</a> and I talked about locally bypassing address space layout randomization (ASLR) in Linux in a lightning talk at CanSecWest.<br /><br />From Linux 2.6.12 to Linux 2.6.21, you could completely bypass ASLR when targeting local processes by reading /proc/pid/maps. Since Linux 2.6.22, if you cannot ptrace "pid", then you will see an empty /proc/pid/maps.<br /><br />It has been known for at least 7 years now that /proc/pid/stat and /proc/pid/wchan could also leak sensitive information. 
Reading this information has been prevented in <a href="http://www.grsecurity.net/">GRSecurity</a> since the beginning, as well as in <a href="http://www.cr0.org/pax-obscure/">this patch</a>.<br /><br />The question was: could you exploit this information to bypass ASLR in practice?<br />If you want to find out, it's easy: we've just published <a href="http://www.cr0.org/paper/to-jt-linux-alsr-leak.pdf">the slides</a> and <a href="http://code.google.com/p/fuzzyaslr/">Tavis' tool</a>!Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-8992811497323121233.post-74430618500310509072009-04-16T08:47:00.000-07:002009-05-28T07:08:23.356-07:00Interesting vulnerability in udevd I used to love exploiting memory corruption vulnerabilities. It usually requires some reverse engineering, good knowledge of the underlying operating system and some <span>ingenuity to write reliable exploits. And if you try to circumvent clever protections such as PaX, it can get very tricky.<br /><br />But besides kernel vulnerabilities, exploitable memory corruption vulnerabilities these days are mostly buffer overflows. It's a bit monotonous.<br /><br />I get more excited by other kinds of vulnerabilities, such as Solaris' <a href="http://erratasec.blogspot.com/2007/02/trivial-remote-solaris-0day-disable.html">telnet -froot</a> or the <a href="http://cr0.org/progs/sshfun/">Debian/OpenSSL</a> fiasco.<br /><br /></span><div style="text-align: left;"><div style="text-align: left;"><span>Last night, my friend Raph pointed me to this <a href="http://secunia.com/advisories/34750/">udev flaw</a>. 
If you read <a href="http://launchpadlibrarian.net/25497464/udev_079-0ubuntu34_079-0ubuntu35.1.diff.gz">this patch</a></span><span> you can notice an extra check in <span style="font-family:courier new;">get_netlink_msg()</span>:<br /><span style="font-family:courier new;">if ((snl.nl_groups != 1) || (snl.nl_pid != 0))</span></span></div></div><span><br />This checks whether the message received by udevd had been sent to a specific multicast group (sending to netlink multicast groups is privileged and can only be done with CAP_NET_ADMIN) and also whether it was sent from the kernel's unicast address.<br /><br />From now on, the vulnerability is pretty obvious: before the patch, udevd didn't check the origin of the messages it was receiving through netlink.<br /><br />So can we spoof the kernel and send arbitrary messages to udevd? Yes! And it's easy: it suffices to create a NETLINK socket with the NETLINK_KOBJECT_UEVENT protocol and to send a unicast message to the correct unicast address. In udevd, this address will be the pid of the process that bound the NETLINK socket (udevd's parent). You can easily find it in /proc/net/netlink (thanks Phil). Et voilà!<br /><br />My idea to exploit this was to create a 666 device node that would give direct access to a mounted partition and to <span style="font-family:courier new;">chmod +s</span> some binary file we control by directly writing to the block device (there are userland tools and libraries to do this easily, see <a href="http://e2fsprogs.sourceforge.net/">debugfs</a> for instance).<br /><br />Phil also came up with the idea of replacing /dev/urandom and /dev/random with /dev/zero (the so-called "Debian emulation" backdoor).<br />Raph then found an even better way: on Ubuntu, Debian and others, you can exploit "95-udev-late.rules" and run arbitrary commands by using the "remove" action.<br /><br />And that's it for a slick exploit. 
40 lines of C (5 lines of Python for Phil). Pretty simple, cross-architecture, reliable.</span> And it can escape chroots and some MAC-constrained environments (as long as you can create netlink sockets).Unknownnoreply@blogger.com14tag:blogger.com,1999:blog-8992811497323121233.post-20227024297574598382009-04-04T06:27:00.000-07:002009-04-04T18:03:02.952-07:0026 Yesterday, a friend of mine turned 26. I know what you're thinking: this is very exciting. Indeed, not every year is your age between a square (5^2) and a cube (3^3)!<br /><br />How often does this happen? Well actually, <a href="http://en.wikipedia.org/wiki/26_%28number%29">Wikipedia states</a> that 26 is <span style="font-style: italic;">the only</span> number between a square and a cube (which is not exactly true, but read on). I thought this was cool, let my friend know in a creepy happy birthday e-mail and got back to work.<br /><br />But the same day, I was dragged to a Polish club by friends. It was horrible: the music was awful, absolutely nobody was dancing, nobody was talking and nothing happened. I was very bored, so I started working on the demonstration that 26 was the only number between a square and a cube. Excluding the fact that the bouncer seemed worried that I was standing still (and alone, remember) on the dance floor, it was the perfect activity to have in this club.<br />I first thought it would be easy, but as it turned out the demonstration ended up involving quadratic integer rings and unique factorization domains.<br /><br />So let's start by demonstrating that 26 is the only number preceded by a square and succeeded by a cube. We want to find all integers a and b such that b^3=a^2+2.<br />You can easily prove that a and b are odd: if b is even, 2 divides a^2, so 2 divides a and 4 divides a^2. Consequently, 4 divides b^3 - a^2, so 4 divides 2. Impossible. 
So b is odd, which implies a^2 is odd and a is odd.<br /><br />Then, my first intuition was to use the known solution to this equation to prove there was no other solution. a^2-5^2=b^3-3^3, so (a-5)(a+5)=(b-3)(b^2+3b+9). But this is tedious: there isn't much you can do with this annoying (b^2+3b+9).<br />Well, this is as far as I got in the club. I attempted to make others in the club party one more time and then decided to head home and started working on the proof again. Sad Friday night.<br /><br />When I was in college, I really liked the kind of demonstrations where we used a superset of a given set to prove properties in the first set. Here, we see b^3=a^2+2 and feel hopeless. If only a^2+2 could be factorized... Well, it can be factorized. I didn't spend my youth learning about Cauchy sequences and how to construct R and its algebraic closure C for nothing! So let s be i*sqrt(2) and we have b^3=(a-s)(a+s). But what can we do now?<br />I wanted to play with prime numbers, divisors and gcds, and now we're stuck with complex numbers. Hold on! It turns out that the set of numbers written in the form x+y*s (with x and y integers), written Z[s], with the usual operations is not only a ring (called a quadratic integer ring), but also a Euclidean domain, and that its units are 1 and -1 (proof of this another time). We can still have some fun (for some definitions of fun, including any that would qualify the aforementioned Polish club as fun).<br /><br />So we now have (a-s)(a+s)=b^3. Let's prove that a-s and a+s are mutually prime. Let g be their gcd. g must divide (a+s) - (a-s) = 2s = -s^3. s is prime in Z[s], so g=+-s^x with x being 0, 1, 2 or 3. But g also divides a+s; if x>0, then s divides a+s and so s divides a. But we already know (from the club, remember) that a is odd. And s (i*sqrt(2)) cannot divide an odd number in Z[s]. 
So x=0 and a-s and a+s are mutually prime.<br /><br />Since Z[s] is a Euclidean domain, the fundamental theorem of arithmetic holds (Z[s] is a unique factorization domain): any number in Z[s] can be written as the product of the elements of a unique set of prime numbers (and units). So we can write a-s, a+s and b^3 as products of prime numbers (and units). Since a-s and a+s are mutually prime, a-s and a+s are cubes multiplied by some units. Since 1 and -1 are both cubes, and the only units of Z[s], a-s and a+s are cubes.<br /><br />So let's write a+s=(m+ns)^3 with m and n integers. We get: a+s=m^3-6mn^2+n*(3m^2-2n^2)s. The uniqueness of m' and n' such that x = m'+n'*s in Z[s] (with m' and n' in Z) gives: n*(3m^2-2n^2)=1. So n=+-1. If n=1, we have 3m^2-2=1 and m=+-1. If n=-1, there is no solution for m. So n=1 and m=+-1. We also have a=m^3-6mn^2. So a = 5 or a = -5, which in turn gives b=3.<br /><br />So the only integer solutions to b^3=a^2+2 are (a,b)=(5,3) and (a,b)=(-5,3), and 26 is the only integer preceded by a square and followed by a cube.<br />Happy birthday Parisa!<br /><br />Now what about an integer being preceded by a cube and followed by a square? If Wikipedia is right, there is no integer solution to b^3=a^2-2. Well, there is actually one trivial solution (b=-1 and a=+-1), so Wikipedia is wrong, but is it the only solution? We could be tempted to follow a similar approach: let s' be sqrt(2) and use Z[s'], which is also a ring. But -1 and 1 are not the only units: s'-1 and s'+1 are also units since (s'-1)(s'+1)=1, and so we have an infinite number of units written +-(s'-1)^m and +-(s'+1)^m.<br />Moreover, is Z[s'] still a unique factorization domain? Not sure. 
But you may have to find out if you want to prove 0 is the only number preceded by a cube and followed by a square (for example to celebrate your 0-aged newborn baby).Unknownnoreply@blogger.com8tag:blogger.com,1999:blog-8992811497323121233.post-6581292774246903462009-04-01T03:47:00.000-07:002009-06-09T16:08:10.418-07:00Massive exploitation of instant messaging applications proved feasible<span style="font-style: italic;">EDIT: While most realized this was an April fool's joke, only a few figured out that it was also a genuine smiley shellcode encoder. However, the security implications are of course non-existent. And we have been </span><a style="font-style: italic;" href="http://it.slashdot.org/article.pl?sid=09/04/01/1935214">slashdotted</a><span style="font-style: italic;">!</span><br /><br />Yoann Guillot and I have been assessing the security of instant communication applications for a couple of years.<br />For quite some time now, we have both suspected that it was possible to conduct stealthy and massive attacks on popular chat clients such as MSN, AIM, Trillian or mIRC.<br /><br />Today, we have verified our intuition by creating an encoder that can make any shellcode look like a smiley. It is possible to encode malicious shellcodes in emoticons, leaving exploits indistinguishable from genuine chat messages.<br /><br />This would make massive attacks against instant messaging applications impossible to catch by anti-virus, IDS or similar signature-based technologies. Moreover, it is possible to conduct attacks with plausible deniability.<br /><br />The potential for mass exploitation is undeniable. We are urging Microsoft, AOL and other administrators of popular chat networks to ban smileys (especially animated ones) until all the consequences of this attack have been understood. 
Twitter and Facebook are likely vulnerable too, although we have not yet conducted specific research on those networks.<br /><br /><a href="http://www.cr0.org/misc/smile.rb">This proof of concept program</a> will compile the included sample shellcode, encode it into a valid MSN smiley and compile a test C program using metasm. While the example shellcode and the compiled test program both target Linux, you can supply any shellcode you want, including a Windows one, via the command line.<br /><br />Please use as follows:<br /><br /><span style="font-family:courier new;">"apt-get install libc6-dev-i386 mercurial ruby" if required</span><br /><span style="font-family:courier new;"> "hg clone </span><a style="font-family: courier new;" href="https://metasm.cr0.org/hg/metasm/" target="_blank">https://metasm.cr0.org/hg/<wbr>metasm/</a><span style="font-family:courier new;">"</span><br /><span style="font-family:courier new;"> "cd metasm"</span><br /><span style="font-family:courier new;"> put smile.rb in the metasm directory</span><br /><span style="font-family:courier new;"> "ruby ./smile.rb"</span><br /><span style="font-family:courier new;"> "./test.lol"</span>Unknownnoreply@blogger.com24tag:blogger.com,1999:blog-8992811497323121233.post-70143985239064632672009-03-29T08:35:00.000-07:002009-03-30T17:34:39.590-07:00CanSecWest 2009 reportI am back from <a href="http://www.cansecwest.com/">CanSecWest</a>. Like every year, it was interesting and great fun. And for the first time, presentation material has been put <a href="http://cansecwest.com/csw09archive.html">online</a> in a matter of days!<br /><br />I would definitely recommend checking out the following talks:<br /><ul><li>Immunity's talk about <a href="http://www.immunityinc.com/downloads/skylar_cansecwest09.pdf">exploiting bugs smoothly</a>, without unwanted side effects.
Interesting, but this talk could have used a few real-world examples.</li><li>Loic Duflot's talk about <a href="http://cansecwest.com/csw09/csw09-duflot.pdf">attacking SMM via CPU cache poisoning</a>. Something that was apparently independently discovered a few months later <a href="http://theinvisiblethings.blogspot.com/2009/03/independent-attack-discoveries.html">by Joanna</a>. Be sure to attend <a href="http://www.sstic.org/SSTIC09/programme.do#DUFLOT">the follow-up talk</a> at SSTIC if you can understand French!<br /></li><li><a href="http://zynamics.com/downloads/csw09-slides.pdf">Halvar's talk</a> about static binary analysis and the <a href="http://zynamics.com/downloads/csw09.pdf">accompanying paper</a>. Yes, he really does binary-level abstract interpretation.<br /></li><li>Matt Miller (skape) and Tim Burrell's talk about the evolution of exploit mitigations in Microsoft's products. Some insight into what has been done and what may be done in the future. A good way to check that you're still up to date.</li><li>Microsoft's Jason Shirk and Dave Weinstein's <a href="http://download.microsoft.com/download/7/2/8/728FE40F-93B6-47BD-B67D-78D04B63E27D/Automated%20Security%20Crash%20Dump%20Analysis.pptx">presentation</a> about their <a href="http://www.codeplex.com/msecdbg">!exploitable crash analyser</a>.<br /></li><li>Alexander Sotirov and Mike Zusman's talk about EV certificates. The general idea is based on <a href="http://crypto.stanford.edu/websec/origins/fgo.pdf">Adam Barth and Collin Jackson's paper</a>, which showed how browsers fail to draw a clear barrier between EV SSL and non-EV SSL, including when applying the same-origin policy. This is expected behavior since both are served under the https:// scheme, but the result is that EV is, as currently implemented, useless against MITM attacks (though still useful against phishing attacks).
Alexander and Mike showed various ways of exploiting this, with cool demos!<br /></li></ul>There were other good talks, such as Andrea and Daniele's on power-line leakage (very entertaining, though a bit less so than last year's talk).<br /><br />Nevertheless, this year I was quite disappointed with the lightning talks: only a handful of people bothered to give one. Most probably, everyone wanted to rush to Grouse Mountain for the awesome party!<br /><br /><ul><li>The highlight of the lightning talks was someone showing the relationships between old-school and present-day technologies (finger <-> twitter, talk <-> chat, etc.), with cool <a href="http://www.ngolde.de/tpp.html">pure ASCII slides</a>.</li><li><a href="http://www.secdev.org/">Philippe Biondi</a> talked about stateful protocol modeling in Scapy (with a TCP example).</li><li><a href="http://syscall.eu/progs/">Raphaël Rigo</a> presented his Nintendo DS Wifi scanner.</li><li><a href="http://taviso.decsystem.org/">Tavis Ormandy</a> and I talked about bypassing Linux's recent hiding of the /proc/pid/maps file, which was meant to make ASLR useful locally. The idea is to monitor the stack and instruction pointers in /proc/pid/stat to infer the address space layout (Tavis wrote cool PoC code for this!). Funny to see info-leak prevention done wrong 6 years after <a href="http://www.grsecurity.net/">grsecurity</a> and <a href="http://cr0.org/pax-obscure/">PaX+obs</a> did it right.<br /></li><li>I presented my <a href="http://cr0.org/progs/ttytools/">subtty backdoor</a>.</li><li>Charlie Miller told us how bad it is to report bugs for free. I wonder if he might be biased on this.<br /></li></ul>Another interesting event was the 2009 edition of pwn2own.
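The /proc/pid/stat trick from our lightning talk can be sketched roughly as follows. This is a minimal illustration, not Tavis' PoC; per proc(5), kstkesp and kstkeip are fields 29 and 30 of the stat line, and note that modern kernels zero them for unprivileged readers:

```python
# Rough sketch of the idea: read the kernel-reported stack pointer (kstkesp)
# and instruction pointer (kstkeip) from /proc/<pid>/stat to get hints about
# a process' address-space layout, even when /proc/<pid>/maps is hidden.

def stack_and_ip(pid="self"):
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # comm (field 2) may contain spaces, so split after the closing ')'.
    rest = data.rsplit(")", 1)[1].split()
    # rest[0] is field 3 (state); kstkesp and kstkeip are fields 29 and 30.
    kstkesp, kstkeip = int(rest[26]), int(rest[27])
    return kstkesp, kstkeip

sp, ip = stack_and_ip()
print(hex(sp), hex(ip))
```

Sampling these values repeatedly across a target's lifetime is what lets you infer the layout that the hidden maps file would have revealed directly.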
Everything exciting <a href="http://dvlabs.tippingpoint.com/blog/2009/03/18/pwn2own-2009-day-1---safari-internet-explorer-and-firefox-taken-down-by-four-zero-day-exploits">happened on day 1</a>, since not many people were interested in the phone challenges, and those who were had been annoyed by the lack of specifications before the challenge and couldn't get ready in time.<br /><br />Charlie Miller owned Safari, Nils owned Safari, Firefox and IE8, and I owned Safari and Firefox. For those of you who are asking: I actually paired with someone (more information on this in a later post) and we didn't qualify for a prize because the vulnerabilities had already been reported.<br />The reason for competing was that technically this would still qualify to keep the machine (and also, I must admit, because it's always fun to pop some shells). Charlie was lucky, though: he was the first to give it a try (I was second) and so kept the Mac.<br />Well, I guess that's what you get for <a href="http://www.securityfocus.com/news/11549">not being good researchers and not sitting on issues</a> ;)<br /><br />On Friday, many people left for Whistler for a great ski trip and further interesting security discussions. It was the perfect sequel to a great CanSecWest edition!<br />Unknownnoreply@blogger.com3tag:blogger.com,1999:blog-8992811497323121233.post-35720829005868730442009-03-22T12:29:00.000-07:002009-03-29T12:43:17.220-07:00Blog boot!I have finally decided to open a blog. I am not exactly an early adopter; it took me a long time to feel the need for one.<br />IT security is a long-time interest of mine.
I've usually shared thoughts, ideas and opinions in bars, in restaurants, at conferences or on IRC. I'll use this blog to reach a broader audience.<br />For publishing new tools, I hope it will be more user-friendly than raw updates to <a href="http://www.cr0.org/">http://www.cr0.org</a>.<br /><br />So, here's my first post, from Whistler, Canada, just after the CanSecWest security conference!Unknownnoreply@blogger.com1