Black Hat 2008 Wrap-up

This year I had the chance to present some security-related findings that I had made earlier during the year inside Win32k.sys, the Windows GUI Subsystem and Window Manager. I presented a total of four bugs, all local Denial of Service (DoS) attacks. Two of these attacks could be done on any system up to Vista SP0 (one up to Server 2008) from any account, while the other two were NULL-pointer dereferences that required administrative access to exploit (however, since they are NULL-pointer dereferences, the risk for user->kernel elevation exists in one of them).

Because the first two attacks (from any guest account) relied on highly internal behavior of the Windows NT Kernel (used in all of Microsoft’s modern operating systems today), I thought it was interesting to talk a bit about the internals themselves, and then present the actual bug, as well as some developer guidance on how to avoid hitting this bug.

I’d like to thank everyone who came to see the talk, and I really appreciated the questions and comments I received. I’ve also finally found out why Win32k functions are called “xxx” and “yyy”. Now I just need to find out about “zzz”. I’ll probably share this information in a later post, when I can talk more about Win32k internals.

As promised, you’ll find below a link to the slides. As for the proof of concept code, it’s currently sitting on my home machine, so it’ll take another week. On the other hand, the slides make three of the bugs very clear in pseudo-code; I’ll reiterate them at the bottom of the post.

One final matter — it seems I have been unable to make the email work due to issues with my host. For the moment, please use another e-mail to contact me, such as my Google Mail account. My email address is the first letter of my first name (Alex) followed by my entire last name (Ionescu).

Slides here: Keynote and PDF.

Bug 1:
1) Handle = OpenProcess(Csrss)
2) CreateRemoteThread(Handle, …, NtUserGetThreadState, 15, …)

Bug 2:
1) Handle = CreateWindowStation(…)
2) SetHandleInformation(Handle, ProtectFromClose)
3) CloseWindowStation(Handle)

Bug 3:
1) Handle = CreateWindowStation(…)
2) Find the handle to the current window station. You can enumerate all the handles in the process with the NtQuerySystemInformation API, blindly loop over every handle from 4 to 16 million and mark each one as protected, or use Process Explorer to see what the current window station handle is and hard-code it for this run. It will usually be a low number, typically under 0x50.
3) SetHandleInformation(CurrentWinstaHandle, ProtectFromClose)
4) SwitchProcessWindowStation(Handle);

Inside Session 0 Isolation and the UI Detection Service – Part 2

In part 1 of the series, we covered some of the changes behind Vista’s new Session 0 Isolation and showcased the UI Detection Service. Now, we’ll look at the internals behind this compatibility mechanism and describe its behavior.

First of all, let’s take a look at the service itself — although its file name suggests the name “UI 0 Detection Service”, it actually registers itself with the Service Control Manager (SCM) as the “Interactive Services Detection” service. You can use the Services.msc MMC snap-in to take a look at the details behind the service, or use Sc.exe along with the service’s name (UI0Detect) to query its configuration. In both cases, you’ll notice the service is most probably “stopped” on your machine — as we’ve seen in Part 1, the service is actually started on demand.

But if the service isn’t actually running, yet seems to catch windows appearing on Session 0 and automatically activate itself, what makes it able to do this? The secret lies in the little-known Wls0wndh.dll, which was first pointed out by Scott Field from Microsoft as a really bad name for a DLL (due to its similarity to a malicious library name) during his Black Hat 2006 presentation. The version properties for the file mention “Session0 Viewer Window Hook DLL”, so it’s probably safe to assume the filename stands for “WinLogon Session 0 Window Hook”. So who loads this DLL, and what is its purpose? This is where Wininit.exe comes into play.

Wininit.exe, too, is a component of the session 0 isolation done in Windows Vista — because session 0 is now unreachable through the standard logon process, there’s no need to bestow upon it a fully featured Winlogon.exe. On the other hand, Winlogon.exe has, over the ages, accumulated more and more responsibility as a user-mode startup bootstrapper, performing tasks such as setting up the Local Security Authority Process (LSASS), the SCM, the Registry, starting the process responsible for extracting dump files from the page file, and more. These actions still need to be performed on Vista (and even more have been added), and this is now Wininit.exe’s role. By extension, Winlogon.exe loses much of its non-logon-related work and becomes a leaner, more agile logon application, instead of a poorly modularized catch-all for the user-mode initialization tasks required on Windows.

One of Wininit.exe’s new tasks (that is, tasks which weren’t simply grandfathered from Winlogon) is to register the Session 0 Viewer Window Hook Dll, which it does with an aptly named RegisterSession0ViewerWindowHookDll function, called as part of Wininit’s WinMain entrypoint. If the undocumented DisableS0Viewer value isn’t present in the HKLM\System\CurrentControlSet\Control\WinInit registry key, it attempts to load the aforementioned Wls0wndh.dll, then proceeds to query the address of the Session0ViewerWindowProcHook inside it. If all succeeds, it switches the thread desktop to the default desktop (session 0 has a default desktop and a Winlogon desktop, just like other sessions), and registers the routine as a window hook with the SetWindowsHookEx routine. Wininit.exe then continues with other startup tasks on the system.

I’ll assume readers are somewhat familiar with window hooks; this DLL provides just a standard, run-of-the-mill system-wide hook routine, whose callback will be notified of each new window on the desktop. We now have a mechanism to understand how it’s possible for the UI Detection Service to automatically start itself up — clearly, the hook’s callback must be responsible! Let’s look for more clues.

Inside Session0ViewerWindowProcHook, the logic is quite simple: a check is made on whether or not this is a WM_SHOWWINDOW window message, which signals the arrival of a new window on the desktop. If it is, the DLL first checks for the $$$UI0Background window name, and bails out if this window already exists. This window name is created by the service once it’s already started up, and the point of this check is to avoid attempting to start the service twice.

The second check is how much time has passed since the last attempt to start the service — because more than one window could appear on the screen while the UI Detection Service is still starting up, the DLL tries to avoid sending multiple startup requests. If it’s been less than 300 seconds, the DLL will not start the service again.
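Taken together, the checks boil down to a small piece of decision logic. Here’s a platform-neutral sketch of it in Python — the class, callback names, and message strings are all invented for illustration; the real code lives in Wls0wndh.dll and uses FindWindow, real window messages, and a thread pool work item:

```python
import time

START_THROTTLE_SECONDS = 300  # the 300-second window described above


class Session0Hook:
    """Sketch of Session0ViewerWindowProcHook's decision logic (names invented)."""

    def __init__(self, window_exists, start_service):
        self.window_exists = window_exists    # stand-in for FindWindow
        self.start_service = start_service    # stand-in for queuing the start work item
        self.last_attempt = None

    def on_message(self, message, now=None):
        now = time.monotonic() if now is None else now
        # Only react to new windows appearing on the desktop.
        if message != "WM_SHOWWINDOW":
            return False
        # Bail out if the service already created its marker window.
        if self.window_exists("$$$UI0Background"):
            return False
        # Throttle: don't retry while a previous start may still be in flight.
        if self.last_attempt is not None and now - self.last_attempt < START_THROTTLE_SECONDS:
            return False
        self.last_attempt = now
        self.start_service()  # the real DLL queues a work item that calls StartService
        return True
```

A quick walk-through: the first WM_SHOWWINDOW triggers a start, a second one ten seconds later is throttled, and once the service’s `$$$UI0Background` window exists, the hook stays quiet entirely.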

Finally, if all the checks succeed, a work item is queued using the thread pool APIs with the StartUI0DetectThreadProc callback as the thread start address. This routine has a very simple job: open a handle to the SCM, open a handle to the UI0Detect service, and call StartService to instruct the SCM to start it.

This concludes all the work performed by the DLL — there’s no other code apart from these two routines, and the DLL simply acknowledges window hook callbacks as they come through in cases where it doesn’t need to start the service. Since the service is now started, we’ll turn our attention to the actual module responsible for it — UI0Detect.exe.

Because UI0Detect.exe handles both the user (client) and service part of the process, it operates in two modes. In the first mode, it behaves as a typical Windows service, and registers itself with the SCM. In the second mode, it realizes that it has been started up in a logon session, and enters client mode. Let’s start by looking at the service functionality.

If UI0Detect.exe isn’t started with a command-line, then this means the SCM has started it at the request of the Window Hook DLL. The service will proceed to call StartServiceCtrlDispatcher with the ServiceStart routine specific to it. The service first does some validation to make sure it’s running on the correct WinSta0\Default windowstation and desktop and then notifies the SCM of success or failure.

Once it’s done talking to the SCM, it calls an internal function, SetupMainWindows, which registers the $$$UI0Background class and the Shell0 Window window. Because this is supposed to be the main “shell” window that the user will interact with on the service desktop (other than the third-party or legacy service), this window is also registered as the current Shell and Taskbar window through the SetShellWindow and SetTaskmanWindow APIs. Because of this, when Explorer.exe is actually started up through the trick mentioned in Part 1, you’ll notice certain irregularities — such as the task bar disappearing at times. These are caused because Explorer hasn’t been able to properly register itself as the shell (the same APIs fail when Explorer calls them). Once the window is created and a handler is set up (BackgroundWndProc), the new Vista “Task” Dialog is created and shown, after which the service enters a typical GetMessage/DispatchMessage window message handling loop.

Let’s now turn our attention to BackgroundWndProc, since this is where the main initialization and behavioral tasks for the service will occur. As soon as the WM_CREATE message comes through, UI0Detect will use the RegisterWindowMessage API with the special SHELLHOOK parameter, so that it can receive certain shell notification messages. It then initializes its list of top level windows, and registers for new session creation notifications from the Terminal Services service. Finally, it calls SetupSharedMemory to initialize a section it will use to communicate with the UI0Detect processes in “client” mode (more on this later), and calls EnumWindows to enumerate all the windows on the session 0 desktop.

Another message that this window handler receives is WM_DESTROY, which, as its name implies, unregisters the session notification, destroys the window lists, and quits.

This procedure also receives the WM_WTSSESSION_CHANGE messages it registered for when requesting session notifications. The service listens for remote or local session creations, as well as for disconnections. When a new session is created, it requests the resolution of the new virtual screen, so that it knows where and how to display its own window. This functionality exists to detect when a switch to session 0 has actually been made.

Apart from these three messages, the main “worker” message for the window handler is WM_TIMER. All the events we’ve seen until now call SetTimer with a different parameter in order to generate some sort of notification. These notifications are then parsed by the window procedure, whether to handle returning to the user’s session, to measure whether there’s been any recent input in session 0, or to handle dynamic window create and destroy operations.

Let’s see what happens during these two operations. The first time window creation is acted upon is during the aforementioned EnumWindows call, which generates the initial list of windows. The function only looks for windows without a parent, meaning top-level windows, and calls OnTopLevelWindowCreation to analyze each window. This analysis consists of querying the owner of the window, getting its PID, and then querying module information about that process. The version information is also extracted, in order to get the company name. Finally, the window is added to a list of tracked windows, and a global count variable is incremented.

This module and version information goes into the shared memory section we briefly described above, so let’s take a closer look at it. It’s created by SetupSharedMemory, which actually creates two handles. The first handle is created with SECTION_MAP_WRITE access, and is saved internally for write access. The handle is then duplicated with SECTION_MAP_READ access, and this read-only handle will be used by the client.

What’s in this structure? The type name is UI0_SHARED_INFO, and it contains several informational fields: some flags, the number of windows detected on the session 0 desktop, and, more importantly, the module, window, and version information for each of those windows. The function we just described earlier (OnTopLevelWindowCreation) uses this structure to save all the information it detects, and the OnTopLevelWindowDestroy does the opposite.
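The write/read handle split can be approximated outside Windows. Below is a hedged Python sketch of the same idea, using mmap access flags over a temporary file in place of SECTION_MAP_WRITE/SECTION_MAP_READ (the real code uses CreateFileMapping and DuplicateHandle on a section object; everything here is invented for illustration):

```python
import mmap
import os
import tempfile

SIZE = 4096  # size of the toy "section"

# Backing store for the shared section (Windows sections can be pagefile-backed;
# here we just use a temporary file).
fd, path = tempfile.mkstemp()
os.ftruncate(fd, SIZE)

# The "service" keeps a writable view; the "client" gets a read-only view,
# analogous to the duplicated SECTION_MAP_READ handle.
service_view = mmap.mmap(fd, SIZE, access=mmap.ACCESS_WRITE)
client_view = mmap.mmap(fd, SIZE, access=mmap.ACCESS_READ)

service_view[0:5] = b"hello"         # service publishes window information
assert client_view[0:5] == b"hello"  # client can read it...

try:
    client_view[0:5] = b"nope"       # ...but cannot tamper with it
    writable = True
except TypeError:                    # read-only mmaps reject writes
    writable = False
assert not writable
```

The design point this mirrors is least privilege: the client process only ever holds a handle that cannot modify the section, so a compromised or misbehaving client can’t corrupt the service’s view of the window list.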

Now that we understand how the service works, what’s next? Well, one of the timer-related messages that the window procedure is responsible for will check whether or not the count of top level windows is greater than 0. If it is, then a call to CreateClients is made, along with the handle to the read-only memory mapped section. This routine will call the WinStationGetSessionIds API and enumerate through each ID, calling the CreateClient routine. Logically, this is so that each active user session can get the notification.

Finally, what happens in CreateClient is pretty straightforward: the WTSQueryUserToken and ImpersonateLoggedOnUser APIs are used to get an impersonation logon token corresponding to the user on the session ID given, and a command line for UI0Detect is built, which contains the handle of the read-only memory mapped section. Ultimately, CreateProcessAsUser is called, resulting in the UI0Detect process being created on that Session ID, running with the user’s credentials.

What happens next will depend on user interaction with the client, as the service will continue to behave according to the rules we’ve already described, detecting new windows as they’re being created, detecting destroyed windows as well, and waiting for a notification that someone has logged into session 0.

We’ll follow up on the client-mode behavior in Part 3, where we’ll also look at the UI0_SHARED_INFO structure and an access validation flaw which will allow us to spoof the client dialog.

Inside Session 0 Isolation and the UI Detection Service – Part 1

One of the many exciting changes in Windows Vista’s service security hardening mechanisms (which have been aptly explained and documented in multiple blogs and whitepapers, so I’ll refrain from rehashing old material) is Session 0 Isolation. I thought it would be useful to talk about this change and describe the behaviour and implementation of the UI Detection Service (UI0Detect), an important part of the infrastructure for providing compatible behaviour with earlier versions of Windows.

As a brief refresher or introduction to Session 0 Isolation, let’s remember how services used to work on previous versions of Windows: you could run them under various accounts (the most common being System, Local Service and Network Service), and they ran in the same session as the console user, who was logged on to session 0 as well. Services were not supposed to display GUIs, but, if they really had to, they could be marked as interactive, meaning that they could display windows on the interactive window station for session 0.

Windows implemented this by allowing such services to connect to the Winsta0 Windowstation, which is the default interactive Windowstation for the current session — unlike non-interactive services, which belonged to a special “Service-0x0-xxx$” Windowstation, where xxx was a logon session identifier (you can look at the WDK header ntifs.h for a list of the built-in account identifiers, or LUIDs). You can see the existence of these windowstations by enumerating them in the object manager namespace with a tool such as Sysinternals’ WinObj.


Essentially, this meant three things: applications could do Denial of Service attacks against named objects that the service would expect to own and create; they could feed malicious data to objects, such as sections, which were incorrectly secured or trusted by the service; and, for interactive services, they could also attempt shatter attacks — sending window messages with executable payloads in their buffer, exploiting service bugs and causing the payload code to execute at higher privileges.

Session 0 Isolation puts an end to all that, first by giving the console user a separate session (user sessions now start at 1, thus protecting named objects), and second, by disabling support for interactive services — that is, even though services may still display a UI, it won’t appear on any user’s desktop (instead, it will appear on the default desktop of the session 0 interactive windowstation).

That’s all fine and dandy for protecting the objects, but what if the service hasn’t been recompiled not to directly show a UI (but to instead use a secondary process started with CreateProcessAsUser, or to use the WTSSendMessage API) and depends on user input before continuing? Having a dialog box on the session 0 desktop without the user’s awareness would potentially have significant application compatibility issues — this is where the UI Detection Service comes into play.

If you’re like most Vista users, you’ve actually probably never seen the default desktop on session 0’s interactive windowstation in your life (or in simpler words, never “logged-on” or “switched to” session 0)! Since you can’t log on to it, and since interactive services which displayed UIs are thankfully rare, it remains a hidden mystery of Windows Vista, unless you’ve encountered such a service. So let’s follow Alice down the rabbit hole into session 0 wonderland, with some simple Service Controller (Sc.exe) commands to create our very own interactive service.

Using an elevated command prompt, create a service called RabbitHole with the following command:

sc create RabbitHole binpath= %SYSTEMROOT%\system32\notepad.exe type= interact type= own

Be careful to get the right spaces — there’s a space after each equal sign! You should expect to get a warning from Sc.exe, notifying you of changes in Windows Vista and later (the ones I’ve just described).

Now let’s start the service, and see what happens:

sc start RabbitHole

If all went well, Sc.exe should appear as if it’s waiting on the command to complete, and a new window should appear on your taskbar (it does not appear in the foreground). That window is a notification from the UI Detection Service, the main protagonist of this story.


Get ready to click on “Show me the Message” as soon as you can! Starting an essentially fake service through Sc.exe will eventually annoy the Service Control Manager (SCM), causing it to kill Notepad behind your back (don’t worry if this happens, just use the sc start RabbitHole command again).

You should now be in Session 0 (and probably unable to read the continuation of this blog, in which case the author hopes you’ve been able to find your way back!) As you can notice, Session 0 is a rather deserted place, due to the lack of any sort of shell or even the Theme service, creating a Windows 2000-like look that may bring back tears of joy (or agony) to the more nostalgic of users.


On the other hand, this desolate session does contain our Notepad, which you should’ve seen disappear if you stayed long enough — that would be the SCM reaching the timeout on how long it’s willing to wait around hoping for Notepad to send a “service start” message back (which it never will).

Note that you can’t start just any program in Session 0 — Cmd.exe and Explorer.exe are examples of programs that, for one reason or another, won’t load this way. However, if you’re quick enough, you can use an old trick common to getting around early-’90s “sandbox” security applications found in many libraries and elementary schools — use the common dialog (from File, Open) to browse executable files (switch the file type to *.*, All Files), go to the System32 folder, right-click on Explorer.exe, and select Open. Notepad will now spawn the shell, and even if the SCM kills Notepad, the shell will remain active — feel free to browse around (but try not to browse around in IE too much; you are running with System privileges!)

That’s it for this introduction to this series. In part 2, we’ll look at what makes this service tick, and in part 3, we’ll look at a technique for spoofing the dialog to lie to the user about which service is actually requesting input. For now, let’s delete the RabbitHole, unless you want to keep it around for impressing your colleagues:

sc delete RabbitHole

Building the Lego Millennium Falcon: A Lesson in Security?

Not all of a reverse engineer’s life has to be about undoing — sometimes it is equally as fun to build something from scratch, whether that means a new tool… or the Star Wars 30 Year Anniversary Lego Ultimate Collector’s Millennium Falcon! Over the course of the last three weeks, my best friend and I have spent countless hours building this magnificent model, which has over 5000 pieces, 91 “major” construction steps (with each step taking up to 30 sub-steps, sometimes 2x’d or 4x’d) and a landscape, 8×14″, 310-page instruction manual.

Late last night, we completed the final pieces of the hull, the radar dish, and the commemorative plaque (itself made of Lego). We had previously built the Imperial Star Destroyer (ISD) last year, but nothing in our Lego-building lives had ever quite come close to the work we put into this set. Complete full-size pictures after the entry.

Along the way, we both learnt some important facts about the Lego manufacturing process — for example, we had already noticed that in our ISD set had some extra pieces, and that other people’s sets had different extra pieces, however, we weren’t too sure what to make of it. This year however, due to the fact we were missing exactly half the number of lever pieces and 1×2 “zit” pieces, we did some extra digging.

The first thing that boggles many Lego builders of such large sets is the arrangement of pieces within the bags: the pieces are not arranged in construction order. There’s nothing wrong with breaking open all the bags and sorting the pieces! Whether or not that will save you time, however, is up for discussion. We did a small sort, mostly separating hull pieces from thick pieces from greebling pieces, and that did seem to help a lot. However, I wouldn’t recommend spending 5 hours sorting the pieces as was done on Gizmodo.

By step 21, unfortunately, we noticed we were missing a piece — something that Lego says should never happen (more on this shortly). Step 22 also required that piece, and the next few steps required even more. I came up with an interesting hypothesis: a missing bag. We counted the number of pieces we’d still need, and it came out to exactly the number of pieces we had already used. In other words, half of the pieces were missing. Those pieces were in the same bag as the lever-looking pieces, and those too, after counting, were missing half their number. By this point, we were sure that we were missing a bag, and went to check Lego’s site for help.

It turns out that Lego’s site does have a facility for ordering missing set pieces, and for free! After putting in the appropriate information, including the set number and piece number, I filled out my address… and received a “Thank you”. Unfortunately, we never got any further confirmation, and the missing pieces have yet to arrive. Most interesting, however, was a notice on Lego’s website claiming that missing pieces are quite rare, because each set is precision-weighted, so sets with missing pieces get flagged for extra, human QA. At first sight, this makes it very unlikely for a set to be missing a piece — so how did we end up missing nearly 60 pieces? (Note to readers: the missing pieces are all hull greebling (decorative, not structural), and we duly marked down the steps at which they were required, so we can add them once we receive them.)

The answer to that question only came to us once we completed construction of the Falcon. There it was, on our workshop table (my kitchen table): an opened bag, full of Lego pieces… which we had already used up, and which weren’t required! It then became pretty obvious to us that Lego has a major flaw in their weight-based reasoning: replacement! We couldn’t scientifically verify it, but the extra bag we had was of similar size and weight (being small pieces) to the second bag of levers and 1×2 pieces we were missing. Evidently, a machine error (most probably) or human error caused an incorrect bag of pieces to be added to the set. At the QA phase, the set passed the weight tests, because this bag was of the same (or nearly the same) weight as the missing bag!

Furthermore, due to the fact both the ISD and the Falcon had pieces that were not included in the Appendix of piece counts (again, probably due to machine error while composing the set), their combined weight may have even pushed the weight of our set past the expected weight. It is unlikely that Lego would flag heavier sets for QA — at worst, the customer would get some free pieces of Lego. However, when that weight helps offset the weight of missing pieces, it can certainly become a problem. And when a bag of pieces is accidentally replaced by a similar bag, then weight measuring doesn’t do much at all.
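This failure mode is easy to model. Here’s a toy Python sketch (all weights, tolerances, and bag names are invented) showing how a weight-only QA check happily passes a set whose correct bag was replaced by a wrong bag of similar weight:

```python
# Toy model of weight-only QA: a set passes if its total weight is within
# tolerance of the expected weight. Replacing one bag with a different bag
# of similar weight therefore goes undetected.
EXPECTED_BAGS = {"hull": 420.0, "greebles": 85.0, "levers_and_zits": 34.5}
TOLERANCE = 1.0  # grams (invented)


def passes_qa(bags):
    expected = sum(EXPECTED_BAGS.values())
    return abs(sum(bags.values()) - expected) <= TOLERANCE


correct_set = dict(EXPECTED_BAGS)

# Our set: the levers/zits bag is missing, but a wrong bag of small pieces
# with nearly the same weight took its place.
our_set = {"hull": 420.0, "greebles": 85.0, "wrong_bag": 34.2}

assert passes_qa(correct_set)
assert passes_qa(our_set)  # QA passes even though the right pieces are gone
```

The check only measures an aggregate property (total weight), not the identity of the contents — exactly the gap a substitution slips through.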

Granted, a lot of our analysis is based on assumptions, but they certainly do check out. Lego says their primary method of checking sets is to weigh them, and we have an extra bag and a missing bag of similar size and composition. The hypothesis seems valid, and perhaps a phone call to Lego will confirm it (if I don’t get the pieces soon, I certainly plan on doing that).

Ironically, this breaking of a QA test through replacement was similar to an interesting security question I received by mail recently: why is it a bad idea to use the Owner SID of an object as a way to authenticate the object, or its creator? It turns out this can leave you vulnerable to rename operations: on a file where an attacker has write and delete access, they can impersonate the Owner SID while substituting their own data in the object. Breaking a security test by renaming an object, or breaking a quality assurance test by replacing a bag — the two stem from the same problem: bad design and simplistic assumptions.
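To make the analogy concrete, here’s a toy Python sketch of the flawed check (the namespace, SID strings, and helper names are all invented; real code would be examining security descriptors on NT objects):

```python
# Toy model of the flaw: a consumer "authenticates" an object by checking
# the Owner SID recorded on whatever its well-known name currently resolves
# to. With write and delete (rename) access, an attacker can make that name
# resolve to an object that still carries the trusted owner's SID, yet holds
# attacker-controlled data.
class NamedObject:
    def __init__(self, owner_sid, data):
        self.owner_sid = owner_sid
        self.data = data


namespace = {
    r"\Data\config": NamedObject("S-1-5-80-service", b"trusted settings"),
}


def read_trusted(namespace, name, expected_owner):
    obj = namespace.get(name)
    # Flawed authentication: the owner SID proves who created the object,
    # not that its current contents are trustworthy.
    if obj is not None and obj.owner_sid == expected_owner:
        return obj.data
    return None


# Attacker with write + delete access: modify the data, then rename the
# object back under the expected name. The Owner SID travels with the
# object, so the check still passes.
obj = namespace.pop(r"\Data\config")   # delete/rename away
obj.data = b"attacker settings"        # write access lets the data change
namespace[r"\Data\config"] = obj       # rename back under the trusted name

assert read_trusted(namespace, r"\Data\config", "S-1-5-80-service") == b"attacker settings"
```

The lesson is the same as the Lego bag: an identity check on a container (its owner, or its weight) says nothing about what’s inside it.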

And now, without further ado, here’s a link to our construction pictures.

ScTagQuery: Mapping Service Hosting Threads With Their Owner Service

Today I want to introduce another utility for Vista and Windows Server 2008 called ScTagQuery (short for Service Controller Tag Query), a tool which allows you to identify which running service a given thread inside a service hosting process (e.g. Svchost.exe) belongs to, in order to help identify which services may be using up your CPU, or to better understand the stack trace of a service thread.

If you’ve ever had to deal with a service process on your system taking up too much CPU usage, memory, or other kinds of resources, you’ll know that Task Manager isn’t particularly helpful at finding the offending service because it only lists which services are running inside which service hosting process, but not which ones are consuming CPU time (in fact, some Svchosts host more than a dozen services!)

Task Manager’s Services Tab

Process Explorer can also display this information, as well as the names of the DLLs containing the services themselves. Combined with the ability to display threads inside processes, including their starting address (which for some service threads, identifies the service they are associated with by corresponding to their service DLL) and call stack (which helps identify additional threads that started in a generic function but entered a service-specific DLL), Process Explorer makes it much easier to map each thread to its respective service name. With the cycles delta column, the actual thread causing high CPU usage can reliably be mapped to the broken or busy service (in the case of other high resource usage or memory leaks, more work with WinDBG and other tools may be required in order to look at thread stacks and contexts).

However, with each newer Windows release, the usage of worker pool threads, worker COM or RPC communication threads, and other generic worker threads has steadily increased (this is a good thing – it makes services more scalable and increases performance in multiprocessor environments), making this technique a lot less useful. These threads belong to their parent wrapper routines (such as those in ole32.dll or rpcrt4.dll) and cannot be mapped to a service by looking at the start address. The two screenshots below represent a service hosting process on my system which contains 6 services — yet only one of the DLLs identified by Process Explorer is visible in the list of threads inside the process (netprofm.dll).

Only one service DLL is visible

Again, there’s nothing wrong with using worker threads as part of your service, but it does make identifying the owner of the actual code doing the work a lot harder. To address this need, Windows Vista adds a new piece of information called the Service Tag. This tag is contained in the TEB of every thread (internally called the Sub-Process Tag), and is currently used in threads owned by service processes as a way to link them with their owning service name.

WinDBG showing the Service Tag

When each service is registered on a machine running Windows Vista or later, the Service Control Manager (SCM) assigns a unique numeric tag to the service (in ascending order). Then, at service creation time, the tag is assigned to the TEB of the main service thread. This tag will then be propagated to every thread created by the main service thread. For example, if the Foo service thread creates an RPC worker thread (note: RPC worker threads don’t use the thread pool mechanism – more on that later), that thread will have the Service Tag of the Foo service.
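The propagation behavior can be sketched in a few lines of Python — purely illustrative: thread-local state stands in for the TEB’s Sub-Process Tag, and the explicit wrapper mimics what Windows does transparently when a tagged service thread creates another thread:

```python
import threading

# Per-thread state, standing in for the TEB's Sub-Process Tag field.
_state = threading.local()


def get_service_tag():
    """Return this thread's tag, or None if the thread is untagged."""
    return getattr(_state, "tag", None)


def spawn_tagged(target, *args):
    """Create a thread that inherits the creator's service tag."""
    parent_tag = get_service_tag()

    def wrapper():
        _state.tag = parent_tag  # propagate the tag into the new thread
        target(*args)

    t = threading.Thread(target=wrapper)
    t.start()
    return t


# A "service" main thread, tagged (say, with tag 42) at creation time,
# spawns a worker that inherits the same tag.
seen = []


def service_main():
    _state.tag = 42  # tag assigned when the SCM registered the service

    def worker():
        seen.append(get_service_tag())  # worker observes tag 42

    spawn_tagged(worker).join()


t = threading.Thread(target=service_main)
t.start()
t.join()
assert seen == [42]
```

This is exactly what makes tag-based attribution work for RPC and COM worker threads whose start address alone says nothing about which service they serve.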

ScTagQuery uses this Sub-Process Tag as well as the SCM’s tag database and undocumented Query Mapping API (I_ScQueryTagInformation) to display useful information for tracking down rogue services, or simply to glean a better understanding of the system’s behavior. Let’s look at the options it supports and some scenarios where they would be useful.

ScTagQuery options

During a live session, the -p, -s and -d options are most useful. The -p option displays a list of all service threads which have a service tag inside the given process, then maps each tag to its service name. It can also run without a process ID, causing it to display system-wide data (enumerating each active process). The -s option, on the other hand, dumps which service tags are active for the process, but not the actual threads linked to them — it then links these tags to their service names. Finally, the -d option takes a DLL name and PID and displays which services are referencing that DLL. This is useful when a thread running code inside a DLL doesn’t have an associated service tag, but the SCM does know which service is using it.

The -a and -n options are particularly useful when you’ve obtained a service tag from looking at a crash dump yourself, and run the tool on the same system after a reboot. The -n option will let you map the tag to a service name if you know the PID of the service. If the PID changed or is otherwise unrecoverable, the -a option will dump the entire SCM tag database, which includes services that are currently stopped. Because these mappings are persistent across reboots, you’ll be able to map the thread to the service this way.

On the other hand, if all you’re dealing with is one thread, and you want to associate it to its service, the -t option lets you do just that.

Back to that same svchost.exe we were looking at with Process Explorer, here are some screenshots of ScTagQuery identifying the service running code inside rpcrt4.dll, as well as the service referencing the fundisc.dll module.

Identifying FunDisc.dll

Identifying rpcrt4.dll

Note that Thread Pool worker threads do not have a Service Tag associated with them, so the current version of ScTagQuery on Vista cannot yet identify a service running code inside a thread pool worker thread inside ntdll.dll.

Finally, I should mention that ScTagQuery isn’t the only tool which uses Service Tags to help with troubleshooting and system monitoring: Netstat, the tool which displays the state of TCP/IP connections on a local machine, has also been improved to use service tags to better identify who owns an open port (Netstat uses new information returned by various Iphlpapi.dll APIs which now store the service tag of every new connection). With the -b option, Netstat is now able to display the actual service name which owns an active connection, and not just the process name (which would likely just be svchost.exe).

New Netstat -b behavior on Vista

The tool page for ScTagQuery is located here. You can download ScTagQuery in both 32-bit and 64-bit versions from this link. Windows Vista or higher is required.

MemInfo: Peer Inside Memory Manager Behavior on Windows Vista and Server 2008

After my departure from the ReactOS project and subsequent new work for David Solomon, it wasn’t clear how much research and development on Windows internals I would still be able to do on a daily basis. Thankfully, I haven’t given up my number one passion — innovating, pushing the boundaries of internals knowledge, and educating users through utilities and applications.

In this vein, I have been working during my spare time on various new utilities that use new undocumented APIs and expose the internals behind Windows Vista to discover more about how the operating system works, as well as to be able to provide useful information to administrators, developers, students, and anyone else in between. In this post, I want to introduce my latest tool, MemInfo. I’ll show you how MemInfo can help you find bad memory modules (RAM sticks) on your system, track down memory leaks and even assist in detecting rootkits!

One of the major new features present in Windows Vista is Superfetch. Mark Russinovich did an excellent writeup on this as part of his series on Windows Vista Kernel Changes in TechNet Magazine. Because Superfetch’s profiling and management does not occur at the kernel layer (but rather as a service, by design choice), there had to be a new system call to communicate with the parts of Superfetch that do live in the kernel, just like Windows XP’s prefetcher code, so that the user-mode service could request information as well as send commands on operations to be performed.

Here’s an image of Process Explorer identifying the Superfetch service inside one of the service hosting processes.

Because Superfetch goes much deeper than the simple file-based prefetching Windows XP and later offer, it requires knowledge of information such as the internal memory manager lists, page counts and usage of pages on the system, memory range information, and more. The new SuperfetchInformationClass added to NtQuery/SetInformationSystem provides this data, and much more.

MemInfo uses this API to query three kinds of information:

  • a list of physical address ranges on the system, which describe the system memory available to Windows
  • information about each page on the system (including its location on the memory manager lists, its usage, and owning process, if any)
  • a list of system- and session-wide process information, to correlate a process’ image name with its kernel-mode object

MemInfo ultimately provides this information to the user through a variety of command line options described when you run the utility. Some of its various uses include:

    Seeing how exactly Windows is manipulating your memory, by looking at the page list summaries.

    The Windows memory manager puts every page on the system on one of the many page lists that it manages (i.e. the standby list, the zeroed page list, etc.). Windows Internals covers these lists and their usage in detail, and MemInfo is capable of showing you their sizes (including pages marked Active, meaning they are currently in use by a process or the operating system and occupying physical memory, such as in a working set, and thus not on any of the lists). This information can help you answer questions such as “Am I making good use of my memory?” or “Do I have damaged RAM modules installed?”.

    For example, because Windows includes a bad page list where it stores pages that have failed internal consistency checks, MemInfo is an easy way (but not 100% foolproof, since Windows’ internal checks might not have detected the RAM is bad) to verify if any memory hardware on the system is misbehaving. Look for signs such as a highly elevated count of pages in the zeroed page list (after a day’s worth of computer use) to spot if Windows hasn’t been fully using your RAM to its potential (you may have too much!) or to detect a large memory deallocation by a process (which implies large allocations previously done).

    Here’s MemInfo on my 32-bit Vista system, displaying summary page list information.

    Windows Vista also includes a new memory manager optimization called prioritized standby lists — the standby state is the state in which pages find themselves when they have been cached by Windows (various mechanisms are responsible for this kind of caching, including the cache manager and Superfetch) and are not currently active in memory. Mark covered these new lists in his excellent article as well.

    To expose this information to system administrators, three new performance counters were added to Windows, displaying the size of the prioritized standby lists in groupings: priorities 0 through 3 are called Standby Cache Reserve, 4 and 5 are called Standby Cache Normal Priority, and finally, 6 and 7 are called Standby Cache Core. MemInfo, which can also display these new lists, is an even better tool for identifying memory in the standby state, since it is able to display the size of each of these lists individually.

    While memory allocations on Windows XP (which could be part of application startup, the kernel-mode heap, or simple memory allocations coming from various processes) would consume pages from a single standby list and thus possibly steal away pages that more critical processes would’ve liked to have on standby, Windows Vista adds 8 prioritized lists, so that critical pages can be separated from less important pages and nearly useless pages. This way, when pages are needed for an allocation, the lower priority standby lists are used first (a process called repurposing). By making snapshots of MemInfo’s output over a period of time, you can easily see this behavior.

    Here’s MemInfo output before, during, and after a large allocation of process private memory. Notice how initially, the bulk of my memory was cached on the standby lists. Most of the memory then became Active due to the ongoing large allocation, emptying the standby lists, starting with the lowest priority. Finally, after the memory was freed, most of it went to the zero page list (meaning the system just had to zero 1GB+ of data).

    Seeing what use your pages are being put to by Windows

    Apart from their location on one of the page lists, Windows also tracks the usage of each page on the system. The full list includes about a dozen usages, ranging from non-paged pool to private process allocations to kernel stacks. MemInfo shows you the partitioning of all your pages according to their usage, which can help pinpoint memory leaks. High page counts in the driver locked pages, non-paged pool pages and/or kernel stack pages could be indicative of abnormal system behavior.

    The first two are critical resources on the system (much information is available on the Internet for tracking down pool leaks), while the last is typically tightly maintained for each thread, so a large number may indicate leaked threads. Most other usages should also show far fewer pages than categories like process private pages, which is usually the largest of the group.

    At the time of this writing, here’s how Windows is using my 4GB of memory:

    Looking at per-process memory usage, and detecting hidden processes

    Internally, Windows associates private process pages with the kernel executive object that represents processes as managed by the process manager — the EPROCESS structure. When querying information about pages, the API mentioned earlier returns EPROCESS pointers — not something very usable from user mode! However, another use of this API is to query the internal list of processes that Superfetch’s kernel-mode component manages. This list not only allows you to see exactly how much memory belongs to each process on the system, but also to detect some forms of hidden processes!

    Hidden processes usually have one of two causes. The first is processes which have been terminated, but not yet fully cleaned up by the kernel because handles to them are still open. Task Manager and Process Explorer will not show these processes, but MemInfo is the only tool apart from the kernel debugger which can (so you don’t have to reboot in debugging mode). See below how MemInfo shows a SndVol32.exe process, created by Windows Explorer when clicking on the speaker icon in the system tray — Explorer has a bug which leaks the handles, so the process is never fully deleted until Explorer is closed.

    The second reason a process may be hidden is a rootkit that’s hooking various system calls and modifying the information returned to user mode to hide a certain process name. More advanced rootkits will edit the actual system lists that the process manager maintains (as well as try to intercept any alternate methods that rootkit detection applications may use), but MemInfo adds a new twist, by using a totally new and undocumented Superfetch interface in Windows Vista. It’s likely that no rootkit in the wild currently knows about Superfetch’s own process database, so MemInfo may reveal previously hidden processes on your system. Unfortunately, as with all such information, it’s only a matter of time until new rootkits adapt to this new API, so expect this technique to become obsolete within the next couple of years.

    There are many more uses for MemInfo that I’m sure you can find — including as a much faster replacement for !memusage 8 if you’ve used that command in WinDBG before. MemInfo is fully compatible with both 32-bit and 64-bit versions of Windows Vista (including SP1 and Windows Server 2008 RC1 — even though Server 2008 does not include Superfetch, the API set is identical because both client and server versions of the OS use the same kernel as of Vista SP1), but not with any earlier version of Windows. Apart from these simple summary views, MemInfo is powerful enough to dump the entire database of pages on your system, with detailed information on each — valuable information for anyone who needs to deal with this kind of data. Furthermore, unlike using WinDBG to attach to the local kernel, it doesn’t require booting the system into debug mode.

    You can download a .zip file containing both versions from this link. Make sure to run MemInfo in an elevated command prompt — since it does require administrative privileges. The documentation for MemInfo is located on the following page (this page is part of an upcoming website on which I plan to organize and offer help/links to my tools and articles).

    Behind Windows x64’s 44-bit Virtual Memory Addressing Limit

    The era of 64-bit computing is finally upon the consumer market, and what was once a rare hardware architecture has become the latest commodity in today’s processors. 64-bit processors promise not only more registers and internal optimizations, but, perhaps most importantly, access to a full 64-bit address space, increasing the maximum amount of addressable memory from 4GB to 16EB (exabytes, about 17 billion GBs). Although previous solutions such as PAE enlarged the physically addressable limit to 36 bits, they were architectural “patches” and not real solutions for increasing the memory capabilities of hungry workloads or applications.

    Although 16EB is a copious amount of memory, today’s computers, as well as tomorrow’s foreseeable machines (at least in the consumer market), are not yet close to requiring support for that much memory. For these reasons, as well as to simplify current chip architecture, the AMD64 specification (which Intel used for its own implementation of x64 processors, but not Itanium) currently only implements 48 bits of virtual address space — requiring the remaining 16 bits to be a copy of the highest implemented bit (bit 47), resulting in canonical addresses: the bottom half of the address space starts at 0x0000000000000000 (with only 12 of those hex zeroes being part of the actual address) and ends at 0x00007FFFFFFFFFFF, while the top half of the address space starts at 0xFFFF800000000000 and ends at 0xFFFFFFFFFFFFFFFF.

    As you can see, as newer processors support more of the addressing bits, the lower half of memory will expand upward, toward 0x7FFFFFFFFFFFFFFF, while the upper half will expand downward, toward 0x8000000000000000 (a similar split to today’s memory space, but with 32 more bits). Anyone working with 64-bit code ought to be very familiar with this implementation, since it can have subtle effects on code when the number of implemented bits grows. Even in the 32-bit world, a number of Windows applications (including system code in Windows itself) assumed the most significant bit was zero and used it as a flag — since a set bit would make the address kernel-mode, the application would mask the bit off before using the value as an address. Now developers get a shot at 16 bits to abuse as flags, sequence numbers and other optimizations that the CPU won’t even know about (on current x64 processors), on top of the usual bits that can be assumed due to alignment or user- vs kernel-mode code location. Compiling the 64-bit application for Itanium and testing it would reveal such bugs, but this is beyond the testing capabilities of most developers.

    Examples within Microsoft’s Windows are prevalent — pushlocks, fast references, Patchguard DPC contexts, and singly-linked lists are only some of the common Windows mechanisms which utilize bits within a pointer for non-addressing purposes. It is the last of these which is of interest to us, because of the memory addressing limit it imposed on Windows x64, caused by the lack of a CPU instruction (in the initial x64 processors) that the implementation required. First, let’s have a look at the data structure and functionality on 32 bits. If you’re unsure what exactly a singly-linked list is, I suggest a quick read through an algorithms book or Google.

    Here is the SLIST_HEADER, the data structure Windows uses to represent the head of the list:

    typedef union _SLIST_HEADER {
        ULONGLONG Alignment;
        struct {
            SLIST_ENTRY Next;
            USHORT Depth;
            USHORT Sequence;
        };
    } SLIST_HEADER, *PSLIST_HEADER;
    Here we have an 8-byte structure, guaranteed to be aligned as such, composed of three elements: the pointer to the next entry (32-bits, or 4 bytes), and depth and sequence numbers, each 16-bits (or 2 bytes). Striving to create lock-free push and pop operations, the developers realized that they could make use of an instruction present on Pentium processors or higher — CMPXCHG8B (Compare and Exchange 8 bytes). This instruction allows the atomic modification of 8 bytes of data, which typically, on a 486, would’ve required two operations (and thus subjected these operations to race conditions requiring a spinlock). By using this native CPU instruction, which also supports the LOCK prefix (guaranteeing atomicity on a multi-processor system), the need for a spinlock is eliminated, and all operations on the list become lock-free (increasing speed).

    On 64-bit computers, addresses are 64-bits, so the pointer to the next entry must be 64-bits. If we keep the depth and sequence numbers within the same parameters, we require a way to modify at minimum 64+32 bits of data — or better yet, 128. Unfortunately, the first processors did not implement the essential CMPXCHG16B instruction to allow this. The developers had to find a variety of clever ways to squeeze as much information as possible into only 64-bits, which was the most they could modify atomically at once. The 64-bit SLIST_HEADER was born:

    struct {  // 8-byte header
            ULONGLONG Depth:16;
            ULONGLONG Sequence:9;
            ULONGLONG NextEntry:39;
    } Header8;

    The first sacrifice was to reduce the sequence number to 9 bits instead of 16, reducing the maximum sequence number the list could achieve. This still only left 39 bits for the pointer — a mediocre improvement over 32 bits. By forcing the structure to be 16-byte aligned when allocated, 4 more bits could be won, since the bottom bits could now always be assumed to be 0. This gives us 43 bits for addresses — but we can still do better. Because the implementation of linked lists is used *either* in kernel-mode or user-mode code, but never across address spaces, the top bit can be ignored, just as on 32-bit machines: the code will assume the address is kernel-mode if called in kernel mode, and vice-versa. This allows us to address up to 44 bits of memory in the NextEntry pointer, and is the defining constraint of Windows’ addressing limit.

    44 bits is nothing to laugh at — it allows 16TB of memory to be described, which Windows splits into two even chunks of 8TB for user-mode and kernel-mode memory. Nevertheless, this is still 16 times smaller than the CPU’s own limit (48 bits is 256TB), and even farther from the maximum 64 bits. So, with scalability in mind, there do exist some other bits in the SLIST_HEADER which define the type of header being dealt with — because yes, there is an official 16-byte header, written for the day when x64 CPUs would support 128-bit Compare and Exchange (which they now do). First, a look at the full 8-byte header:

     struct {  // 8-byte header
            ULONGLONG Depth:16;
            ULONGLONG Sequence:9;
            ULONGLONG NextEntry:39;
            ULONGLONG HeaderType:1; // 0: 8-byte; 1: 16-byte
            ULONGLONG Init:1;       // 0: uninitialized; 1: initialized
            ULONGLONG Reserved:59;
            ULONGLONG Region:3;
     } Header8;

    Notice how the “HeaderType” bit is overlaid with the Depth bits, and allows the implementation and developers to deal with 16-byte headers whenever they will be put into use. This is what they look like:

     struct {  // 16-byte header
            ULONGLONG Depth:16;
            ULONGLONG Sequence:48;
            ULONGLONG HeaderType:1; // 0: 8-byte; 1: 16-byte
            ULONGLONG Init:1;       // 0: uninitialized; 1: initialized
            ULONGLONG Reserved:2;
            ULONGLONG NextEntry:60; // last 4 bits are always 0’s
     } Header16;

    Note how the NextEntry pointer has now become 60 bits, and because the structure is still 16-byte aligned, the 4 low bits are implied, making the full 64 bits addressable. As for supporting both these headers and giving the full address space to compatible CPUs, one could probably expect Windows to use a runtime “hack” similar to the “486 Compatibility Spinlock” used in the old days of NT, when CMPXCHG8B couldn’t always be assumed to be present (although Intel implemented it on the 586 Pentium, Cyrix did not). As of now, I’m not aware of this 16-byte header being used.

    So there you have it — an important lesson not only on Windows 64-bit programming, but also on the importance of thinking ahead when making potentially non-scalable design decisions. Windows did a good job of maintaining compatibility while still allowing expansion, but the consequences of using bits that are unimplemented in current CPUs as secondary data may be hard to detect once your software evolves to those platforms — tread carefully.

    Some Vista Tips & Tricks

    Here are a couple of useful tips I’ve discovered (as I’m sure others have) which make my life easier on Vista, and which have saved me a lot of trouble.

    Fix that debugger!

    I had done everything right to get local kernel debugging to work: I added /DEBUG with bcdedit. I used WinDBG in Administrator mode, I even turned off UAC. I made sure that all the debugging permissions were correct for admin accounts (SeDebugPrivilege). I made sure the driver WinDBG uses was extracted. Nothing worked! For some reason, the system was declaring itself as not having a kernel debugger enabled. I searched Google for answers, and found other people were experiencing the same problem, including a certain “Unable to enable kernel debugger, NTSTATUS 0xC0000354 An attempt to do an operation on a debug port failed because the port is in the process of being deleted” error message.

    Here’s what I did to fix it:

    1. I used bcdedit to remove and re-add the /debug command: bcdedit /debug off then bcdedit /debug on
    2. I used bcdedit to force the debugger to be active and to listen on the Firewire port: bcdedit /dbgsettings 1394 /start active
    3. I used bcdedit to enable boot debugging: bcdedit /bootdebug on
    4. I rebooted, and voila! I was now able to use WinDBG in local kernel debugging mode.

    Get the debugger state

    In Vista, one must boot with /DEBUG in order to do any local kernel debugging work, and as we all know, this disables DVD and high-def support to “ensure a reliable and secure playback environment”. After a couple of days of work, I sometimes find myself forgetting whether I booted in debug mode or not, and I don’t want to start WinDBG every time to check — or maybe you’re in a non-administrative environment and don’t want to use WinDBG. It’s even possible a new class of malware may soon start adding /DEBUG to systems in order to better infiltrate kernel mode. Here are two ways to check the state:

    1. Navigate to HKLM\System\CurrentControlSet\Control and look at the SystemStartOptions value. If you see /DEBUG, the system was booted in debugging mode. However, because it may have been booted in “auto-enable” mode, or in “block enable” mode (in which the user can then enable/disable the debugger at will), this may not tell you the whole story.
    2. Use kdbgctrl, which ships with the Debugging Tools for Windows. kdbgctrl -c will tell you whether or not the kernel debugger is active right now.

    What would be really worthwhile is to see whether DVD playback works with /DEBUG on but kernel debugging disabled with kdbgctrl, and whether it stops playing (while debugging starts working) once kdbgctrl is used to enable it.

    Access those 64-bit system binaries from your 32-bit app

    I regularly have to use IDA 64 to look at binaries on my Vista system, since I’m using the 64-bit edition. Unfortunately, IDA 64 itself is still a 32-bit binary, which was great when I had a 32-bit system, but not so great anymore. The reason is WOW64’s file redirection, which changes every access to “System32” into “SysWOW64”, the directory containing 32-bit DLLs. But IDA 64 is perfectly able to handle 64-bit binaries (and has to!), so there wasn’t an easy way to access those DLLs. Unfortunately, it looks like the IDA developers haven’t heard of Wow64DisableFsRedirection, so I resorted to copying the binaries I needed out of the directory manually.

    However, after reverse engineering some parts of WOW64 to figure out where the WOW64 context is saved, I came across a strange string — “sysnative”. It turns out that after creating (with a 64-bit app, such as Explorer) a “sysnative” directory in my Windows path, any 32-bit application navigating to it will now get the “real” (native) contents of the System32 directory! Problem solved!

    So there you have it! I hope at least one of these was useful (or will be), and feel free to share your own!

    My Summer At Apple

    Three months ago, I posted about my experience with some of the most exciting tech companies that I had a chance to interview with and explained my decision behind joining Apple. This week, my internship comes to a close, and it’s time to review that decision and share with you my intern experience at Apple.

    No amount of past experience and stories could’ve prepared me for the amazing time I had at Apple and I think I made one of the wisest decisions in my career this summer. My internship position at Apple was part of the Core OS group, which is mainly responsible for Mac OS X. However, this wasn’t just any Core OS internship — my job was at the forefront of Apple’s most anticipated product in decades — the iPhone.

    When Steve Jobs announced that the iPhone would be running OS X, many people had doubts at first, since OS X had never been talked about in the embedded market. But clearly, the phone is indeed running the OS, and most of the Core frameworks and foundations that developers were used to seeing on the desktop are on the phone. Several teams at Apple were responsible for this, and none more critical than the Core OS Embedded Team, which I worked on. Although the specifics of my job and of those around me are not something I can disclose, needless to say, I worked with some of the brightest engineers at Apple, who took an x86 desktop OS and turned it into a powerful embedded OS (everyone now knows that the iPhone runs on ARM) that still supported the same applications and frameworks as its counterpart.

    As a Windows NT kernel expert, Intel x86 assembly guru and reverse engineer on a Dell laptop, the task of working with Darwin for ARM on a Mac Pro was new to me on all possible levels. And yet, I was able to successfully leverage my knowledge of OS and kernel design, my attention to small details and my interest in hardware-to-software relationships to surpass the goals of my internship, and to pleasantly surprise all levels of management as well as my co-workers. But how about the Apple internship experience?

    Interns at Apple get some great benefits that other companies don’t always bother with, including a health care plan, relocation assistance (paid round-trip airfare) as well as a monthly housing stipend. My expenses this summer were minimal thanks to the care Apple takes in relieving interns of some typical moving-related stress. One of the coolest things about being an intern, however, is the chance to attend the Executive Speaker Series, weekly lunchtime events in which one of Apple’s senior executive staff comes to talk and interact with interns. The first one was, of course, Steve Jobs himself. It’s not just for show either — one of the interns got up and asked Steve how he could get his idea across to an executive. Steve smiled and replied: “Go ahead.” In total, I think there must’ve been about 12 different speakers this year.

    On the topic of selling ideas to executives, nothing works better than the iContest, in which teams of interns submit a feature or product design idea, improvement, or business plan for a panel of judges (made up of senior staff at Apple) to evaluate. The top 10 teams get to compete face-to-face, with a presentation to the judges (and other interns) followed by a harsh question and answer period in which the judges grill and roast the interns. What may make business sense to a 21-year-old may sound like a bad idea to someone with 40 years of market experience, and they won’t be shy to let that be known. In the end, the comments are constructive, and as in any competition, the best ideas are usually scrutinized the most. The top three teams win a variety of prizes/awards, and most get their feature or idea implemented or marketed, so the contest results and presentations are some of the most confidential pieces of information an intern has to live with. My iContest idea was among the top ten chosen for the finale, and while we unfortunately did not win any of the top three prizes, I’m confident we came very close.

    Talking and interacting with the senior staff doesn’t end there for interns. After about 8-10 weeks at Apple, interns in each group give a presentation to the VP of their division, as well as to the managers of the other interns in that group. These presentations are very important to a successful internship program, since the quality of the intern usually reflects on their manager. This is another competitive event, as the managers get to vote on each presentation, and the top intern of each group gets to present to the Senior VP or President of the division, usually a member of the executive staff. My presentation to the VP and managers was chosen as the best and most interesting one, so I was invited to present to Bertrand Serlet, the Senior Vice President of Software Engineering at Apple. Unfortunately, I had to attend Black Hat that week, so I gave my place to the second-best intern. It’s worth mentioning that all the interns had amazing presentations, and they all put hard work into their summer.

    But Apple isn’t all about work and competing with other interns. Field trips and excursions, the food harvest, parties and welcome/goodbye dinners were some of the other fun activities offered to interns, who also got to attend WWDC 2007’s party for free. And of course, one can’t forget the corporate games, in which teams compete in a variety of games, from serious sports to water games and tug of war. Every day of the week, interns on the mailing list arranged various activities, from volleyball to movie and poker nights. Finally, interns had access to all the perks regular Apple employees have, including foosball and other game rooms, a sports court, the health and fitness center, as well as campus shuttles to the Caltrain station (with Wifi, of course!).

    Even though the work environment at Apple is relaxed and flexible, I had to work hard, but the rewards and results were worth all the effort. The team I worked on (and the entire company) did a great job on the iPhone, and the future for Apple continues to be exciting, with new innovations always on the way. There are few more rewarding experiences than walking into an Apple Store and seeing people of all ages gaze in awe at our products. One day, those same people will be looking at a product, feature or device that I worked on, and I’ll smile back at them, knowing I had a hand in getting it onto that shelf.

    All in all, I’m definitely looking forward to returning to Cupertino once more.

    To find out more about the internship program at Apple, visit the official site here. Please don’t ask any questions outside the scope of what is mentioned on that page; there are even more exciting things going on at Apple, but you’ll have to come work here to find out!

    Purple Pill: What Happened

    Two weeks ago, I posted about and published a tool I wrote called Purple Pill, which used a bug in the ATI Vista x64 video driver to allow unsigned drivers to load. Within an hour, I pulled the tool and the post of my own accord, because I discovered ATI had not been made aware of this particular flaw, and I wanted to follow responsible disclosure and allow ATI to fix their driver.

    This flaw was especially severe, as it could be used for other purposes — including giving a guest user access to a SYSTEM token, if the user was on a computer with an ATI video card. Therefore, it could have significant security implications beyond my original goal of bypassing driver signing on 64-bit Vista. On systems without the driver, administrative privileges would be required for any kind of attack. I originally thought this flaw had been reported to ATI since it was disclosed publicly at Black Hat — something that’s usually only done once the presenter has notified the company. In this case, it seems this was not done, and I made an incorrect assumption. I should’ve checked the status of the flaw on my own before posting the tool, and I apologize for not having done so.

    As for the act of bypassing driver signing, while I still disagree with Microsoft’s policy of not allowing users to set a permanent policy to explicitly allow this (at their own risk, perhaps even voiding support/warranty options), I have come to realize that attacking this policy isn’t the way to go. Microsoft has decided that Code Integrity is a new part of their security model and it’s not about to go away. Using kernel bugs to subvert it means that these measures would eventually be fixed, while exploiting 3rd party drivers potentially allows malware to figure out how to do this as well, and use the 3rd party driver maliciously. It is also a method that can be protected against, since Vista does have the ability to do per-driver blocking, and once the 3rd party vendor has upgraded all the customers, the older driver can be killed (or even shimmed against, since various kernel infrastructure in Vista allows for this kind of real-time patching).

    I am currently exploring other avenues for allowing open source drivers to function on 64-bit Vista without requiring developers to pay for a certificate and deal with the code signing authorities, while still respecting Vista’s KMCS policy, and continuing to protect against malicious drivers using such a method for their own gain. It is my hope to find a solution which will both please Microsoft and respect the KMCS policy, as well as make life easy for open source developers (and other non-commercial hobbyists) who for whatever reason don’t want to, or cannot, pay for a certificate.