What are Little PatchGuards Made Of?

A number of excellent PatchGuard articles have been written around what PatchGuard is, how to bypass it, what triggers it uses, its obfuscation techniques, and more.

But for some reason, nobody has published a full list of everything that PatchGuard actually verifies. Microsoft used to have a website that listed the first 7 checks, but nothing beyond that.

I asked around at conferences, and the answer I got was that the code was too complex to analyze, and nobody really wanted to take the time to figure out every single check. I had my own private list of checks I knew PatchGuard does (through runtime analysis), but I was surprised to see the real reason nobody’s bothered to analyze this…

… Microsoft’s own public debugger (known as WinDBG) tells you — why bother reversing? 🙂

Lo and behold: the 39 different checks in PatchGuard on Windows 8.1 Update. There are a few more in Windows 10; I guess they’re not yet documented.

Arg1: 00000000, Reserved
Arg2: 00000000, Reserved
Arg3: 00000000, Failure type dependent information
Arg4: 00000000, Type of corrupted region, can be
0 : A generic data region
1 : Modification of a function or .pdata
2 : A processor IDT
3 : A processor GDT
4 : Type 1 process list corruption
5 : Type 2 process list corruption
6 : Debug routine modification
7 : Critical MSR modification
8 : Object type
9 : A processor IVT
a : Modification of a system service function
b : A generic session data region
c : Modification of a session function or .pdata
d : Modification of an import table
e : Modification of a session import table
f : Ps Win32 callout modification
10 : Debug switch routine modification
11 : IRP allocator modification
12 : Driver call dispatcher modification
13 : IRP completion dispatcher modification
14 : IRP deallocator modification
15 : A processor control register
16 : Critical floating point control register modification
17 : Local APIC modification
18 : Kernel notification callout modification
19 : Loaded module list modification
1a : Type 3 process list corruption
1b : Type 4 process list corruption
1c : Driver object corruption
1d : Executive callback object modification
1e : Modification of module padding
1f : Modification of a protected process
20 : A generic data region
21 : A page hash mismatch
22 : A session page hash mismatch
23 : Load config directory modification
24 : Inverted function table modification
25 : Session configuration modification
102 : Modification of win32k.sys

I have to admit, there are some things I didn’t realize PatchGuard would actually think about protecting, such as the Local APIC. It’s also interesting to see some more esoteric hooks in the list as well, such as PsEstablishWin32Callout protection. I also did not realize PatchGuard now protects the DRIVER_OBJECT structure — indeed, hooking a major function will now give you code 0x1C. And finally, the protection of protected processes means that technically something such as Mimikatz’s “MimiDrv” may crash some machines in the wild.

I usually try to avoid talking about PatchGuard, since I’m glad it’s giving AV companies hell, but I can’t have been the only person who never noticed that the checks were documented in the debugger all along, hidden behind a simple command (it makes sense that Microsoft wouldn’t want their own support engineers to be wondering what on Earth they’re looking at):

!analyze -show 109

I can’t even take credit for discovering this on my own. Reading Microsoft’s famous “NT Debugging” blog made me realize that this had been there all along.


Analyzing MS15-050 With Diaphora

One of the most common ways that I glean information on new and upcoming features in releases of Windows is, obviously, to use a reverse-engineering tool such as IDA Pro and look at changed functions and variables, which usually imply a change in functionality.

Of course, such changes can also reveal security fixes, but those are a lot harder to notice at the granular level of diff-analysis that I perform as part of understanding feature changes.

For those types of fixes, a specialized diffing tool such as BinDiff is often used by reverse engineers and security experts. Recently, such tools have become obsolete, been abandoned, or turned cost-prohibitive. A good friend of mine, Joxean Koret (previously of Hex-Rays fame, un-coincidentally), has recently developed a Python plugin for IDA Pro called “Diaphora” (diaforá, the Greek word for “difference”).

In this blog post, we’ll analyze the recent MS15-050 patch and do a very quick walk-through of how to use Diaphora.


Installing the plugin is as easy as going over to the GitHub page, downloading the repository as a .zip file, and extracting the contents into the appropriate directory (I chose IDA’s plugin folder, but this can be anything you wish).

As long as your IDA Python is configured correctly (which has been the default in IDA for many releases), clicking on File, Script file… should let you select a .py file.


Generating the initial baseline

The first time you run Diaphora, you’ll be generating the initial SQLite database. If you don’t have Hex-Rays, or you disable the “Use the decompiler if available” flag, this process only takes a few seconds. Otherwise, with Hex-Rays enabled, you’ll spend most of the time waiting for the decompiler to run on the entire project. Depending on code complexity, this could take a while.

This SQLite database will essentially contain the assembly and pseudo-code in a format easily parsable by the plugin, as well as all your types, enumerations, and decompiler data (such as your annotations and renamed variables). In this case, I had an existing, fairly well-maintained IDB for the latest version of the Service Control Manager for Windows 7 SP1, which had actually not changed since 2012. My pseudo-code had over 3 years to grow into a well-documented, thoroughly structured IDA database.

Diff me once, importing your metadata

On the second run of Diaphora (which, at this point, should be on your new, fresh binary), you will direct it to the initial SQLite database from the step above, plus select your diffing options. The default set I use is shown in the screenshot below.


This second run can take much longer than the first, because not only are you taking the time to generate a second database, but you are then running all of the diffing algorithms that Diaphora implements (which you can customize), which can take significantly longer. Once the run is complete, Diaphora will show you identical code (“Best Matches”), close matches (“Partial Matches”), and unidentifiable matches. This is where comparing a heavily annotated IDB with a fresh IDB for purposes of security research can have problems.

Since I had renamed many of the static global variables, any code using them in their renamed form would appear different from the original “loc_325345” format that IDA uses by default. Any function prototypes which I had manually fixed up would also appear different (Hex-Rays is especially bad with variable-argument __stdcall on x86), as would any callers of those functions.

So in the initial analysis, I got tons of “Partial Matches” and very few “Best Matches”. Nothing was unmatched, however.

One of the great parts of Diaphora, however, is that you can then confirm that the functions are truly identical. Since we’re talking about files which have symbols, it makes sense to claim that ScmFooBar is identical to ScmFooBar. This will then import all the metadata from your first IDB into the other, and give you the option of re-running the analysis stage.

At this point, I have taken all of the 3 years of research I had in one IDB, and instantly (well, almost) merged it into a brand new IDB that covers a more recent version of the binary.

Diff me twice, locating truly changed code

Now that the IDBs have been “synced up”, the second run should identify true code changes — new variables that have been added, structures that changed, and new code paths. In truth, those were identified the first time around, but hidden in the noise of all the IDB annotation changes. Here’s an incredible screenshot of what happened the second time I ran Diaphora.

First, note how almost all the functions are now seen as identical:

And then, on the Partial Matches tab… we see one, and only one function. This is likely what MS15-050 targeted (the description in the Security Bulletin is that this fixed an “Impersonation Level Check” — the function name sounds like it could be related to an access check!).

Now that we have our only candidate for the fix delivered in this update, we can investigate what the change actually was. We do this by right-clicking on the function and selecting “Diff pseudo-code”. The screenshot below is Diaphora’s output:


At this point, the vulnerability is pretty clear. In at least some cases where an access check is made due to someone calling the Service Control Manager, the impersonation level isn’t verified — meaning that someone with an Anonymous SYSTEM token (for example) could pass itself off as actually being a SYSTEM caller, and therefore be able to perform actions that only SYSTEM could do. In fact, in this case, we see that the Authentication ID (LUID) of 0x3E7 is checked, which is actually SYSTEM_LUID, making our example a reality.
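To make the class of bug concrete, here is a toy Python model of the logic; the token fields and helper names are illustrative, not the actual Service Control Manager code:

```python
# Toy model of the MS15-050 class of bug (NOT the actual SCM source).
# Levels follow the Windows SECURITY_IMPERSONATION_LEVEL ordering.
SecurityAnonymous, SecurityIdentification, \
    SecurityImpersonation, SecurityDelegation = range(4)
SYSTEM_LUID = 0x3E7

def is_system_before_fix(token):
    # Pre-patch logic: only the Authentication ID is inspected.
    return token["auth_id"] == SYSTEM_LUID

def is_system_after_fix(token):
    # Post-patch logic: an impersonation token below
    # SecurityImpersonation cannot be used to prove identity.
    if token["impersonation"] < SecurityImpersonation:
        return False
    return token["auth_id"] == SYSTEM_LUID

# An anonymous-level token claiming to be SYSTEM:
forged = {"auth_id": SYSTEM_LUID, "impersonation": SecurityAnonymous}
print(is_system_before_fix(forged))  # True  -> treated as SYSTEM
print(is_system_after_fix(forged))   # False -> claim rejected
```

The real check operates on kernel token structures rather than a dictionary, of course; the point is only that the patched path refuses to honor an identity claim from a token below SecurityImpersonation.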

At this point, I won’t yet go into the details on which Service Control Manager calls exactly are vulnerable to this incorrect access check (ScAccessCheck, which is normally used, actually isn’t vulnerable, as it calls NtAccessCheck), or how this vulnerability could be used for local privilege escalation, because I wanted to give kudos to Joxean for this amazing plugin and get more people aware of its existence.

Perhaps we’ll keep the exploitation for a later post? For some ideas, read up on James Forshaw’s excellent Project Zero blog post, in which he details another case of poor impersonation checks in the operating system.


Secrets of the Application Compatibility Database (SDB) – Part 2

As noted in the introductory article, Windows Vista (and XP) ships with a number of default shims which are not exposed through any control panel or dialog available to end-users. Running the CDD utility, however, one can see all the shims installed in the default system database (sysmain.sdb):

Compatibility Database Dumper (CDD) v1.0
Copyright (C) 2007 Alex Ionescu

usage: cdd.exe [-s][-e][-l][-f][-p][-d kernel-mode database file][-a user-mode database file]
  -s Show shims
  -e Show executables
  -l Show layers
  -f Show flags
  -p Show patches
  -d Use Blocked Driver Database from this path
  -a Use Application Compatibility Database from this path

NOTE: If no paths are given, the default system database is used.

Dumping Entry: SHIM

DESCRIPTION=”Our internal hook for GetProcAddress used to not check include/exclude list at all which means the GetProcAddress calls from all modules are shimmed. Then we added code to take include/exclude list into consideration and that “broke” apps that used to rely on the previous behavior. To compensate for this, you can specify this shim to get back the old behavior.”
Dumping Entry: SHIM


The utility continues to dump several dozen other shims. It’s still in beta for now, so the final output might not match, but it allows us to build a list of several interesting system shims, which I’ll enumerate below. Caveat: my criteria were a mix of usefulness, interesting security implications, and completely out-of-this-world, bizarre, or uber-hack shims. The ones in bold are some of my favorites, but you should definitely read through them all. Once the tool is completed, you’ll be able to dump your own.

DESCRIPTION=”Add flags to Peb->ProcessParameters->Flags. The flags are a ULONG. Specify it as a hex number (so at most 8 digits).”
DESCRIPTION=”Logs API calls made by the application to an .LGV file in %windir%\AppPatch. You must copy LogExts.dll, LogViewer.exe and the Manifest directory to %windir%\AppPatch in order for this shim to function properly.”

DESCRIPTION=”Changes COM Security Level from RPC_C_AUTHN_LEVEL_NONE to RPC_C_AUTHN_LEVEL_COMMON. This enables temporary elevation of the security context for an application.”

DESCRIPTION=”Some applications may use static DLLs, which could potentially issue calls to APIs before the application is ready. This compatibility fix provides a workaround for this behavior by causing a delay in the application’s static DLLs. This compatibility fix takes a command line containing a list of the DLLs affected. They will be loaded in the reverse order of the command line listing. Note that this compatibility fix is similar to InjectDll, which works with dynamically loaded DLLs.”

DESCRIPTION=”Some installation programs will create a randomly named executable when they are launched that is responsible for performing the actual setup. This compatibility fix takes a command line that specifies what random executable name is created, and upon creation, renames it to the new name specified on the command line. The command line is given as the source name followed by the desired name. For example: *.EXE;RANDOMSETUP.EXE.”

DESCRIPTION=”This compatibility fix disables execution protection (NX) for a process. This is useful for applications that decide to execute from memory region marked with NX attribute (like stack, heap etc).”

DESCRIPTION=”Disable safe exception handling.”

DESCRIPTION=”This compatibility fix causes Windows XP to return a significantly reduced environment block from the environment APIs. This reduces the chance of a buffer overrun causing corruption.”

DESCRIPTION=”This compatibility fix emulates the functionality of the Windows 9x heap manager. It is a full implementation of the Windows 9x heap manager ported to Windows XP.”

DESCRIPTION=”Fixes for known API differences between Win9x and NT: SetWindowsHookEx, SetWindowLong, RegisterClass, ChangeDisplaySettings/ChangeDisplaySettingsEx, ToAscii/ToAsciiEx, GetMessage/PeekMessage, ShowWindow. Also persists palette state through mode changes.”

DESCRIPTION=”In Windows 9x applications could restart the computer by calling the ExitWindowsEx API. Windows XP requires the application to run with adequate security privileges to successfully call the ExitWindowsEx API. This compatibility fix enables an application to call the ExitWindowsEx API with correct security privileges. Applies to: Windows 95, Windows 98″

DESCRIPTION=”A service startup circular dependency occurs when two or more installed services depend upon each other to start. That is, service ‘A’ cannot start until service ‘B’ starts, but service ‘B’ cannot start without service ‘A’ running. This compatibility fix attempts to counter this behavior.”

DESCRIPTION=”This compatibility fix addresses issues that may be encountered when an application uses the CheckTokenInformation API call to verify if the current user is part of the Administrators group. The fix intercepts calls to CheckTokenInformation and returns a value of true.”

DESCRIPTION=”This compatibility fix addresses issues with APIs that may not gracefully handle receiving bad parameters. Currently, this works with the BackupSeek, CreateEvent, and GetFileAttributes APIs.”

DESCRIPTION=”This compatibility fix provides a facility to convert the argument list from LPSTR into VA_LIST. Some native Windows 9x applications use LPSTR instead of VA_LIST. Without properly checking the return value, these applications may assume that it is safe to use Wvsprintf, but in Windows XP, this may cause an access violation. This compatibility fix takes one command line: “arglistfix” (case insensitive).”

DESCRIPTION=”This compatibility fix will clear out every heap allocation for the application with zeroes, or with a DWORD value that has been supplied in the command line.”

DESCRIPTION=”This compatibility fix will delay calls to LocalFree. This may help applications that are trying to free heap memory using LocalFree before all activities have been concluded.”

DESCRIPTION=”Prevent CRT shutdown routines from running.”

DESCRIPTION=”This compatibility fix will prevent specified DLLs from being loaded by the LoadLibrary API, specified on the command line. If specifying multiple DLLs on the command line, they should be seperated by spaces. This may be useful for applications that have fallback mechanisms for features that are not supported. In addition, it reduces the error mode so library problems won’t cause the system to generate an error dialog. Applies to: Windows 95, Windows 98″

DESCRIPTION=”This compatibility fix intercepts calls to the MessageBox API and, based on the supplied command line, prevents the message box from being displayed. Many applications display a message box with debugging or other extraneous content that can be confusing to users. These are normally the result of differences between Windows 9x and Windows XP.”

DESCRIPTION=”Some VB apps try to store win32 handles in WORD size variables. On Win9x this works because most handles are 16-bit. However, on NT, the VB type checking code throws a “Runtime Error 6″. The shim intercepts the type checking code and ignores the check.”

DESCRIPTION=”This compatibility fix calls WinExec on the passed command line, and then terminates the caller process. The command line can contain any environment variables that need to be passed to the executable.”

DESCRIPTION=”This compatibility fix fixes problems with any application that uses the Shrinker resource compression library. This library hacks resource functions in ntdll and kernel32 and redirect calls into their own function routines. But Ntdll code has different opcodes in Windows XP. The program failed to find the opcode signature and decided to cancel WriteProcessMemory call to write their redirection. Because of this, the necessary decompression of program code and resources were not executed and caused access violation. Shrinker compatibility fix resolves this by providing necessary opcode signature so the app could write those redirection into ntdll.”

DESCRIPTION=”Many APIs use much more stack space on NT than Win9x. This compatibility fix is command line driven and takes a list of APIs that will be hooked, making them use no stack. The format the command line is “MODULENAME!APINAME[:X]; MODULENAME!APINAME[:X] …” where X is 0 : fill old stack with zeroes 1 : fill old stack with pointers 2 : fill old stack with pointers to pointers by default, no stack filling occurs.”

DESCRIPTION=”This compatibility fix terminates an executable (.EXE) immediately upon launch.”

DESCRIPTION=”Hooks all the registry functions to allow virtual keys, redirection and expansion values.”
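One of the entries above (the stack-usage shim) quotes a small command-line grammar, “MODULENAME!APINAME[:X]; …”, which is simple enough to parse mechanically. A hedged Python sketch follows; the parser and its name are mine, not part of the shim engine:

```python
# Parse "MODULE!API[:X]; MODULE!API[:X] ..." as quoted in the shim
# description above. X (0/1/2) selects the stack-fill mode; when it
# is absent, no stack filling occurs. Illustrative code only.
def parse_stack_shim_cmdline(cmdline):
    hooks = []
    for entry in cmdline.split(";"):
        entry = entry.strip()
        if not entry:
            continue
        module, _, api = entry.partition("!")
        api, _, mode = api.partition(":")
        hooks.append((module, api, int(mode) if mode else None))
    return hooks

print(parse_stack_shim_cmdline("KERNEL32!CreateFileA:1; USER32!GetMessageA"))
# [('KERNEL32', 'CreateFileA', 1), ('USER32', 'GetMessageA', None)]
```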

As you can see, the Shim Engine allows for everything from the simplest of hacks (such as adding PEB flags) to complete ports of 9x APIs (such as the Heap Manager). Many other shims are simply extremely useful features that should be more easily accessible. The ability to deal with randomly named setup applications is something I’ve had to code on my own in the past, and the VirtualRegistry shim in XP seems to be almost as powerful as the built-in Vista feature. Yet others, dealing with delay-loading DLLs, instant termination, and redirection, can be lifesavers during certain debugging scenarios.

For now, these shims have only been presented. Later entries in the series will deal with actually using these shims, but for now, we’ll continue exploring the system in the next article.

Secrets of the Application Compatibility Database (SDB) – Part 1

For the last few days, I’ve been becoming intimately acquainted with a piece of technology in Windows XP and Vista that rarely gets the attention it deserves. It has raised my esteem and admiration towards Microsoft tenfold, and I feel it would be wise to share it, publicize it, and then, of course, find (positive) ways to exploit it and turn it into a powerful backend for various purposes.

The Shim Engine, as I’ll call it (and which is one of the official names), is a technology implemented in various DLLs (mostly shimeng.dll and apphelp.dll — the latter being the Application Compatibility Interface) as well as through some callbacks and hacks in the PE Loader present in ntdll.dll. It also relies on various registry entries for its configuration, as well as on system database files.

What does this technology do? You’ve probably seen it in action when using Windows XP/Vista’s “Compatibility Wizard”, or the dialog which gives you options such as “Disable visual themes”, “Run application in Windows 2000 mode”, or “Run at 640×480”. The checkboxes are called “shims”, while the actual Windows 2000 or Windows 98 combo-box selections are called “layers”. Although this is hidden from you, layers are usually just large combinations of other shims, each of which somehow modifies the system to behave in a different way. Unfortunately, this dialog exposes only 3 shims, while over 100 are present by default on a Windows installation.

However, it is enough to illustrate how the technology works. Once an application has been “shimmed” manually, registry entries are created to notify the loader. As it loads the application, the loader will run the Shim Engine, which will perform lookups in the system compatibility database and recover various information. This database is called sysmain.sdb, and it is located in your AppPatch directory. On top of the default database, individual, custom databases can be created, which are registered and installed through the registry. These specify settings for programs that you’ve manually chosen to be shimmed.

The way that shims are implemented is usually through a helper DLL, which the Shim Engine will load during PE loading to intercept the APIs being used, much like Detours. These DLLs are prefixed with “Ac” and are also in the AppPatch directory. They contain the redirected code, which behaves differently than the normal system API.
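As a toy illustration of that redirection (this is a Python stand-in, not how the engine is actually implemented; a real shim rewrites the process’s import address table in memory):

```python
# Toy model of shim-style API interception. The "import table" maps
# an API name to the function a program will actually call; the shim
# engine swaps the real entry for a wrapper living in an Ac* DLL.
def real_get_version():
    return 6  # pretend we're on Windows Vista

def shimmed_get_version():
    # The compatibility wrapper lies to the application, which only
    # runs correctly when it believes it is on Windows 2000.
    return 5

import_table = {"GetVersion": real_get_version}

def apply_shim(table, api_name, wrapper):
    table[api_name] = wrapper  # the interception step

print(import_table["GetVersion"]())  # 6 before shimming
apply_shim(import_table, "GetVersion", shimmed_get_version)
print(import_table["GetVersion"]())  # 5 after shimming
```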

The most interesting part, however, is not the ability to select these options, but how much of this is being done behind your back every single time you run an application. Upon analysis, the system database contains over 5000 applications (in Windows Vista), from small Chinese publishers to the largest application vendors, including Microsoft itself.

One of the core “objects” that the database supports is the Matching File construct, which does file pattern matching to identify whether or not an entry actually applies to the program being run. These pattern matches can go from the very simple “starcraft.exe” with a timestamp and checksum entry, to more complex entries which try to match various .bmp, .wav, and data files present in a game’s engine. Wildcards and simple boolean logic are also supported, making for powerful pattern-matching abilities.
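A rough sketch of what that matching amounts to, in Python; the attribute names and helpers are mine, and `fnmatch` merely stands in for the database’s wildcard matcher:

```python
# Toy matcher in the spirit of the SDB "Matching File" construct:
# an executable entry applies only if every matching-file pattern
# (name plus optional attributes) is satisfied on disk.
from fnmatch import fnmatch

def file_matches(entry, file_info):
    if not fnmatch(file_info["name"].lower(), entry["pattern"].lower()):
        return False
    # Optional attribute checks, e.g. timestamp or checksum.
    for key in ("timestamp", "checksum"):
        if key in entry and entry[key] != file_info.get(key):
            return False
    return True

def executable_matches(matching_files, files_on_disk):
    return all(
        any(file_matches(entry, f) for f in files_on_disk)
        for entry in matching_files
    )

entries = [{"pattern": "starcraft.exe", "checksum": 0x1234},
           {"pattern": "*.wav"}]
disk = [{"name": "StarCraft.exe", "checksum": 0x1234},
        {"name": "intro.wav"}]
print(executable_matches(entries, disk))  # True
```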

Once a matching Executable construct has been found through its child Matching Files, 4 different types of modifications can be made. The first are system shims, which are typically implemented in the acgenral.dll or aclayers.dll library, and from which many products might benefit, such as emulating an older version of an API. The second are specific shims, which are tailored to an application and located in acspecfic.dll. The third kind is also a shim, but a Flag shim, which specifies undocumented flags to be sent to LUA or the Installer about this application. Finally, the fourth type of change is a binary patch, which performs actual in-memory patching of the executable, instead of a system API hook.

Sound interesting and powerful? It is. I’ll spend the next few blog entries talking more about the various parts of the system, as well as offering two applications that I’ve been writing. The first is a complete dumper of any .SDB database, and the second will be announced at the end. Here’s an overview of the different posts that I’m expecting to make:

1 ) Introduction (You are here).
2 ) System Shims – The Most Interesting Ones.
3 ) The Private Shim Engine Interface With The PE Loader.
4 ) Built-in Shimmed Applications and Specific Shims – A Sample.
5 ) Tool 1 – CDD – Compatibility Database Dumper
6 ) Flag Shims – LUA and Installer Flags.
7 ) The Run-Time In-Memory Patching Behaviour and Analysis
8 ) The System Blocked Driver Database – The Kernel Side of SDB.
9 ) Conclusion and Tool 2.

Finally Legal!

I’m not one to boast, but today marks my entry into the US “Adult” world. While I’ve been happily enjoying fine beers in Montreal for over 3 years, I can now finally do the same in the US!

I apologize as well for the lack of recent posts, I am currently finishing my semester and in exam session. I will have more exciting things to share upon my return.

Good Discussion on Protected Processes

Down at the Disparity Bit, Dan Armak has a very good discussion on why exactly he thinks protected processes are bad, a sort of addendum to my post on the subject. Check out “Making it Clear Just Why Protected Processes are a Bad Idea” for a more detailed explanation of the problem.

A few people have started to reverse engineer the binary I posted, and some have come up with partial explanations and analyses. I just wanted to clear up a few things: yes, the method uses a driver. It’s based on the Microsoft documentation which says “Please don’t use a driver to bypass this”, which led me to believe that it would be possible to do this (and which wouldn’t work on 64-bit Vista, of course).

Secondly, almost everything inside the binary I provided is low-level obfuscation to confuse any kiddies that might try to grab hold of the expanded driver and use it for their own purposes. It was not meant as, nor is it an example of, proper techniques to obfuscate/protect a program against advanced reverse engineers.

Introducing D-Pin Purr v1.0 – 32-bit Edition

As promised in my earlier blog post, I’ve finalized the utility and made it available for download here. I won’t be releasing the source code for the moment, because I don’t want to encourage people to start adding this kind of code into their own malware, nor to encourage the Symantec folks to start unprotecting every process on the system.

So until then, have fun with the tool, whether it is to explore previously protected processes, or to try out various system and application behaviour when certain processes are made protected. Here’s a screenshot of audiodg.exe after being unprotected. Try it on your own system to see the before/after difference.

The Interview Experience at Top IT Companies.

As promised, here follows what I hope will be an interesting overview of my interview experience as an intern candidate at Google, Microsoft and Apple.


My interview at Google was probably the most unusual of the bunch. A long time ago (almost a year), one of my friends in the reverse-engineering community contacted me about a job opportunity at a Google office in Montreal, working on a top-secret project which was related to my area of knowledge. I got to the phone screens, and had a great first interview. My second interview, however, didn’t go so well. It was the first time my interviewer had ever screened a candidate, and he kept me stuck on a single question. The question was related to a low-level structure change in a private datatype used in Vista’s kernel; this change was documented in a patent, which I always found fishy for an interview question. Nevertheless, I believe I answered some of the more generic implementation details correctly, but the interviewer kept coming back to the same question and seemed to want to hear a precise answer. Additionally, it didn’t seem like the project was fully related to my field of expertise; unsurprisingly, I got a refusal letter two weeks later.

Fast forward eight months, and the DRM hacking news appears on the Internet. I get a call from Google the day after about setting up some interviews. My interviews get cancelled a couple of days later, then rescheduled for the Monday after my return from the SCALE 5X talk. I have a short (and very interesting) conversation with someone at Google who would probably end up being my boss/mentor, and I get news a couple of days later that I got the job. And that’s about it.

Job Description: Security/Software Engineer/Developer. Code Auditing plus Windows Internals consulting/special projects.
Phone Screens: 1
Campus Visit: No


My path to Apple was a long and ultimately rewarding one. I attended CUTC last January, already knowing that I would be interviewing with Microsoft later. Therefore, I avoided most of the smaller booths, avoided Microsoft since I already had an interview, as well as Google since, at the time, I had not yet received the phone call about a new opportunity. The only company that I still had some interest in during the job fair was Apple. This is mostly because, during the day, I attended two sessions on Apple development tools. The first one was on Shark, which completely amazed me. There were lots of technical questions during the presentation, and I was always the only one answering them correctly, so the Apple people noticed me and asked me to come for a chat. I went to see them and handed in my CV. The Apple recruiter was mostly looking for people to work on iPod or Mac stuff, so my Windows kernel experience didn’t seem relevant at first.

My friends got calls from Apple in the days after; I didn’t. I gave up on the opportunity, since I thought they wouldn’t be interested. Two weeks later, I got a call from the recruiter saying she had passed my information on to the OS X Kernel Team. After the DRM news, the Security Team got interested as well. What followed was the most exhausting interviewing process I’ve been through. Because Apple couldn’t fly me in (I don’t think they do that for non-local candidates), I had to go through the equivalent of the Microsoft interview process, but over the phone. Since I was actually interviewing with two teams, double the amount of time and people for an accurate depiction. In total, I believe I spoke with 9 or 10 Apple developers, managers, and testers on both teams.

The questions were very technical, but not in the “optimize this algorithm” way. The engineers there seemed to be genuinely interested in my ideas, thought process, and the solutions/problems I could find in various designs. One question I was asked, which I think I can share, is how a hypervisor rootkit would be more dangerous than a normal kernel rootkit, how to protect against it, as well as how to create a workable hypervisor PatchGuard-like system: what to look for, and how to discriminate between the OS touching critical data and malware doing so, etc. There were also, of course, the general ReactOS/TinyKRNL questions, as well as questions on my interest in the job/company.

I felt exhausted at the end of about the 1-2 weeks this process took, but I thought I had done very well on all the interviews. During my talk in Waterloo, I got a call saying I got offers from both teams, and had to choose one. I chose the Core OS kernel team, and received my offer in the mail a couple of days later.

Job Description: Kernel Developer. Working on various Darwin/OS X related undisclosed projects.
Phone Screens: 6, some were conference calls with multiple people on the line.
Campus Visit: No


My path to Microsoft started through various friends and contacts that I’ve made at the company in the last few months, thanks to my security-related research and presentations/papers. They saw in me a really good candidate for the various security groups at Microsoft, and also for the actual NT Kernel Team. The interview process at Microsoft was both disappointing and amazing. First off, it started with a pretty technical phone screen. Unfortunately, my screen was on SQL, which I knew absolutely nothing about. However, my interviewer was very understanding, gave me a couple of hints, and I was able to identify and solve issues with “cursors”, something I had never even heard of. I was also asked some more generic personality questions, and my opinion of the interview was that I did decently. I was interviewed by one of the most prominent figures in SQL, working in the Core SQL Engine Group at Microsoft, and someone I deeply respect.

This interview led me to an actual invitation for a campus interview. This is where the disappointing part starts. My phone screen was sometime in October or November. It took about two months, by email, to get an actual interview date, and it ended up being in March. Therefore, even though Microsoft was my first confirmed interview, in the time frame it took them to set something up for me, Apple and Google had the chance to hear about me, contact me, interview me, and both send me offers. This created a very difficult problem for me in terms of the various deadlines that the other offers had to meet. All in all, I didn’t feel that my RC (Recruiter Coordinator) was very communicative with me, and I had to rely on my connections inside the company to figure out what was going on. Contrast this with Apple, which had everyone on their team calling me (which greatly raised my interest in the company), and even Google, who had one of their top engineers chat with me on the phone; both companies kept in touch by email and phone regarding my status, offers, interviews, etc. Microsoft’s replies, when available, were always robotic template files.

However, this disappointment quickly faded away once I got on campus. Microsoft has the most amazing interviewing experience. First of all, not only do they pay all your expenses, you also get a generous amount of money to spend during your daily activities, and you’re encouraged to stay more than one day. Taxis are included, up to $75 of food per day is included, and museum visits, sightseeing, long-distance calls, Internet access and more are all free perks you get. Additionally, before your rounds start, Building 19 has various computers, big-screen TVs and Xboxes to fill up your time. You can also visit the campus, and even go see the Microsoft Museum, which has some unique artifacts you’re not likely to see anywhere else.

Once your interviews start, you’ll meet with a variety of people on the teams that are interviewing you. They are all very smart people, and each of them has his or her own interviewing style. You’ll probably start out with coding questions/tricks, and move up to more high-level implementation/architecture topics. My final interview was with a hiring manager, and consisted much more of personality and professional questions related to work habits, ethics, etc. I liked the fact that the interviews seemed to test every part of the candidate, from your typical algorithm questions down to your pattern of thinking and answering hard business problems.

I had a serious issue with my work at Microsoft, however. First of all, the deadlines for my other offers were Monday (and my interviews were on a Friday). Secondly, I needed to know if I could ever work on ReactOS/TinyKRNL after my internship was over. The only people who could answer this were LCA, the Law and Corporate Affairs department of Microsoft, who are usually pretty hard to get a hold of, especially on a weekend. I made it clear that two things were critical to me: being allowed to work on ReactOS after my internship was over, and working on the Base or Viridian team.

It turns out I passed my interviews, and my understanding was that an offer could have been extended to me. Unfortunately, I was classified as a “legal risk”, and they did not want to go forward with it, due to my work on ReactOS. It was made clear to me that I would have to choose between the two. Since this wasn’t full-time employment, and only the first internship of many to come, I didn’t want to sacrifice the project for it. Who knows if I wouldn’t have liked the Base Team? Or maybe I would want to work at some other company later on, or maybe Microsoft would no longer want me. The restriction of never being able to work on ReactOS again seemed way too harsh (not even non-compete agreements are this permanent), but regardless, I can understand why Microsoft chose to do this. I am still very grateful for meeting all those smart people, and will be keeping in touch with them in the future.

Job Description: Kernel Developer. Working on the base kernel and/or Viridian.
Phone Screens: 1
Campus Visit: Yes. Five Interviews.

Final Choice

Ultimately, because of the Microsoft situation, my choice came down to Apple vs. Google. Both are dream companies to work for, and it wasn’t easy choosing between the two. Ultimately, however, the work I would be doing at Apple was a lot more related to my core competencies (kernel development), and gave me the chance to discover a new architecture and OS design. I felt like some of my work at Google might be hindered by their requirement for computational/algorithm experience and my lack of formal training in the matter (which won’t come until my next semesters). Also, Apple’s details about my work (which I can’t mention) clearly became the defining factor in my decision. The team size, which is extremely small, meant that my work would have a real impact on the products/services/etc. I’d be working on, and that is exactly the kind of opportunity I think an internship is good for.

Another important factor was ReactOS, which didn’t seem to hinder my work at Apple at all, as well as the friendliness of all the people at Apple. I am trying to bring my girlfriend over with me for the summer, and Apple was very forthcoming in helping with this. In the end, the Apple offer was the most interesting, and the culture/ethics and work seemed the best adapted to me, as did the helpfulness of everyone involved in my interview process and offer. I felt like I was really needed, a truly unique candidate, and that was indeed a great feeling to have.


Please remember that this experience was unique to me; do not attempt to generalize or make any employment choices based on it, since yours will most likely be different. I have tried my best to avoid giving away any confidential or private information, so please do not ask about or comment on my offers, perks, et cetera, because I will not discuss them.

I strongly recommend that anyone with the opportunity to work at any of these three companies take it, if their interest in the work they’ll be doing is high. They are all amazing companies that I’d love to work for during my life.

If there’s one lesson that I want to share from my experience it’s this: go with your interests. Don’t be amazed by perks, salaries, or other material things. Does the campus/team seem a good match? Does the work interest you? If yes, everything else should be secondary.


I promised I’d blog about this little bugger, so here it is. It’s an amazing little API that’s pretty useful for various purposes. Let’s look at the definition first:

NTSTATUS
NTAPI
RtlRemoteCall(
    IN HANDLE Process,
    IN HANDLE Thread,
    IN PVOID CallSite,
    IN ULONG ArgumentCount,
    IN PULONG Arguments,
    IN BOOLEAN PassContext,
    IN BOOLEAN AlreadySuspended
    );

You can already guess what this function does! Basically, it hijacks any thread of your choice in any process that you have access to, and sends the thread to a new “CallSite” by pointing its EIP there. It also allocates a new ESP, and pushes up to four parameters. That’s somewhat similar to CreateRemoteThread, but notice that in this case we’re hijacking an existing thread, not creating a new one. This means that once the CallSite routine finishes, the thread will simply die.

This is where the PassContext argument comes into play. If it is set to TRUE, the function will also push the thread’s context, as captured before the thread was suspended and modified, onto the new ESP. Therefore, if you control the routine on the other side, you can have it issue an NtSetContextThread(&Context), and the hijacked thread will return to where it was executing. Be careful with waits, though, since this will restart a wait.

Finally, the AlreadySuspended parameter is useful if your target thread is already suspended and you want to resume it yourself at a later time. Otherwise, the remote call happens immediately (as soon as the OS switches to the other thread).

So what’s all this good for? It turns out one great use is building a sweet IPC (Inter-Process Communication) framework without resorting to LPC/RPC. I built a sample project that I’ll release soon, but the basic premise is that two threads, client and server, are running in different processes. The server thread is always suspended, and loops around an NtSuspendThread call (for the case when it gets woken up). The server process has a special routine for incoming requests, and the client process has a special routine for incoming replies.

The client issues, at various times, InterAPI calls. These calls have 4 parameters, which is the maximum. The first is the actual Thread ID of the caller, the second is an API Number, and the last two parameters are specific to whichever API call is being made. The request routine then calls the respective handler (based on a table; the API Numbers are indexes). To reply, the handler issues a second RtlRemoteCall back to the client (to its reply routine). But how do we know the caller? That’s where the Thread ID comes in, since we can get a handle to it and its process by using NtQueryInformationThread. And how do we know the pointer to the callback routine? Let’s leave that as a surprise for the code.
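The table-driven dispatch described above can be sketched in portable C. Everything here is illustrative: the API numbers, handler names, and DispatchApi routine are invented for this example, and the RtlRemoteCall transport is left out entirely so the sketch compiles and runs anywhere.

```c
#include <assert.h>

/* Hypothetical API numbers; in the real framework this value arrives as
   the second RtlRemoteCall argument. These names are invented. */
typedef enum { ApiAdd = 0, ApiMul = 1, ApiMax } API_NUMBER;

typedef long (*API_HANDLER)(long, long);

static long HandleAdd(long a, long b) { return a + b; }
static long HandleMul(long a, long b) { return a * b; }

/* The API Number indexes directly into the handler table, exactly as the
   text describes. */
static API_HANDLER ApiTable[ApiMax] = { HandleAdd, HandleMul };

/* Stand-in for the server's incoming-request routine: validate the API
   number, then dispatch to the matching handler with the two
   API-specific arguments. */
long DispatchApi(API_NUMBER Number, long Arg1, long Arg2)
{
    if (Number >= ApiMax) return -1; /* unknown API */
    return ApiTable[Number](Arg1, Arg2);
}
```

In the real framework, DispatchApi would also receive the caller's Thread ID so the reply RtlRemoteCall knows where to go.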

This callback routine takes only one parameter, the return code. The callback saves this return code in the TEB (guess where?) and issues an NtResumeThread, which causes the caller thread to return. The caller thread can then read the return code from the TEB and return it to the caller of the InterAPI.
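The return-code round trip can be modeled with a thread-local variable standing in for the spare TEB field. All names here are invented for illustration, and the NtResumeThread call is stubbed out as a comment so the sketch stays portable.

```c
/* Hypothetical per-thread return slot; the real framework reuses a spare
   field in the caller's TEB instead. C11 _Thread_local gives each thread
   its own copy, just like a TEB field does. */
static _Thread_local long RemoteReturnCode;

/* Stand-in for the client's reply routine: stash the server's return code
   where the caller thread can find it once it is resumed. */
void ReplyCallback(long Status)
{
    RemoteReturnCode = Status;
    /* ...NtResumeThread(CallerThread, NULL) would go here... */
}

/* What the resumed caller does: pick the return code back up and hand it
   to whoever issued the InterAPI call. */
long ReadRemoteReturnCode(void)
{
    return RemoteReturnCode;
}
```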

The code will be shown soon 🙂

Back from CUTC

I had the chance to attend the Canadian Undergraduate Technology Conference 2007 this year in Toronto, and it was one of the most entertaining, informative and enjoyable events I’ve been to lately. Apart from the wonderful keynotes (one of them was by a Nobel laureate), the competitions, tech shows and sessions were extremely useful. I was extremely impressed by Apple’s Shark and Quartz Composer tools. I always imagined Mac development was a bit of a mystery, all command-line based magic, but their tools are a serious threat to Windows development. Windows doesn’t even have a tool that comes close to what Quartz Composer can do, and although tools like Shark already exist, none of them are as seamless, easy to use, and powerful. In 20 minutes we took code that we had never seen before and optimized it from 900 ‘thoughts per second’ (a metric in an AI test case) to over 5000. The entire platform is built on open source tools (such as GCC), and even Shark is based on the Solaris code analysis/profiling tool called DTrace (I believe that’s the name). But it’s the Apple UI and integration that makes it all worth it.

Meeting with various company executives, managers and engineers was great too, and they had a lot of insight into their experience working in the industry.

To make things even better, my team also won the “CUTC 2007 Best Design Award” in the AMD/ATI Tech Team competition. All five of our team members received an ATI Radeon video card. This week I’ll be attending CUSEC, the Canadian University Software Engineering Conference, which, thankfully, is in Montreal. I will most probably be doing a demo of ReactOS as well.