Sunday, 10 August 2014

Smartcard authentication and access control

DISCLAIMER: I am NOT a crypto expert. Of course I use cryptography, authentication protocols, etc.
I understand them (and their cleverness amazes me) and I have even taken two or three courses on cryptography during my Master's and my PhD. But the first lesson you ought to learn when you are working on authentication, and on cryptography in particular, is: you are not an expert, and the courses will not make you one. So you'd better never, ever design your own crypto algorithm, or your own authentication protocol. Stick with what others have done (and studied for years) and follow best practices. Not blindly, of course, but understanding what is going on is one matter; designing it flawlessly is another matter entirely.

Last time I mentioned that one of the possible usage scenarios for a smart-card is to act as an access and "presence" token. You can think of them as read-only IDs.
Well, you usually go further than that: writing on cards has advantages (you can update them with new information, for example, or "stamp" them when you see them, so that officials can check that you have validated them and are therefore authorized, a necessity if you do not have gates at every access point); however, for the sake of discussion, we can think of them as read-only IDs.

Couplers, i.e. card readers/writers, will recognize the card (and optionally "stamp" it), and record your passage (via LAN or WiFi or UMTS).

Each card has a unique identifier (UID). This ID is hard-wired when the card is produced. Think about it as the MAC address of a network card.
The UID is always transmitted in the initial phases of the communication between the card and the coupler. Many applications just stop there: read the UID, keep a white list, and let holders through if their UID is on the list.

Does it work? Sure. Is it a good approach? No, absolutely NOT. And I was (and I still am) amazed at how many contactless systems just stop there. These systems are not secure, at all. You just have to read the UID (any NFC reader can do that, you just need -for example- to touch the card with an Android phone) and then configure a device (an NFC phone, or a programmable card) with the same UID and... presto! You "cloned" the card.
You didn't, of course: these cards are not clonable, not without a (very) substantial effort, time, and expensive machinery.

The UID is just not designed for this purpose. It is used to identify a card, not to protect against cloning.
Let's recall the typical requirements in this scenario. You want to:
1. identify each user uniquely;
2. prevent the user from "faking" a card, either a non-existing one or a clone of another user's;
3. let each user read their own card's content, if they want to, but not the content of other cards;
4. let the officials (security guards?) read the content of the card;
5. let only the authorized couplers write the card: nobody else can do that (or the officials could read "fake" validations, i.e. records not produced by authorized couplers, which the system knows nothing about; if you could do that, you could pass a casual inspection unpunished).

To do this, many cards (in our case: Mifare DESFire EV1 cards) provide some mechanisms out of the box that already cover most of the points. Furthermore, there are a couple of best practices that let us cover all of them.

DESFire EV1 cards use Triple-DES (T-DES) keys to provide security and authentication. These cards are essentially file-storage cards, with files organized in a one-level file system. The top "directories" of this file system are called applications.

The card, every application, and every file can have three keys each. The keys will be used for access control, and can be viewed as the read-key, write-key, and modify-key.
So, if I authenticate with the card using KeyA, and KeyA is the read-key for App1/File1, I can open and read the content of that file, but not write it.

Keys are symmetric, and are stored on the card and on the coupler (the RFID reader/writer device).
Cards are considered a secure storage element, as keys can NOT be read from cards. There is no way to retrieve them, and cards are engineered to be tamper-resistant. Most of them are even protected against attempts to recover their content through differential power analysis.

Keys are stored on couplers using essentially the same technique: they are not stored on disk or in memory, but inside a Secure Element (SE) (sometimes called a SAM). It is essentially the same piece of silicon and plastic you find in smart-cards or in your phone's SIM card (they have the same physical format as well), and it is in fact a smart-card, with the same tamper-resistance properties. So, even if someone steals one of your couplers, they still haven't got the keys.

Obviously, only some couplers have the write key. They are also the only devices using a Secure Element: readers (couplers with only a read key) do not have them, for cost reasons.

How do you authenticate with the card? This is all covered by the smart-card protocol: in the case of DESFire cards it is basically a challenge-response in which the two parties "prove" to each other that they know the key. You do that by generating a random number, encrypting it, and sending it to the other party. The other party decrypts it, generates another random number, and sends both of them back, encrypted. The first party decrypts the reply, sees that its number is included (good, the other party has the correct key), and sends back the other number, encrypted. Now the other party knows we have the key as well (we can encrypt and decrypt).

Previously I mentioned some best practices you ought to add to the basic framework. These are commonly accepted as standard and are easy to implement: key diversification and blacklisting.
Each card you issue should hold a different set of keys. This can seem like a problem if you have, like us, half a million cards (and therefore a million keys).
Wow... how can we store millions of keys efficiently? And check all of them when a card is presented to our couplers?
Well... we do not store them. We derive them.

The UID of the card is used with a master key to derive a "personalized" key, using a one-way hash.
So even if a card is defective and compromised, we have lost its set of keys, nothing more.
• each card holds different keys, so if one is compromised, you just have to throw away (blacklist) that card
• (assuming you have chosen a proper hash algorithm) it is impossible to reconstruct the master key from the derived key
• you can store just the master keys on your "official" devices, in a secure way (i.e. in the SAMs), and they will be able to read and write any card (by deriving the right key on the fly)
• you can hand each user their own "personalized" key, and they will be able to read their own card, but not their friends' (or spouse's) cards.

By using what the card manufacturer offers plus two little (and simple to implement) best practices, you get a very good level of security.
We added a little more, just to be on the safe side, but these best practices are already a very good baseline. I am amazed at how many companies fail to realize the issues and threats they are facing with the extremely naive UID-only implementation.
But we still have sites that store passwords in clear text, or that force the user to a numeric PIN of 5 digits, or to 8-char passwords.. so I should know better by now :)

Wednesday, 18 September 2013

Where have I been?

I really don't know if anybody besides my friends and coworkers reads this blog but... after some entries in the first half of this year, everything went silent.
The reason is simple: I was working, and in my free time... learning and working again.

The project I mentioned (the mini sharepoint-like replica) got momentum and a deadline. I am really enjoying working on it; I am using and putting together a lot of great techniques and frameworks. Some of them were known to me (WF4 for workflows management, GPPG/Gplex for parser generation, ASP.NET MVC for web applications...) but some were really new (AngularJS.. wow I do like this framework!, Razor.. which is a joy to work with). It is tiresome to work, then get home (using my commuting time to read and learn) and then work again. Fortunately, my wife is super-supportive!

That, plus my "regular" day-to-day work, where we are also putting together some new and exciting applications. The most interesting is a "framework" for authentication and authorization from third parties that uses a mix of OAuth, NFC and "custom glue" to authorize access to third-party applications without having to enter a username and password (we use our smart cards as credentials).
Think about an open, public environment (a booth at an expo, or one of the outdoor events for which our region is famous, like the Christmas Market).

You have a booth where you are offering some great service subscription. You do want people to stop by and subscribe to your services, but you do not want to bother them for too long, so filling out forms is out of the question; you do not even want them to sign in on a keyboard (try to make people enter a password at -5°C... it's not easy).
I coded a simple prototype, an Android app that uses NFC to read the AltoAdige Pass that every traveler has (or should have :) ) as a set of credentials for authorization against our OAuth provider. The third-party app requests an access token, users are redirected to our domain and "log in" using the card, by waving it in front of an Android tablet. The process is secure (it involves the card processor and keys, mutual authentication, etc.) but easy and fast. They see the permissions and confirm (again waving the card when the tablet asks them).

For now it is only a prototype, but... I find it interesting when pieces of (cool) technology fall together to produce something that is easy to use and actually makes people's lives a little bit easier.
With so many balls to juggle, time for blogging is... naught. But I will be back with more news on my projects, as soon as I have some time.

Monday, 06 March 2006

Security lesson no.6: .NET Security

Finally, we have reached the last topic in our cycle of security lessons on software attacks: the security model of .NET. We will see how CAS (Code Access Security) works, what evidence and strong names are, etc. I'll also give a hint about the "weakest link" in this model.

NOTE: my assumptions were made for version 1.1 of the framework. Some things were updated in 2.0 (in particular, there is good news on how the new version copes with the "weakest link"... but I want to cover this point more precisely in a future post, since the work I did on this topic allowed me to learn a lot about the .NET runtime/loader and the Windows loader as well!)

dotNETSecurity.ppt (619.5 KB)
Sunday, 05 March 2006

Security lesson no.5: DLL Injection

Yesterday we saw a brilliant :) solution to a problem, using some techniques I have always loved. Unfortunately, in these troubled days, techniques like DLL injection and IAT patching are used more often by malware than by useful and great software. So it is important for the software developer who cares about security to know how they work and what can be done to prevent them.

DLL injection is the topic of this lesson, but we will also see what is possible to do once our malicious DLL is inserted into another process's address space: window subclassing, virtual memory walking (in search of private data like passwords, for example) and IAT overwriting.

Have fun! (and behave responsibly, as usual...)

DLLInjection.ppt (319.5 KB)

ex6-DLLinjection.zip (139.39 KB)
ex7-VMWalk.zip (338.91 KB)
ex8-IATOverwrite.zip (224.36 KB)
Saturday, 04 March 2006

Intercepting Windows APIs

As I described in a previous entry, one of the few games I really enjoyed playing was Enemy Territory. It is a free, multi-player FPS based on the Quake 3 engine. It is class-based: you choose a class and that dictates the abilities of your soldier (and what he can do). I played with my fellow university mates: some of them created a clan (they even played one or two official tournaments) and they wanted to train (I was not particularly good... I earned the "easy frag" attribute!). Besides, it was good to relax for an hour after lunch, before attending other lessons.

However, we had a hard time playing it... The admin wouldn't let us use the computer lab for non-didactic purposes. It was silly, if you ask me, especially since it was not explicitly forbidden by college rules: for example, students and professors alike were allowed to use empty classrooms to play card games. So why couldn't we use an empty lab to play a free game? Since the labs were not under CCTV surveillance, we took the risk and played nonetheless (we were young... :) ). But one day, an email from the admin warned me not to use that particular game anymore. How did they know? Simple: someone was checking all the files in the public directories (where the game was installed), which user owned them (using ACLs), and what kind of files they were.

A friend of mine and I started to think about the problem. Initially we thought about manipulating the ACLs to change the ownership of the game files (maybe to the Administrator... it would have been ironic!), but it was impractical, it required privileges above those granted to students, and we didn't want to do anything illegal (like a privilege escalation). Our solution was simple: hide your programs, not only your data.

Once upon a time, programs consisted of a single .exe (or .com) file. Nowadays, instead, an average application has thousands of files and DLLs in its installation directory. Think of Office, or of a game like Quake 3. We wanted to execute a complete program out of a single packed data file, possibly compressed or encrypted. I'll discuss our ideas and the techniques we used, namely DLL injection and API interception and forwarding. We began to discuss the topic seriously. Our first idea was to provide a DLL that was a proxy/interceptor for msvcrt.dll, the C runtime of the MS C++ compiler. This DLL contains the implementation of the C file-handling functions, such as fopen, fread, fseek. We could make a DLL with the same name, put it in the app directory (which comes first in the loader search path), export all the functions of the original msvcrt.dll, implement the file-handling ones ourselves, and forward the other functions to the original DLL. Phew, a lot of work... msvcrt.dll exports 780 functions! We could already sense the calluses on our fingers! Furthermore, the C runtime can be statically linked into the exe, or the program could call Win32 API functions directly.

But wait, even fopen, fread, fseek and friends call Win32 API functions! So, plan B: intercept kernel32 functions! Despite its name, kernel32 is not a kernel module: it is a simple user-mode DLL that provides a nice API over the real kernel calls. So it can be intercepted... Calling the application we want to execute out of the compound file the "victim", all we have to do is:

1. Place some code in the victim process address space.
2. Execute this code in order to:
   1. locate the IAT (Import Address Table) of the exe;
   2. patch pointers in the IAT to point to OUR functions.
3. From now on, all calls to the patched functions will jump not to the original kernel32 code, but to our functions.
The advantages of this approach? It's more economical (we have to write only the functions we need), it works with (almost) every app (even with non-C apps), and it's fun to code!

DLL injection

We need to place code and execute it in the address space of another process. At first this can seem impossible: every Win32 process has its own virtual address space, and pointers range over this space, so it's impossible to access another process's space [1][2].

The virtual address space: the lower 2GB are the user-mode space, and they are private for each process (see [1][2] for details)

Well, not really: how can debuggers work, then? With the help of the OS, of course! We'll ask the OS for help too. Our goal is to load a DLL in the victim's address space: when a DLL is loaded, the DllMain function in the DLL is called, with dwReason equal to DLL_PROCESS_ATTACH. There are several methods to load a DLL into a process [3]:

1. Windows HOOKS (the most ancient one). A hook is a callback function called by Windows every time a particular event occurs; the most interesting one fires when a top-level window is created or destroyed. We can then see if the application is interesting, and decide what to do with it. The nice thing is that the DLL containing the hook code is loaded into the other application's address space.
2. The registry. There is a registry key (HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Windows\AppInit_DLLs) in which you can list DLLs that have to be loaded into every process's address space. This is how mouse or video card DLLs end up in your address space. Drawbacks: you must have Admin rights to write to the registry, and your DLL is loaded into a lot of non-interesting processes. What a waste.
3. Two magic Win32 functions: CreateRemoteThread and WriteProcessMemory [4].
Richter in [4] explains the magic very well. To summarize:
   1. Obtain a HANDLE to the victim process (via CreateProcess, or OpenProcess with its PID).
   2. Reserve some space in the virtual address space of the victim with VirtualAllocEx.
   3. Use WriteProcessMemory to write the name of the DLL to load into the memory just reserved.

The virtual address space: the lower 2GB of the user-mode space, with the kernel32.dll loaded at the same address.

At first, we believed we needed to write shell code to execute LoadLibrary, and this is bad for two reasons:
(a) it is difficult to write;
(b) with the new XP SP2 NX (no-execute) page protection flag we could run into trouble.
Fortunately, we realized a fact: DLLs are mapped into every process's address space, so their mappings are private to each process. However, when you create a DLL you specify a "preferred load address" at link time, and the OS loader will load the DLL at that address if it's free; otherwise the loader must relocate the DLL, and that is a time-consuming operation. This is particularly true for system DLLs, which are always loaded at the same address in every process. So, if we do a GetProcAddress for LoadLibrary in our own process, we obtain the same address it has in the victim process.

We can pass CreateRemoteThread the address of LoadLibrary as the startup routine, and the name we wrote in the victim's address space as its parameter, as in the figure.

IAT patching

Now we have our own code running in a thread in the victim's address space. What can we do now? Everything. In particular, we have access to the PE data directories of our "host", the victim. Executables in Win32 (DLLs, exes, and even device drivers) follow a format called PE (Portable Executable). Every PE is divided into sections: export, import, resources, debug data, delayload, bound modules... [5][6].

The section we are interested in is the import section, with its IMAGE_IMPORT_DESCRIPTOR structure.

The import section, with its two parallel arrays of function pointers

The import section after the loader has done its work. The IAT now points to function entries in kernel32.dll

There's one IMAGE_IMPORT_DESCRIPTOR for each imported PE (an executable or, most commonly, a DLL). Each IMAGE_IMPORT_DESCRIPTOR points to two essentially identical arrays. The first one is the Import Address Table (IAT). The second one is called the Import Name Table (INT) and is used by the loader as a backup copy in case the IAT is overwritten in the binding process. Binding is an optimization performed on PE files ahead of load time, but it goes beyond the scope of this article; Matt Pietrek in [5] covers all the details. The IMAGE_THUNK_DATA structures in the IAT have two roles:

• In the executable file, they contain either the ordinal of the imported API or an RVA (Relative Virtual Address, an offset from the base address at which the PE is loaded) to an IMAGE_IMPORT_BY_NAME structure. The functions we need to patch in DLLs are those with a name, so we look at those entries that contain an RVA. The IMAGE_IMPORT_BY_NAME structure is just a WORD, followed by a string naming the imported API.
• When the loader starts the executable, it overwrites each IAT entry with the actual virtual address of the imported function.

The import section after zdll's DllMain has done its work. The IAT now points to function entries in zdll.dll

So we need to replace the addresses placed in the IAT by the loader with the addresses of our functions. Here the INT becomes important: how do we know which entry in the IAT we need to overwrite for, say, CreateFileA? We iterate through the entries of the IAT and INT together: the INT provides the name of the n-th entry, the IAT its virtual address. We simply overwrite the IAT entry with our own function's address.

// imageBase is the load address of the patched module: the INT stores
// RVAs, which must be added to it to obtain real pointers.
void patchIAT(PBYTE imageBase, PIMAGE_THUNK_DATA32 pINT, PIMAGE_THUNK_DATA32 pIAT)
{
    // Walk the two parallel arrays until the null terminator entry.
    while (pINT->u1.AddressOfData != 0)
    {
        // We don't consider un-named (by-ordinal) imports.
        if (!IMAGE_SNAP_BY_ORDINAL32(pINT->u1.Ordinal))
        {
            // The INT entry holds the RVA of an IMAGE_IMPORT_BY_NAME
            // (a WORD hint followed by the function name).
            PIMAGE_IMPORT_BY_NAME ordinalName =
                (PIMAGE_IMPORT_BY_NAME)(imageBase + pINT->u1.AddressOfData);
            const char* funcName = (const char*)ordinalName->Name;

            // Compare the name (strcmp, not ==: these are C strings!)
            if (strcmp(funcName, "CreateFileA") == 0)
            {
                // Redirect the IAT slot to our replacement function.
                pIAT->u1.Function = (DWORD)MyCreateFile;
                break;
            }
        }

        pINT++;         // Advance to next thunk
        pIAT++;         // Advance to next thunk
    }
}

Compound file

So, at this point the only thing left to do was to provide our own implementation of functions like CreateFile, WriteFile, SetFilePointer, FindFirstFile... and patch the kernel32 entries of the IAT with them. But how can we implement a file system in a single file? After some searching, I suggested that maybe Structured Storage, the way Microsoft calls its compound files, could be used: Word and PowerPoint use them, for example.
It was only a suggestion, but the day after, my mate came up with an almost complete implementation based on Structured Storage functions and COM interfaces. Amazing! The last things to do were an application for building a compound file, and some cryptography to hide the content of the file. After all, this was the original goal :)

The final product worked. It was great! A piece of software as complex as a video game was able to run with our own file APIs. We never used it (it was a bit too slow on startup, and we found a much simpler solution: networking our notebooks), but it was fun, and I used the interception library we created for more interesting stuff!

[1] Jeffrey Richter. Load Your 32-bit DLL into Another Process's Address Space Using INJLIB. Microsoft Systems Journal, May 1994.

[2] Jeffrey Richter. Advanced Windows, 3rd edition. Microsoft Press, 1997.

[3] Mark Russinovich. Inside Memory Management, Part 1. Windows and .NET Magazine, August 1998.

[4] Mark Russinovich. Inside Memory Management, Part 2. Windows and .NET Magazine, September 1998.

[5] Matt Pietrek. Inside Windows: An In-Depth Look into the Win32 Portable Executable File Format. MSDN Magazine, February 2002.

[6] Matt Pietrek. Inside Windows: An In-Depth Look into the Win32 Portable Executable File Format, Part 2. MSDN Magazine, March 2002.

[7] Microsoft Corp. Platform SDK: Structured Storage. MSDN Library, April 2004.

Thursday, 02 March 2006

Security lesson no.4: Integer overflow

To finish the cycle of lessons on overflow-based attacks, I couldn't omit a mention of integer arithmetic overflow. Integer arithmetic overflow is harmless on its own, but it can be combined with another type of attack, typically a buffer overflow. Consider the following code from a previous lesson:

int ConcatString(char *buf1, char *buf2, size_t len1, size_t len2)
{
    char buf[256];

    if ((len1 + len2) > 256)
        return -1;

    memcpy(buf, buf1, len1);
    memcpy(buf + len1, buf2, len2);
    return 0;
}

It seems to avoid the buffer overflow problem with a simple check. However, this function is insecure. Why? Discover it in my slides!

IntOverflow.ppt (128.5 KB)
Wednesday, 01 March 2006

Security lesson no.3: Pointer Subterfuge

The last buffer overflow technique I treated in my lessons was Pointer Subterfuge. With this technique you try to clobber a function pointer, and make it point to a memory location containing your own code.

My students instantly objected: there are almost no function pointers in our code!
No? What about C++ objects? COM components? Kernel functions exposed as APIs?
A common way to intercept kernel-mode APIs is to patch the kernel’s system service table, a table made of function pointers!

Are you interested? Go ahead and read the powerpoint slides and the sample source!

PointerSubterfuge.ppt (255.5 KB)
ex9-vptrSmash.zip (38.35 KB)
Monday, 27 February 2006

Security lesson no.2: Heap smashing

In a previous post I talked about my Software Attacks lessons for the Computer Security course at the University of Trento, where I was an assistant professor.

Now it is time for another lesson: it is again on buffer overflow, but using a more complex attack called Heap Smashing.
Have fun with my powerpoint slides and my sample code.

NOTE: about the sample code: THE CODE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND. IN NO EVENT I SHALL BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF INFORMATION AVAILABLE FROM THE SERVICES.

In short, use at your own risk... :). The code was written and compiled using Microsoft VC++ 6.0 under Windows 2000. As I illustrate in the slides, the enhancements in Windows XP SP2 should make this kind of technique ineffective.

HeapOverflow.ppt (264 KB)
ex5-HeapSmash.zip (36.3 KB)
Tuesday, 10 January 2006

Ah-ah! [1]

From Sam Gentile:
“Reported by CNET, of all the CERT security vulnerabilities of the year 2005, 218 belonged to the Windows OS. But get this - there were 2,328 CERT security vulnerabilities for UNIX/Linux systems.”
That's great news, but it only confirms that Windows is now an OS that takes security really seriously.
Why are even clever people, like Paul Graham, sometimes so biased against Windows and Microsoft?

Saturday, 19 November 2005

My first lesson was...

It has been over a year since my first lesson as an assistant professor at the University of Trento, held while I was writing my thesis. The course was on Security and Privacy, and my lessons were on Software Attacks and how to prevent them. I do not claim to be a security expert, I'm not, but my love for compilers and OS internals, and my desire to know "how it works inside", provided me with enough knowledge to show others what happens when a cracker gets control of your computer.
I always promised myself I would publish my lessons on the web. I also have some examples, but I don't think I'm going to publish them. They are all innocuous, since they only affect themselves, but maybe it's not so responsible to leave them "free" in the open air... =). My first lesson was on general vulnerabilities, an introduction to the lexicon and to the first and simplest type of software attack: the Buffer Overflow. If you want to read the whole lesson you can find it here (in PowerPoint format).

If you, like me, are lazy, here is a little appetizer of what you will find, to see if it's worth your time. I hope so!
When you compile a C program, the instructions are translated into ASM code (and then to machine code, but that is almost a 1-1 mapping). As we saw some days ago, on an x86 machine most of the data (parameters, local space) is held on the stack. As an example, consider the following C code:

void f(int a, int b)
{
    char buffer[20];
    int i;
    int j;

    j = a;
    i = a + b;
}

int main()
{
    f(2, 3);
    return 0;
}

When we compile it, it is translated to the following ASM instructions:

; 3    : {

push  ebp
mov   ebp, esp
sub   esp, 28

; 4    :    char buffer[20];
; 5    :    int i;
; 6    :    int j;
; 7    :    j = a;
; 8    :    i = a + b;

mov   eax, DWORD PTR [ebp + 8h] ; a
mov   DWORD PTR [ebp - 8], eax ; j EBP - 8
add   eax, DWORD PTR [ebp + 0Ch] ; b
mov   DWORD PTR [ebp - 4], eax ; i EBP - 4

; 9    : }

mov   esp, ebp
pop   ebp
ret   0

; 12   : {

push  ebp
mov   ebp, esp
sub   esp, 0

; 13   :    f(2, 3);

push  3
push  2
call  _f

; 14   :    return 0;

xor   eax, eax

; 15   : }

mov   esp, ebp
pop   ebp
ret   0

From the assembly code, you can see how parameters and locals are translated to stack space. The return address (the point in the caller to which we should return) is also saved on the stack. Can you see it? If we fill the buffer with over 20 characters we will spill over and "invade" the space of the other locals, of the return address, and of the other parameters.

void func()
{
    char buffer[20];
    char* current = buffer;
    int i;

    for (i = 0; i < 256; ++i)
    {
        *current = 'A';
        ++current;
    }
}

int main()
{
    func();
    return 0;
}

What will happen? A memory access error ("This program will be terminated" on Win32, "Segmentation fault" on Linux), surely. But, if you compile and execute the code, at which address will the fault happen? (Hint: the ASCII code for 'A', repeated four times, is a memory address in the 32-bit virtual address space of your process.)

Now with another example program we try to spill the buffer in a clever way: knowing that at memory address 0x00401090 there is an "interesting" piece of code, we can try to "return" to it, instead of returning to main.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// 00401090
static char message[] = "AAAAAAAAAAAAAAAAAAAAAAAA\x90\x10\x40\x00";

void handleRequest()
{
    char buffer[20];
    strcpy(buffer, message);
    printf("Request: %s\n", buffer);
}

void doNastyThings()
{
    printf("He he!!\n");
}

int main()
{
    while (1)
    {
        handleRequest();
    }
    return 0;
}

Compile it with VC++ 6.0 (if you have another compiler, the address of the doNastyThings function will obviously change: fix it). Surprised to see "He he!!" on the console? Have I tickled your brain? To see how it works... let's read my lesson!

BufferOverflow.ppt (162 KB)