# Monday, February 27, 2006

Security lesson no.2: Heap smashing

In a previous post I talked about my Software Attacks lessons for the Computer Security course at the University of Trento, where I was an assistant professor.

Now it is time for another lesson: it is again about buffer overflows, but this time using a more complex attack called Heap Smashing.
Have fun with my powerpoint slides and my sample code.


In short, use at your own risk.. :). The code was written and compiled using Microsoft VC++ 6.0 under Windows 2000. As I illustrated in the slides, the enhancements in Windows XP SP2 should make this kind of technique ineffective.

HeapOverflow.ppt (264 KB)
ex5-HeapSmash.zip (36.3 KB)
# Sunday, February 05, 2006

Synthesized proxies

Last time we saw a first solution to the problem of adding synchronization contracts to an arbitrary .NET class. That solution, based on Context Attributes and Transparent Proxies, had some limitations (performance, no access to class members), so I designed a tool based entirely on SRE (System.Reflection.Emit) and System.Reflection. It can be thought of as a post-compiler that analyzes an assembly and the custom attributes placed on classes and functions, and writes a small assembly that contains a proxy to the original one.

The proxy is written using the metadata of the real target object. We use the facilities provided by System.Reflection to read the interface methods, the custom attributes, and the fields and properties of the target object, and System.Reflection.Emit to emit code for the guarded methods and forwarders for the other public functions. The methods in the proxy validate the contract and update the state for subsequent validations, all in an automatic way.
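Just to make the first step concrete, here is a minimal sketch (not the actual tool) of how the post-compiler can locate guarded methods; this GuardAttribute is an illustrative stand-in for the real one in Attribute.dll:

using System;
using System.Collections.Generic;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class GuardAttribute : Attribute
{
    public string Formula;                              // the Past LTL formula
    public GuardAttribute() { }
    public GuardAttribute(string formula) { Formula = formula; }
}

static class GuardScanner
{
    // Walk every type and method of the target assembly, collecting the
    // methods that carry a [Guard] attribute; these are the ones for which
    // the post-compiler must emit guarded wrappers.
    public static List<MethodInfo> FindGuardedMethods(Assembly assembly)
    {
        List<MethodInfo> guarded = new List<MethodInfo>();
        foreach (Type type in assembly.GetTypes())
            foreach (MethodInfo method in type.GetMethods(
                BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance))
                if (method.GetCustomAttributes(typeof(GuardAttribute), false).Length > 0)
                    guarded.Add(method);
        return guarded;
    }
}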

The process to create a guarded component is the following:

  • write the class as usual, then add the synchronization contract by writing guards for each method using the Guard attribute and Past LTL logic;
  • add a reference to the Attribute.dll assembly (the assembly containing the class implementing the Guard attribute) and compile your component. Note that the attribute can be used from every .NET language that supports custom attributes, like C#, Visual Basic, MC++, Delphi, and so on;
  • the .NET compiler will store the Guard attribute and its fields and values in the metadata associated with the method.
public class Test
{
    bool g;
    bool f = true;

    [Guard("H (g or f)")] //using constructor
    public string m(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }

    //alternative, equivalent form of the same guard:
    [Guard(Formula = "H (g or f)")] //using public field
    public string m2(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }
}

The component is then passed through the post-compiler, which post-processes the assembly, performing the following steps:

  • it walks the metadata of the given assembly, finding all the classes;
  • for each class it walks the methods, searching for the Guard attribute. If the attribute is found, the class is marked as forwarded;
  • it generates a new empty assembly;
  • for each forwarded class, it generates a class with the same public interface as the original one:
    • public, non-guarded methods are wrapped in the IL code necessary to perform the synchronization (acquiring the object-wide lock before forwarding the call to the original method and releasing it just after the method returns);
    • public and private guarded methods are wrapped in the IL code necessary to perform the synchronization and conditional access, as shown here:
      .method public instance void  m() cil managed
      {
        .maxstack  5
        IL_0000:  ldarg.0
        IL_0001:  call       void [mscorlib]System.Threading.Monitor::Enter(object)
        .try
        {
          IL_0006:  br.s       IL_0008
          IL_0008:  ldarg.0
          IL_0009:  call       bool [mscorlib]System.Threading.Monitor::Wait(object)
          IL_000e:  pop
          IL_000f:  ldfld      bool Test::pre1
          IL_0010:  brfalse    IL_0008
          IL_0012:  ldc.i4.0   //"false"
          IL_0013:  ldfld      bool Test::a
          IL_0015:  ceq
          IL_0018:  stfld      bool Test::pre2
          IL_001e:  ldc.i4.0   //"false"
          IL_0021:  stfld      bool Test::pre1
          IL_0026:  ldarg.0
          IL_0027:  ldfld      class [TestLibrary]Test::GuardedTestLibrary.dll
          IL_0031:  call       instance void [TestLibrary]Test::m()
          IL_0036:  leave      IL_0048
        }  // end .try
        finally
        {
          IL_003b:  ldarg.0
          IL_003c:  call       void [mscorlib]System.Threading.Monitor::PulseAll(object)
          IL_0041:  ldarg.0
          IL_0042:  call       void [mscorlib]System.Threading.Monitor::Exit(object)
          IL_0047:  endfinally
        }  // end handler
        IL_0048:  ret
      } // end of method Test::m
  • for each forwarded class, it generates constructors with the same signatures that call the corresponding ones of the original class.
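To give a feel for the SRE side, here is a minimal, hypothetical sketch of emitting the synchronization wrapper for a void, parameterless method; the real generator also handles parameters, return values, and the guard checks:

using System;
using System.Reflection;
using System.Reflection.Emit;
using System.Threading;

static class ForwarderEmitter
{
    // typeBuilder: the proxy class under construction;
    // targetField: the proxy field holding the wrapped instance;
    // target: the original method to forward to (void, no parameters here).
    public static void EmitSynchronizedForwarder(
        TypeBuilder typeBuilder, FieldBuilder targetField, MethodInfo target)
    {
        MethodBuilder wrapper = typeBuilder.DefineMethod(
            target.Name, MethodAttributes.Public, typeof(void), Type.EmptyTypes);
        ILGenerator il = wrapper.GetILGenerator();

        il.Emit(OpCodes.Ldarg_0);                                  // this
        il.Emit(OpCodes.Call,
            typeof(Monitor).GetMethod("Enter", new Type[] { typeof(object) }));
        il.BeginExceptionBlock();                                  // .try {
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Ldfld, targetField);                       // load wrapped object
        il.Emit(OpCodes.Callvirt, target);                         // forward the call
        il.BeginFinallyBlock();                                    // } finally {
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Call,
            typeof(Monitor).GetMethod("PulseAll", new Type[] { typeof(object) }));
        il.Emit(OpCodes.Ldarg_0);
        il.Emit(OpCodes.Call,
            typeof(Monitor).GetMethod("Exit", new Type[] { typeof(object) }));
        il.EndExceptionBlock();                                    // }
        il.Emit(OpCodes.Ret);
    }
}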

A schema for the usage of the generated proxy

The generated assembly and its classes are saved in a separate DLL, with the same name prefixed by "Guarded". This assembly should be referenced by client applications in place of the original one, as shown in the figure.

# Saturday, February 04, 2006

On anonymous delegates

Recently, while doing some research on anonymous methods/delegates, I came across an interesting puzzle posted by Brad Abrams.
The puzzle (and its explanation) is very useful for understanding what happens under the hood when an anonymous delegate is created. However, it took me some effort to understand what the compiler did using only the post and its comments, so I put Brad's code into Visual Studio, compiled an assembly, and inspected it with Reflector. The first case, the one with the delegate binding the loop variable i (num1, in the reverse engineered code), is translated in the following way:

private static void Main(string[] args)
{
    for (int num1 = 0; num1 < 10; num1++)
    {
        Program.<>c__DisplayClass1 class1 = new Program.<>c__DisplayClass1();
        class1.j = num1;
        ThreadPool.QueueUserWorkItem(new WaitCallback(class1.<Main>b__0), null);
    }
}

The anonymous delegate shares i between caller and callee. It is a closure, in a wide sense, in that it captures the state at the point where it is created. However, since the C# compiler captures i "by reference", and in this case the closure is created before the for loop, the behaviour is different from the one expected from real closures (in fact, anonymous delegates are not closures).
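Here is a minimal standalone repro of that behaviour (my own test, not Brad's code): all the queued delegates share the single hoisted loop variable, so they typically all observe its final value.

using System;
using System.Threading;

class CaptureDemo
{
    static void Main()
    {
        for (int i = 0; i < 3; i++)
        {
            // The anonymous method captures i itself, not a copy of it.
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                Console.WriteLine(i);   // typically prints 3, 3, 3 (timing-dependent)
            }, null);
        }
        Thread.Sleep(500);              // crude wait for the pool threads to run
    }
}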

The second example, however (delegate bound to a variable local to the for block), is different: the lifetime of j (the for block) forces the compiler to create and initialize a state object for the delegate at each iteration inside the loop:

private static void Main(string[] args)
{
    WaitCallback callback1 = null;
    Program.<>c__DisplayClass2 class1 = new Program.<>c__DisplayClass2();
    class1.i = 0;
    while (class1.i < 10)
    {
        if (callback1 == null)
        {
            callback1 = new WaitCallback(class1.<Main>b__0);
        }
        ThreadPool.QueueUserWorkItem(callback1, null);
        class1.i++;
    }
}

internal class Program
{
    // Methods
    public Program();
    private static void Main(string[] args);

    // Nested Types
    private sealed class <>c__DisplayClass2
    {
        // Methods
        public <>c__DisplayClass2();
        public void <Main>b__0(object);

        // Fields
        public int i;
    }
}

In a comment on Brad's post, a member of the C# team pointed out that

"Anonymous methods do not capture read-only locals at a point in time, instead they actually capture the local by putting it on the heap, to share it between the outer method and all anonymous methods."

Surely this statement makes clear what is happening; moreover, this example proves a point I have always believed in: the only way to be sure of what a compiler does is to look at the code it generates.

# Friday, February 03, 2006

Internet Explorer 7

Finally Microsoft released Internet Explorer 7 Beta 2 for non-Vista users! I downloaded and installed it immediately. I missed IE: it was surely faster and more comfortable to use than Firefox, but over the last year it had become technologically obsolete.

So, what do I like about Internet Explorer 7?

  • New toolbars, no menubar: it's great. It's neat, simple to use, leaves a lot of space for the pages. Who needs a complete File, Edit, View, Window, Help menu in a web browser? Finally someone understood that! A+!
  • Tabbed browsing: I do NOT use it in Firefox, because I always get lost in tabs. I think it's a nice feature, though, and IE surely makes it simple to use and seamlessly integrated. QuickTabs are great, although Firefox has a similar extension (foXpose).
    (BTW, I don't really get tabs: why? They are even more confusing for inexperienced users, because the system already has another tab bar, the application bar)
  • Search box (finally!). It is possible to add engines, but the number is still scarce: no Webster, no Wikipedia... MSDN is there, fortunately.
  • Search also from the address bar: Mozilla had it, and why they removed it from Firefox I will never get. And it uses the default search engine, too.
  • (Unchanged from IE6, but impossible to have with Firefox) Opening a new tab/window: you can choose to open it showing exactly the same page you are watching now. In Firefox I am forced to open a blank page, then copy/paste the address between address bars.

And where is it still behind Firefox? Or: what do I hate about IE7?

  • First, the most horrible thing: ALIASED FONTS. They look terrible on my monitor. If I disabled them in the control panel, why must IE impose its will and use them anyway? They are horrible to see, in my opinion; they make me wonder if there is fog in the room..
  • Download boxes: one box per download is surely better than the Firefox download manager (awful, IMO), but fortunately Firefox has a great extension, Download Statusbar

  • Extensions: if I don't like something in Firefox, there is an extension to modify it. I love extensions, and if IE7 becomes programmable using .NET (maybe JScript.NET) I will be the first to use it and to write a lot of extensions.
  • One of the features I love in Firefox and Visual Studio: incremental search. Guys, you did such nice work with the toolbars and the notifications (the yellow ribbon), why ruin the user experience by prompting with a search box? (or a download dialog, BTW)
  • On the same note: download files to the right place from the beginning. Why put them in the cache and then move them? I accidentally clicked "cancel" during this move operation (I was typing) and I had to download the file again.

Well, at the end of the list.. I still prefer Firefox. The only hope for IE is to become programmable, and then it will be mine.

More on yield

Yesterday I discovered and introduced the yield keyword. Today, I wanted to see some examples and to discover how it is implemented. Let's start with a classic example, the Fibonacci function, both in Python and in C#.
def fib():
   a, b = 0, 1
   while 1:
       yield b
       a, b = b, a + b
static IEnumerable<int> Fib(int max)
{
    int a = 0;
    int b = 1;
    yield return 1;

    for (int i = 0; i < max - 1; i++)
    {
        int c = a + b;
        yield return c;

        a = b;
        b = c;
    }
}

The two methods are essentially equivalent. How are they implemented? The first thought that came into my mind when I saw the Fibonacci example was: the compiler translates it to static variables. But clearly this is not the case: if two clients call the same function, they would share its internal state, which is not desirable.
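A quick way to convince yourself of this is to advance two enumerators independently (the sketch below just repeats the Fib method so it stands alone):

using System;
using System.Collections.Generic;

class FibCheck
{
    // Same iterator as above, repeated so the snippet stands alone.
    static IEnumerable<int> Fib(int max)
    {
        int a = 0, b = 1;
        yield return 1;
        for (int i = 0; i < max - 1; i++)
        {
            int c = a + b;
            yield return c;
            a = b; b = c;
        }
    }

    static void Main()
    {
        IEnumerator<int> e1 = Fib(10).GetEnumerator();
        IEnumerator<int> e2 = Fib(10).GetEnumerator();
        e1.MoveNext(); e1.MoveNext(); e1.MoveNext();  // advance only the first
        e2.MoveNext();
        Console.WriteLine(e1.Current);  // 2 (third value of the sequence)
        Console.WriteLine(e2.Current);  // 1 (the second enumerator is untouched)
    }
}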
The second guess was: well, as with anonymous delegates, the current state is captured by an anonymous class generated by the compiler. The compiler generates it by inserting one field for every local variable of the function, and then instantiates an object of that class for every call site. This explains the Fibonacci function, but what about this one?

static IEnumerable<string> gen2()
{
    yield return "A";

    string s = "A" + "B";
    yield return s;

    s = 1.ToString();
    yield return s;

    for (int i = 0; i < 5; ++i)
        yield return i.ToString();
}
Clearly, execution of the function body must stop after a yield statement, as after a return statement, but it must also be resumed at the same place, not at the beginning of the function. So we also have to save the point at which execution stopped, and resume it (with a jump right at the beginning of the function) at that very point. Using the excellent Reflector tool by Lutz Roeder you can see in pseudo-code (reverse engineered C#) how this class is generated and how the function is wrapped inside MoveNext, with a big switch right at the beginning that allows resuming execution at different points based on a state variable:
private sealed class <gen>d__7 : IEnumerable<string>, IEnumerable,
    IEnumerator<string>, IEnumerator, IDisposable
{
    // Methods
    [DebuggerHidden] public <gen>d__7(int <>1__state);

    private bool MoveNext()
    {
        switch (this.<>1__state)
        {
            case 0:
                this.<>1__state = -1;
                this.<>2__current = "A";
                this.<>1__state = 1;
                return true;
            case 1:
                this.<>1__state = -1;
                this.<s>5__8 = "AB";
                this.<>2__current = this.<s>5__8;
                this.<>1__state = 2;
                return true;
            case 2:
                this.<>1__state = -1;
                int num2 = 1;
                this.<s>5__8 = num2.ToString();
                this.<>2__current = this.<s>5__8;
                this.<>1__state = 3;
                return true;
            case 3:
                this.<>1__state = -1;
                this.<i>5__9 = 0;
                while (this.<i>5__9 < 5)
                {
                    this.<>2__current = this.<i>5__9.ToString();
                    this.<>1__state = 4;
                    return true;
                Label_00DE:
                    this.<>1__state = -1;
                    this.<i>5__9++;
                }
                break;
            case 4:
                goto Label_00DE;
        }
        return false;
    }

    [DebuggerHidden] IEnumerator<string> IEnumerable<string>.GetEnumerator();
    [DebuggerHidden] IEnumerator IEnumerable.GetEnumerator();
    [DebuggerHidden] void IEnumerator.Reset();
    void IDisposable.Dispose();

    // Properties
    string IEnumerator<string>.Current { [DebuggerHidden] get; }
    object IEnumerator.Current { [DebuggerHidden] get; }

    // Fields
    private int <>1__state;
    private string <>2__current;
    public int <i>5__9;
    public string <s>5__8;
}
All this reasoning happened yesterday with a guy who works on dynamic languages (I don't want to say who right now... I only want to say "cross your fingers for me!" I may have great news for you in the future). This guy correctly pointed out an issue with this approach: what if the yield statement is inside a try block? In CIL, you can't jump into the middle of a try block: you can only jump to its first instruction. I was puzzled at first: what can we do? The simplest solution was not to permit the mixing of try/catch and yield. But surely this is limiting (Python allows yield to be used almost everywhere, the only exception being that yield is not allowed in a try-finally). The guy then gave me a hint: obviously, once you are inside the block, you can jump wherever you like inside that very block. So the execution can be resumed using a sort of multiple dispatch. It was all very clear at once: you have to make a first jump into the right try block, then a second one to the correct location inside that block. In a function like this one
def f():
    try:
        yield 1
        try:
            yield 2
            1//0
            yield 3  # never get here
        except ZeroDivisionError:
            yield 4
            yield 5
            raise
        except:
            yield 6
        yield 7     # the "raise" above stops this
    except:
        yield 8
    yield 9
    try:
        x = 12
    finally:
        yield 10
    yield 11

The compiler would translate this code into something like:

 L_0001: ldfld int32 YieldTest.Program/<gen>d__7::<>1__state
 L_0006: stloc.1 
 L_0007: ldloc.1 
 L_0008: switch (L_0031, L_0023, L_0025, L_0027, L_002c)
 L_0021: br.s L_0033
 L_0023: br.s L_0059
 L_0025: br.s L_0085
 L_0027: br L_00b2
 L_0033: switch (L_0034, L_0037)
 L_0034: ldc.i4 2 
 L_0035: stfld int32 YieldTest.Program/<gen>d__7::<>2__current
 L_0036: br L_0105

The first switch (at position L_0008) selects the try block, the second one (position L_0033) selects the yield statement inside that block. I tried to translate this Python function into C#, to see what the IL code generated by the C# compiler looks like, and to my great surprise the following code

static IEnumerable<int> f()
{
    try
    {
        yield return 1;
        try
        {
            yield return 2;
            int k = 1 / 0;
            yield return 3; //never get here
        }
        catch (System.DivideByZeroException)
        {
            yield return 4;
            yield return 5;
            throw;
        }
        catch
        {
            yield return 6;
        }
        yield return 7; //the "raise" above stops this
    }
    catch
    {
        yield return 8;
    }
    yield return 9;
    try
    {
        int x = 12;
    }
    finally
    {
        yield return 10;
    }
    yield return 11;
}

did not compile at all! The compiler spat out the following errors:

error CS1626: Cannot yield a value in the body of a try block with a catch clause
error CS1631: Cannot yield a value in the body of a catch clause
error CS1625: Cannot yield in the body of a finally clause

Apparently, the C# team sidestepped the problem by not allowing the same behaviour as in Python.
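For completeness, what C# does allow is a yield return inside a try block that has only a finally clause, so the usual cleanup pattern still works:

static IEnumerable<int> safe()
{
    try
    {
        yield return 1;   // legal: the enclosing try has no catch clause
        yield return 2;
    }
    finally
    {
        // runs when iteration completes or the enumerator is disposed
        Console.WriteLine("cleanup");
    }
}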

# Thursday, February 02, 2006

The yield statement

C# 2.0 introduced a new keyword, yield. I hadn't paid much attention to this new keyword, assuming that anonymous delegates and generics were more interesting and that yield was only a way to wrap the "iterator pattern". I was wrong (but I have the excuse that the name given to this feature in C# is 'enumerators', and I now think that this name is a bit reductive and misleading). Yield exposes a feature known to the Python community as generators.

They are a bit like continuations: they return (or better, yield) a value, and when they are called again they resume execution right after the last yield.

In C#, yield must be used inside an iterator block. An iterator block is more or less a function whose return type must be IEnumerable, IEnumerator, IEnumerable<T>, or IEnumerator<T> (see MSDN).

using System;
using System.Collections.Generic;
using System.Text;

namespace YieldTest
{
    class Program
    {
        static void Main(string[] args)
        {
            foreach (string s in gen())
            {
            }
        }

        static IEnumerable<string> gen()
        {
            Console.WriteLine("In the generator function");

            Console.WriteLine("Give A");
            yield return "A";

            Console.WriteLine("Give B");
            yield return "B";
        }
    }
}

It is easy to see why C# enumerators are very useful when implementing iteration over a sequence: as Raymond Chen points out in one of his blog posts, there are two "models" for the enumerator-consumer pattern:

"The COM model for enumeration (enumeration objects) is biased towards making life easy for the consumer and hard for the producer. The enumeration object (producer) needs to be structured as a state machine, which can be quite onerous for complicated enumerators, for example, tree walking or composite enumeration."

"On the other hand, the callback model for producer (used by most Win32 functions) is biased towards making life easy for the enumerator and hard for the consumer. This time, it is the consumer that needs to be structured as a state machine, which is more work if the consumer is doing something complicated with each callback. (And even if not, you have to create a context structure to pass state from the caller, through the enumerator, to the callback.) "

In the first model, the caller calls Next() repeatedly and the enumerator has to find the next item and return it. Since the enumerator returns after each item, it has to record state information, with a stack data structure mimicking the call stack.

In the callback model, on the other hand, the producer performs the operations it needs on the data structure (walks a tree, for example) and calls the consumer back through the callback as it finds items. This makes the producer implementation straightforward (in the case of the tree, a simple recursive function will do) but life is harder for the consumer: it needs to maintain state across each callback.

It would be great to have the simpler approach on both sides, with the caller seeing a simple enumerator that returns items in order and the producer seeing a callback that it can throw items into. Raymond's solution is a great piece of software based on fibers (user-scheduled threads), but as he points out, fibers are hard to use, and it is very easy to make subtle errors that are difficult to debug.

The C# solution lies in the yield statement and in generators.
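To see why, here is a sketch of the tree-walking producer rewritten with yield; Node is a hypothetical minimal binary tree, not Raymond's actual code. The producer keeps its natural recursive shape, the consumer is a plain loop, and the compiler-generated state machine plays the role of the fibers:

using System;
using System.Collections.Generic;

class Node
{
    public int Value;
    public Node Left, Right;
}

static class TreeWalk
{
    // In-order traversal: recursion stays straightforward for the producer...
    public static IEnumerable<int> InOrder(Node node)
    {
        if (node == null) yield break;
        foreach (int v in InOrder(node.Left))
            yield return v;
        yield return node.Value;
        foreach (int v in InOrder(node.Right))
            yield return v;
    }

    // ...and the consumer is just a loop, with no hand-written state machine.
    static void Main()
    {
        Node root = new Node();
        root.Value = 2;
        root.Left = new Node(); root.Left.Value = 1;
        root.Right = new Node(); root.Right.Value = 3;
        foreach (int v in InOrder(root))
            Console.WriteLine(v);   // prints 1, 2, 3
    }
}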

# Friday, January 20, 2006

More on filesystems

Last time I posted an opinion on one of Jeff Atwood's posts. This time I'd like to elaborate a little more on the good points Jeff made in his article about filesystems not being a feature but a mere implementation detail.

The post goes on with several proposals, all aimed at reaching the definitive solution: getting rid of the hierarchical file system. This was done by Jef Raskin with his experimental LEAP interface. A less radical solution is then discussed: do everything you can to hide the file system from the user, using no filenames, no prompts on saves, and automatic versioning. A sort of milestone-based mechanism. While this is great for multiple saves and versioning of the same document, and for recovering from the too-frequent overwrite errors, I don't see "restore points" as a substitute for filesystems, but as a complementary feature (although it could really help a lot with naming revised documents; how many times have you seen a file named "thesis14MarTry2.doc"?) (NB: this technology already exists in Windows (2003 and Vista): based on VSS, Shadow Copy preserves multiple copies of the same file transparently, as reported here through AdiOltean).

I don't much like either of the solutions; however, the problem is real and must somehow be addressed. Hierarchical file systems, partitions, and hard drives are not only implementation details we (developers) must hide from inexperienced users. They are a pain also for people who work on a computer all day. How many times in the last month did you think "where the heck is that file named something like that?" or "where did I install that application, a year ago? I can find neither a link nor the executable!"

One of the comments posted to the original blog post, and Jeff's response, reflect my own opinion on the subject. The commenter objects that the proposed solution, using the contents of the file instead of its name, isn't really adequate: it may make it easier to search for files and simpler to save them, but it makes browsing through the file system quite difficult. Jeff's response was: "When was the last time you "browsed" through the Internet, eg, you used a YAHOO or DMOZ style directory?"
Searching technologies are the future of file systems. In many cases they are already the present, and they are continuously growing. What they still lack is integration. A lot of integration. Windows desktop search engines are the worst in this sense. Both Google Desktop Search and Windows Desktop Search are completely separate applications, integrated only poorly into the OS. In Windows Desktop Search, you can't even customize your view by adding and removing columns, or reordering them.

Among current OSes, OS X has the edge (with its Spotlight). You can even save search folders, organizing your items in a way that is independent of the file system's real organization. The find-as-you-type behavior and the subdivision of results into different categories make Spotlight a very well done system.

Windows Vista does the same things, sometimes better (especially for virtual folders, see picture). The search is pervasive: in every folder and in many other places, even in open and save dialogs.

Open dialog: pressing the keywords on the left, you can make queries.

Searching a folder in Explorer

Meta (virtual) folders, stacked, in Explorer.

However, this point is still a little weak in my opinion. I don't have a Vista beta, but from what I have seen and read, the search function could be integrated better. Take the open and save dialogs, the search for applications in the Start menu (which is different! Why??) and, for example, the create shortcut wizard and the Run.. dialog. Then look at these screenshots of SkyOS, or even better, go and watch the Search movie from here. I think it is impressive how search functionality is spread throughout the system, using a consistent and standard interface. I'd really love to open a dialog, type a few keywords, and find my file. Or to do the same thing in Explorer, or on the desktop...

Searching for a file in the storage manager...

... an application from the Run dialog (note the incremental, not excremental, search function)...

...and from the Open common dialog.

The Explorer view and the common dialog view are also pretty similar (even if not identical, as I was hoping), making life easier for end users. And the search pane, with its subdivision into categories, is consistent across the Run dialog, create shortcut, locate icon, open with...
# Thursday, January 19, 2006

Main user interface problem: consistency!

I recently read a very interesting post by Jeff Atwood: Filesystems Aren't a Feature. He starts by pointing out an observation made by a developer watching his relatives use the PC:
When I observe how my wife and son uses the family computer, I can't help noticing how little use they have for the desktop. They look bewildered when I open the Windows Explorer. To them, file open or file save dialog is where the files go. My Documents? It's just an icon they never touch.
I don't even know why the open dialog (and the save dialog) and the file manager (Explorer) should be different. They have the same function: locating a file. In his post Jeff considers alternatives to the direct exposure of the file system to the user. I agree that the file system's hierarchical structure, with files, folders, and different partitions, is confusing, but we should wonder in the first place why the open/save dialogs ("where the files go") are different from My Computer. If they were identical, sharing the same interface, would it be so confusing? I don't think so..

# Tuesday, January 10, 2006

Ah-ah! [1]

From Sam Gentile:
“Reported by CNET, of all the CERT security vulnerabilities of the year 2005, 218 belonged to the Windows OS. But get this - there were 2,328 CERT security vulnerabilities for UNIX/Linux systems.”
That's great news, but it only confirms that Windows is now an OS that takes security really seriously.
Why are even clever people, like Paul Graham, sometimes so biased against Windows and Microsoft?

On openMosix

The first clustering architecture I am going to speak about is openMosix. openMosix is a Linux-only solution, for reasons that will become clear, but the concepts are applicable to every OS with a virtual memory architecture. I think that a port of these ideas to the Windows OSes could be very interesting, but enormously challenging (at least for developers who cannot access the sources) and maybe not worth the effort: other architectures, which require a shift in the concurrent/distributed programming paradigm, may bring more benefits in the end.

Anyway, openMosix is unique for its (almost) complete transparency: processes can be migrated to other nodes, and distributed computing can happen, without any intervention on the user or programmer side. openMosix turns a cluster into a big multi-processor machine.

The openMosix architecture consists of two parts:
  • a Preemptive Process Migration (PPM) mechanism and
  • a set of algorithms for adaptive resource sharing. 
Both parts are implemented at the kernel level, thus they are completely transparent to the application level.
The PPM can migrate any process, at any time, to any available node. Usually, migrations are based on information provided by one of the resource sharing algorithms.
Each process has a home node, the machine where it was created. Every process seems to run at its home node, and all the processes of a user's session share the execution environment of the home node. Processes that migrate to other nodes use the new node's resources (memory, files, etc.) whenever possible, but interact with the user's environment through the home node.
Until recently, the granularity of work distribution in openMosix was the process. Users were able to run parallel applications by starting multiple processes in one node; the system then distributed these processes to the best available nodes at that time, and the load-balancing algorithm running on each node decided when to relocate resources due to changes in node load. Thus, openMosix has no central control or master/slave relationship between nodes.

This model makes openMosix not so different from MPI Beowulf clusters. Fortunately, recent work brought openMosix granularity down to the thread level, enabling "migration of shared memory", i.e. the migration of pages of the process address space to other nodes. This feature makes it possible to migrate multi-threaded applications.

Processes and threads in Linux
(Figures from the MigShm technical report and presentation: The MAASK team (Maya, Asmita, Anuradha, Snehal, Krushna) designed and implemented the migration of shared memory on openMosix)

For process migration, openMosix creates a new memory descriptor on the remote node. This is fine for normal processes, but could cause problems for threads. Because a thread shares almost all of its memory pages with its parent (all but the thread stack and TLS), when threads of the same parent process are migrated, they need to share a common memory descriptor: if they had different descriptors, these threads could point to the wrong segments.
When a thread is migrated, openMosix migrates only the user-mode stack of that particular thread. The heap is migrated "on demand", paying attention to the case in which the same node is already executing threads of the same process, to ensure consistency.

openMosix + MigShm control flow
Other features of the project are the redefinition of the shared-memory primitives (shalloc() etc.) and of the Linux thread primitives, a transparent Eager Release consistency policy, and an algorithm for adaptive resource sharing based on the frequency of shared memory usage and the load across the cluster, so that threads are migrated in a way that decreases remote accesses to the shared memory.

Processes, Threads and Memory space

This piece of software is a very interesting and impressive technical feat; however, the question is: is it really worth the effort? Could it scale well? Making processes, and above all developers, think that they only have to add threads can be misleading. Multi-thread programming requires locking, explicit synchronization, and, to scale well, a thoughtful administration of running threads. Threads and semaphores are starting to become uncomfortable even for multi-threaded programming on a single machine.
My personal opinion is that the future is going in the other direction: distributed, multithreaded, and clustered computation will all have the same interface, with no shared memory. The problem is that memory is lagging behind.

Processes were created for having different units of execution on the same CPU. When they were introduced, multiple processes all ran in the same address space (directly in the physical address space, at that time).
Then, fortunately, came the advent of virtual memory and of private virtual address spaces. We had a balanced situation: every process thought it was the only one on the machine, with a whole address space for its own purposes. Communication with other processes was possible, mainly message based. At that time, IPC was substantially the same whether processes were on the same machine or on different machines: the main methods were sockets and named pipes.
The introduction of threads put the system out of balance again: every process had many threads of execution, sharing the same address space.

According to my classic Operating Systems textbook, a process is a program in execution
"with its current values of program counter, registers and variables; conceptually every process has its own virtual CPU" - A.S. Tanenbaum.
This is very close to the way modern OSes treat processes, running them in a context of execution virtually independent from the others.
Threads instead
"allow multiple executions in the environment of a process, largely independent of one another" - A.S. Tanenbaum.
However, this definition of threads is not so close to reality: threads are not so independent of each other, because they always share a primary resource (the common address space of the parent process).
openMosix "solves" this problem (making threads "independent" again) migrating trasparentely the required memory pages. 
But is it possible to restore the balance? What about changing the affinity of memory from process to thread? Notice that I am not talking about reintroducing the concept of a virtual memory space for threads; modern OSes use the processor architecture to enforce and enable virtual memory for processes, at the overhead we all know, and furthermore you can't "box" address spaces one inside the other. What I am thinking about is a "light" thread that encapsulates its code, its state (the stack) AND its data. If another thread wants those data, it must ask for them, and the thread that owns the data must be willing to share them. Like in the IPC case back in the '80s, but without the burden of a context switch unless necessary (i.e. when the thread and its data reside in another process or on another machine).

Application Domains

To complicate this design, .NET brought us Application Domains. Application Domains are designed to be "light-weight" processes, as Chris Brumme explains. But they are "unrelated" to threads.


In my opinion, we need light threads, let's call them wires, that live in the managed space (so they do not clobber the scheduler), have their own memory, and communicate through message-based primitives. Their use should be simpler than threads; a good starting point may be the join calculus, or C-omega, or any other language that supports asynchronous or active functions. Those functions should map directly to wires, and the runtime would map them to native "tasks" (processes, threads, or fibers) so that users can finally stop worrying about hacks to mitigate thread performance limitations (number of threads, thread pools, completion ports) and explicit synchronization (semaphores, mutexes, race conditions).
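A tiny sketch of how a wire's messaging primitive might look in today's C#, purely illustrative (Wire, Send and Receive are made-up names); note that the locking below is exactly the plumbing that the runtime, not the user, should own:

using System.Collections.Generic;
using System.Threading;

class Wire
{
    // Each wire owns its data; the mailbox is the only shared touch point.
    private readonly Queue<object> mailbox = new Queue<object>();

    // Another wire asks for data (or sends some) by posting a message.
    public void Send(object message)
    {
        lock (mailbox)
        {
            mailbox.Enqueue(message);
            Monitor.Pulse(mailbox);
        }
    }

    // The owning wire consumes its messages; it never exposes its memory.
    public object Receive()
    {
        lock (mailbox)
        {
            while (mailbox.Count == 0)
                Monitor.Wait(mailbox);
            return mailbox.Dequeue();
        }
    }
}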
Wires could also adapt very well to a distributed environment: since they carry with them their data, they can be "detached" from a computational node and "re-attached" to a different destination node.