# Tuesday, 10 January 2006

Ha-ha!

From Sam Gentile:
“Reported by CNET, of all the CERT security vulnerabilities of the year 2005, 218 belonged to the Windows OS. But get this - there were 2,328 CERT security vulnerabilities for UNIX/Linux systems.”
That's great news, but it only confirms that Windows is now an OS that takes security really seriously.
Why are even clever people, like Paul Graham, sometimes so biased against Windows and Microsoft?

On openMosix

The first clustering architecture I am going to talk about is openMosix. openMosix is a Linux-only solution, for reasons that will become clear, but the concepts are applicable to every OS with a virtual memory architecture. I think a port of these ideas to the Windows OSes could be very interesting, but enormously challenging (at least for developers who cannot access the sources), and maybe not worth it in the end: other architectures, which require a shift in the concurrent/distributed programming paradigm, can ultimately bring more benefits.

Anyway, openMosix is unique in its (almost) complete transparency: processes can be migrated to other nodes, and distributed computing can happen, without any intervention on the user's or programmer's side. openMosix turns a cluster into one big multi-processor machine.

The openMosix architecture consists of two parts:
  • a Preemptive Process Migration (PPM) mechanism and
  • a set of algorithms for adaptive resource sharing. 
Both parts are implemented at the kernel level, thus they are completely transparent to the application level.
The PPM can migrate any process, at any time, to any available node. Usually, migrations are based on information provided by one of the resource-sharing algorithms.
Each process has a home node, the machine where it was created. Every process appears to run at its home node, and all the processes of a user's session share the execution environment of the home node. Processes that migrate to other nodes use the new node's resources (memory, files, etc.) whenever possible, but interact with the user's environment through the home node.
Until recently, the granularity of work distribution in openMosix was the process. Users were able to run parallel applications by starting multiple processes in one node; the system then distributed these processes to the best available nodes at that time, and the load-balancing algorithm running on each node decided when to relocate resources in response to changes in node load. Thus, openMosix has no central control or master/slave relationship between nodes.

This model makes openMosix not so different from MPI Beowulf clusters. Fortunately, recent work has brought openMosix granularity down to the thread level, enabling "migration of shared memory", i.e. the migration of pages of the process address space to other nodes. This feature makes it possible to migrate multi-threaded applications.

Processes and threads in Linux
(Figures from the MigShm technical report and presentation: The MAASK team (Maya, Asmita, Anuradha, Snehal, Krushna) designed and implemented the migration of shared memory on openMosix)

For process migration, openMosix creates a new memory descriptor on the remote node. This is fine for normal processes, but can cause problems for threads. Because a thread shares almost all of its memory pages with its parent (all but the thread stack and TLS), threads of the same parent process need to share a common memory descriptor when they are migrated. If they have different descriptors, these threads could point to the wrong segments.
When a thread is migrated, openMosix migrates only the user-mode stack of that particular thread. The heap is migrated "on demand", with special care for the case in which the same node is already executing threads of the same process, to ensure consistency.

openMosix + MigShm control flow
Other features of the project are the redefinition of the shared-memory primitives (shalloc() etc.) and of the Linux thread primitives, a transparent Eager Release consistency policy, and an additional algorithm for adaptive resource sharing, based on the frequency of shared-memory usage and the load across the cluster, so that threads are migrated in a way that reduces remote accesses to shared memory.

Processes, Threads and Memory space

This piece of software is a very interesting and impressive technical feat; the question, however, is: is it really worth the effort? Can it scale well? Letting processes, and above all developers, think that they only have to add threads can be misleading. Multi-threaded programming requires locking, explicit synchronization and, to scale well, careful management of the running threads. Threads and semaphores are starting to become uncomfortable even for multi-threaded programming on a single machine.
My personal opinion is that the future is headed in the other direction. There will be no shared memory: distributed, multi-threaded and clustered computation will all have the same interface, with no shared memory. The problem is that memory is lagging behind.

Processes were created to have different units of execution on the same CPU. When they were introduced, we had multiple processes all running in the same address space (directly in the physical address space, at that time).
Then, fortunately, came the advent of virtual memory and of private virtual address spaces. We reached a balanced situation: every process thought it was the only one on the machine, with a whole address space for its own purposes. Communication with other processes was possible, mainly message based. At that time, IPC was substantially the same whether processes were on the same machine or on different machines: the main mechanisms were sockets and named pipes.
The introduction of threads put the system out of balance again: every process had many threads of execution, all sharing the same address space.

According to my historic operating systems textbook, a process is a program in execution
"with its current values of program counter, registers and variables; conceptually, every process has its own virtual CPU" - A. S. Tanenbaum.
This is very close to the way modern OSes treat processes, running each in an execution context virtually independent of the others.
Threads, instead,
"allow multiple executions in the environment of a process, largely independent of one another" - A. S. Tanenbaum.
However, this definition of threads is not so close to reality: threads are not so independent of one another, because they always share a primary resource (the common address space of the parent process).
openMosix "solves" this problem (making threads "independent" again) by transparently migrating the required memory pages.
But is it possible to restore the balance? What about changing the affinity of memory from the process to the thread? Notice that I am not talking about reintroducing the concept of a virtual memory space for threads: modern OSes use the processor architecture to enforce and enable virtual memory for processes, with the overhead we all know, and furthermore you can't "box" address spaces one inside the other. What I am thinking of is a "light" thread that encapsulates its code, its state (the stack) AND its data. If another thread wants that data, it must ask for it, and the thread that owns the data must be willing to share it. Like IPC back in the '80s, but without the burden of a context switch unless necessary (i.e. when the thread and its data reside in another process or on another machine).

Application Domains

To complicate this design, .NET brought us Application Domains. Application Domains are designed to be "light-weight" processes, as Chris Brumme explains. But they are "unrelated" to threads.


In my opinion, we need light threads, let's call them wires, that live in the managed space (so they don't clobber the scheduler), have their own memory, and communicate through message-based primitives. They should be simpler to use than threads; a good starting point may be the join calculus, or C-omega, or any other language that supports asynchronous or active functions. Those functions should map directly to wires, and the runtime would map wires to native "tasks" (processes, threads or fibers), so that users can finally stop worrying about hacks to mitigate thread performance limitations (number of threads, thread pools, completion ports) and about explicit synchronization (semaphores, mutexes, race conditions).
Wires would also adapt very well to a distributed environment: since they carry their data with them, they can be "detached" from a computational node and "re-attached" to a different destination node.
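To make the idea concrete, here is a minimal sketch of what a wire's communication primitives could look like in today's C#. The Wire class and its whole API are hypothetical (my names, not an existing library), and a real runtime would map wires onto pooled threads or fibers instead of a dedicated Thread:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical sketch of a "wire": a unit of execution that owns its data
// and is reachable only through its mailbox, never through shared memory.
public class Wire
{
    private readonly Queue<object> mailbox = new Queue<object>();
    private readonly Thread worker; // a real runtime could use a fiber or pool thread

    public Wire(Action<Wire> body)
    {
        worker = new Thread(delegate() { body(this); });
        worker.IsBackground = true;
        worker.Start();
    }

    // The only way to reach a wire's data: send it a message.
    public void Post(object message)
    {
        lock (mailbox)
        {
            mailbox.Enqueue(message);
            Monitor.Pulse(mailbox);
        }
    }

    // Called by the wire's own body to receive the next message.
    public object Receive()
    {
        lock (mailbox)
        {
            while (mailbox.Count == 0)
                Monitor.Wait(mailbox);
            return mailbox.Dequeue();
        }
    }
}
```

Since all of a wire's state travels with it and is touched only via Post/Receive, detaching it from one node and re-attaching it elsewhere becomes, at least conceptually, a serialization problem rather than a shared-memory one.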

# Friday, 06 January 2006

Context Attributes and Transparent Proxy

When I started to design synchronization contracts, I wanted to play a little with my ideas before trying to implement the whole mechanism directly into the compiler and runtime monitor. I started to wonder how contracts could be introduced in the .NET platform, and at which level.
The answer is similar to the one given for AOP on .NET (more on this in a future post!): you can act at the compiler level (static weaving, in AOP parlance), at the execution-engine level, or finally at the class-library level.

Focusing on the last two, how can you insert code to check and enforce contracts? According to the various studies on interception and AOP on the .NET platform(1), there are three ways to intercept calls to methods on an object and do some preprocessing and postprocessing:
  • Using Context Attributes and a Transparent Proxy;
  • Synthesizing a proxy that forwards calls to the original object;
  • Directly injecting MSIL code with the unmanaged .NET profiling APIs.
We will see how each of these methods works, and which is best for the current purpose. From now on, this post will focus on the first method: using Context Attributes and a Transparent Proxy.

(1) NOTE: as Ted Neward points out in his Interception != AOP post, the term AOP is used incorrectly in many articles. I share his ideas on the subject, but for the purposes of this discussion the term interception will suffice.

Context Attributes and a Transparent Proxy

To cook a solution based on this technology, we need three ingredients:
  • Custom Attributes, to mark methods with a formula;
  • Reflection (based on .NET metadata), to explore fields and methods;
  • .NET Interceptors, to hijack execution and check the validity of a formula upon method invocation.
The BCL provides a complete set of managed classes (the Reflection API) that can be used to emit metadata and MSIL instructions in a rather simple way from a managed application. In the managed world, the structure of libraries and classes is available through metadata; this reflection mechanism makes it possible to write applications that programmatically read the structure of existing types in an easy way, and also add code and fields to those types.

For the next ingredient, the .NET Framework also provides a mechanism called attributes. According to the MSDN developer guide,
Attributes are keyword-like descriptive declarations to annotate programming elements such as types, fields, methods, and properties. Attributes are saved with the metadata of a Microsoft .NET Framework file and can be used to describe your code to the runtime or to affect application behavior at run time. While the .NET Framework supplies many useful attributes, you can also design and deploy your own.
Attributes you design on your own are called custom attributes. Custom attributes are essentially traditional classes that derive directly or indirectly from System.Attribute. Just like traditional classes, custom attributes contain methods that store and retrieve data; the arguments of an attribute must match a constructor or a set of public fields of the class implementing the custom attribute.
Properties of the custom attribute class (which elements it is intended for, whether it is inherited, and so on) are specified through a class-wide attribute: AttributeUsageAttribute.
Here is an example, applied to our problem: an attribute that attaches a formula to a method:
```csharp
[AttributeUsage(AttributeTargets.Constructor |
                AttributeTargets.Method | AttributeTargets.Property,
                Inherited = true,
                AllowMultiple = true)]
public class GuardAttribute : Attribute
{
    public string Formula;

    public GuardAttribute()
    {
        Console.WriteLine("Eval Formula: " + Formula);
    }

    public GuardAttribute(string ltlFormula)
    {
        Formula = ltlFormula;
        Console.WriteLine("Eval Formula: " + Formula);
    }
}
```
And here is an example of applying our new custom attribute:
```csharp
public class Test
{
    bool g;
    bool f = true;

    [Guard("H (g or f)")] // using the constructor
    public string m(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }

    [Guard(Formula = "H (g or f)")] // using the public field
    public string m2(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }
}
```

Attributes, like other metadata elements, can be accessed programmatically. Here is an example of a class that, given a type, scans its members to see if any is marked with our GuardAttribute:

```csharp
public class AttributeConsumer
{
    Type type;

    public AttributeConsumer(Type type)
    {
        this.type = type;
    }

    public void findAttributes()
    {
        Type attType = typeof(GuardAttribute);

        foreach (MethodInfo m in type.GetMethods())
        {
            if (m.IsDefined(attType, true))
            {
                object[] atts = m.GetCustomAttributes(attType, true);
                GuardAttribute att = (GuardAttribute)atts[0];
                // ...here we would parse att.Formula, walk the members it
                // mentions, and inject the extra state fields (see below)
            }
        }
    }

    public void walkMembers(string s)
    {
        BindingFlags bf = BindingFlags.Static
            | BindingFlags.Instance
            | BindingFlags.Public
            | BindingFlags.NonPublic
            | BindingFlags.FlattenHierarchy;

        Console.WriteLine("Members for {0}", s);
        MemberInfo[] members = type.GetMember(s, bf);
        for (int i = 0; i < members.Length; ++i)
            Console.WriteLine("{0} {1}", members[i].MemberType, members[i].Name);
    }

    void injectMetadata()
    {
        // inject additional metadata for the formulas
        // generate code for updating the formulas
        // inject it
    }
}
```
If such an attribute is found, its formula is scanned, and appropriate fields to hold the previous status of its sub-formulae (needed to record temporal behaviour) are injected into the actual type.
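To make the injection step a little more concrete, here is a hedged sketch of how the extra fields could be added with Reflection.Emit. Note that Reflection.Emit cannot add fields to an existing type in place, so this illustrative version (all names here are mine, not from a real implementation) derives a new type carrying one state field per sub-formula:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

// Illustrative sketch: build a subtype of the target type that carries an
// extra bool field for each sub-formula whose previous value must be kept.
public class MetadataInjector
{
    public static Type InjectStateFields(Type target, string[] subFormulae)
    {
        AssemblyName name = new AssemblyName("GuardState");
        AssemblyBuilder ab = AppDomain.CurrentDomain.DefineDynamicAssembly(
            name, AssemblyBuilderAccess.Run);
        ModuleBuilder mb = ab.DefineDynamicModule("GuardStateModule");

        TypeBuilder tb = mb.DefineType(target.Name + "WithState",
            TypeAttributes.Public, target);

        // one field per sub-formula, holding its value at the previous step
        foreach (string f in subFormulae)
            tb.DefineField("prev_" + f, typeof(bool), FieldAttributes.Public);

        return tb.CreateType();
    }
}
```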

But what about the code? We need to be notified when a method we are interested in (a method with an attached formula) is called. Here comes the third ingredient: .NET interceptors.

.NET interceptors are associated with the target component via metadata; the CLR uses the metadata to compose a set of objects (called message sinks) into a stack, and these get notified of every method call. The composition usually happens when an object instance of the target component is created. When the client calls a method on the target object, the call is intercepted by the framework, the message sinks get the chance to process the call and perform their service, and finally the object's method is called.
On the return from the call, each sink in the chain is invoked again, giving it the chance to post-process the call. This set of message sinks works together to provide a context for the component's method to execute.

Thanks to attributes, the metadata of the target component can be enriched with all the information necessary to bind the message sinks we want to the component itself. However, custom attributes alone are often not sufficient: if you need access to the call stack before and after each method to read the environment (such as the parameters of a method call), you need an interceptor and a context in which the object lives: .NET interceptors can act only if we provide a context for the component.
Let's see how objects can live in a context, and how contexts and interceptors work together. In the .NET Framework, an application domain is a logical boundary that the common language runtime (CLR) creates within a single process. Components loaded by the .NET runtime are isolated from each other: they run independently of one another and cannot directly impact one another; they don't directly share memory and can communicate only using .NET remoting (although this service is provided transparently by the framework). Components living in separate appdomains have separate contexts. For objects living in the same application domain, a context is provided for any class that derives from System.ContextBoundObject; when we create an instance of a subclass of ContextBoundObject, the .NET runtime automatically creates a separate context for the newly created object.
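As a minimal sketch, making a class context-bound is just a matter of inheritance (the class and method names here are mine, purely for illustration):

```csharp
using System;

// Deriving from ContextBoundObject is all it takes to let the runtime manage
// the object's context; a context attribute such as the Intercept attribute
// shown later in this post would then force a new context and hook our
// message sinks into every call.
public class Target : ContextBoundObject
{
    public int Add(int a, int b)
    {
        return a + b; // calls from other contexts arrive through the transparent proxy
    }
}

public class Demo
{
    public static void Main()
    {
        Target t = new Target();
        Console.WriteLine(t.Add(1, 2)); // prints 3
    }
}
```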


This diagram shows the flow of a call between a client class and an object in a different context or application domain.
In such a situation, the .NET framework performs the following steps:
  1. A transparent proxy is created. This proxy contains an interface identical to the recipient, so that the caller is kept in the dark about the ultimate location of the callee.
  2. The transparent proxy calls the real proxy, whose job it is to marshal the parameters of the method across the application domain. Before the target object receives the call there are zero or more message sink classes that get called. The first message sink pre-processes the message, sends it along to the next message sink in the stack of message sinks between client and object, and then post-processes the message. The next message sink does the same, and so on until the last sink is reached. Then the control is passed to the stack builder sink.
  3. The last sink in the chain is the stack builder sink. This sink takes the parameters and places them onto the stack before invoking the method in the receiving object.
  4. By doing this, the recipient remains as oblivious to the mechanism used to make the call as the initiator is.
  5. After calling the object, the stack builder sink serializes the outbound parameters and the return value, and returns to the previous message sink.
So, the object implementing our pre- and post-processing logic has to participate in this chain of message sinks.

For a class implementing a sink to be hooked to our target object, we first need to update our attribute to work with context-bound objects. This is done by deriving it from ContextAttribute instead of Attribute, and implementing a method that returns a context property for the attribute:
```csharp
[AttributeUsage(AttributeTargets.Class, Inherited = true)]
public class InterceptAttribute : ContextAttribute
{
    public InterceptAttribute() : base("Intercept")
    {
    }

    public override void Freeze(Context newContext)
    {
    }

    public override void GetPropertiesForNewContext(IConstructionCallMessage ctorMsg)
    {
        ctorMsg.ContextProperties.Add(new InterceptProperty());
    }

    public override bool IsContextOK(Context ctx, IConstructionCallMessage ctorMsg)
    {
        InterceptProperty p = ctx.GetProperty("Intercept") as InterceptProperty;
        return p != null;
    }

    public override bool IsNewContextOK(Context newCtx)
    {
        InterceptProperty p = newCtx.GetProperty("Intercept") as InterceptProperty;
        return p != null;
    }
}

[AttributeUsage(AttributeTargets.Constructor |
                AttributeTargets.Method | AttributeTargets.Property,
                Inherited = true,
                AllowMultiple = true)]
public class GuardAttribute : Attribute
{
    private string Formula;

    public GuardAttribute() { }

    public GuardAttribute(string ltlFormula)
    {
        Formula = ltlFormula;

        AttributeConsumer ac = new AttributeConsumer();

        // parse the formula...
        LTLcomp parser = new LTLcomp(ac);
    }

    public void Process()
    {
        // evaluate the formula
    }

    public void PostProcess()
    {
        // update the formula
    }
}
```
At object creation time, GetPropertiesForNewContext is called for each context attribute associated with the object.
This allows us to add our own context property to the list of properties of the context bound to our target object; the property, in turn, allows us to add a message sink to the chain of message sinks.
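A sketch of what such a property and its sink could look like, using the standard System.Runtime.Remoting interfaces (IContextProperty, IContributeObjectSink, IMessageSink); the pre- and post-processing steps are reduced to placeholder comments:

```csharp
using System;
using System.Runtime.Remoting.Contexts;
using System.Runtime.Remoting.Messaging;

// The property attached by InterceptAttribute.GetPropertiesForNewContext.
// When the target object is created, GetObjectSink chains our sink in front
// of the next sink in the stack.
public class InterceptProperty : IContextProperty, IContributeObjectSink
{
    public string Name { get { return "Intercept"; } }
    public void Freeze(Context newContext) { }
    public bool IsNewContextOK(Context newCtx) { return true; }

    public IMessageSink GetObjectSink(MarshalByRefObject obj, IMessageSink nextSink)
    {
        return new InterceptSink(nextSink);
    }
}

public class InterceptSink : IMessageSink
{
    private readonly IMessageSink nextSink;

    public InterceptSink(IMessageSink nextSink) { this.nextSink = nextSink; }

    public IMessageSink NextSink { get { return nextSink; } }

    public IMessage SyncProcessMessage(IMessage msg)
    {
        // pre-process: here we would evaluate the guard formula
        IMessage ret = nextSink.SyncProcessMessage(msg);
        // post-process: here we would update the formula's state
        return ret;
    }

    public IMessageCtrl AsyncProcessMessage(IMessage msg, IMessageSink replySink)
    {
        return nextSink.AsyncProcessMessage(msg, replySink);
    }
}
```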

The interception mechanism is really powerful. However, for our purposes it is not enough. It has three major disadvantages:
  • Performance: the overhead of crossing the boundary of a context, or of an appdomain, isn't always acceptable (the cost of a function call grows roughly 20-fold, from some simple measurements I did). If you already need to do this (your component must live in another appdomain, in another process, or even on another machine) there is no problem and almost no extra overhead, since the framework already needs to establish a proxy and marshal all method calls;
  • We have to modify the class(es) in the target component: they must inherit from ContextBoundObject, and since .NET doesn't support multiple inheritance, this is a rather serious issue;
  • Only the parameters of the method call are accessible. The state of the target object, its fields and properties, is hidden. Since we need to inspect an object's state to find out whether it is accessible from a client, this issue makes it very difficult to use the interception mechanism for our purposes.
Next time we'll see the first completely working solution: synthesized proxies.
# Wednesday, 04 January 2006

What am I going to say?

As 2006 has arrived, it's time for a little plan for this blog. Something I want to share is my experience adding contracts (synchronization contracts, to be precise) to the .NET Framework. I made various attempts: at the BCL, CLR and compiler level. Each has advantages and disadvantages. Personally, I think the one acting at the CLR level is very interesting (it uses the Profiling API and the Unmanaged Metadata API to dynamically inject the verification and update code!). So, at least three posts on the three methods.

Next, I'd like to spend some more words on concurrency and parallel computing.

And since I'm starting a new project on bioinformatics at work (fold prediction), I should be able to write some posts on that subject too.

Finally, if they want me to keep fighting with Castor, porting damn Java web applications, I'll have to dump my frustration here.. =)

What am I reading right now?

  • What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. Eh, I have known since my first Fortran program that floats (the standard 32-bit precision you get with REALs in that language) give you rounding errors. And they accumulate pretty well! Very interesting reading, though.

  • Practical Common Lisp: my struggle to become good at this (for me, and for now) weird language. Very good up to this point.

  • Language Support for Lightweight Transactions, by Tim Harris and Keir Fraser. I am always interested in concurrency and distributed computing, so this is a must after January's End Bracket in MSDN Magazine, by Joe Duffy.

  • Speaking of parallel computing, I learned a lot at work about current standards for grid computing. Basically, we are building a computing cluster, and two different projects come into play: openMosix and MPI (the latter is also the protocol chosen for Windows Server 2003 CCE). The two take very different approaches, each with its own drawbacks and strengths. I want to study some more, especially on the openMosix front, and then write up here what I learned and my own ideas.

  • And, last but not least, Hackers and Painters, by Paul Graham. Very interesting and stimulating reading, made even more stimulating by the fact that I agree with many of his opinions but totally disagree with many others. I find it difficult to understand how an open-minded person could fall into the same traps he points out in other people. But maybe Paul wrote some of his pages only to please an audience.. he was trying to sell his book, after all. I want to discuss this topic more deeply in the future; it surely deserves a post.

# Tuesday, 03 January 2006

Happy new year!

Finally 2006 arrived!
Whew, it was a rather tiring December. I had a lot of "collateral" work: new servers, a web infrastructure to build, porting a web application to MySQL (and Castor still refuses to collaborate!). I spent a very good Christmas with my family, did some good reading (my heap of to-read papers/articles/books diminished a little) and went to Salzburg with my girlfriend.
On the informatics side, I chose my 2006 LOTY: Lisp! I will dedicate a post to my decision (Why Lisp?), but for the moment let me say that Practical Common Lisp is a very good book! I look forward to learning to use macros: in the second chapter Peter Seibel gives you a whole set of database bindings (with very easy-to-use select, update and delete functions) in only 50 lines of code! Amazing... It works only on property lists, but it resembles Linq a lot.
I am really thrilled to see whether Lisp turns out to be the language for building languages that it claims to be. I have always thought that the most annoying thing a programming language can do is NOT expose itself, i.e. not make the constructs of the language itself available to the programmer.
Take Java: I find it highly annoying that the language has cast operators for classes and primitive types, and operator overloading for classes in the Java class library (like + on String), yet does not let the programmer use them! Until now, C++ was my language of choice for this very reason: it lets you "adapt" the language, building a meta-language that fits your needs and your application domain.
Maybe Lisp macros will get a hold on me.. =)

# Thursday, 22 December 2005


My 2005 experience with languages was less interesting. I finished university and started working at a research center on bioinformatics, trying to decide what I should do next (work? a PhD? try to make a startup?).

I really liked bioinformatics as it was introduced to us back at the university, where it was all about concurrency, synchronization and programming languages (as I learned later, this branch is called Systems Biology).
Well, in the real world (TM), the language of choice of bioinformaticians is.. Perl.
And they (too often) use it NOT as it was meant to be used: as the 'duct tape' of programming languages, gluing together different pieces written by different people in different environments. For that task, Perl works like a charm. And it is very easy too... but Larry Wall's claim ("Perl makes easy things easy, and difficult things not impossible") stops at the first part. The second part is not wrong, but having seen some Perl hacks, it would be better phrased "and difficult things very messy".

And biologists use it because it is messy. Declare variables? Why? Not use goto? It is one of the instructions, so why can't I use it?

All in all, I am glad I learned Perl. It has some very interesting stuff (hash tables, file manipulation, a neat database interface that many should learn from) and, above all, it is the home of Perl regular expressions. It is unmatched in the field of text-file manipulation.
But it is surely better to remember that you don't build a house out of duct tape..

# Wednesday, 21 December 2005

LOTY: 2004

2004 was my C# year. I think I did a nice job.. Coming from C++ it was not so difficult; at first it was like using a "scripting C++", i.e. C++ as a two-level language. (A little digression: one of the things I like most about C++ is its ability to be used as a language for languages. Nobody should use C++'s low-level features to build a commercial application directly; instead, she should create a very well designed and optimized library, with all the quick-and-dirty tricks of the language, and then use this newly created environment as her production language. That's the power of C++!)

But C# is much more than a simplified version of C++. It has functional seeds in it that are growing with each version: first delegates, now full closures (anonymous delegates), iterators and yield, and in the future anonymous classes, extension methods, expression trees and lambda functions! I am very thrilled!
You may wonder why it took me a year (and why I think I did a nice job) to learn C#. Well, I was productive in an evening (it is a very simple language), but I wanted to go "under the hood". I think the best starting point is still Don Box's "Essential .NET". It is a .NET book, but it describes very well the interactions between the language and the compiler. Then I downloaded Rotor and took a peek inside (as you can see here.. boy, that was interesting!). And then I went for the C# variations: Spec#, Comega... and my own contract language! =)

2004 was a nice year. I like C#, I found it very productive (truth be told, mainly thanks to .NET), and the functional flavour it is taking on is good!

# Monday, 19 December 2005

Language of the Year

"Learn at least one new [programming] language every year. Different languages solve the same problems in different ways. By learning several different approaches, you can help broaden your thinking and avoid getting stuck in a rut."
   --- The Pragmatic Programmer

I embraced this philosophy some years ago, and I have found it very fruitful. Learning a new programming language is surely simpler than learning a new natural language: you can write simple programs in a night and be productive in a week (sure, it takes a lot more to "master" one, hence the "one language per year"). It has, however, the same advantages. Knowing many languages lets you speak directly with different people, easing your job; the same is true for programming languages: you can "read" many different codebases directly, which will definitely ease your work!

Another advantage is that you can use the right tool for the right job. This was not quite true in the past: if your program was in C and you had to write a little reasoner, it was hard to write it in another language (say, Lisp) and then integrate it into the main program: often the choice did not pay off (because of performance, reliability, the difficulty of interfacing the two worlds through strange mechanisms, sometimes even the necessity of writing such interfaces yourself).

If .NET succeeded in making a really good impression on me, its ability to seamlessly integrate different programming languages without imposing a "standard language" on you was surely an important reason. I found myself re-opening my CS books on functional programming and using ML and Caml again, to my great pleasure.

I think my two points can be summed up by this code snippet I found in an interesting blog post:

```haskell
sort           [] = []
sort (pivot:rest) = sort [y | y <- rest, y < pivot]
                    ++ [pivot] ++
                    sort [y | y <- rest, y >= pivot]
```

It's Haskell. If you don't know Haskell, or a similar programming language, you may have the wrong reaction: throw this code in the trash, and write a 20-line version of the algorithm in your "standard language". Or, if you have learned a different programming language once in your life, you can appreciate its beauty and simplicity.

Of course, Haskell is terrible for other things: but you can compile the code into an assembly and reuse it from another language. The right tool for the right job.

Last but not least: as the opening quotation says, learning a new language is food for your mind.

# Wednesday, 14 December 2005

Meaningful programming T-shirts

From Jeff Prosise

Comment my code?

Why do you think they call it code?

Perfectly true. It is the same point made in an interesting book I'm reading (Hackers and Painters, by Paul Graham - an interesting book, with some very good points and some very weak spots.. more on this in a later post, when I finish it).

The shirt is obviously intended to be ironic.. but there is truth in the statement. A programming language is the perfect way to express algorithms; if you have to go and comment it.. you are not writing good code! Comments should be placed only in the right and meaningful spots (bleargh to automatic comment generators - what were they thinking when they promoted the use of such tools?)

On a similar note, another T-shirt (or cup) I always wanted to have is

It compiles!

Let's ship it!

(Also the subtitle of my blog.) The message is funny, but it shouldn't be: I hate testing, and my dream for the future is a really automatic and clever tool that runs when you build your program. Love static analysis, and remember: an error the compiler can catch is one less bug in your product!