# Friday, 20 January 2006

More on filesystems

Last time I posted an opinion on one of Jeff Atwood's posts. This time I'd like to elaborate a little more on the good points Jeff made in his article on filesystems not being a feature but a mere implementation detail.

The post goes on with several proposals, all aimed at reaching the definitive solution: getting rid of the hierarchical file system. This was done by Jef Raskin with his experimental LEAP interface. A less radical solution is then discussed: do everything you can to hide the file system from the user, using no filenames, no prompts on save, and automatic versioning; a sort of milestone-based mechanism. While this is great for multiple saves and versioning of the same document, and for recovering from all-too-frequent overwrite errors, I don't see "restore points" as a substitute for filesystems, but as a complementary feature (although it can really help a lot in naming revised documents; how many times have you seen a file named "thesis14MarTry2.doc"?). (NB: this technology already exists in Windows (2003 and Vista): based on VSS, Shadow Copy transparently preserves multiple copies of the same file, as reported here by AdiOltean.)

I don't like either solution very much; however, the problem is real and must somehow be addressed. Hierarchical file systems, partitions, and hard drives are not only implementation details we (developers) must hide from inexperienced users: they are also a pain for people who work on a computer all day. How many times in the last month did you think "where the heck is that file named something like that?" or "where did I install that application a year ago? I can't find a link or the executable!"

One of the comments posted to the original blog post, and Jeff's response to it, reflect my own opinion on the subject. The commenter objects that the proposed solution, using the contents of the file instead of its name, isn't really adequate. He observes that it may make it easier to search for files and simpler to save them, but it makes browsing through the file system quite difficult. Jeff's response was: "When was the last time you "browsed" through the Internet, e.g., you used a YAHOO or DMOZ style directory?"
Search technologies are the future of file systems. In many cases they are already the present, and they are continuously improving. What they still lack is integration; a lot of integration. Windows desktop search engines are the worst in this sense: both Google Desktop Search and Windows Desktop Search are completely separate applications, only poorly integrated into the OS. In Windows Desktop Search you can't even customize your view by adding, removing or reordering columns.

Among current OSes, OS X has a point with its Spotlight. You can even save search folders, organizing your items in a way that is independent of the file system's real organization. The find-as-you-type behavior and the subdivision of results into categories make Spotlight a very well done system.



Windows Vista does the same things, sometimes better (especially virtual folders, see picture). Search is pervasive: in every folder and in many other places - even in the open and save dialogs.



The Open dialog: by pressing the keywords on the left, you can build queries.



Searching a folder in Explorer



Meta (virtual) folders, stacked, in Explorer.

However, this point is still a little weak in my opinion. I don't have a Vista beta, but from what I have seen and read, the search function could be better integrated. Take the open and save dialogs, the search for applications in the Start menu (which is different! Why??) and, for example, the create shortcut wizard and the Run dialog. Then look at these screenshots of SkyOS, or even better, go and watch the Search movie from here. I think it is impressive how search functionality is spread throughout the system, using a consistent and standard interface. I'd really love to open a dialog, type a few keywords and find my file; or do the same thing in Explorer, or on the desktop...




Searching for a file in the storage manager...



... an application from the Run dialog (note the incremental, not excremental, search function)...




...and from the Open common dialog.

The Explorer view and the common dialog view are also pretty similar (even if they are not identical, as I was hoping), making life easier for end users. And the search pane, with its subdivision into categories, is consistent across the Run dialog, create shortcut, locate icon, open with...
# Thursday, 19 January 2006

Main user interface problem: consistency!

I recently read a very interesting post on Jeff Atwood's blog: Filesystems Aren't a Feature. He starts by pointing out an observation made by a developer watching his relatives use the PC:
When I observe how my wife and son uses the family computer, I can't help noticing how little use they have for the desktop. They look bewildered when I open the Windows Explorer. To them, file open or file save dialog is where the files go. My Documents? It's just an icon they never touch.
I don't even know why the open dialog (and the save dialog) and the file manager (Explorer) should be different. They have the same function: locating a file. In his post Jeff considers alternatives to the direct exposure of the file system to the user. I agree that the file system's hierarchical structure - with files, folders, and different partitions - is confusing, but we should first wonder why the open/save dialogs ("where the files go") are different from My Computer. If they were identical, sharing the same interface, would it be so confusing? I don't think so.

# Tuesday, 10 January 2006

Ah-ah! [1]


From Sam Gentile:
“Reported by CNET, of all the CERT security vulnerabilities of the year 2005, 218 belonged to the Windows OS.  But get this - ther were 2,328 CERT security vulnerabilities for UNIX/Linux systems.”
That's great news, but it only confirms that Windows is now an OS that takes security really seriously.
Why are even clever people, like Paul Graham, sometimes so biased against Windows and Microsoft?


On openMosix


The first clustering architecture I am going to talk about is openMosix. openMosix is a Linux-only solution, for reasons that will become clear, but the concepts are applicable to every OS with a virtual memory architecture. I think that a port of these ideas to the Windows OSes could be very interesting, but enormously challenging (at least for developers who cannot access the sources) and maybe not worth the effort: other architectures, which require a shift in the concurrent/distributed programming paradigm, may bring more benefits in the end.

Anyway, openMosix is unique for its (almost) complete transparency: processes can be migrated to other nodes, and distributed computing can happen, without any intervention on the user's or programmer's side. openMosix turns a cluster into a big multi-processor machine.

The openMosix architecture consists of two parts:
  • a Preemptive Process Migration (PPM) mechanism and
  • a set of algorithms for adaptive resource sharing. 
Both parts are implemented at the kernel level, thus they are completely transparent to the application level.
The PPM can migrate any process, at anytime, to any available node. Usually, migrations are based on information provided by one of the resource sharing algorithms.
Each process has a home node, the machine where it was created. Every process seems to run at its home node, and all the processes of a user's session share the execution environment of the home node. Processes that migrate to other nodes use the new node's resources (memory, files, etc.) whenever possible, but interact with the user's environment through the home node.
Until recently, the granularity of work distribution in openMosix was the process. Users were able to run parallel applications by starting multiple processes on one node; the system then distributed these processes to the best available nodes at that time, and the load-balancing algorithm running on each node decided when to relocate them due to changes in node load. Thus, openMosix has no central control or master/slave relationship between nodes.

This model makes openMosix not so different from MPI-Beowulf clusters. Fortunately, recent work brought openMosix granularity down to the thread level, enabling "migration of shared memory", i.e. the migration of pages of the process address space to other nodes. This feature makes it possible to migrate multi-threaded applications.

Processes and threads in Linux
(Figures from the MigShm technical report and presentation: The MAASK team (Maya, Asmita, Anuradha, Snehal, Krushna) designed and implemented the migration of shared memory on openMosix)


For process migration, openMosix creates a new memory descriptor on the remote node. This is fine for normal processes, but it can cause problems for threads: because a thread shares almost all of its memory pages with its parent process (all but the thread stack and TLS), threads of the same parent process need to share a common memory descriptor when they are migrated. If they had different descriptors, these threads could end up pointing to the wrong segments.
When a thread is migrated, openMosix migrates only the user-mode stack of that particular thread. The heap is migrated "on demand"; to ensure consistency, special care is taken when the destination node is already executing threads of the same process.




openMosix + MigShm control flow
Other features of this work are the redefinition of the shared-memory primitives (shalloc() etc.) and of the Linux thread primitives, a transparent Eager Release consistency policy, and the addition of an algorithm for adaptive resource sharing based on the frequency of shared memory usage and on the load across the cluster, so that threads are migrated in a way that decreases remote accesses to shared memory.

Processes, Threads and Memory space

This piece of software is a very interesting and admirable technical quest; however, the question is: is it really worth the effort? Could it scale well? Making processes, and above all developers, think that they only have to add threads can be misleading. Multi-threaded programming requires locking, explicit synchronization and, to scale well, thoughtful management of running threads. Threads and semaphores are starting to become uncomfortable even for multi-threaded programming on a single machine.
My personal opinion is that the future is going in the other direction: there will be no shared memory, and distributed, multi-threaded or clustered computations will all share the same interface. The problem is that memory is lagging behind.

Processes were created to have different units of execution on the same CPU. When they were introduced, we had multiple processes all running in the same address space (directly in the physical address space, at that time).
Then, fortunately, came the advent of virtual memory and of private virtual address spaces. We had a balanced situation: every process believed it was the only one on the machine, with a whole address space for its own purposes. Communication with other processes was possible, mainly message based. At that time, IPC was substantially the same whether processes were on the same machine or on different machines: the main methods were sockets and named pipes.
The introduction of threads put the system out of balance again: every process had many threads of execution, all sharing the same address space.

According to my old Operating Systems textbook, a process is a program in execution
"with its current values of program counter, registers and variables; conceptually, every process has its own virtual CPU" - A. S. Tanenbaum.
This is very close to the way modern OSes treat processes, running them in a context of execution virtually independent from the others.
Threads, instead,
"allow multiple executions within the environment of a process, largely independent of one another" - A. S. Tanenbaum.
However, this definition of threads is not so close to reality: threads are not so independent of one another, because they always share a primary resource (the common address space of the parent process).
openMosix "solves" this problem (making threads "independent" again) migrating trasparentely the required memory pages. 
But is it possible to restore the balance again? What about changing the affinity of memory from process to thread? Notice that I am not talking about reintroducing the concept of a virtual memory space for threads: modern OSes use the processor architecture to enforce and enable virtual memory for processes, with the overhead we all know; furthermore, you can't "box" address spaces one inside the other. What I am thinking about is a "light" thread that encapsulates its code, its state (the stack) AND its data. If another thread wants those data, it must ask for them, and the thread that owns the data must be willing to share them. Like in the IPC case back in the '80s, but without the burden of a context switch unless necessary (i.e. when the thread and its data reside in another process or on another machine).

Application Domains

To complicate this design, .NET brought us Application Domains. Application Domains are designed to be "light-weight" processes, as Chris Brumme explains. But they are "unrelated" to threads.

Wires?

In my opinion, we need light threads, let's call them wires, that live in the managed space (so they don't clobber the scheduler) and have their own memory and their own message-based primitives for communication. They should be simpler to use than threads; a good starting point may be the join calculus, or C-omega, or any other language that supports asynchronous or active functions. Those functions should map directly to wires, and the runtime would map them to native "tasks" (processes, threads or fibers), so that users can finally stop worrying about hacks to mitigate thread performance limitations (number of threads, thread pools, completion ports) and about explicit synchronization (semaphores, mutexes, race conditions).
Wires could also adapt very well to a distributed environment: since they carry their data with them, they can be "detached" from one computational node and "re-attached" to a different destination node.
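Just to make the idea a little more concrete, here is a minimal sketch in C# of what I have in mind. The Wire class and its Ask/Set primitives are hypothetical names I made up for illustration; a real implementation would of course need runtime support to map wires onto native tasks and to migrate them.

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical "wire": it owns its data and exposes only message-based
// primitives. Other threads cannot touch the data directly; they must ask.
public class Wire
{
    private readonly Dictionary<string, string> data = new Dictionary<string, string>();
    private readonly Queue<KeyValuePair<string, Action<string>>> mailbox =
        new Queue<KeyValuePair<string, Action<string>>>();

    public Wire()
    {
        Thread worker = new Thread(Run);
        worker.IsBackground = true;
        worker.Start();
    }

    // The owner decides when (and whether) to answer: the asker gets the
    // value back through a callback, never through shared memory.
    public void Ask(string key, Action<string> reply)
    {
        lock (mailbox)
        {
            mailbox.Enqueue(new KeyValuePair<string, Action<string>>(key, reply));
            Monitor.Pulse(mailbox);
        }
    }

    public void Set(string key, string value)
    {
        lock (mailbox) { data[key] = value; }   // would also be a message in a full design
    }

    private void Run()
    {
        while (true)
        {
            KeyValuePair<string, Action<string>> request;
            lock (mailbox)
            {
                while (mailbox.Count == 0) Monitor.Wait(mailbox);
                request = mailbox.Dequeue();
            }
            string value;
            lock (mailbox) { data.TryGetValue(request.Key, out value); }
            request.Value(value ?? "<unknown>");   // reply to the asker
        }
    }
}

// Usage: the only way to read the wire's data is to send it a message.
// Wire w = new Wire();
// w.Set("answer", "42");
// w.Ask("answer", delegate(string v) { Console.WriteLine("got " + v); });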




# Friday, 06 January 2006

Context Attributes and Transparent Proxy

When I started to design synchronization contracts, I wanted to play a little with my ideas before trying to implement the whole mechanism directly into the compiler and runtime monitor. I started to wonder how contracts could be introduced in the .NET platform, and at which level.
The answer is similar to the one given for AOP on .NET (more on this in a future post!): you can act at the compiler level (static weaving in AOP parlance), at the execution engine level, and finally at the class library level.

Focusing on the last two, how can you insert code for checking and enforcing contracts? According to the various studies on interception and AOP on the .NET platform(1), there are three ways to intercept calls to methods on an object and do some preprocessing and postprocessing:
  • Using Context Attributes and a Transparent Proxy;
  • Synthesizing a proxy that forwards calls to the original object;
  • Injecting MSIL code directly with the unmanaged .NET profiling APIs.
We will see how each of these methods works, and which is better for the current purpose. In this post, from now on, we will focus on the first method: using Context Attributes and a Transparent Proxy.

(1) NOTE: as Ted Neward points out in his Interception != AOP post, the term AOP is used incorrectly in many articles. I share his ideas on the subject, but for the purposes of this discussion the term interception will suffice.

Context Attributes and a Transparent Proxy

To cook a solution based on this technology, we need three ingredients:
  • Custom Attributes to mark methods with a formula
  • Reflection (based on .NET metadata) to explore fields and methods
  • .NET Interceptors to hijack execution and check the validity of a formula upon method invocation.
The BCL provides a complete set of managed classes (the Reflection API) that can be used to emit metadata and MSIL instructions in a rather simple way from a managed application. In the managed world, the structure of libraries and classes is available through metadata; this reflection mechanism makes it possible to write applications that programmatically read the structure of existing types in an easy way, and that also add code and fields to those types.

For the next ingredient, the .NET Framework also provides a mechanism called Attributes. According to the MSDN developer's guide,
Attributes are keyword-like descriptive declarations to annotate programming elements such as types, fields, methods, and properties. Attributes are saved with the metadata of a Microsoft .NET Framework file and can be used to describe your code to the runtime or to affect application behavior at run time. While the .NET Framework supplies many useful attributes, you can also design and deploy your own.
Attributes you design on your own are called custom attributes. Custom attributes are essentially traditional classes that derive directly or indirectly from System.Attribute. Just like traditional classes, custom attributes contain methods that store and retrieve data; the arguments of the attribute must match a constructor or a set of public fields of the class implementing the custom attribute.
Properties of the custom attribute class (like the elements it can be applied to, whether it is inherited, and so on) are specified through a class-wide attribute: AttributeUsageAttribute.
Here is an example, applied to our problem: an attribute that attaches a formula to a method:
[AttributeUsage(AttributeTargets.Constructor |
                AttributeTargets.Method | AttributeTargets.Property,
                Inherited = true,
                AllowMultiple = true)]
public class GuardAttribute : Attribute
{
    public GuardAttribute()
    {
        Console.WriteLine("Eval Formula: " + Formula);
    }

    public GuardAttribute(string ltlFormula)
    {
        Formula = ltlFormula;
        Console.WriteLine("Eval Formula: " + Formula);
    }

    public string Formula;
}
And here is an example of application of our new custom attribute:
public class Test
{
    bool g;
    bool f = true;

    [Guard("H (g or f)")] // using the constructor
    public string m(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }

    [Guard(Formula = "H (g or f)")] // using the public field
    public string m2(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }
}

Attributes, like other metadata elements, can be accessed programmatically. Here is an example of a class that, given a type, scans its members to see if any are marked with our GuardAttribute:

public class AttributeConsumer
{
    Type type;

    public AttributeConsumer(Type type)
    {
        this.type = type;
    }

    public void findAttributes()
    {
        Type attType = typeof(GuardAttribute);

        foreach (MethodInfo m in type.GetMethods())
        {
            if (m.IsDefined(attType, true))
            {
                object[] atts = m.GetCustomAttributes(attType, true);
                GuardAttribute att = (GuardAttribute)atts[0];
                parseAttribute(att.Formula);
            }
        }
    }

    void parseAttribute(string formula)
    {
        // parse the formula and extract the names of the fields it mentions,
        // then call walkMembers for each of them
    }

    public void walkMembers(string s)
    {
        BindingFlags bf = BindingFlags.Static
            | BindingFlags.Instance
            | BindingFlags.Public
            | BindingFlags.NonPublic
            | BindingFlags.FlattenHierarchy;

        Console.WriteLine("Members for {0}", s);
        MemberInfo[] members = type.GetMember(s, bf);
        for (int i = 0; i < members.Length; ++i)
        {
            Console.WriteLine("{0} {1}",
                members[i].MemberType,
                members[i].Name);

            // inject additional metadata for the formulas

            // generate code for updating the formulas

            // inject it
        }
    }

    void injectMetadata()
    {
        // ...
    }
}
If such an attribute is found, its formula is scanned, and appropriate fields to hold the previous status of sub-formulae (needed for recording temporal behaviour) are injected into the actual type.
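Purely managed code cannot add fields to a type that is already loaded, so here is only a rough sketch of the idea (the FormulaFieldInjector class and the prev_ naming convention are mine, invented for illustration): it emits a derived type that carries one extra boolean field per sub-formula, using Reflection.Emit.

using System;
using System.Reflection;
using System.Reflection.Emit;

public static class FormulaFieldInjector
{
    // Sketch: build a derived type carrying one bool field per sub-formula,
    // used to remember the previous truth value of that sub-formula.
    // Assumes the target type is not sealed and has a parameterless constructor.
    public static Type InjectFormulaFields(Type target, string[] subFormulae)
    {
        AssemblyName name = new AssemblyName("GuardedTypes");
        AssemblyBuilder asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
            name, AssemblyBuilderAccess.Run);
        ModuleBuilder module = asm.DefineDynamicModule("GuardedTypes");
        TypeBuilder builder = module.DefineType(
            target.Name + "_Guarded", TypeAttributes.Public, target);

        foreach (string f in subFormulae)
        {
            // e.g. "prev_g" and "prev_f" for the formula "H (g or f)"
            builder.DefineField("prev_" + f, typeof(bool), FieldAttributes.Public);
        }
        return builder.CreateType();
    }
}

Injecting the fields into the original type itself requires acting at a lower level (the unmanaged metadata API I mentioned in a previous post); the derived-type trick above only mimics the effect from managed code.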

But what about the code? We need to be notified when a method we are interested in (a method with an attached formula) is called. Here comes the third ingredient: .NET interceptors.

.NET interceptors are associated with the target component via metadata; the CLR uses the metadata to compose a stack of objects (called message sinks) that get notified of every method call. The composition usually happens when an instance of the target component is created. When the client calls a method on the target object, the call is intercepted by the framework and the message sinks get the chance to process the call and perform their service; finally, the object's method is called.
On the return from the call, each sink in the chain is invoked again, giving it the chance to post-process the call. This set of message sinks works together to provide a context for the component's method to execute in.

Thanks to attributes, the metadata of the target component can be enriched with all the information necessary to bind the message sinks we want to the component itself. However, custom attributes alone are often not sufficient: if you need access to the call stack before and after each method to read the environment (like the parameters of a method call), you need an interceptor and a context in which the object lives: .NET interceptors can act only if we provide a context for the component.
Let's see how objects can live in a context, and how contexts and interceptors work together. In the .NET Framework, an application domain is a logical boundary that the common language runtime (CLR) creates within a single process. Components loaded by the .NET runtime are isolated from each other: they run independently of one another and cannot directly impact one another; they don't directly share memory, and they can communicate only using .NET remoting (although this service is provided transparently by the framework). Components living in separate appdomains have separate contexts. For objects living in the same application domain, a context is provided for any class that derives from System.ContextBoundObject: when we create an instance of a subclass of ContextBoundObject, the .NET runtime will automatically create a separate context for the newly created object.
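For example, to make the guarded Test class above interceptable, it would have to be declared along these lines (just a sketch; the Intercept attribute is the one defined later in this post):

// Sketch: the class whose methods carry Guard formulas must derive from
// ContextBoundObject, so that clients reach it only through a transparent proxy.
[Intercept]
public class Test : ContextBoundObject
{
    bool g;
    bool f = true;

    [Guard("H (g or f)")]
    public string m(int i, out int j)
    {
        j = i;
        return (i + 2).ToString();
    }
}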

figura1.png 

This diagram shows the flow of a call between a client class and an object in a different context or application domain.
In such a situation, the .NET framework performs the following steps:
  1. A transparent proxy is created. This proxy contains an interface identical to the recipient, so that the caller is kept in the dark about the ultimate location of the callee.
  2. The transparent proxy calls the real proxy, whose job it is to marshal the parameters of the method across the application domain. Before the target object receives the call there are zero or more message sink classes that get called. The first message sink pre-processes the message, sends it along to the next message sink in the stack of message sinks between client and object, and then post-processes the message. The next message sink does the same, and so on until the last sink is reached. Then the control is passed to the stack builder sink.
  3. The last sink in the chain is the stack builder sink. This sink takes the parameters and places them onto the stack before invoking the method in the receiving object.
  4. By doing this, the recipient remains as oblivious to the mechanism used to make the call as the initiator is.
  5. After calling the object, the stack builder sink serializes the outbound parameters and the return value, and returns to the previous message sink.
So, the object implementing our pre- and post-processing logic has to participate in this chain of message sinks.

For a class implementing a sink to be hooked to our target object, we first need to update our attribute to work with context-bound objects. This is done by deriving it from ContextAttribute instead of Attribute and implementing a method for returning a context property for that attribute:
[AttributeUsage(AttributeTargets.Class, Inherited = true)]
public class InterceptAttribute : ContextAttribute
{
    public InterceptAttribute() : base("Intercept")
    {
    }

    public override void Freeze(Context newContext)
    {
    }

    public override void GetPropertiesForNewContext(IConstructionCallMessage ctorMsg)
    {
        ctorMsg.ContextProperties.Add(new InterceptProperty());
    }

    public override bool IsContextOK(Context ctx, IConstructionCallMessage ctorMsg)
    {
        InterceptProperty p = ctx.GetProperty("Intercept") as InterceptProperty;
        if (p == null)
            return false;
        return true;
    }

    public override bool IsNewContextOK(Context newCtx)
    {
        InterceptProperty p = newCtx.GetProperty("Intercept") as InterceptProperty;
        if (p == null)
            return false;
        return true;
    }
}

[AttributeUsage(AttributeTargets.Constructor |
                AttributeTargets.Method | AttributeTargets.Property,
                Inherited = true,
                AllowMultiple = true)]
public class GuardAttribute : Attribute
{
    public GuardAttribute() {}

    public GuardAttribute(string ltlFormula)
    {
        Formula = ltlFormula;

        AttributeConsumer ac = new AttributeConsumer();

        // parse the formula...
        LTLcomp parser = new LTLcomp(ac);
        parser.openGrammar(...);
        parser.parseSource(ltlFormula);
    }

    private string Formula;

    public void Process()
    {
        // evaluate the formula
    }

    public void PostProcess()
    {
        // update the formula
    }
}
At object creation time GetPropertiesForNewContext is called for each context attribute associated with the object.
This allows us to add our own context property to the list of properties of the context bound to our target object; the property, in turn, allows us to add a message sink to the chain of message sinks:
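A minimal version of such a property and its sink could look like this (only a sketch: the property implements IContextProperty and IContributeObjectSink, and the sink wraps each call with the pre- and post-processing of the guard):

using System;
using System.Runtime.Remoting.Contexts;
using System.Runtime.Remoting.Messaging;

public class InterceptProperty : IContextProperty, IContributeObjectSink
{
    public string Name { get { return "Intercept"; } }
    public void Freeze(Context newContext) { }
    public bool IsNewContextOK(Context newCtx) { return true; }

    // Called by the runtime: insert our sink in front of the rest of the chain.
    public IMessageSink GetObjectSink(MarshalByRefObject obj, IMessageSink nextSink)
    {
        return new InterceptSink(nextSink);
    }
}

public class InterceptSink : IMessageSink
{
    private readonly IMessageSink next;
    public InterceptSink(IMessageSink next) { this.next = next; }
    public IMessageSink NextSink { get { return next; } }

    public IMessage SyncProcessMessage(IMessage msg)
    {
        IMethodCallMessage call = msg as IMethodCallMessage;
        if (call != null)
        {
            // pre-process: here we would look up the GuardAttribute of the
            // called method and evaluate its formula (Process())
            Console.WriteLine("before " + call.MethodName);
        }

        IMessage ret = next.SyncProcessMessage(msg);   // the real call

        if (call != null)
        {
            // post-process: update the formula state (PostProcess())
            Console.WriteLine("after " + call.MethodName);
        }
        return ret;
    }

    public IMessageCtrl AsyncProcessMessage(IMessage msg, IMessageSink replySink)
    {
        // asynchronous calls are simply forwarded in this sketch
        return next.AsyncProcessMessage(msg, replySink);
    }
}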

The intercepting mechanism is really powerful. However, for our purposes it is not enough. It has three major disadvantages:
  • performance: the overhead of crossing the boundary of a context, or of an appdomain, isn't always acceptable (the cost of a function call grows roughly 20-fold, from some simple measurements I did). If you already need to do this (your component must live in another appdomain, in another process, or even on another machine) there is no problem and almost no additional overhead, since the framework already needs to establish a proxy and marshal all method calls;
  • we have to modify the class(es) in the target component. They must inherit from ContextBoundObject, and since .NET doesn't support multiple inheritance, this is a rather serious issue;
  • only the parameters of the method call are accessible. The state of the target object, its fields and properties, is hidden. Since to find out whether an object is accessible from a client we need to inspect its state, this issue makes it very difficult to use the interception mechanism for our purposes.
Next time we'll see the first completely working solution: synthesized proxies.
# Wednesday, 04 January 2006

What am I going to say?

Now that 2006 has arrived, it's time for a little plan for this blog. Something I want to share is my experience adding contracts (synchronization contracts, to be precise) to the .NET Framework. I did various trials: at the BCL, CLR and compiler level. Each one has advantages and disadvantages. Personally, I think the one acting at the CLR level is very interesting (it uses the Profiling API and the unmanaged Metadata API to dynamically inject the verification and update code!). So, at least three posts on the three methods.

Next, I'd like to spend some more words on concurrency and parallel computing.

And since I'm starting a new project on bioinformatics at work (fold prediction), I should be able to write some posts on this topic too.

Finally, if they want me to keep fighting with Castor, porting damn Java web applications, I'll have to dump my frustration here.. =)

What am I reading right now?

  • What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. Eh, I've known since my first Fortran program that floats (the standard 32-bit precision REALs in that language) will give you rounding errors. And they accumulate pretty well! Very interesting reading, though.

  • Practical Common Lisp: my struggle to become good at this (for me, and for now) weird language. Very good up to this point.

  • Language Support for Lightweight Transactions, by Tim Harris and Keir Fraser. I am always interested in concurrency and distributed computing, so this is a must after the January End Bracket column in MSDN Magazine, by Joe Duffy.

  • Speaking of parallel computing, I learned a lot at work about current standards for grid computing. Basically, we are building a computing cluster, and two different projects came into the picture: openMosix and MPI (the latter is also the protocol chosen for Windows Server 2003 CCE). The two use very different approaches, each with its own drawbacks and strengths. I want to study some more, especially on the openMosix front, and then present here what I learned and my own ideas.

  • And, last but not least, Hackers and Painters, by Paul Graham. Very interesting and stimulating reading, made even more stimulating by the fact that I agree with many of his opinions, but totally disagree with many others. I find it difficult to understand how an open-minded person could fall into the same mistakes he points out in other people. But maybe Paul wrote some of his pages only to please an audience.. He was going to sell his book, after all. I want to discuss this topic more deeply in the future; it surely deserves a post.

# Tuesday, 03 January 2006

Happy new year!

Finally 2006 arrived!
Whew, it was a rather tiring December. I had a lot of "collateral" work: new servers, a web infrastructure to build, porting a web application to MySQL (and Castor still refuses to collaborate!). I spent a very good Christmas with my family, did some good reading (my heap of to-read papers/articles/books diminished a little) and went to Salzburg with my girlfriend.
On the informatics side, I designated my 2006 LOTY: Lisp! I will dedicate a post to my decision (Why Lisp?), but for the moment let me say that Practical Common Lisp is a very good book! I look forward to learning to use macros: in the second chapter Peter Seibel gives you a whole bunch of database bindings (with very easy to use select, update and delete functions) in only 50 lines of code! Amazing... It works only on property lists, but it looks a lot like Linq.
I am really thrilled to see if Lisp turns out to be the language for building languages it claims to be. I have always thought that the most annoying thing for a programming language is to NOT expose the language itself, i.e. to not make the constructs of the language available to the programmer.
Take Java: I find it highly annoying that the language has cast operators for classes and primitive types, and operator overloading for classes in the Java class library (like + on String), but does not let the programmer use them! Until now, C++ was my language of choice for this very reason: it lets you "adapt" the language, building a meta-language that fits your needs and your application domain.
Maybe Lisp macros will get a hold on me.. =)