# Monday, 24 June 2013

A "Mini" workflow engine, part 4.1: from delegates to patterns

Today I will write just a short post, to wrap up my considerations about how to represent bookmarks.
We have seen that the key to representing episodic execution of reactive programs is to save "continuations": marking points in the program text where to suspend (and resume) execution, together with the program state.

We have seen two ways of representing bookmarks in C#: using delegates/Func<> objects, and using iterators.

Before going further, I would like to introduce a third way, a variation of the "Func<> object" solution.
Instead of persisting delegates (which I find kind of interesting and clever, but limiting) we turn to an old, plain pattern solution.

A pattern which recently gained a lot of attention is CQRS (Command Query Responsibility Segregation). It is a pattern that helps simplify the design of distributed systems, and helps with scaling out. It is particularly helpful in cloud environments (my team at MSR in Trento used it for our cloud project).

Having recently used it, I remembered how CQRS (like many other architectural-level patterns) is built upon other patterns; in particular, I started thinking about the Command pattern (the "first half" of CQRS).
It looks like the Command pattern could fit quite well with our model; for example, we can turn the ReadLine activity:

[Serializable]
public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();

    protected override ActivityExecutionStatus Execute(WorkflowContext context)
    {
        context.CreateBookmark(this.Name, this.ContinueAt);
        return ActivityExecutionStatus.Executing;
    }

    void ContinueAt(WorkflowContext context, object value)
    {
        this.Text.Value = (string)value;
        CloseActivity(context);
    }
}
Into the following Command:

/* The Receiver class */
[Serializable]
public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();

    protected override ActivityExecutionStatus Execute(WorkflowContext context)
    {
        context.CreateBookmark(this.Name, new CompleteReadLine(this));
        return ActivityExecutionStatus.Executing;
    }

    internal void ContinueAt(WorkflowContext context, object value)
    {
        this.Text.Value = (string)value;
        CloseActivity(context);
    }
}

public interface ICommand
{
    void Execute(WorkflowContext context);
}

/* The Command for starting reading a line - ConcreteCommand #1 */
[Serializable]
public class StartReadLine : ICommand
{
    private readonly ReadLine readLine;

    public StartReadLine(ReadLine readLine)
    {
        this.readLine = readLine;
    }

    public void Execute(WorkflowContext context)
    {
        readLine.Status = readLine.Execute(context);
    }
}

/* The Command for finishing reading a line - ConcreteCommand #2 */
[Serializable]
public class CompleteReadLine : ICommand
{
    private readonly ReadLine readLine;

    public CompleteReadLine(ReadLine readLine)
    {
        this.readLine = readLine;
    }

    public object Payload { get; set; }

    public void Execute(WorkflowContext context)
    {
        readLine.ContinueAt(context, Payload);
        readLine.Status = ActivityExecutionStatus.Closed;
    }
}
The idea is to substitute the "ContinueAt" delegates with Command objects, which are created and enqueued to represent continuations.
The execution queue then calls the Execute method, instead of invoking the delegate, and inspects the activity status accordingly (not shown here).
Why use a pattern, instead of making both Execute and ContinueAt abstract and scheduling the Activity itself on the execution queue? To maintain the same flexibility: this way, we can have ContinueAt1 and ContinueAt2, wrapped by different Command objects, and each of them can be scheduled based on the outcome of Execute.
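To make the scheduling part concrete, here is a minimal sketch of an execution queue that invokes Command objects (illustrative code, not the actual MiniWorkflow implementation; the context parameter is omitted for brevity):

```csharp
using System.Collections.Generic;

// Simplified command interface (the real one takes a WorkflowContext).
public interface ICommand
{
    void Execute();
}

// A trivial concrete command, used here only for demonstration.
public class AppendCommand : ICommand
{
    private readonly List<string> log;
    private readonly string message;

    public AppendCommand(List<string> log, string message)
    {
        this.log = log;
        this.message = message;
    }

    public void Execute() { log.Add(message); }
}

// The scheduler: continuations are enqueued as Command objects,
// and the loop calls Execute instead of invoking a delegate.
public class CommandQueue
{
    private readonly Queue<ICommand> commands = new Queue<ICommand>();

    public void Enqueue(ICommand command) { commands.Enqueue(command); }

    public void Run()
    {
        while (commands.Count > 0)
            commands.Dequeue().Execute();
        // The real engine would also inspect the activity status here,
        // and schedule further commands accordingly.
    }
}
```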

Using plain objects will help during serialization: while Func<> objects and delegates are tightly coupled to the framework internals and therefore, as we have seen, require binary serialization, plain objects may be serialized with more flexible and/or higher-performance methods (like XmlSerializer or protobuf-net).
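As a quick illustration of the difference (a hypothetical sketch, not MiniWorkflow code): a command that captures only plain state round-trips through XmlSerializer without any of the precautions a delegate would require.

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// A command holding only plain data (illustrative names).
public class CompleteReadLineCommand
{
    public string BookmarkName { get; set; }
    public string Payload { get; set; }
}

public static class Demo
{
    public static string RoundTrip()
    {
        var cmd = new CompleteReadLineCommand { BookmarkName = "read1", Payload = "hello" };
        var serializer = new XmlSerializer(typeof(CompleteReadLineCommand));

        using (var writer = new StringWriter())
        {
            // Plain objects serialize to readable XML...
            serializer.Serialize(writer, cmd);
            var xml = writer.ToString();

            // ...and deserialize back with their state intact.
            using (var reader = new StringReader(xml))
            {
                var copy = (CompleteReadLineCommand)serializer.Deserialize(reader);
                return copy.Payload;
            }
        }
    }
}
```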

Yes, it is less immediate, more cumbersome, less elegant. But it is also a more flexible, more "portable" solution.
Next time: are we done with execution? Not quite... we are still missing repeated execution of activities! (Yes, loops and gotos!)

# Monday, 10 June 2013

A "Mini" workflow engine. Part 4: "out-of-order" composite activities, internal bookmarks, and execution.

Last time, we saw how a sequence of activities works with internal bookmarks to schedule the execution of the next step, introducing a queue to hold these bookmarks.
We left with a question: what if execution is not sequential?
In fact, one of the advantages of an extensible workflow framework is the ability to define custom statements; we are not limited to Sequence, IfThenElse, While, and so on but we can invent our own, like Parallel:

public class Parallel : CompositeActivity
{
    protected internal override ActivityExecutionStatus Execute(WorkflowInstanceContext context)
    {
        foreach (var activity in children)
            context.RunProgramStatement(activity, this.ContinueAt);
        return ActivityExecutionStatus.Executing;
    }

    private void ContinueAt(WorkflowInstanceContext context, object arg)
    {
        // Check that every activity is closed...
        foreach (var activity in children)
            if (activity.ExecutionStatus != ActivityExecutionStatus.Closed)
                return;

        CloseActivity(context);
    }
}

And we can even have different behaviours here (like speculative execution: try several alternatives, and choose the one that completes first).

Clearly, a queue does not work with such a statement. We can again use delegates to solve this issue, using a pattern that is used in several other scenarios (e.g. forms management).
When a composite activity schedules a child activity for execution, it subscribes to the activity, to get notified of its completion (when the activity will finish and become "closed"). Upon completion, it can then resume execution (the "internal bookmark" mechanism).

child.Close += this.ContinueAt;
Basically, we are moving the bookmark and (above all) its continuation to the activity itself. It will be the role of CloseActivity to find the delegate and fire it. This is exactly how WF works...

context.CloseActivity() {
  ...
  if (currentExecutingActivity.Close != null)
     currentExecutingActivity.Close(this, null);

}
... and it is the reason why WF needs to keep explicit track of the current executing activity.

Personally, I find the "currentExecutingActivity" part a little unnerving. I would rather say explicitly which activity is closing:

context.CloseActivity(this);
Moreover, I would enforce a call to CloseActivity with the correct argument (this):

public class Activity {
   private void CloseActivity() {
      context.CloseActivity(this); // internal
   }
}
Much better!

Now we have almost everything sorted out; the only piece missing is recursion. Remember: when we call the continuation directly, we end up with recursive behaviour. This may (or may not) be a problem; but what can we do to make the two pieces really independent?

We can introduce a simple execution queue:

public class ExecutionQueue
{
    private readonly BlockingCollection<Action> workItemQueue = new BlockingCollection<Action>();

    public void Post(Action workItem) {
           ...
    }
}
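The Post method is elided above; a minimal working version, including the consumer loop that drains the queue, might look like this (a sketch, assuming a single consumer):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class ExecutionQueue
{
    private readonly BlockingCollection<Action> workItemQueue = new BlockingCollection<Action>();

    public void Post(Action workItem)
    {
        workItemQueue.Add(workItem);
    }

    public void Complete()
    {
        workItemQueue.CompleteAdding();
    }

    // A single consumer drains the queue: continuations run one at a
    // time, so the recursive chain of ContinueAt calls becomes a flat loop.
    public Task Run()
    {
        return Task.Run(() =>
        {
            foreach (var workItem in workItemQueue.GetConsumingEnumerable())
                workItem();
        });
    }
}
```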
Now, each time we have a continuation (from an internal or external bookmark), we post it to the execution queue. This makes the code for RunProgramStatement much simpler:

internal void RunProgramStatement(Activity activity, Action<WorkflowContext, object> continueAt)
{
    logger.Debug("Context::RunProgramStatement");

    // Add the "bookmark"
    activity.Closed = continueAt;

    // Execute the activity
    Debug.Assert(activity.ExecutionStatus == ActivityExecutionStatus.Initialized);
    ExecuteActivity(activity);
} 
As you may have noticed, we are giving away a little optimization: a direct call when the activity completes immediately would be much faster.
Also, why would we need a queue? We could do just fine using Tasks:

Task.Factory.
    StartNew(() => activity.Execute(this)).
    ContinueWith((result) =>
    {
        // The activity already completed?
        if (result.Result == ActivityExecutionStatus.Closed)
            continueAt(this, null);
        else
        {
            // If it didn't complete, an external bookmark was created
            // and will be resumed. When that happens, be ready!
            activity.OnClose = continueAt;
        }
    });


internal void ResumeBookmark(string bookmarkName, object payload) {            
    ...
    Task.Factory.
        StartNew(() => bookmark.ContinueAt(this, payload));
}

internal void CloseActivity(Activity activity) {
    if (activity.OnClose != null)
        Task.Factory.
            StartNew(() => activity.OnClose(this, null));
}
Which makes things nicer (and potentially parallel too). Potentially, because in a sequential composite activity like Sequence, a new Task is fired only upon completion of the previous one. Things will be different for the Parallel activity, however. We will need to be careful with shared state and input/output arguments, though, as we will see in a future post.

Are there other alternatives to a (single threaded) execution queue or to Tasks?

Well, every time I see this pause-and-resume-from-here kind of execution, I see an IEnumerable. For example, I was really fond of the Microsoft CCR and how it used IEnumerable to make concurrent, data-flow style asynchronous execution easier to read:

    int totalSum = 0;
    var portInt = new Port<int>();

    // using CCR iterators we can write traditional loops
    // and still yield to asynchronous I/O !
    for (int i = 0; i < 10; i++)
    {
        // post current iteration value to a port
        portInt.Post(i);

        // yield until the receive is satisfied. No thread is blocked
        yield return portInt.Receive();               
        // retrieve value using simple assignment
        totalSum += portInt;
    }
    Console.WriteLine("Total:" + totalSum);

(Example code from MSDN)
I used the CCR and I have to say that it is initially awkward, but it is a style that grows on you: it is very efficient, and it was a way of representing asynchronous execution with a "sequential" look, well before async/await.

The code for ReadLine would become much clearer:

public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();       

    protected override IEnumerator<Bookmark> Execute(WorkflowInstanceContext context, object value)
    {
        // We need to wait? We just yield control to our "executor"
        yield return new Bookmark(this);

        this.Text.Value = (string)value;

        // Finish!
        yield break;
        // This is like returning ActivityExecutionStatus.Close,
        // or calling context.CloseActivity();            
    }
}
Code for composite activities like sequence could be even cleaner:

protected internal override IEnumerator<Bookmark> Execute(WorkflowInstanceContext context)
{
    // Empty statement block
    if (children.Count == 0)
    {
        yield break;
    }
    else
    {
        foreach (var child in children)
        {
            foreach (var bookmark in context.RunProgramStatement(child))
                yield return bookmark;
        }                
    }
}

Or, with some LINQ goodness:

return children.SelectMany(child => context.RunProgramStatement(child));
The IEnumerable/yield pair has a very simple, clever implementation: it is basically a compiler-generated state machine. The compiler transforms your class to include the scaffolding, memorizes which state the code is in, and jumps execution to the right code when the method is invoked.
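To see what that state machine looks like, here is a hand-written sketch of roughly what the compiler generates for an iterator with a single yield return (the generated class is more involved, but the principle - a state field plus a switch - is the same):

```csharp
// Hand-written equivalent of an iterator-based ReadLine (a sketch,
// not actual compiler output). Note that resuming only needs the
// state field and the captured locals - which is exactly what we
// would have to persist.
public class ReadLineStateMachine
{
    private int state;      // the "program counter"
    public string Text;     // captured state

    // Returns true while the activity is suspended on a bookmark.
    public bool MoveNext(object value)
    {
        switch (state)
        {
            case 0:
                state = 1;  // "yield return new Bookmark(this)"
                return true;
            case 1:
                Text = (string)value;  // resumed with the payload
                state = 2;             // "yield break"
                return false;
            default:
                return false;
        }
    }
}
```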

But remember, our purpose is exactly to save and persist this information: will this state machine be serialized as well?
According to this StackOverflow question, the answer seems to be positive, with the appropriate precautions; also, it seems that the idea is nothing new (actually, it is 5 years old!):
http://dotnet.agilekiwi.com/blog/2007/05/implementing-workflow-with-persistent.html

...and it also seems that the general consensus is that you cannot do it reliably.

In fact, I would say you can: even if you do not consider that 5 years have passed and the implementation of yield is still the same, the CLR will not break existing code. At the IL level, the CLR just sees some classes, one (or more) variables that hold the state, and a method with conditions based on those variables. It is not worse (or better) than serializing delegates, something we are already doing.

The iterator-based workflow code sits in a branch of my local git, and I will give it a more serious try in the future; for now, let's play on the safe side and stick to the Task-based execution.

Until now, we have mimicked quite closely the design choices of WF; I personally like the approach based on continuations, as I really like the code-as-data philosophy. However, the serialization of a delegate can hardly be seen as "code-as-data": the code is there, in the assembly; it is only status information (the closure) that is serialized, and the problem is that you have little or no control over these compiler-generated classes.

Therefore, even without IEnumerable, serialization of our MiniWorkflow looks problematic: at the moment, it is tied to BinarySerializer. I want to break free from this dependency, so I will move towards a design without delegates.

Next time: bookmarks without delegates

# Friday, 07 June 2013

A "Mini" workflow engine. Part 3: execution of sequences and internal bookmarks

A couple of posts ago, I introduced a basic ("Mini") workflow engine.
The set of classes used an execution model based on bookmarks and continuations, and I mentioned that it might have problems.

Let's recap some basic points:
  • Activities are like "statements", basic building blocks.
  • Each activity has an "Execute" method, which can either complete immediately (returning a "Closed" status),
// CloseActivity can be made optional, we will see how and why in a later post
context.CloseActivity();
return ActivityExecutionStatus.Closed;

or create a bookmark. A bookmark has a name and a continuation, and it tells the runtime "I am not done, I need to wait for something. When that something is ready, call me back here (the continuation)". In this case, the Activity returns an "Executing" status.
  • There are two types of bookmarks: external and internal.
    External bookmarks are the "normal" ones:
context.CreateBookmark(this.Name, this.ContinueAt);
return ActivityExecutionStatus.Executing;
They are created when the activity needs to wait for an external stimulus (like user input from the command line, the completion of a service request, ...).
Internal bookmarks are used to chain activities: when the something "I need to wait for" is another activity, the activity requests the runtime to execute it and to call back when done:
context.RunProgramStatement(statements[0], ContinueAt);
return ActivityExecutionStatus.Executing;

In our first implementation, external bookmarks are really created, held in memory and serialized if needed:

public void CreateBookmark(string name, Action<WorkflowInstanceContext, object> continuation)
{
    bookmarks.Add(name, new Bookmark
        {
            ContinueAt = continuation,
            Name = name,
            ActivityExecutionContext = this
        });
}
From there, the runtime is able to resume a bookmark when an event that "unblocks" the continuation finally happens
(notice that WF uses queues, instead of single-slot bookmarks, which is quite handy in many cases - but for now, this will do just right):

internal void ResumeBookmark(string bookmarkName, object payload)
{            
    var bookmark = bookmarks[bookmarkName];
    bookmarks.Remove(bookmarkName);
    bookmark.ContinueAt(this, payload);
}
Internal bookmarks instead are created only if needed:

internal void RunProgramStatement(Activity activity, Action<WorkflowInstanceContext, object> continueAt)
{
    var result = activity.Execute(this);
    // The activity already completed?
    if (result == ActivityExecutionStatus.Closed)
        continueAt(this, null);
    else
    {
        // Save for later...
        InternalBookmark = new Bookmark
        {
            ContinueAt = continueAt,
            Name = "",
            ActivityExecutionContext = this
        };
    }
}
But who resumes these bookmarks?
CreateBookmark - ResumeBookmark are nicely paired, but RunProgramStatement seems not to have a correspondence.

When do we need to resume an internal bookmark? When the activity passed to RunProgramStatement completes. This seems like a perfect job for CloseActivity!
public void CloseActivity()
{
    logger.Debug("Context::CloseActivity");
    // Someone just completed an activity.
    // Do we need to resume something?
    if (InternalBookmark != null)
    {
        logger.Debug("Context: resuming internal bookmark");
        var continuation = InternalBookmark.ContinueAt;
        var context = InternalBookmark.ActivityExecutionContext;
        var value = InternalBookmark.Payload;
        InternalBookmark = null;
        continuation(context, value);
    }
}

However, this approach has two problems.
The first one is not really a problem, but we need to be aware of it: if the activities chained by internal bookmarks terminate immediately, the execution of our "sequence" of activities results in recursion:

public class Print : Activity
{
    protected override ActivityExecutionStatus Execute(WorkflowInstanceContext context)
    {
        Console.WriteLine("Hello");
        context.CloseActivity();
        return ActivityExecutionStatus.Closed;
    }
}

var sequence = new Sequence();
var p1 = new Print();
var p2 = new Print();
var p3 = new Print();
sequence.Statements.Add(p1);
sequence.Statements.Add(p2);
sequence.Statements.Add(p3);
Its execution will go like:

sequence.Execute
-context.RunProgramStatement
--p1.Execute
--sequence.ContinueAt
---context.RunProgramStatement
----p2.Execute
----sequence.ContinueAt
-----context.RunProgramStatement
------p3.Execute
------sequence.ContinueAt
-------context.CloseActivity (do nothing)
The second problem is more serious; let's consider this scenario:

var s1 = new Sequence();
var p1 = new Print();
var s2 = new Sequence();
var r2 = new Read();
var p3 = new Print();
s1.Statements.Add(p1);
s1.Statements.Add(s2);
s2.Statements.Add(r2);
s2.Statements.Add(p3);

And a possible execution which stops at r2, waiting for input:

s1.Execute
-context.RunProgramStatement
--p1.Execute
--s1.ContinueAt
---context.RunProgramStatement(s2, s1.ContinueAt)
----s2.Execute
-----context.RunProgramStatement(r2, s2.ContinueAt)
------r2.Execute
------(External Bookmark @r2.ContinueAt)
------(return Executing)
-----(Internal Bookmark @s2.ContinueAt)
-----(return)
----(return Executing)
---(Internal Bookmark @s1.ContinueAt) <-- Oops!

What happens when the external bookmark r2.ContinueAt is executed?
r2 will complete and call context.CloseActivity. This will, in turn, resume the internal bookmark... which is not set to s2.ContinueAt, but to s1.ContinueAt.

We can avoid this situation by observing how RunProgramStatement/CloseActivity really look like the call/return instructions every programmer is familiar with (or, at least, every C/asm programmer is familiar with). Almost every general-purpose language uses a stack to handle function calls (even if this is an implementation detail); in particular, the stack is used to record where the execution will go next, by remembering the "return address", the point in the code where to resume execution when the function returns. Sounds familiar?

| Function call | MiniWorkflow |
| --- | --- |
| `call f` | `RunProgramStatement(a)` |
| - pushes the current IP on the stack | - calls `a.Execute` |
| - jumps to the entry of the function | - saves `continueAt` in an internal bookmark |
| ... at the end of `f` ... | ... at completion of `a` ... |
| `return` | `CloseActivity` |
| - pops the return address from the stack | - takes `continueAt` from the internal bookmark |
| - jumps to the return address | - executes `continueAt` |

Notice that we swapped the jump to execution and the saving of the return point; I did it to perform an interesting "optimization" (omitting the bookmark if a.Execute terminates immediately; think of it as the equivalent of inlining the function call - and not really an optimization, as it comes with its problems, as we have just seen).

Is it a safe choice? Provided that we swap the stack used by function calls with a queue, yes.
In standard, stack-based function calls, what you have to do is pinpoint where to resume execution when the function being called returns. The function being called (the callee) can in turn call other functions, so points-to-return-to (return addresses) need to be saved in a LIFO (Last In, First Out) fashion. A stack is perfect.

Here, we provide the point where to resume execution directly, and this point has no relation to the return point of Execute (i.e. where we are now).
If the call to Execute does not close the activity, it means that execution is suspended. A suspended execution can, in turn, lead to the suspension of other parent activities, like in our nested-sequences example. Now, when r2 is resumed, it terminates and calls its "return" (CloseActivity). The points-to-return-to, in this case, need to be processed in a FIFO (First In, First Out) fashion.
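The ordering requirement is easy to see in miniature (a toy sketch, not MiniWorkflow code): in the nested-sequences trace above, s2's continuation is recorded before s1's, and must also resume before it - which is what a queue gives us and a stack does not.

```csharp
using System.Collections.Generic;

public static class ResumeOrderDemo
{
    // With a queue (FIFO), the first continuation recorded
    // (the inner sequence's) is the first to resume: correct.
    public static string FirstResumedWithQueue()
    {
        var points = new Queue<string>();
        points.Enqueue("s2.ContinueAt"); // inner, recorded first
        points.Enqueue("s1.ContinueAt"); // outer, recorded second
        return points.Dequeue();
    }

    // With a stack (LIFO), the outer continuation would resume
    // first, reproducing the "Oops!" from the trace.
    public static string FirstResumedWithStack()
    {
        var points = new Stack<string>();
        points.Push("s2.ContinueAt");
        points.Push("s1.ContinueAt");
        return points.Pop();
    }
}
```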

Next time: a sequence is not the only kind of possible composite activity. Does a queue satisfy additional scenarios?

# Sunday, 02 June 2013

A "Mini" workflow engine. Part 2: on the serialization of Bookmarks

Last time, we saw that Activities are composed in a way that allows them to be paused and resumed, using continuations in the form of bookmarks:
[Serializable]
internal class Bookmark
{
    public string Name { get; set; }
    public Action<ActivityExecutionContext, object> ContinueAt { get; set; }  
    public ActivityExecutionContext ActivityExecutionContext { get; set; }
}
A Bookmark has a relatively simple structure: a Name, a continuation (in the form of an Action) and a Context. Bookmarks are created by the Context, and the continuation is always a ContinueAt method implemented in an activity. Thus, serializing a Bookmark (and, transitively, its Context) should lead to the serialization of an object graph that holds the entire workflow code, its state and its "Program Counter".

However, serializing a delegate sounds kind of scary to me: are we sure that we can safely serialize something that is, essentially, a function pointer?
That's why I started looking for an answer to a particular question: what happens when I serialize a delegate, a Func<T> or an Action<T>?

I found some good questions and answers on StackOverflow; in particular, a question about the serialization of lambdas and closures (related to prevalence, a technique that provides transactions by implementing checkpoints and redo journals in terms of serialization) and a couple of questions about delegate (anonymous and not) serialization (1, 2 and 3).

It looks like:
  • delegate objects (and Func<>) are not XML serializable, but they are BinarySerializable.
  • even anonymous delegates, and delegates with closures (i.e. anonymous classes) are serializable, provided that you do some work to serialize the compiler-generated classes (see for example this MSDN article, and this blog post)
  • still, there is a general agreement that the whole idea of serializing a delegate looks very risky

BinarySerializable looks like the only directly applicable approach; in our case, we are on the safe side: every function we "point to" from delegates and bookmarks is in classes that are serialized as well, and the objects and data accessed by the delegates are serialized too (they access only data that is defined in the activities themselves, which is saved completely in the context). BinarySerialization will save everything we need to pause and resume a workflow in a different process.
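A standalone experiment makes the point (illustrative code, not part of MiniWorkflow): a delegate whose target is a [Serializable] class survives a BinaryFormatter round-trip, target object included.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public class Counter
{
    public int Value;
    public void Increment(object step) { Value += (int)step; }
}

public static class DelegateSerializationDemo
{
    public static int RoundTrip()
    {
        var counter = new Counter { Value = 41 };
        Action<object> continueAt = counter.Increment;

        // BinaryFormatter serializes the delegate together with
        // its target object (the state it is "closed over").
        var formatter = new BinaryFormatter();
        using (var stream = new MemoryStream())
        {
            formatter.Serialize(stream, continueAt);
            stream.Position = 0;
            var restored = (Action<object>)formatter.Deserialize(stream);

            restored(1); // invokes Increment on the deserialized copy
            return ((Counter)restored.Target).Value;
        }
    }
}
```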

I implemented a very simple persistence of workflow instances using BinarySerializer for MiniWorkflow, and it does indeed work as expected.
You can grab the code on github and see it yourself.

However, I would like to provide an alternative, less "risky" serialization mechanism for MiniWorkflow. I do not like the idea of not knowing how my Bookmarks are serialized; I would like something more "visible", to understand whether what I need really goes from memory to disk, and to be more in control.
Also, I am really curious to see how WF really serializes bookmarks; workflows are serialized/deserialized as XAML, but a workflow instance is something else. In fact, this is a crucial point: in the end, I will probably end up using WF4 (MiniWorkflow is -probably- just an experiment to understand the mechanics better), but I will need to implement serialization anyway. I do not want to take a direct dependency on SQL, so I will need to write a custom instance store, alternative to the standard SQL instance store.

I will explore which alternatives we have to a BinarySerializer; but before digging into this problem, I want to work out a better way to handle the flow of execution. Therefore, next time calling continuations: recursion, work item queue or... call/cc?

# Saturday, 01 June 2013

A "Mini" workflow engine. Part 1: resumable programs

WF achieves durability of programs in a rather clever way: using a "code-as-data" philosophy, it represents a workflow program as a series of classes (activities), whose state of execution is recorded at predefined points in time.

It then saves the whole program (code and status) when needed, through serialization.
The principle is similar to the APM/IAsyncResult pattern: an async function says "do not wait for me to complete, do something else" and to the OS/Framework "Call me when you are done here (the callback), and I will resume what I have to do". The program becomes "thread agile": it does not hold onto a single thread, but it can let a thread go and continue later on another one.

Workflows are more than that: they are process agile. They can let their whole process terminate, and resume later on another one. This is achieved by serializing the whole program on a durable storage.
The state is recorded using continuations; each activity represents a step, a "statement" in the workflow. But an activity does not call the following statement directly: it executes, and then says to the runtime "execute the following, then get back to me", providing a delegate to call upon completion, a continuation.
The runtime sees all these delegates, and can either execute them or save them. It can use a delegate to build a bookmark. Serializing the bookmark will save it to disk, along with all the program code and state (basically, the whole workflow object graph, plus one delegate that holds a sort of "program pointer" - the bookmark - where execution can be resumed).

The same mechanism is used for a (potentially long) wait for input: the activity tells the runtime "I am waiting for this, call me back here when done" using a delegate, and the runtime can use it to passivate (persist and unload) the whole workflow program.

I found it fascinating, but I wanted to understand better how it worked, and I was dubious about one or two bits. So, I built a very limited, but functional, workflow runtime around the same principles. I have to say that using WF3 (and WF4 from early CTPs too) I already had quite a good idea of how it could have been implemented, but I found Dharma Shukla and Bob Schmidt's "Essential Windows Workflow Foundation" very useful to cement my ideas and fill some gaps. Still, building a clone was the best way to fill in the last gaps.

The main classes are:
  • an Activity class, the base class for all the workflow steps/statements; a workflow is a set of activities;
  • a WorkflowHandle, which represents an instance of a workflow;
  • a Bookmark, which holds a continuation, a "program pointer" inside the workflow instance;
  • a Context, which will hold all the bookmarks for the current workflow instance;
  • a WorkflowRuntime, which handles the lifecycle of a WorkflowInstance and dispatches inputs (resuming the appropriate Bookmark in the process)

A basic Activity is pretty simple:

[Serializable]
public abstract class Activity
{
   public Activity()
   {
     this.name = this.GetType().Name;
   }

   abstract protected internal ActivityExecutionStatus Execute(ActivityExecutionContext context);

   protected readonly string name;
   public string Name
   {
     get { return name; }
   }
}
It just has a name, and an "Execute" method that will be implemented by subclasses; it is how activities are composed that is interesting:

[Serializable]
public class Sequence : Activity
{
    int currentIndex;
    List<Activity> statements = new List<Activity>();
    public IList<Activity> Statements
    {
        get { return statements; }
    }

    protected internal override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        currentIndex = 0;
        // Empty statement block
        if (statements.Count == 0)
        {
            context.CloseActivity();
            return ActivityExecutionStatus.Closed;
        }
        else
        {
            context.RunProgramStatement(statements[0], ContinueAt);
            return ActivityExecutionStatus.Executing;
        }
    }

    public void ContinueAt(ActivityExecutionContext context, object value)
    {
        // If we've run all the statements, we're done
        if (++currentIndex == statements.Count) 
            context.CloseActivity();
        else // Else, run the next statement
            context.RunProgramStatement(statements[currentIndex], ContinueAt);
    }        
}
Now the Execute method needs to call Execute on child activities, one at a time. But it does not do it in a single step, looping through all the statements and calling their Execute: this would not let the runtime handle them properly. In fact, the Execute method says to the runtime "run the first statement, then call me back". This is a pattern you see both in WF3 and WF4, even if in WF3 it is more explicit. It is called an "internal bookmark": you ask the runtime to set a bookmark for you, execute an activity, and the activity will resume the bookmark once done:
// This method says: the next statement to run is this.
// When you are done, continue with that (call me back there)
internal void RunProgramStatement(Activity activity, Action<ActivityExecutionContext, object> continueAt)
{
    // This code replaces
    // context.Add(new Bookmark(activity.Name, ContinueAt));
    var result = activity.Execute(this);

    // The activity already completed?
    if (result == ActivityExecutionStatus.Closed)
        continueAt(this, null);
    else
    {
        // Save for later...
        InternalBookmark = new Bookmark
        {
            ContinueAt = continueAt,
            Name = "",
            ActivityExecutionContext = this
        };
    }
}
When an Activity completes, returning ActivityExecutionStatus.Closed or explicitly calling CloseActivity, the runtime looks for a previously set bookmark and, if found, resumes execution from it:
public void CloseActivity()
{
    // Someone just completed an activity.
    // Do we need to resume something?
    if (InternalBookmark != null)
    {
        var continuation = InternalBookmark.ContinueAt;
        var context = InternalBookmark.ActivityExecutionContext;
        var value = InternalBookmark.Payload;
        InternalBookmark = null;
        continuation(context, value);
    }
}
Do you already spot a problem with this way of handling the continuations and the control flow of the workflow program? Yes, it is recursive, and the recursion is broken only if an Activity explicitly sets a bookmark. In that case, the runtime simply returns to the main control point, waits for input, and resumes the bookmark after it receives it.
For example, in this "ReadLine" activity:
[Serializable]
public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();       

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        // Waits for user input (from the command line, or from
        // wherever it may come)
        context.CreateBookmark(this.Name, this.ContinueAt);
        return ActivityExecutionStatus.Executing;
    }

    void ContinueAt(ActivityExecutionContext context, object value)
    {
        this.Text.Value = (string)value;
        context.CloseActivity();
    }
}
When the bookmark is created, the runtime stores it, and then the "Execute" method returns, without any call to RunProgramStatement. The runtime knows the Activity is waiting for user input, and does not call any continuation: it stores the bookmark and waits for input.
public void CreateBookmark(string name, Action<ActivityExecutionContext, object> continuation)
{
    // var q = queuingService.GetWorkflowQueue(name);
    // q += continuation;
    bookmarks.Add(name, new Bookmark
        {
            ContinueAt = continuation,
            Name = name,
            ActivityExecutionContext = this
        });
} 
The next logical step is to use Bookmarks to save the program state to durable storage. This led me to ask myself what ended up being a rather controversial question: is it possible, or advisable, to serialize continuations (in any .NET variant in which they appear, i.e. delegates, EventHandlers, Func<> or Action<>)?

Therefore, next time: serializing Bookmarks