# Thursday, 02 January 2014

The MiniPoint Workflow language

The workflow language was, for me, the most fun part of MiniPoint. I love working on languages, small or big; it does not matter, as long as they are interesting. In fact, MiniPoint has more than one language/parser: workflows, document templates, AngularJS expressions, ... Most are tiny, but each one makes the code cleaner and the user experience more pleasant.

Take, as an example, the simple boolean language used to express the visibility of a field on a view; as I mentioned in my previous post, you can make individual fields visible or not (and required or not) using boolean expressions (or, and, not, <, >, ==, <>) over constants and other fields in the view (or in the schema).
The expression is parsed and then analyzed to produce two things: an AngularJS expression, which will be inserted into the ng-required and ng-show/ng-hide attributes to make it work entirely on the client side, and the list of affected fields.
What is the purpose of this list? Remember that a view is only a subset of the schema, but in these visible/required expressions you can refer to other members of the schema as well (from previous views, usually).
AngularJS initializes its "viewmodel" (the $scope) with an ajax request (getting JSON data from an ASP.NET controller); for efficiency, we keep this data to a minimum, which usually is a subset of the fields in the view (readonly fields, for example, are rendered on the server and not transmitted). When we have an expression, however, the fields referenced in it need to end up in the $scope too, hence the need to parse and analyze the expressions.
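To make the idea concrete, here is a minimal sketch of the analysis step (not MiniPoint's actual code; the node types and names are invented for illustration): a single walk over the parsed expression tree emits the AngularJS translation and collects the referenced fields at the same time.

using System.Collections.Generic;

public interface IBoolExpr
{
    string ToAngular(ISet<string> referencedFields);
}

public class FieldRef : IBoolExpr
{
    public string Name;

    public string ToAngular(ISet<string> referencedFields)
    {
        referencedFields.Add(Name); // this field must end up in the $scope
        return "model." + Name;
    }
}

public class Literal : IBoolExpr
{
    public string Value; // numbers, strings, null: rendered as-is

    public string ToAngular(ISet<string> referencedFields)
    {
        return Value;
    }
}

public class BinaryOp : IBoolExpr
{
    public IBoolExpr Left, Right;
    public string Op; // "and", "or", "<", ">", "==", "<>"

    static readonly Dictionary<string, string> opMap = new Dictionary<string, string>
    {
        { "and", "&&" }, { "or", "||" }, { "<>", "!=" },
        { "<", "<" }, { ">", ">" }, { "==", "==" }
    };

    public string ToAngular(ISet<string> referencedFields)
    {
        return "(" + Left.ToAngular(referencedFields) + " " + opMap[Op] + " "
                   + Right.ToAngular(referencedFields) + ")";
    }
}

An expression like "Status == 1 and Submitter <> null" would come out as "((model.Status == 1) && (model.Submitter != null))", ready for ng-show, plus the set { "Status", "Submitter" } to be merged into the JSON sent to the client.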

But I am digressing; I will write more about the interaction and integration of AngularJS and Razor (and MVC) in another post.

Now I would like to talk about some aspects of the workflow language that needed a bit of thinking on how to best implement them.

I wanted it to be simple, natural to use (i.e. you can use statements/expressions/constructs wherever it makes sense and expect them to work) but still powerful enough. And have a clean grammar too :)

I wrote some languages in the past, but this is the first one where statement terminators (';') are optional, and you can just use line breaks.
The people who are going to write the schemas and workflows (so not the end-users, but the "power-users", or site administrators) have a strong background in ... VBA. Therefore, when a decision about the language came up, I tried to use a VBA-like syntax, to give it a familiar look. So, for example, If-EndIf instead of braces { }.
And I wanted to do it because it was interesting, of course! I had to structure my semantic actions a bit differently, as I was getting reduce-reduce conflicts using my usual approach.

On the surface, it seems that you have statements (very similar to other programming languages), choices (if-then-else-endif) and gotos. I know... Ugh! Gotos! Bear with me :)

step ("view1")

var i = 10

if (i + me.SomeField > 20)
  i = i - 20
  goto view1
else
  goto end
endif

//Generate a report, using the "report1" template
report ("report1")

step ("final"): end 

Under the hood, things are a bit.. different. Remember, this is a textual language for a flowchart. So, "step" is actually an input/output block (parallelogram); statements and reports are generic processing steps (rectangles); the "if-then-else" is a choice (rhombus). Therefore if-then-else has a stricter-than-usual syntax, and it's actually:
IF (condition) [statements] GOTO ELSE [statements] GOTO ENDIF
so that the two possible outcomes are always steps in the workflow.

Therefore, under the hood you have a list of "steps", each of which may be a statement list (like "var i = 10"), an input/output step ("step" or "delay"), or a branch.
As a consequence, the language somehow has two levels: at the first level you have "steps"; then, among "steps" or inside them (look at the if-then-else in the example) you can have expressions and statements like in most other programming languages. The two levels appear quite clearly in the grammar (sketched below), but I think it is difficult to tell from the syntax. And this is what I wanted to accomplish. The people who used it to author the workflows were quite pleased, and used it with no problems after very little training.
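To give an idea of the two levels, here is an illustrative sketch in yacc-like notation (my reconstruction from the example above, not the actual GPPG grammar; production names are invented):

workflow   : step-list
step-list  : step-list step | step
step       : 'step' '(' STRING ')' opt-label      /* input/output block */
           | 'delay' '(' expr ')'                 /* input/output block */
           | 'report' '(' STRING ')'              /* processing step    */
           | stmt-list                            /* processing step    */
           | branch                               /* choice             */
branch     : 'if' '(' expr ')' opt-stmts 'goto' ID
             'else' opt-stmts 'goto' ID 'endif'
stmt-list  : stmt-list stmt | stmt
stmt       : 'var' ID '=' expr
           | ID '=' expr

The first productions are the "flowchart" level; statements and expressions live below them and never control the flow on their own.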

Translation to WF activities was fun as well: I built a custom composite activity to schedule all the steps; also, statements (instead of receiving their own activity) were merged together and executed by the main composite activity, to improve efficiency (and to make it easier to add other statements: a new one does not require a new activity).
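Just to sketch the idea (this is my reconstruction, not MiniPoint's actual code, and it glosses over WF metadata, arguments and persistence details): a NativeActivity can schedule the "real" child activities one at a time, while running merged statement groups inline, without scheduling anything for them.

using System;
using System.Activities;
using System.Collections.Generic;

public class WorkflowProgram : NativeActivity
{
    // Each entry is either a child Activity (step, report) or a merged
    // statement group, modelled here as a plain Action for illustration.
    private readonly List<object> steps = new List<object>();
    private readonly Variable<int> index = new Variable<int>();

    public List<object> Steps { get { return steps; } }

    protected override void CacheMetadata(NativeActivityMetadata metadata)
    {
        metadata.AddImplementationVariable(index);
        foreach (var step in steps)
        {
            var child = step as Activity;
            if (child != null)
                metadata.AddChild(child);
        }
    }

    protected override void Execute(NativeActivityContext context)
    {
        index.Set(context, 0);
        RunNext(context);
    }

    private void RunNext(NativeActivityContext context)
    {
        int i = index.Get(context);
        // Statement groups are executed directly: no activity per statement
        while (i < steps.Count && steps[i] is Action)
        {
            ((Action)steps[i])();
            i++;
        }
        index.Set(context, i);
        if (i < steps.Count)
            context.ScheduleActivity((Activity)steps[i], OnStepCompleted);
    }

    private void OnStepCompleted(NativeActivityContext context, ActivityInstance completedInstance)
    {
        index.Set(context, index.Get(context) + 1);
        RunNext(context);
    }
}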

# Tuesday, 31 December 2013

First version of MiniPoint released!

Last Saturday, I installed the first test version of MiniPoint in a production environment. All went well (we spent more time waiting for SQL Server 2012 to finish installation than anything else). 

I had only some minor hiccups: figuring out how to deploy without VS/msbuild/WebPI (i.e. manually), and how to correctly configure IIS 7 (the production server uses Windows 2008) for .NET 4.5 / MVC 4.

I solved both issues thanks to StackOverflow :)


What is MiniPoint?

MiniPoint is an MVC4/AngularJS web application that allows you to define:

  • a set of related data/metrics you want to record;
  • the process needed to collect and save them;
  • the people who will need to collaborate to get, read, validate and update those data.

The application will replace an existing Sharepoint 2010 solution which fulfilled the same role, but was too heavy (it required its own beefy server), not flexible enough, and much, much more difficult to modify and extend. MiniPoint, instead, was designed with the goal of making it easy to modify and extend every bit of the process.

MiniPoint is built around four concepts: Schemas, Views, Lists and Workflows.

MiniPoint initial page. The design, based on Twitter Bootstrap 2.3.2, is intentionally simplistic and lean: end-users (the ones who will use the workflows) are not computer experts.


You can think of a schema as the data structure needed to hold all the information needed for the process, or better, all the information that will be collected and saved during the process. It is like a database schema, or like the header row in an Excel table. The parallel with Excel here is not accidental: prior to the Sharepoint solution, end-users used Excel files to keep track of the information. For example, they had an Excel worksheet, with fixed columns, for customer calls; at each call they had to fill in a bunch of columns and save the file on a network share.

A list is an instance of the schema: the empty rows in the Excel worksheet. Each row is created using the Schema as a mould, and filled little by little by the process. 

Let me explain through a simple example: the submission of an issue.

We can simplify the process by assuming that the issue is submitted, triaged (rejected or accepted), assigned to someone, fixed, verified. If the verification step fails, we go back and re-work on it.

It's a 5-step process, with two branch points. I suppose this sounds familiar to everyone who has used bug-reporting software :) even if this very process was created for a customer which has nothing to do with software! Processes like these are very common in companies of all sizes, as there are many occasions in which it is necessary to keep track of progress and history (the small company for which the software was initially conceived wanted to record at least 5 of them).


Schemas (and lists)

For the above example process, we need to collect 

  • The issue
  • Its category
  • Who submitted it (who will also be responsible for checking the outcome, in our simple scenario)
  • Who will work on it
  • The verification outcome (acceptance and comments)

So our schema is composed of six fields:

  • Issue (Text)
  • Category (Enumeration)
  • Submitter (User)
  • Assigned_to (User)
  • Status (Enumeration)
  • Comments (Text)

In parentheses I have put the data types of the various fields. MiniPoint supports different data types (more on this in a following post), and uses the data type to treat, save and render each field in the right way.
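To fix ideas, a schema and its lists might be shaped roughly like this (a hypothetical sketch, not MiniPoint's actual classes):

using System.Collections.Generic;

public enum FieldType { Text, Enumeration, User /*, ... other data types */ }

public class SchemaField
{
    public string Name;
    public FieldType Type;
    public IList<string> EnumValues; // only used when Type is Enumeration
}

public class Schema
{
    public string Name;
    public List<SchemaField> Fields = new List<SchemaField>();
}

// A list item is one "row" created from the schema mould:
// one (possibly still empty) value per field, filled in by the process.
public class ListItem
{
    public Schema Schema;
    public Dictionary<string, object> Values = new Dictionary<string, object>();
}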


The definition of a schema. Working on schemas is very quick: adding, removing, or modifying a field is done client-side using AngularJS, for a fast and responsive UI.

Views

In any complex enough process, all the needed data and "status" are not readily available, but need to be collected over time, by different actors (this is usually the reason there is a process in the first place). Therefore, not all fields can or should be filled in from the beginning! At each step of the process, some fields will be entered, some will be updated, and others will be visible but not modifiable anymore.

For this, MiniPoint lets you create different views on a schema.

A view is a subset of fields, each decorated with some additional attributes. Attributes indicate how the data needs to be presented to the user: whether the field will be visible (and when, with conditions over other fields), required, readable or writable... 
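In code, a view element could look roughly like this (again a hypothetical sketch, reusing the SchemaField shape from above):

public class ViewElement
{
    public SchemaField Field;          // the schema field being presented
    public bool Writable;              // read-only fields are rendered server-side
    public string VisibleExpression;   // boolean expression over other fields
    public string RequiredExpression;  // boolean expression, or just a constant
}

public class View
{
    public Schema Schema;              // the schema this view projects
    public List<ViewElement> Elements = new List<ViewElement>();
}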


Definition and edit of a view element. Notice how you can specify visibility (and whether an element is required) as a boolean expression over other fields in the same schema.



What if you figure out, while creating a new view, that you need another field in the schema? Shortcuts to manipulate related elements are spread throughout MiniPoint.

Workflow

Once you have the views, you need to "stitch them together": you need to define in which order they will be presented to the end-user, who will have the right to access them, and how you will route users through different views based on what they entered so far.

MiniPoint will then associate views to workflow steps; steps can be executed in order, or (based on conditions over variable and field values) you can have branches.

The workflow is described using a small and simple language, with statements for steps (display of views), report generation, and branches (expressed as a simple, fixed "if (condition) goto step else goto step" - just the text equivalent of a "conditional" diamond in flowcharts).

The workflow is a reactive program: it will execute one step after the other, and then suspend itself when it needs to wait (typically, for user input at steps, or for a pre-defined delay). Then, it will resume itself to react to external input: a delay expired, or a user completed one step.

For this iteration of MiniPoint, the workflow text is "translated" (or, compiled) into a series of .NET Workflow Foundation 4.0 activities, and runs using a self-hosted .NET WorkflowApplication. I plan to replace WF with my own workflow engine, as it is simpler, leaner and easier to port to different platforms (for example, WF out-of-the-box only supports SQL Server storage). Another reason to switch to my own workflow engine is to have better integration with ASP.NET async; even if there are some nice examples of how to combine WF and ASP.NET MVC, hosting the .NET WorkflowApplication reliably and in an efficient (asynchronous) way proved to be a bit tricky. The support for async, long-running events in ASP.NET can be dangerous (if not handled correctly), and it is still evolving (it changed in an important way, and improved, in .NET 4.5, but it still needs some care).

The current code works quite well, but I am not entirely pleased with the result, as it looks more complex than it should be. For now, however, WF works smoothly enough.

The little "workflow language" was created together with the people who will author and modify the business process; it needed to be flexible and lightweight enough to let them change the process easily, but powerful enough to actually express what they needed to model. It is, like I mentioned, a flowchart, with the addition of a global, shared state (variables). 

The language is textual and simple enough; a (JavaScript-based) graphical UI is in the works too.


The workflow language. Upon Save, the code is compiled to check it for syntax errors; errors are displayed directly in the UI to help find and fix them.


The list of running workflows. Only steps to which the user has access are shown; plus, it is possible to filter, reorder and group them.


The UI shown by the workflow while executing a step; the fields in the view are rendered using appropriate controls, based on the data type and the attributes set during the creation of the view.

Permissions

Another required improvement over the Sharepoint solution was to provide finer control over read/modify and general access control. I solved the issue by borrowing some ideas from the NT access control model: every "securable" element (schema, views, lists, even single documents) has an ACL (Access Control List) attached. Each item in the list is a tuple (role, access), where role is either a username or a group and access is a list of allowed operations (read, write, execute).
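A minimal sketch of this model (hypothetical names; the real implementation will differ in the details):

using System;
using System.Collections.Generic;
using System.Linq;

[Flags]
public enum Access { None = 0, Read = 1, Write = 2, Execute = 4 }

// One item of the ACL: a (role, access) tuple
public class AccessControlEntry
{
    public string Role;     // a username or a group name
    public Access Allowed;  // the operations granted to that role
}

public class Acl
{
    public List<AccessControlEntry> Entries = new List<AccessControlEntry>();

    // Allowed if any of the user's roles (username plus groups) grants the operation
    public bool IsAllowed(IEnumerable<string> userRoles, Access operation)
    {
        return Entries.Any(e => userRoles.Contains(e.Role) &&
                                (e.Allowed & operation) == operation);
    }
}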


The UI for setting and assigning permissions to the various defined elements

In this way, it is possible to statically assign some permissions to users or groups. It is also possible to dynamically assign permission through the workflow code, with specific instructions that give permission to a dynamically inserted user. For example, you may want to assign an issue to a particular group or user, and let only that user or group access the following steps.

# Wednesday, 18 September 2013

Where have I been?

I really don't know if anybody besides my friends and coworkers read this blog but.. after some entries in the first half of this year, everything got silent.
The reason is simple: I was working, and in my free time... learning and working some more.

The project I mentioned (the mini sharepoint-like replica) got momentum and a deadline. I am really enjoying working on it; I am using and putting together a lot of great techniques and frameworks. Some of them were known to me (WF4 for workflows management, GPPG/Gplex for parser generation, ASP.NET MVC for web applications...) but some were really new (AngularJS.. wow I do like this framework!, Razor.. which is a joy to work with). It is tiresome to work, then get home (using my commuting time to read and learn) and then work again. Fortunately, my wife is super-supportive!

That, and my "regular" day-to-day work, where we are also putting together some new and exciting applications. The most interesting is a "framework" for authentication and authorization from third parties that uses a mix of OAuth, NFC and "custom glue" to authorize access to third-party applications without having to enter a username and password (we use our smart cards as credentials).
Think about an open, public environment (a booth at an expo, or one of the outdoor events for which our region is famous, like the Christmas Market).



You have a booth where you are offering some great service subscription. You want people to stop by and subscribe to your services, but you do not want to bother them for too long, so filling out forms is out of the question; you do not even want them to sign in on a keyboard (try to make people enter a password at -5°C... it's not easy).
I coded a simple prototype, an Android app that uses NFC to read the AltoAdige Pass that every traveler has (or should have :) ) as a set of credentials for authorization against our OAuth provider. The third-party app requests an access token, users are redirected to our domain and "log in" using the card, by waving it in front of an Android tablet. The process is secure (it involves the card processor and keys, mutual authentication, etc.) but easy and fast. They see the permissions and confirm (again waving the card when the tablet asks them).

For now it is only a prototype, but... I find it interesting when pieces of (cool) technology fall together to produce something that is easy to use and actually makes people's lives a little bit easier.
With so many balls to juggle, time for blogging is... naught. But I will be back with more news on my projects, as soon as I have some time.

# Thursday, 25 July 2013

Postmortem of my (failed) interview with StackOverflow

Some months ago, I made a very peculiar decision: I applied for a job.
I am currently employed as a senior dev in a small software company. I like my current position: it is a good blend of project management, architecture, and software development. It has upsides and downsides, as any "regular" job in the world. On the positive side, it gives me some unique challenges, which is something I always look for; I get to work on many different things, at very different levels (from low-level programming on embedded devices, to distributed systems and big-data crunching, up to web applications and mobile devices).

Being quite content, I was not looking for a job. But I am a Stack Overflow user, and as anyone actively working on software development, I visit the site several times a day. They are THE resource for programming nowadays: they changed the way every developer I know works.


More than that: I am a big fan, and long time reader, of Jeff Atwood and Joel Spolsky, the fathers of Stack Overflow. And in January this year, I saw on Stack Overflow a Careers 2.0 banner/ad. And I discovered they were hiring.

Oh my. Stack Overflow. All those amazing devs: Marc Gravell, Nick Craver, Kevin Montrose, Demis Bellot... And Joel Spolsky.
If you are reading this, you surely know who Joel Spolsky is. He wrote THE book about making software development really work. By applying his ideas, following his suggestions, I was able to steer both management and developers towards a successful way to write software. Twice.
Joel means this to me: making developers' lives better. There are few companies I really dream about working at, and a company run by Joel is definitely one of them.

So I applied. I knew it was going to be hard, but I also knew I had what was needed to be great there. But... Long story short, I failed.
What can I do about it? Probably nothing: a job interview for a place like SO is not like an exam at the University: it is much harder. Failing does not mean that you can go home, study harder, and try it again a couple of months later. It means that someone else takes the job. And succeeding is not a simple matter of scoring well enough: you have to perform better than all the others.

Anyway, this is what I would have liked to have read some time ago, before doing the interview. Maybe it will not do any good to me, but if it can help another great developer, I will be happy.

Before starting my retrospective though, I think it is important to point out that the interview process was not only very well handled, but really great: it was fair, everyone was super-polite, and it was a pleasure overall. I did not expect anything less; on the contrary, I would have been disappointed had it been any easier, or shorter. To me, a good interview process is an indicator of a good, solid company.

The interview

The application, fairly enough, happens through Stack Overflow Careers.
And there I made my first mistake: the first time I applied, I wrote a simple, nice cover letter. Don't! Write what you feel, do not be restrained! Well, that means writing a polite letter, of course, but you also have to make clear why you are applying, and how much the company you are applying to means to you.
Fortunately, I was able to fix this mistake by applying a second time, this time showing my true enthusiasm.

From that point, the hiring process looks very familiar to Joel's readers: he describes it, as a series of suggestions mainly addressed to interviewers, both in his books and in his blog. It is not exactly the same, but only slightly different (probably it was adjusted to the remote, distributed nature of Stack Overflow employees).

It all begins on the Stack Overflow side; they sort cover letters and resumes (I can only assume they do it in a way similar to what is described here).

Then, if your resume and your cover letter show the right characteristics, you start: you get a first phone interview, where they basically walk through your CV (probably checking that you actually did what you claim), your position, your expectations and your passion for Stack Exchange.

After that, you get (if I got it right) to talk to up to 5 people ("all the way up to Joel").
In my case, I talked with members of both the Core Q&A team and Careers.
The pattern is more or less the same: you and the interviewer chat about your past experiences, then you do one or two coding (or design) exercises, and it ends up with reciprocal Q&A (you get to ask questions to those guys, which is totally awesome by itself).

After each interview the recruiter, who made the first contact with you in the phone interview, gets back to you, to report back how it went (i.e. if you got to the "next level"), ask you how it was, and fix a schedule for the next interview. They are really quick (from 10 minutes to 1 day), which is also very good: it is very stressful to stay there and wait for an outcome, so it is very nice to have them get back to you quickly.

The interviews are also a very nice experience: the interviewers are good, prepared, and they keep it interesting. A couple of times I forgot I was doing an interview, and I actually enjoyed myself!

Sure, before the first interview I was scared as hell. But it went well, and I was happy at the end.
The coding problem was interesting, even if I expected something harder, and I solved it rather quickly so... we did another one. I made a couple of stupid mistakes but hey, coding without making errors on a whiteboard is really difficult. Even Chris Sells got linked lists wrong on a whiteboard in his interview!
The second interview was great. I actually had a lot of fun, both chatting about my experience (I got excited talking about what I did) and solving the problem. This is why I arrived at interview #3 with high spirits and good expectations. But I never got to level 4. Damn.

The outcome

After the 3rd interview, the recruiter got back to me with a different-looking email. It was much more formal: isn't it strange that bad news is always so formal?
I was sad. Shocked. It was like the world had stopped for a second. There it was, my dream, which seemed to become closer to realization, shattered.

I felt a little depressed, but above all perplexed. Why? What went wrong? Nothing seemed to indicate that something was bad. I didn't fail the coding exercise, and I didn't freeze up. Sure, I told myself, there are better developers out there. But still, I had accomplished good things in the past, and I knew that I would have done even greater things at Stack Overflow!

I knew, but did they?

And then, slowly, painfully, as I replayed my last interview over and over again, I realized that it was MY fault.

The biggest mistake

I had not learned a very important lesson: you have to show yourself off. You have to blow them away, and I did not.

Partly this is because I am a humble guy. Not really shy, but I do not like to show off.
But this is not an excuse.

Only now do I realize that an interviewer has slightly more than one hour to understand if you are a good fit for the company (and I should have known better, since I have been on the other side quite a few times in the past years). How can they tell, if you don't help them by saying anything you can in your favor?

I think I dug myself into a hole when I was asked about my current project, and which role I had in it. As I wrote in my CV, I had designed the architecture, deciding which pieces were necessary to process, in a reliable and efficient way, the hundreds of thousands of transactions we handle each day, and coded the core portions of the system myself (from the embedded software on the devices which collect data, to the algorithms that reconstruct the big picture from the partial data coming from the embedded devices). All of this while I was managing the project and the process, explaining to management and developers how to plan, estimate, execute and keep track.
Oh, and shipping a (not perfect, but working) product in 6 months, on a ridiculous deadline imposed by external factors (marketing).

So, when I was asked about this kick-ass project, what did I say?

Let's see...
I was asked specifically about the embedded devices, and why I was the one writing software for them. And my answer was something like: "the only other developer on the team who knew C was overworked, so I stepped in and completed the software".
A very lame answer, in retrospect. Did I mention that I rewrote the contactless card reader driver, reverse engineered the protocol and implemented the whole set of commands, because the driver was only for Windows and we had Linux boxes?
Or that I saved thousands of euros when our embedded device vendor refused to tell us how they "securely store" keys to access smart cards? (I have to admit it is very clever of them to have such a nice vendor lock-in, having your keys stored away in a way you cannot read them... especially if you discover this after you have printed and sent to your customers 100K smart cards).
It was both frightening and exhilarating when I had to figure out how to convince their "secure storage" to extract the keys to memory!
But I "forgot" to mention these facts.

Same story when I was asked about project management. When I arrived, the team scored 1 on the Joel test. One. They did have a CVS (CVS!), and someone was using it. When we shipped, we were at 5; it is still 5 out of 12, but it is not 1. And my answer to the question was: "In September, I was only a senior developer. In January, I was (officially) the project manager, because management was happy with the way I handled the team".
 
And what about web development? For that project I wrote my own Razor-style parser in Scala, for fun and to keep things cleaner and more extensible, but I answered only "I wrote the services to query and present data" and "I do not do much front-end development, but I know the latest standards for HTML and CSS".

Not big things, taken one by one. But it makes a difference. Even if you are nervous, you have to remember to show yourself at your best in any case. It is not an interviewer's job to make you comfortable: you have to perform at your best. Of course, you have to give credit where credit is due (to your team, for example): in an interview, you have to be honest.
But you do not have to be modest.

The second mistake

Do not make assumptions.
During the first interview, I went slowly through the coding questions, carefully explaining my reasoning. This was very much appreciated by the interviewer. It is usually appreciated, because it lets the person at the other end understand how you think.
But it may not always be the case! What if they are looking for speed? For how quickly you grasp the problem? Or if they are looking for style over simplicity?
You can't know, and you cannot make assumptions.

A third problem

Even if you end up performing at your best, it may not be enough.
Of course, if you blow the interviewers away, you are in a good position. But there is still the chance that you are not the best for the job.

This is something common to all good workplaces. Good workplaces benefit from a "positive spiral": they attract great developers because they are great places to work at; and they are great places because they are full of great developers.

So, they can afford to be picky: they can select both on general skills (something a good company always does) and on "platform". You can choose to select only people who are already proficient in the technologies you use:
"...You still get jobs, and employers pay the cost of your getting up to speed on the platform. But when [...] 600 people apply for every job opening, employers have the luxury of choosing programmers who are already experts at the platform in question. Like programmers who can name four ways to FTP a file from Visual Basic code and the pros and cons of each" (from Lord Palmerston on Programming)

What can I do?

Of course, the net is full of advice on how to perform greatly during an interview.
This may only apply to my specific case, but it might help even in your case.
  • Be passionate! Show your passion, and your skills; it does not matter if you are frightened, or if you think it is not important, or even if you think that the interviewer is not interested: you have to leave a sign, an impression. Go for it!
  • Ask! Talk to the interviewer: asking is always a good thing. No-one will see a good question negatively.
  • Show! It is the only way to make sure you and the interviewer are actually on the same page. Show them what you can do!
How can you show it? By... showing it! Do open source. This is something I really regret from my past: I did not contribute to open source projects. I used to think it was not so important: my jobs did not allow it, and even the little personal projects I did in my spare time were always linked to the job at hand, and therefore based on things I was not able to disclose.
Publishing some great open source code, in the technology your favorite company is using, really shows them what you are good for.

# Monday, 24 June 2013

A "Mini" workflow engine, part 4.1: from delegates to patterns

Today I will write just a short post, to wrap up my considerations about how to represent bookmarks.
We have seen that the key to representing episodic execution of reactive programs is to save "continuations", marking points in the program text where to suspend (and resume) execution, together with the program state.

We have seen two ways of representing bookmarks in C#: using delegates/Func<> objects, and using iterators.

Before going further, I would like to introduce a third way, a variation of the "Func<> object" solution.
Instead of persisting delegates (which I find kind of interesting and clever, but limiting) we turn to an old, plain pattern-based solution.

A pattern which recently gained a lot of attention is CQRS (Command Query Responsibility Segregation). It is a pattern that helps to simplify the design of distributed systems, and helps with scaling out. It is particularly helpful in cloud environments (my team at MSR in Trento used it for our cloud project).

Having recently used it, I remembered how CQRS (like many other architectural level patterns) is built upon other patterns; in particular, I started thinking about the Command pattern (the "first half" of CQRS).
It looks like the Command pattern could fit quite well with our model; for example, we can turn the ReadLine activity:

[Serializable]
public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();

    protected override ActivityExecutionStatus Execute(WorkflowContext context)
    {
        context.CreateBookmark(this.Name, this.ContinueAt);
        return ActivityExecutionStatus.Executing;
    }

    void ContinueAt(WorkflowContext context, object value)
    {
        this.Text.Value = (string)value;
        CloseActivity(context);
    }
}
Into the following Command:

/* The Receiver class */
public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();

    protected override ActivityExecutionStatus Execute(WorkflowContext context)
    {
        context.CreateBookmark(this.Name, new CompleteReadLine(this));
        return ActivityExecutionStatus.Executing;
    }

    void ContinueAt(WorkflowContext context, object value)
    {
        this.Text.Value = (string)value;
        CloseActivity(context);
    }
}

public interface ICommand
{
    // (interfaces cannot be marked [Serializable]; the concrete commands below are)
    void Execute(WorkflowContext context);
}

/* The Command for starting reading a line - ConcreteCommand #1 */
[Serializable]
public class StartReadLine : ICommand
{
    private readonly ReadLine readLine;

    public StartReadLine(ReadLine readLine)
    {
        this.readLine = readLine;
    }

    public void Execute(WorkflowContext context)
    {
        readLine.Status = readLine.Execute(context);
    }
}

/* The Command for finishing reading a line - ConcreteCommand #2 */
[Serializable]
public class CompleteReadLine : ICommand
{
    private readonly ReadLine readLine;

    public CompleteReadLine(ReadLine readLine)
    {
        this.readLine = readLine;
    }

    public object Payload { get; set; }

    public void Execute(WorkflowContext context)
    {
        readLine.ContinueAt(context, Payload);
        readLine.Status = ActivityExecutionStatus.Closed;
    }
}
The idea is to substitute the "ContinueAt" delegates with Command objects, which will be created and enqueued to represent continuations.
The execution queue will then call the Execute method, instead of invoking the delegate, and will inspect the activity status accordingly (a sketch follows below).
Why use a pattern, instead of making both Execute and ContinueAt abstract and scheduling the Activity itself through the execution queue? To maintain the same flexibility: in this way, we can have ContinueAt1 and ContinueAt2, wrapped by different Command objects, and each of them can be scheduled based on the outcome of Execute.
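The queue side could look roughly like this (a sketch; CommandExecutor and its members are my names, not code from MiniWorkflow):

using System.Collections.Concurrent;

public class CommandExecutor
{
    private readonly BlockingCollection<ICommand> queue =
        new BlockingCollection<ICommand>();
    private readonly WorkflowContext context;

    public CommandExecutor(WorkflowContext context)
    {
        this.context = context;
    }

    public void Post(ICommand command)
    {
        queue.Add(command);
    }

    // Single-threaded pump: Execute replaces the direct delegate invocation
    public void Run()
    {
        foreach (var command in queue.GetConsumingEnumerable())
        {
            command.Execute(context);
            // here the runtime would inspect the owning activity's Status
            // and, if it is Closed, enqueue the parent's continuation command
        }
    }
}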

Using plain objects also helps with serialization: while Func<> objects and delegates are tightly coupled with the framework internals and therefore, as we have seen, require BinarySerialization, plain objects may be serialized using more flexible and/or faster methods (like XmlSerialization or protobuf-net).

Yes, it is less immediate, more cumbersome, less elegant. But it is also a more flexible, more "portable" solution.
Next time: are we done with execution? Not quite.. we are still missing repeated execution of activities! (Yes, loops and gotos!)

# Monday, 10 June 2013

A "Mini" workflow engine. Part 4: "out-of-order" composite activities, internal bookmarks, and execution.

Last time, we have seen how a sequence of activities works with internal bookmarks to schedule the execution of the next step, introducing a queue to hold these bookmarks.
We left with a question: what if execution is not sequential?
In fact, one of the advantages of an extensible workflow framework is the ability to define custom statements; we are not limited to Sequence, IfThenElse, While, and so on but we can invent our own, like Parallel:

public class Parallel : CompositeActivity
{
    protected internal override ActivityExecutionStatus Execute(WorkflowInstanceContext context)
    {
        foreach (var activity in children)
            context.RunProgramStatement(activity, this.ContinueAt);
        return ActivityExecutionStatus.Executing;
    }

    private void ContinueAt(WorkflowInstanceContext context, object arg)
    {
        // Check every activity is closed...
        foreach (var activity in children)
            if (activity.ExecutionStatus != ActivityExecutionStatus.Closed)
                return;

        CloseActivity(context);
    }
}

And we can even have different behaviours here (like speculative execution: try several alternatives, and choose the one that completes first).

Clearly, a queue does not work with such a statement. We can again use delegates to solve this issue, using a pattern that is used in several other scenarios (e.g. forms management).
When a composite activity schedules a child activity for execution, it subscribes to the activity, to get notified of its completion (when the activity will finish and become "closed"). Upon completion, it can then resume execution (the "internal bookmark" mechanism).

child.Close += this.ContinueAt;
Basically, we are moving the bookmark and (above all) its continuation to the activity itself. It will be the role of CloseActivity to find the delegate and fire it. This is exactly how WF works...

context.CloseActivity() {
  ...
  if (currentExecutingActivity.Close != null)
     currentExecutingActivity.Close(this, null);

}
... and it is the reason why WF needs to keep explicit track of the current executing activity.

Personally, I find the "currentExecutingActivity" part a little unnerving. I would rather say explicitly which activity is closing:

context.CloseActivity(this);
Moreover, I would enforce a call to CloseActivity with the correct argument (this):

public class Activity {
    // Subclasses call this, so the correct activity is always passed along
    protected void CloseActivity(WorkflowInstanceContext context) {
        context.CloseActivity(this); // internal
    }
}

Much better!

Now we have almost everything sorted out; the only piece missing is about recursion. Remember, when we call the continuation directly, we end up with recursive behaviour. This may (or may not) be a problem; but what can we do to make the two pieces really independent?

We can introduce a simple execution queue:

public class ExecutionQueue
{
    private readonly BlockingCollection<Action> workItemQueue = new BlockingCollection<Action>();

    public void Post(Action workItem)
    {
        // enqueue the continuation; the consumer thread will run it
        workItemQueue.Add(workItem);
    }
}
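The consumer side, omitted above, could be as simple as this (a sketch; a method to add to ExecutionQueue):

using System.Threading;

// One dedicated thread drains the queue, so continuations
// are never nested into each other's stack frames.
public void Run(CancellationToken cancellation)
{
    foreach (var workItem in workItemQueue.GetConsumingEnumerable(cancellation))
        workItem();
}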
Now, each time we have a continuation (from an internal or external bookmark), we post it to the execution queue. This makes the code for RunProgramStatement much simpler:

internal void RunProgramStatement(Activity activity, Action<WorkflowContext, object> continueAt)
{
    logger.Debug("Context::RunProgramStatement");

    // Add the "bookmark"
    activity.Closed = continueAt;

    // Execute the activity
    Debug.Assert(activity.ExecutionStatus == ActivityExecutionStatus.Initialized);
    ExecuteActivity(activity);
}
As you may have noticed, we are giving away a little optimization; a direct call when the activity completes immediately would be much faster.
Also, why would we need a queue? We could do just fine using Tasks:

Task.Factory.
    StartNew(() => activity.Execute(this)).
    ContinueWith((result) =>
    {
        // The activity already completed?
        if (result.Result == ActivityExecutionStatus.Closed)
            continueAt(this, null);
        else
        {
            // if it didn't complete, an external bookmark was created
            // and will be resumed. When this happens, be ready!
            activity.OnClose = continueAt;
        }
    });


internal void ResumeBookmark(string bookmarkName, object payload)
{
    ...
    Task.Factory.
        StartNew(() => bookmark.ContinueAt(this, payload));
}

internal void CloseActivity(Activity activity)
{
    if (activity.OnClose != null)
        Task.Factory.
            StartNew(() => activity.OnClose(this, null));
}
This makes things nicer (and potentially parallel too). Potentially, because in a sequential composite activity like Sequence, a new Task will be fired only upon completion of the previous one. Things for the Parallel activity, however, will be different. We will need to be careful with shared state and input and output arguments though, as we will see in a future post.

Are there other alternatives to a (single threaded) execution queue or to Tasks?

Well, every time I see this kind of pause-and-resume-from-here execution, I think of an IEnumerable. For example, I was really fond of the Microsoft CCR and of how it used IEnumerable to make concurrent, data-flow style asynchronous execution easier to read:

    int totalSum = 0;
    var portInt = new Port<int>();

    // using CCR iterators we can write traditional loops
    // and still yield to asynchronous I/O !
    for (int i = 0; i < 10; i++)
    {
        // post current iteration value to a port
        portInt.Post(i);

        // yield until the receive is satisfied. No thread is blocked
        yield return portInt.Receive();               
        // retrieve value using simple assignment
        totalSum += portInt;
    }
    Console.WriteLine("Total:" + totalSum);

(Example code from MSDN)
I used the CCR and I have to say that it is initially awkward, but it is a style that grows on you; it is very efficient, and it was a way of representing asynchronous code with a "sequential" look, well before async/await.

The code for ReadLine would become much clearer:

public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();       

    protected override IEnumerator<Bookmark> Execute(WorkflowInstanceContext context, object value)
    {
        // We need to wait? We just yield control to our "executor"
        yield return new Bookmark(this);

        this.Text.Value = (string)value;

        // Finish!
        yield break;
        // This is like returning ActivityExecutionStatus.Closed,
        // or calling context.CloseActivity();            
    }
}
Code for composite activities like sequence could be even cleaner:

protected internal override IEnumerator<Bookmark> Execute(WorkflowInstanceContext context)
{
    // Empty statement block
    if (children.Count == 0)
    {
        yield break;
    }
    else
    {
        foreach (var child in children)
        {
            foreach (var bookmark in context.RunProgramStatement(child))
                yield return bookmark;
        }                
    }
}

Or, with some LINQ goodness:

return children.SelectMany(child => context.RunProgramStatement(child));
The IEnumerable/yield pair has a very simple, clever implementation; it is basically a compiler-generated state machine. The compiler transforms your class to include scaffolding, memorizes in which state the code is, and jumps execution to the right code when the method is invoked.
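Roughly (and much simplified; the names are invented, and the real generated code is more convoluted), the ReadLine iterator above gets compiled into something like this:

using System;
using System.Collections;
using System.Collections.Generic;

class ReadLineStateMachine : IEnumerator<Bookmark>
{
    private int state; // the "program counter": this is what must be persisted
    private readonly ReadLine owner;
    // (context and value would be hoisted to fields here as well)

    public ReadLineStateMachine(ReadLine owner) { this.owner = owner; }

    public Bookmark Current { get; private set; }
    object IEnumerator.Current { get { return Current; } }

    public bool MoveNext()
    {
        switch (state)
        {
            case 0: // start of Execute, up to the first yield return
                Current = new Bookmark(owner);
                state = 1;
                return true;
            case 1: // resumed after the bookmark: the code after the yield runs here
                state = -1;
                return false; // the yield break
            default:
                return false;
        }
    }

    public void Dispose() { }
    public void Reset() { throw new NotSupportedException(); }
}

The whole "position in the code" boils down to the state field (plus any locals hoisted to fields), which is exactly the information we would like to persist.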

But remember, our purpose is exactly to save and persist this information: will this state machine be serialized as well?
According to this StackOverflow question, the answer seems to be positive, with the appropriate precautions; also, it seems that the idea is nothing new (actually, it is 5 years old!):
http://dotnet.agilekiwi.com/blog/2007/05/implementing-workflow-with-persistent.html

...and it also seems that the general consensus is that you cannot do it reliably.

In fact, I would say you can: even if you do not consider that 5 years have passed and the implementation of yield is still the same, the CLR will not break existing code. At the IL level, the CLR just sees some classes, one (or more) variables that hold the state, and a method with conditions based on those variables. It is not worse (or better) than serializing delegates, something we are already doing.

The iterator-based workflow code sits in a branch of my local git, and I will give it a more serious try in the future; for now, let's play on the safe side and stick to the Task-based execution.

Until now, we mimicked quite closely the design choices of WF; I personally like the approach based on continuations, as I really like the code-as-data philosophy. However the serialization of a delegate can hardly be seen as "code-as-data": the code is there, in the assembly; it's only status information (the closure) that is serialized, and the problem is that you have little or no control over these compiler generated classes.

Therefore, even without IEnumerable, serialization of our MiniWorkflow looks problematic: at the moment, it is tied to BinarySerializer. I want to break free from this dependency, so I will move towards a design without delegates.

Next time: bookmarks without delegates

# Friday, 07 June 2013

A "Mini" workflow engine. Part 3: execution of sequences and internal bookmarks

A couple of posts ago, I introduced a basic ("Mini") workflow engine.
That set of classes used an execution model based on bookmarks and continuations, and I mentioned that it may have problems.

Let's recap some basic points:
  • Activities are like "statements", basic building blocks.
  • Each activity has an "Execute" method, which can either complete immediately (returning a "Closed" status),
// CloseActivity can be made optional, we will see how and why in a later post
context.CloseActivity();
return ActivityExecutionStatus.Closed;

or create a bookmark. A bookmark has a name and a continuation, and it tells the runtime "I am not done, I need to wait for something. When that something is ready, call me back here (the continuation)". In this case, the Activity returns an "Executing" status.
  • There are two types of bookmarks: external and internal.
    External bookmarks are the "normal" ones:
context.CreateBookmark(this.Name, this.ContinueAt);
return ActivityExecutionStatus.Executing;
They are created when the activity needs to wait for an external stimulus (like user input from the command line, the completion of a service request, ...).
Internal bookmarks are used to chain activities: when the something "I need to wait for" is another activity, the activity requests the runtime to execute it and to call back when done:
context.RunProgramStatement(statements[0], ContinueAt);
return ActivityExecutionStatus.Executing;

In our first implementation, external bookmarks are really created, held into memory and serialized if needed:

public void CreateBookmark(string name, Action<WorkflowInstanceContext, object> continuation)
{
    bookmarks.Add(name, new Bookmark
        {
            ContinueAt = continuation,
            Name = name,
            ActivityExecutionContext = this
        });
}
From there, the runtime is able to resume a bookmark when an event that "unblocks" the continuation finally happens
(notice that WF uses queues, instead of single-slot bookmarks, which is quite handy in many cases - but for now, this will do just right):

internal void ResumeBookmark(string bookmarkName, object payload)
{            
    var bookmark = bookmarks[bookmarkName];
    bookmarks.Remove(bookmarkName);
    bookmark.ContinueAt(this, payload);
}
Internal bookmarks instead are created only if needed:

internal void RunProgramStatement(Activity activity, Action<WorkflowInstanceContext, object> continueAt)
{
    var result = activity.Execute(this);
    // The activity already completed?
    if (result == ActivityExecutionStatus.Closed)
        continueAt(this, null);
    else
    {
        // Save for later...
        InternalBookmark = new Bookmark
        {
            ContinueAt = continueAt,
            Name = "",
            ActivityExecutionContext = this
        };
    }
}
But who resumes these bookmarks?
CreateBookmark and ResumeBookmark are nicely paired, but RunProgramStatement seems to have no counterpart.

When do we need to resume an internal bookmark? When the activity passed to RunProgramStatement completes. This seems like a perfect job for CloseActivity!
public void CloseActivity()
{
    logger.Debug("Context::CloseActivity");
    // Someone just completed an activity.
    // Do we need to resume something?
    if (InternalBookmark != null)
    {
        logger.Debug("Context: resuming internal bookmark");
        var continuation = InternalBookmark.ContinueAt;
        var context = InternalBookmark.ActivityExecutionContext;
        var value = InternalBookmark.Payload;
        InternalBookmark = null;
        continuation(context, value);
    }
}

However, this approach has two problems.
The first one is not really a problem, but we need to be aware of it; if the activities chained by internal bookmarks terminate immediately, the execution of our "sequence" of activities results in recursion:

public class Print : Activity
{
    protected override ActivityExecutionStatus Execute(WorkflowInstanceContext context)
    {
        Console.WriteLine("Hello");
        context.CloseActivity();
        return ActivityExecutionStatus.Closed;
    }
}

var sequence = new Sequence();
var p1 = new Print();
var p2 = new Print();
var p3 = new Print();
sequence.Statements.Add(p1);
sequence.Statements.Add(p2);
sequence.Statements.Add(p3);
Its execution will go like:

sequence.Execute
-context.RunProgramStatement
--p1.Execute
--sequence.ContinueAt
---context.RunProgramStatement
----p2.Execute
----sequence.ContinueAt
-----context.RunProgramStatement
------p3.Execute
------sequence.ContinueAt
-------context.CloseActivity (do nothing)
The second problem is more serious; let's consider this scenario:

var s1 = new Sequence();
var p1 = new Print();
var s2 = new Sequence();
var r2 = new Read();
var p3 = new Print();
s1.Statements.Add(p1);
s1.Statements.Add(s2);
s2.Statements.Add(r2);
s2.Statements.Add(p3);

And a possible execution which stops at r2, waiting for input:

s1.Execute
-context.RunProgramStatement
--p1.Execute
--s1.ContinueAt
---context.RunProgramStatement(s2, s1.ContinueAt)
----s2.Execute
-----context.RunProgramStatement(r2, s2.ContinueAt)
------r2.Execute
------(External Bookmark @r2.ContinueAt)
------(return Executing)
-----(Internal Bookmark @s2.ContinueAt)
-----(return)
----(return Executing)
---(Internal Bookmark @s1.ContinueAt) <-- Oops!

What happens when the external bookmark r2.ContinueAt is executed?
r2 will complete and call context.CloseActivity. This will, in turn, resume the internal bookmark.. which is not set to s2.ContinueAt, but to s1.ContinueAt.

We can avoid this situation by observing how RunProgramStatement/CloseActivity really look like the call/return instructions every programmer is familiar with (or, at least, every C/asm programmer is familiar with). Almost every general-purpose language uses a stack to handle function calls (even if this is an implementation detail); in particular, the stack is used to record where the execution will go next, by remembering the "return address", the point in the code where to resume execution when the function returns. Sounds familiar?

call f                                  RunProgramStatement(a)
- pushes current IP on the stack        - calls a.Execute
- jumps to the entry of the function    - saves continueAt in an internal bookmark

... at the end of f ...                 ... at completion of a ...

return                                  CloseActivity
- pops return address from stack        - takes continueAt from the internal bookmark
- jumps to return address               - executes continueAt

Notice that we swapped the jump to execution and the saving of the return point; I did it to perform an interesting "optimization" (omitting the bookmark if a.Execute terminates immediately; think of it as the equivalent of inlining the function call - and not really an optimization, as it comes with its problems, as we have just seen).

Is it a safe choice? Provided that we are swapping the stack used by function calls with a queue, yes.
In standard, stack-based function calls, what you have to do is pinpoint where to resume execution when the function being called returns. The function being called (the callee) can in turn call other functions, so points to return to (return addresses) need to be saved in a LIFO (Last In, First Out) fashion. A stack is perfect.

Here, we provide a point where to resume execution directly, and this point has no relation to the return point of execute (i.e. where we are now).
If the call to Execute does not close the activity, that means that execution is suspended. A suspended execution can, in turn, lead to the suspension of other parent activities, like in our nested sequences example. Now, when r2 is resumed, it terminates and calls its "return" (CloseActivity). The points to return to, in this case, need to be processed in a FIFO (First In, First Out) fashion.

Next time: a sequence is not the only kind of possible composite activity. Does a queue satisfy additional scenarios?

# Sunday, 02 June 2013

A "Mini" workflow engine. Part 2: on the serialization of Bookmarks

Last time, we have seen that Activities are composed in a way that allows them to be paused and resumed, using continuations in the form of bookmarks:
[Serializable]
internal class Bookmark
{
    public string Name { get; set; }
    public Action<ActivityExecutionContext, object> ContinueAt { get; set; }  
    public ActivityExecutionContext ActivityExecutionContext { get; set; }
}
A Bookmark has a relatively simple structure: a Name, a continuation (in the form of an Action) and a Context. Bookmarks are created by the Context, and the continuation is always a ContinueAt method implemented in an activity. Thus, serializing a Bookmark (and, transitively, its Context) should lead to the serialization of an object graph that holds the entire workflow code, its state and its "Program Counter".

However, serializing a delegate sounds kind of scary to me: are we sure that we can safely serialize something that is, essentially, a function pointer?
That's why I started looking for an answer to a particular question: what happens when I serialize a delegate, a Func<T> or an Action<T>?

I found some good questions and answers on StackOverflow; in particular, a question about the serialization of lambdas and closures (related to prevalence, a technique that implements transactional checkpoints and redo journals in terms of serialization) and a couple of questions about delegate (anonymous and not) serialization (1, 2 and 3).

It looks like:
  • delegate objects (and Func<>) are not XML serializable, but they are BinarySerializable.
  • even anonymous delegates, and delegates with closures (i.e. anonymous classes) are serializable, provided that you do some work to serialize the compiler-generated classes (see for example this MSDN article, and this blog post)
  • still, there is a general agreement that the whole idea of serializing a delegate looks very risky

BinarySerializable looks like the only directly applicable approach; in our case, we are on the safe side: every function we "point to" from delegates and bookmarks is in classes that are serialized as well, and the objects and data accessed by the delegates are serialized too (they access only data that is defined in the activities themselves, which is saved completely in the context). BinarySerialization will save everything we need to pause and resume a workflow in a different process.

I implemented a very simple persistence mechanism for MiniWorkflow instances using the BinarySerializer, and it does indeed work as expected.
You can grab the code on github and see it yourself.
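The persistence itself boils down to a few lines around the BinaryFormatter (a simplified sketch of the approach; the type names are the ones from the previous posts):

using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

public static class WorkflowPersistence
{
    public static void Passivate(string path, ActivityExecutionContext instance)
    {
        // Saves the whole object graph: activities, state, and the
        // delegates held by the bookmarks
        using (var stream = File.Create(path))
            new BinaryFormatter().Serialize(stream, instance);
    }

    public static ActivityExecutionContext Resume(string path)
    {
        using (var stream = File.OpenRead(path))
            return (ActivityExecutionContext)new BinaryFormatter().Deserialize(stream);
    }
}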

However, I would like to provide an alternative, less "risky" serialization mechanism for MiniWorkflow. I do not like the idea of not knowing how my Bookmarks are serialized; I would like something more "visible", to understand whether what I need really goes from memory to disk, and to be more in control.
Also, I am really curious to see how WF really serializes bookmarks; workflows are serialized/deserialized as XAML, but a workflow instance is something else. In fact, this is a crucial point: in the end, I will probably end up using WF4 (MiniWorkflow is -probably- just an experiment to understand the mechanics better), but I will need to implement serialization anyway. I do not want to take a direct dependency on SQL Server, so I will need to write a custom instance store, an alternative to the standard SQL instance store.

I will explore the alternatives we have to a BinarySerializer; but before digging into this problem, I want to work out a better way to handle the flow of execution. Therefore, next time: calling continuations - recursion, a work item queue or... call/cc?

# Saturday, 01 June 2013

A "Mini" workflow engine. Part 1: resumable programs

WF achieves durability of programs in a rather clever way: using a "code-as-data" philosophy, it represents a workflow program as a series of classes (activities), whose state of execution is recorded at predefined points in time.

It then saves the whole program (code and status) when needed, through serialization.
The principle is similar to the APM/IAsyncResult pattern: an async function says "do not wait for me to complete, do something else" and to the OS/Framework "Call me when you are done here (the callback), and I will resume what I have to do". The program becomes "thread agile": it does not hold onto a single thread, but it can let a thread go and continue later on another one.

Workflows are more than that: they are process agile. They can let their whole process terminate, and resume later on another one. This is achieved by serializing the whole program on a durable storage.
The state is recorded using continuations; each activity represents a step, a "statement" in the workflow. But an activity does not call directly the following statement: it executes, and then says to the runtime "execute the following, then get back to me", providing a delegate to call upon completion, a continuation.
The runtime sees all these delegates, and can either execute them or save them. It can use the delegate to build a bookmark.  Serializing the bookmark will save it to disk, along with all the program code and state (basically, the whole workflow object graph plus one delegate that holds a sort of "program pointer" (the bookmark), where execution can be resumed).

The same mechanism is used for a (potentially long) wait for input: the activity tells the runtime "I am waiting for this, call me back here when done" using a delegate, and the runtime can use it to passivate (persist and unload) the whole workflow program.

I found it fascinating, but I wanted to understand better how it worked, and I was dubious about one or two bits. So, I built a very limited, but functional, workflow runtime around the same principles. I have to say that using WF3 (and WF4 from early CTPs too) I already had quite a good idea of how it could have been implemented, but I found Dharma Shukla and Bob Schmidt's "Essential Windows Workflow Foundation" very useful to cement my ideas and fill some gaps. Still, building a clone was the best way to fill in the last gaps.

The main classes are:
  • an Activity class, the base class for all the workflow steps/statements; a workflow is a set of activities;
  • a WorkflowHandle, which represents an instance of a workflow;
  • a Bookmark, which holds a continuation, a "program pointer" inside the workflow instance (sketched below);
  • a Context, which holds all the bookmarks for the current workflow instance;
  • a WorkflowRuntime, which handles the lifecycle of a WorkflowHandle and dispatches inputs (resuming the appropriate Bookmark in the process).
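The Bookmark class never gets its own listing in this post; here is a minimal sketch, reconstructed from how it is used in the snippets that follow (the actual class on github may differ slightly):

[Serializable]
public class Bookmark
{
    // The continuation: where execution resumes
    public Action<ActivityExecutionContext, object> ContinueAt { get; set; }
    public string Name { get; set; }
    public ActivityExecutionContext ActivityExecutionContext { get; set; }
    // Data handed to the continuation on resumption (e.g. user input)
    public object Payload { get; set; }
}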

A basic Activity is pretty simple:

[Serializable]
public abstract class Activity
{
   public Activity()
   {
     this.name = this.GetType().Name;
   }

   abstract protected internal ActivityExecutionStatus Execute(ActivityExecutionContext context);

   protected readonly string name;
   public string Name
   {
     get { return name; }
   }
}
It just has a name, and an "Execute" method that will be implemented by subclasses; it is how activities are composed that is interesting:

[Serializable]
public class Sequence : Activity
{
    int currentIndex;
    List<Activity> statements = new List<Activity>();
    public IList<Activity> Statements
    {
        get { return statements; }
    }

    protected internal override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        currentIndex = 0;
        // Empty statement block
        if (statements.Count == 0)
        {
            context.CloseActivity();
            return ActivityExecutionStatus.Closed;
        }
        else
        {
            context.RunProgramStatement(statements[0], ContinueAt);
            return ActivityExecutionStatus.Executing;
        }
    }

    public void ContinueAt(ActivityExecutionContext context, object value)
    {
        // If we've run all the statements, we're done
        if (++currentIndex == statements.Count) 
            context.CloseActivity();
        else // Else, run the next statement
            context.RunProgramStatement(statements[currentIndex], ContinueAt);
    }        
}
Now the Execute method needs to call Execute on its child activities, one at a time. But it does not do it in a single step, looping through the statements list and calling their Execute: that would not let the runtime handle them properly. Instead, the Execute method says to the runtime "run the first statement, then call me back". This is a pattern you see both in WF3 and WF4, even if in WF3 it is more explicit. It is called an "internal bookmark": you ask the runtime to set a bookmark for you, execute an activity, and the activity resumes the bookmark once done:
// This method says: the next statement to run is this. 
// When you are done, continue with that (call me back there)
internal void RunProgramStatement(Activity activity, Action<ActivityExecutionContext, object> continueAt)
{
    // This code replaces
    // context.Add(new Bookmark(activity.Name, ContinueAt));
    var result = activity.Execute(this);
    // The activity already completed?
    if (result == ActivityExecutionStatus.Closed)
        continueAt(this, null);
    else
    {
        // Save for later...
        InternalBookmark = new Bookmark
        {
            ContinueAt = continueAt,
            Name = "",
            ActivityExecutionContext = this
        };
    }
}
When an Activity completes, returning ActivityExecutionStatus.Closed or calling CloseActivity explicitly, the runtime looks for a previously set bookmark and, if found, resumes execution from it:
public void CloseActivity()
{
    // Someone just completed an activity.
    // Do we need to resume something?
    if (InternalBookmark != null)
    {
        var continuation = InternalBookmark.ContinueAt;
        var context = InternalBookmark.ActivityExecutionContext;
        var value = InternalBookmark.Payload;
        InternalBookmark = null;
        continuation(context, value);
    }
}
Do you already spot a problem with this way of handling the continuations and the control flow of the workflow program? Yes: it is recursive, and the recursion is broken only when an Activity explicitly sets a bookmark. In that case the runtime simply returns to its main control point, waits for input, and resumes the bookmark after receiving it.
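To make the recursion visible, imagine a toy activity (hypothetical, not part of MiniWorkflow) that completes synchronously; chain many of them in a Sequence, and every completion re-enters the parent's ContinueAt through RunProgramStatement, so the call stack grows with each statement:

[Serializable]
public class StackDepthProbe : Activity
{
    protected internal override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        // Each synchronous completion makes RunProgramStatement call the
        // parent's ContinueAt directly, which runs the next statement,
        // which calls back again... so this number keeps growing.
        Console.WriteLine(new System.Diagnostics.StackTrace().FrameCount);
        return ActivityExecutionStatus.Closed;
    }
}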
A bookmark breaks this chain. For example, in this "ReadLine" activity:
[Serializable]
public class ReadLine : Activity
{
    public OutArgument<string> Text = new OutArgument<string>();       

    protected internal override ActivityExecutionStatus Execute(ActivityExecutionContext context)
    {
        //Waits for user input (from the command line, or from 
        //wherever it may come
        context.CreateBookmark(this.Name, this.ContinueAt);
        return ActivityExecutionStatus.Executing;
    }

    void ContinueAt(ActivityExecutionContext context, object value)
    {
        this.Text.Value = (string)value;
        context.CloseActivity();
    }
}
When the bookmark is created the runtime stores it, and then the "Execute" method returns without calling RunProgramStatement. The runtime knows the Activity is waiting for user input, so it does not call any continuation: it stores the bookmark and waits for input.
public void CreateBookmark(string name, Action<ActivityExecutionContext, object> continuation)
{
    // var q = queuingService.GetWorkflowQueue(name);
    // q += continuation;
    bookmarks.Add(name, new Bookmark
        {
            ContinueAt = continuation,
            Name = name,
            ActivityExecutionContext = this
        });
} 
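Putting the pieces together, here is a hypothetical driver; the WorkflowRuntime surface I am using (RunWorkflow, ResumeBookmark) is my sketch of how the pieces fit, not necessarily the exact API of the code on github:

var program = new Sequence();
var read = new ReadLine();
program.Statements.Add(read);

var runtime = new WorkflowRuntime();
var handle = runtime.RunWorkflow(program); // runs until ReadLine sets its bookmark

// Later (possibly much later), input arrives; the runtime dispatches it,
// resuming the stored continuation with the payload:
runtime.ResumeBookmark(handle, "ReadLine", Console.ReadLine());
Console.WriteLine(read.Text.Value);        // the value captured by ContinueAt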
The next logical step is to use Bookmarks to save the program state to durable storage. This led me to ask what turned out to be a rather controversial question: is it possible, or advisable, to serialize continuations (in any of the .NET forms in which they appear, i.e. delegates, EventHandlers, Func<> or Action<>)?

Therefore, next time: serializing Bookmarks

# Friday, 31 May 2013

Building a Sharepoint "clone"

Well, from my previous post it should be clear that it will not be a "full" Sharepoint clone :) That would be a daunting task!
Anyway, for the kind of application I have in mind, I need to replicate the following functionalities:
  • Everything is accessible via a web browser
  • The basic data collection will be a list of "things" (more on that later)
  • It will be possible to perform complex, collaborative, persisted, long-running operations to create and modify "things" and "lists of things"
  • Operations, lists and "things" need to be secured (have different access levels, with permissions assignable to users and roles)
  • It is possible to define a "thing" (and lists of them), but only for users in the correct role
  • It is possible to define operations using predefined basic blocks ("steps")
  • Steps will have automatically generated UI to gather user input, if necessary
For example: one of these "things" may be a project proposal.

Someone starts a project proposal, fills in some fields, and submits it to a manager. The manager can send it back, asking for more details. Then someone is assigned to it, to explore its feasibility, for example. And many other steps can follow.

Each of these steps fills out a little part of the proposal (our "thing"). I will discuss how to build "things", and which shape they will have, later; for now, let's focus on steps.
First of all, note that I have not used the word "workflow" until now.
Yes, this kind of step-by-step operation falls nicely within the definition of a workflow; if you look at company documentation, or even if you ask an employee to describe the procedure for submitting a proposal, it is described graphically, with boxes, arrows and diamonds: a flowchart.

WF (the .NET workflow engine) reinforces this feeling by using a graphical language very similar to flowcharts as one of the ways to author a workflow. Besides the familiar authoring experience (which is, unfortunately, not without shortcomings in SP), WF adds many features we want for this kind of operation. Above all, we have resilience-durability-persistence. This feature is really useful in our scenario: the delays introduced by human interaction are very long, and a way to persist and resume programs enables both resilience and scalability (programs can pause and resume, and migrate from one machine to another, too).

The plan (for this blog):
- A series of posts about workflows
- Comparison with an interesting alternative: dataflows
- Building and persisting "things" (Dynamic types)
- Some nice ASP.NET MVC features used to put everything together (and make the actual users of all of this happy!)

Everything is accompanied by code.
I will try to put most of what I am discussing here on github, pushing new stuff as I blog about it.
Eventually, I hope to push all the code I wrote (and will write) for this project to github, but I will have to discuss it with my "investor" before doing that...

Next time: Part 1 of the workflow series!