# Saturday, August 09, 2014

Introducing my old, new job

Today I wanted to blog about some "fancy" (fun and unusual) work I had to do with Android. Then, I realized that I had to explain how to do authentication properly using smart-cards. No problem, I can write a post about that!
Then I realized I have never properly introduced my "new" job here, and especially the project for which I was hired (which is why I am using smart-cards...). "New" and "old" at the same time, because next month it will be three years since I started working here.

"Here" is Servizi ST, a small company which is part of SAD Local Transport, which is the biggest public transport company in South Tirol. The role of Servizi ST is to provide the software (ANY software) needed by SAD and by all the other transport companies in South Tirol (and to the national railway company (Trenitalia), too) to run its business.
This means a LOT of software, especially for such a small company: from the on-board systems (PCs with GPS, UMTS, WiFi, etc.) used to track the vehicle (or train) and for the fare system, to the ticket offices, to billing and information, website, statistics, tracking and diagnosing issues, asset management...
It has been an incredible time, during which my team and I built from (almost) zero all the software needed to support a completely new traveling model. And by completely new, I mean completely: everything changed, both from the user's perspective and from the technological one.

The South Tirol local government issues a smart-card for subscriptions to the regional public transport network. It can be used anywhere: trains, cable-cars, buses.
The interesting bit is that the subscription itself is free: you pay as you go, using a distance-based fare schema. You use it every day? Cool. Just on weekends? Great, you do not have to pay for the other 5 days as well.
Also, the subscription can be linked to a bank account (and SAD will issue you an invoice every two months, listing the trips you made during that interval), or you can have a "prepaid" schema, where you "put some money" on your card.
Except you are not putting any money on the card: you put it into a virtual account. This way of using the smart-card is not common in public transport systems; usually, you store a counter (and, therefore, "money") on the card itself, and the card thus becomes a substitute for paper tickets.
Instead, we use the card as a "token", a way for the system to recognize you. This is quite common in many other domains; think about access control at big companies or car parks.

This approach has pros and cons; our goal as software developers was to highlight the pros and "smooth" the cons.
For example: you, as a user, cannot immediately "see" precisely how much you still have on your card. Because you have nothing on your card. On the one hand, this is great: you just need exactly the amount to complete your trip, nothing more (the alternative in money-on-card systems is to let you travel only if you have enough money to cover the cost of the longest trip on the entire network, so you can always pay no matter where you get off).
On the other hand, you do not want your users to feel that they do not have "control" over their own money.
Therefore, we needed (and wanted!) to provide a complete infrastructure to support the user, the legacy applications (like the ticket offices and automatic ticket machines), and third parties to get information about the user's account, in a precise, reliable and secure way.

After a "rushed" launch (we got this thing out of the door and working at the 14th of February 2012, by pulling too many stops IMO... even if everything worked out in the end!), we had time to build and refine the missing parts. Now this infrastructure is complete, and it is really something I can be proud of!

# Wednesday, February 05, 2014

Lambdas in Java 8

Today I will introduce a feature of the upcoming Java 8, a programming language feature I really like: lambdas. Support for lambdas and higher order (HO) functions is what made me switch to Scala when I went back to JVM-land a couple of years ago: after tasting lambdas in C#, I wasn't able to go back to a programming language without them.
Now Java 8 promises to bring them to Java: something I had waited a long time for!

Lambdas, higher order functions... what?

First thing: do not get scared by terminology. The whole concept is borrowed from functional programming languages, where functions are king (just like objects are king in Object Oriented Programming).
Functional programmers love to define every theoretical aspect in detail, hence the fancy words.
But here I want to keep it simple; the key idea behind lambdas and HO functions is that you can pass functions as arguments to other functions.

Functional Java

Passing functions around is incredibly useful in many scenarios, but I want to focus on the very best one: handling collections.

Suppose we have a collection of Files, and we want to perform a very common operation: go through these files and do something with them. Perhaps we want to print all the directories:

static void printAllDirectoriesToStdout(List<File> files) {
  for (File f : files) {
      if (f.isDirectory()) {
          System.out.println(f.getName());
      }
  }
}

Or print all the “big” files:

static void printBigFilesToStdout(List<File> files) {
  for (File f : files) {
      // f.length() is the file size; getTotalSpace() would return the partition size
      if (f.length() > threshold) {
          System.out.println(f.getName());
      }
  }
}

Have you spotted the problem? Yes, there is some code duplication.
In Java, we already have a tool to work around it: object orientation.

interface IFilter<T> {
   boolean isOk(T file);
}

public static void printFilesToStdout(List<File> files, IFilter<File> filter) {
  for (File f : files) {
     if (filter.isOk(f)) {
         System.out.println(f.getName());
     }
  }
}

Now we can implement our original functions using specific “Filter” classes, or even anonymous classes:

printFilesToStdout(files, new IFilter<File>() {
  public boolean isOk(File f) { return f.isDirectory(); }
});

This is already quite close to passing a function around; instead, you pass a functional interface: an interface that contains only one abstract method. Anonymous classes derived from functional interfaces are just single inline functions... lambdas!

printFilesToStdout(files, (File f) -> f.isDirectory());

Aggregate Operations

So far... cool! Shorter, more general, more readable.
But what really matters is the ecosystem built around this simple concept. It is possible to write general functions that accept other functions, and the new JDK already provides the most useful ones: Aggregate Operations.

As an example, take our “IFilter” functional interface. With it, you can build a “filter” function:

static <T> Collection<T> filter(Collection<T> c, IFilter<T> filter) { … }

which is one of these Aggregate Operations. The Stream class defines it and many others as member functions, making them even easier to compose through chaining.
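A possible implementation of such a filter function, in pre-Java 8 style, could look like the sketch below (an illustration only; the real JDK 8 equivalent is Stream.filter, which takes a java.util.function.Predicate):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// The functional interface from the text.
interface IFilter<T> {
    boolean isOk(T item);
}

public class Aggregates {
    // Returns a new collection containing only the elements accepted by the filter.
    public static <T> Collection<T> filter(Collection<T> c, IFilter<T> filter) {
        List<T> result = new ArrayList<T>();
        for (T item : c) {
            if (filter.isOk(item)) {
                result.add(item);
            }
        }
        return result;
    }
}
```

With Java 8, you can then call it with a lambda: Aggregates.filter(files, f -> f.isDirectory()).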

I just want to give you a hint of how they are used for complex collection processing.
Do you want to get all the big tar files, open them, print every word in each text file you find inside?

files.stream().
      filter(f -> f.length() > 100 && isTar(f)).
      flatMap(f -> openTarStream(f).getEntries()).
      filter(entry -> isText(entry.getFile())).
      flatMap(entry -> readAllLines(entry.getFile())).
      flatMap(line -> Stream.of(line.split(" "))).
      forEach(word -> System.out.println(word));

Compact, clean.

Now try to do it in Java 7... look at the indentation! It is easy to get lost in the mechanics of collection handling.
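For comparison, here is roughly what the same processing looks like in Java 7 style. This is only a sketch: since isTar and openTarStream above are hypothetical helpers, the tar entries are stubbed with an in-memory map from entry name to its lines.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Java 7-style equivalent of the stream pipeline above (tar handling stubbed
// with an in-memory map, because isTar/openTarStream are hypothetical helpers).
public class Java7Way {
    static List<String> collectWords(Map<String, List<String>> entries) {
        List<String> words = new ArrayList<String>();
        for (Map.Entry<String, List<String>> entry : entries.entrySet()) {
            if (entry.getKey().endsWith(".txt")) {            // filter: text entries only
                for (String line : entry.getValue()) {        // flatMap: entry -> lines
                    for (String word : line.split(" ")) {     // flatMap: line -> words
                        words.add(word);
                    }
                }
            }
        }
        return words;
    }
}
```

Every extra condition costs another level of nesting; the stream version keeps each step on its own line instead.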

We have only scratched the surface of what can be done with lambdas; together with aggregate operations and generics, they are a very powerful tool that will make most data transformation operations easy. And they are very efficient, too! But this is something for another post.

# Sunday, January 12, 2014

Integration of Razor and AngularJS (2)

I found the technique I showed in my previous post particularly useful when you need to "migrate" some fields from server-side to client-side. By "migrate" I mean: take fields which were initially needed only on the server side, for page rendering, and make them available on the client side too, in order to access and manipulate them through JavaScript (for user interaction through Angular, in this case).

Suppose you have a page to display some person details:

@model PersonVM

All is well, until you decide that you want to provide a way to edit the description. You do so by adding an input field (for example), plus an "edit" button:

<button ng-click="'editing = true'" ... >Edit</button>
<input ng-model="description" ng-show="editing" ... >

This modification requires you to move the person's description from the server-side ViewModel to the client-side $scope. That means changing the controller code as well, since Razor does not need @Model.Description anymore; instead, you need to pass it (through JSON) to $scope.description:

$http.get("/Person/DetailsData" + $scope.personId).
            success(function (data, status) {
                $scope.description = data. description;

This is not bad, and the code still stays readable and compact using the "pattern" I described a couple of posts ago. But I'd rather not touch the controller at all.
Using the couple of directives I wrote, it is as simple as:

<button ng-click="'editing = true'" ... >Edit</button>
<input ng-model="description" ng-show="editing" value='@Model.Description' ... >


<button ng-click="'editing = true'" ... >Edit</button>
<input ng-model="description" ng-show="editing" ... >
<span ng-model="description" ng-content>@Model.Description</span>

No need for an additional $http request, nor for another controller action.
The "value" attibute on the input, or the element text of the span, will be written out by Razor as before; the new directives will use the content of the value attribute (or the element text) to initialize $scope.description. And without the need to do another round-trip to the server!
You don't even need to change the controller and split model fields between ViewResult Person::Details and JsonResult Person::DetailsData.

# Saturday, January 11, 2014

Some programming jokes...

I found them hilarious :D

  • A UDP packet walks into a bar, no one acknowledges him
    Alternate version: A UDP packet walks into...
  • A guy walks into a bar and asks for 1.4 root beers. The bartender says "I'll have to charge you extra, that's a root beer float". The guy says "In that case, better make it a double."
  • Java and C were telling jokes. It was C's turn, so he writes something on the wall, points to it and says "Do you get the reference?" But Java didn't.
  • Why C gets all the girls and Java doesn't? Because C doesn't treat them like objects.
    • C could at least give Java some pointers...
  • Knock knock
    Who's there?
    ...(very long pause)...
    Java.
  • Knock knock
    Branch prediction.
    Who's there?
  • How many SEO engineers does it take to change a light bulb, lightbulb, globe, lamp, sex, xxx
  • 99 little bugs in the code
    99 bugs in the code
    take one down, patch it around
    117 bugs in the code

# Thursday, January 09, 2014

Integration of Razor and AngularJS

In my previous post I described different ways of using AngularJS with ASP.NET MVC; in particular, some ways of sharing view data (the ViewModel) between ASP.NET and AngularJS.

While the last combination I used (ng-init for the ID(s) plus ajax requests for the bulk of data) satisfies me, there are three drawbacks with this approach:
  • the need to maintain two separate ViewModels: the ASP.NET @Model for Razor, which I put in *VM classes, and the Json model to initialize the AngularJS $scope, which I put in *DTO classes.
  • two separate calls to the web server, one for the html content, one for the Json data
  • the data for the $scope comes from an ajax request, and therefore it is difficult (impossible?) to make it available to "low-tech" clients; this means both browsers with no JavaScript and, more importantly in some cases, search engine robots.

One idea could be to put all the data inside the html page itself, by embedding everything in ng-init, in a Json object inside <script> or CDATA... or (why not?) in the HTML itself!
For example, using "value" for inputs, the element text for paragraphs, and so on:

<input ... value="Some value">
<p ... >Some text</p>

Disclaimer: I know this is not "the Angular way"; the idiomatic way is to have data only in the $scope, not in the view. There are good reasons for it: clarity and testability are the ones at the top of my head. But mixing two paradigms may require compromises. Besides, it was fun to see if I could accomplish what I had in mind.

The way I did it was to create a couple of custom directives. For input, I did not create a new one, as input already has a perfectly natural place (the "value" attribute) where I could put my initial value.

<input type="text" ng-model="someVariable" value="Some Value">

The purpose is to initialize $scope.someVariable with "Some Value", so that it can be used elsewhere, as in:

<p>{{someVariable}}</p>

The binding should be bi-directional, too. That's quite easy with input: I just had to redefine the "input" directive:

app.directive('input', function ($parse) {
  return {
    restrict: 'E',
    require: '?ngModel',
    link: function (scope, element, attrs) {
      if (attrs.ngModel && attrs.value) {
        $parse(attrs.ngModel).assign(scope, attrs.value);
      }
    }
  };
});
For the element text I need a different directive; I wanted to write something like:

<span ng-model="anotherVariable" ng-content>Some nice text</span>

Whenever the "ng-content" directive is present, I wanted to initialize the model ("anotherVariable") with the element text. I wanted the binding to be by-directional too.

It wasn't much more difficult:

app.directive('ngContent', function ($parse) {
  return {
    restrict: 'A',
    require: '?ngModel',
    link: function (scope, element, attrs) {
      if (attrs.ngModel && element.text()) {
        $parse(attrs.ngModel).assign(scope, element.text());
      }
      scope.$watch(attrs.ngModel, function (value) {
        element.text(value);
      });
    }
  };
});

The "bi-directionality" is given by $watch; when the model changes, the element text is updated as well.
You can find a complete example that shows this behaviour at this plunker.

Enjoy! :)

# Wednesday, January 08, 2014

Mixing AngularJS and ASP.NET MVC

MiniPoint is a web application created using a mix of AngularJS and MVC.

In the past I have used JavaScript libraries (in particular, jQuery) in conjunction with other web frameworks (ASP.NET pages, mainly).
Before beginning to work on MiniPoint, more or less 5 months ago, I needed to create a very simple example application to show how an external developer could use our OAuth provider for authentication.

A former colleague pointed me to AngularJS, and I was very impressed by it.

Let me put it straight: I like jQuery, and I think it's fantastic for two reasons: it just works everywhere, taking upon itself the burden of cross-browser scripting, and it lets you work with your existing web pages and improve them, significantly and progressively.
But for complex web applications, AngularJS is just... so much cleaner!

The philosophy is very different (you should read this excellent answer if you have not read it yet); as highlighted in it, you don't design your page, and then change it with DOM manipulations (that's the jQuery way). You use the page to tell what you want to accomplish. It's much closer to what you would do in XAML, for example, supporting very well the MVVM concept.
This difference between jQuery and AngularJS actually reminds me of WinForms (or old Java/Android) programming VS. WPF: the approach of the former is to build the UI (often with a designer) and then change it through code. The latter leverages the power of data-binding to declare in the view what you want to accomplish, what is supposed to happen.
The view directly presents the intent.

The first AngularJS application I created was a pure AngularJS application: all the views were "static" (served by the web server, not generated) and all the code, all the behaviour (routes, controllers, ...) was in AngularJS and in a bunch of REST services I built with ServiceStack. AngularJS and ServiceStack seem made for each other: the approach is very clean, and it works really well if you have a rich SPA (Single Page Application). It is different from what I was used to, and I needed some time to wrap my head around it (I kept wishing I could control my views, my content, on the server).

So, for the next, bigger project I said "well, let's have the best of both worlds: server-side generated (Razor) views with angular controllers, to have more control over the content; MVC controllers to serve the Views and their content".

Seems easy, but it has a couple of issues. jQuery is great for pages produced by something else (ASP.NET): you design a page, and then you make it dynamic using jQuery. This is because jQuery was designed for augmentation, and has grown incredibly in that direction. AngularJS excels at building JavaScript applications, entirely in Angular.

It is possible to integrate AngularJS with MVC, of course, but you have different choices of how to pass, or better transform, data between an MVC Controller, the (View)Model it generates, and the AngularJS controller with its "viewmodel" (the $scope). Choosing the right one is not always easy.

In plain MVC (producing a page with no client-side dynamics) you have a request coming in and (through routing) triggering an action (a method) on a Controller. The Controller then builds a Model (the ViewBag, or a typed (View)Model), selects a View and passes both the (View)Model and the View to a ViewEngine. The ViewEngine (Razor) uses the View as a template, and fills it with the data found in the (View)Model. The resulting html page is sent back to the client.

Why am I talking about (View)Model, instead of just plain Model? Because this is what I usually end up creating for all but the simplest Views. The data model, which holds the so-called "business objects", the ones you are going to persist in your database, is often different from what you are going to render on a page. What you want to show on a page is often the combination of two (or more) objects from the data model. There is a whole pattern built on this concept: MVVM. The pattern is widespread in frameworks with two-way binding mechanisms (Knockout.js, WPF, Silverlight); many (me included) find it beneficial even in frameworks like MVC; after all, the ViewBag is exactly that: a bag of data, drawn from your business objects, needed by the ViewEngine to correctly render a View.

However, instead of passing data through an opaque, untyped ViewBag, it is a good practice to build a class containing exactly the fields/data needed by the View (or, better, by Razor to build the View).

If you add AngularJS to the picture, you have to pass data not only to Razor, but to AngularJS as well. How can you do it? There are a few options:

Razor inside the script

   var id = @(Model.Id);

This approach works, of course: any data you need to share between the server and the client can be written directly in the script, by embedding it in the (cs)html page and generating the script with the page. I do not like this approach for a number of reasons, above all caching and clear separation of code (JavaScript) and view (html).

The url

I wanted to keep my controller code in a separate .js file, so I discarded this option. The next place where I looked was the URL; after all, it has been used for ages to pass information from the client to the server. For a start, I needed to pass a single ID, a tiny little number, to angular. I already had this tiny number in the URL, as ASP.NET routing passes it to a Controller action (as a parameter) to identify a resource. As an example, suppose we have a list of persons, and we want the details of the single person with ID 3. ASP.NET routing expects a URL like:

http://localhost/Person/Details/3
This maps "automatically" (or better, by convention) to:

ActionResult PersonController::Details(int id)

If I want to get the same number in my angular controller, I could just get the very same URL using the $location service, and parse it like MVC routing does:

var id = $location.path().substring($location.path().lastIndexOf("/") + 1);

But it's kind of ugly, and I find it "hack-ish" to do it this way, so I kept looking.

Razor inside the page (ng-init)

A better alternative is to use Razor to write "something" on the page, and then read it from the client script. You can use hidden fields, custom data- attributes and so on; fortunately, angular already provides a directive for this purpose: ng-init.

<div xmlns:ng="http://angularjs.org" id="ng-app" class="ng-app:MiniModule" ng-app="MiniModule">
    <div class="row-fluid" ng-controller="PersonDetailsController" ng-init="personId=@Model.Id">

Angular injects it into the scope during initialization, so you can refer to it as $scope.personId.

Ajax

Finally, one of the most common ways to transfer data from the server to a script is through ajax calls. AngularJS has a great service for this ($http), very simple and powerful:

$http.get("/Person/GroupData").
        success(function (data, status) {
            $scope.data = data;
        }).error(function (data, status) {
            // error handling
        });

On the server side, there is a

JsonResult PersonController::GroupData()

method which returns a JsonResult, encapsulating a Json object.

Mixing up

It is not convenient to use ng-init for large objects, or for many objects. On the other hand, you need a practical way to pass around an ID, to use ajax on resources that require it (like http://localhost/Person/Details/3).

The most sensible approach, which I ended up using, is to pass the id around with ng-init, and to actually retrieve the data with ajax. In the current implementation of MiniPoint it seems to work quite well.
In general, when I have a resource (like Person) and I want to show and edit information and details linked to it, I have:
  • an object model (Person)
  • a ViewModel (PersonVM) which is populated in controller actions and passed to the View:

ActionResult PersonController::Details(int id) {
   return View(new PersonVM { ... });
}

@model PersonVM

<div xmlns:ng="http://angularjs.org" id="ng-app" class="ng-app:MiniModule" ng-app="MiniModule">
    <div class="row-fluid" ng-controller="PersonDetailsController" ng-init="personId=@Model.Id">

  • a Person data transfer object (PersonDTO) which is requested by the angular controller, populated by a "Data" controller action and then returned as JSON to the controller:

    // Defer the ajax call to let ng-init assign the correct values
    $scope.$evalAsync(function () {
        $http.get("/Person/DetailsData/" + $scope.personId).
            success(function (data, status) {
                $scope.data = data;
                // ...
            }).error(function (data, status) {
                // error handling
            });
    });

JsonResult PersonController::DetailsData(int id) {
   return Json(new PersonDTO { ... });
}

# Thursday, January 02, 2014

The MiniPoint Workflow language

The workflow language was, for me, the most fun part of MiniPoint. I love working on languages; small or big, it does not matter, as long as they are interesting. In fact, MiniPoint has more than one language/parser: workflows, document templates, AngularJS expressions, ... Most are tiny, but every one makes the code cleaner and the user experience more pleasant.

Take, as an example, the simple boolean language used to express the visibility of a field on a view; as I mentioned in my previous post, you can make individual fields visible or not (and required or not) using boolean expressions (or, and, not, <, >, ==, <>) over constants and other fields in the view (or in the schema).
The expression is parsed and then analyzed to produce two things: an AngularJS expression, which will be inserted into the ng-required and ng-show/ng-hide attributes to make it work entirely on the client side, and the list of affected fields.
What is the purpose of this list? Remember that a view is only a subset of the schema, but in these visible/required expressions you can refer to other members of the schema as well (from previous views, usually).
AngularJS initializes its "viewmodel" (the $scope) with an ajax request (getting JSON data from an ASP.NET controller); for efficiency, we keep this data at a minimum, which usually is a subset of the fields in the view (readonly fields, for example, are rendered on the server and not transmitted). When we have an expression, however, the fields referenced in it need to end up in the $scope too, hence the parsing and analysis of the expressions.
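As a rough sketch of that analysis (MiniPoint's actual implementation works on the parsed expression tree, not on raw text), collecting the referenced fields can be approximated by scanning the expression for identifiers, after dropping string literals and the language's keywords:

```java
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified "affected fields" collector for a boolean expression; the real
// implementation walks the AST, this version just scans the text.
public class FieldCollector {
    private static final Set<String> KEYWORDS = new LinkedHashSet<String>(
            java.util.Arrays.asList("and", "or", "not", "true", "false"));

    public static Set<String> collectFields(String expression) {
        // Drop string literals so their contents are not mistaken for fields.
        String noLiterals = expression.replaceAll("'[^']*'", "");
        Set<String> fields = new LinkedHashSet<String>();
        Matcher m = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*").matcher(noLiterals);
        while (m.find()) {
            String id = m.group();
            if (!KEYWORDS.contains(id.toLowerCase())) {
                fields.add(id);
            }
        }
        return fields;
    }
}
```

The resulting set is exactly what needs to be added to the JSON payload that initializes the $scope.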

But I am digressing; I will write more about the interaction and integration of AngularJS and Razor (and MVC) in another post.

Now I would like to talk about some aspects of the workflow language that needed a bit of thinking on how to best implement them.

I wanted it to be simple, natural to use (i.e. you can use statements/expressions/constructs wherever it makes sense and expect them to work) but still powerful enough. And have a clean grammar too :)

I wrote some languages in the past, but this is the first one where statement terminators (';') are optional, and you can just use line breaks.
The people who are going to write the schemas and workflows (so not the end-users, but the "power-users", or site administrators) have a strong background in ... VBA. Therefore, when a decision about the language came up, I tried to use a VBA-like syntax, to give it a familiar look. So, for example, If-EndIf instead of braces { }.
And I wanted to do it because it was interesting, of course! I had to structure my semantic actions a bit differently, as I was getting reduce-reduce conflicts using my usual approach.

On the surface, it seems that you have statements (very similar to other programming languages), choices (if-then-else-endif) and gotos. I know... Ugh! Gotos! Bear with me :)

step ("view1")

var i = 10

if (i + me.SomeField > 20)
  i = i - 20
  goto view1
else
  goto end
endif

//Generate a report, using the "report1" template
report ("report1")

step ("final"): end 

Under the hood, things are a bit... different. Remember, this is a textual language for a flowchart. So, "step" is actually an input/output block (parallelogram); statements and reports are generic processing steps (rectangles); the "if-then-else" is a choice (rhombus). Therefore if-then-else has a stricter than usual syntax, and it's actually:
IF (condition) [statements] GOTO ELSE [statements] GOTO ENDIF
so that the two possible outcomes are always steps in the workflow.

Therefore, under the hood you have a list of "steps", each of which may be a statement list (like "var i = 10"), an input/output step ("step" or "delay"), or a branch.
As a consequence, the language somehow has two levels: at the first level you have "steps"; then, among "steps" or inside them (look at the if-then-else in the example) you can have expressions and statements like in most other programming languages. The two levels appear quite clearly in the grammar, but I think it's difficult to tell from the syntax. And this is what I wanted to accomplish. Those who used it to author the workflows were quite pleased, and used it with no problems after very little training.

Translation to WF activities was fun as well: I built a custom Composite Activity to schedule all the steps; also, statements (instead of receiving their own activity) were merged together and executed by the main composite activity, to improve efficiency (and to make it easier to add other statements: a new one does not require a new activity).

# Tuesday, December 31, 2013

First version of MiniPoint released!

Last Saturday, I installed the first test version of MiniPoint in a production environment. All went well (we spent more time waiting for SQL Server 2012 to finish installation than anything else). 

I had only some minor hiccups: figuring out how to deploy without VS/msbuild/WebPI (i.e. manually), and how to correctly configure IIS 7 (the production server uses Windows 2008) for .NET 4.5 / MVC 4.

I solved both issues thanks to StackOverflow :)

What is MiniPoint?

MiniPoint is an MVC4/AngularJS web application that allows you to define:

  • a set of related data/metrics you want to record;
  • the process needed to collect and save them;
  • the people who will need to collaborate to get, read, validate and update those data.

The application will replace an existing Sharepoint 2010 solution which was fulfilling the same role, but which was too heavy (it required its own, beefy server), not so flexible, and much, much more difficult to modify and extend. MiniPoint, instead, was designed with the goal of making it easy to modify and extend every bit of the process.

MiniPoint is built around four concepts: Schemas, Views, Lists and Workflows.

MiniPoint initial page. The design, based on Twitter Bootstrap 2.3.2, is intentionally simplistic and lean: end-users (the ones who will use the workflows) are not computer experts.

You can think of a schema as the data structure needed to hold all the information needed for the process, or better, all the information that will be collected and saved during the process. It is like a database schema, or like the header rows in an Excel table. The parallel with Excel here is not accidental: prior to the Sharepoint solution, end-users used Excel files to keep track of the information. For example, they had an Excel worksheet, with fixed columns, for customer calls; at each call they had to fill in a bunch of columns and save the file on a network share.

A list is an instance of the schema: the empty rows in the Excel worksheet. Each row is created using the Schema as a mould, and filled little by little by the process. 

Let me explain through a simple example: the submission of an issue.

We can simplify the process by assuming that the issue is submitted, triaged (rejected or accepted), assigned to someone, fixed, verified. If the verification step fails, we go back and re-work on it.

It's a 5-step process, with two branch points. I suppose this sounds familiar to everyone who has used bug-reporting software :) even though this very process was created for a customer that has nothing to do with software! Processes like these are very common in companies of all sizes, as there are many occasions in which it is necessary to keep track of progress and history (the small company for which the software was initially conceived wanted to record at least 5 of them).

Schemas (and lists)

For the above example process, we need to collect 

  • The issue
  • Its category
  • Who submitted it (who will also be responsible for checking the outcome, in our simple scenario)
  • Who will work on it
  • The verification outcome (acceptance and comments)

So our schema is composed of six fields:

  • Issue (Text)
  • Category (Enumeration)
  • Submitter (User)
  • Assigned_to (User)
  • Status (Enumeration)
  • Comments (Text)

In parentheses I have put the data types of the various fields. MiniPoint supports different data types (more on this in a following post), and uses the data type to treat, save and render each field in the right way.

The definition of a schema. Working on schemas is very quick: adding, removing, or modifying a field is done client-side using AngularJS, for a fast and responsive UI.


In any complex enough process, all the needed data and "status" is not readily available, but needs to be collected over time, by different actors (this is usually the reason there is a process in the first place). Therefore, not all fields can or should be filled in from the beginning! At each step of the process, some fields will be entered, some will be updated, and others will be visible but not modifiable anymore.

For this, MiniPoint lets you create different views on a schema.

A view is a subset of fields, each decorated with some additional attributes. Attributes indicate how the data needs to be presented to the user: whether the field will be visible (and when, with conditions over other fields), required, readable or writable... 

Definition and edit of a view element. Notice how you can specify visibility (and if an element is required) as a boolean expression over other fields in the same schema.

What if you figure out, while creating a new view, that you need another field in the schema? Shortcuts to manipulate related elements are spread throughout MiniPoint.


When you have the views, you need to "stitch them together": you need to define in which order they will be presented to the end-user, who will have the right to access them, and how you will route users through different views based on what they have entered so far.

MiniPoint will then associate views to workflow steps; steps can be executed in order, or (based on conditions over variable and field values) you can have branches.

The workflow is described using a small and simple language, with statements for steps (display of views), report generation, and branches (expressed as a simple, fixed if (condition) goto step else goto step - just the text equivalent of a "conditional" diamond in flowcharts).

The workflow is a reactive program: it will execute one step after the other, and then it will suspend itself when it needs to wait (typically, for user input at a step, or for a pre-defined delay). Then, it will resume itself to react to external input: a delay expired, or a user completed a step.
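The "flowchart plus shared state" model can be sketched as a tiny interpreter. This Python sketch is only the concept (the real engine compiles the workflow text to WF activities); here "suspending for user input" is simulated by consuming one pre-recorded input per step:

```python
# Illustrative sketch of the workflow model: named steps, each showing
# a view, with if/goto branches over a global, shared state (variables).

workflow = {
    "submit": {"view": "Submit", "next": "work"},
    "work":   {"view": "Work",   "next": "verify"},
    "verify": {"view": "Verify",
               # if (condition) goto step else goto step
               "branch": (lambda vars: vars["Status"] == "Accepted",
                          "done", "work")},
    "done":   {"view": "Done", "next": None},
}

def run(workflow, start, inputs):
    """Execute steps in order; each step 'waits' by consuming the
    values the user entered in that step's view."""
    state, step = {}, start
    while step is not None:
        state.update(inputs.pop(0))   # user completed this step
        node = workflow[step]
        if "branch" in node:
            cond, then_step, else_step = node["branch"]
            step = then_step if cond(state) else else_step
        else:
            step = node["next"]
    return state
```

A rejected verification loops back to the "work" step, exactly like the conditional diamond in a flowchart; only an accepted outcome reaches "done".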

For this iteration of MiniPoint, the workflow text is "translated" (or, compiled) into a series of .NET Workflow Foundation 4.0 activities, and runs inside a self-hosted .NET WorkflowApplication. I plan to replace WF with my own workflow engine, which would be simpler, leaner and easier to port to different platforms (for example, WF out-of-the-box only supports SQL Server storage). Another reason to switch to my own workflow engine is to have better integration with ASP.NET async; even if there are some nice examples of how to combine WF and ASP.NET MVC, hosting the .NET WorkflowApplication reliably and in an efficient (asynchronous) way proved to be a bit tricky. Support for async, long-running events in ASP.NET can be dangerous (if not handled correctly), and it is still evolving (it changed in an important way, and improved, in .NET 4.5, but it still needs some care).

The current code works quite well, but I am not entirely pleased with the result, as it looks more complex than it should be. For now, however, WF works smoothly enough.

The little "workflow language" was created together with the people who will author and modify the business process; it needed to be flexible and lightweight enough to let them change the process easily, but powerful enough to actually express what they needed to model. It is, like I mentioned, a flowchart, with the addition of a global, shared state (variables). 

The language is textual and simple enough; a (JavaScript-based) graphical UI is in the works too.

The workflow language. Upon Save, the code is compiled to check it for syntax errors; errors are displayed directly in the UI to help find and fix them.

The list of running workflows. Only steps to which the user has access are shown; plus, it is possible to filter, reorder and group them.

The UI shown by the workflow while executing a step; the fields in the view are rendered using appropriate controls, based on the data type and the attributes set during the creation of the view.


Another required improvement over the Sharepoint solution was to provide finer control over read/modify and general access control. I solved the issue by borrowing some ideas from the NT access control model: every "securable" element (schema, views, lists, even single documents) has an ACL (Access Control List) attached. Each item in the list is a tuple (role, access), where role is either a username or a group and access is a list of allowed operations (read, write, execute).
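The (role, access) tuple model can be sketched in a few lines. This is an illustrative Python sketch of the idea (names and the group directory are hypothetical, not MiniPoint's implementation):

```python
# Illustrative sketch of the NT-inspired access control model: every
# securable element carries an ACL, a list of (role, operations) tuples,
# where a role is either a username or a group name.

GROUPS = {"reviewers": {"alice", "bob"}}   # hypothetical group directory

def acl_allows(acl, user, operation):
    """True if any ACL entry grants `operation` to `user`,
    either directly or through one of the user's groups."""
    for role, operations in acl:
        in_role = (role == user) or (user in GROUPS.get(role, ()))
        if in_role and operation in operations:
            return True
    return False

# ACL attached to a single document: the reviewers group may read,
# and alice alone may also write.
document_acl = [
    ("reviewers", {"read"}),
    ("alice",     {"read", "write"}),
]
```

Checking access is then a single scan of the list: `acl_allows(document_acl, "bob", "read")` succeeds through the group, while `"bob"` asking for `"write"` fails.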

The UI for setting and assigning permissions to the various defined elements

In this way, it is possible to statically assign some permissions to users or groups. It is also possible to dynamically assign permission through the workflow code, with specific instructions that give permission to a dynamically inserted user. For example, you may want to assign an issue to a particular group or user, and let only that user or group access the following steps.

# Wednesday, September 18, 2013

Where have I been?

I really don't know if anybody besides my friends and coworkers reads this blog, but... after some entries in the first half of this year, everything went silent.
The reason is simple: I was working, and in my free time... learning and working again.

The project I mentioned (the mini Sharepoint-like replica) got momentum and a deadline. I am really enjoying working on it; I am using and putting together a lot of great techniques and frameworks. Some of them were known to me (WF4 for workflow management, GPPG/Gplex for parser generation, ASP.NET MVC for web applications...) but some were really new (AngularJS... wow, I do like this framework!, Razor... which is a joy to work with). It is tiresome to work, then get home (using my commuting time to read and learn) and then work again. Fortunately, my wife is super-supportive!

That, and my "regular" day-to-day work, where we are also putting together some new and exciting applications. The most interesting is a "framework" for authentication and authorization from third parties that uses a mix of OAuth, NFC and "custom glue" to authorize access to third-party applications without having to enter a username and password (we use our smart cards as credentials).
Think about an open, public environment (a booth at an expo, or one of the outdoor events for which our region is famous, like the Christmas Market).

You have a booth where you are offering some great service subscription. You do want people to stop by and subscribe to your services, but you do not want to bother them for too long, so filling out forms is out of the question; you do not even want them to sign in on a keyboard (try to make people enter a password at -5°C... it's not easy).
I coded a simple prototype, an Android app that uses NFC to read the AltoAdige Pass that every traveler has (or should have :) ) as a set of credentials for authorization against our OAuth provider. The third-party app requests an access token, users are redirected to our domain and "log in" using the card, by waving it in front of an Android tablet. The process is secure (it involves the card processor and keys, mutual authentication, etc.) but easy and fast. Users see the permissions and confirm (again by waving the card when the tablet asks them).

For now it is only a prototype, but... I find it interesting when pieces of (cool) technology fall together to produce something that is easy to use and actually makes people's lives a little bit easier.
With so many balls to juggle, time for blogging is... naught. But I will be back with more news on my projects, as soon as I have some time.

# Thursday, July 25, 2013

Postmortem of my (failed) interview with StackOverflow

Some months ago, I made a very peculiar decision: I applied for a job.
I am currently employed as a senior dev in a small software company. I like my current position: it is a good blend of project management, architecture, and software development. It has upsides and downsides, as any "regular" job in the world. On the positive side, it gives me some unique challenges, which is something I always look for; I get to work on many different things, at very different levels (from low-level programming on embedded devices, to distributed systems and big-data crunching, up to web applications and mobile devices).

Being quite content, I was not looking for a job. But I am a Stack Overflow user, and like anyone actively working in software development, I visit the site several times a day. They are THE resource for programming nowadays: they changed the way every developer I know works.

More than that: I am a big fan, and long time reader, of Jeff Atwood and Joel Spolsky, the fathers of Stack Overflow. And in January this year, I saw on Stack Overflow a Careers 2.0 banner/ad. And I discovered they were hiring.

Oh my. Stack Overflow. All those amazing devs: Marc Gravell, Nick Craver, Kevin Montrose, Demis Bellot... And Joel Spolsky.
If you are reading this, you surely know who Joel Spolsky is. He wrote THE book about making software development really work. By applying his ideas and following his suggestions, I was able to steer both management and developers towards a successful way to write software. Twice.
Joel means this to me: making developers' lives better. There are few companies I really dream of working at, and a company run by Joel is definitely one of them.

So I applied. I knew it was going to be hard, but I also knew I had what it takes to be great there. But... long story short, I failed.
What can I do about it? Probably nothing: a job interview for a place like SO is not like an exam at the University: it is much harder. Failing does not mean that you can go home, study harder, and try it again a couple of months later. It means that someone else takes the job. And succeeding is not a simple matter of scoring well enough: you have to perform better than all the others.

Anyway, this is what I would have liked to have read some time ago, before doing the interview. Maybe it will not do any good to me, but if it can help another great developer, I will be happy.

Before starting my retrospective though, I think it is important to point out that the interview process was not only very well handled, but really great: it was fair, everyone was super-polite, and it was a pleasure overall. I did not expect anything less; on the contrary, I would have been disappointed had it been any easier, or shorter. To me, a good interview process is an indicator of a good, solid company.

The interview

The application, fairly enough, happens through Stack Overflow Careers.
And there I made my first mistake: the first time I applied, I wrote a simple, nice cover letter. Don't! Write what you feel; do not be restrained! Well, that means writing a polite letter, of course, but you also have to make clear why you are applying, and how much the company you are applying to means to you.
Fortunately, I was able to fix this mistake by applying a second time, this time showing my true enthusiasm.

From that point, the hiring process looks very familiar to Joel's readers: he describes it, as a series of suggestions mainly addressed to interviewers, both in his books and in his blog. It is not exactly the same, but it is only slightly different (probably it was adjusted to the remote-distributed nature of Stack Overflow employees).

It all begins on the Stack Overflow side; they sort cover letters and resumes (I can only assume they do it in a way similar to what is described here).

Then, if your resume and your cover letter show the right characteristics, you start: you get a first phone interview, where they basically walk through your CV (probably checking that you actually did what you claim), your position, your expectations and your passion for Stack Exchange.

After that, you get (if I got it right) to talk to up to 5 people ("all the way up to Joel").
In my case, I talked with members of both the Core Q&A team and Careers.
The pattern is more or less the same: you and the interviewer chat about your past experiences, then you do one or two coding (or design) exercises, and it ends up with reciprocal Q&A (you get to ask questions to those guys, which is totally awesome by itself).

After each interview, the recruiter who made the first contact with you in the phone interview gets back to you, to report how it went (i.e. if you got to the "next level"), ask you how it was, and set a schedule for the next interview. They are really quick (from 10 minutes to 1 day), which is also very good: it is very stressful to sit there and wait for an outcome, so it is very nice to have them get back to you quickly.

The interviews are also a very nice experience: the interviewers are good, prepared, and they keep it interesting. A couple of times I forgot I was doing an interview, and I actually enjoyed myself!

Sure, before the first interview I was scared as hell. But it went well, and I was happy at the end.
The coding problem was interesting, even if I expected something harder, and I solved it rather quickly, so... we did another one. I made a couple of stupid mistakes, but hey, coding without making errors on a whiteboard is really difficult. Even Chris Sells got linked lists wrong on a whiteboard in his interview!
The second interview was great. I actually had a lot of fun, both chatting about my experience (I got excited talking about what I did) and solving the problem. This is why I arrived at interview #3 with high spirits and good expectations. But I never got to level 4. Damn.

The outcome

After the 3rd interview, the recruiter got back to me with a different-looking email. It was much more formal: isn't it strange that bad news is always so formal?
I was sad. Shocked. It was like the world had stopped for a second. There it was, my dream, which seemed to become closer to realization, shattered.

I felt a little depressed, but above all perplexed. Why? What went wrong? Nothing seemed to indicate that something was bad. I didn't fail the coding exercise, and I didn't freeze up. Sure, I told myself, there are better developers out there. But still, I accomplished good things in the past, and I knew that I would have done even greater things at Stack Overflow!

I knew, but did they?

And then, slowly, painfully, as I replayed my last interview over and over again, I realized that it was MY fault.

The biggest mistake

I had not learned a very important lesson: you have to show yourself. You have to blow them away, and I did not.

Partly this is because I am a humble guy. Not really shy, but I do not like to show off.
But this is not an excuse.

Only now do I realize that an interviewer has slightly more than one hour to understand if you are a good fit for the company (and I should have known better, since I have been on the other side quite a few times in the past years). How can he tell, if you don't help him by saying everything you can in your favor?

I think I dug myself into a hole when I was asked about my current project, and which role I had in it. As I wrote in my CV, I had designed the architecture, deciding which pieces were necessary to process in a reliable and efficient way the hundreds of thousands of transactions we have each day, and coded the core portions of the system myself (from the embedded software on the devices which collect data, to the algorithms to reconstruct the big picture from the partial data coming from the embedded devices). All of this while I was managing the project and the process, explaining to management and developers how to plan, estimate, execute and keep track.
Oh, and shipping a (not perfect, but working) product in 6 months, on a ridiculous deadline imposed by external factors (marketing).

So, when I was asked about this kick-ass project, what did I say?

Let's see...
I was asked specifically about the embedded devices, and why I was the one writing software for them. And my answer was something like: "the only other developer on the team who knew C was overworked, so I stepped in and completed the software".
A very lame answer, in retrospect. Did I mention that I rewrote the contactless card reader driver, reverse engineered the protocol and implemented the whole set of commands, because the driver was only for Windows and we have Linux boxes?
Or that I saved thousands of euros when our embedded device vendor refused to tell us how they "securely store" keys to access smart cards? (I have to admit it is very clever of them to have such a nice vendor lock-in, having your keys stored away in a way you cannot read them... especially if you discover this after you have printed and sent 100K smart cards to your customers).
It was both frightening and exhilarating when I had to figure out how to convince their "secure storage" to extract the keys to memory!
But I "forgot" to mention these facts.

Same story when I was asked about project management. When I arrived, the team scored 1 on the Joel test. One. They did have a CVS (CVS!), and someone was using it. When we shipped, we were at 5; it is still 5 over 12, but it is not 1. And my answer to the question was: "In September, I was only a senior developer. In January, I was (officially) the project manager, because management was happy with the way I handled the team"
And what about web development? For that project, I wrote my own Razor-style parser in Scala, for fun and to keep things cleaner and more extensible, but I answered only "I wrote the services to query and present data" and "I do not do much front-end development, but I know the latest standards for HTML and CSS".

Not big things, taken one by one. But they make a difference. Even if you are nervous, you have to remember to show yourself at your best in any case. It is not an interviewer's job to make you comfortable: you have to perform at your best. Of course, you have to give credit where credit is due (to your team, for example): in an interview, you have to be honest.
But you do not have to be modest.

The second mistake

Do not make assumptions.
During the first interview, I went slowly through the coding questions, carefully explaining my reasoning. This was very much appreciated by the interviewer. It is usually appreciated, because it lets the person at the other end understand how you think.
But it may not be the case! What if they are looking for speed, for how quickly you grasp the problem? Or if they are looking for style over simplicity?
You can't know, and you cannot make assumptions.

A third problem

Even if you end up performing at your best, it may not be enough.
Of course, if you blow the interviewers away, you are in a good position. But there is still the chance that you are not the best for the job.

This is something common to all the good workplaces. Good workplaces benefit from a "positive spiral": they attract great developers because they are great places to work at; and they are great places because they are full of great developers.

So, they can afford to be picky: they can select both on general skills (something a good company always does) and on "platform". You can choose to select only people who are already proficient in the technologies you use:
"...You still get jobs, and employers pay the cost of your getting up to speed on the platform. But when [...] 600 people apply for every job opening, employers have the luxury of choosing programmers who are already experts at the platform in question. Like programmers who can name four ways to FTP a file from Visual Basic code and the pros and cons of each" (from Lord Palmerston on Programming)

What can I do?

Of course, the net is full of advice on how to perform greatly during an interview.
This may only apply to my specific case, but it might help even in your case.
  • Be passionate! Show your passion, and your skills; it does not matter if you are frightened, or if you think it is not important, or even if you think that the interviewer is not interested: you have to leave a sign, an impression. Go for it!
  • Ask! Talk to the interviewer: asking is always a good thing. No-one will see a good question negatively.
  • Show! It is the only way to make sure you and the interviewer are actually on the same page. Show them what you can do!
How can you show it? By... showing it! Do open source. This is something I really regret from my past: I did not contribute to open source projects. I used to think it was not so important: my jobs did not allow it, and even the little personal projects I did in my spare time were always linked to the job at hand, and therefore based on things I was not able to disclose.
Publishing some great open source code, in the technology your favorite company is using, shows them really what you are good for.