# Saturday, August 16, 2014

AngularJS and ASP.NET MVC validation

Not really a blog post, just a collection of links and articles I used when I was writing validation code for Angular with ASP.NET MVC.
More a "reminder to self" than anything else, as they are a bit outdated (they refer to ASP.NET MVC version 4, mostly) but they could be helpful to others as well!


# Friday, August 15, 2014

Android NFC service and "thin client": one problem, and one hack

Lately (in the last year or so), Android work intensified at my company. So, I finally took the time to study it in depth and I discovered how MUCH Android differs from what I was expecting. It really starts to make sense when you dig under the cover. And you start to discover how much better your apps behave when you are using the SDK the way it should be used (and you also start to pick up defects in other apps and say "Ha! You did that! Gotcha!").
But this is the topic for another post... :)

Today I want to concentrate on an issue I was experiencing using the NFC framework in Android to read our contactless cards.
Using the NFC framework as a general-purpose card reader is a bit on the "stretchy" side: the framework, after all, is mainly there to read Ndef tags, which have a precise structure. Fortunately, Android allows you to go deeper, and interact directly with a card using a "transceive" method.

In this way, you can send commands and data to the card (in the form of a byte[]) and receive a response from the card (again, in the form of a byte[]).
So far, so good: this means that we can read our Mifare Desfire cards, using the Desfire authentication and our keys.
I implemented the commands to authenticate a card, select an application (i.e. a protected directory inside the card memory) and read the data.
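As a reference, here is a minimal sketch of such a raw exchange through the public IsoDep API (the command bytes you pass in would be Desfire native commands, which I am not reproducing here; error handling is cut down to the bare minimum):

import android.nfc.Tag;
import android.nfc.tech.IsoDep;
import java.io.IOException;

public class CardReader {
    // Send one raw command to the card and return its response.
    public byte[] exchange(Tag tag, byte[] command) throws IOException {
        IsoDep isoDep = IsoDep.get(tag); // null if the tag does not support ISO-DEP
        isoDep.connect();
        try {
            return isoDep.transceive(command);
        } finally {
            try { isoDep.close(); } catch (IOException ignored) { }
        }
    }
}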

All is working well, but.. you have to store your key on the phone, and the storage on your phone is not secure.
In theory, every application has a local storage that cannot be read by other applications. In practice, you just have to have root access to your phone (which is mighty easy with Android handsets) and you are done.

This is not a particular problem in some scenarios (e.g. if you provide an app that uses the user-differentiated key, so that each user can read his own card), but it is a problem when you need to read multiple cards, and therefore to use the master key.

Suppose you are a third-party company. You are my friend, and you want to provide a discount for my subscribers (people that have my smart-card).
How can you check that the card is real, and that the card is not expired? Easy, you authenticate with the card and read its content: the expiration date is written right there.
But I do not trust you enough to let you have my read keys!

Maybe you even want to top up my card with "reward points": if my users buy something from you, they will get a discount on my services. Super cool!
But I will not let you have my write keys.. that's out of the question!

Sure, you can read just the UID, and use that to look up user info on my web service. And use the same service to POST reward points. But my network is sparsely connected, and it might take a long time before a card is used on one of my terminals and I can update it.
And we have seen that a UID can be faked..
So?

The answer is "thin-client". You use your NFC phone as an antenna, nothing more. What you read from the card is sent as a (hex-encoded) string to a web service. The web service contains the logic and data to interpret the request and prepare the right response. The response is sent back to the phone, and then transmitted to the card.

You can authenticate with the card, but your keys are safely stored away on your server and they never transit on the phone!
The phone does not even see the personalized key, so the user is safe against cloning.
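To give an idea, here is a minimal sketch of the phone-side relay loop, assuming a hypothetical endpoint that answers each hex-encoded card response with the next command to transmit (an empty reply meaning the protocol is complete); httpPost() is just a placeholder for whatever HTTP client you use:

import android.nfc.tech.IsoDep;
import java.io.IOException;

class ThinClientRelay {
    static void relay(IsoDep isoDep) throws IOException {
        String lastResponse = "";
        while (true) {
            String nextCommand = httpPost("https://example.com/api/nfc", lastResponse);
            if (nextCommand.isEmpty()) {
                return; // the server has driven the protocol to completion
            }
            lastResponse = toHex(isoDep.transceive(fromHex(nextCommand)));
        }
    }

    // Hypothetical placeholder: plug in your HTTP client of choice here.
    static String httpPost(String url, String hexBody) throws IOException {
        throw new UnsupportedOperationException("not implemented in this sketch");
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02X", b));
        return sb.toString();
    }

    static byte[] fromHex(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        return out;
    }
}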
I built a prototype, and it worked great on our WiFi network.
Then I tried to use it on a cellular network, and it failed (almost) regularly. Why?

My suspicion was that after a (very short) while the card was reset.
The answer I was getting back from the card was something like "operation not supported in this state". It was like somehow the card forgot that we were in the middle of an authentication challenge-response before the protocol was over.
I decided to investigate, to see if my suspicion was confirmed.
Fortunately, Android is OSS and source code is available! So I dug into the Android source code, looking for clues in the NFC implementation.

Android implements NFC using a mix of libraries and processes; most of the NFC stack is native, and managed by the OS. Then, there is a Service (provided by the OS) that handles communication with the native NFC stack. And there are some client-side classes you can use inside your application, which communicate with the Service, hiding it from you.
I started to dig into the source by following a "transceive" call.

On the application side, you receive an Intent when a card is presented to the reader. Inside the intent payload there is a class derived from BasicTagTechnology; in our case, we use an ISO-A compatible card, so we get an IsoDep object.

The most important method of this class is, as I mentioned, transceive:

IsoDep.transceive
BasicTagTechnology.transceive

The method inside is just a thin wrapper for a remote invocation to a service, which is NfcService or NfcApplication (the name has changed between Android releases):

Tag.getTagService().transceive(mTag.getServiceHandle(), data, raw)

class Tag ...

    public INfcTag getTagService() {
        return mTagService;
    }
 
INfcTag is an AIDL interface, which is used to forward data and commands to NfcService.
We can follow the transceive implementation inside NfcService:

public TransceiveResult transceive(int nativeHandle, byte[] data, boolean raw) {
    ...
    tag = (TagEndpoint) findObject(nativeHandle);
    response = tag.transceive(data, raw, targetLost);
    ...
}

Object findObject(int key) {
    synchronized (this) {
        Object device = mObjectMap.get(key);
        if (device == null) {
            Log.w(TAG, "Handle not found");
        }
        return device;
    }
}

So, there is another "Tag" class inside the service; all known (in range) tags are held by the NfcService class in a map.
This "Tag" is named NativeNfcTag:
    
public class NativeNfcTag implements TagEndpoint
   ...
   private native byte[] doTransceive(byte[] data);
   public synchronized byte[] transceive(byte[] data) {
      if (mWatchdog != null) {
         mWatchdog.reset();
      }
      return doTransceive(data);
   }
   ...

The implementation of doTransceive is native, and it varies from one card technology to another.
We have found the end of the flow. Have we also found any clue about the card reset?

The answer is there, inside NativeNfcTag. You should have noticed the "mWatchdog.reset()" statement inside transceive. What is mWatchdog?

private PresenceCheckWatchdog mWatchdog;
    class PresenceCheckWatchdog extends Thread {

        private int watchdogTimeout = 125;

        ...

        @Override
        public synchronized void run() {
            if (DBG) Log.d(TAG, "Starting background presence check");
            while (isPresent && !isStopped) {
                try {
                    if (!isPaused) {
                        doCheck = true;
                    }
                    this.wait(watchdogTimeout);
                    if (doCheck) {
                        isPresent = doPresenceCheck();
                    } else {
                        // 1) We are paused, waiting for unpause
                        // 2) We just unpaused, do pres check in next iteration
                        //       (after watchdogTimeout ms sleep)
                        // 3) We just set the timeout, wait for this timeout
                        //       to expire once first.
                        // 4) We just stopped, exit loop anyway
                    }
                } catch (InterruptedException e) {
                    // Activity detected, loop
                }
            }
            // Restart the polling loop

            Log.d(TAG, "Tag lost, restarting polling loop");
            doDisconnect();
            if (DBG) Log.d(TAG, "Stopping background presence check");
        }
    }

    
    
The "watchdog" is a thread that at short intervals (125ms) checks if the card is still in range, using the "doPresenceCheck()" function. Which is native, and card-dependent.

The function could therefore be an innocuous instruction (a no-op), or a new select that resets the card to its non-authenticated state.
Guess which one it is for Desfire cards?

So, if the watchdog is not reset periodically by transmitting something to the card, a presence check will be triggered and the card will be selected again, resetting the authentication process. While you are still waiting for the cellular network to answer (125ms is a short time on 3G).

I started to think of ways to work around it: from suspending the thread (inside another process - the service - in Android? Root necessary), to setting the timeout (by invoking a method on NativeNfcTag using reflection... again, another process, out of my reach), to substituting the code of "doPresenceCheck()" (which you can do with things like Xposed, but... that requires root access too).

You just cannot access anything inside another process in Android if you don't have root access. Which is usually a very good thing indeed, but it was getting in our way in this case.
But what about our process? Sure, we can do almost anything inside it... but how can it do any good?

Well, there is a function inside NativeNfcTag which we can use. This function is "exposed" by "Tag" (the non-public class used on the "client" side, see above), but not by BasicTagTechnology.
So we cannot call it directly (like transceive), but from the Tag class onwards it follows the same flow as transceive. This function is "connect":

class Tag {
   ...
   public int connect(int nativeHandle, int technology)
   public synchronized int connectWithStatus(int technology)
   ...
}

If we examine the source code of "doConnect" on the other side (its implementation inside NativeNfcTag), we can see that this function keeps the watchdog at bay too (like transceive does). Moreover, we can turn "connect" into a no-op:
private native boolean doConnect(int handle);
public synchronized boolean connect(int technology) {
    if (mWatchdog != null) {
        mWatchdog.pause();
    }
    boolean isSuccess = false;
    for (int i = 0; i < mTechList.length; i++) {
        if (mTechList[i] == technology) {
            // Get the handle and connect, if not already connected
            if (mConnectedTechnology != i) {
                ...
            } else {
                isSuccess = true; // Already connected to this tech
            }
            break;
        }
    }
    ...
}

If the technology we specify is the same one we are already using, or if it is a non-existing technology, the function will do nothing.

We can just grab the Tag class inside our code, call connect on our side (using reflection, as it is not exposed by the API), and wait for it to forward the command to the service, resetting the watchdog. Do this regularly, and we can "buy" as much time as we want to complete our authentication protocol!
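Here is a sketch of the hack; everything in it relies on the internal classes and methods we just walked through (Tag.getTagService(), Tag.getServiceHandle(), INfcTag.connect()), so the method names and the ISO_DEP constant (3 in the AOSP sources I inspected) are assumptions about internals, not public API:

static Thread startKeepAlive(final android.nfc.Tag tag) {
    Thread t = new Thread(new Runnable() {
        public void run() {
            try {
                Object tagService = android.nfc.Tag.class
                        .getMethod("getTagService").invoke(tag);
                int handle = (Integer) android.nfc.Tag.class
                        .getMethod("getServiceHandle").invoke(tag);
                java.lang.reflect.Method connect = tagService.getClass()
                        .getMethod("connect", int.class, int.class);
                connect.setAccessible(true); // the AIDL proxy class is not public
                while (!Thread.currentThread().isInterrupted()) {
                    // A no-op on the card side, but it forwards a command to
                    // NfcService, keeping the presence-check watchdog quiet.
                    connect.invoke(tagService, handle, 3 /* TagTechnology.ISO_DEP */);
                    Thread.sleep(100); // stay under the 125ms watchdog timeout
                }
            } catch (Exception e) {
                // Internal API changed, or we were interrupted: stop trying
            }
        }
    });
    t.start();
    return t; // interrupt() it once the server round-trip is done
}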

This is obviously a hack. But I tested it with every version of Android we support (2.3.6, 3.x, 4.x up to 4.4.3), and it just works. It uses knowledge of an internal mechanism which is subject to change even at the next internal revision, but the code I examined seems to have been stable for a while. And maybe, by the time it changes, they will fix the main issue (using a select to check for the presence of a card) as well!

# Sunday, August 10, 2014

Smartcard authentication and access control

DISCLAIMER: I am NOT a crypto expert. Of course I use cryptography, authentication protocols, etc.
I understand them (and their cleverness amazes me), and I have even taken two or three courses on cryptography during my Master and my PhD. But the first lesson you ought to learn when you are working on authentication, and on cryptography in particular, is: you are not, and the courses will not make you, an expert. And so you better never, ever design your own crypto algorithm, or your own authentication protocol. Stick with what others have done (and studied for years) and follow best practices. Not blindly, of course; but one thing is understanding what is going on, and another thing entirely is designing these protocols flawlessly.

Last time I mentioned that one of the possible usage scenarios for a smart-card is to act as an access and "presence" token. You can think of it as a read-only ID.
Well, you usually go further than that: writing on cards has advantages (you can update them with new information, for example, or "stamp" them when you see them, so that officials can check that you have validated your card and therefore you are authorized - a necessity if you do not have gates at every access point); however, for the sake of discussion, we can think about them as read-only IDs.

Couplers, i.e. card readers/writers, will recognize the card (and optionally "stamp" it), and record your passage (via LAN or WiFi or UMTS).

Each card has a unique identifier (UID). This ID is hard-wired when the card is produced. Think about it as the MAC address of a network card.
The UID is always transmitted in the initial phases of the communication between the card and the coupler. Many applications just stop there: read the UID, whitelist them, let them through if they are on the white list.

Does it work? Sure. Is it a good approach? No, absolutely NOT. And I was (and I still am) amazed at how many contactless systems just stop there. These systems are not secure, at all. You just have to read the UID (any NFC reader can do that, you just need -for example- to touch the card with an Android phone) and then configure a device (an NFC phone, or a programmable card) with the same UID and... presto! You "cloned" the card.
You didn't, of course: these cards are not clonable, not without (very) substantial effort, time, and expensive machinery.

The UID is just not designed for this purpose. It is used to identify a card, not to protect against cloning.
Let's remind ourselves what the typical requirements are in this scenario. You want to:
  1. identify each user uniquely
  2. prevent the user from "faking" a card - either a non-existing one, or a clone of another user
  3. let each user read his own card content, if they want to, but not the content of other cards,
  4. let the officials (security guards?) read the content of the card,
  5. let only the authorized couplers write the card: nobody else can do that (or the officials could read "fake" validations, i.e. records not produced by authorized couplers, which the system knows nothing about - if you can do that, you can pass a casual inspection unpunished).

To do all this, many cards (in our case: Mifare Desfire EV1 cards) provide mechanisms out of the box that already cover most of these points. Furthermore, there are a couple of best practices that let us cover all of them.

Desfire EV1 cards use T-DES keys to provide security and authentication. These cards are essentially file-storage cards, with files organized in a one-level file system. The top "directories" of this file system are called applications.

The card, every application, and every file can have three keys each. The keys will be used for access control, and can be viewed as the read-key, write-key, and modify-key.
So, if I authenticate with the card using KeyA, and KeyA is the read-key for App1/File1, I can open and read the content of that file, but not write it.

Keys are symmetric, and are stored on the card and on the coupler (the RFID reader/writer device).
Cards are considered a secure storage element, as keys can NOT be read from cards. There is no way to retrieve them, and cards are engineered to be tamper-resistant. Most of them are even protected from attempts to indirectly recover their content through differential power analysis.

Keys are stored on couplers using essentially the same technique: they are not stored on disk or in memory, but inside a Secure Element (SE) (sometimes called a SAM). It is essentially the same piece of silicon and plastic you find in smart-cards or in your phone's SIM card (they have the same physical format as well), and it is in fact a smart-card, with the same tamper-resistance properties. So, even if someone steals one of your couplers, he/she still hasn't got the key.

Obviously, only some couplers have the write key. They are also the only devices using a Secure Element: readers (couplers with only a read key) do not have them, for cost reasons.

How do you authenticate with the card? This is all covered by the smart-card protocol: in the case of Desfire cards it is basically a challenge-response in which the two parties "prove" to each other that they know the key. You do that by generating a random number, encrypting it, and sending it to the other party. The other party decrypts it, generates another random number, and sends both of them back, encrypted. The first party decrypts them, sees that its number is included (good, the other party has the correct key), and sends back the other number, encrypted. Now the other party knows that we have the key as well (we can encrypt and decrypt).
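Here is a toy version of that dance, just to fix the idea; it is illustrative only, as the real Desfire protocol adds byte rotation, CBC chaining and session-key derivation on top of this basic scheme:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;
import java.util.Arrays;

public class MutualAuthSketch {
    static byte[] tdes(int mode, byte[] key, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("DESede/ECB/NoPadding");
        c.init(mode, new SecretKeySpec(key, "DESede"));
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        byte[] key = new byte[24]; // the shared T-DES key
        rnd.nextBytes(key);

        // Reader: generate a challenge, encrypt it, send it to the card.
        byte[] rndA = new byte[8];
        rnd.nextBytes(rndA);
        byte[] msg1 = tdes(Cipher.ENCRYPT_MODE, key, rndA);

        // Card: decrypt it, append its own challenge, send both back encrypted.
        byte[] rndB = new byte[8];
        rnd.nextBytes(rndB);
        byte[] both = new byte[16];
        System.arraycopy(tdes(Cipher.DECRYPT_MODE, key, msg1), 0, both, 0, 8);
        System.arraycopy(rndB, 0, both, 8, 8);
        byte[] msg2 = tdes(Cipher.ENCRYPT_MODE, key, both);

        // Reader: check that its challenge came back (the card knows the key),
        // then return the card's challenge to prove it knows the key too.
        byte[] plain = tdes(Cipher.DECRYPT_MODE, key, msg2);
        if (!Arrays.equals(Arrays.copyOfRange(plain, 0, 8), rndA))
            throw new SecurityException("card failed authentication");
        byte[] msg3 = tdes(Cipher.ENCRYPT_MODE, key, Arrays.copyOfRange(plain, 8, 16));
        // The card decrypts msg3 and compares it with rndB: reader authenticated.
    }
}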

Previously I mentioned some best practices you ought to add to the basic framework. These are commonly accepted as standard and easy to implement: key differentiation and blacklisting.
Each card you issue should hold a different set of keys. This can seem like a problem if you have, like us, half a million cards (and therefore a million keys).
Wow.. how can we store millions of keys in an efficient way? And check all of them when a card is presented to our couplers?
Well.. we do not store them. We derive them.

The UID of the card is used with a master key to derive a "personalized" key, using a one-way hash.
So even if a card is defective and compromised, we have lost its set of keys, nothing more.
This has several advantages:
  • each card holds different keys, so if one is compromised, you just have to throw away (blacklist) that card
  • (assuming you have chosen a proper hash algorithm) it is impossible to reconstruct the master key from the derived key
  • you can store just the master keys on your "official" devices, in a secure way (i.e. in the SAMs), and they will be able to read and write any card (by deriving the right key on the fly)
  • you can hand each user their own, "personalized" key, and they will be able to read their own card. But not the card of their friends (or spouse).
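As a sketch of how such a derivation could look (illustrative only: a real deployment would follow the card vendor's recommended diversification scheme; here I just show the one-way idea with an HMAC, ignoring DES key parity bits for brevity):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class KeyDerivation {
    // The master key never leaves the SAM; only the derived key ends up on the card.
    static byte[] deriveCardKey(byte[] masterKey, byte[] uid) throws Exception {
        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(masterKey, "HmacSHA256"));
        // Truncate the 32-byte MAC to the 24 bytes of a T-DES key.
        return Arrays.copyOf(hmac.doFinal(uid), 24);
    }
}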

By using what the card manufacturer offers, plus two little (and simple to implement) best practices, you get a very good level of security.
We added a little more, just to be on the safe side, but these best practices alone already put you in a very good position. I am amazed at how many companies fail to realize the issues and threats they are facing with the extremely naive UID-only implementation.
But we still have sites that store passwords in clear text, or that force the user to a numeric PIN of 5 digits, or to 8-char passwords.. so I should know better by now :)

# Saturday, August 09, 2014

Introducing my old, new job

Today I wanted to blog about some "fancy" (fun and unusual) work I had to do with Android. Then, I realized that I had to explain how to do authentication properly using smart-cards. No problem, I can write a post about that!
Then I realized I have never properly introduced my "new" job here, and especially the project for which I was hired (which is why I am using smart-cards...). "New" and "old", because next month it will be three years since I started working here.

"Here" is Servizi ST, a small company which is part of SAD Local Transport, which is the biggest public transport company in South Tirol. The role of Servizi ST is to provide the software (ANY software) needed by SAD and by all the other transport companies in South Tirol (and to the national railway company (Trenitalia), too) to run its business.
This means a LOT of software, especially for such a small company: from the on-board systems (PCs with GPS, UMTS, WiFi, etc.) used to track the vehicle (or train) and for the fare system, to the ticket offices, to billing and information, website, statistics, tracking and diagnosing issues, asset management...
It has been an incredible time, during which my team and I built from (almost) zero all the software needed to support a completely new traveling model. And by completely new, I mean completely: everything changed, both on the user side and on the technological side.

The South Tirol local government issues a smart-card, for subscriptions to the regional public transport network. It can be used anywhere: trains, cable-cars, buses.
The interesting bit is that the subscription by itself is free: you pay as you go, using a distance-based fare schema. You use it every day? Cool. Just in the week-end? Great, you do not have to pay for the other 5 days as well.
Also, the subscription can be linked to a bank account (and SAD will issue you an invoice every two months, with the trips you have done during this interval), or you can have a "prepaid" schema, where you "put some money" on your card.
Except you are not putting any money on the card: you put it in a virtual account. This way of using the smart-card is not common in public transport systems; usually, you store a counter (and, therefore, "money") on the card itself, and the card thus becomes a substitute for paper tickets.
Instead, we use the card as a "token", a way for the system to recognize you. This is quite common in many other domains; think about access control at big companies or car parks.

This approach has pros and cons; our goal as software developers was to highlight the pros and "smooth" the cons.
For example: you, as a user, cannot immediately "see" precisely how much you still have on your card. Because you have nothing on your card. On the one hand, this is great: you need exactly the amount required to complete your travel, nothing more (the alternative for money-on-card systems is to let you travel only if you have enough money for the longest trip on the entire network, so you can always pay no matter where you get off).
On the other hand, you do not want your users to feel that they do not have "control" on their own money.
Therefore, we needed (and wanted!) to provide a complete infrastructure to support the user, the legacy applications (like the ticket offices and automatic ticket machines), and third parties to get information about the user's account, in a precise, reliable and secure way.

After a "rushed" launch (we got this thing out of the door and working at the 14th of February 2012, by pulling too many stops IMO... even if everything worked out in the end!), we had time to build and refine the missing parts. Now this infrastructure is complete, and it is really something I can be proud of!

# Wednesday, February 05, 2014

Lambdas in Java 8

Today I will introduce a feature of the upcoming Java 8, a programming language feature I really like: lambdas. Support for lambdas and higher-order (HO) functions is what made me switch to Scala when I went back to JVM-land a couple of years ago: after tasting lambdas in C#, I wasn't able to go back to a programming language without them.
Now Java 8 promises to bring them to Java: something I was waiting for a long time!

Lambdas, higher order functions... what?


First thing: do not get scared by terminology. The whole concept is borrowed from functional programming languages, where functions are king (just like objects are king in Object Oriented Programming).
Functional programmers love to define every theoretical aspect in detail, hence the fancy words.
But here I want to keep it simple; the whole concept around lambdas and HO functions is that you can pass functions as arguments to other functions.

Functional Java


Passing functions around is incredibly useful in many scenarios, but I want to focus on the very best one: handling collections.

Suppose we have a collection of Files, and we want to perform a very common operation: go through these files, do something with them. Perhaps, we want to print all the directories:

static void printAllDirectoriesToStdout(List<File> files) {
  for (File f: files) {
      if (f.isDirectory()) {
          System.out.println(f.getName());
      }
  }
}


Or print all the “big” files:

static void printBigFilesToStdout(List<File> files) {
  for (File f: files) {
      if (f.getTotalSpace() > threshold) {
          System.out.println(f.getName());
      }
  }
}


Have you spotted the problem? Yes, there is some code duplication.
In Java, we already have a tool to get around it: object orientation.

interface IFilter<T> {
   boolean isOk(T file);
}

public static void printFilesToStdout(List<File> files, IFilter<File> filter) {
  for (File f: files) {
      if (filter.isOk(f)) {
          System.out.println(f.getName());
      }
  }
}


Now we can implement our original functions using specific “Filter” classes, or even anonymous classes:

printFilesToStdout(files, new IFilter<File>() {
  public boolean isOk(File f) { return f.isDirectory(); }
});


This is already quite close to passing a function around; what you pass, instead, is a functional interface: an interface that contains only one abstract method. Anonymous classes derived from functional interfaces are just single inline functions... lambdas!

printFilesToStdout(files, (File f) -> f.isDirectory());

Aggregate Operations


So far.. cool! Shorter, more general, readable.
But what really matters is the ecosystem built around this simple concept. It is possible to write general functions accepting functions, and the new JDK already provides the most useful ones, Aggregate Operations.

As an example, take our “IFilter” functional interface. With it, you can build a “filter” function:

static <T> Collection<T> filter(Collection<T> c, IFilter<T> filter) { … }

which is one of these Aggregate Operations. The Stream class defines it and many others as member functions, making them even easier to compose through chaining.
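For instance, using the stream API in the new JDK, our first example shrinks to a single chain (a sketch, assuming the same List<File> as before):

import java.io.File;
import java.util.List;

class Example {
    static void printAllDirectoriesToStdout(List<File> files) {
        files.stream()
             .filter(File::isDirectory)   // keep only directories
             .map(File::getName)
             .forEach(System.out::println);
    }
}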

I just want to give you a hint of how they are used for complex collection processing.
Do you want to get all the big tar files, open them, print every word in each text file you find inside?

files.stream().
      filter(f -> f.getTotalSpace() > threshold && isTar(f)).
      flatMap(f -> openTarStream(f).getEntries()).
      filter(entry -> isText(entry.getFile())).
      flatMap(entry -> readAllLines(entry.getFile())).
      flatMap(line -> Stream.of(line.split(" "))).
      forEach(word -> System.out.println(word));

Compact, clean.

Now try to do it in Java 7... look at the indentation! It is easy to get lost in the mechanics of collection handling.

We only scratched the surface of what can be done with lambdas; together with aggregate operations and generics, they are a very powerful tool that will make most of the data transformation operations easy. And they are very efficient too! But this is something for another post.

# Sunday, January 12, 2014

Integration of Razor and AngularJS (2)

I found the technique I showed in my previous post particularly useful when you need to "migrate" some fields from the server side to the client side. By "migrate" I mean: make fields which were initially needed only on the server side, for page rendering, available on the client side too, in order to access and manipulate them through JavaScript as well (for user interaction through Angular, in this case).

Suppose you have a page to display some person details:

@model PersonVM
...
<h1>@Model.Name</h1>
<span>@Model.Description</span>

All is well, until you decide that you want to provide a way to edit the description. You do so by adding an input field (for example), plus an "edit" button:

<button ng-click="'editing = true'" ... >Edit</button>
<input ng-model="description" ng-show="editing" ... >
<span>{{description}}</span>

This modification requires you to move the person's description from the server-side ViewModel to the client-side $scope. That means changing the controller code as well, since Razor does not need @Model.Description anymore, but you need to pass it (through JSON) to $scope.description:

$http.get("/Person/DetailsData" + $scope.personId).
            success(function (data, status) {
                $scope.description = data. description;

This is not bad, and the code still stays readable and compact using the "pattern" I described a couple of posts ago. But I'd rather not touch the controller at all.
Using the couple of directives I wrote, it is as simple as:

<button ng-click="'editing = true'" ... >Edit</button>
<input ng-model="description" ng-show="editing" value='@Model.Description' ... >
<span>{{description}}</span>

or

<button ng-click="'editing = true'" ... >Edit</button>
<input ng-model="description" ng-show="editing" ... >
<span ng-model="description" ng-content>@Model.Description</span>

No need for an additional $http request, nor for another controller action.
The "value" attibute on the input, or the element text of the span, will be written out by Razor as before; the new directives will use the content of the value attribute (or the element text) to initialize $scope.description. And without the need to do another round-trip to the server!
You don't even need to change the controller and split model fields between
ViewResult Person::Details 

and

JsonResult Person::DetailsData.

# Saturday, January 11, 2014

Some programming jokes...

I found them hilarious :D

  • A UDP packet walks into a bar, no one acknowledges him
    Alternate version: A UDP packet walks into...
  • A guy walks into a bar and asks for 1.4 root beers. The bartender says "I'll have to charge you extra, that's a root beer float". The guy says "In that case, better make it a double."
  • Java and C were telling jokes. It was C's turn, so he writes something on the wall, points to it and says "Do you get the reference?" But Java didn't.
  • Why C gets all the girls and Java doesn't? Because C doesn't treat them like objects.
    • C could at least give Java some pointers...
  • Knock knock
    Who's there?
    ...
    ...
    ...
    Java
  • Knock knock
    Branch prediction.
    Who's there?
  • How many SEO engineers does it take to change a light bulb, lightbulb, globe, lamp, sex, xxx
  • 99 little bugs in the code
    99 bugs in the code
    patch one down, compile it around
    ...
    117 bugs in the code

# Thursday, January 09, 2014

Integration of Razor and AngularJS

In my previous post I described different ways of using AngularJS with ASP.NET MVC; in particular, some ways of sharing view data (the ViewModel) between ASP.NET and AngularJS.

While the last combination I used (ng-init for the ID(s) plus ajax requests for the bulk of data) satisfies me, there are three drawbacks with this approach:
  • the need to maintain two separate ViewModels: the ASP.NET @Model for Razor, which I put in *VM classes, and the Json model to initialize the AngularJS $scope, which I put in *DTO classes.
  • two separate calls to the web server, one for the html content, one for the Json data
  • the data for the $scope arrives via an ajax request, and therefore it is difficult (impossible?) to make it available to "low-tech" clients; this means both browsers with no JavaScript and, more importantly in some cases, search engine robots.

One idea could be to put all the data inside the html page itself, by embedding everything in ng-init, in a Json object inside <script> or CDATA.. or (why not?) in the HTML itself!
For example, using "value" for inputs, the element text for paragraphs, and so on:

<input ... value="Some value">
<p ... >Some text</p>

Disclaimer: I know this is not "the Angular way"; the idiomatic way is to have data only in the $scope, not in the view. There are good reasons for it: clarity and testability are the ones at the top of my head. But mixing two paradigms may require compromises. Besides, it was fun to see if I could accomplish what I had in mind.

The way I did it was to create a couple of custom directives. For input, I did not create a new one, as input already has a perfectly natural place (the "value" attribute) where I could put my initial value.

<input type="text" ng-model="someVariable" value="Some Value">

The purpose is to initialize $scope.someVariable with "Some Value", so that it can be used elsewhere, as in:

<p>{{someVariable}}</p>

The binding should be bi-directional too. That's quite easy with input, I just had to redefine the "input" directive:

app.directive('input', function ($parse) {
  return {
    restrict: 'E',
    require: '?ngModel',
    link: function (scope, element, attrs) {
      if (attrs.ngModel && attrs.value) {
        $parse(attrs.ngModel).assign(scope, attrs.value);
      }
    }
  };
});

For the element text I need a different directive; I wanted to write something like:

<span ng-model="anotherVariable" ng-content>Some nice text</span>

Whenever the "ng-content" directive is present, I wanted to initialize the model ("anotherVariable") with the element text. I wanted the binding to be by-directional too.

It wasn't much more difficult:

app.directive('ngContent', function ($parse) {
  return {
    restrict: 'A',
    require: '?ngModel',
    link: function (scope, element, attrs) {
      if (attrs.ngModel && element.text()) {
        $parse(attrs.ngModel).assign(scope, element.text());
      }
      scope.$watch(attrs.ngModel, function(value) {
        element.text(value);
      });
    }
  };
});

The "bi-directionality" is given by $watch; when the model changes, the element text is updated as well.
You can find a complete example that shows this behaviour at this plunker.

Enjoy! :)

# Wednesday, January 08, 2014

Mixing AngularJS and ASP.NET MVC

MiniPoint is a web application created using a mix of AngularJS and MVC.

In the past I have used JavaScript libraries (in particular, jQuery) in conjunction with other web frameworks (ASP.NET pages, mainly).
Before beginning to work on MiniPoint, more or less 5 months ago, I needed to create a very simple example application to show how an external developer could use our OAuth provider for authentication.

A former colleague pointed me to AngularJS, and I was very impressed by it.

Let me put it straight: I like jQuery, I think it's fantastic for two reasons: it just works everywhere, taking upon itself the burden of cross-browser scripting, and it lets you work with your existing web pages and improve them, significantly and progressively.
But for complex web applications, AngularJS is just.. so much cleaner!

The philosophy is very different (you should read this excellent answer if you have not read it yet); as highlighted in it, you don't design your page, and then change it with DOM manipulations (that's the jQuery way). You use the page to tell what you want to accomplish. It's much closer to what you would do in XAML, for example, supporting very well the MVVM concept.
This difference between jQuery and AngularJS actually reminds me of WinForms (or old Java/Android) programming vs. WPF: the approach of the former is to build the UI (often with a designer) and then change it through code. The latter leverages the power of data-binding to declare in the view what you want to accomplish, what is supposed to happen.
The view directly presents the intent.

The first AngularJS application I created was a pure AngularJS application: all the views were "static" (served by the web server, not generated) and all the code, all the behaviour (routes, controllers, ...) was in AngularJS and in a bunch of REST services I built with ServiceStack. AngularJS and ServiceStack seem made for each other: the approach is very clean, and it works really well if you have a rich SPA (Single Page Application). It is different from what I was used to doing, and I needed some time to wrap my head around it (I kept wishing I could control my views, my content, on the server).

So, for the next, bigger problem I said "well, let's have the best of both worlds: server-side generated (Razor) views with angular controllers, to have more control over the content; MVC controllers to serve the Views and their content".

Seems easy, but it has a couple of issues. jQuery is great for pages produced by something else (ASP.NET), where you design a page and then make it dynamic using jQuery. This is because jQuery was designed for augmentation, and it has grown incredibly in that direction. AngularJS excels at building JavaScript applications, entirely in angular.

It is possible to integrate AngularJS with MVC, of course, but you have different choices of how to pass, or better transform, data between an MVC Controller, the (View)Model it generates, and the AngularJS controller with its "viewmodel" (the $scope). Choosing the right one is not always easy.

In plain MVC (producing a page with no client-side dynamics) you have a request coming in and (through routing) triggering an action (a method) on a Controller. The Controller then builds a Model (the ViewBag, or a typed (View)Model), selects a View and passes both the (View)Model and the View to a ViewEngine. The ViewEngine (Razor) uses the View as a template, and fills it with the data found in the (View)Model. The resulting html page is sent back to the client.

Why am I talking about a (View)Model, instead of just a plain Model? Because this is what I usually end up creating for all but the simplest Views. The data model, which holds the so-called "business objects" (the ones you are going to persist in your database), is often different from what you are going to render on a page. What you want to show on a page is often the combination of two (or more) objects from the data model. There is a whole pattern built on this concept: MVVM. The pattern is widespread in frameworks with two-way binding mechanisms (Knockout.js, WPF, Silverlight); many (me included) find it beneficial even in frameworks like MVC; after all, the ViewBag is exactly that: a bag of data, drawn from your business objects, needed by the ViewEngine to correctly render a View.

However, instead of passing data through an opaque, untyped ViewBag, it is a good practice to build a class containing exactly the fields/data needed by the View (or, better, by Razor to build the View).

If you add AngularJS to the picture, you have to pass data not only to Razor, but to AngularJS as well. How can you do it? There are a couple of options:

Razor inside the script

<script>
   ...
   var id = @(Model.Id);
   ...
</script>

This approach works, of course: any data you need to share between the server and the client can be written directly in the script, by embedding it in the (cs)html page and generating the script with the page. I do not like this approach for a number of reasons, above all caching and clear separation of code (JavaScript) and view (html).

The url

I wanted to keep my controller code in a separate .js file, so I discarded this option. The next place where I looked was the URL; after all, it has been used for ages to pass information from the client to the server. For a start, I needed to pass a single ID, a tiny little number, to angular. I already had this tiny number on the URL, as ASP.NET routing passes it to a Controller action (as a parameter) to identify a resource. As an example, suppose we have a list of persons, and we want the details relative to a single person with ID 3. ASP.NET routing expects a URL like:

http://localhost/Person/Details/3

This maps "automatically" (or better, by convention) to:

ActionResult PersonController::Details(int id)

If I want to get the same number in my angular controller, I could just take the very same URL, using the $location service, and parse it like MVC routing does:

var id = $location.path().substring($location.path().lastIndexOf("/") + 1);

But it's kind of ugly, and I find it "hack-ish" to do it in this way, so I kept looking.

Razor inside the page (ng-init)

A better alternative is to use Razor to write "something" on the page, and then read it from the client script. You can use hidden fields, custom data- attributes and so on; fortunately, angular already provides a directive for this purpose: ng-init.

<div xmlns:ng="http://angularjs.org" id="ng-app" class="ng-app:MiniModule" ng-app="MiniModule">
    <div class="row-fluid" ng-controller="PersonDetailsController" ng-init="personId=@Model.Id">

Angular injects it into the scope during initialization, so you can refer to it as $scope.personId.

Ajax

Finally, one of the most common ways to transfer data from the server to a script is through ajax calls. AngularJS has a great service for this ($http), very simple and powerful:

$http.get("/Person/GroupData/").
        success(function (data, status) {
            $scope.data = data;
        }).error(function (data, status) {
            $scope.alerts.push(data);
        });

On the server side, there is a

JsonResult PersonController::GroupData()

method which returns a JsonResult, encapsulating a Json object.


Mixing up

It is not convenient to use ng-init for large objects, or for many objects. On the other hand, you need a practical way to pass around an ID, to use ajax on resources that require it (like http://localhost/Person/Details/3).

The most sensible approach, the one I ended up using, seems to be: use ng-init to pass around the id, and ajax to actually retrieve the data. In the current implementation of MiniPoint it seems to work quite well.
In general, when I have a resource (like Person) and I want to show and edit information and details linked to it, I have:
  • an object model (Person)
  • a ViewModel (PersonVM) which is populated in controller actions and passed to the View:

ActionResult PersonController::Details(int id) {
   ...
   return View(new PersonVM { ... });
}

@model PersonVM
...

<div xmlns:ng="http://angularjs.org" id="ng-app" class="ng-app:MiniModule" ng-app="MiniModule">
    <div class="row-fluid" ng-controller="PersonDetailsController" ng-init="personId=@Model.Id">
...

  • a Person data transfer object (PersonDTO), which is requested by the angular controller, populated by a "Data" controller action, and then returned as JSON:

    // Defer ajax call to let ng-init assign the correct values
    $scope.$evalAsync(function () {
        $http.get("/Person/DetailsData/" + $scope.personId).
            success(function (data, status) {
                $scope.data = data;
                // ...
            }).error(function (data, status) {
                // error handling
            });
    });


JsonResult PersonController::DetailsData(int id) {
   ...
   return Json(new PersonDTO { ... });
}

# Thursday, January 02, 2014

The MiniPoint Workflow language

The workflow language was, for me, the funniest part of MiniPoint. I love working on languages; small or big, it's not important, as long as they are interesting. In fact, MiniPoint has more than one language/parser: workflows, document templates, AngularJS expressions, ... Most are tiny, but every one makes the code cleaner and the user experience more pleasant.

Take, as an example, the simple boolean language used to express the visibility of a field on a view; as I mentioned in my previous post, you can make individual fields visible or not (and required or not) using boolean expressions (or, and, not, <, >, ==, <>) over constants and other fields in the view (or in the schema).
The expression is parsed and then analyzed to produce two things: an AngularJS expression, which will be inserted into the ng-required and ng-show/ng-hide attributes to make it work entirely on the client side, and the list of affected fields.
What is the purpose of this list? Remember that a view is only a subset of the schema, but in these visible/required expressions you can refer to other members of the schema as well (from previous views, usually).
AngularJS initializes its "viewmodel" (the $scope) with an ajax request (getting JSON data from an ASP.NET controller); for efficiency, we keep this data at a minimum, which usually means a subset of the fields in the view (readonly fields, for example, are rendered on the server and not transmitted). When we have expressions, however, the fields referenced in them need to end up in the $scope too; hence the need to parse and analyze the expressions.

But I am digressing; I will write more about the interaction and integration of AngularJS and Razor (and MVC) in another post.

Now I would like to talk about some aspects of the workflow language that needed a bit of thinking on how to best implement them.

I wanted it to be simple, natural to use (i.e. you can use statements/expressions/constructs wherever it makes sense and expect them to work) but still powerful enough. And have a clean grammar too :)

I wrote some languages in the past, but this is the first one where statement terminators (';') are optional, and you can just use line breaks.
The people that are going to write the schemas and workflows (so not the end users, but the "power users", or site administrators) have a strong background in... VBA. Therefore, whenever a decision about the language came up, I tried to use a VBA-like syntax, to give it a familiar look. So, for example, If-EndIf instead of braces { }.
And I wanted to do it because it was interesting, of course! I had to structure my semantic actions a bit differently, as I was getting reduce-reduce conflicts using my usual approach.

On the surface, it seems that you have statements (very similar to other programming languages), choices (if-then-else-endif) and gotos. I know.. Ugh! gotos! Bear with me :)

step ("view1")

var i = 10

if (i + me.SomeField > 20)
  i = i - 20
  goto view1
else
  goto end
endif

//Generate a report, using the "report1" template
report ("report1")

step ("final"): end 

Under the hood, things are a bit.. different. Remember, this is a textual language for a flowchart. So, "step" is actually an input/output block (parallelogram); statements and reports are generic processing steps (rectangles); the "if-then-else" is a choice (rhombus). Therefore if-then-else has a stricter than usual syntax, and it's actually:
IF (condition) [statements] GOTO ELSE [statements] GOTO ENDIF
so that the two possible outcomes are always steps in the workflow.

Therefore, under the hood you have a list of "steps", each of which may be a statement list (like "var i = 10"), an input/output step ("step" or "delay"), or a branch.
As a consequence, the language somehow has two levels: at the first level you have "steps"; then, among "steps" or inside them (look at the if-then-else in the example) you can have expressions and statements like in most other programming languages. The two levels appear quite clearly in the grammar, but I think it's difficult to tell from the syntax. And this is what I wanted to accomplish. Those who used it to author the workflows were quite pleased, and used it with no problems after very little training.

Translation to WF activities was fun as well: I built a custom Composite Activity to schedule all the steps; also, statements (instead of receiving their own activity) were merged together and executed by the main composite activity, to improve efficiency (and to make it easier to add other statements: a new one does not require a new activity).
