Archive for the ‘Architecturing’ Category

Using Loading contexts effectively

March 24th, 2011

A long, long time ago I promised a few people on my Twitter feed that someday I'd post somewhere a sample of how to deal with "dependency hell".

By that I mean something you've probably experienced yourself. Suppose you're happily using log4net, and then you start using NHibernate, which happens to use a different version of log4net. Hmmm. Easy: you can switch your own copy to the one NHibernate uses. Then you add another dependency to your project, say SuperCoolWidget, and it happens to use yet another version of log4net.

You can rely on binding redirects, as long as you're damn sure the API surface touched by these projects hasn't changed in the dependency (in this case log4net); otherwise you'll get exceptions at runtime (cannot find member).

IMHO it's especially problematic to rely on binding redirects because some code, somewhere, isn't exercised very often, and under some special circumstance it may try to use an API that isn't there.
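For reference, a binding redirect lives in the app's .config file and looks roughly like this (a sketch: the version numbers are real log4net releases, but the publicKeyToken is elided and would have to match the copy you ship):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- publicKeyToken elided; it must match the signed log4net you ship -->
        <assemblyIdentity name="log4net" publicKeyToken="..." culture="neutral" />
        <!-- every request for the old version is silently answered with the new one -->
        <bindingRedirect oldVersion="1.2.0.30714" newVersion="1.2.10.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Which is exactly the problem: the redirect applies process-wide, so every consumer gets the new version whether or not its code was compiled against that API surface.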

log4net is an interesting example, but the problem applies to any scenario where you have common dependencies in different versions.

Looking at another camp, Java in particular, OSGi brought some interesting ideas to this very situation. There it's even worse, since jars are way looser than assemblies. OSGi's solution is to either have independently versioned bundles which your bundle may explicitly declare it depends upon, _or_ have your bundle carry everything it needs to work. Multiple bundles can be loaded and executed in a single VM, and they are guaranteed not to step on each other's toes.

Java’s enabler to this magic is the ClassLoader.

In .NET there isn't a concrete equivalent of the ClassLoader, but we have a loader, and it has different contexts; in fact, as many as you want. The Load and LoadFrom contexts are the typical ones you're exposed to. More resources: Choosing a Binding Context and LoadFile vs LoadFrom.

By using a combination of the right loading context and the AssemblyResolve event, you can achieve the behavior of isolated silos loading the same assemblies (with different versions) in the same AppDomain.

I created a sample to demonstrate the idea, so you can take it from there. I've tried to minimize the concepts: no MEF, no Windsor, and it's not a web app. The app lives in a build folder, and each module lives in its own subfolder under modules, carrying its own dependencies.


The build folder contains the app, which doesn’t do much:

private static readonly CustomBinder _binder = new CustomBinder();

static void Main()
{
    var curDir = AppDomain.CurrentDomain.BaseDirectory;

    // Each module is loaded in its own isolated context,
    // so they can have conflicting dependencies and still work
    var modules = LoadModules(_binder, curDir);

    foreach (var module in modules)
    {
        // ... exercise each module through the IModule contract
    }
}

So it loads "modules" in a kind of late-bound way, using a well-known contract: IModule.
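The contract assembly itself isn't shown in the post, but it boils down to a single shared interface, something along these lines (the member name here is my assumption, not necessarily what the sample uses):

```csharp
namespace WellKnownContracts
{
    // The one type shared between the host and every module.
    // It must live in an assembly that is loaded only once (in the
    // default Load context), so all modules agree on the same System.Type.
    public interface IModule
    {
        void Start();
    }
}
```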

Each module implementation depends on – guess what – log4net. But different versions of it. Each implementation of IModule looks pretty much the same, but the dependency version is quite different:

namespace FakeMod1
{
    using WellKnownContracts;

    public class Mod1Impl : IModule
    {
        private static log4net.ILog logger = log4net.LogManager.GetLogger(typeof(Mod1Impl));

        public Mod1Impl()
        {
            logger.Info("Mod1Impl constructed");
        }
    }
}

namespace FakeMod2
{
    using WellKnownContracts;

    public class Mod2Impl : IModule
    {
        private static log4net.ILog logger = log4net.LogManager.GetLogger(typeof(Mod2Impl));

        public Mod2Impl()
        {
            logger.Info("Mod2Impl constructed");
        }
    }
}

When we run the app we expect the following to happen:

  • module 1 is found
  • a logical context is created for it
  • each dependency within module 1 is satisfied within that boundary
  • module 2 is found
  • same steps, all over again

Running the app and watching the debugger confirms the expected behavior:

‘ParallelContexts.vshost.exe’: Loaded ‘ParallelContexts.exe’, Symbols loaded.

‘ParallelContexts.vshost.exe’: Loaded ‘WellKnownContracts.dll’, Symbols loaded.

‘ParallelContexts.vshost.exe’: Loaded ‘modules\mod1\FakeMod1.dll’, Symbols loaded.

‘ParallelContexts.vshost.exe’: Loaded ‘modules\mod1\log4net.dll’

log4net:ERROR No appenders could be found for logger (FakeMod1.Mod1Impl).

log4net:ERROR Please initialize the log4net system properly.

‘ParallelContexts.vshost.exe’: Loaded ‘modules\mod2\FakeMod2.dll’, Symbols loaded.

‘ParallelContexts.vshost.exe’: Loaded ‘modules\mod2\log4net.dll’

Notice that WellKnownContracts.dll isn't loaded more than once: since it was first loaded in the Load context, it's always found there (I'm a bit unsure whether it's even probed again after it's loaded for the first time).


How does it work?

The code is simple. The class CustomBinder takes a “module folder”, and starts a new logical context for it.

public partial class CustomBinder : IDisposable
{
    public BindingContext Add(string modFolder)
    {
        var ctx = new BindingContext(this);
        var files = Directory.GetFiles(modFolder);

        string entryPointFromManifest = null;

        foreach (var file in files)
        {
            // the manifest has an entry point, which is the first type/assembly we load.
            // This is just an optimization, so we don't have to load every assembly found within a package/module
            if (Path.GetFileName(file).Equals("manifest.xml", StringComparison.InvariantCultureIgnoreCase))
            {
                entryPointFromManifest = GetEntryPointFromManifest(file);
            }

            if (!file.EndsWith(".dll")) continue;

            var name = AssemblyName.GetAssemblyName(file);
            ctx.AddAssemblyName(name.Name, file);
        }

        if (entryPointFromManifest != null)
        {
            string[] split = entryPointFromManifest.Split(',');
            Debug.Assert(split.Length == 2);
            ctx.EntryPointTypeName = split[0];
        }

        return ctx;
    }
}

Then, whenever the loader probes for an assembly, we use the “requesting assembly” to bring the existing context back, and use it to load the right assembly.

private Assembly CurrentDomain_AssemblyResolve(object sender, ResolveEventArgs args)
{
    if (args.RequestingAssembly == null)
        return null;

    BindingContext ctx = GetBindingContext(args.RequestingAssembly);
    if (ctx == null) return null;

    Assembly assembly;
    if (ctx.TryGetAssembly(new AssemblyName(args.Name), out assembly))
        return assembly;

    return null;
}

As I mentioned before, this is just a proof of concept that shows what is possible. The sky is the limit for modular/composable frameworks out there. Enjoy!

Download sample

Multi-dimensional Separation of Concerns

October 2nd, 2008

Some things seem to be way ahead of their time and are completely ignored. IMHO multi-dimensional SoC is one of those cases. Before diving into it, let me give you some context.

Back in 2002 (or 03?) I was on a very challenging project team, working with Java/J2EE. Our client at the time, a big insurance company, knew exactly what they wanted, but not exactly how to achieve it. Fundamentally the app needed to quote and process insurance premiums. There were three channels: web site, branches and partners. Each channel could interfere in different ways with the workflow of the insurance processing and the actual calculation operation, which was big.

Some of the requirements were ludicrous, but real. Things like: we don’t do motorcycle insurance. But any director’s children can have insurance for their motorcycle. Can you believe it?

The obvious solution suggested was to parametrize everything. On each deploy package (one for the web, CDs for branches, CDs for partners) we would tweak the configuration, and that would affect the workflow and calculation. It didn't take long to prove that it wouldn't work. Back to research…

Someone suggested AOP. Remember, that was back in 02/03. We had AspectJ (and others), but they were in their early stages. The experiments demonstrated that, as long as we put the right extensibility points in place, we could use AOP to produce a proper package for the right channel. It wasn't much better than parametrization, but still…

It was then that I bumped into Hyper/J.

Hyper/J (or Hyperspaces) brings some interesting concepts. It states that each concern of your app (or, at the most basic level, each class) is a dimension. You can compose several dimensions and have a final product.

This is extremely powerful.

Imagine that you have an app up and running, and at some point it becomes a requirement to have a set of classes supporting custom serialization. All you need to do is start a new dimension made of abstract classes matching the namespace/name of the target classes, implementing that support. You can introduce new methods and fields, and implement interfaces.

Later, for some reason you’d want xml serialization. Another dimension and you’re set.

All of that keeping your core code untangled of these new concerns.

Porting this to our scenario at the time:

  • We would concentrate effort on building the core of our app: the generic workflow, the default calculation. By breaking the workflow and calculation into several methods, we enabled the extension points
  • Each new concern that affects any aspect is a new dimension, e.g. web_flavor
  • We compile and assemble the core plus each dimension required for each channel package

Benefits: complete separation of concerns that would allow the app to evolve with no friction; assembled packages that are simple to test; parallelization of work.

Hyper/J worked at the byte-code level. You would normally compile the core and each dimension individually, then run it to compose the final package.

For some reason, though, Hyper/J seems to be dead. It wasn't popular even when it was very active. Shame. An overview page on alphaWorks says it has evolved into another project: the Concern Manipulation Environment. CME seems to go more toward the AOP concepts, whilst chasing the same goal. I haven't used it, though, so no opinions.

I'm quite happy to see that Fabian and Stefan are going in this exact direction with the re:motion project, which seems to use Castle DynamicProxy, so the composition happens at runtime. I've been thinking about supporting something like this natively in Windsor 2.0 as well, although I believe it's easier to manage composition at development time.

Before someone jumps in and says "you can do the same thing with partial classes", think again. You can't. You can't create a separate assembly with MyApp.CustomizationsForMyGoldClient made of partials. Conceptually, partial classes solve a different problem.

DDD infected

July 11th, 2007

I just caught myself writing this snippet:

public class SessionCart
{
	private List<CartLine> lines = new List<CartLine>();

	public CartLine AddLine(ProductVariation productVariation)
	{
		CartLine line = GetLineByProductVariation(productVariation);

		if (line == null)
		{
			line = new CartLine(...);
			lines.Add(line);
		}
		else
		{
			line.IncrementQuantity();
		}

		return line;
	}
}


Where IncrementQuantity is:

public void IncrementQuantity()
{
	quantity++;
}

…and said to myself “Ok, I’m Domain-Driven Design infected”

Domain Driven Design and Castle

April 19th, 2007

Rafael and I were discussing how to approach a rich domain model and how Castle could make it easier or harder. We're both working on a project where the practices that Eric Evans demonstrates in his book can really make a difference.

What I want to avoid:

  • My domain model being just data containers that purely represent the database entities
  • Having the db or the UI influencing the domain design – usually for the worse
  • Contaminating the domain with things that it should not care about

There are situations where even the Aggregate pattern fits well. If you haven't read the book yet (I assume you're going to; really, you should), the Aggregate pattern dictates that a root entity should be responsible for the entities and value objects that relate to it. The simplest example: suppose you have an Order class (that is part of your domain) and OrderItem. Does it make sense to have an OrderItem alone? Can allowing someone to create, fill and save an OrderItem on its own violate invariants that the Order class enforces? If so, applying the Aggregate pattern should force your design to expose only the Order class and its operations to manipulate OrderItems.
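To make the boundary concrete, here is a minimal sketch of how that might look in code (the names and the internal-constructor trick are mine, not from the book):

```csharp
using System;
using System.Collections.Generic;

public class OrderItem
{
    // internal: only code in the aggregate root's assembly can create
    // an OrderItem, so it cannot float around on its own
    internal OrderItem(string product, int quantity)
    {
        Product = product;
        Quantity = quantity;
    }

    public string Product { get; private set; }
    public int Quantity { get; private set; }
}

public class Order
{
    private readonly List<OrderItem> items = new List<OrderItem>();

    public IEnumerable<OrderItem> Items
    {
        get { return items; }
    }

    public OrderItem AddItem(string product, int quantity)
    {
        // the invariant lives in one place: the root
        if (quantity <= 0)
            throw new ArgumentException("Quantity must be positive");

        var item = new OrderItem(product, quantity);
        items.Add(item);
        return item;
    }
}
```

Client code can read OrderItems all it wants, but every mutation goes through Order, where the invariants are enforced.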

If the motivation is hazy, stop to think about a similar situation you had in one of your projects.

Eric also suggests the use of Factories (standalone or factory methods) to create complex objects and enforce their invariants. That also sounds right for a few situations.

Applying all that might be tricky, though. We're used to using domain classes directly in MonoRail, in the binder:

public void Create([DataBind("product")] Product prod)

However, that forces us to expose a default parameterless constructor and writable properties for the fields on the form. No good.

We’re also used to decorate our classes with validation attributes:

public class Product
{
    private string name;

    [ValidateNonEmpty]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

But that contaminates my precious domain model. I won’t even mention ActiveRecord in this context.

One possibility is to use value objects (like the Java camp does) to carry data from the presentation layer to the domain model. Instead of using the Product class directly, I could use a ProductInfo. I can mess with it: expose all the properties I want, use all the attributes I want. So far it's the only solution I've found.
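A sketch of the idea (names mine; the real classes would carry whatever binder/validation attributes MonoRail needs):

```csharp
using System;

// The domain class keeps its invariants: no parameterless ctor, no public setters
public class Product
{
    public Product(string name, decimal price)
    {
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("name is required");

        Name = name;
        Price = price;
    }

    public string Name { get; private set; }
    public decimal Price { get; private set; }
}

// The "info" object is what the binder fills in from the form;
// it can be as permissive and attribute-ridden as we like
public class ProductInfo
{
    public string Name { get; set; }
    public decimal Price { get; set; }

    // invariants are enforced once, at the boundary with the domain
    public Product ToProduct()
    {
        return new Product(Name, Price);
    }
}
```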

Persistence presents a whole new challenge. It doesn't matter if I go with ActiveRecord or NHibernate: they impose constraints that I may not want to incorporate into my model. Rafael suggested using a domain model and a separate persistence model. That's fine, but then you'd have to maintain two models, aside from the value objects.

Another approach is to fall back on writing SQL code, or to come up with a smart mapper.

The thing is that implementing a repository with NHibernate or AR will eventually bypass the factories. Also no good.

We settled on experimenting with prototypes before committing to anything. Nevertheless, I've loved what I've read so far in the DDD book.

Is EDM the unlearned EJB lesson?

February 10th, 2007

The Enterprise JavaBeans 1.0 spec and implementation made history. In fact, the whole heavyweight container idea that Sun came up with is something to be studied, so we can take lessons from it and never allow this kind of error, one that led masses of programmers and projects to their doom, to take place ever again.

As with everything in Java, every 10 lines of Java code had to be accompanied by at least one XML file with 30 lines of markup. The heavyweight nature of the application containers made things very difficult to test. I've used EJB in a project (my decision) and relied on XDoclet to ease the burden of all the configuration and mapping generation.

To make things easier (or bearable), tools and editors were improved. See: the technology was flawed, as it was made stupidly complex, so the tools did what a developer could not possibly deal with by hand: the generation of everything that surrounded it to make it actually work (read: configuration, remote/non-remote classes, stubs, exceptions).

In response the Java community, which is reactive, came up with alternatives. The best and most successful one is Spring, followed by Hibernate, which inspired the latest version of the EJB spec and the extinction of heavyweight containers.

Unfortunately, Microsoft doesn't seem to read computer science history books. The EDM framework present in the new ADO.NET version is a monster. Your brain can't stop screaming YAGNI while you read the "our motivation" documents.

But first, the good things

Finally MS has made a move on providing a tool that goes beyond the core data access stuff and in a different direction than DataSets (although they are supported in LINQ statements). This is great and deserves a bow.

They have also tried to achieve what seem to be unique features in their Entity Framework, such as the ability to compose an entity as a logical view over n tables.

Now, being practical: who needs this? I've tried hard, for days since I read the documents, to come up with a situation where I'd say to myself, "hey, ORMs are broken, as I can't make an abstraction of an entity based on data from multiple tables".

To use the EDM you'd have to sit down and write:

  • a schema definition
  • the entity mapping
  • the code that maps each entity

But wait: guess who is going to generate everything for you? VS.NET.

Quoting the Entity Framework Overview document:

“When the mapping tool is used to create a conceptual to logical mapping, it produces an XML file that can be consumed by the run-time components of the ADO.NET Entity Framework. The appendix includes the XML representation of the mapping shown above in the section 6.2 “Sample mapping represented as XML”. Fortunately, tools will make it unnecessary for the vast majority of users to have to understand or deal with these XML files.”

If you have to rely on tools to use a technology that you otherwise just couldn't use, maybe there's something wrong with it!

Microsoft should hire a YAGNI guy per product. That would translate into more frequent releases and smaller, more practical tools. I dare say stockholders would also appreciate it.

But nevertheless, there will be people welcoming EDM without any piece of criticism.

Using events to reduce coupling and increase flexibility

February 4th, 2007

Those who know me are aware that I'm a big fan of the service pattern and of domain objects (usually with a pinch of ActiveRecord), and I think that the Repository is the most overrated pattern nowadays. So now you know where I'm coming from.

I was asked to disclose more information about something I've mentioned lots of times on the Castle Project list and also explained on the Cuyahoga development mailing list: the use of events to decrease coupling and, as a consequence, maximize flexibility and extensibility. This is something that is rarely used even in the Java camp, but when it is used, they create some really cool stuff (want an example? Check SEDA and Apache Directory).

The goal is to have the main players of your application fire notifications about important things that have just happened in it. In Java you'd mostly use interfaces, or some magic with reflection, or embed a scripting engine like Rhino or Jython. On .NET you have the option of using plain events, but the other options apply as well.

Now, being pragmatic: when is this kind of architecture useful? IMHO it's useful when you have unrelated processes that should take place in response to something that happened in the system. Last year I was porting an application that was complex, as it had to deal with both a legacy database and a new database. The authentication service authenticated the user against the legacy database, and if that passed we had to ensure a compatible token existed in the new database. Was that a task for the authentication service? Not at all (remember Separation of Concerns?).

So when a user successfully logged in, a UserLoggedIn notification was broadcast. One of the components listening to this notification was the EnsureTokenExists process, which handled the existence of the user in the new database. Henry and Rafael (the guys at Stronghold) have told me about a system they designed that processed trading orders from a variety of data sources and relied on events to attach different processes/validations before committing the trade.

In the system I'm working on, a user signing in is also translated into an event, along with a lot of different notifications. This is especially useful to centralize, for instance, logging, email sending and data replication.

It's important that you do not rely on event ordering, though. If your application does need to rely on event ordering, then you'd need to build and manage a pipeline. This is a little more sophisticated, but easily achievable.
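A pipeline in this sense is just an explicitly ordered list of steps; a minimal sketch (all names here are mine, for illustration):

```csharp
using System;
using System.Collections.Generic;

public class PaymentContext
{
    public List<string> Trace = new List<string>();
}

public interface IPaymentStep
{
    void Process(PaymentContext context);
}

public class ValidateStep : IPaymentStep
{
    public void Process(PaymentContext context) { context.Trace.Add("validate"); }
}

public class ChargeStep : IPaymentStep
{
    public void Process(PaymentContext context) { context.Trace.Add("charge"); }
}

public class Pipeline
{
    private readonly List<IPaymentStep> steps = new List<IPaymentStep>();

    public Pipeline Add(IPaymentStep step)
    {
        steps.Add(step);
        return this;
    }

    // unlike bare multicast events, the execution order here is explicit
    public void Execute(PaymentContext context)
    {
        foreach (var step in steps)
            step.Process(context);
    }
}
```

Building it as `new Pipeline().Add(new ValidateStep()).Add(new ChargeStep())` guarantees validation runs before charging, which plain event subscription does not.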

A practical example

Consider a standard e-commerce application. Suppose we have a payment service. Its goal is to perform some business rules and delegate the payment processing to the payment method selected (paypal, credit card, ACH). Nothing more, nothing less.

Now we need to send confirmation emails, send failure notices to the site administration, and update some statistics based on the result of the payment process. None of this belongs in the payment service. My approach is to create a notification service for the payment service. You might consider creating just one big notification service for the whole application, but this might lead to graph cycling (you've been warned).

We can use the common terms for the source of events (the publisher) and the consumers (the subscribers). The payment notification service is the publisher, and we can create three subscribers: PaymentEmailSubscriber, PaymentFailureSubscriber and PaymentStatisticSubscriber.

Here is how the notification service might look:

public interface IPaymentNotificationService
{
    void FirePaymentSucceeded(IPaymentMethod method, RegisteredUser user);
    void FirePaymentFailed(IPaymentMethod method, RegisteredUser user, Exception ex);

    event PaymentNotificationHandler PaymentSucceeded;
    event PaymentErrorNotificationHandler PaymentFailed;
}

And the subscribers:

public class PaymentEmailSubscriber
{
    public void OnPaymentSucceeded(IPaymentNotificationService notificationService, PaymentNotificationArgs args)
    {
        Console.WriteLine("Sending email to user notifying about successful transaction");
    }
}

public class PaymentFailureSubscriber
{
    public void OnPaymentFailed(IPaymentNotificationService notificationService, PaymentErrorNotificationArgs args)
    {
        Console.WriteLine("Sending email to Admin about transaction failure");
    }
}

public class PaymentStatisticSubscriber
{
    public void OnPaymentSucceeded(IPaymentNotificationService notificationService, PaymentNotificationArgs args)
    {
        Console.WriteLine("Statistics: one more successful payment");
    }

    public void OnPaymentFailed(IPaymentNotificationService notificationService, PaymentErrorNotificationArgs args)
    {
        Console.WriteLine("Statistics: one more failed payment");
    }
}

Finally, the payment service:

public class PaymentService
{
    private readonly IPaymentNotificationService notificationService;

    /// <summary>
    /// Initializes a new instance of the <see cref="PaymentService"/> class.
    /// </summary>
    /// <param name="notificationService">The notification service.</param>
    public PaymentService(IPaymentNotificationService notificationService)
    {
        this.notificationService = notificationService;
    }

    /// <summary>
    /// Process payment
    /// </summary>
    /// <param name="method">The payment method.</param>
    /// <param name="user">The registered user instance.</param>
    public void Pay(IPaymentMethod method, RegisteredUser user)
    {
        try
        {
            if (user.IsBlocked)
                throw new PaymentException("Cannot proceed checkout as the user is blocked");

            if (!method.IsValid())
                throw new PaymentException("Payment information refused by third party. " +
                                           "Please review the information");

            // ... delegate the processing to the selected payment method ...

            notificationService.FirePaymentSucceeded(method, user);
        }
        catch(Exception ex)
        {
            notificationService.FirePaymentFailed(method, user, ex);
        }
    }
}

Note how the design stays very simple with almost no effort; you can plug in subscribers to handle new requirements easily.

You can download the EventSample1 to study it in more detail. Note that I'm not using an IoC container here, just to keep things as simple as possible. The wiring takes place at application start-up:

public static void Main()
{
    // Configure application (wire everything)

    IPaymentNotificationService notification = new DefaultPaymentNotificationService();

    PaymentEmailSubscriber subscriber1 = new PaymentEmailSubscriber();
    PaymentFailureSubscriber subscriber2 = new PaymentFailureSubscriber();
    PaymentStatisticSubscriber subscriber3 = new PaymentStatisticSubscriber();

    // subscribing

    notification.PaymentSucceeded += new PaymentNotificationHandler(subscriber1.OnPaymentSucceeded);
    notification.PaymentFailed += new PaymentErrorNotificationHandler(subscriber2.OnPaymentFailed);
    notification.PaymentSucceeded += new PaymentNotificationHandler(subscriber3.OnPaymentSucceeded);
    notification.PaymentFailed += new PaymentErrorNotificationHandler(subscriber3.OnPaymentFailed);

    PaymentService paymentService = new PaymentService(notification);
}


Using Windsor and the Event wiring facility

The Windsor Container can simplify the wiring for you if you use the Event Wiring facility. In this case the initialization code is reduced to:

WindsorContainer container = new WindsorContainer(new XmlInterpreter(new ConfigResource()));

And the configuration file:


<configuration>

    <facilities>
        <facility id="event.wiring"
                  type="Castle.Facilities.EventWiring.EventWiringFacility, Castle.MicroKernel" />
    </facilities>

    <components>

        <!-- Subscribers -->

        <component id="email.subscriber"
                   type="EventsSample2.Subscribers.PaymentEmailSubscriber, EventsSample2" />
        <component id="failure.subscriber"
                   type="EventsSample2.Subscribers.PaymentFailureSubscriber, EventsSample2" />
        <component id="statistics.subscriber"
                   type="EventsSample2.Subscribers.PaymentStatisticSubscriber, EventsSample2" />

        <!-- Notification service -->

        <component id="payment.notification.service"
                   service="EventsSample2.IPaymentNotificationService, EventsSample2"
                   type="EventsSample2.DefaultPaymentNotificationService, EventsSample2">
            <subscribers>
                <subscriber id="email.subscriber" event="PaymentSucceeded" handler="OnPaymentSucceeded" />
                <subscriber id="failure.subscriber" event="PaymentFailed" handler="OnPaymentFailed" />
                <subscriber id="statistics.subscriber" event="PaymentSucceeded" handler="OnPaymentSucceeded" />
                <subscriber id="statistics.subscriber" event="PaymentFailed" handler="OnPaymentFailed" />
            </subscribers>
        </component>

        <!-- Services -->

        <component id="payment.service"
                   type="EventsSample2.PaymentService, EventsSample2" />

    </components>

</configuration>


Again, a sample is available if you want to download it.

Final thoughts

As with any pattern, do not overuse it. Again: do not overuse it. I only rely on it when I'm about to write code that is unrelated to the primary concern of a class. An authentication service should not be concerned with legacy and non-legacy databases. That's the line, and good judgment shall lead you down the right path.

Hope you enjoyed the reading. Cheers!