Domain Driven Design and Castle

April 19th, 2007

Rafael and I were discussing how to approach a rich domain model and how Castle could make it easier or harder. We’re both working on a project where the practices that Eric Evans demonstrates in his book can really make a difference.

What I want to avoid:

  • My domain model being just data containers that purely represent the database entities
  • Having the db or the UI influencing the domain design – usually for the worse
  • Contaminating the domain with things that it should not care about

There are situations where even the Aggregate pattern fits well. If you haven’t read the book yet (I assume you’re going to; really, you should), the Aggregate pattern dictates that a root entity should be responsible for the entities and value objects that relate to it. The simplest example: suppose you have an Order class (that is part of your domain) and OrderItem. Does it make sense to have an OrderItem alone? Can allowing one to create, fill and save an OrderItem violate invariants that the Order class enforces? If so, applying the Aggregate pattern should force your design to expose only the Order class and its operations to manipulate OrderItems.
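
The Order/OrderItem example above can be sketched roughly like this. The AddItem signature and the quantity rule are illustrative assumptions; the point is that items can only be created through the root, so the root gets a chance to enforce its invariants:

```csharp
using System;
using System.Collections.Generic;

// A minimal sketch of the Aggregate pattern: Order is the root, and
// OrderItem can only be constructed through it. (AddItem's signature and
// the quantity check are assumptions for the sake of the example.)
public class Order
{
    private readonly List<OrderItem> items = new List<OrderItem>();

    public IEnumerable<OrderItem> Items
    {
        get { return items; }
    }

    public void AddItem(string productName, int quantity)
    {
        // the invariant lives in the root, not in each item
        if (quantity <= 0)
            throw new ArgumentException("Quantity must be positive", "quantity");

        items.Add(new OrderItem(productName, quantity));
    }
}

public class OrderItem
{
    private readonly string productName;
    private readonly int quantity;

    // internal: only code in the aggregate's assembly can construct an item
    internal OrderItem(string productName, int quantity)
    {
        this.productName = productName;
        this.quantity = quantity;
    }

    public string ProductName { get { return productName; } }
    public int Quantity { get { return quantity; } }
}
```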

If the motivation is hazy, stop to think about a similar situation you had in one of your projects.

Eric also suggests the use of Factories (standalone or factory methods) to create and enforce invariants for complex objects. It also sounds right for a few situations.
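
As a sketch of that idea: when the constructor is private, every caller is forced through the factory method, so the invariant check cannot be skipped. The Customer class and its non-empty-name rule below are illustrative assumptions, not anything from the book or from Castle:

```csharp
using System;

// A sketch of a factory method enforcing an invariant at creation time.
// Customer and its non-empty-name rule are assumptions for illustration;
// the point is the private constructor forcing every caller through Create.
public class Customer
{
    private readonly string name;

    private Customer(string name)
    {
        this.name = name;
    }

    public string Name
    {
        get { return name; }
    }

    public static Customer Create(string name)
    {
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("A customer requires a non-empty name", "name");

        return new Customer(name);
    }
}
```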

Applying all that might be tricky, though. We’re used to using domain classes directly on MonoRail, in the binder:

public void Create([DataBind("product")] Product prod)
{
    ...
}

However that forces us to expose a default “parameterless” constructor and writable properties for the fields on the form. No good.

We’re also used to decorate our classes with validation attributes:

public class Product
{
    private string name;

    [ValidateNonEmpty]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

But that contaminates my precious domain model. I won’t even mention ActiveRecord in this context.

One possibility is to use value objects (like the Java camp does) to carry data from the presentation to the domain model. Instead of using the Product class directly I could use a ProductInfo. I can mess with it, expose all the properties I want, use all the attributes I want. So far it’s the only solution I’ve found.
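
A sketch of that ProductInfo carrier might look like the following. The Name/Price fields and the factory call in the commented-out controller action are assumptions for illustration; the idea is that the DTO is free to have a parameterless constructor and writable properties, because it never is the domain object:

```csharp
// A dumb carrier the binder can fill freely, so the real domain class keeps
// its invariants. (The Name/Price fields are illustrative assumptions.)
public class ProductInfo
{
    private string name;
    private decimal price;

    // a parameterless constructor and writable properties are fine here;
    // this class exists only to carry form data
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public decimal Price
    {
        get { return price; }
        set { price = value; }
    }
}

// In the controller, the binder fills the ProductInfo and the translation
// into the domain happens in one place, e.g.:
//
// public void Create([DataBind("product")] ProductInfo info)
// {
//     // a domain factory (assumed) enforces the invariants
//     Product prod = Product.Create(info.Name, info.Price);
//     ...
// }
```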

Persistence presents a whole new challenge. It doesn’t matter if I go with ActiveRecord or NHibernate. They impose constraints that I may not want to incorporate on my model. Rafael suggested using a Domain Model and a separated persistable model. That’s fine but then you’d have to maintain two models, aside from the value objects.

Another approach is to fall back to writing SQL code, or to come up with a smart mapper (maybe base4.net?).

The thing is that implementing a repository with NHibernate or AR will eventually bypass factories. Also no good.

We settled on experimenting with prototypes before committing to anything. Nevertheless, I loved what I’ve read so far in the DDD book.

17 Responses to “Domain Driven Design and Castle”

MauricioC Says:

“But that contaminates my precious domain model. I won’t even mention ActiveRecord in this context.”

I agree about ActiveRecord, but I have no problem with this kind of validation attribute (well, the name “validate” in the attribute is not a great idea, but the concept is). It seems appropriate to me to have this kind of information in the domain class, as it probably reflects a business concern (a simple one, I know, but still). What do you think?

hammett Says:

I have mixed feelings. While it might be OK to declaratively expose some validation, I’m not sure it’s the right place. In the DDD world the invariant should be expressed in the code, I guess that means in the imperative way.

I’m not sure. But even if I do use the attributes, who will run the validators? MR will. But what if there’s more than one entry point to the domain, like a web service or a REST API?

I’d rather have this check in the constructor or in a factory. I can change my mind as I go further with this project, though.

Mark Says:

Hi

I just want to say that I think it is great that you grapple with the Castle Project’s current ‘limitations’ in such a public manner. It is such a refreshing change from the normal way of only bragging about the good bits!
Sad to say that I’m rather new to these paradigms so have no great answers, but I hope your last couple of posts spark some good ideas to help continue to build the Castle platform.

Cheers

Franco Ponticelli Says:

I think that the right way is to start thinking in a different way: using the db as a storage medium is not always the best solution. Serialization is an underestimated way to persist the state of objects. Prevalence layers and object databases are alternatives too… Indeed, there are not many available solutions for .NET in this area.

hammett Says:

Mark, some bits of Castle make me sick. :-)
The good thing is that we can always rewrite those bits instead of living with something you’re not proud of.

Franco, I know where you’re coming from. I’ve tried persistence engines in the past, enough to not like them. Haven’t tried db4o, though.

And I think the DB is fine. I just feel uncomfortable when it dictates how my entities should look.

Alex James Says:

Very interesting…

Have you thought of what might happen if you had meta-data describing for example the mono-rail part of castle in the database? And you upgraded mono-rail so that you could update this meta-data and it would ensure that the structure you describe in meta-data matches the reality? I.e. it would (Picard) “make it so”!

More here: http://www.base4.net/blog.aspx?ID=384

Joe Ocampo Says:

“They impose constraints that I may not want to incorporate on my model. Rafael suggested using a Domain Model and a separated persistable model. That’s fine but then you’d have to maintain two models, aside from the value objects.”

I don’t know what you mean by maintaining two models. There is always one domain model and the repository layer isolates the persistence mechanism that you choose.

I am personally not a fan of decorators, as they complicate the model from a model-driven approach. But Evans does mention the use of services to validate objects. You may also use specifications and constraints, which are talked about later in the book. You chain the specifications and constraints as predicates to achieve validation.
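
The chaining of specifications as predicates mentioned above might be sketched like this. ISpecification, the And combinator, and PredicateSpecification are illustrative assumptions, not an existing Evans or Castle API:

```csharp
using System;

// A rough sketch of chaining specifications as predicates. All names here
// are assumptions for illustration.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// Combines two specifications; both must hold.
public class AndSpecification<T> : ISpecification<T>
{
    private readonly ISpecification<T> left;
    private readonly ISpecification<T> right;

    public AndSpecification(ISpecification<T> left, ISpecification<T> right)
    {
        this.left = left;
        this.right = right;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return left.IsSatisfiedBy(candidate) && right.IsSatisfiedBy(candidate);
    }
}

// Adapts a plain predicate, so simple rules are cheap to write.
public class PredicateSpecification<T> : ISpecification<T>
{
    private readonly Predicate<T> predicate;

    public PredicateSpecification(Predicate<T> predicate)
    {
        this.predicate = predicate;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        return predicate(candidate);
    }
}
```

Validation then becomes evaluating the chained specification against a candidate object, which keeps the rules out of attribute decorations.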

My two cents.

I have recently started a blog series on DDD that addresses some of these concepts.

http://www.lostechies.com/blogs/joe_ocampo/archive/2007/04/02/a-discussion-on-domain-driven-design.aspx

Cheers!

sradack Says:

I ordered this book after you recommended it. I’ve really enjoyed all the great ideas contained within so far. Here are my thoughts:

ActiveRecord is great for simple applications. It gets you up and running quickly. However, when you need your model outside of just a web application context (or just one application), things start to get more complex. You start to realize that referencing your model assembly forces you to pull in references to ActiveRecord, NHibernate, Iesi.Collections, etc. All of your applications that utilize the model become coupled to ActiveRecord/NHibernate. I find this disconcerting.

I’ve heard people argue, and indeed I believe even Evans makes this concession, that your persistence framework forces you to have some knowledge of what’s going on under the hood. I agree with this to some extent. I often find myself having to create specific queries to improve the performance of specific use cases in my application. I’ve largely started to remove the rich associations between domain model objects in favor of specialized queries. Evans recommends reducing application complexity by only maintaining associations that are absolutely necessary, especially between aggregates. In my experience, navigation via these rich associations tends to lead to poor performance due to N+1 select lazy loading issues, not to mention data consistency and contention issues. I also often find that I need only a subset of a mapped collection, which, when navigated to via an association, leads to loading more data from the database than is needed.

So, since I end up writing all of these specialized queries anyway, maybe a way to sidestep these issues and to reduce coupling between applications and a common domain model is to introduce an application service layer, passing data transfer objects back and forth. When this layer is introduced, you can start to decouple client applications from the persistence concern. One drawback, I suppose, is that you have less flexibility at the application level. In general, though, I haven’t found this to be an issue. The application service layer also introduces some great seams to inject logging and security concerns.

I think I could go on forever on this topic. I still currently use ActiveRecord but have contemplated moving to a truly persistence agnostic domain model and moving all persistence issues to repositories based on aggregates.

One side note: I think ActiveRecordBase and the IRepository interface (ala Ayende and others) tend to encourage the idea that all objects in the domain model are entities, and discourage aggregates. Evans illustrates that there are wide differences between entities, value objects, and services in the domain. In practice, I think there are enough differences between entities in the model that this interface really loses a lot of power. Often, for performance reasons, repositories are more than just simple objects with simple Get(), GetAll(), and Save() methods. To be fair, ActiveRecordBase and the IRepository interface provide functionality for passing in ICriteria or detached queries, but in practice I think even these query methods still tightly couple you to a specific persistence mechanism, ActiveRecord/NHibernate in this case. A client application that queries objects from a repository using this mechanism knows “a little too much” about the persistence implementation in my opinion, not to mention is quite vulnerable to changes in your domain model, which optimally is easy to refactor and change.

Well, I went a little long again… I’d love to hear people’s thoughts…

hammett Says:

Alex, I’m a bit skeptical about it. I think you might end up losing predictability, which I find to be very important.

Joe, what I meant is maintaining the domain model and a separated persistable model (using whatever: AR, NH, EDM, EntitySpaces, LLBLGen Pro). I’ll check your articles, thanks.

Steven, I agree about AR’s “suitability”. It’s also my impression that people claim to use DDD and just adopt the repository pattern. My goal with DDD is to help the team and myself: centralize the problem the app is trying to solve, so even if I stay away from the app it will be easy to get back into context.

The performance problems you raised are also interesting. I thought about that while reading the paragraph about aggregates and object model navigation. It sounds right, but has side effects…

Christopher Bennage Says:

From a pragmatic standpoint, I feel that MR already supports DDD better than WebForms. If we wanted to be a DDD purist we could just not use the databinding feature. :-P

All in all though, I’m really excited to hear you talk about this. Evans’ book really knocked my socks off.

Hmm… what if we wrote a databinder that worked against Repositories instead of the O/RM, using Windsor to locate the correct Repository, etc, etc. I guess that would need more DDD awareness under the hood in Castle, but that’s kind of the whole point of your post, isn’t it? :-)

JoeyDotNet Says:

I’m still learning a huge amount about DDD and applying it to my projects, but here’s my .02, whatever it’s worth.

I’d agree that ActiveRecord will get you up and running quickly, and for prototyping especially, that’s great. But if you’re trying to keep your domain objects “persistence ignorant” (PI) and stay as close to SRP as possible, vanilla AR doesn’t seem to be the best fit. The ActiveRecordMediator can help with this, but still some would say the mapping attributes too closely tie persistence to the model.

I recently used AR for a couple weeks to see how it would play in my model and was able to do quite a bit in very little time. But in my experience so far, PI domain objects/repositories are much more testable and more closely follow SRP.

Alex James Says:

“Alex, I’m a bit skeptical about it. I think you might end up losing predictability, which I find to be very important.”

How exactly do you think you might lose predictability? I’m not 100% sure what you are comparing to what here?

What is the starting point? What about it is predictable? And what do you see as the result of this meta-data approach? What about it makes it less predictable, exactly?

Just keen to understand your reasoning really…

MauricioC Says:

hammett: Declaratively exposing this is good in the sense that MonoRail can generate (for example) client side Javascript without the need to duplicate business logic in the model. I agree with your point, though. I guess what I’m thinking is some kind of call interception in an AOP way (but at runtime), so that the attribute can enforce itself. This sounds much like a proxy, but I can see this leading to all sorts of problems.

Darius Damalakas Says:

Very interesting post and all the comments.

However, reading all the comments and posts I got a bit lost.
Hammett – you mention three things you want to avoid, but what are the goals? To be more specific – what is the most important goal you seek?

We all seek the shortest development cycles, scalability, predictability, etc., but what’s the ultimate goal?

Personally, at the moment our goal is to program in such a way that the code base could be used in 100% of situations.

For example, we are currently building a “framework” to create UIs, which would do three things: provide easy databinding, a mechanism to undo/redo changes made to the domain model, and a way to persist data easily.

So, our framework must be created in such a way that it can be used in any situation. That means 85% of situations can be implemented very fast and get all the bindings, undo/redo, and persistence. The other 15% can also be implemented and gain the same features, but with a little more coding.

So, when you say you want to avoid “My domain model being just data containers that purely represent the database entities”, what’s the goal behind this?
From the perspective of my goal, this sounds OK, as long as it suits all my needs. If I see that I can speed up development by introducing entity services (classes localising business logic) and still handle 100% of cases, then I introduce them.

Ernst Says:

Nice to see DDD being discussed here at Castle.

“Applying all that might be tricky, though. We’re used to using domain classes directly on MonoRail, in the binder:

public void Create([DataBind("product")] Product prod)
{
    ...
}

However that forces us to expose a default “parameterless” constructor and writable properties for the fields on the form. No good.”

For very important objects that are part of the core domain I usually don’t compromise. I provide a Builder and bind it to the form fields:


public void Create([DataBind("product")] ProductBuilder builder)
{
    builder.Validate();
    if (builder.HasErrors)
    {
        HandleErrors(builder.Errors);
    }
    else
    {
        Product prod = builder.Product;
        ...
    }
}

This way, the builder may use whatever means the domain provides to build the product (Factory, Repository, AR).
Validations are also incorporated in the builder, which can once again use whatever means the domain provides.

I would not do this for every domain object, just for the ones that are core to the domain. This is the area where, no matter what framework I’m using, I don’t sacrifice the model’s integrity, however convenient the framework might make it seem.

Patrick Says:

This is a very specific point about validation… From my experience, I think there are two different types of validation: validation that is GUI-dependent, and validation I *need* in every situation on my domain model. The new validation stuff is great, but it’s all lumped together.

For each model I weigh it up and either put it all on the model or make something similar to what Ernst describes above. It’s usually just a wrapper class with validation and the extra GUI properties I need to grab. I use my service layer to handle the transfer into the model.

Interesting finding - 04/22/2007 « Another .NET Blog Says:

[...] Domain Driven Design and Castle [...]
