Domain Driven Design: Entities, Value Objects, Aggregates and Roots with JPA (Part 3)
Where’s the application in the demo code? There isn’t one.
If you look at the source code, there is no front-end, no web servlets, no screens, and no Java main class, so there is no way to run it as an application. All you can do is run the test class. It is a library project: a rich "back-end" that can talk to a database. In this post, I will recommend that you don't share such a library project between multiple teams.
The idea is that the library exposes many root entities and that you write your own application code to choreograph them. Any good back-end library should be agnostic to the specific screens or workflows it supports. A good DDD library models the problem domain concepts and invariants so that the library remains reasonably stable and usable even when the screens change wildly from one iteration to the next.
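As a sketch of that split, here is what "application code choreographing root entities" might look like. All the names here (Customer, Order, CheckoutWorkflow) are hypothetical stand-ins, not the actual demo code:

```java
import java.util.ArrayList;
import java.util.List;

// Root entities exposed by the (hypothetical) domain library:
class Customer {
    final String userName;
    Customer(String userName) { this.userName = userName; }
}

class Order {
    final String orderNumber;
    final List<String> skus = new ArrayList<>();
    Order(String orderNumber) { this.orderNumber = orderNumber; }
    void addLine(String sku) { skus.add(sku); }  // invariant checks would live here
}

// Application code owned by the front-end team, not by the library.
// It choreographs the roots; it knows nothing about screens or persistence:
class CheckoutWorkflow {
    Order placeOrder(Customer customer, List<String> skus) {
        Order order = new Order("ORD-" + customer.userName);
        skus.forEach(order::addLine);
        return order;
    }
}

class Demo {
    public static void main(String[] args) {
        Order o = new CheckoutWorkflow()
                .placeOrder(new Customer("alice"), List.of("SKU-1"));
        System.out.println(o.orderNumber);  // ORD-alice
    }
}
```

The point is the direction of dependency: the workflow class depends on the roots, never the other way around, so the library survives screen churn.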
It is a bad idea to share such a rich domain library directly between the many front-ends or processes that collectively form a platform. If you need to refactor the database schema for new business logic (or for performance), every process would need to upgrade. When a front-end maintained by another team tries to upgrade, it may well break if it depends upon the idiosyncrasies of a given version of the rich library.
Can we fix this by running all the downstream projects on a nightly build, to see when the maintainers of the rich library make a change that breaks them? Sure. But that doesn't solve the problem if many downstream projects break in strange ways whenever we make "basic" changes to the library. Too much coupling between teams is a killer. Even with one small team, sharing a rich domain model between processes with different upgrade cadences causes maintenance headaches.
Rather than distributing a rich domain model as a library, wrap it in a RESTful business API. The business API should model a stand-alone platform service, built around one or a few root entities that are enough to "do something" sufficiently stand-alone. Such services should expose as little as they can get away with at each release. The external view of a service is a public business API that should be as narrow as possible.
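A minimal sketch of that narrowness, with hypothetical names: the service's public face is a small interface, and the rich domain model stays hidden behind it. Note that nothing here needs to be `public` beyond the interface methods themselves:

```java
import java.util.HashMap;
import java.util.Map;

// The narrow external contract: two operations, strings in, strings out.
interface OrderingService {
    String placeOrder(String userName, String productSku);  // returns an order number
    String orderStatus(String orderNumber);
}

// The implementation may choreograph many root entities internally,
// but none of them leak across the boundary:
class OrderingServiceImpl implements OrderingService {
    private final Map<String, String> statusByOrderNumber = new HashMap<>();

    @Override
    public String placeOrder(String userName, String productSku) {
        String orderNumber = "ORD-" + userName + "-" + productSku;
        statusByOrderNumber.put(orderNumber, "PLACED");
        return orderNumber;  // only the natural key crosses the boundary
    }

    @Override
    public String orderStatus(String orderNumber) {
        return statusByOrderNumber.getOrDefault(orderNumber, "UNKNOWN");
    }
}
```

In a real deployment this interface would be served as JSON over HTTP; the Java interface stands in for that contract in this sketch.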
Such an approach is often described as a "share nothing" architecture. You cannot actually share nothing and still be part of the same platform, so it is better described as "share no implementation details". Try to have any data exchanges use the natural keys that end users understand, such as "user name," "order number," or "product SKU," rather than any database-generated primary keys. Why? Because the things that users talk about are likely to change less than purely technical details.
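To illustrate that translation step (again with hypothetical names): a surrogate id such as a JPA `@Id long` may exist inside the service, but it is mapped to a user-visible natural key before any data is shared:

```java
// What crosses the service boundary: only natural keys end users talk about.
record OrderSummary(String userName, String orderNumber, String productSku) { }

class OrderLookup {
    // Inside the service a database-generated id exists; it is translated
    // into a stable, user-visible order number before the data leaves:
    OrderSummary toExternal(long internalId, String userName, String sku) {
        String orderNumber = String.format("ORD-%06d", internalId);
        return new OrderSummary(userName, orderNumber, sku);
    }
}
```

If the schema is later refactored and the surrogate ids change, only the translation changes; the exchanged order numbers stay stable.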
Examples? An e-commerce website can have one service that deals only with customers managing their addresses and payment details. Another manages only products. Another only handles searches for products. Another does product recommendations. Another does order fulfillment (or, more likely, interfaces with a back-end system that does the real work). In theory, each could be written in a different programming language, use a different data store, be maintained by a different team, and be accessible only via JSON over HTTP (or anything equivalent).
Then we can have both a public website and a secure customer-support application that use a set of common business services. The two front-ends can be deployed separately from each other and from the business services they use, and each business service can be deployed separately to add new capabilities in support of one of the front-ends.
Critically, services can choose to support two APIs if that makes the system evolve faster. An example would be where one or other front-end requires a lot of new features quickly. We do not want to break the more stable front-end for no good reason, so we can keep a stable service API working while evolving an unstable one. Happy days.
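One hypothetical shape for that: freeze a v1 contract for the stable front-end while a v2 contract churns for the fast-moving one, with one implementation serving both:

```java
import java.util.List;

// Frozen contract: the stable front-end relies on this and it does not change.
interface ProductSearchV1 {
    List<String> search(String query);
}

// Evolving contract: new parameters can be added here release by release.
interface ProductSearchV2 extends ProductSearchV1 {
    List<String> search(String query, int maxResults);
}

class ProductSearchService implements ProductSearchV2 {
    private final List<String> catalogue = List.of("SKU-1", "SKU-2", "SKU-3");

    @Override
    public List<String> search(String query) {
        return search(query, Integer.MAX_VALUE);  // v1 delegates to v2 behaviour
    }

    @Override
    public List<String> search(String query, int maxResults) {
        return catalogue.stream()
                .filter(sku -> sku.contains(query))
                .limit(maxResults)
                .toList();
    }
}
```

When the unstable API finally settles, v1 can be retired on the stable front-end's own schedule rather than the service's.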
Of course, you should think very hard before splitting your application into many independent parts while it is still unstable. Some people advocate starting off with a monolith; other people think that is a bad idea. I would advocate using the techniques outlined in this series of posts to write well-structured code. If your code is well structured, it can either be deployed all together or redeployed as a series of independent services. The fallacies of distributed computing teach us that keeping everything in one location for as long as possible is a very good idea. Conway's Law teaches us that breaking things out into multiple teams before the architecture is stable is going to be very expensive. This implies you should write well-structured code and deploy it all at the same time for as long as you can.
In the next post, we will get into how the
public keyword is one of the most misused features of the Java language. Fighting hard against its use can help the compiler enforce business boundaries within your code.