Domain Driven Design: Entities, Value Objects, Aggregates and Roots with JPA (Part 3)
Where’s the application in the demo code? There isn’t one.
If you look at the source code there is no front-end, no web servlets, no screens, and no Java main class, so there is no way to run it as an application. All you can do is run the test class. It is a library project: a rich “back-end” that can talk to a database.
The idea is that it exposes many root entities and that you write your own application code that choreographs those root entities. Any good back-end library should be agnostic to the specific screens or workflows that it supports. The idea of a good DDD library is that it models the problem domain concepts and invariants such that the library remains reasonably stable and usable even if the screens change wildly from one iteration to the next.
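As a minimal sketch of what that choreography might look like, here is some plain-Java application code driving a root entity through a repository. The `Order`, `OrderRepository` and `CheckoutService` names are hypothetical, not taken from the demo code, and an in-memory map stands in for JPA persistence:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// A root entity: invariant-enforcing behaviour lives on the entity itself.
class Order {
    private final String orderNumber;
    private boolean submitted;

    Order(String orderNumber) { this.orderNumber = orderNumber; }

    String orderNumber() { return orderNumber; }

    void submit() { this.submitted = true; }

    boolean isSubmitted() { return submitted; }
}

// Repository for the root; a real one would delegate to JPA.
class OrderRepository {
    private final Map<String, Order> store = new HashMap<>();

    void save(Order order) { store.put(order.orderNumber(), order); }

    Optional<Order> findByOrderNumber(String number) {
        return Optional.ofNullable(store.get(number));
    }
}

// Application code: load a root, invoke behaviour on it, save it back.
class CheckoutService {
    private final OrderRepository orders;

    CheckoutService(OrderRepository orders) { this.orders = orders; }

    boolean submitOrder(String orderNumber) {
        return orders.findByOrderNumber(orderNumber)
                .map(order -> {
                    order.submit();
                    orders.save(order);
                    return true;
                })
                .orElse(false);
    }
}
```

The point is that `CheckoutService` belongs to your application, not to the library: the library supplies the roots and their invariants, and the application decides in what order to pull them together.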
It is a bad idea to directly share such a rich domain library between many front-ends or processes that collectively form a platform. If you needed to refactor the database schema for new business logic (or for performance) every process would need to upgrade. When a front-end maintained by another team tries to upgrade, it may well break if it depends upon idiosyncrasies of a given version of the rich library.
Can we fix this by running all the downstream projects on a nightly build to see when the maintainers of the rich library make a change that breaks them? Sure. But that doesn’t solve the problem if we find that the many downstream projects break in strange ways when we make “basic” changes to the library. Too much coupling between teams is a killer. Even with one small team, sharing a rich domain model between processes with different upgrade cadences causes maintenance headaches.
Rather than distributing a rich domain model as a library, wrap it in a RESTful business API. The business API should model a stand-alone platform service. It can use one or a few root entities that are enough to “do something” sufficiently stand-alone. Such services should expose as little as they can get away with at each release. The outside of a public business API should be a narrow and long-supported contract.
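As a sketch of what “narrow contract” means, the public face of such a service can be reduced to a small interface; in practice it would be served as JSON over HTTP, but the shape of the contract is the same. The `ProductCatalogApi` name and methods are hypothetical, and an in-memory map stands in for the rich domain library behind it:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The narrow, long-supported contract: natural keys in, simple data out.
// Nothing about the internal entity model or schema leaks through it.
interface ProductCatalogApi {
    Optional<String> productName(String sku);
}

// The implementation hides the rich domain; only the contract is public.
class ProductCatalogService implements ProductCatalogApi {
    private final Map<String, String> productsBySku = new HashMap<>();

    void addProduct(String sku, String name) {
        productsBySku.put(sku, name);
    }

    @Override
    public Optional<String> productName(String sku) {
        return Optional.ofNullable(productsBySku.get(sku));
    }
}
```

Because callers only see `ProductCatalogApi`, the team owning the service is free to refactor entities, schema and persistence behind it without breaking anyone.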
Such an approach is often described as a “share nothing” architecture. You cannot actually share nothing and be part of the same platform. Better to describe it as “share no implementation details” and try to have any data exchanges use the natural keys that are understood by end users, such as “user name”, “order number”, “product SKU”, rather than any database-generated primary keys. Why? Because the things that the user talks about are likely to change less than any purely technical details.
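A minimal sketch of such an exchange, assuming a hypothetical payload exchanged between two services: the record is keyed entirely by natural identifiers, and no database-generated id appears in the wire format:

```java
// A cross-service payload keyed by natural identifiers only.
// Field names are hypothetical; note there is no "id" column anywhere.
record OrderLinePayload(String orderNumber, String productSku, int quantity) {

    // Render as the JSON exchanged between services.
    String toJson() {
        return String.format(
            "{\"orderNumber\":\"%s\",\"productSku\":\"%s\",\"quantity\":%d}",
            orderNumber, productSku, quantity);
    }
}
```

If either service later renumbers its primary keys, shards its tables, or moves to a different data store, this contract does not change, because “ORD-1001” and “SKU-42” are terms the business owns, not the database.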
Examples? An e-commerce website can have one service that deals only with customers managing their addresses and payment details. Another manages only products. Another only handles searches for products. Another does product recommendations. Another does order fulfilment (or, more likely, interfaces with a backend system which does the real work). In theory each could be written in a different programming language, use a different data store, and be maintained by a different team, accessible only via JSON over HTTP (or anything equivalent).
Then we can have both a public website and a secure customer support application that use a set of common business services. The two front-ends can be deployed independently of each other and of the business services they use. Each business service can be deployed separately to add new capability to support one of the front-ends.
Critically, a service can choose to support two APIs if that makes things more agile. An example would be where one or other front-end needs a lot of new features quickly. We don’t want to be breaking and fixing the more stable front-end for no good reason. So we can keep a stable service API working whilst evolving an unstable API alongside it. Happy days.
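The two-API idea can be sketched with a pair of interfaces: a frozen v1 contract for the stable front-end and a v2 contract that extends it for the fast-moving one. All the names here are hypothetical:

```java
// Frozen contract for the stable front-end: this does not change.
interface CustomerApiV1 {
    String displayName(String userName);
}

// Evolving contract for the fast-moving front-end: v2 extends v1,
// so one service implementation can satisfy both at once.
interface CustomerApiV2 extends CustomerApiV1 {
    String preferredGreeting(String userName);
}

class CustomerService implements CustomerApiV2 {
    @Override
    public String displayName(String userName) {
        // Unchanged v1 behaviour.
        return userName.toUpperCase();
    }

    @Override
    public String preferredGreeting(String userName) {
        // New v2-only capability, added without touching the v1 contract.
        return "Hello, " + displayName(userName) + "!";
    }
}
```

The stable front-end keeps compiling against `CustomerApiV1` and never notices v2 churn; when v2 eventually settles, it can be promoted to the new long-supported contract.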