Need help with microservices?

These days I’m focused on microservices.

Contact me if you want to learn more or need help with developing microservice-based applications.

Here are some of my latest articles and presentations:

Posted in microservices

Does each microservice really need its own database?

The short answer is yes. However, before you start hyperventilating about the cost of all those extra Oracle licenses, let’s first explore why it is essential to do this and then discuss what is meant by the term ‘database’.

The main benefit of the microservice architecture is that it dramatically improves agility and velocity. That’s because when you correctly decompose a system into microservices, you can develop and deploy each microservice independently and in parallel with the other services. Independent development requires that the microservices be loosely coupled: each microservice’s persistent data must be private to that service and accessible only via its API. If two or more microservices shared persistent data, you would need to carefully coordinate changes to the data’s schema, which would slow down development.

There are a few different ways to keep a service’s persistent data private. You do not need to provision a database server for each service. For example, if you are using a relational database then the options are:

  • Private-tables-per-service – each service owns a set of tables that must only be accessed by that service
  • Schema-per-service – each service has a database schema that’s private to that service
  • Database-server-per-service – each service has its own database server.

Private-tables-per-service and schema-per-service have the lowest overhead. Using a schema per service is appealing since it makes ownership clearer. For some applications, it might make sense for database-intensive services to have their own database server.
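
To make the options concrete, here is a minimal sketch in Python, with SQLite files standing in for real database servers. The `OrderService` and `CustomerService` names and schemas are purely illustrative: each service owns its persistent data and exposes it only through its own API.

```python
import sqlite3

# Hypothetical sketch of database-per-service: each service owns a private
# database (here, a SQLite file) and other services never touch its tables.
# In a real system the barrier would be enforced with separate credentials.

class OrderService:
    """Owns its own database; data is reachable only via these methods."""
    def __init__(self, db_path="orders.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, total REAL)")

    def create_order(self, total):
        cur = self.conn.execute("INSERT INTO orders (total) VALUES (?)", (total,))
        self.conn.commit()
        return cur.lastrowid

    def get_order_total(self, order_id):
        row = self.conn.execute(
            "SELECT total FROM orders WHERE id = ?", (order_id,)).fetchone()
        return row[0] if row else None

class CustomerService:
    """Owns a separate database and can evolve its schema independently."""
    def __init__(self, db_path="customers.db"):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS customers (id INTEGER PRIMARY KEY, name TEXT)")

    def create_customer(self, name):
        cur = self.conn.execute("INSERT INTO customers (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get_customer_name(self, customer_id):
        row = self.conn.execute(
            "SELECT name FROM customers WHERE id = ?", (customer_id,)).fetchone()
        return row[0] if row else None
```

Because each service has its own connection and schema, changing the orders table never forces a coordinated change to the customers table.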

It is a good idea to create barriers that enforce this modularity. You could, for example, assign a different database user id to each service and use a database access control mechanism such as grants. Without some kind of barrier to enforce encapsulation, developers will always be tempted to bypass a service’s API and access its data directly.

It might also make sense to have a polyglot persistence architecture. For each service you choose the type of database that is best suited to that service’s requirements. For example, a service that does text searches could use ElasticSearch. A service that manipulates a social graph could use Neo4j. It might not make sense to use a relational database for every service.

There are some downsides to keeping a service’s persistent data private. Most notably, it can be challenging to implement business transactions that update data owned by multiple services. Rather than using distributed transactions, you typically must use an eventually consistent, event-driven approach to maintain database consistency.
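
The event-driven approach can be sketched as follows. This is a hypothetical order/credit example, with an in-memory event bus standing in for a message broker (a real broker delivers events asynchronously, so consistency is only eventual): each service updates its own data in a local transaction and publishes an event, and the other service reacts.

```python
# Hypothetical sketch: no distributed transaction spans the two services.
# Each service changes only its own state and publishes an event.

class EventBus:
    """Toy synchronous stand-in for a message broker such as RabbitMQ."""
    def __init__(self):
        self.handlers = {}

    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers.get(event_type, []):
            handler(payload)

class OrderService:
    def __init__(self, bus):
        self.orders = {}
        self.bus = bus
        bus.subscribe("CreditReserved", self.on_credit_reserved)

    def place_order(self, order_id, customer_id, amount):
        # Local transaction: record the order as pending, then publish.
        self.orders[order_id] = {"customer": customer_id, "amount": amount,
                                 "state": "PENDING"}
        self.bus.publish("OrderCreated", {"order_id": order_id,
                                          "customer_id": customer_id,
                                          "amount": amount})

    def on_credit_reserved(self, event):
        self.orders[event["order_id"]]["state"] = "APPROVED"

class CustomerService:
    def __init__(self, bus):
        self.credit = {}
        self.bus = bus
        bus.subscribe("OrderCreated", self.on_order_created)

    def create_customer(self, customer_id, credit_limit):
        self.credit[customer_id] = credit_limit

    def on_order_created(self, event):
        # Local transaction in this service's own database, then an event.
        self.credit[event["customer_id"]] -= event["amount"]
        self.bus.publish("CreditReserved", {"order_id": event["order_id"]})
```

Between the two events an observer can see the order in the PENDING state, which is exactly the eventual-consistency trade-off described above.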

Another problem is that it is difficult to implement some queries because you can’t do database joins across the data owned by multiple services. Sometimes, you can join the data within a service. In other situations, you will need to use Command Query Responsibility Segregation (CQRS) and maintain denormalized views.
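
As a rough sketch of the CQRS side (hypothetical event and field names), a view component can subscribe to events from the services that own the data and maintain a denormalized, query-ready view, so no cross-service join is ever needed at query time:

```python
class OrderHistoryView:
    """Hypothetical CQRS view: pre-joins customer and order data from events,
    because a SQL join across two services' private databases is impossible."""
    def __init__(self):
        self.customer_names = {}
        self.rows = []  # denormalized: one row per order, with customer name

    def on_customer_created(self, event):
        self.customer_names[event["customer_id"]] = event["name"]

    def on_order_created(self, event):
        self.rows.append({"order_id": event["order_id"],
                          "customer_name": self.customer_names[event["customer_id"]],
                          "total": event["total"]})

    def orders_for(self, customer_name):
        # The "join" already happened when the events were applied.
        return [r for r in self.rows if r["customer_name"] == customer_name]
```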

Another challenge is that services sometimes need to share data. For example, let’s imagine that several services need access to user profile data. One option is to encapsulate the user profile data with a service that’s then called by other services. Another option is to use an event-driven mechanism to replicate data to each service that needs it.

In summary, it is important that each service’s persistent data is private. There are, however, a few different ways to accomplish this, such as a schema per service. Some applications benefit from a polyglot persistence architecture that uses a mixture of database types. A downside of not sharing databases is that maintaining data consistency and implementing queries become more challenging.

Posted in architecture, microservices

Monstrous monoliths – how bad can it get?

I recently posted a survey on twitter asking folks to describe their monolithic application:

So far I’ve had around 12 responses. As of 5/13/15, here are the “winning” answers to each question:

  • How many lines of code? 2.2M
  • How many jar files? 418
  • How long to start up? 12 minutes
  • How large is the WAR? 373 MB

The ages of the applications ranged from 2 to 13 years.

Do you have some winning metrics about your monstrous monolith you would like to share? If so, fill in this form.

Posted in microservices

Need to install MongoDB, RabbitMQ, or MySQL? Use Docker to simplify dev and test

Almost every interesting application uses at least one infrastructure service such as a database or a message broker. For example, if you tried to build and/or run the Spring Boot-based user registration service you would have discovered that it needs both MongoDB and RabbitMQ.

One option, of course, is to install both of those services on your machine. Unfortunately, installing a service is not always easy. Also, different projects might need different incompatible versions. Moreover, I’m not a fan of cluttering my machine with random services. Fortunately, there is a great way to solve this problem: Docker. You install Docker and use it to run the services that you need as containers.

Docker only runs directly on Linux, so if you are using Mac OS X or Windows the first step is to install Boot2Docker (Mac OS X, Windows). Boot2Docker installs the Docker command line locally but runs the Docker daemon in a VirtualBox VM. As a result, the user experience is very similar to running Docker directly on Linux.

The following command runs a VM containing the Docker daemon:

$ boot2docker up

You then need to set some environment variables in order for the Docker command line to know the host and port of the Docker daemon.

$ boot2docker shellinit
Writing /Users/c/.boot2docker/certs/boot2docker-vm/ca.pem
Writing /Users/c/.boot2docker/certs/boot2docker-vm/cert.pem
Writing /Users/c/.boot2docker/certs/boot2docker-vm/key.pem
    export DOCKER_HOST=tcp://
    export DOCKER_CERT_PATH=/Users/c/.boot2docker/certs/boot2docker-vm
    export DOCKER_TLS_VERIFY=1

You can set those variables by simply running this command:

$ eval "$(boot2docker shellinit)"

Once you have done that you are ready to start using Docker from your Mac or Windows machine. For example, docker ps shows the running Docker containers.

$ docker ps
CONTAINER ID        IMAGE                             COMMAND                CREATED             STATUS              PORTS                                                                                        NAMES
6fcf0779a5e1        dockerfile/mongodb:latest         "mongod"               8 days ago          Up 8 days           28017/tcp,>27017/tcp                                                          mongodb                 

So what about running MongoDB and RabbitMQ? One of the great features of the Docker ecosystem is Docker Hub, which is a website where the community shares Docker images. In particular, you can find images for infrastructure services such as RabbitMQ and MongoDB. You can, for example, run MongoDB with the following command:

docker run -d -p 27017:27017 --name mongodb dockerfile/mongodb

This command will, if necessary, download the dockerfile/mongodb image from Docker Hub and launch the container running MongoDB listening on port 27017. Similarly, you can run RabbitMQ using this command:

docker run -d -p 5672:5672 -p 15672:15672 --name rabbitmq dockerfile/rabbitmq

These containers are, of course, running in the VirtualBox VM so you need to connect to them using the appropriate IP address. Fortunately, boot2docker makes that easy to configure:

export DOCKER_HOST_IP=$(boot2docker ip)
export SPRING_DATA_MONGODB_URI=mongodb://${DOCKER_HOST_IP}/userregistration

The “boot2docker ip” command outputs the IP address that you use to connect to the containers.

Now that you have set up MongoDB and RabbitMQ, you can run the User Registration Service:

docker run -d -p 8080:8080 -e SPRING_DATA_MONGODB_URI -e SPRING_RABBITMQ_HOST --name sb_rest_svc

Posted in docker, microservices, mongodb, rabbitmq

The world is an amazing place – the photos on my business cards

I like to order business cards from since they let you put photos on the back.

Here are the photos from the January 2015 order.

Posted in photos, travel

Example application for my #qconsf presentation about building microservices with event sourcing and CQRS

My presentation Building microservices with event sourcing and CQRS describes the challenges of functionally decomposing a domain model in a microservice-based application. It shows that a great way to solve those problems is with an event-driven architecture that is based on event sourcing and CQRS.

The presentation uses a simple banking application, which transfers money between accounts, as an example. The application has the architecture shown in the following diagram:

Banking application architecture

The application consists of four modules that communicate only via events and can be deployed either together in a monolithic application or separately as microservices.

The four modules are as follows:

  • Accounts – the business logic for Accounts
  • Money Transfers – the business logic for Money Transfers
  • Account View Updater – updates a denormalized view of Accounts and Money Transfers in MongoDB
  • Account View Reader – queries the denormalized view
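
The module interaction can be sketched roughly as follows. This Python sketch is not taken from the example application; the event names are modeled on the money-transfer flow, and a toy in-memory class stands in for the embedded event store. The key idea is that modules never call each other: they only append events and react to events.

```python
class EventStore:
    """Toy in-memory stand-in for the application's embedded event store."""
    def __init__(self):
        self.events = []
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def append(self, event):
        self.events.append(event)
        for handler in self.subscribers:
            handler(event)  # synchronous delivery; a real store is async

class Accounts:
    """Maintains balances purely by applying events."""
    def __init__(self, store):
        self.balances = {}
        self.store = store
        store.subscribe(self.handle)

    def open(self, account_id, balance):
        self.store.append(("AccountOpened", account_id, balance))

    def handle(self, event):
        if event[0] == "AccountOpened":
            _, account_id, balance = event
            self.balances[account_id] = balance
        elif event[0] == "MoneyTransferRequested":
            _, transfer_id, from_id, to_id, amount = event
            self.balances[from_id] -= amount
            self.balances[to_id] += amount
            self.store.append(("MoneyTransferCompleted", transfer_id))

class MoneyTransfers:
    """Tracks the state of each transfer from the events it observes."""
    def __init__(self, store):
        self.transfers = {}
        self.store = store
        store.subscribe(self.handle)

    def request(self, transfer_id, from_id, to_id, amount):
        self.store.append(
            ("MoneyTransferRequested", transfer_id, from_id, to_id, amount))

    def handle(self, event):
        if event[0] == "MoneyTransferRequested":
            # setdefault so a Requested event never overwrites COMPLETED,
            # regardless of the order in which modules see events
            self.transfers.setdefault(event[1], "IN_PROGRESS")
        elif event[0] == "MoneyTransferCompleted":
            self.transfers[event[1]] = "COMPLETED"
```

Because both modules see the same event stream, they could run in one monolithic process or as separate services without changing this logic, which is the deployment flexibility described above.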

You can find the source code and documentation for the example application in this GitHub repository. There are currently two versions of the application: Java+Spring Boot and Scala+Spring Boot. I plan to publish a Scala/Play version shortly. The application uses a simple, embedded event store.

Watch the presentation (slides+video), and take a look at the source code and documentation. It would be great to get feedback on the programming model.

Posted in cqrs, event-driven architecture, microservices

Microservices are to services, as organic food is to food

On Thanksgiving morning, I listened to a fascinating interview with Jacques Pepin, the celebrated French chef and PBS personality. Part of the interview touched on the topic of organic food. He joked that his parents were organic farmers even though at the time they were simply called farmers. That’s because the term ‘organic food’ didn’t exist until 1939. Before then, all farming was organic. It wasn’t until the rise of industrialized agriculture, which uses pesticides, synthetic fertilizers, etc., that we needed a term to describe food grown without pesticides.

In many ways the organic food vs. food distinction mirrors the difference between microservices and service-oriented architecture (SOA). The term SOA has very negative connotations because it is strongly associated with commercialization and the baggage of WS-* and ESBs. Consequently, in the same way that we need to distinguish between organic food and food, we need to distinguish between microservices and SOA. It’s a shame that we need to do this because the word ‘microservices’ over-emphasizes service size.

Posted in microservices