Building microservices with Spring Boot – part 1

This article introduces the concept of a microservice architecture and the motivations for using this architectural approach. It then shows how Spring Boot, a relatively new project in the Spring ecosystem, can be used to significantly simplify the development and deployment of a microservice. You can find the example code on GitHub.

What are microservices?

Since the earliest days of Enterprise Java, the most common way of deploying an application has been to package all the application’s server-side components as a single war or ear file. This so-called monolithic architecture has a number of benefits. Monolithic applications are simple to develop since IDEs and other tools are oriented around developing a single application. They are also simple to deploy since you just have to deploy the one war/ear file on the appropriate container.

However, the monolithic approach becomes unwieldy for complex applications. A large monolithic application can be difficult for developers to understand and maintain. It is also an obstacle to frequent deployments. To deploy changes to one application component you have to build and deploy the entire monolith, which can be complex, risky, and time-consuming, can require the coordination of many developers, and can result in long test cycles. A monolithic architecture can also make it difficult to trial and adopt new technologies, and so you are often stuck with the technology choices that you made at the start of the project.

To avoid these problems, a growing number of organizations are using a microservice architecture. The application is functionally decomposed into a set of services. Each service has a narrow, focused set of responsibilities, and is, in some cases, quite small. For example, an application might consist of services such as an order management service, a customer management service, and so on.

Microservices have a number of benefits and drawbacks. A key benefit is that services are developed and deployed independently of one another. Another key benefit is that different services can use different technologies. Moreover, since each service is typically quite small, it's practical to rewrite it using a different technology. As a result, microservices make it easier to trial and adopt new, emerging technologies. One major drawback of microservices is the additional complexity of developing and deploying a distributed system. For most large applications, however, the benefits outweigh the drawbacks.

You can learn more about microservices by visiting microservices.io.

Developing with microservices

Let’s imagine that you are implementing user registration as part of an application that has a microservice architecture. Users register by entering their email address and a password. The system then initiates an account creation workflow that includes creating the account in the database and sending an email to confirm their address. We could deploy the user registration components (controllers, services, repositories, etc.) as part of some other service. However, user registration is a sufficiently isolated chunk of functionality, and so it makes sense to deploy it as a standalone microservice. In a later blog post, we will look at the web UI part of user registration, but for now we will focus on the backend service. The following diagram shows the user registration service and how it fits into the overall system architecture.

The backend user registration service exposes a single RESTful endpoint for registering users. A registration request contains the user’s email address and password. The service verifies that a user with that email address has not previously registered and publishes a message notifying the rest of the system that a new user has registered. The notification is consumed by various other services including the user management service, which maintains user accounts, and the email service, which sends a confirmation email to the user.

It’s quite straightforward to implement the user registration backend using various projects in the Spring ecosystem. Here is the Spring framework controller, which is written in Scala, that implements the RESTful endpoint.

@RestController
class UserRegistrationController @Autowired()(…) {

@RequestMapping(value = Array("/user"),
             method = Array(RequestMethod.POST))
def registerUser(@RequestBody request: RegistrationRequest) = {

  val registeredUser =
    new RegisteredUser(null,
        request.emailAddress, request.password)

  registeredUserRepository.save(registeredUser)

  rabbitTemplate.convertAndSend(exchangeName, routingKey,
           NewRegistrationNotification(registeredUser.id,
                          request.emailAddress, request.password))
  RegistrationResponse(registeredUser.id, request.emailAddress)
}

@ResponseStatus(value = HttpStatus.CONFLICT,
        reason = "duplicate email address")
@ExceptionHandler(Array(classOf[DuplicateKeyException]))
def duplicateEmailAddress() {}
}

The @RestController annotation specifies that Spring MVC should assume that controller methods have an @ResponseBody annotation by default.
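For comparison, here is a sketch of what the equivalent declaration looks like without @RestController: @Controller on the class, plus @ResponseBody on each handler method (body elided).

@Controller
class UserRegistrationController @Autowired()(…) {

  @RequestMapping(value = Array("/user"),
               method = Array(RequestMethod.POST))
  @ResponseBody
  def registerUser(@RequestBody request: RegistrationRequest) = {
    // ... same body as above ...
  }
}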

The registerUser() method records the registration in a database and then publishes a notification announcing that a user has registered. It calls the RegisteredUserRepository.save() method to persist a new registered user. Here is the RegisteredUserRepository, which provides access to the database of user registrations.

trait RegisteredUserRepository extends MongoRepository[RegisteredUser, String]

case class RegisteredUser(
    id : String,
    @(Indexed@field)(unique = true) emailAddress : String,
    password : String)

Notice that we do not need to supply an implementation of this interface. Instead, Spring Data for Mongo creates one automatically. Moreover, Spring Data for Mongo notices the @Indexed annotation on the emailAddress parameter and creates a unique index. If save() is called with an email address that already exists, it throws a DuplicateKeyException, which is translated by the duplicateEmailAddress() exception handler into an HTTP response with a status code of 409 (Conflict).
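To illustrate, assuming the service is running locally on port 8080 (the email address and password below are made-up example values), a registration request looks something like this:

$ curl -i -H "Content-Type: application/json" \
    -d '{"emailAddress": "alice@example.com", "password": "secret"}' \
    http://localhost:8080/user

The first such request succeeds and returns the RegistrationResponse serialized as JSON; repeating it with the same email address violates the unique index and returns a 409 (Conflict) via the duplicateEmailAddress() handler.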

The UserRegistrationController also uses Spring AMQP to notify the rest of the application that a user has registered:

class UserRegistrationController @Autowired()(…) {
…
rabbitTemplate.convertAndSend(exchangeName, routingKey,
     NewRegistrationNotification(registeredUser.id,
                request.emailAddress, request.password))
…
}

case class NewRegistrationNotification(
  id: String, emailAddress: String, password: String)

The convertAndSend() method converts the NewRegistrationNotification to JSON and sends a message to the user-registrations exchange.
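Although the consuming services are outside the scope of this post, here is a minimal sketch, using standard Spring AMQP, of how a downstream service such as the email service might receive these notifications. The queue name and handler class are hypothetical, and the JSON message converter is assumed to be configured like the registration service’s:

import org.springframework.amqp.core.{BindingBuilder, Queue, TopicExchange}
import org.springframework.amqp.rabbit.connection.ConnectionFactory
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer
import org.springframework.amqp.rabbit.listener.adapter.MessageListenerAdapter
import org.springframework.amqp.support.converter.Jackson2JsonMessageConverter
import org.springframework.context.annotation.{Bean, Configuration}

@Configuration
class EmailServiceMessagingConfiguration {

  // Declare a queue for this service and bind it to the user-registrations
  // topic exchange so that it receives every notification
  @Bean
  def newRegistrationsQueue() = new Queue("email-service-new-registrations")

  @Bean
  def newRegistrationsBinding() =
    BindingBuilder.bind(newRegistrationsQueue())
      .to(new TopicExchange("user-registrations"))
      .`with`("#")

  // The listener container deserializes each message and invokes the
  // handler's handleMessage() method
  @Bean
  def listenerContainer(connectionFactory: ConnectionFactory,
                        jsonConverter: Jackson2JsonMessageConverter) = {
    val container = new SimpleMessageListenerContainer(connectionFactory)
    container.setQueueNames("email-service-new-registrations")
    container.setMessageListener(
      new MessageListenerAdapter(new NewRegistrationHandler, jsonConverter))
    container
  }
}

class NewRegistrationHandler {
  def handleMessage(notification: NewRegistrationNotification): Unit = {
    // send the confirmation email here
  }
}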

So far, so good! With just a few lines of code we have implemented the desired functionality. But in order to have a complete deployable application there are a few more things we need to take care of.

  • Configure Spring dependency injection to instantiate and assemble these components, along with the needed infrastructure components (RabbitTemplate, MongoTemplate, etc., and their dependencies), into an application.
  • Externalize the message broker and MongoDB connection configuration so that we can build the war file once and run it in different environments: e.g. CI, QA, staging, and production.
  • Configure logging.
  • Decide how we are going to package and deploy the application.

And, oh yes, we had better write some tests.

Towards a deployable application

The Spring framework provides three main ways of configuring dependency injection: XML, annotations, and Java-based configuration. My preferred approach is to use a combination of annotations and Java-based configuration, and I avoid XML-based configuration unless it is absolutely necessary.

We could just launch an IDE, annotate the classes, and write the Java configuration classes, and before long we would have a correctly configured application. The trouble with this old-style approach of manually crafting each application’s configuration is that we regularly create new microservices. It would become quite tedious to create very similar configurations over and over again, even if we did just copy and paste from one service to another.

Similarly, to deploy the service, we could install and configure Tomcat or Jetty to run this service. But once again, in the course of building many microservices, this is something we would have to do repeatedly. There needs to be a better way of dealing with both application and web container configuration that avoids all this duplication. We need an approach that lets us focus on getting things done for both web and non-web (e.g. message-based) applications.

About Spring Boot

One technology that lets you focus on getting things done is one of the newer members of the Spring ecosystem: the Spring Boot project. This project has two main benefits. The first is that Spring Boot dramatically simplifies application configuration by taking Convention over Configuration (CoC) in Spring applications to a whole new level. Spring Boot has a feature called auto-configuration that intelligently provides a set of default behaviors driven by which jars are on the classpath. For example, if you include database jars on the classpath then Spring Boot will define DataSource and JdbcTemplate beans unless you have already defined them (see the sketch below). As a result, it’s remarkably easy to get a new microservice up and running with little or no configuration while preserving the ability to customize your application.
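For instance, here is a minimal sketch (not part of the user registration service) of how auto-configuration backs off: because this configuration defines its own DataSource bean, Spring Boot will not create its default one. The connection settings are made up:

import javax.sql.DataSource
import org.springframework.context.annotation.{Bean, Configuration}

@Configuration
class CustomDataSourceConfiguration {

  // Defining this bean causes Spring Boot to skip its auto-configured DataSource
  @Bean
  def dataSource(): DataSource = {
    val ds = new org.apache.tomcat.jdbc.pool.DataSource()
    ds.setDriverClassName("com.mysql.jdbc.Driver")
    ds.setUrl("jdbc:mysql://localhost/mydb")
    ds.setUsername("myuser")
    ds.setPassword("mypassword")
    ds
  }
}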

The second benefit of Spring Boot is that it simplifies deployment by letting you package your application as an executable jar containing a pre-configured embedded web container (Tomcat or Jetty). This eliminates the need to install and configure Tomcat or Jetty on your servers. Instead, to run your microservice you simply need to have Java installed. Moreover, the executable jar format provides a uniform and self-contained way of packaging and running JVM applications regardless of type, which simplifies operations. If necessary, you can, however, configure Spring Boot to build a war file instead, as sketched below.
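Here is a minimal sketch of the war-file option, assuming the Boot 1.x servlet support classes (verify the package name against the version you use); you would also apply the ‘war’ plugin in build.gradle. The initializer class and the configuration class it references are hypothetical:

import org.springframework.boot.builder.SpringApplicationBuilder
import org.springframework.boot.context.web.SpringBootServletInitializer

// When deployed to a standalone servlet container, this initializer
// bootstraps the application in place of the main() method
class MyServletInitializer extends SpringBootServletInitializer {

  override def configure(builder: SpringApplicationBuilder): SpringApplicationBuilder =
    builder.sources(classOf[MyApplicationConfiguration])
}

Let’s illustrate these features by developing a Spring Boot version of the user registration microservice.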

Using Spring Boot to implement user registration

The Spring Boot part of the application consists of four pieces: a build.gradle file (or Maven pom.xml), one or more Java Configuration classes, a configuration properties file that defines connection settings for the message broker and Mongo database, and a class with a main() method. Let’s look at each one in turn.

build.gradle

The build.gradle file configures the Spring Boot build plugin, which creates the executable jar file. The build.gradle file also declares dependencies on Spring Boot artifacts. Here is the file.

buildscript {
  repositories {
    maven { url "http://repo.spring.io/libs-snapshot" }
    mavenCentral()
  }
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:1.0.0.RC5")
  }
}

apply plugin: 'scala'
apply plugin: 'spring-boot'

dependencies {
  compile "org.scala-lang:scala-library:2.10.2"
  compile 'com.fasterxml.jackson.module:jackson-module-scala_2.10:2.3.1'

  compile "org.springframework.boot:spring-boot-starter-web"
  compile "org.springframework.boot:spring-boot-starter-data-mongodb"
  compile "org.springframework.boot:spring-boot-starter-amqp"

  testCompile "org.springframework.boot:spring-boot-starter-test"
  testCompile "org.scalatest:scalatest_2.10:2.0"
}

repositories {
  mavenCentral()
  maven { url 'http://repo.spring.io/milestone' }
}

The Spring Boot build plugin builds and configures the executable jar file to execute the main() method defined in the project.

What’s particularly interesting about build.gradle is that it defines dependencies on Spring Boot starter artifacts. Starter artifacts (a.k.a. starters) use the naming convention spring-boot-starter-X, where X is the type of application that you are building. By depending on a starter you get a consistent set of dependencies for building applications of type X, along with the appropriate auto-configuration behavior.

Since this service is a web application that uses MongoDB and AMQP, it defines the dependencies on the following starters:

  • spring-boot-starter-web – includes the jars required by a web application, such as Tomcat and Spring MVC
  • spring-boot-starter-data-mongodb – includes the jars required by a MongoDB application, including the MongoDB driver and Spring Data for Mongo
  • spring-boot-starter-amqp – includes the jars required by an AMQP application, including Spring Rabbit

All of these starters also depend on spring-boot-starter, which provides auto-configuration, logging, and YAML configuration file support.

Java configuration class(es)

A typical Spring Boot application needs at least one Spring bean annotated with @EnableAutoConfiguration, which enables auto-configuration. For example, the Spring Boot Hello World application consists of a single class that’s annotated with both @Controller and @EnableAutoConfiguration.
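Here is a sketch of that single-class style (the official Hello World example is written in Java; this is a Scala rendering of it):

@Controller
@EnableAutoConfiguration
class SampleController {

  // @ResponseBody is needed here because this class uses @Controller
  // rather than @RestController
  @RequestMapping(Array("/"))
  @ResponseBody
  def home() = "Hello World!"
}

object SampleController {
  def main(args: Array[String]): Unit =
    SpringApplication.run(classOf[SampleController], args: _*)
}

Since the user registration service is more complex, it has a separate Java Configuration class.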

@Configuration
@EnableAutoConfiguration
@ComponentScan
class UserRegistrationConfiguration {

  import MessagingNames._

  @Bean
  @Primary
  def scalaObjectMapper() = new ScalaObjectMapper

  @Bean
  def rabbitTemplate(connectionFactory : ConnectionFactory) = {
    val template = new RabbitTemplate(connectionFactory)
    val jsonConverter = new Jackson2JsonMessageConverter
    jsonConverter.setJsonObjectMapper(scalaObjectMapper())
    template.setMessageConverter(jsonConverter)
    template
  }

  @Bean
  def userRegistrationsExchange() = new TopicExchange("user-registrations")

}

The UserRegistrationConfiguration class has three annotations: @Configuration, which identifies the class as a Java Configuration class, @EnableAutoConfiguration, which was discussed above, along with @ComponentScan, which enables component scanning for the controller.

The UserRegistrationConfiguration class defines three custom beans:

  • scalaObjectMapper – a Jackson JSON ObjectMapper that registers the DefaultScalaModule, which provides support for Scala objects. The ObjectMapper is used by the RabbitTemplate for serializing outgoing messages and by Spring MVC for request/response serialization/deserialization
  • rabbitTemplate – configures a RabbitTemplate that uses the ScalaObjectMapper so that NewRegistrationNotification messages are sent in JSON format
  • userRegistrationsExchange – ensures, via RabbitAdmin, that there is an AMQP Topic Exchange called user-registrations

There is remarkably little configuration for this kind of application. That’s because Spring Boot’s auto-configuration creates several beans for you:

  • Spring MVC – Dispatcher servlet and the HttpMessageConverters that are configured to use Jackson JSON and the ScalaObjectMapper
  • AMQP – RabbitAdmin and ConnectionFactory
  • Mongo – Mongo driver and MongoTemplate

UserRegistrationMain

This class defines the main() method that runs the application. It’s a one-liner that calls the SpringApplication.run() method, passing in the configuration class and the args parameter of main().

object UserRegistrationMain {

  def main(args: Array[String]) : Unit =
    SpringApplication.run(classOf[UserRegistrationConfiguration], args: _*)

}

The SpringApplication class is provided by Spring Boot. Its run() method creates and starts the web container that runs the application.

application.properties

This file contains property settings that define how the application connects to the RabbitMQ server and the MongoDB database. It currently defines one property:

spring.data.mongodb.uri=mongodb://localhost/userregistration

This property specifies that the application should connect to the Mongo host running locally on the default port and use the userregistration database rather than the default test database.

This default configuration can be overridden in a couple of different ways. One option is to specify property values on the command line when running the application. The other option is to supply additional application.properties files, which override all or some of the properties. This is done either by using system properties or by putting the files in the current directory or on the classpath. See the documentation for the exact details of how Spring Boot locates properties files.
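For example, a production environment might drop an application.properties file like this one into the directory from which the service is launched (the host names are made up):

spring.data.mongodb.uri=mongodb://productionMongo/userregistration
spring.rabbitmq.host=productionRabbit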

Putting it all together

With these two files and two classes, we can now build the application. Running ./gradlew build compiles the application, builds the executable jar and runs the tests. You can then execute the jar file to start the application:

$ java -jar build/libs/spring-boot-restful-service.jar
…
2014-03-28 09:20:13.423 INFO 57472 --- [ main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 8080/http
2014-03-28 09:20:13.426 INFO 57472 --- [ main] n.c.m.r.main.UserRegistrationMain$ : Started UserRegistrationMain. in 5.44 seconds (JVM running for 6.893)

Once built, this jar can move through the deployment pipeline to production. You can, for example, change the MongoDB connection URL by specifying the property on the command line:

$ java -jar build/libs/spring-boot-restful-service.jar \
--spring.data.mongodb.uri=mongodb://productionMongo/userregistration

Quite remarkable, given how little effort was required! Don’t forget to look at the code on GitHub.

Summary

As you can see, Spring Boot lets you focus on developing your microservices. It dramatically reduces the amount of application and server configuration that you would normally need to write. Furthermore, it’s extremely easy to build an executable jar file that can be run on any machine with Java installed – no need to install and configure an application server. In later posts, we will look at other aspects of developing microservices with Spring Boot, including web application development and automated testing, as well as how Spring Boot simplifies monitoring and management.


Thoughts about #microservices – less micro, more service?

I’ve been giving talks on what are now called microservices for the past two years. The big idea is that in some situations, rather than building a monolithic application (e.g. application = one huge WAR file), you should apply the Scale Cube (specifically y-axis splits, a.k.a. functional decomposition) and design your application as a collection of independently deployable services.

I have often introduced the idea as “SOA light”, since you are building a service-oriented architecture. The trouble with the term SOA, however, is that it is associated with a lot of baggage: SOAP, ESBs, heavyweight ceremony, etc. Instead, I’ve talked about “decomposing the monolith” or “decomposing the WAR”. I’ve also used the term “modular, polyglot architecture”, but that’s a bit of a mouthful.

At Oredev 2012, I encountered Fred George, who was giving a talk on what he called microservices, a variant of what I had been talking about. What was especially intriguing about his approach was how it pushed things to the extreme, with a very heavy emphasis on very small services. The term microservices, along with the idea of tiny 100 LOC services, has gotten a fair amount of buzz recently. Martin Fowler blogged about microservices and there has been some discussion on Twitter.

On the one hand, I like the term microservices. It’s short and catchy. The problem, however, is that IMHO it places excessive emphasis on smallness. After all, as I described at the start of this post, the big idea is to break up an otherwise monolithic application into smaller, more manageable services by applying functional decomposition. Some of those services can be just a few lines of code. For example, in one of my sample applications I have a 12-line Sinatra service that responds to SMS messages. While some services can be this small, others will need to be a lot larger. And that’s just fine. It all depends. Partitioning an application is a tricky design problem that has to balance a large number of constraints. To paraphrase Einstein, “Services should be made as small as possible, but no smaller.”


NodeJS, Futures, and Rx Observables at DevNexus 2014

Last week I gave a couple of talks related to microservice architecture at DevNexus 2014. The first talk was NodeJS: the good parts? A skeptic’s view (slides), which describes the pros and cons of JavaScript and NodeJS, and how NodeJS is useful for building smallish I/O-intensive services such as API gateways and web applications.

The second talk was Futures and Rx Observables: powerful abstractions for consuming web services asynchronously (slides). This talk first describes how Scala Futures (and the forthcoming JDK 8 CompletableFutures) can be used to greatly simplify API gateway code that needs to call multiple backend services concurrently. The talk then describes how Rx Observables (i.e. RxJava) are a more general-purpose concurrency abstraction that can also be used to process asynchronous event streams.
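To give a flavor of the Futures part, here is an illustrative sketch (not code from the talk; the service clients and result types are hypothetical) of an API gateway handler that calls two backend services concurrently:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// Both calls start immediately and run concurrently; the for-comprehension
// merely combines their results once both futures complete
def userProfile(userId: String): Future[UserProfileView] = {
  val userFuture   = userService.getUser(userId)
  val ordersFuture = orderService.getRecentOrders(userId)
  for {
    user   <- userFuture
    orders <- ordersFuture
  } yield UserProfileView(user, orders)
}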


Oredev 2013: Goose blood soup and NodeJS the good parts.

A couple of weeks ago I gave my NodeJS: the good parts? A skeptic’s view talk at Oredev 2013. Check out the slides and the video of the presentation.

This was my second time at Oredev, which takes place in Malmö, Sweden. It’s an exceptionally good conference: lots of great content and fun speaker activities. After a long flight from SFO, I wasn’t up for jumping naked into the Baltic with a lot of strangers, but I enjoyed the city tour (pictures below) and the speakers’ dinner in the city hall. I also had an excellent dinner, which included spicy goose blood soup (it tasted like gingerbread), with Cecilia, Klara and Marcus.

Interesting sessions at the conference included:

There was also the thought-provoking (unlike any other) keynote by Anna Beatrice Scott.

There were a lot more sessions that I wanted to listen to, but after the first day I had to fly to Casablanca, Morocco for JMagreb 2013.

[Photos from the Malmö city tour]


NodeJS: the good parts? A skeptic’s view at #javaconf

I recently gave a talk about NodeJS at the JAX conference in Santa Clara. Here are the abstract and slides.

JavaScript used to be confined to the browser. But these days, it’s becoming increasingly popular in server-side applications in the form of Node.js. Node.js provides an event-driven, non-blocking I/O model that supposedly makes it easy to build scalable network applications. In this talk you will learn about the consequences of combining the event-driven programming model with a prototype-based, weakly typed, dynamic language. I will share my perspective as a server-side Java developer who wasn’t entirely happy about JavaScript in the browser, let alone on the server. You will learn how to use Node.js effectively in modern, polyglot applications.

And here is the video.

I also gave my Developing with cloud services talk.

I also enjoyed John Kodumal’s Scala Typeclassopedia talk on Scala type classes: a great explanation of some concepts that are rarely covered well.


Polyglot persistence talk at #gluecon

Last month, at the excellent GlueCon conference, I gave a 30-minute version of my polyglot persistence talk. Here are the slides.

There is also a longer version of the talk, as well as the source code for the example application.

I listened to a number of great talks at GlueCon. My favorite was Confessions of a tech CEO who still loves to code by Lew Cirne of New Relic. Very inspiring!


Cloudy, hot air likely: thoughts about #deploycon and other cloud events

I spent Tuesday at DeployCon, a conference about enterprise platform services. Some of the sessions were great, most notably Dave McCrory’s Data Gravity talk, Das Kamhout’s talk about Intel’s PaaS journey, and David Mortman’s Cover your PaaS talk about PaaS security. Here are a few tweets from the conference.

Those talks were great, but there were also several panel sessions that were, to put it politely, very unsatisfying. This problem is not specific to DeployCon. Most generic cloud events (those not focused on a particular product or service) that I’ve attended over the past five years have been equally unsatisfying. While the “What is a cloud?” discussion that was prevalent in 2008–2009 has mostly gone away, much of what is said during these panel discussions consists of vague, high-level generalities. Periodically, I wanted to shout: “I don’t know what you are saying.”

So why is this? Recently I’ve been reading the excellent book To Save Everything, Click Here: The Folly of Technological Solutionism. One interesting point in the book is that “the Internet” is a vague, nebulous concept and that when we talk about it rather than the specific inventions, people, and companies that are utilizing the network, “our technological debates will remain lazy, shallow, and unproductive”.

Perhaps the same is true when it comes to “Cloud” and “PaaS”. Those terms are simply too vague and nebulous; if you want to have a meaningful discussion then you need to talk about specific products, people, and companies in that space. So, for example, if you are organizing a Cloud/PaaS conference, have real users talk about their experiences deploying applications with a specific public PaaS, or describe how they built a private PaaS. Have users describe their failures with PaaS. And instead of talking about PaaS and big data, what about their experiences with NoSQL database X on PaaS Y? Instead of lots of handwaving, fill the schedule with one concrete example after another.
