If you are part of an Integration Center of Excellence in your IT organization, I know what goes through your mind every time a business stakeholder walks up and asks the dreaded question: “How long will it take to connect the latest SaaS app we just bought to an on-premises, home-grown system?”
Before you can say anything, the requirements grow bigger with every sentence of the description, and you already know that if your answer is anything longer than a month, the conversation is not going to go far. You know that one of the major reasons integration projects drag on is technology, and I feel your pain because we have been doing this, and living it, for the last 15 years. For the last couple of years, though, we have been exploring something really exciting in the integration architecture world, and we are now at a stage where we would like to share our experience.
Imagine I asked you to build a cloud-based application that takes a document from one location, maps and transforms it to a different data model/format, then drops it off at another location across the Internet over a different protocol. Simple, right? If this were the early '90s, you would probably have built it as one giant C application with function prototypes for the different functionalities. But now we have a fairly mature object-oriented methodology and a number of established languages to choose from, so you pick a language (Java) and build out the application.
Great. Now the application probably looks something like this:
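The shape of such a monolith can be sketched in a few lines of Java. The point is that pickup, transformation, and delivery all live in one codebase and one deployable unit; all class and method names below are hypothetical stand-ins, not part of any real product:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the monolithic pipeline described above:
// fetch a document, transform it, and deliver it, all in one application.
public class DocumentPipeline {

    // Pick up the document from a source location (a local path here;
    // in practice this could be FTP, S3, a message queue, etc.)
    public static String fetch(Path source) throws Exception {
        return Files.readString(source);
    }

    // Map/transform to a different format. A trivial CSV-to-pipe
    // substitution stands in for real mapping logic.
    public static String transform(String payload) {
        return payload.replace(',', '|');
    }

    // Deliver to the target over "another protocol". A local write
    // stands in for an HTTP POST or SFTP upload.
    public static void deliver(String payload, Path target) throws Exception {
        Files.writeString(target, payload);
    }

    public static void main(String[] args) throws Exception {
        Path in = Files.createTempFile("doc", ".csv");
        Path out = Files.createTempFile("doc", ".out");
        Files.writeString(in, "id,name\n1,widget\n");

        // Every step runs inside the same process and the same deployable.
        deliver(transform(fetch(in)), out);
        System.out.println(Files.readString(out)); // prints: id|name / 1|widget
    }
}
```

Nothing wrong with this so far; for three steps, one application is the obvious design.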
Now imagine I ask you to add authentication, encryption, and tokenization to the process, because we are dealing with sensitive information and need audit capabilities for PCI and other compliance requirements. You write another Java object that does this and integrate it with the rest of the application.
Now imagine I ask you to add another piece of functionality: persist the payload (the contents of the document) and associate it with some user-defined metadata. More objects and more dependency wiring to do. I also want you to track what happens at each step of the process until the document is delivered, and I want guaranteed delivery of all documents plus support for multiple protocols.
One last thing – make sure your application scales to millions of such transactions per hour.
By this time you have probably realized that this is a classic integration use case, and if you have been doing this long enough, you can already see the challenges with the approach.
I feel your pain as you try to explain to the business stakeholder why adding integration endpoints takes longer, and costs more, with each iteration. Here are a few of the major technical challenges with this architecture:
- Scalability – You end up doing X-axis scaling (cloning the whole application behind a load balancer), but there are some inherent challenges. The more instances you add behind the load balancer, the better your caching strategy needs to be, and the higher the complexity. Furthermore, if only part of your application needs to scale, you still have to scale the entire application, which can get gnarly and expensive.
- Maintainability – Every time you need to change the application's functionality, including a bug fix, you end up rebuilding the entire application stack. There are some major consequences to this. First, you need a pretty good grasp of the entire application architecture and of the code that others have written or are in the process of writing; if you have dependencies on other objects, you have to manage them. Second, if you or someone else adds something new and it has a bug, it has the potential to bring down your entire application.
- Quality control – QA and deployment get tougher with every release. The entire stack now needs to go through builds and regression testing in every environment before it reaches production.
- Deployments – For the two reasons above, you can forget about frequent deployments. You try to follow agile practices, but release cycles keep getting longer, and you risk not innovating and fixing issues fast enough.
- Web container bloating – You typically start with one WAR, and as you add functionality over time your application gets bulkier and bulkier with third-party libraries. Not only does dependency and version management become a nightmare, but the startup time of the application container takes a massive hit.
- Technology stack lock-in – Finally, with this application architecture you tend to get locked into one technology stack. Every new piece of functionality now has to be supported in that stack.
So what is the alternative? Is there a way to build something lightweight, agile, and fast that can still handle complex use cases?
Imagine if the functionalities of the application were each built as their own independent application, and they all talked to each other through an API layer. The architecture would then look something like this:
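To make that concrete, here is a minimal sketch of one such independent service: the transformation step from earlier, now running as its own process behind its own HTTP API, reachable only through that API. It uses the JDK's built-in `com.sun.net.httpserver` and `java.net.http` packages; the `/transform` endpoint and the service's behavior are hypothetical illustrations, not any specific product's API:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical "transform" microservice: one responsibility, one process,
// addressable only through its HTTP API.
public class TransformService {
    public static void main(String[] args) throws Exception {
        // Bind to an ephemeral port; each capability (transform, encrypt,
        // persist, ...) would run and scale as its own deployable like this.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/transform", exchange -> {
            byte[] body = exchange.getRequestBody().readAllBytes();
            byte[] result = new String(body).replace(',', '|').getBytes();
            exchange.sendResponseHeaders(200, result.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(result);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // Another service (or an orchestrator) reaches it over the API layer:
        HttpResponse<String> resp = HttpClient.newHttpClient().send(
            HttpRequest.newBuilder(URI.create("http://localhost:" + port + "/transform"))
                .POST(HttpRequest.BodyPublishers.ofString("id,name"))
                .build(),
            HttpResponse.BodyHandlers.ofString());
        System.out.println(resp.body()); // prints: id|name
        server.stop(0);
    }
}
```

The caller never links against the transform code; it only knows the endpoint, which is what lets each service be rewritten, redeployed, or scaled on its own.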
Martin Fowler has described the major characteristics of this architecture in his paper on microservices.
More importantly, here are the major advantages over monolithic applications:
- Maintainability: The code is easy for developers to understand and maintain, since each microservice follows the Single Responsibility Principle. In addition, microservices have a smaller footprint and start up faster at the container level than a monolithic application.
- Easy to scale: If the microservices are containerized using something like Docker, they can be scaled independently and on demand.
- Deployment and maintenance: Microservices are easy to deploy to different environments, faster to QA, and even easier to throw away when no longer needed. The added advantage is being able to deploy just one or a few microservices, more often, without affecting the overall application.
- Best-of-breed technology stacks: Since microservices are independent and addressable through APIs, you can choose to write some of them in Python and/or Node instead of building everything in one technology stack. This helps you build and deliver innovation to your end users without being constrained to a single stack.
- Resilience: If a microservice breaks, it doesn't bring down your entire application, whereas in a monolithic application one bug can take down the whole thing.
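The independent-scaling point above can be made concrete with a configuration sketch. Assuming the services are containerized with Docker and orchestrated with Docker Compose (the service names, image names, and replica counts here are hypothetical), only the hot path gets more instances:

```yaml
# Hypothetical Compose file: each functionality is its own container
# image, so each can be versioned and scaled on its own.
services:
  transform:
    image: example/transform-service:1.4.2
    deploy:
      replicas: 5   # only the busiest step gets more instances
  encrypt:
    image: example/encrypt-service:2.0.1
    deploy:
      replicas: 2
  persist:
    image: example/persist-service:1.1.0
    deploy:
      replicas: 1
```

Scaling the transform step no longer means redeploying encryption or persistence; changing one replica count (or running `docker compose up -d --scale transform=5`) touches only that one service.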
At Liaison, we started building our Alloy platform on microservices a couple of years ago and recently added compliance and security to it. Most importantly, we built the platform so that it can be customized for any integration and data management use case that comes our way. Imagine a workflow builder where we could assemble microservices on demand to build an architecture that works for each customer. This would not be tenable on a monolithic cloud-based application, but with microservices it is now attainable.
As a customer of Alloy, here are some of the advantages you gain from this architecture:
- System resilience
- Scalability and responsiveness
- Flexibility and customizability to meet your needs
- Faster deployments and painless additions/changes to your integration architecture
- The ability to use best-of-breed technology stacks for different functionalities
If you are interested in the specifics of the framework, contact us today and we will be happy to schedule a call to discuss how you can benefit from our years of experience building next-generation architecture and apply it to your use cases.