Introduction
The world of microservices is vast. Because of its wide spectrum, the concepts vary a lot too. The same set of tasks can be done with an infinite combination of tech stacks while building microservices. For the platform, we can go completely serverless or run our own servers. For the tech stack, we can choose Java, Scala, Kotlin, Python, or Ruby. While building the app, we can have as many of the 12 factors in place as we like. For the pipeline, we can use Jenkins, CircleCI, Travis, and others. To orchestrate the cloud, we can use Ansible, Chef, cloud providers' own formation templates, and so on. The list is huge, but behind all of this the real question is how effectively we implement the concepts of microservices. At the end of the day, the tech stack doesn't matter to the business and has no value if it is not delivered in the right proportion at the right time.
What we cover here
This document intends to look into various tools and techniques that can be leveraged to migrate to the cloud.
Although these design patterns are written with PCF in mind, they are also valid for any deployment platform of interest.
Points marked with POC Required are the areas where I will try to do a POC and update this blog with new threads.
As per microservice guru Chris Richardson, there are the following patterns:
Patterns
- Decomposition Patterns
- Communication Patterns
- Containerization Patterns
- Transactional Patterns
- Migration Patterns
- Security Patterns
- Observable Patterns
- Deployment Patterns
Decomposition Patterns
In my view, decomposition can be approached in two major ways:
- A complete lift and shift (with the 12 factors partially enabled)
- A strangling, incremental approach
Lift and Shift
Lift and shift can only be done if the app is already isolated and has a db that is not shared across services.
We need to ask these questions while evaluating the app:
- Is this application a Spring Boot app?
- Run cf push in the root directory of the legacy app. Can the errors be converted into story points?
- Can this application be containerized?
Strangling
Strategy 2 can be applied to a large-scale application which can be decomposed using DDD techniques and strangled into microservices progressively.
- Do event storming with the business operators and use SNAP analysis. Come up with the core events under each bounded context.
- Extract the domain aggregate with the fewest transactions.
- Aim for the "read" aspects of the separated domain first and turn them into a microservice. This simplifies things, as we don't have to deal with the state of the system immediately.
So MVP 0 will try to cover the following:
- For the schema, use the strategy described in the db migration patterns.
- For OLTP data, we need to sync the legacy schema to the de-normalized view of the microservice (CQRS). (Use techniques from event sourcing.)
- Expose a single endpoint on the microservice and query the de-normalized view using Java streams (see the sketch after this list).
- Write an adapter at the monolith's ACL which will query the endpoint on the microservice and reconcile with the existing read logic; reconciliation happens on the aggregate id and the monolith's reference entity object.
- Use a DevOps strategy with a canary model and run the existing test cases.
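A minimal sketch of that single read endpoint, assuming a hypothetical DeliveryView entity and a Spring Data JPA repository (DeliveryViewRepository) backing the de-normalized view:

```java
import java.util.List;
import java.util.stream.Collectors;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

// Read-only query side (CQRS): serves the de-normalized view, no writes here.
@RestController
public class DeliveryQueryController {

    private final DeliveryViewRepository repository; // assumed Spring Data JPA repository

    public DeliveryQueryController(DeliveryViewRepository repository) {
        this.repository = repository;
    }

    // The monolith's ACL adapter calls this and reconciles on the aggregate id.
    @GetMapping("/deliveries/{aggregateId}")
    public List<DeliveryView> byAggregate(@PathVariable String aggregateId) {
        return repository.findAll().stream()                          // query the view
                .filter(v -> aggregateId.equals(v.getAggregateId()))  // Java streams filter
                .collect(Collectors.toList());
    }
}
```

(In practice a derived query like findByAggregateId would avoid loading the whole table; the stream version just mirrors the "query using Java streams" idea above.)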
MVP 1
- Start migrating the writes to the microservices.
- Store states in the microservices and not the data (see event sourcing).
- Expose a write endpoint in the microservice to store the event streams (a sketch follows this list).
- Continue to use the de-normalized view for querying (see event sourcing). This time, stop listening to those events from the bridge queue which are directly exposed from the microservices.
- Use a DevOps strategy with a canary model and run the existing test cases.
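A hedged sketch of that write endpoint, assuming a hypothetical DeliveryEvent payload and an append-only EventStoreRepository:

```java
import java.time.Instant;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Command side (CQRS): state changes arrive as events and are appended,
// never updated in place; the de-normalized view catches up asynchronously.
@RestController
public class DeliveryCommandController {

    private final EventStoreRepository eventStore; // assumed append-only event table

    public DeliveryCommandController(EventStoreRepository eventStore) {
        this.eventStore = eventStore;
    }

    @PostMapping("/deliveries/events")
    public ResponseEntity<Void> append(@RequestBody DeliveryEvent event) {
        event.setCreatedAt(Instant.now()); // stamp the event before appending
        eventStore.save(event);            // append to the event stream
        return ResponseEntity.accepted().build(); // 202: view update is eventual
    }
}
```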
Containerization Patterns
- PCF v2.5 and above allows the use of Docker images as an alternative to PAS Linux containers.
- We should be able to containerize our application and deploy it to PCF. This also helps future-proof us for a PKS migration.
- Configure Docker registry access, for both public and private registries. POC Required
- In case we use our own containers, PCF will not monitor health checks or drain logs/metrics out to the Traffic Controller; we need to take care of this ourselves. POC Required
Communication patterns
For PCF, container-to-container networking should be enabled on the platform.
The Spring Cloud Services direct registration method allows containers to connect to each other directly using internal IPs. POC Required
Apart from this, we need to leverage the following communication patterns (a functional-style sketch follows the list):
- Request-response over AMQP using the Spring Cloud AMQP starter. POC Required
- Use Spring Cloud Stream for request-response (use channels) in a pub/sub mode, with the message broker providing load balancing, dead-letter queues, and error queues. POC Required
- Use Spring Cloud Stream reactive with Spring Cloud Function [Source, Processor, and Sink]. POC Required
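As a concrete starting point, here is a minimal Source -> Processor -> Sink pipeline in Spring Cloud Stream's functional style; the bean names are arbitrary, and the bindings are assumed to be wired in application properties (e.g. spring.cloud.function.definition=emit;enrich;log):

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Flux;

@Configuration
public class MessagingPipeline {

    @Bean
    public Supplier<Flux<String>> emit() {      // Source: publishes to its output binding
        return () -> Flux.just("order-created");
    }

    @Bean
    public Function<String, String> enrich() {  // Processor: input binding -> output binding
        return payload -> payload.toUpperCase();
    }

    @Bean
    public Consumer<String> log() {             // Sink: terminal consumer
        return payload -> System.out.println("received: " + payload);
    }
}
```

The broker binder (RabbitMQ, per the links at the end) supplies the pub/sub, the load balancing via consumer groups, and the dead-letter/error queues mentioned above.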
Transactional patterns
Saga orchestration and choreography are the two key patterns in distributed transactions. Below are the POCs that we should look at, taking PCF as the platform (a choreography sketch follows the list).
- Choreography pattern using a message broker (non-Spring app) with topic exchanges, fanout exchanges, domain events, and error queues. POC Required
- Choreography pattern using a message channel (Spring Cloud Stream) with the Source, Processor, and Sink model. POC Required
- Orchestration hybrid pattern (Spring commander + backends in other languages) with the BPM tool Camunda. POC Required
- Spring pipeline using a reactive producer, function, and subscriber (Java lambda functions) with Spring Cloud Stream and Spring Cloud Function. POC Required
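A sketch of the choreography variant over RabbitMQ, where each service reacts to a domain event and emits its own, with no central coordinator; the exchange, routing keys, and queue names are illustrative assumptions:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;

@Service
public class DeliverySagaChoreography {

    private final RabbitTemplate rabbitTemplate;

    public DeliverySagaChoreography(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // The order service publishes a domain event to a topic exchange.
    public void orderPlaced(String orderId) {
        rabbitTemplate.convertAndSend("saga.topic", "order.placed", orderId);
    }

    // The delivery service listens on its own queue (bound to order.placed) and
    // continues the saga by emitting the next domain event. Failures can be
    // routed to an error/dead-letter queue via broker configuration.
    @RabbitListener(queues = "delivery.order-placed")
    public void onOrderPlaced(String orderId) {
        rabbitTemplate.convertAndSend("saga.topic", "delivery.scheduled", orderId);
    }
}
```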
Migration patterns
Migrating from a monolith can be quite complex. As an example, in the scenario below I tried extracting the delivery service from a monolith into a microservice, and below is what I came up with.
I will try to explain the above diagram in the following sections, zooming into specific areas of concern.
My understanding is that db migration has the following phases:
1. Identify the schema for the normalized database and the event sourcing database for the microservices. We need to event-source from the monolith during the migration phase of events.
For this we use Flyway to start versioning the schema for the new dbs we create. POC Required This will help us keep our db versioned and also stop it from being corrupted by the monolith.
Also, the event store is better off being SQL, as we need to do a lot of aggregation on top of this data to update the de-normalized view with current data.
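A minimal sketch of the Flyway versioning idea. With Spring Boot, adding flyway-core and placing scripts under classpath:db/migration (V1__create_event_store.sql, and so on) is enough; the programmatic API below shows the same thing outside Boot's auto-configuration, with an assumed Postgres datasource:

```java
import org.flywaydb.core.Flyway;

public class SchemaVersioning {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost:5432/deliveries", // assumed db
                            "app", "secret")                               // assumed creds
                .locations("classpath:db/migration") // versioned V1__, V2__ scripts live here
                .load();
        flyway.migrate(); // applies any pending versioned migrations
    }
}
```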
2. Create data flow pipelines to migrate historical data. POC Required
While migrating the db to the microservices we should be careful about the schemas we migrate. Then we need to get the historical data using a data pipeline with a pull function.
As an alternative to Spring Cloud Data Flow, we can use Akka Streams and Scala Slick to periodically pull data and publish these streams to a queue.
One more alternative to polling is to use Speedment with the native Java stream APIs as a source for the queue; the rest of the flow remains the same.
3. Create data flow pipelines for real-time replication of data from the monolith to the microservices (for the events which are yet to migrate to the microservices). POC Required
We need to do some kind of event streaming. A good option is to use a db (like Mongo) which supports this feature out of the box. Polling in real time is resource-intensive and should be avoided.
- Create a custom table in the monolith's db with an event id, event data, created date, and processed flag.
- Update this table transactionally whenever there is an insert/update in the monolith's persistence (a sketch follows this list).
- Use Spring Cloud Data Flow to do Source -> Processor -> Sink (ETL) into the microservice's de-normalized view.
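A sketch of the transactional update of that custom event table, using plain JdbcTemplate; the table and column names are illustrative assumptions:

```java
import java.time.LocalDateTime;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class MonolithDeliveryWriter {

    private final JdbcTemplate jdbc;

    public MonolithDeliveryWriter(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Both statements commit or roll back together, so the pipeline never sees
    // an event row without the business update that produced it.
    @Transactional
    public void updateDelivery(long deliveryId, String status, String eventJson) {
        jdbc.update("UPDATE delivery SET status = ? WHERE id = ?", status, deliveryId);
        jdbc.update("INSERT INTO delivery_events (event_data, created_date, processed) "
                  + "VALUES (?, ?, false)", eventJson, LocalDateTime.now());
    }
}
```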
We could also have created an adapter so that, inside a transaction in the monolith, we send the event out directly to the queue. But going by Richardson's patterns, any new functionality in the existing system should be avoided during the transition phase. If the monolith already has an AMQP mechanism available we can reuse it; otherwise we shouldn't implement one, and should follow the above pattern instead.
Once we start moving the events which incur a state change, we need to modify the migration pipeline so that it filters out the events which are available on the new microservice for direct consumption.
4. Create cloud streams for the CQRS pattern inside the microservice. POC Required The important thing to note here is that the event insert and the notification to the event queue should happen transactionally, as sketched below.
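One way to sketch that, assuming a channel-transacted RabbitTemplate (rabbitTemplate.setChannelTransacted(true)) so the AMQP send commits and rolls back with the surrounding database transaction (best effort, not two-phase commit); EventStoreRepository, the exchange, and the routing key are assumptions:

```java
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class EventRecorder {

    private final EventStoreRepository eventStore; // assumed event store repository
    private final RabbitTemplate rabbitTemplate;   // assumed channel-transacted

    public EventRecorder(EventStoreRepository eventStore, RabbitTemplate rabbitTemplate) {
        this.eventStore = eventStore;
        this.rabbitTemplate = rabbitTemplate;
    }

    @Transactional
    public void record(DeliveryEvent event) {
        eventStore.save(event);                                           // insert the event
        rabbitTemplate.convertAndSend("events.topic", "delivery", event); // notify the queue
    }
}
```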
Regarding the data flow pipelines
While creating our pipelines we can make use of reactive streams (Spring Boot 2.2 and above supports them natively as Spring Cloud Functions). This helps stream the data as Fluxes and utilize threads without blocking. POC Required
Both the imperative and reactive styles use Spring Cloud Stream, which does all the boilerplate coding; a reactive sketch follows.
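A reactive processor sketch in that style; DeliveryEvent, its type field, and the DeliveryView.from mapper are assumptions:

```java
import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import reactor.core.publisher.Flux;

@Configuration
public class ReactivePipeline {

    // The whole stream is handled as a Flux, so records flow through without
    // blocking a thread per message.
    @Bean
    public Function<Flux<DeliveryEvent>, Flux<DeliveryView>> project() {
        return events -> events
                .filter(e -> "DELIVERY_UPDATED".equals(e.getType())) // assumed event type
                .map(DeliveryView::from);                            // assumed mapper
    }
}
```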
Querying the denormalized view db
For reads, use a tool like Slick or NoSQL db streams. Use an off-heap location to load data into the JVM for faster retrieval. Also, because it is totally isolated from the writes, we can scale it out to multiple replicas for better availability.
Anti-patterns: We shouldn't lift and shift data (data migration with the existing schema); it might lead to a lift-n-shift instead of a decomposition. Decomposing the db for CQRS is necessary, and this is the right time for it. Use the de-normalized view and the aggregate id as the reference to the monolith tables.
Security patterns
Tokens generated at SCG should be relayed to downstream services
- Use token relay to pass the token to the downstream services, so that individual microservices do not create tokens on their own. POC Required
- Use JWT signature validation to validate incoming token requests natively, as sketched below. POC Required (This will remove the remote token validation round trip between the resource server and the authentication server, and hence reduce network traffic.) Resolve the UserDetailsService from the Spring Authentication token. Enable client registration so that clients can be validated.
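A sketch of the native validation side, assuming a Spring Boot resource server with spring.security.oauth2.resourceserver.jwt.issuer-uri pointing at the auth server, so the JWT signature is checked locally against the issuer's public key instead of making a remote call per request:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ResourceServerConfig {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http.authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            // Validate the relayed JWT locally (signature + claims), no introspection call.
            .oauth2ResourceServer(oauth2 -> oauth2.jwt(Customizer.withDefaults()));
        return http.build();
    }
}
```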
Logging and Monitoring
- Use the Firehose and nozzles to get metric data streamed out. POC Required
- Generate custom gauges and use them as a scaling trigger for the Autoscaler (see the sketch after this list). POC Required
- In case of Docker containers, we need to take care of the log drains. POC Required
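A sketch of a custom gauge via Micrometer; the metric name and the queue-depth source are assumptions, and on PCF the value would flow out through the Firehose to drive an Autoscaler rule:

```java
import java.util.concurrent.atomic.AtomicInteger;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class QueueDepthMetrics {

    private final AtomicInteger queueDepth = new AtomicInteger();

    public QueueDepthMetrics(MeterRegistry registry) {
        // The gauge samples the current value whenever the registry is scraped/streamed.
        Gauge.builder("delivery.queue.depth", queueDepth, AtomicInteger::get)
             .register(registry);
    }

    public void update(int depth) {
        queueDepth.set(depth); // called by whatever watches the real queue
    }
}
```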
Links
https://dzone.com/articles/event-driven-orchestration-an-effective-microservi
https://github.com/flowable/flowable-engine/tree/master/modules/flowable-spring-boot
https://activiti.gitbook.io/activiti-7-developers-guide/components/spring-cloud
http://progressivecoder.com/saga-pattern-implementation-with-axon-and-spring-boot-part-1/
https://codeburst.io/using-rabbitmq-for-microservices-communication-on-docker-a43840401819
https://speedment.com/pricing/
https://speedment.com/events/
https://www.youtube.com/watch?v=oTTfaynD1Xc&t=178s
https://www.youtube.com/watch?v=x4PImMjPa7k
https://dzone.com/articles/distributed-sagas-for-microservices
https://github.com/lucasdeabreu/saga-pattern-example
https://www.javainuse.com/spring/cloud-stream-rabbitmq-2
https://piotrminkowski.com/2018/06/15/building-and-testing-message-driven-microservices-using-spring-cloud-stream/
https://stackabuse.com/spring-cloud-stream-with-rabbitmq-message-driven-microservices/
https://springbootdev.com/2018/07/29/message-driven-microservices-with-spring-cloud-stream-and-rabbitmq-publish-and-subscribe-messages-part-1/
https://github.com/spring-cloud/spring-cloud-stream/blob/master/docs/src/main/asciidoc/spring-cloud-stream.adoc#spring_cloud_function
https://www.javainuse.com/spring/cloud-stream-rabbitmq-1
https://www.cloudamqp.com/blog/2015-05-18-part1-rabbitmq-for-beginners-what-is-rabbitmq.html
https://www.youtube.com/watch?v=PlWqy6StFwA
https://github.com/spring-cloud/spring-cloud-stream/blob/master/docs/src/main/asciidoc/spring-cloud-stream.adoc
https://howtodoinjava.com/spring-webflux/spring-webflux-tutorial/
https://github.com/spring-cloud/spring-cloud-stream-sample
https://www.youtube.com/watch?v=x4PImMjPa7k&t=1906s
https://pusher.com/tutorials/mongodb-change-streams
https://codelabs.developers.google.com/codelabs/cloud-spinnaker-kubernetes-cd/#