Susheel Kumar, Author at Perficient Blogs
https://blogs.perficient.com/author/skumar/

7 Reasons Why Organizations Struggle with Microservices Adoption
https://blogs.perficient.com/2017/06/26/7-reasons-why-organization-struggle-with-microservices-adoption/ (June 27, 2017)

Granularity – The word micro in microservices does not refer to the number of operations, APIs, or endpoints your service exposes. Some people assume they should have a single operation per microservice, which leads to an explosion of services; managing those would be a nightmare. It's difficult to get the granularity right immediately, before the proper requirements are in place, but you also want to ensure that your scope is neither too narrow nor too broad.

Architectural Complexity – Microservice architecture is fairly complex. Anyone planning to adopt it needs to understand the various components involved, in addition to the development effort. Apart from the new development and architecture style, you also need to consider things like automation, service discovery, and service registration, and the fact that microservices rely on service choreography rather than service orchestration.

Service Ownership – Organizations must assign an owner for each microservice. Service owners are not necessarily involved in the day-to-day development; rather, they are responsible for service governance, lifecycle, promotion, and so on. They decide what functionality goes into a service, when to create a new version, when to retire a particular version, and whether a new microservice should be created instead of adding functionality to an existing one.

Culture of Automation – With microservice adoption, the number of components involved increases dramatically, and managing them manually is inefficient, error prone, and next to impossible. Make sure you have automated builds and CI/CD using tools like Jenkins or TeamCity. If your organization is using Docker, you need to set up a Docker registry and find an automated way to create Docker images and keep those images updated. You can use tools like Chef or Ansible for automated environment preparation.

Refactoring Services and Databases Together – Avoid migrating existing services to microservices and splitting the database at the same time, because it takes time to find the right service granularity. If you change the service and the database together, you will have to handle database schema changes over and over: if your service turns out to be too narrow you will need to split your schema into multiple schemas, and if you later decide to merge microservices you will need to combine schemas again. It's better to let your service granularity settle for a while; once you feel confident, move forward with the database schema breakdown.

Follow the Crowd –  If you think you should use microservices because they are new and popular, you may end up in deep trouble. Before you proceed with microservices, you need to understand the pros and cons. Be sure to consider carefully what problem you are looking to solve and what the benefits could be.

Going Full Speed –  Organizations need to start with just a few services to see how they are working. Determine what the right granularity is, what new components are required, what new skills are required, what kind of automation is needed, and what organizational and management changes may be needed. Once you are satisfied, you can add a few more services and repeat the same testing process. Initially organizations need to be patient so they can see the benefits. The journey can take a couple of months to a year, but once you address the challenges and implement the changes, the rest of the journey should be fine.

Jenkins Delivery Pipeline and Build Promotion
https://blogs.perficient.com/2017/06/14/jenkins-delivery-pipeline-and-build-promotion-2/ (June 14, 2017)

Jenkins is a well known and frequently used Continuous Integration tool. With wide community support, Jenkins has a very large set of plugins available to accomplish almost anything you need.

In this post, I will use the Job-DSL plugin to build a delivery pipeline which is a very common use case for a team following CI/CD practices.

While most organizations prefer to use Jenkins as the overall orchestrator of the workflow, some organizations use Jenkins for the complete lifecycle. These decisions depend on which tools already exist in your organization and on affordability. For example, you may use Jenkins to build an artifact and then trigger Bamboo or UrbanCode for the deployment process, as well as a security scan process like WhiteHat. Some of these tools may not be available in your organization, and scripting can very well be used as an alternative for these tasks.

Even within Jenkins you can create jobs manually or automate their creation through Job-DSL, but both DevOps and developers need to contribute to the automation: the DevOps team needs some understanding of how the development process works, and developers need some understanding of how DevOps will join the pieces together to build an automated workflow.

The Job-DSL plugin in Jenkins lets you automate job creation and configuration, which makes it practical to maintain hundreds of jobs, something that would not be fun to do manually. The Job-DSL plugin uses Groovy for scripting, which means both developers and the DevOps team should have some Groovy knowledge.

A delivery pipeline is often interpreted differently by different people, and in some ways everybody is right, as there is no standard definition or rule for it. A pipeline that works for one group may or may not work the same way for another. Most of the time a pipeline will include stages such as build automation, test automation, security checks (e.g. a WhiteHat scan), deployment automation (e.g. Bamboo, UrbanCode, or scripting), quality checks (e.g. SonarQube), approvals (e.g. manual approval to deploy to a UAT, pre-prod, or prod environment), and notifications, including group chat notifications (e.g. Slack, HipChat).

Sample Application Delivery Pipeline

Let’s discuss one sample application delivery pipeline. In this sample app, assume that our pipeline consists of the following steps.

Build -> Automated Test -> Dev Deploy -> QA Approval -> QA Deploy

The seed job is the job that creates the other Jenkins jobs automatically; the seed job itself is configured manually. The seed job reads the DSL scripts, parses them, and creates the corresponding job configurations in Jenkins.

After the seed job runs successfully, we will have a job created for our sample app.
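One way to sanity-check this from outside Jenkins is its REST API. The Python sketch below triggers the seed job and then confirms the generated jobs exist; the Jenkins URL, credentials, seed job name, and most job names are assumptions for illustration (only SampleAppQApromotion appears later in this post), and it assumes API-token authentication so no CSRF crumb is needed.

import base64
import json
import urllib.request

# Assumptions for illustration: Jenkins at localhost:8080, a seed job named
# "seed-job", and a user/API-token pair for authentication.
JENKINS = "http://localhost:8080"
AUTH = "Basic " + base64.b64encode(b"admin:my-api-token").decode()

def call(path, method="GET"):
    req = urllib.request.Request(JENKINS + path, method=method)
    req.add_header("Authorization", AUTH)
    return urllib.request.urlopen(req)

# Trigger the seed job; Jenkins queues a build and returns HTTP 201.
call("/job/seed-job/build", method="POST")

# Later, confirm that the generated pipeline jobs actually exist
# (urlopen raises an HTTPError 404 if a job was not created).
for name in ["SampleAppBuild", "SampleAppTest", "SampleAppDevDeploy",
             "SampleAppQApromotion", "SampleAppQADeploy"]:
    info = json.load(call("/job/%s/api/json" % name))
    print(name, "->", info.get("color", "unknown"))   # e.g. blue, red, notbuilt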

The Seed job will create the following set of jobs which will eventually be part of the pipeline. The Seed job will also create a Jenkins pipeline view.

Pipeline View: Using this view you can visualize your delivery pipeline and see at which step a failure occurred.

(Screenshot: the pipeline view after the first build has started.)

Build: This job includes the configuration for building the project: job triggers, SCM location, JDK version, Maven goals, artifact upload to a repository such as Artifactory, and email notification.

Test: This job can run the test suites and decide whether or not to trigger the downstream job.

Dev Deploy: Simple job with a trigger to the Promotion Job if the deployment was successful.

This job can call the script to perform a deployment or use tools like Bamboo or Urbancode.

Usually a Dev deploy doesn’t need promotion, but we can add that step if required.

QA Promotion: This job sends an email notification to the person or group responsible for approval. The email contains a link for promotion and an optional comment field for approval notes. (Screenshot: the pipeline before approval.)

The Promotion Email link can look like this:  http://localhost:8080/job/SampleAppQApromotion/1/promotion/

Once the approver clicks the link, the approval screen appears. (Screenshot: the promotion approval screen.)

Once approved, the deploy job runs. (Screenshot: the pipeline after QA approval and deployment.)

If you are a Jenkins admin or have global privileges, you will also see Force Promotion and Re-execute Promotion options, which let an admin force or re-run a promotion. (Screenshots: the admin-only promotion options.)

In the pipeline view, a star icon shows that a particular build has been promoted and by which user.

QA Deploy:  QA environment deployment job. This job can call a script to perform a deployment or use tools like Bamboo or Urbancode.

We can chain multiple promotion and deploy jobs to cover additional environments, e.g. UAT Promotion -> UAT Deploy -> PreProd Promotion -> PreProd Deploy -> Prod Promotion -> Prod Deploy.

Not every organization or team has fully automated test suites; in some cases testing is a combination of automated and manual steps. In such cases, we can introduce promotion steps anywhere in the pipeline to let a person decide whether or not to promote.

DevOps and Open Source Technologies
https://blogs.perficient.com/2017/06/08/devops-and-open-source-technologies/ (June 9, 2017)

As everybody adopts cloud solutions in some shape or form, whether public, private, or hybrid, and changes architectures and approaches (for example, microservices), it's essential to implement a very high degree of automation. With the traditional approach and architecture we had few components, which made it easy to monitor, implement, and support the infrastructure. But with a growing number of components and moving pieces, that is no longer easy, reliable, or feasible to do manually.

There are plenty of proprietary solutions available to solve many of these obstacles, but often they feel either too costly or not flexible enough to do things the way we want. There are also various open source solutions, each with its own advantages and disadvantages. With open source you can customize, change, or tweak the tool to meet your needs, but that comes at the cost of spending time on development effort.

Various companies also provide paid support for some open source technologies, and we get a lot of community support too. While it's not possible to list all of the open source tools and technologies in one article, I will try to list a few of the more commonly used solutions.

Build Tools:

There are various tools available in this category like Ant, Make, Maven, Ivy, Gradle, Grunt, Rake and Gulp.

Apache Ant is mostly used to build Java projects, but it can very well be used for C and C++. It uses XML files for build configuration.

GNU Make uses a makefile for build configuration. It can be used for almost any language.

Apache Maven is another XML-based tool, which uses the POM format. It is widely popular for Java projects because of its dependency management features. It also includes artifact release features.

Apache Ivy is similar in many respects to Maven and Ant. It is highly flexible and extensible.

Gradle is based on Groovy. It combines many features from tools like Ant and Maven. Gradle build scripts are Groovy scripts, which means you can customize your builds any way you like.

Grunt and Gulp are used for JavaScript-related build and automation tasks.

Rake is similar to Make in some aspects. It is written in Ruby and popular in the Ruby community.

Continuous Integration Tools:

Jenkins is a well-known and very popular CI solution. Jenkins has a large number of plugins available for almost anything we need, and if we can't find one, there is a way to create a plugin ourselves. It does lack some basic features out of the box, and it needs plugins to accomplish almost anything.

TeamCity is an enterprise-grade solution that is available for free with limited use. TeamCity is extensible through plugins. It provides various out-of-the-box features like user and access management, user audits, and code quality checks.

Travis CI is another very popular CI solution available on the market.

GoCD is another widely used open source tool. It was originally created at ThoughtWorks and later made open source.

Other available solutions include Integrity and Buildbot.

System Configuration or Infrastructure as Code:

With a growing number of infrastructure components, servers, applications, containers, and virtual machines, it's very hard to create and maintain all of these manually. We should treat systems as disposable and have the ability to recreate the same system repeatedly and quickly. We can accomplish a variety of these needs using Docker, Vagrant, Ansible, Packer, Chef, and Puppet.

Docker is not the only containerization technology, but it has certainly made container technology popular and easy to adopt.

Vagrant makes virtual machine setup and maintenance easy using configuration files. Developers can have multiple configurations to meet their development and debugging needs or to match certain environments.

Packer takes the configuration approach one step further, in the sense that we can create machine images for multiple platforms from the same configuration files.

Ansible takes the same simple-text-file configuration approach. Playbooks are a simple form of orchestration describing the systems or environments we would like to build.

Chef and Puppet are in the same category as Ansible.

Log Analytics and Monitoring:

The ELK stack combines Elasticsearch, Logstash, and Kibana to build a powerful solution for log analytics, monitoring, and troubleshooting.

Graylog is another log management and analysis tool.

Service Discovery: With hundreds of services running in an organization, it's no easy task to manage the endpoints, supply them to services, and know who is using which endpoint.

Consul can be used for Service discovery and health monitoring.
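As a rough illustration of the idea, a service can register itself with a local Consul agent, and consumers can then look up healthy instances through Consul's HTTP API instead of hard-coding endpoints. The service name, port, and health-check URL in this Python sketch are assumptions for illustration.

import json
import urllib.request

CONSUL = "http://localhost:8500"   # local Consul agent, default HTTP port

def consul(method, path, payload=None):
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(CONSUL + path, data=data, method=method,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        body = resp.read()
        return json.loads(body) if body else None

# Register an "orders" service with an HTTP health check (assumed /health endpoint).
consul("PUT", "/v1/agent/service/register", {
    "Name": "orders",
    "Port": 8080,
    "Check": {"HTTP": "http://localhost:8080/health", "Interval": "10s"},
})

# Consumers query Consul for instances that are currently passing their checks.
for entry in consul("GET", "/v1/health/service/orders?passing"):
    svc = entry["Service"]
    print("orders instance:", svc["Address"] or entry["Node"]["Address"], svc["Port"])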

Eureka and Zookeeper are also available solutions in the same space.

If you liked this article, feel free to read another post on Jenkins pipelines at https://blogs.perficient.com/digitaltransformation/2017/06/13/jenkins-delivery-pipeline-and-build-promotion/

Why and What to Validate in a Maven POM xml File
https://blogs.perficient.com/2016/09/29/why-and-what-to-validate-in-maven-pom-xml-file/ (September 29, 2016)

Following are some of the ways you can validate a pom.xml file:

  • Maven itself validates some basic things that it needs to build your project.
  • Manually review the pom.xml of each project and module against a checklist to make sure everything is correct.
  • Automate the checks using a scripting language like Python or Jython.

But why would someone perform an extra set of activities that complicates the development and build process?

Every organization has a set of standards and best practices that everyone is expected to follow, including development standards that keep the process consistent across teams.

Following are some cases where you may want an automated method of verifying that these conditions are met (a minimal automated check is sketched after the list):

  • All pom.xml files have an scm element. While this is not needed for the build, it must be present if you want to use the Maven release plugin.
  • The artifact finalName is consistent. Some developers override the final name while others leave the default, in which case Maven combines the artifactId and version; a check keeps this uniform.
  • We may not want a repository location in every pom.xml, as this can be controlled by Maven's settings.xml.
  • We may not want developers to set skipTests to true just to make the build pass by skipping JUnit tests.
  • All projects should have a correct groupId, which might start with something related to the company, organization, or department; every project should follow the same convention.
  • All projects should have a consistent version naming convention.
  • We may want all projects to have a name element containing the full name of the project; e.g. the artifactId can be a shortened name for the application, while name carries the full descriptive name.
  • We may want to make sure developers add a description element describing the details or purpose of the application.
  • Every project should declare an explicit packaging type, since Maven defaults to jar if nothing is specified.
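Below is a minimal sketch of automating a few of these checks with Python's standard library. The expected groupId prefix and the exact rules enforced are assumptions to illustrate the approach, and it assumes the POM declares the usual Maven 4.0.0 namespace.

import sys
import xml.etree.ElementTree as ET

# The Maven POM namespace; lookups below must be namespace-qualified.
NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def validate(pom_path):
    problems = []
    root = ET.parse(pom_path).getroot()

    def text(tag):
        el = root.find("m:" + tag, NS)
        return el.text.strip() if el is not None and el.text else None

    if root.find("m:scm", NS) is None:
        problems.append("missing <scm> element (needed for the release plugin)")
    group_id = text("groupId")   # note: may be inherited from a parent POM
    if group_id and not group_id.startswith("com.mycompany"):
        problems.append("groupId '%s' does not follow the naming standard" % group_id)
    for tag in ("name", "description", "packaging"):
        if not text(tag):
            problems.append("missing <%s> element" % tag)
    if root.find("m:repositories", NS) is not None:
        problems.append("repositories belong in settings.xml, not in the POM")
    return problems

if __name__ == "__main__":
    for issue in validate(sys.argv[1]):
        print("WARN:", issue)

A CI job can run a script like this against every pom.xml in the repository and warn or fail on violations.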
API Lifecycle
https://blogs.perficient.com/2016/09/26/api-lifecycle/ (September 27, 2016)

Requirement – This is the stage where we want to expose some functionality using APIs.

Analysis – We analyze which systems will be required to fulfill the functionality, define the API contract, etc.

Development – APIs are developed as per the contract. The contract or documentation is published so that consumers know what is needed to consume the APIs.

Deployed – API is deployed to the production system

Active – This is the version that should be used by consumers

Deprecated – This version is deprecated, but still supported for bug fixes. It is no longer recommended for new consumers to start with.

Retired – After a certain amount of time, the old API version can be retired, e.g. once a new version with the same functionality is available.

API Lifecycle Diagram: (image not included; the stages flow from Requirement through Analysis, Development, Deployed, Active, and Deprecated to Retired.)

When to break Monolithic
https://blogs.perficient.com/2016/09/25/when-to-break-monolithic/ (September 26, 2016)

Often we wonder when we should break a monolithic service into microservices or smaller services. When is it time to make this change, and what will it take to accomplish?

We all know services should be cohesive and loosely coupled. While we may have started with that in mind, over time we look back and wonder whether that is really still the case. Sometimes we end up adding code or functionality that was never meant to be part of a particular service. This can happen because of business timelines or because we are in a hurry to launch functionality sooner than our competitors.

Are we taking too much time for a small change?

Large services carry more risk. We need to be cautious, complete a thorough analysis, and have a backup plan ready. With a large service, if something goes wrong it can impact a large number of users. These considerations lead to a slower release cycle.

Are we impacting other functionality every time we release something?

As a service accumulates functionality, one small change can lead to unexpected breakage in other functions you weren't even touching.

Not enough time to run full test suites?

Building more functionality into a single service could result in hundreds of test cases. Depending on the complexity of your service, running test suites can be fast or they may take hours or days if more complex data setup is required or if not all systems are automated.

Multiple teams making changes to the same codebase?

Over time we keep adding functionality to existing services, and those pieces of functionality may be owned or maintained by separate teams. Every time we need to make a change, one team might not be available, which can delay the release. All teams then need to run regression tests to make sure the new functionality doesn't break existing functionality.

If you run into these issues repeatedly, it's time to break the monolith into smaller services.

Automating Docker Image builds
https://blogs.perficient.com/2016/09/23/automating-docker-image-builds/ (September 24, 2016)

Why automate?

  • Similar to any other automation, we get consistent results once we have automation in place.
  • It speeds up the development process.
  • You can produce the same result by following the same steps, so there is less of a chance that it will work sometimes and not work other times.

What is required to automate?

Dockerfile – The Dockerfile contains the steps to produce the Docker image: which base image to start from, whether other installations are required, how to install your app, etc.

Depending on the requirements, we use an existing image to build a new image such as MyAppV1, MyAppV2 etc.

Any CI/CD tool like Jenkins – Tools like Jenkins can help you schedule your tasks, send notifications, run tasks from scripts, etc.

Automated test suite – Run some tests to make sure your Docker image is really usable and does what we intended it to do.

Docker registry – Once the Docker Image looks good, we can upload it to the Docker registry so that it can be used by others.

Overall Workflow: (diagram not included; the flow is Dockerfile -> CI build -> automated tests -> push to the Docker registry.)
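A minimal sketch of that workflow as a script (for example, called from a Jenkins job) is shown below; it simply shells out to the Docker CLI. The image name, registry, version, and smoke test are assumptions for illustration.

import subprocess

IMAGE = "registry.mycompany.com/myapp"   # assumed private registry/image name

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)   # stop the workflow if any step fails

def build_test_push(version):
    tag = "%s:%s" % (IMAGE, version)
    run("docker", "build", "-t", tag, ".")      # build from the Dockerfile
    # Smoke test: check the container starts and the app answers; the script
    # name is an assumption, replace it with your real automated test suite.
    run("docker", "run", "--rm", tag, "/app/healthcheck.sh")
    run("docker", "push", tag)                  # publish to the Docker registry

if __name__ == "__main__":
    build_test_push("1.0.0")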

Docker Swarm Quick Overview
https://blogs.perficient.com/2016/09/21/docker-swarm-quick-overview/ (September 22, 2016)

Docker Swarm helps you create and manage Docker Clusters.

Docker Swarm automatically handles scaling up and down depending on the number of tasks you want to run.

Docker Swarm can handle load balancing internally among containers.

Docker Swarm Components

Docker Swarm contains the following key components.

Node – A special container that runs on each Swarm host. Nodes talk to the other Docker hosts/nodes participating in the swarm.

Swarm Manager/Manager Node – The Swarm manager manages the Swarm hosts. Manager nodes delegate work/tasks to worker nodes.

Swarm Host/Worker Node – Swarm hosts are simply Docker hosts where the containers run.

(Diagram: a Docker Swarm manager node delegating tasks to worker nodes.)

Docker Container Best Practices
https://blogs.perficient.com/2016/09/20/docker-container-best-practices/ (September 21, 2016)
  • It is preferable to use Dockerfile to create an image.
  • Using Dockerfile is the only way you can be sure of reproducing the same image every time.
  • Use a layered approach, but keep in mind that Docker imposes limits on the number of layers an image can have. To minimize layers, you can combine commands in the Dockerfile.
  • Use .dockerignore file to avoid any extra files/directories when building images.
  • Avoid very large images, because they take too long to download and upload.
  • Use Data volume to share data between containers.
  • Do not store temporary Docker images in the Docker registry. It's better to save such an image to a tar file to share with other developers. We can use the standard Docker repo, but then we need to make sure temporary images get deleted by an automated process.
  • If you are using multiple containers for your application, for example a database, some service, and a web app container, it's better to use Docker Compose so that all containers can be managed together (starting, stopping, etc.).
An Introduction to Docker and Containers
https://blogs.perficient.com/2016/09/20/an-introduction-to-docker-and-containers/ (September 20, 2016)

Containers: Containers represent operating-system-level virtualization, which lets you run multiple isolated systems, i.e. containers, on the same machine.

Docker: Docker is a containerization engine, which means it lets you create containers to achieve operating-system-level virtualization, and it allows you to automate the containerization process.
We can store Docker images in a Docker repository and share them within your company or with the outside world, depending on how the repository is set up and whether the image may be shared outside the company.

Virtualization vs Containerization

Virtualization
  • Hardware-level virtualization
  • Heavyweight compared to containers
  • Slower provisioning compared to containers
  • More secure, since workloads are fully isolated
  • More overhead than containers

Containerization
  • Operating-system-level virtualization
  • Lightweight compared to a virtual machine
  • Faster provisioning and scalability
  • Near-native performance
  • Less secure, since isolation is at the process level

The most common Docker solutions include Docker Engine and Docker Hub, the public Docker repository.

Docker Engine – The core component which is responsible for running Docker Containers.

Docker Hub – The Docker repository. It holds Docker container images that are available to the public. You can find Docker Hub at https://hub.docker.com.

Spring Boot Actuator – Application Monitoring Made Easy
https://blogs.perficient.com/2016/07/16/spring-boot-actuator-application-monitoring-made-easy/ (July 16, 2016)

Spring Boot Actuator helps you manage and monitor your applications in various ways, such as HTTP, JMX, and SSH, and lets you monitor many aspects of your application.
Actuator exposes various endpoints that provide details about the environment and the application. The endpoints are listed below.

All endpoints can be accessed at <host>:<port>/<endpoint>, e.g. http://localhost:8080/info or http://localhost:8080/health. (A small monitoring sketch follows the endpoint list below.)
1. actuator – Displays all the available endpoints. This is disabled by default; to enable it, add the spring-hateoas dependency to your application pom.xml:
<dependency>
    <groupId>org.springframework.hateoas</groupId>
    <artifactId>spring-hateoas</artifactId>
</dependency>

2. autoconfig – Displays the auto-configuration report for your application.
3. beans – Displays all the beans in your application.
4. configprops – Displays the configuration properties defined by @ConfigurationProperties beans.
5. docs – Displays documentation for the Actuator endpoints. Requires the spring-boot-actuator-docs dependency in the application pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-actuator-docs</artifactId>
</dependency>
6. dump – Displays a thread dump, e.g. running threads and stack traces.
7. env – Displays all the properties from Spring's ConfigurableEnvironment.
8. health – Displays application health information such as status and disk space.
9. metrics – Displays metrics for the current application, e.g. memory used, memory available, application uptime, and number of threads.
10. info – Displays information about the application. Custom values can be configured in the application.properties file, for example:
info.app.name=Spring Boot Web Actuator Application
info.app.description=This is an example of the Actuator module
info.app.version=1.0.0
11. mappings – Displays all the @RequestMapping paths.
12. trace – Displays trace information, e.g. recent requests.
13. flyway – Displays information about Flyway database migration scripts.
14. liquibase – Displays Liquibase database migrations, if applied. To enable this endpoint, add the liquibase-core dependency to your application pom.xml:
<dependency>
    <groupId>org.liquibase</groupId>
    <artifactId>liquibase-core</artifactId>
</dependency>
15. logfile – Returns the contents of the log file. The log file should be specified in application.properties using logging.file or logging.path.
16. shutdown – Shuts down the application. It is not enabled by default; to enable this endpoint, add the following to application.properties:
endpoints.shutdown.enabled=true
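As promised above, here is a small sketch of polling a couple of these endpoints from a monitoring script. The host and port are assumptions, the field names match Spring Boot 1.x's default JSON, and it assumes the endpoints are not locked down by management security.

import json
import urllib.request

BASE = "http://localhost:8080"   # assumed host/port of the Spring Boot app

def fetch(endpoint):
    with urllib.request.urlopen("%s/%s" % (BASE, endpoint)) as resp:
        return json.load(resp)

health = fetch("health")
print("status:", health.get("status"))                        # e.g. UP or DOWN
print("disk free:", health.get("diskSpace", {}).get("free"))

metrics = fetch("metrics")
print("heap used:", metrics.get("heap.used"))   # metric names vary by app/version
print("uptime:", metrics.get("uptime"))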

Most Common NoSQL Databases
https://blogs.perficient.com/2016/07/15/most-common-nosql-databases/ (July 15, 2016)
  1. Key-value pair NoSQL databases


  • The most basic type of NoSQL database
  • The two main concepts are keys and values
  • Keys are identifiers through which you can refer to values
  • Values are the data stored against a key
  • Values can be strings, blobs, images, etc.
  • Some databases support buckets, which provide separate placeholders to logically separate data; e.g. account data goes into an account bucket and orders go into an order bucket (see the small sketch after this list)

 


 

  • Databases: Oracle NoSQL, Redis, Cassandra
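As a toy illustration of the key-value model and the bucket idea above, plain Python dictionaries can stand in for the store (in Redis, for instance, you might instead prefix keys such as account:1001):

# Plain dicts standing in for a key-value store with two "buckets".
store = {
    "accounts": {"1001": {"name": "Alice", "balance": 250.0}},
    "orders":   {"12345": {"orderdate": "08-10-2015", "total": 40.0}},
}

# Reads and writes always go through the key; there is no query by value.
store["orders"]["12346"] = {"orderdate": "08-11-2015", "total": 15.5}
print(store["accounts"]["1001"])   # -> {'name': 'Alice', 'balance': 250.0}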

  2. Document NoSQL databases

  • Document databases are similar to key-value databases, with one difference: they treat the value as a document of some type, such as JSON or XML.

For example:

{
  "orderno": "12345",
  "orderdate": "08-10-2015"
}


  • Values don't need to be in a predefined format.
  • The database offers APIs to run queries against the values and return custom results; e.g. you can ask for the orders that were placed on 08-10-2015 (a small illustration follows this list).
  • Databases:  CouchDB, MongoDB
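As a small illustration of that kind of query, here are two documents filtered by a field value in plain Python; a real document database (e.g. MongoDB) would run an equivalent filter for you on the server side.

# Two documents in the same "orders" collection; they need not share a schema.
orders = [
    {"orderno": "12345", "orderdate": "08-10-2015"},
    {"orderno": "12346", "orderdate": "08-11-2015", "items": ["book", "pen"]},
]

# "Give me the orders placed on 08-10-2015"
# (roughly db.orders.find({"orderdate": "08-10-2015"}) in MongoDB).
placed_on = [o for o in orders if o.get("orderdate") == "08-10-2015"]
print(placed_on)   # -> [{'orderno': '12345', 'orderdate': '08-10-2015'}]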

 

  3. Column style NoSQL databases

  • Somewhat similar to relational databases, as they store values in columns.
  • A bit more complex than key-value or document store databases.
  • A column has a name and a value.


  • One or more columns make up a row
  • Rows can have different columns
  • No predefined schema is required
  • Databases: HBase, Cassandra

 

  4. Graph NoSQL databases

  • A little more complex than the other NoSQL databases
  • Use concepts from graph theory, vertices and edges (nodes and relationships), to represent data
  • A node is an object that has a set of attributes
  • A relationship is a link between two nodes (a toy example follows this list)
  • Attributes/properties are the pieces of information attached to nodes and relationships
  • Databases: Neo4j, OrientDB
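A toy example of the node/relationship/property model in plain Python is below; a graph database such as Neo4j manages this natively, with indexing and a query language on top. The node names and relationship type are made up for illustration.

# Nodes with properties, and typed relationships (edges) between them.
nodes = {
    "alice": {"label": "Person", "name": "Alice"},
    "acme":  {"label": "Company", "name": "Acme Corp"},
}
relationships = [
    ("alice", "WORKS_AT", "acme", {"since": 2015}),   # a relationship with a property
]

# "Who does Alice work for?" = follow WORKS_AT edges starting at the alice node.
employers = [nodes[dst]["name"] for src, rel, dst, props in relationships
             if src == "alice" and rel == "WORKS_AT"]
print(employers)   # -> ['Acme Corp']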

 
