Docker Best Practices
Several container tools and platforms have evolved to facilitate the development and operation of containers, even though Docker has become almost synonymous with containers and is used by many software development companies during the development process. Protecting container-based apps built with these other technologies follows the same security principles as protecting Docker-based ones. To help you create more secure containers, we have assembled some of the most important Docker best practices into one blog post, making it one of the most thorough pieces of practical advice available. Shall we begin?

1. Docker Development Best Practices

In software development, adhering to best practices while working with Docker can improve the efficiency and reliability of your software projects. The best practices mentioned below help you optimize images, improve security in the Docker container runtime and the host OS, and ensure smooth deployment processes and maintainable Docker environments.

1.1 How to Keep Your Images Small

Small images are faster to pull over the network and load into memory when starting containers or services. There are a few rules of thumb to keep the image size small:

  • Begin with a suitable basic image: If you require a JDK, for example, you might want to consider using an existing Docker Image like eclipse-temurin instead of creating a new image from the ground up.
  • Implement multistage builds: For example, you may develop your Java program using the maven image, then switch to the tomcat image, and finally, deploy it by copying the Java assets to the right place, all within the same Dockerfile. This implies that the built-in artifacts and environment are the only things included in the final image, rather than all of the libraries and dependencies.
  • If you are using a version of Docker without multistage builds, keep your image small by using fewer layers: reduce the number of RUN lines in the Dockerfile and merge related commands into a single RUN line using your shell’s built-in capabilities. Consider these two snippets: the first produces two image layers, while the second produces only one.
RUN apt-get -y update
RUN apt-get install -y python

or

RUN apt-get -y update && apt-get install -y python
  • If you have multiple images with a lot in common, consider creating your base image with the shared components, and basing your unique images on that. Docker only needs to load the common layers once, and they are cached. This means that your derivative images use memory on the Docker host more efficiently and load faster.
  • To keep your production image lean but allow for debugging, consider using the production image as the base image for the debug image. Additional testing or debugging tools can be added on top of the production image.
  • Whenever deploying the application in different environments and building images, always tag images with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information. Don’t rely on the automatically created latest tag.
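For instance, a build pipeline might apply an explicit version tag and an environment-specific tag in one build (the image name and tags below are only placeholders to illustrate the idea):

# Build once and apply both a version tag and an environment-specific tag (illustrative names)
docker build -t myapp:1.4.2 -t myapp:1.4.2-prod .

# Push both tags to the registry
docker push myapp:1.4.2
docker push myapp:1.4.2-prod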

1.2 Where and How to Persist Application Data

  • Keep application data out of the container’s writable layer and away from storage drivers. Compared to utilizing volumes or bind mounts, storing data in the container’s writable layer makes it larger and less efficient from an I/O standpoint.
  • Alternatively, use volumes to store data.
  • When working in a development environment, bind mounts are useful for mounting directories such as source code or freshly built binaries into containers. For production, use a volume instead, mounting it to the same container path you used for the bind mount during development (see the example after this list).
  • During production, it is recommended to utilize secrets for storing sensitive application data that services consume, and configs for storing non-sensitive data like configuration files. You may make use of these capabilities of services by transforming from standalone containers to single-replica services.
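As a small illustration of the difference (the image names and paths are placeholders), a bind mount suits local development while a named volume suits production data:

# Development: bind-mount the working directory so code changes are picked up live
docker run -d -v "$(pwd)":/app myapp:dev

# Production: keep data in a named volume managed by Docker
docker run -d -v app_data:/var/lib/myapp myapp:1.4.2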

1.3 Use CI/CD for Testing and Deployment

Use Docker Hub or another continuous integration/continuous delivery pipeline to automatically build, tag, and test Docker images whenever you make a pull request or check in changes to source control.

Make the pipeline even more secure by requiring the teams responsible for development, quality, and security to sign images before they are pushed to production, so an image is only released once it has been tested and approved.

2. Docker Best Practices for Securing Docker Images

Let’s have a look at the best practices for Docker image security.

2.1 Use Minimal Base Images

When creating a secure image, selecting an appropriate base image is the initial step. Select a small, reputable image and make sure it’s constructed well.

Over 8.3 million repositories are available on Docker Hub. Among these are Official Images, a collection of open-source and drop-in solution repositories published by Docker. Docker Hub also hosts images from Verified Publishers.

Organizations that work with Docker produce and maintain these high-quality images, with Docker ensuring the legitimacy of their repository content. Keep an eye out for the Verified Publisher and Official Image badges when you choose your base image.

Pick a simple base image that fits your requirements when creating your image using a Dockerfile. A smaller base image doesn’t just make your image smaller and faster to download; it also reduces the number of vulnerabilities introduced by dependencies and makes your image more portable.

As an additional precaution, consider maintaining two separate base images: one for development and unit testing, and another for production and beyond. Build tools such as compilers, build systems, and debuggers are rarely needed once an image moves beyond development, and a minimal image with few dependencies reduces the attack surface.

2.2 Use Fixed Tags for Immutability

Versions of Docker images are often managed using tags. For example, the most recent version of a Docker image may be identified by the “latest” tag. But since tags are mutable, different images can carry the same “latest” tag at different times, which can make automated builds behave inconsistently and confusingly.

To make sure tags can’t be changed or altered by later image edits, you can choose from three primary approaches:

  • Prefer the most specific tag available. If an image has many tags, the build process should choose the one that carries the crucial information, such as the version and the operating system.
  • A local copy of the images should be kept, maybe in a private repository, and the tags should match those in the local copy.
  • Using a private key for cryptographic image signing is now possible with Docker’s Content Trust mechanism. This ensures that both the image and its tags remain unaltered.
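As a rough sketch, Docker Content Trust can be enabled per shell session, and an image can also be pinned by digest so that later tag edits cannot change what you pull (the repository name and digest below are placeholders):

# Require signed images for push and pull in this shell session
export DOCKER_CONTENT_TRUST=1
docker pull myorg/myapp:1.4.2

# Alternatively, pin the exact image content by digest instead of a mutable tag
docker pull myorg/myapp@sha256:<digest-of-the-release-you-verified>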

2.3 Use of Non-Root User Accounts

Recent research has shown that the majority of images, 58% to be exact, are using the root user ID (UID 0) to run the container entry point, which goes against Dockerfile recommended practices. Be sure to include the USER command to alter the default effective UID to a non-root user, because very few use cases require the container to run as root.

In addition, Openshift necessitates extra SecurityContextConstraints, and your execution environment may automatically prohibit containers operating as root.

To run without root privileges, you might need to add a few lines to your Dockerfile and verify the following:

  • Verify that the user listed in the USER instruction is indeed present within the container.
  • Make sure that the process has the necessary permissions to read and write to the specified locations in the file system.
# Base image
FROM alpine:3.12

# Create a user 'app', create the data directory, and give the user ownership of it
RUN adduser -D app && mkdir -p /myapp-data && chown -R app /myapp-data

# ... copy application files

# Switch to the 'app' user
USER app

# Set the default command to run the application
ENTRYPOINT ["/myapp"]

It is possible to encounter containers that begin as root and then switch to a regular user using the gosu or su-exec commands.

Another reason containers might use sudo is to execute certain commands as root.

Although these two options are preferable to operating as root, they might not be compatible with restricted settings like Openshift.

3. Best Practices for Local Docker

Let’s discuss Local Docker best practices in detail.

3.1 Cache Dependencies in Named Volumes

Install code dependencies when the container starts up, rather than baking them into the image. Using Docker’s named volumes to store a cache significantly speeds things up compared to installing every gem, pip, and yarn library from scratch each time we restart the services (hello NOKOGIRI). With this approach, the compose configuration might evolve into:

services:
  rails_app:
    image: custom_app_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
      - gems_data:/usr/local/bundle
      - yarn_data:/app/node_modules

  node_app:
    image: custom_app_node
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
      - yarn_data:/app/node_modules
      
volumes:
  gems_data:
  yarn_data:

To significantly reduce startup time, put the built dependencies in named volumes. The exact locations to mount the volumes will differ for each stack, but the general idea remains the same.

3.2 Don’t Put Code or App-Level Dependencies Into the Image

When you start a docker-compose run, the application code will be mounted into the container and synchronized between the container and the local system. The main Dockerfile, where the app runs, should only contain the software that is needed to execute the app.

You should only include system-level dependencies in your Dockerfile, such as ImageMagick, and not application-level dependencies such as Rubygems and NPM packages. When application-level dependencies are baked into the image, rebuilding it every time a new dependency is added becomes tedious and error-prone. Instead, install such requirements in a startup routine such as an entrypoint script.

3.3 Start Entrypoint Scripts with set -e and End with exec “$@”

Using entrypoint scripts to install dependencies and handle additional setups is crucial to the configuration we’ve shown here. At the beginning and end of each of these scripts, you must incorporate the following elements:

  • Right after #!/bin/bash (or something similar) at the top of the script, insert set -e so the script terminates automatically if any line returns an error.
  • Put exec “$@” at the end of the file; without it, the command directive from your compose file (or the CMD from the Dockerfile) will never be executed.
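A minimal entrypoint script following these two rules might look like this (the dependency-install commands in the middle are placeholders for whatever your stack needs):

#!/bin/bash
# Abort immediately if any command fails
set -e

# Install or refresh app-level dependencies into the cached volumes (illustrative commands)
bundle check || bundle install
yarn install --check-files

# Hand control over to the command passed by docker-compose (or the image CMD)
exec "$@"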

4. Best Practices for Working with Dockerfiles

Here are detailed best practices for working with Dockerfiles.

4.1 Use Multi-Stage Dockerfiles

Now imagine that you have some project contents (such as development and testing tools and libraries) that are important for the build process but aren’t necessary for the application to execute in the final image.

Again, the image size and attack surface will rise if you include these artifacts in the final product even though they are unnecessary for the program to execute.

The question then becomes how to separate the build phase from the runtime phase. Specifically, how can the build dependencies be kept out of the final image while remaining available during the image construction process? Multi-stage builds are a good option here: they let you use several intermediate images during the build, with only the last stage becoming the final artifact:

Example:

# Stage 1: Build the React app
FROM node:latest as react_builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Create the production image
FROM nginx:stable
COPY --from=react_builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

4.2 Use the Least Privileged User

Now, which OS user will be utilized to launch the program within this container when we build it and run it?

Docker uses the root user by default if no user is specified in the Dockerfile, which can pose security risks: a container started with root rights may effectively gain root access to the Docker host, and running containers as root is usually unnecessary.

If the application inside the container is vulnerable to attack, an attacker who exploits it while it runs with root capabilities can more easily take control of the host and its processes, not just the container.

The easiest way to avoid this is to execute the program within the container as the specified user, as well as to establish a special group in the Docker image to run the application.

Use the USER directive with that username so the program is launched as the unprivileged user (see the sketch after the tip below).

Tips: You can utilize the generic user that comes packaged with some images and avoid creating a new one. For instance, the official Node.js image already includes a generic user named node that you can run the application as.
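As a minimal sketch (assuming the official Node.js image and an application entry file named server.js), running as the built-in node user can look like this:

FROM node:20-alpine
WORKDIR /app

# Copy the application and give ownership to the unprivileged 'node' user
COPY --chown=node:node . .

# Switch away from root before the container starts
USER node

CMD ["node", "server.js"]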

4.3 Organize Your Docker Commands

  • Combine commands into a single RUN instruction whenever possible; for instance, you can run many instructions using the && operator.
  • To minimize the amount of file system modifications, arrange the instructions in a logical sequence. For instance, group operations that modify the same files or directories together.
  • If you find yourself needing a large number of commands, reevaluate your current approach to see whether it can be simplified.
  • Reducing the number of COPY commands, as described in the Apache web server example in the section “Apache Web Server with Unnecessary COPY Commands,” is one way to achieve this.

5. Best Practices for Securing the Host OS

Below are a few best practices for securing the host OS with Docker.

5.1 OS Vulnerabilities and Updates

It is critical to establish consistent procedures and tools for validating the versions of packages and components within the base OS after selecting an operating system. Take into consideration that a container-specific operating system may still have components that are vulnerable and need patching. Regularly scan and check for component updates using tools offered by the operating system vendor or other reputable organizations.

To be safe, always upgrade components when the vendor recommends it, even if the OS package does not contain any known security flaws. You can also choose to reinstall an updated operating system if that is more convenient. Just as containers should be immutable, the host running containerized apps should also be kept immutable, and data should not persist uniquely within the host operating system. Following this practice prevents drift and drastically lowers the attack surface. Finally, container runtime engines like Docker frequently ship updates with new features and bug fixes, and applying the latest patches helps reduce vulnerabilities.

5.2 Audit Considerations for Docker Runtime Environments

Let’s examine the following:

  • Container daemon activities
  • These files and directories:
    • /var/lib/docker
    • /etc/docker
    • docker.service
    • docker.socket
    • /etc/default/docker
    • /etc/docker/daemon.json
    • /usr/bin/docker-containerd
    • /usr/bin/docker-runc

5.3 Host File System

Ensure that containers operate with the minimum required file system permissions. Containers should not be able to mount sensitive directories of the host’s file system, particularly folders containing OS configuration data. This matters because the Docker service runs as root, so an attacker who compromises it could otherwise gain control of the host system.

6. Best Practices for Securing Docker Container Runtime

Let’s follow these practices for the security of your Docker container runtime.

6.1 Do Not Start Containers in Privileged Mode

Unless necessary, you should avoid using privileged mode (--privileged) due to the security risks it poses. Running in privileged mode gives a container access to all Linux kernel capabilities and removes the limitations normally enforced by the device control group (cgroup) controller. Because of this, such containers can access many features of the host system.

Using privileged mode is rarely necessary for containerized programs. Applications that require full host access or the ability to control other Docker containers are the ones that use privileged mode.

6.2 Vulnerabilities in Running Containers

You may add items to your container using the COPY and ADD commands in your Dockerfile. The key distinction is ADD’s suite of added functions, which includes the ability to automatically extract compressed files and download files from URLs, among other things.

There may be security holes due to these supplementary features of the ADD command. A Docker container might be infiltrated with malware, for instance, if you use ADD to download a file from an insecure URL. Thus, using COPY in your Dockerfiles is a safer option.

6.3 Use Read-Only Filesystem Mode

Run containers with their root filesystem in read-only mode so that writes are only possible to explicitly designated folders, which you can then monitor. Read-only filesystems are one way to make containers more secure. Additionally, since containers should be immutable, avoid writing data inside them; instead, designate a specific volume for writes. An example follows.
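For instance (the image name and paths are placeholders), a container can be started with a read-only root filesystem, a tmpfs for scratch space, and a named volume as the only writable location:

# Read-only root filesystem, writable tmpfs for /tmp, named volume for app data
docker run -d \
  --read-only \
  --tmpfs /tmp \
  -v app_data:/var/lib/myapp \
  myapp:1.4.2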

7. Conclusion

With its large user base and many useful features, Docker is a good fit for the cloud-native ecosystem and will likely remain a dominant player in the industry. Docker also offers significant benefits for programmers, and many companies aspire to adopt DevOps principles; many developers and organizations continue to rely on Docker for developing and releasing software. For this reason, familiarity with the Dockerfile creation process is essential. Hopefully, this post has given you enough knowledge to create a Dockerfile that follows best practices.

Kubernetes Deployment Strategies - A Detailed Guide
Key Takeaways

  1. Choosing the right Kubernetes deployment strategy is important for delivering resilient apps and infrastructure, shortening time to market, deploying without downtime, releasing features and apps faster, and operating them with greater flexibility.
  2. All the K8s deployment strategies use one or more of these use cases:
    • Create: Roll out new K8s pods and ReplicaSets.
    • Update: Declare new desired state and roll out new pods and ReplicaSets in a controlled manner.
    • Rollback: Revert the K8s deployment to its previous state.
    • Scale: Increase the number of pods and ReplicaSets in the K8s deployment without changing them.
  3. One should consider the following factors while selecting a K8s deployment strategy:
    • Deployment Environment
    • How much downtime you can spare
    • Stability of the new version of your app
    • Resources availability and their cost
    • Project goals
  4. The Rolling (or Ramped) deployment method is Kubernetes’ default rollout method. It scales down old pods only after new pods become ready, and you can pause or cancel the deployment without taking the whole cluster offline.
  5. Recreate Deployment, Blue/Green Deployment, Canary Deployment, Shadow Deployment, and A/B Deployment are other strategies one can use as per requirements.

Kubernetes is a modern-age platform that enables business firms to deploy and manage applications. This container orchestration technology enables the developers to streamline infrastructure for micro-service-based applications that eventually help in managing the workload. Kubernetes empowers different types of deployment resources like updating, constructing, & versioning of CD/CI pipelines. Here, it becomes essential for the Kubernetes deployment team to use innovative approaches for delivering the service because of frequent updates in Kubernetes.

Software development companies therefore have to choose the right deployment strategy, as it is important for deploying production-ready containerized applications into Kubernetes infrastructure. Several options are available, including canary releases, rolling, and blue/green deployments. Kubernetes helps deploy and autoscale the latest apps by rolling new code changes out to production environments. To learn more about Kubernetes deployment and its strategies, let’s go through this blog.

1. What is Kubernetes Deployment?

A Deployment describes an application’s life cycle: which images to use, the number of pods required, and how to update them. In other words, a Deployment in Kubernetes is a resource object used to specify the desired state of the application. Deployments are declarative, which means developers describe the target state rather than the steps to reach it; a deployment controller then works to reach that target efficiently.

2. Key Kubernetes Deployment Strategies

Kubernetes deployments are declarative: development teams configure them in a YAML file that specifies the deployment strategy, the life of the application, and how it will be updated over time. When deploying applications to a K8s cluster, the selected deployment strategy determines how applications are updated from an older version to a newer one. Some Kubernetes deployment strategies involve downtime, while others introduce testing concepts and enable user analysis.

2.1 Rolling Deployment Strategy

Rolling Deployment Strategy
  • Readiness probes

Readiness probes let Kubernetes detect when a new pod of the application is ready to receive traffic; if the probe fails, no traffic is sent to the pod. This approach is mostly used when an application needs specific initialization steps before it goes live, or when it could be overloaded by traffic, since it safeguards the pod from receiving traffic too early.

Once the availability of the new version of an application is detected by the readiness probe, the older version gets removed. If there are any challenges then the rollout can be stopped and rollback of the previous version is deployed to avoid downtime of the application across the Kubernetes cluster. This is because each pod in the application gets replaced one by one. Besides this, even the deployment can take some time when the clusters are larger than usual. When a new deployment gets triggered before the previous one is finished, the previous deployment gets ignored and the new version gets updated as per the new deployment. 

When there is something specific to the pod and it gets changed, the rolling deployment gets triggered. The change here can be anything from the environment to the image to the label of the pod. 

  • MaxSurge

It specifies the maximum number of extra pods that can be created above the desired replica count during the rollout.

  • MaxUnavailable

It defines the maximum number of pods that are allowed to be unavailable while the rollout is in progress.

In this example:

  • replicas: 3 indicates that there are initially three replicas of your application running.
  • rollingUpdate is the strategy type specified for the deployment.
  • maxUnavailable: 1 ensures that during the rolling update, at most one replica is unavailable at a time.
  • maxSurge: 1 allows one additional replica to be created before the old one is terminated, ensuring the desired number of replicas is maintained
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
        - name: e-commerce-container
          image: sysgenpro/e-commerce:latest
          ports:
            - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

2.2 Recreate Deployment

Recreate Deployment

Here is a visual representation of Recreate Deployment. It is a two-step process: all old pods are deleted, and once that is done, new pods are created. It may lead to downtime, as users have to wait until old pods are deleted and new ones are created. However, Kubernetes still supports this strategy for performing deployments.

The Recreate deployment strategy eliminates all existing pods and lets the development team replace them with the new version. Developers use it when the new and old versions of the application cannot run at the same time. In this case, the amount of downtime depends on how long the application takes to shut down and start back up. Once the pods are completely replaced, the application state is entirely renewed.

In this example, the strategy section specifies the type as Recreate, indicating that the old pods will be terminated before new ones are created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
        - name: e-commerce-container
image: sysgenpro/e-commerce:latest
          ports:
            - containerPort: 80
  strategy:
    type: Recreate

2.3 Blue-Green Deployment

In the Blue-Green Kubernetes deployment strategy, you can release new versions of an app to decrease the risk of downtime. It has two identical environments, one serves as the active production environment, that is, blue, and the other serves as a new release environment, that is, green.

Blue-Green Deployment

A Blue-Green deployment is one of the most popular Kubernetes deployment strategies. It lets developers run the new application version (the green deployment) alongside the old one (the blue deployment). When they want to direct traffic from the old version to the new one, they use a load balancer, realized here through the Service’s selector. Blue/Green Kubernetes deployments are costly, as they require double the resources of a normal deployment process.

Define Blue Deployment (blue-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue-e-commerce
  template:
    metadata:
      labels:
        app: blue-e-commerce
    spec:
      containers:
        - name: blue-e-commerce-container
          image: sysgenpro/blue-e-commerce:latest
          ports:
            - containerPort: 80

Define Green Deployment (green-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green-e-commerce
  template:
    metadata:
      labels:
        app: green-e-commerce
    spec:
      containers:
        - name: green-app-container
          image: sysgenpro/green-e-commerce:latest
          ports:
            - containerPort: 80

Define a Service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: blue-e-commerce  # or green-e-commerce, depending on which environment you want to expose
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Define an Ingress (ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: e-commerce-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: e-commerce-service
                port:
                  number: 80

Now the blue environment serves traffic. When you want to switch to the green environment, update the service selector to match the green deployment’s labels. Once this update is made, Kubernetes begins routing traffic to the green environment.
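One simple way to perform that switch, assuming the service and label names from the manifests above, is to patch the Service selector with kubectl:

kubectl patch service e-commerce-service \
  -p '{"spec":{"selector":{"app":"green-e-commerce"}}}'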

2.4 Canary Deployment

In this strategy, you can route a small group of users to the latest version of an app, running in a smaller set of pods. It tests functions on a small group of users and avoids impacting the whole user base. Here’s the visual representation of the canary deployment strategy.

Canary Deployment

A Canary deployment is a strategy Kubernetes app developers can use when they are not fully confident about the functionality of a new version. The new application version is deployed alongside the old one: the previous version keeps serving the majority of users, while the newer version serves a small group of test users. When the canary proves successful, the new deployment is rolled out to the remaining users.

For instance, in the Kubernetes cluster with 100 running pods, 95 could be for v1.0.0 while 5 could be for v2.0.0 of the application. This means that around 95% of the users will be directed to the app’s old version while 5% of them will be directed to the new one. 

Version 1.0 Deployment (v1-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce-v1
  template:
    metadata:
      labels:
        app: e-commerce-v1
    spec:
      containers:
        - name: e-commerce-v1-container
          image: sysgenpro/e-commerce-v1:latest
          ports:
            - containerPort: 80

Version 2.0 Deployment (v2-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v2-deployment
spec:
  replicas: 1  # A smaller number for canary release
  selector:
    matchLabels:
      app: e-commerce-v2
  template:
    metadata:
      labels:
        app: e-commerce-v2
    spec:
      containers:
        - name: e-commerce-v2-container
          image: sysgenpro/e-commerce-v2:latest
          ports:
            - containerPort: 80

Service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: e-commerce-v1  # Initially pointing to the version 1.0 deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Gradually Shift Traffic to Version 2.0 (canary-rollout.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce-v1  # Initially pointing to the version 1.0 deployment
  template:
    metadata:
      labels:
        app: e-commerce-v1
    spec:
      containers:
        - name: e-commerce-v1-container
image: sysgenpro/e-commerce-v1:latest
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v2-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: e-commerce-v2  # Gradually shifting to the version 2.0 deployment
  template:
    metadata:
      labels:
        app: e-commerce-v2
    spec:
      containers:
        - name: e-commerce-v2-container
          image: sysgenpro/e-commerce-v2:latest
          ports:
            - containerPort: 80

This example gradually shifts traffic from version 1.0 to version 2.0 by updating the number of replicas in the two Deployments. Adjust the parameters based on your needs, and monitor the behavior of your application during the canary release.
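If you prefer to shift replicas imperatively instead of re-applying manifests, the same gradual shift can be sketched with kubectl scale, assuming the deployment names used above:

# Shift more of the traffic share to v2 of the app
kubectl scale deployment e-commerce-v1-deployment --replicas=2
kubectl scale deployment e-commerce-v2-deployment --replicas=2

# Once v2 looks healthy, complete the rollout
kubectl scale deployment e-commerce-v1-deployment --replicas=0
kubectl scale deployment e-commerce-v2-deployment --replicas=4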

2.5 Shadow Deployment

Shadow deployment is a strategy where a new version of an app is deployed alongside the current production version, primarily for monitoring and testing purposes.

Shadow deployment is a variant of canary deployment that lets you test the latest release of the workload. With this strategy, production traffic is mirrored to the new version alongside the current one, without users even noticing it.

When the performance and stability of the new version meet the defined requirements, operators trigger a full rollout.

One of the primary benefits of shadow deployment is that it can help you test the new version’s non-functional aspects like stability, performance, and much more.

On the other hand, it has a downside as well. This type of deployment strategy is complex to manage and needs two times more resources to run than a standard deployment strategy.

2.6 A/B Deployment

Just like Canary deployment, the A/B deployment strategy helps you to target a desired subsection of users based on some target parameters like HTTP headers or cookies.

It can distribute traffic amongst different versions. The technique is widely used to test the conversion rate of a given feature and then roll out the version that converts best.

In this strategy, data is usually collected on user behavior and used to make better decisions. Users are not informed that testing is being done or that a new version will be made available soon.

This deployment can be automated using tools like Flagger, Istio, etc.

3. Resource Utilization Strategies

Here are a few resource utilization strategies to follow:

3.1 Resource Limits and Requests

Each container in a Kubernetes pod can define resource requests and limits for both memory and CPU. These settings are crucial for resource allocation and isolation.

Resource Requests:

  • The amount of resources that Kubernetes guarantees to the container.
  • If the container tries to use more than it requested, it may be throttled or evicted when the node is under resource pressure.

Resource Limits:

  • A limit sets an upper bound on the amount of resources a container can use.
  • If the limit is exceeded, the container may be throttled (CPU) or terminated (memory).

So, it is necessary to set these values appropriately to ensure fair resource allocation among various containers on the same node.

Ex. In the following example, the pod specifies resource requests of 64MiB memory and 250 milliCPU (0.25 CPU cores). It also sets limits to 128MiB memory and 500 milliCPU. These settings ensure that the container gets at least the requested resources and cannot exceed the specified limits.

apiVersion: v1
kind: Pod
metadata:
  name: e-commerce
spec:
  containers:
  - name: e-commerce-container
    image: e-commerce:v1
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

3.2 Horizontal Pod Autoscaling (HPA)

Horizontal Pod Autoscaling automatically adjusts the number of replicas of a running workload based on particular metrics.

Target Metrics:

  • HPA can scale depending on different metrics like memory usage, CPU utilization, and custom metrics.
  • The target average utilization setting (averageUtilization in the autoscaling/v2 API) determines the required average utilization for CPU or memory.

Scaling Policies:

  • Define the minimum and maximum number of pod replicas for the deployment.
  • Scaling decisions are made based on whether the metrics cross the defined thresholds.

HPA is useful for handling varying loads and maintaining the desired resource utilization by managing the number of pod instances in real time.

Ex. This HPA example targets a Deployment named e-commerce-deployment and scales based on CPU utilization. It is configured to maintain a target average CPU utilization of 80%, scaling between 2 and 10 replicas.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: e-commerce-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

3.3 Vertical Pod Autoscaling

While HPA adjusts the number of replicas, VPA will focus on dynamically adjusting the resource requests of each pod.

Observation and Adaptation:

  • VPA observes the real resource usage of pods and adjusts resource requests based on that.
  • It optimizes both memory and CPU requests based on historical information.

Update Policies:

  • The updateMode field determines how aggressively VPA applies updated resource requests.
  • Modes such as Auto, Off, and Recreate control whether recommendations are applied automatically, only recorded, or applied by recreating pods.
  • This helps fine-tune resource allocation to match the actual runtime behavior of the application.

Ex. This VPA example targets a Deployment named e-commerce-deployment and is configured to automatically adjust the resource requests of the containers within the deployment based on observed usage.

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: e-commerce-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: "Deployment"
    name: "e-commerce-deployment"
  updatePolicy:
    updateMode: "Auto"

3.4 Cluster Autoscaler

The cluster autoscaler is responsible for dynamically adjusting the node pool size in response to the resource requirements of your workloads.

Node Scaling:

  • When a node lacks resources and cannot accommodate new pods, the Cluster Autoscaler adds more nodes to the cluster.
  • Conversely, when nodes are not fully utilized, the Cluster Autoscaler scales down the cluster by removing unnecessary nodes.

Configuration:

  • Configuration parameters such as minimum and maximum node counts vary depending on the cloud provider or underlying infrastructure.

The Cluster Autoscaler plays a crucial role in ensuring an optimal balance between resource availability and cost-effectiveness within a Kubernetes cluster.

Ex. This example includes a simple Deployment and Service. Cluster Autoscaler would dynamically adjust the number of nodes in the cluster based on the resource requirements of the pods managed by the Deployment.

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: e-commerce
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
      - name: e-commerce-container
        image: e-commerce:v1
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

4. How to Deploy Kubernetes

Most Kubernetes deployments and objects are specified in YAML (or JSON) manifest files, which are then applied with ‘kubectl apply’.

For example, for an Nginx deployment, the YAML might be called ‘web-deployment’ and describe four replica copies. This will look like the code given below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.0
        ports:
        - containerPort: 80

In the above example, the metadata names the Deployment ‘web-deployment’, which is created with four pods that are replicas of each other (replicas: 4), and the selector defines how the Deployment finds its pods, using the label (app: nginx). The container (nginx) runs its image at version 1.17.0, and the Deployment opens port 80 for the pod’s use.

In addition to this, the environment variables for the containers are declared using the ‘envFrom’ or ‘env’ field in the configuration file. After the deployment is specified, it is created from the YAML file with: kubectl apply -f https://[location/web-deployment.yaml]

5. Update Kubernetes Deployment

When it comes to Kubernetes deployment, the developers can use the set command to make changes to the image, configuration fields, or resources of an object. 

For instance, to update a deployment from nginx version 1.22.0 to 1.22.1, the following command can be considered.

$ kubectl set image deployment mywebsite nginx=nginx:1.22.1

6. Final Words

In this blog, we saw that there are multiple ways a developer can deploy an application. When publishing the application to development or staging environments, a recreate or ramped deployment is a good choice, while for production a blue/green or ramped deployment is usually ideal. If you aren’t sure about the stability of a new version, review the different approaches properly before choosing a Kubernetes deployment strategy. Each deployment strategy comes with its pros and cons; which one to choose depends on the type of project and the resources available.

FAQs:

1. What is the Best Deployment Strategy in Kubernetes?

There are mainly 8 different Kubernetes deployment strategies:

Rolling Deployment, Ramped Slow Rollout, Recreate Deployment, Best-Effort Controlled Rollout, Blue/Green Deployment, Canary Deployment, A/B Testing, and Shadow Deployment.

You can choose the one that’s most suitable to your business requirements.

2. What Tool to Deploy k8s?

Here’s the list of tools to deploy by Kubernetes professionals:

  • Kubectl
  • Kubens
  • Helm
  • Kubectx
  • Grafana
  • Prometheus
  • Istio
  • Vault
  • Kubeflow
  • Kustomize, and many more

3. What is the Difference Between Pod and Deployment in Kubernetes?

A Kubernetes pod is the smallest deployable unit in Kubernetes. It is a group of one or more containers that share the same storage and network resources.

A Kubernetes Deployment, on the other hand, manages the life cycle of an app, including its pods; it is a way to declare the desired state of your application.

4. What is the Life Cycle of Kubernetes Deployment?

The major steps to follow in a Kubernetes deployment life cycle are:

  • Containerization
  • Container Registry
  • YAML or JSON writing
  • Kubernetes deployment
  • Rollbacks and Rolling Updates
  • Scaling of the app
  • Logging and Monitoring
  • CI/CD

Kubernetes Best Practices to Follow

Key Takeaways

  • When working with Kubernetes and following its best practices, developers often struggle to decide which best practice helps in which circumstance. To clear up this confusion, this blog walks through some of the top Kubernetes practices; here is what a Kubernetes developer will take away from it:
    1. Developers will learn that security isn’t an afterthought in the Kubernetes app development process; DevSecOps emphasizes integrating security at every phase.
    2. These Kubernetes practices combine authorization controls that help developers create modern applications.
    3. Even though there are many security professionals in the app development field, developers, automation engineers, and DevSecOps practitioners all need to know how to secure the software they ship.
    4. These best practices can help address emerging software supply chain security issues.

Kubernetes is one of the most widely used and popular container orchestration systems available in the market. It helps software development companies to create, maintain, and deploy an application with the latest features as this platform is the de-facto standard for the modern cloud engineer.

This is how Kubernetes master, Google staff developer advocate, and co-author of Kubernetes Up & Running (O’Reilly) Kelsey Hightower acknowledges it:

“Kubernetes does the things that the very best system administrator would do: automation, failover, centralized logging, monitoring. It takes what we’ve learned in the DevOps community and makes it the default, out of the box.” - Kelsey Hightower

In a cloud-native environment, many of the more common sysadmin duties, such as server upgrades, patch installations, network configuration, and backups, are less important. You can let your staff focus on what they do best by automating these tasks with Kubernetes. The Kubernetes core already has some of these functionalities, such as auto scaling and load balancing, while other functions are added via extensions, add-ons, and third-party applications that utilize the Kubernetes API. There is a huge and constantly expanding Kubernetes ecosystem.

Though Kubernetes is a complex system to work with, there are some of its practices that can be followed to have a solid start with the app development process. These recommendations cover issues for app governance, development, and cluster configuration. 

1. Kubernetes Best Practices

Here are some of the Kubernetes best practices developers can follow:

1.1 Kubernetes Configuration Tips

Here are the tips to configure Kubernetes:

  • The very first thing to do while defining Kubernetes configurations is to specify the latest version of the stable API.
  • Then the developer must ensure that configuration files are checked into version control before being pushed to the Kubernetes cluster. This helps the development team roll back a configuration change quickly and also aids in restoring and re-creating a cluster.
  • Another tip to configure Kubernetes is that objects must be grouped into a single file whenever it is possible. This helps in managing files easily.
  • The developer should write application configuration files using YAML rather than JSON. The formats can be used interchangeably in most situations, but YAML is more user-friendly.
  • Note that many kubectl commands can be called on a directory, so configuration files can be organized in directories and applied together.
  • Put object descriptions in annotations to allow better introspection.
  • Don’t specify default values unnecessarily; simple, minimal configurations make errors less likely.

1.2 Use the Latest K8s Version

Another best practice of Kubernetes is to use the latest version. Developers often hesitate here: they want the new features of the latest Kubernetes version but worry about unfamiliarity, limited support, or incompatibility with the current application setup.

The most important thing is to keep Kubernetes updated to the latest stable version, which brings performance and security fixes. If any issues come up while using the latest version, developers can look for community-based support.

1.3 Use Namespaces

Using namespaces in Kubernetes is also a practice that every Kubernetes app development company should follow. Developers should use namespaces to organize the application’s objects and create logical partitions within the Kubernetes cluster for better security. Kubernetes ships with three initial namespaces: kube-public, kube-system, and default. RBAC can be used to control access to specific namespaces whenever there is a need to limit a group’s access and reduce the blast radius.

Besides this, LimitRange objects can be configured per namespace to specify the standard container size deployed in that namespace, and ResourceQuotas can be used to limit total resource consumption (see the sketch after the YAML example below).

YAML Example:
# this yaml is for creating the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-demo-namespace
  labels:
    name: my-demo-namespace
---
# this yaml is for creating a pod in the namespace created above
apiVersion: v1
kind: Pod
metadata:
  name: my-demo-app
  namespace: my-demo-namespace
  labels:
    image: my-demo-app
spec:
  containers:
    - name: my-demo-app
      image: nginx
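As a rough sketch of the LimitRange and ResourceQuota objects mentioned above (the values chosen here are illustrative), both can be scoped to the same namespace:

# Cap the total resources the namespace may consume (illustrative values)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-demo-quota
  namespace: my-demo-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
---
# Give containers in the namespace default requests and limits (illustrative values)
apiVersion: v1
kind: LimitRange
metadata:
  name: my-demo-limits
  namespace: my-demo-namespace
spec:
  limits:
    - type: Container
      defaultRequest:
        cpu: 250m
        memory: 128Mi
      default:
        cpu: 500m
        memory: 256Mi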

1.4 Avoid Using HostPort and HostNetwork

Avoiding the use of hostPort and hostNetwork is another Kubernetes best practice; here is what can be done. First of all, create a Service before the Deployments or ReplicaSets and before any workloads that require access to it. Whenever Kubernetes starts a container, it provides environment variables that point to the Services that were running when the container started. For instance, if a Service called “foo” is present, then all the containers will get the below-specified variables in their environment:

FOO_SERVICE_HOST=<the host the Service runs on>
FOO_SERVICE_PORT=<the port the Service runs on>
  • This does imply an ordering requirement: any Service that a Pod wants to access must be created before the Pod itself, or the environment variables will not be populated. DNS does not have this restriction.
  • Alternatively, developers can use a DNS server as an optional (though strongly recommended) cluster add-on. The DNS server watches the Kubernetes API for new Services and creates a set of DNS records for each of them.
  • Avoid using hostNetwork, and avoid hostPort as well: binding a Pod to a hostPort limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique.
  • Consider using headless Services for service discovery when kube-proxy load balancing is not needed (a sketch follows this list).
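A headless Service is simply one with clusterIP set to None; as a rough sketch using the demo labels from the earlier examples:

apiVersion: v1
kind: Service
metadata:
  name: my-demo-headless
  namespace: my-demo-namespace
spec:
  clusterIP: None       # headless: no virtual IP and no kube-proxy load balancing
  selector:
    image: my-demo-app  # matches the pod label used in the earlier examples
  ports:
    - port: 80
      targetPort: 80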

1.5 Using kubectl

Using kubectl is also a practice that can be considered by Kubernetes developers. Here, the development team can make use of the following things:

  • First, consider using kubectl apply -f <directory>, which applies the configuration from all .yaml, .yml, and .json files in <directory>.
  • Use kubectl create deployment and kubectl expose to quickly create single-container Deployments and Services.
  • Use label selectors for get and delete operations instead of specific object names (see the examples after this list).
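For illustration, the commands below sketch that workflow; the deployment name is a placeholder:

# Apply every manifest in a directory
kubectl apply -f ./manifests/

# Quickly create a single-container Deployment and expose it as a Service
kubectl create deployment my-demo-app --image=nginx
kubectl expose deployment my-demo-app --port=80

# Operate on objects by label selector rather than by name
kubectl get pods -l app=my-demo-app
kubectl delete pods -l app=my-demo-app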

1.6 Use Role-based Access Control (RBAC)

Using RBAC is another best practice that helps develop and run Kubernetes applications securely. The general approach when working with Kubernetes is to assign minimal RBAC rights to service accounts and users: only permissions explicitly required for their operation should be granted. As each cluster is different, some general rules can be applied to all:

  • Whenever possible, avoid granting wildcard permissions to all resources. Because Kubernetes is an extensible system, rights granted to all object types of the current version will also apply to object types added in future versions.
  • Permission must be assigned at the namespace level if possible by using RoleBindings instead of ClusterRoleBindings to offer rights as per the namespace.
  • The development team must avoid adding users to the master group of the system as any user who is a member of this group gets the authority to bypass all the RBAC rights.
  • Unless strictly required, administrators should not use cluster-admin accounts. Giving a low-privileged account impersonation rights instead can help avoid accidental modification of cluster resources.
YAML Example:
# this yaml is for creating role named “pod-reading-role”
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reading-role
rules:
- apiGroups: [""]      # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
# This yaml is for creating role binding that allows user "demo" to read pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-read-pods
  namespace: default
subjects:
- kind: User
  name: demo    #name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reading-role   # this must match the name of the Role
  apiGroup: rbac.authorization.k8s.io

1.7 Follow GitOps Workflow

Another Kubernetes best practice is to follow a GitOps workflow. For a successful Kubernetes deployment, developers must give thought to the workflow of the application. A git-based workflow is an ideal choice, as it enables automation through CI/CD (Continuous Integration / Continuous Delivery) pipelines, which makes the application deployment process more efficient. CI/CD also provides an audit trail for the software deployment process.

1.8 Don’t Use “Naked” Pods

Not using naked pods is another best practice that must be considered. Here, there are a few points to look out for and they are as below:

  • Naked pods should not be used if avoidable, because they will not be rescheduled in the event of a node failure.
  • A Deployment creates a ReplicaSet to make sure the required number of Pods is available, and specifies a strategy for replacing Pods; naked pods lack this safety net.
YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
   name: my-demo-deployment
   namespace: my-demo-namespace
spec:
   selector:
      matchLabels:
         image: my-demo-app
   template:
      metadata:
         name: my-demo-app
         labels:
            image: my-demo-app
      spec:
         containers:
          - name: my-demo-app
            image: nginx

1.9 Configure Least-Privilege Access to Secrets

Configuring least-privilege access to secrets is also a best practice as it helps developers plan the access control mechanism like Kubernetes RBAC (Role-based Access Control). In this case, the developers must follow the below-given guidelines to access Secret objects.

  • Humans: The software development teams must restrict watch, get, and list access to Secrets. Only cluster administrators should have this access.
  • Components: List and watch access must be restricted to only the most privileged, system-level components.

Basically, in Kubernetes, a user who is allowed to create a Pod that uses a Secret can also see the value of that Secret: even if the cluster's default policies do not allow the user to read the Secret object directly, the same user can expose it through a Pod they run. To limit the impact of Secret data exposure, the following recommendations (and the sketch after them) should be considered:

  • Implement audit rules that alert administrators on specific events.
  • Secrets that are used must be short-lived.
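
As an illustration, the following hedged sketch grants a single service account read access to one named Secret only; the Secret name and service account name are placeholders, not values from this article:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: read-app-credentials
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-credentials"]   # placeholder: only this Secret can be read
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reads-its-credentials
  namespace: default
subjects:
- kind: ServiceAccount
  name: my-demo-app      # placeholder service account
  namespace: default
roleRef:
  kind: Role
  name: read-app-credentials
  apiGroup: rbac.authorization.k8s.io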

1.10 Use Readiness and Liveness Probes

Readiness and liveness probes are among the most important health checks in Kubernetes. They let developers verify the health of the application.

A readiness probe ensures that requests are only routed to a Pod when it is ready to serve them; if the Pod is not ready, traffic is directed elsewhere. A liveness probe, on the other hand, tests whether the application is still running as expected according to the health check protocol, so the container can be restarted if it is not.

YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-deployment
  namespace: my-demo-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-demo-app
  template:
    metadata:
      labels:
        app: my-demo-app
    spec:
      containers:
        - name: my-demo-app
          image: nginx:1.14.2
          readinessProbe:
            httpGet:
              path: /ready
              port: 9090
            initialDelaySeconds: 30
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 9090
            initialDelaySeconds: 30
            periodSeconds: 5

1.11 Use a Cloud Service to Host K8s

Hosting a Kubernetes cluster on your own hardware can be complex, whereas cloud providers offer managed platform-as-a-service (PaaS) options such as EKS (Amazon Elastic Kubernetes Service) on Amazon Web Services and AKS (Azure Kubernetes Service) on Azure. With a managed service, the cloud provider handles the cluster infrastructure, including tasks such as adding and removing nodes; a managed cluster can even be created with a single CLI command, as sketched below.
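
As a hedged example, a managed cluster can be created and scaled with eksctl, the community CLI for Amazon EKS; the cluster name, region, node group name, and node counts below are placeholders:

# Create a managed EKS cluster with three worker nodes
eksctl create cluster --name demo-cluster --region us-east-1 --nodes 3

# Later, scale the node group without managing any servers yourself
eksctl scale nodegroup --cluster demo-cluster --name demo-nodegroup --nodes 5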

1.12 Monitor the Cluster Resources

Another Kubernetes best practice is to monitor the cluster's control plane components to keep resource usage under control. The control plane is the core of Kubernetes and keeps the system up and running: it includes the Kubernetes API server, etcd, the controller-manager, and the scheduler, while components such as the kubelet and kube-proxy run on every node and add-ons such as kube-dns handle service discovery. A couple of basic commands for checking these components are shown below.
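
As a quick, hedged illustration, these standard commands help verify the health and resource usage of those components; the kubectl top output assumes the metrics-server add-on is installed:

# Check that control plane and add-on Pods are healthy
kubectl get pods -n kube-system

# Inspect node and Pod resource consumption (requires metrics-server)
kubectl top nodes
kubectl top pods -n kube-system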

1.13 Secure Network Using Kubernetes Firewall

The last item in our list of Kubernetes best practices is securing the network by placing a firewall in front of the cluster and using network policies to restrict internal traffic. A firewall in front of the Kubernetes cluster helps restrict the resource requests that reach the API server, while NetworkPolicies control which Pods may talk to each other inside the cluster; a minimal policy is sketched below.
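
As a hedged sketch, the first policy below denies all ingress traffic in a namespace by default, and the second then admits traffic only from Pods labelled as the application's frontend; the namespace, labels, and port are placeholders:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-demo-namespace
spec:
  podSelector: {}        # applies to every Pod in the namespace
  policyTypes:
  - Ingress              # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: my-demo-namespace
spec:
  podSelector:
    matchLabels:
      app: my-demo-backend        # placeholder label
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: my-demo-frontend   # placeholder label
    ports:
    - protocol: TCP
      port: 8080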

2. Conclusion

As seen in this blog, many different best practices can be applied to design, run, and maintain a Kubernetes cluster, and they help developers bring modern applications into the world. Which practices to put into action, and which will help a particular application succeed, has to be decided by the Kubernetes app developers themselves, which is why the engineers involved need solid Kubernetes expertise.

3. FAQs:

What is the main benefit of Kubernetes?

Some of the main benefits of Kubernetes are efficient use of namespaces, robust security through firewalls and RBAC, and monitoring of control plane components.

How do I improve Kubernetes?

To improve Kubernetes performance, developers should focus on using optimized container images, defining resource limits and requests, and similar tuning measures.

What is a cluster in Kubernetes?

In Kubernetes, a cluster is a set of worker machines, called nodes, that run containerized applications under the management of a control plane.

What is the biggest problem with Kubernetes?

The biggest issues with Kubernetes are its complexity and the security vulnerabilities that misconfiguration can introduce.

The post Kubernetes Best Practices to Follow appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/kubernetes-best-practices/feed/ 0
Microservices Testing Strategies: An Ultimate Guide https://www.sysgenpro.com/blog/microservices-testing-strategies/ https://www.sysgenpro.com/blog/microservices-testing-strategies/#respond Tue, 23 Jan 2024 05:41:36 +0000 https://www.sysgenpro.com/blog/?p=12295 In today's time, software development companies prefer to use the latest approaches for application development, and using microservices architecture is one such initiative. Developers use microservices architecture and divide functional units of the application that can work as individual processes. These singular functions can easily address user traffic and still remain lightweight.

The post Microservices Testing Strategies: An Ultimate Guide appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. For microservices, formulating an effective testing strategy is a challenging task. The key is a combination of testing strategies with the right tools that provide support at each layer of testing.
  2. After a service is integrated, the risk of failure and the cost of correction are high, so a good testing strategy is required.
  3. Tools like Wiremock, Goreplay, Hikaku, VCR, Mountebank, and many others are used for microservices testing purposes.
  4. For an effective approach, there should be a clear consensus on the test strategy. The required amount of testing should be applied at the correct time with suitable tools.
  5. The microservices architecture offers scope for unit testing, integration testing, component testing, contract testing, and end-to-end testing, so the team must use these phases properly as per the requirements.

In today’s time, software development companies prefer to use the latest approaches for application development, and using microservices architecture is one such initiative. Developers use microservices architecture and divide functional units of the application that can work as individual processes. These singular functions can easily address user traffic and still remain lightweight. But they need to be frequently updated to scale up the application. Besides this, the developers also have to carry out microservices testing strategies to make sure that the application is performing the way it is expected to.

Let’s first understand what types of challenges developers are facing while using a microservices architecture.

1. Challenges in Microservices Testing

Microservices and monolithic architectures differ in many ways, and microservices come with some challenges that every developer should know about before testing them.

Challenge Description
Complexity
  • Though single services are quite simple, the microservices system as a whole is complex, which means developers need to be careful when choosing and configuring the databases and services in the system.
  • Even testing and deploying each service can be challenging because of the system's distributed nature.
Data Integrity
  • Microservices use distributed databases, which is problematic for data integrity: business applications require updates over time, and each change makes a database upgrade compulsory for the affected services.
  • When there is no data consistency across these databases, testing becomes more difficult.
Distributed Networks
  • Microservices can be deployed on various servers in different geographical locations, which adds latency and forces the application to cope with network disruptions. When tests rely on the network, they will fail whenever the network misbehaves, and this interrupts the CI/CD pipeline.
Test Area
  • Every microservice usually exposes several API endpoints, which means the testable surface keeps growing and developers have to cover more areas, a time-consuming task.
Multiple frameworks used for development
  • Though developers choose the best-suited frameworks and programming languages for each microservice, when the system is big it becomes difficult to find a single test framework that works for all the components.
Autonomous
  • The app development team can deploy microservices anytime but the only thing they need to take care of is that the API compatibility doesn’t break.
Development
  • Because microservices are independently deployable, extra checks are required to ensure they function well. Service boundaries also need to be set correctly for the microservices to run smoothly.

2. Microservices Testing Strategy: For Individual Testing Phases

Now let us understand the testing pyramid of microservices architecture. This testing pyramid is developed for automated microservices testing. It includes five components. 

Microservices Testing Strategies

The main purpose of using these five stages in microservices testing is: 

Testing Type Key Purpose
Unit Testing
  • To test the various parts (classes, methods, etc.) of the microservice. 
Contract Testing
  • To test API compatibility. 
Integration Testing
  • To test the communication between microservices, third-party services, and databases. 
Component Testing
  • To test the subsystem’s behavior. 
End-to-End Testing
  • To test the entire system. 

2.1 Unit Testing

The very first stage of testing is unit testing. It is mainly used to verify a function's correctness against its specification. It checks a single class or a set of closely coupled classes in the system. A unit test runs either with the actual objects that interact with the unit or with test doubles or mocks.

Basically, in unit tests, even the smallest piece of the software solution is tested to check whether it behaves as expected. These tests run at the class level. A further distinction is whether the unit is tested in isolation or together with its collaborators. The tests in this phase are written by developers using regular coding tools; the only difference is the style of test, as shown below.

Solitary Unit Testing: 

  • Solitary unit tests ensure that the methods of a class are tested.  
  • It mainly focuses on the test result to always be deterministic. 
  • In this type of unit testing, the collaborations and interactions between an application object and its dependencies are checked through the test doubles that stand in for those dependencies.
  • For external dependencies, mocking or stubbing is used to isolate the code under test.
Solitary Unit Testing

Sociable Unit Testing: 

  • These tests are allowed to call other services. 
  • These tests are not deterministic, but they provide good results when they pass. 
Sociable Unit Testing

Basically, as we saw here, unit tests used alone do not guarantee the system's behavior. The reason is that unit testing covers the core of each module but not how the modules behave when they collaborate. Therefore, to make sure the unit tests run reliably, developers use test doubles so that each module can be verified on its own; a hedged example of such a test follows.
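
For illustration only, here is a minimal solitary unit test in Java using JUnit 5 and Mockito; the OrderService and PaymentClient classes are hypothetical names invented for this sketch and are not taken from this article:

import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

@ExtendWith(MockitoExtension.class)
class OrderServiceTest {

    @Mock
    private PaymentClient paymentClient;   // external dependency replaced by a test double

    @InjectMocks
    private OrderService orderService;     // unit under test, receives the mock

    @Test
    void placesOrderWhenPaymentSucceeds() {
        // Stub the collaborator so the test stays deterministic and network-free
        when(paymentClient.charge("order-42", 100.0)).thenReturn(true);

        boolean placed = orderService.placeOrder("order-42", 100.0);

        assertTrue(placed);
    }
}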

2.2 Integration Testing

The second stage of microservices testing is Integration tests. This type of testing is used when the developer needs to check the communication between two or more services. Integration tests are specially designed to check error paths and the basic success of the services over a network boundary. 

In any software solution, there are different components that interact with one another as they may functionally depend on each other. Here, the integration test will be used to verify the communication paths between those components and find out any interface defects. All the test modules are integrated with each other and they are tested as a subsystem to check the communication paths in this testing method.

There can be three types of communications that happen in the microservices architecture: 

  1. Between two different microservices
  2. Between Microservice and third-party application
  3. Between Microservice and Database
Integration Testing

The aim of integration testing is to check the modules and verify that their interaction with external components is successful and safe. While carrying out such tests, it is sometimes difficult to trigger abnormal behavior in the external component, such as a slow response or a timeout. In such cases, developers write special tests to make sure the service responds as expected.

Basically, integration tests aim to provide assurance that the schema assumed by the code matches the data that is actually stored.

2.3 Component Testing

Component tests are popular when it comes to checking the full function of a single microservice. When this type of testing is carried out, if there are any calls made by the code to the external services, they are mocked. 

Basically, a component is a coherent, well-encapsulated, and independently replaceable part of any software solution. But when it comes to a microservice architecture, these component tests become a service. This means that developers perform component tests in microservices architecture to isolate any component’s complex behavior.

Besides this, component tests are more thorough than integration tests because they can exercise all the paths; for instance, they can show how the component responds to malformed network requests. This process can be divided into two parts.

In-process Component Testing

  • Test runners exist in the same process or thread as microservices.
  • Microservices can be started in an offline test mode.
  • This testing works only with single-component microservices.
In-process Component Testing

Out-process Component Testing

  • Appropriate for any size of components.
  • Components here are deployed unaltered in a test environment.
  • All dependencies in microservices are stubbed or mocked out.
Out-of-Process Component Test

2.4 Contract Testing

This type of testing is carried out when two microservices communicate via an interface and need a contract that specifies all the possible transactions and their data structures. Here, even the possible side effects of the inputs and outputs are analyzed to make sure there is no security breach in the future. This type of testing can be run by the consumer, the producer, or both.

Consumer-side 

  • The downstream team writes and executes the tests.
  • The tests run the consumer against a mocked version of the producer service.
  • The consumer microservice is checked to see whether it can consume the producer's API.

Producer-side 

  • Producer-side contract tests run in the upstream service.
  • Clients' API requests are checked against the producer's contract details.

2.5 End-to-End Testing

The last type of testing on our list is end-to-end testing. This approach tests the microservices application as a whole: it checks whether the system meets the client's requirements and helps achieve the business goal. When developers carry out this test, they are not concerned with the internal architecture of the application; they only verify that the system delivers the business goal. The deployed software is treated as a black box while being tested.

End-to-End Testing

Besides this, as end-to-end testing is more about business logic, it also checks the application's proxies, firewall, and load balancer, since these are generally exposed to public traffic through APIs and GUIs. In addition, end-to-end testing helps developers check all the interactions and gaps present in microservice-based applications, which is how testing a microservices application in its entirety becomes possible.

Now, let’s look at various scenarios and how these phases can apply. 

3. Microservices Testing Strategies for Various Scenarios

Here we will go through various microservices testing strategies for different scenarios to understand the process in a better way. 

Scenario 1: Testing Internal Microservices Present in the Application

This is the most commonly used strategy to test microservices. Let’s understand this with an example. 

For instance, there is a social media application that has two services like

  1. Selecting Photos and Videos
  2. Posting Photos and Videos 

Both services are interconnected with each other as there is close interaction between them in order to complete an event. 

Testing Internal Microservices Present in the Application
Testing Scopes  Description
Unit Testing
  • For each individual microservice, there is a scope of unit testing.
  • We can use frameworks like JUnit or NUnit for testing purposes.
  • First, one needs to test the functional logic.
  • Apart from that, internal data changes need to be verified.
  • For Example: If Selecting Photos and Videos Service returns a selectionID then the same needs to be verified within the service.
Integration Testing
  • Both the microservices are internally connected in our case.
  • In order to complete an event, both need to be executed in a perfect manner.
  • So, there is a scope for Integration testing.
Contract Testing
  • It is recommended to use testing tools that enable consumer-driven contract testing. Tools like Pacto, Pact, and Janus are recommended. In this testing, the data passed between services needs to be validated and verified; for that, one can use tools like SoapUI.
End-to-End Testing
  • End to End Testing, commonly referred to as E2E testing, ensures that the dependency between microservices is tested at least in one flow.
  • For example, an event like making a post on the app should trigger both the services i.e. Selecting Photos and Videos and Posting Photos and Videos.

Scenario 2: Testing Internal Microservices and Third-party Services

Let’s look at the scenario where third-party APIs are integrated with Microservices. 

For Example, in a registration service, direct registration through the Gmail option is integrated. Here registration is modelled as a microservice and interacts with gmail API that is exposed for authenticating the user. 

Testing Internal Microservices and Third-party Services
Testing Scopes Descriptions 
Unit Testing
  • The developers can perform unit tests to check the changes that happened internally.
  • Frameworks like xUnit are used to check the functional logic of the application after the change.
  • The TDD approach can also be considered whenever possible.
Contract Testing
  • The expectations from the consumer’s microservices are checked which decouples itself from the external API.
  • Test doubles can be created here using Mountebank or Mockito to define Gmail API.
Integration Testing
  • Integration tests are carried out if the third-party provider offers a sandbox API. This type of testing checks whether the data is passed correctly from one service to another and whether the services are integrated as required.
End-to-End Testing
  • With end-to-end testing, the development team ensures that there are no failures in the workflow of the system.
  • One checks the dependencies between the microservices and ensures that all the functions of the application are working correctly.

Scenario 3: Testing Microservices that are Available in Public Domain

Let’s consider an e-commerce application example where users can check the items’ availability by calling a web API.  

Testing Microservices that are Available in Public Domain
Testing Scopes Descriptions 
Unit Testing
  • Here, the development team can carry out unit testing to check all functions of the application that the services have defined.
  • This testing helps to check that all the functions of the services work perfectly fine as per the user’s requirements.
  • It also ensures that the data persistence is taken care of.
Contract Testing
  • This testing is essential in such cases.
  • It makes sure that the clients are aware of the contracts and have agreed upon them before availing of the facilities provided by the application.
  • Here, the owner’s contracts are validated, and later consumer-driven contracts are tested.
End-to-end Testing
  • Here we can test the workflow using End-to-end Testing. It enables software testing teams to make sure that the developed application offers facilities as per the requirement. End-to-end testing also ensures that the integration of services with external dependencies is secured.

4. Microservices Testing Tools

Here are some of the most popular Microservices testing tools available in the market.

  • WireMock: It is a very popular simulator used by developers for integration tests. Unlike general-purpose mocking tools, WireMock works by running an actual HTTP server that the code under test can connect to as if it were a real web service.  
  • Goreplay: It is an open-source tool for network monitoring. It helps in recording live traffic of the application and this is why it is used by the developers to capture and replay live HTTP traffic.
  • Mountebank: It is a widely used open-source tool that enables software development companies to carry out cross-platform and multi-platform test doubles over the wire. With the help of Mountebank, the developers can simply replace the actual dependencies of the application and test them in the traditional manner.
  • Hikaku: It is a very popular test environment for microservices architecture. It helps the developers to ensure that the implementation of REST-API in the application actually meets its specifications. 
  • VCR: Developers use the VCR tool to record the tests that they carry out on the suite’s HTTP interactions. This recording can be later played for future tests to get accurate, fast, and reliable test results.

5. Conclusion

Microservices testing plays a very important role when it comes to modern software development tactics. It enables developers to offer applications that have greater flexibility, agility, and speed. There are some essential strategies that need to be carried out by the development teams when it comes to testing microservices applications in order to deploy a secure application and some of those microservices testing strategies are discussed in this blog. These automated tests enable the developers to easily cater to customer requirements by offering a top-notch application.

The post Microservices Testing Strategies: An Ultimate Guide appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/microservices-testing-strategies/feed/ 0
AWS Cost Optimization Best Practices https://www.sysgenpro.com/blog/aws-cost-optimization/ https://www.sysgenpro.com/blog/aws-cost-optimization/#respond Wed, 20 Dec 2023 09:39:17 +0000 https://www.sysgenpro.com/blog/?p=12373 In today’s tech world where automation and cloud have taken over the market, the majority of software development companies are using modern technologies and platforms like AWS for offering the best services to their clients and to have robust in-house development.

The post AWS Cost Optimization Best Practices appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. AWS Cloud is a widely used platform and offers more than 200 services. These cloud resources are dynamic in nature and their cost is difficult to manage.
  2. There are various tools available like AWS Billing Console, AWS Trusted Advisor, Amazon CloudWatch, Amazon S3 Analytics, AWS Cost Explorer, etc. that can help in  cost optimization.
  3. AWS also offers flexible purchase options for each workload. So that one can improve resource utilization.
  4. With the help of Instance Scheduler, you can stop paying for resources during non-operating hours.
  5. Modernize your cloud architecture by scaling microservices architectures with serverless products such as AWS Lambda.

In today’s tech world where automation and cloud have taken over the market, the majority of software development companies are using modern technologies and platforms like AWS for offering the best services to their clients and to have robust in-house development.  

In such cases, if one wants to stay ahead of the competition and offer services to deliver efficient business values at lower rates, cost optimization is required. Here, we will understand what cost optimization is, why it is essential in AWS and what are the best practices for AWS cost optimization that organizations need to consider.

1. Why is AWS More Expensive?

The AWS cloud is a widely used platform among software development companies, offering more than 200 services to their clients. These cloud resources are dynamic in nature, which makes their cost unpredictable and difficult to manage. Besides this, here are some of the main reasons AWS becomes an expensive platform for an organization. 

  • When a business organization keeps unused Elastic Block Store (EBS) volumes, load balancers, snapshots, or other resources, it still pays for them, because these resources incur costs whether you use them or not.  
  • Some businesses pay for compute instance services like Amazon EC2 but do not utilize them properly.  
  • Reserved or Spot Instances, which generally offer discounts of 50-90%, are not used when they would be appropriate. 
  • Sometimes the auto-scaling configuration is not implemented properly or is not optimal for the business. For instance, you scale up to meet rising demand, but the scaling overshoots and leaves many redundant resources running, which also costs a lot.  
  • The Savings Plans that come with AWS are not used properly, so the total spend on AWS is not minimized. 

2. AWS Cost Optimization Best Practices

Here are some of the best practices of AWS cost optimization that can be followed by all the organizations opting for AWS.  

2.1 Perform Cost Modeling

One of the top practices for AWS cost optimization is performing cost modeling for your workload. Each component of the workload must be clearly understood, and cost modeling must then be performed to balance the resources and find the correct size for each resource in the workload so that it delivers the required level of performance.  

Besides this, a proper understanding of cost considerations can enable the companies to know more about their organizational business case and make decisions after evaluating the value realization outcomes.  

There are multiple AWS services one can use with custom logs as data sources for efficient operations for other services and workload components. For example:  

  1. AWS Trusted Advisor 
  2. Amazon CloudWatch 

This is how AWS Trusted Advisor works: 

Now, let’s look at how Amazon CloudWatch Works:  

how Amazon CloudWatch Works
Source: Amazon

These are some of the recommended practices one can follow (a CLI sketch appears after the list):  

  1. The total number of metrics associated with CloudWatch alarms can incur cost, so remove unnecessary alarms. 
  2. Delete dashboards that are not necessary; ideally, keep three or fewer dashboards.  
  3. Also check your Contributor Insights reports and remove any rules that are not mandatory.   
  4. Evaluating logging levels and eliminating unnecessary logs can also help reduce ingestion costs. 
  5. Turn off custom metrics when appropriate; this also reduces unnecessary charges. 
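
For illustration, a hedged AWS CLI sketch of the clean-up steps above; the alarm, dashboard, and rule names are placeholders:

# Remove alarms that are no longer needed
aws cloudwatch delete-alarms --alarm-names "unused-dev-alarm"

# Delete dashboards beyond the few you actually use
aws cloudwatch delete-dashboards --dashboard-names "old-team-dashboard"

# Disable a Contributor Insights rule that is not mandatory
aws cloudwatch disable-insight-rules --rule-names "unused-insight-rule"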

2.2 Monitor Billing Console Dashboard

AWS billing dashboard enables organizations to check the status of their month-to-date AWS expenditure, pinpoint the services that cost the highest, and understand the level of cost for the business. Users can get a precise idea about the cost and usage easily with the AWS billing console dashboard. The Dashboard page consists of sections like –  

  • AWS Summary: Here one can find an overview of the AWS costs across all the accounts, services, and AWS regions.  
  • Cost Trend by Top Five Services: This section shows the most recent billing periods.  
  • Highest Cost and Usage Details: Here you can find out the details about top services, AWS region, and accounts that cost the most and are used more. 
  • Account Cost Trend: This section shows the trend of account cost with the most recent closed billing periods. 

In the billing console, one of the most commonly viewed pages is the billing page where the user can view month-to-date costs and a detailed services breakdown list that are most used in specific regions. From this page, the user can also get details about the history of costs and usage including AWS invoices.  

In addition to this, organizations can also access other payment-related information and also configure the billing preferences. So, based on the dashboard statistics, one can easily monitor and take actions for the various services to optimize the cost. 

2.3 Create Schedules to Turn Off Unused Instances

Another AWS cost optimization best practice is to create schedules that turn off instances that are not used on a regular basis. Here are some things to take into consideration:  

  • At the end of every working day or weekend or during vacations, unused AWS instances must be shut down. 
  • The usage metrics of the instances must be evaluated to help you decide when they are frequently used which can eventually enable the creation of an accurate schedule that can be implemented to always stop the instances when not in use.  
  • Optimizing the non-production instances is very essential and when it is done, one should prepare the on and off hours of the system in advance.  
  • Companies need to decide if they are paying for EBS quantities and other relevant elements while not using the instances and find a solution for it. 

Now let's analyze the different scenarios for AWS CloudWatch alarms; a hedged CLI example follows the table.

Scenario Description
Add Stop Actions to AWS CloudWatch Alarms
  • We can create an alarm to stop the EC2 instance when the threshold meets.
  • Example:
    • Suppose you forget to shut down a few development or test instances.
    • You can create an alarm here that triggers when CPU utilization percentage has been lower than 10 percent for 24 hours indicating that instances are no longer in use.
Add Terminate Actions to AWS CloudWatch Alarms 
  • We can create an alarm to terminate the EC2 instance when a certain threshold meets.
  • Example:
    • Suppose any instance has completed its work and you don’t require it again. In this case, the alarm will terminate the instance.
    • If you want to use that instance later, then you should create an alarm to stop the instance instead of terminating it.
Add Reboot Actions to AWS CloudWatch Alarms
  • We can create an alarm that monitors EC2 instances and automatically reboots the instance.
  • In case of instance health check failure, this alarm is recommended.
Add Recover Actions to AWS CloudWatch Alarms
  • We can create an alarm that monitors EC2 instances and automatically recovers the instance if it becomes nonfunctional due to hardware failure or any other cause.
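
For illustration only, here is a hedged AWS CLI sketch of the first scenario: an alarm that stops an instance after 24 hours of CPU utilization below 10 percent. The instance ID and region are placeholders:

aws cloudwatch put-metric-alarm \
  --alarm-name stop-idle-dev-instance \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistic Average \
  --period 3600 \
  --evaluation-periods 24 \
  --threshold 10 \
  --comparison-operator LessThanThreshold \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --alarm-actions arn:aws:automate:us-east-1:ec2:stop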

2.4 Supply Resources Dynamically

When an organization moves to the cloud, it pays for what it uses, so the company should supply resources that match the workload demand at the time it arises. This helps reduce the cost of overprovisioning. The organization can also modify demand, using buffering, throttling, or queueing, to smooth the demand of its processes on AWS and serve it with fewer resources.

This favors just-in-time supply and balances it against the need for high availability, tolerance of resource failures, and provisioning time. Whether demand is fixed or variable, the planning effort for the required automation and metrics stays minimal. In AWS, supplying resources dynamically to reduce cost is considered a best practice; the table and the CLI sketch below show common ways to do it.

Practice Implementation Steps
Schedule Scaling Configuration
  • When the changes in demand are predictable, time-based scaling can help in offering a correct number of resources.
  • If the creation and configuration of resources are not fast to respond to the demands generated, schedule scaling can be used.
  • Workload analysis can be configured using AWS Auto Scaling and even predictive scaling can be used to configure time-based scheduling.
Predictive Scaling Configuration
  • With predictive scaling, one can increase instances of Amazon EC2 in the Autoscaling group at an early stage.
  • Predictive analysis helps applications start faster during traffic spikes.
Configuration of Dynamic Automatic Scaling
  • Auto scaling can help in configuring the scaling as per the active workload in the system
  • Auto-scaling launches the correct resources level after the analysis and then verifies the scale of the workload in the required timeframe.
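
As a hedged illustration of time-based (scheduled) scaling, the group name, schedules, and sizes below are placeholders:

# Scale the Auto Scaling group up on weekday mornings before traffic arrives
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-demo-asg \
  --scheduled-action-name weekday-morning-scale-up \
  --recurrence "0 8 * * 1-5" \
  --min-size 2 --max-size 10 --desired-capacity 4

# Scale it back down in the evening
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name my-demo-asg \
  --scheduled-action-name weekday-evening-scale-down \
  --recurrence "0 20 * * 1-5" \
  --min-size 1 --max-size 2 --desired-capacity 1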

2.5 Optimizing your cost with Rightsizing Recommendations

One of the best practices of cost optimization in AWS is rightsizing recommendations. It is a feature in Cost Explorer that enables companies to identify cost-saving opportunities. This can be carried out by removing or downsizing instances in Amazon EC2 (Elastic Compute Cloud). 

Rightsizing recommendations analyze the Amazon EC2 resources in your AWS account and their usage to find opportunities to lower spending. You can review underutilized Amazon EC2 instances across member accounts in a single view, identify the amount you can save, and then take action. 

2.6 Utilize EC2 Spot Instances

Utilizing Amazon EC2 Spot Instances is one of the AWS cost optimization best practices every business organization should follow. In the AWS cloud, these instances enable companies to take advantage of unused EC2 capacity.  

Spot Instances are generally available at up to a 90% discount in the cloud market in comparison to other On-Demand instances. These types of instances can be used for various stateless, flexible, or fault-tolerant applications like CI/CD, big data, web servers, containerized workloads, high-performance computing (HPC), and more.

How spot instances work Amazon EC2
Source: Amazon

Besides this, as Spot Instances are closely integrated with AWS services like EMR, Auto Scaling, AWS Batch, ECS, Data Pipeline, and CloudFormation, a company has to choose how it wants to launch and maintain the apps running on Spot Instances. The aspects below need to be taken into consideration; a hedged CLI sketch for launching a Spot Instance follows the list.  

  • Massive scale: Spot instances have the capability to offer major advantages for conducting massive operating scales of AWS. Because of this, it enables one to run hyperscale workloads at a cost savings approach or it also allows one to accelerate the workloads by running various tasks parallelly. 
  • Low, predictable prices: Spot Instances can be purchased at a discount of up to 90% compared to On-Demand instances. This enables a company to provision capacity across Spot, RIs, and On-Demand using EC2 Auto Scaling in order to optimize workload cost. 
  • Easy to use: When it comes to Spot Instances, launching, scaling, and managing them by utilizing the AWS services like ECS and EC2 Auto Scaling is easy. 
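
As a hedged example, Spot capacity can be requested directly with the AWS CLI; the AMI ID and instance type are placeholders, and in practice the same is often done through an Auto Scaling group's mixed instances policy:

aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large \
  --count 1 \
  --instance-market-options '{"MarketType":"spot","SpotOptions":{"SpotInstanceType":"one-time"}}'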

2.7 Optimizing EC2 Auto Scaling Groups (ASG) Configuration

Another best practice of AWS cost optimization is to configure EC2 auto-scaling groups. Basically, ASG is known as a collection of Amazon EC2 instances and is treated as a group of logical approaches for automatic scaling and management of tasks. ASGs have the ability to take advantage of Amazon EC2 Auto Scaling features like custom scaling and health check policies as per the metrics of any application.  

Besides this, it also enables one to dynamically add or remove EC2 instances based on predetermined rules applied to the load. ASGs also allow EC2 fleets to be scaled as required, conserving the cost of the processes. In addition, you can view all scaling activities through either the Auto Scaling console or the describe-scaling-activities CLI command. Here are some ways to optimize the scaling policies and reduce the cost of scaling up and down.  

  • For scaling up the processes, instances must be added which are less aggressive in order to monitor the application and see if anything is affected or not.  
  • And for scaling down the processes, reducing instances can be beneficial as it allows for minimizing the necessary tasks to maintain current application loads.  

This is how AWS auto scaling works:  

AWS Auto scaling works
Source: Amazon

2.8 Compute Savings Plans

Compute Savings Plans are very beneficial when it comes to cost optimization in AWS. They offer the most flexibility to businesses using AWS and help reduce costs by up to 66%. Compute Savings Plans are applied automatically to EC2 instance usage regardless of the instance size, OS, family, or region. For example, with Compute Savings Plans one can change instances from C4 to M5, or move a workload from EC2 to Lambda or Fargate, and keep the discounted rate. 

This is the snapshot of the Computation of AWS savings plan rates

AWS Saving plans rates
Source: Amazon

2.9 Delete Idle Load Balancers

One of the best practices of AWS cost optimization is to delete idle load balancers. To do this, first check your Elastic Load Balancing configuration to see which load balancers are not being used. Every load balancer running in the system incurs cost, and one that has no backend instances or network traffic is not in use yet still costs the company money. So the first step is to identify load balancers that are not in use, for which you can use AWS Trusted Advisor: it identifies load balancers with a low request count. After identifying a balancer with fewer than 100 requests in a week, you can remove it to reduce cost; a hedged CLI sketch follows.  
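
A hedged CLI sketch of that workflow; the load balancer names, ARNs, and dates are placeholders:

# List the load balancers in the account
aws elbv2 describe-load-balancers --query "LoadBalancers[].{Name:LoadBalancerName,Arn:LoadBalancerArn}"

# Check how many requests a suspect Application Load Balancer served over the last week
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name RequestCount \
  --dimensions Name=LoadBalancer,Value=app/my-demo-alb/0123456789abcdef \
  --start-time 2023-11-01T00:00:00Z --end-time 2023-11-08T00:00:00Z \
  --period 604800 --statistics Sum

# If it is effectively unused, delete it
aws elbv2 delete-load-balancer \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-demo-alb/0123456789abcdef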

2.10 Identify and Delete Orphaned EBS Snapshots

Another best practice for AWS cost optimization is to identify and remove orphaned EBS snapshots. To understand this and learn how to delete the snapshots, let’s go through the below points and see how AWS CLI allows businesses to search certain types of snapshots that can be deleted.  

  • The very first thing to do is use the describe-snapshots command. It returns the snapshots available to your account, which can include private and public snapshots shared by other Amazon Web Services accounts (using those requires volume permissions). To filter the list down to older snapshots, add a JMESPath --query expression as shown in the command below. 
aws ec2 describe-snapshots 
--query "Snapshots[?(StartTime<=`2022-06-01`)].[SnapshotId]" --output text 
  • Now it's time to find old snapshots that belong to a particular team. For this, add a tag filter to the command. In the example below, a tag named "Team" is used to return only the snapshots owned by the "Infra" team. 
aws ec2 describe-snapshots --filter Name=tag:Team,Values=Infra 
--query "Snapshots[?(StartTime<=`2020-03-31`)].[SnapshotId]" --output text 
  • After this, once you have the list of snapshots associated with the tag mentioned above, you can delete each of them by executing the delete-snapshot command (replace the placeholder with a real snapshot ID). 
aws ec2 delete-snapshot --snapshot-id <snapshot-id>

Snapshots are generally incremental, which means that when you delete a snapshot whose data is referenced by another snapshot, that data is not removed; it is retained in the remaining snapshot. Deleting a snapshot therefore may not reduce storage by its full size, but any blocks no longer referenced by other snapshots are removed and no longer billed.  

2.11 Handling AWS Chargebacks for Enterprise Customers

The last practice on our list for optimizing AWS cost is handling chargebacks for enterprise customers. As the AWS product portfolio and feature set grow, enterprise customers migrate existing workloads to new AWS products, and keeping cloud charges low becomes difficult; when resources and services are not tagged correctly, the complexity grows further. To help businesses normalize these processes and reduce cost after adopting the latest AWS updates, auto-billing and chargebacks must be implemented transparently. For this, the following steps must be taken into consideration. 

  • First of all, a proper understanding of blended and unblended costs in consolidated billing files (Cost & Usage Report and Detailed Billing Report) is important. 
  • Then the AWS Account Vending Machine must be used to create AWS accounts and keep the account details and reservation-related data in separate database tables. 
  • After that, to help the admin to add invoice details, a web page hosted on AWS Lambda or a web server is used. 
  • Then to begin the transformation process of the billing, the trigger is added to the S3 bucket to push messages into Amazon Simple Queue Services. After this, your billing transformation will run on Amazon EC2 instances.  

3. AWS Tools for Cost Optimization

Now, after going through all the different practices that can be taken into consideration for AWS cost optimization, let us have a look at different tools that are used to help companies track, report, and analyze costs by offering several AWS reports.  

  • Amazon S3 Analytics: It enables software development companies to automatically carry out analysis and visualization of Amazon S3 storage patterns which can eventually help in deciding whether there is a need to shift data to another storage class or not. 
  • AWS Cost Explorer: This tool enables you to check the patterns in AWS and have a look at the current spending, project future costs, observe Reserved Instance utilization & Reserved Instance coverage, and more. 
  • AWS Budgets: It is a tool that allows companies to set custom budgets that trigger alerts when the cost exceeds the pre-defined budget. 
  • AWS Trusted Advisor: It offers real-time identification of business processes and areas that can be optimized. 
  • AWS CloudTrail: With this tool, users can log activity across the AWS infrastructure, continuously monitor processes, and retain a history of all actions performed by the account, enabling better decisions that can help reduce cost. 
  • Amazon CloudWatch: It enables the companies to gather the metrics and track them, set alarms, monitor log files, and automatically react to changes that are made in AWS resources. 

4. Conclusion

As seen in this blog, there are many different types of AWS cost optimization best practices that can be followed by organizations that are working with the AWS platform to create modern and scalable applications for transforming the firm. Organizations following these practices can achieve the desired results with AWS without any hassle and can also stay ahead in this competitive world of tech providers.  

The post AWS Cost Optimization Best Practices appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/aws-cost-optimization/feed/ 0
A Complete Guide to React Micro Frontend https://www.sysgenpro.com/blog/react-micro-frontend/ https://www.sysgenpro.com/blog/react-micro-frontend/#respond Tue, 05 Dec 2023 08:15:33 +0000 https://www.sysgenpro.com/blog/?p=12274 It is a difficult and challenging task for developers to manage the entire codebase of the large scale application. Every development team strives to find methods to streamline their work and speed up the delivery of finished products. Fortunately, concepts like micro frontends and microservices are developed to manage the entire project efficiently and have been adopted by application development companies.

The post A Complete Guide to React Micro Frontend appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. When developers from various teams contribute to a single monolith on top of a microservices architecture, it becomes difficult to maintain a large-scale application.
  2. To manage the large-scale or complex application, breaking down the frontend into smaller and independently manageable parts is preferable.
  3. React is a fantastic library! One can create robust Micro-Frontends using React and tools like Vite.
  4. Micro Frontend with react provides benefits like higher scalability, rapid deployment, migration, upgradation, automation, etc.

It is a difficult and challenging task for developers to manage the entire codebase of the large scale application.  Every development team strives to find methods to streamline their work and speed up the delivery of finished products. Fortunately, concepts like micro frontends and microservices are developed to manage the entire project efficiently and have been adopted by application development companies.   

Micro frontends involve breaking down the frontend side of the large application into small manageable parts. The importance of this design cannot be overstated, as it has the potential to greatly enhance the efficiency and productivity of engineers engaged in frontend code. 

Through this article, we will look at micro frontend architecture using react and discuss its advantages, disadvantages, and implementation steps. 

1. What are Micro Frontends?

The term “micro frontend” refers to a methodology and an application development approach that ensures that the front end of the application is broken down into smaller, more manageable parts which  are often developed, tested, and deployed separately from one another. This concept is similar to how the backend is broken down into smaller components in the process of microservices.

Read More on Microservices Best Practices

Each micro frontend consists of code for a subset (or “feature”) of the whole website. These components are managed by several groups, each of which focuses on a certain aspect of the business or a particular objective.

Being a widely used frontend technology, React is a good option for building a micro frontend architecture. Along with the react, we can use vite.js tool for the smooth development process of micro frontend apps. 

What are Micro frontends

2.1 Benefits of Micro Frontends

Here are the key benefits of the Micro Frontend architecture: 

Key Benefit Description
Gradual Upgrades
  • It might be a time-consuming and challenging task to add new functionality to a massive, outdated, monolithic front-end application.
  • By dividing the entire application into smaller components, your team can swiftly update and release new features via micro frontends.
  • Using multiple frameworks, many portions of the program may be focused on and new additions can be deployed independently instead of treating the frontend architecture as a single application.
  • By this way, teams can improve overall dependencies management, UX, load time, design, and more.
Simple Codebases
  • Many times, dealing with a large and complicated code base becomes irritating for the developers.
  • Micro Frontend architecture separates your code into simpler, more manageable parts, and gives you the visibility and clarity you need to write better code.
Independent Deployment
  • Independent deployment of each component is possible using Micro frontend.
Tech Agnostic
  • You may keep each app independent from the rest and manage it as a component using micro frontend.
  • Each app can be developed using a different framework, or library as per the requirements.
Autonomous Teams
  • Dividing a large workforce into subgroups can increase productivity and performance.
  • Each team of developers will be in charge of a certain aspect of the product, enhancing focus and allowing engineers to create a feature as quickly and effectively as possible.

Here in this Reddit thread one of the front-end developers mentioned how React Micro frontend helps in AWS Cloudwatch.


2.2 Limitations of Micro Frontends

Here are the key limitations of Micro Frontend architecture: 

Limitations Description
Larger Download Sizes
  • Micro Frontends are said to increase download sizes due to redundant dependencies.
  • Larger download sizes result from the fact that each app is built with React or a related library / framework and must download its required dependencies whenever a user accesses that particular page.
Environmental Variations
  • If the development container differs from the production container, the consequences can be serious.
  • When the production container is not identical to the development container, the micro frontend may malfunction or behave differently after release to production.
  • The global style, which may be part of the container or of other micro frontends, is a particularly delicate aspect of this problem.
Management Complexity
  • Micro Frontend comes with additional repositories, technologies, development workflows, services, domains, etc. as per the project requirements.
Compliance Issues
  • It might be challenging to ensure consistency throughout many distinct front-end codebases.
  • To guarantee excellence, continuity, and accountability are kept throughout all teams, effective leadership is required.
  • Compliance difficulties will arise if code review and frequent monitoring are not carried out effectively.

Please find a Reddit thread below discussing the disadvantages of Micro frontend.


Now, let’s see how Micro Frontend architecture one can build with React and other relevant tools. 

3. Micro Frontend Architecture Using React

Micro Frontends are taking the place of monolithic design, which has served as the standard in application development for years. The history of monolithic designs' popularity is extensive, and many prominent software developers and business figures remain enthusiastic supporters. Yet as time goes on, new technologies and concepts emerge that are better than what everyone is used to.

The notion of a "micro frontend" in React is not unique; rather, it is an evolution of previous architectural styles. Built on the foundation of microservice architecture and pushed along by innovative trends in social media, cloud technology, and the Internet of Things, the approach is quickly spreading through the industry.

Because of the switch to continuous deployment, micro frontend with react provides additional benefits to enterprises, such as:

  • High Scalability
  • Rapid Deployment
  • Effective migration and upgrading
  • Technology-independence
  • No isolation issues
  • High levels of deployment and automation
  • Reduced development time and cost
  • Reduced security and reliability risks

Let’s go through the steps of creating your first micro frontend architecture using react: 

4. Building Micro Frontend with React and Vite

4.1 Set Up the Project Structure

To begin with, let’s make a folder hierarchy.

# Create folder named react-vite-federation-demo
# Folder Hierarchy 
--/packages
----/application
----/shared

The following instructions will put you on the fast track:

mkdir react-vite-federation-demo && cd ./react-vite-federation-demo
mkdir packages && cd ./packages

The next thing to do is to use the Vite CLI to make two separate directories: 

  1. application, a react app which will use the components, 
  2. shared, which will make them available to other apps.
#./react-vite-federation-demo/packages
pnpm create vite application --template react
pnpm create vite shared --template react

4.2 Set Up pnpm Workspace

Now that you know you’ll be working with numerous projects in the package’s folder, you can set up your pnpm workspace accordingly.

A package file will be generated in the package’s root directory for this purpose:

touch package.json

Write the following code to define various elements in the package.json file. 

{
  "name": "react-vite-federation-demo", 
  "version": "1.1.0",
  "private": true,   
  "workspaces": [
    "packages/*"
  ],
  "scripts": {
    "build": "pnpm  --parallel --filter \"./**\" build",
    "preview": "pnpm  --parallel --filter \"./**\" preview",
    "stop": "kill-port --port 5000,5001"
  },
  "devDependencies": {
    "kill-port": "^2.0.1",
    "@originjs/vite-plugin-federation": "^1.1.10"
  }
}

This package.json file is where you specify shared packages and scripts for developing and executing your applications in parallel.

Then, make a file named “pnpm-workspace.yaml” to include the pnpm workspace configuration:

touch pnpm-workspace.yaml

Let’s indicate your packages with basic configurations:

# pnpm-workspace.yaml
packages:
  - 'packages/*'

Packages for every applications are now available for installation:

pnpm install

4.3 Add Shared Component  (Set Up “shared” Package)

To demonstrate, let’s create a basic button component and include it in our shared package.

cd ./packages/shared/src && mkdir ./components
cd ./components && touch Button.jsx

To define the button, add the following code in Button.jsx:

import React from "react";
import "./Button.css";
export default ({ caption = "Shared Button" }) => <button className="shared-button">{caption}</button>;

Let’s add CSS file for your button:

touch Button.css

Now, to add styles, write the following code in Button.css

.shared-button {
    background-color: #ADD8E6;
    color: white;
    border: 1px solid white;
    padding: 16px 30px;
    font-size: 20px;
    text-align: center;
}

It’s time to prepare the button to use by vite-plugin-federation, so let’s do that now. This requires modifying vite.config.js file with the following settings:

import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
import federation from '@originjs/vite-plugin-federation'
import dns from 'dns'

dns.setDefaultResultOrder('verbatim')

export default defineConfig({
  plugins: [
    react(),
    federation({
      name: 'shared',
      filename: 'shared.js',
      exposes: {
        './Button': './src/components/Button'
      },
      shared: ['react']
    })
  ],
  preview: {
    host: 'localhost',
    port: 5000,
    strictPort: true,
    headers: {
      "Access-Control-Allow-Origin": "*"
    }
  },
  build: {
    target: 'esnext',
    minify: false,
    cssCodeSplit: false
  }
})

Set up the plugins, preview, and build sections in this file.

4.4 Use Shared Component and Set Up “application” Package

The next step is to incorporate your reusable module into your application’s code. Simply use the shared package’s Button to accomplish this:

import "./App.css";
import { useState } from "react";
import Button from "shared/Button";

function Application() {
  const [count, setCount] = useState(0);
  return (
    <div className="App">
      <h1>Application 1</h1>
      {/* Button component loaded at runtime from the "shared" remote */}
      <Button />
      <button onClick={() => setCount((c) => c + 1)}>count is {count}</button>
    </div>
  );
}

export default Application;

The following must be done in the vite.config.js file:

import { defineConfig } from 'vite'
import federation from '@originjs/vite-plugin-federation'
import dns from 'dns'
import react from '@vitejs/plugin-react'

dns.setDefaultResultOrder('verbatim')

export default defineConfig({
  plugins: [
    react(),
    federation({
      name: 'application',
      remotes: {
        shared: 'http://localhost:5000/assets/shared.js',
      },
      shared: ['react']
    })
  ],
  preview: {
    host: 'localhost',
    port: 5001,
    strictPort: true,
  },
  build: {
    target: 'esnext',
    minify: false,
    cssCodeSplit: false
  }
})

In this step, you also configure the federation plugin to consume the shared package as a remote; the remaining lines match the standard configuration format.

Application Launch

The following commands will help you construct and launch your applications:

pnpm build && pnpm preview

Our shared react application may be accessed at “localhost:5000”:

Launch Your Application

At “localhost:5001”, you will see your application with a button from the shared application on “localhost:5000”:

5. Conclusion

Micro frontends are unquestionably a cutting-edge design that addresses many issues with monolithic frontend architecture. With a micro frontend, you can benefit from a quick development cycle, increased productivity, incremental upgrades, straightforward codebases, autonomous delivery, autonomous teams, and more.

Given the high degree of expertise necessary to develop micro frontends with React, we advise working with professionals. Be sure to take into account the automation needs, administrative and regulatory complexities, quality, consistency, and other crucial considerations before choosing the micro frontend application design.

The post A Complete Guide to React Micro Frontend appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/react-micro-frontend/feed/ 0
.NET Microservices Implementation with Docker Containers https://www.sysgenpro.com/blog/net-microservices/ https://www.sysgenpro.com/blog/net-microservices/#respond Thu, 23 Nov 2023 07:19:28 +0000 https://www.sysgenpro.com/blog/?p=12223 Applications and IT infrastructure management are now being built and managed on the cloud. Today's cloud apps require to be responsive, modular, highly scalable, and trustworthy.
Containers facilitate the fulfilment of these needs by applications.

The post .NET Microservices Implementation with Docker Containers appeared first on sysgenpro Blog.

]]>

Key Takeaways on .Net Microservices

  1. The microservices architecture is increasingly favoured for large and complex applications built from independent, individual subsystems.
  2. Container-based solutions offer significant cost reductions by mitigating deployment issues arising from failed dependencies in the production environment.
  3. With Microsoft tools, one can create containerized .NET microservices using a custom and preferred approach.
  4. Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself.
  5. An essential aspect of constructing more secure applications is establishing a robust method for exchanging information with other applications and systems.

1. Microservices – An Overview

Applications and IT infrastructure are now being built and managed in the cloud. Today’s cloud apps need to be responsive, modular, highly scalable, and trustworthy.

Containers help applications fulfil these needs. That said, putting an application in a container without first deciding on a design pattern is like heading to a new place without directions: you could get where you’re going, but it probably won’t be the fastest way.

This is where .NET microservices come in. With the help of a reliable .NET development company offering microservices, software can be built and deployed in a way that meets the speed, scalability, and dependability needs of today’s cloud-based applications.

2. Key Considerations for Developing .Net Microservices

When using .NET to create microservices, it’s important to remember the following points:

API Design

Since microservices depend on APIs for inter-service communication, it’s crucial to design APIs with care. RESTful APIs are becoming the accepted norm for developing APIs and should be taken into consideration. To prevent breaking old clients, you should plan for versioning and make sure your APIs are backward compatible.
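As a hedged illustration (the ProductsController name and route are hypothetical), a URL-versioned ASP.NET Core controller keeps /api/v1 stable so that a future /api/v2 can evolve without breaking existing clients:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/v1/[controller]")]
public class ProductsController : ControllerBase
{
    // GET api/v1/products
    [HttpGet]
    public IActionResult GetAll() => Ok(new[] { "Product A", "Product B" });
}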

Data Management

Because most microservices use their own databases, ensuring data consistency and maintenance can be difficult. If you’re having trouble keeping track of data across your microservices, you might want to look into utilising Entity Framework Core, a popular object-relational mapper (ORM) for .NET.
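As a minimal sketch (the Order entity is purely illustrative), each microservice can own an EF Core DbContext and a private database that no other service queries directly:

using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; } = string.Empty;
    public decimal Total { get; set; }
}

// The ordering service owns this context and its database; other services never access it directly.
public class OrderDbContext : DbContext
{
    public OrderDbContext(DbContextOptions<OrderDbContext> options) : base(options) { }

    public DbSet<Order> Orders => Set<Order>();
}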

Testing

Microservices need to be tested extensively to ensure their dependability and sturdiness. For unit testing, you can use xUnit or Moq, and for API testing, you can use Postman.
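For instance, a minimal xUnit sketch (the PriceCalculator class is purely illustrative) of the kind of unit test you might write for a microservice’s business logic:

using Xunit;

public class PriceCalculator
{
    public decimal WithTax(decimal net, decimal rate) => net * (1 + rate);
}

public class PriceCalculatorTests
{
    [Fact]
    public void WithTax_AddsTaxToNetPrice()
    {
        var calculator = new PriceCalculator();

        // 100 with 10% tax should come out to 110.
        Assert.Equal(110m, calculator.WithTax(100m, 0.10m));
    }
}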

Monitoring and Logging

Monitoring and logging are crucial for understanding the health of your microservices and fixing any problems that may develop. You can use monitoring and logging tools such as Azure Application Insights.
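As a hedged sketch (it assumes the Microsoft.ApplicationInsights.AspNetCore NuGet package and a connection string in configuration), wiring Application Insights into an ASP.NET Core service usually amounts to a single registration call:

var builder = WebApplication.CreateBuilder(args);

// Sends request, dependency, and exception telemetry to Application Insights.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "Hello from a monitored microservice");
app.Run();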

CI/CD Pipeline

If you want to automate the deployment of your microservices, you should use a continuous integration and continuous delivery (CI/CD) pipeline. This will help guarantee the steady delivery and deployment of your microservices.

3. Implementation of .Net Microservices Using Docker Containers

3.1 Install .NET SDK

Let’s begin from scratch. First, install .NET 7 SDK. You can download it from this URL: https://dotnet.microsoft.com/en-us/download/dotnet/7.0  

Once you complete the download, install the package and then open a new command prompt and run the following command to check .NET (SDK) information: 

> dotnet

If the installation succeeded, you should see an output like the following in command prompt: 

.NET SDK Installation

3.2 Build Your Microservice

Open command prompt on the location where you want to create a new application. 

Type the following command to create a new app named “DemoMicroservice”:

> dotnet new webapi -o DemoMicroservice --no-https -f net7.0 

Then, navigate to this new directory. 

> cd DemoMicroservice

What do these commands mean? 

Command Meaning
dotnet new webapi
  • Creates a new application of type webapi (that’s a REST API endpoint).
-o
  • Creates the directory where your app “DemoMicroservice” is stored.
--no-https
  • Creates an app that runs without an HTTPS certificate.
-f
  • Indicates that you are creating a .NET 7 application.

3.3 Run Microservice

Type this into your command prompt:

> dotnet run

The output will look like this: 

run microservices

The Demo Code: 

Several files were generated in the DemoMicroservice directory, giving you a simple service that is ready to run.

The following screenshot shows the content of the WeatherForecastController.cs file, which is located in the Controllers directory.

Demo Microservices
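For reference, the generated controller in the .NET 7 webapi template looks roughly like this (abridged here; exact details vary slightly between SDK versions):

// The project template enables ImplicitUsings, so System, System.Linq, etc. are in scope.
using Microsoft.AspNetCore.Mvc;

namespace DemoMicroservice.Controllers;

[ApiController]
[Route("[controller]")]
public class WeatherForecastController : ControllerBase
{
    private static readonly string[] Summaries = { "Freezing", "Chilly", "Mild", "Warm", "Hot" };

    [HttpGet(Name = "GetWeatherForecast")]
    public IEnumerable<WeatherForecast> Get() =>
        Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = Random.Shared.Next(-20, 55),
            Summary = Summaries[Random.Shared.Next(Summaries.Length)]
        });
}

public class WeatherForecast
{
    public DateTime Date { get; set; }
    public int TemperatureC { get; set; }
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
    public string? Summary { get; set; }
}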

Launch a browser and enter http://localhost:<port number>/WeatherForecast once the program shows that it is listening on that address.

In this example, it is listening on port 5056. The following image shows the output at http://localhost:5056/WeatherForecast.

WeatherForecast Localhost

You’ve successfully launched a basic service.

To stop the service from running locally using the dotnet run command, type CTRL+C at the command prompt.

3.4 Role of Containers

In software development, containerization is an approach in which a service or application, its dependencies, and configurations (in deployment manifest files) are packaged together as a container image.    

The containerized application may be tested as a whole and then deployed to the host OS in the form of a container image instance.

Software containers are like cardboard boxes: they are a standardised unit of software deployment that can hold a wide variety of programs and dependencies, and they can be moved from location to location.

This method of software containerization allows developers and IT professionals to easily deploy applications to many environments with few code changes.

If this seems like a scenario where containerizing an application may be useful, it’s because it is. The advantages of containers are nearly identical to the advantages of microservices.

The deployment of microservices is not limited to the containerization of applications. Microservices may be deployed via a variety of mechanisms, such as Azure App Service, virtual machines, or anything else. 

Containerization’s flexibility is an additional perk. Creating additional containers for temporary jobs allows you to swiftly scale up. From an application point of view, instantiating an image (creating a container) is similar to instantiating a process such as a service or a web app.

In a nutshell, containers improve the whole application lifecycle by providing separation, mobility, responsiveness, versatility, and control.

All of the microservices you create in this course will be deployed to a container for execution; more specifically, a Docker container.

3.5 Docker Installation

3.5.1. What is Docker?

Docker is a set of platform-as-a-service products that use OS-level virtualization to automate the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises. The core tooling is free, and Docker also offers paid tiers with premium features.

Azure supports running Docker containers in a variety of environments, including the customer’s own datacenter, an external service provider’s infrastructure, and the cloud itself. Docker images may be executed in a container format on both Linux and Windows.

3.5.2. Installation Steps

Docker is a platform for building containers, which bundle an app together with its dependencies and configuration. Follow the steps below to install Docker:

  • First, download the installer (.exe file) from the Docker website.
  • Docker’s default configuration on Windows uses Linux containers. When prompted by the installer, just accept the default settings.
  • You may be prompted to sign out of the system after installing Docker.
  • Make sure Docker is up and running.
  • If you already have Docker installed, verify that it is at least version 20.10.

Once the setup is complete, launch a new command prompt and enter:

> docker --version

If the command executes and some version data is displayed, then Docker has been set up properly.

3.6 Add Docker Metadata

A Docker image can only be created by following the directions provided in a text file called a Dockerfile. If you want to deploy your program in the form of a Docker container, you’ll need a Docker image.

Get back to the app directory

Since the preceding step included opening a new command prompt, you will now need to navigate back to the directory in which you first established your service.

> cd DemoMicroservice

Add a DockerFile

Create a file named “Dockerfile” with this command:

> fsutil file createnew Dockerfile 0

To open the docker file, execute the following command. 

> start Dockerfile.

In the text editor, replace the Dockerfile’s current content with the following:

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY DemoMicroservice.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c release -o /app

FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "DemoMicroservice.dll"]

Note: Keep in mind that the file needs to be named as Dockerfile and not Dockerfile.txt or anything else.

Optional: Add a .dockerignore file

A .dockerignore file limits the set of files that are read during the ‘docker build’ process. Fewer files mean a faster build.

A .dockerignore file works much like a .gitignore file. The following command will create a .dockerignore file for you:

> fsutil file createnew .dockerignore 0

You can then open it in your favorite text editor manually or with this command:

> start .dockerignore

Then add the following content to the .dockerignore file:

Dockerfile
[b|B]in
[O|o]bj

3.7 Create Docker Image

Start the process with this command:

> docker build -t demomicroservice .

Docker images may be created with the use of the Dockerfile and the docker build command.

The following command will display a catalogue of all images on your system, including the one you just made.

> docker images

3.8 Run Docker image

Here’s the command you use to launch your program within a container:

> docker run -it --rm -p 3000:80 --name demomicroservicecontainer demomicroservice

To connect to a containerized application, go to the following address: http://localhost:3000/WeatherForecast 

demo microservices with docker weatherforecast

Optionally, the following command, run in a separate command prompt, lets you observe your running container:

> docker ps

To cancel the docker run command that is managing the containerized service, enter CTRL+C at the prompt.

Well done! You have developed a tiny, self-contained service that can be easily deployed and scaled with Docker containers.

These elements provide the foundation of a microservice.

4. Conclusion

Modern .NET, from its inception as .NET Core to the present day, was designed from the ground up to run natively in the cloud. Its cross-platform compatibility means your .NET code will execute regardless of the operating system your Docker image is built on. .NET is also fast, with the ASP.NET Core Kestrel web server consistently performing near the top of industry benchmarks. It is well worth incorporating into your projects.

5. FAQs

Why is .NET core good for microservices?

.NET enables developers to break down a monolithic application into smaller parts and deploy services separately, which not only shortens time to market but also helps businesses adapt to changes quickly and flexibly. For this reason, .NET Core is considered a powerful platform for creating and deploying microservices. Some other major reasons it is a good option for microservices are:

  • Easier maintenance: with .NET Core, microservices can be tested, updated, and deployed independently.
  • Better scalability: .NET Core lets each service scale independently to meet traffic demands.

What is the main role of Docker in microservices?

In a microservices architecture, .NET app developers can create applications that are independent of the host environment by encapsulating each microservice in a Docker container. Docker enables developers to package the applications they create into containers, where each container bundles an executable component with the operating system libraries it needs, so the microservice can run on any platform.

The post .NET Microservices Implementation with Docker Containers appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/net-microservices/feed/ 0
Building Microservices Architecture Design on Azure https://www.sysgenpro.com/blog/azure-microservices/ https://www.sysgenpro.com/blog/azure-microservices/#respond Wed, 20 Sep 2023 04:59:43 +0000 https://www.sysgenpro.com/blog/?p=11945 Nowadays, the usage of Microservices architectures is widely increasing and it is replacing the traditional monolithic application architecture. This approach is generally used by software development companies to develop large-scale applications with the use of public cloud infrastructure such as Azure. This infrastructure offers various services like Azure Functions, Azure Kubernetes Service (AKS), Azure Service Fabric, and many more.

The post Building Microservices Architecture Design on Azure appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. Challenges like reliability, complexity, scalability, data integrity, versioning, and many others can be addressed with Azure. It simplifies the microservices application development process.
  2. The serverless solution, Azure Functions, enables you to write less code and eventually it saves cost.
  3. Azure manages the K8s (Kubernetes) API services. AKS (Azure Kubernetes Services) is a managed K8s cluster hosted on the Azure Cloud. So, agent nodes are the only thing you need to manage.
  4. Azure also provides services like Azure DevOps Services, AKS, Azure Monitor, Azure API Management, etc. to automate build, test, and deployment tasks.

Nowadays, the usage of Microservices architectures is widely increasing and it is replacing the traditional monolithic application architecture. This approach is generally used by software development companies to develop large-scale applications with the use of public cloud infrastructure such as Azure. This infrastructure offers various services like Azure Functions, Azure Kubernetes Service (AKS), Azure Service Fabric, and many more. To know more about microservices, their implementation benefits, and how cloud infrastructure like Azure help businesses implement the microservices architecture, let’s go through this blog.

1. What are Microservices?

Microservices are known as one of the best-fit architectural styles for creating highly scalable, resilient, modern, large scale and independently deployable applications. There are various approaches available by which one can design and create microservices architecture. Microservice architecture can consist of autonomous and smaller services. And here each service is self-contained which means that it helps in implementing a single business capability within a defined bounded context. Here a bounded context means a natural division within a business that offers boundaries within each business domain.

Microservices Architecture

Basically, microservices are easy for developers to build and maintain as they are independent, small, and loosely coupled. Each service has its own separate codebase which can be easily maintained by a small development team. Besides this, when it is time to deploy services, it can be done independently. In addition to this, services are also responsible for persisting their own or external data, which differs from the traditional method of app development.

Further Reading on: Microservices Best Practices
Microservices vs Monolithic Architecture

2. How can Azure be the Most Beneficial for Microservices Development and Deployment?

Here are some of the reasons that prove that Azure is very beneficial for the implementation of Microservices architectures –

2.1 Creates and Deploys Microservices with Agility

Microsoft Azure enables effortless management of newly released features, bug fixes, and other updates in individual components without forcing you to redeploy the entire application. It enables continuous integration/continuous deployment (CI/CD) with the help of automated software delivery workflows.

2.2 Makes Applications Resilient

Azure microservices help you replace or retire an individual service without affecting the entire software application. The reason is that, unlike the traditional monolithic model, microservices platforms enable developers to use patterns like circuit breaking to handle the failure of individual services while preserving reliability and security.

2.3 Microservices Scale with Demand

Microservices on Azure enable developers to scale individual services based on their resource requirements without scaling out the entire application. For this, developers can pack a higher density of services onto an individual host with the use of a container orchestrator such as Azure Red Hat OpenShift or Azure Kubernetes Service (AKS).

2.4 Finds the Best Approach for the Team

Another benefit of Azure Microservices is that it enables dedicated software development teams to select their preferred language, deployment approach, microservices platform, and programming model for each microservice. Besides, Azure API management enables the developers to publish microservice APIs for both external and internal consumption while maintaining crosscutting concerns like caching, monitoring, authentication, throttling, and authorization.

3. Building Microservices Architecture on Azure

Here are some steps that will help developers to create microservices architecture on Azure –

Step 1: Domain Analysis

Domain Analysis

When it comes to microservices, developers generally face issues related to defining the boundaries of each service in the system. Doing so becomes a priority according to the rule which states that each microservice should have a single responsibility. Following this rule, microservices should be designed only after understanding the client’s business domain, requirements, and goals. Otherwise, the design of the microservices will be haphazard, with undesirable characteristics like tight coupling, hidden dependencies, and poorly designed interfaces. Therefore, it is necessary to design the microservices properly, and analyzing the domain is the best first step.

In addition to this, Domain-Driven Design (DDD) approach is used as it offers a framework that can help developers create a well-designed service. This approach comes in two different phases: strategic and tactical. In this method, strategic DDD ensures that the service architecture is created in such a way that it focuses on the business capabilities, while tactical DDD offers a set of design patterns for services. To apply Domain-Driven Design, there are some steps that the developers need to follow and they are – 

  • The very first step is to analyze the business domain in order to get a proper understanding of the application’s functional requirements. Once this step is performed, as a result, the software engineers will get an informal description of the domain which can be reframed and converted into a formal set of service domain models.
  • The next step is to define the domain’s bounded contexts. Here, each bounded context holds one domain model that represents a subdomain of an app. 
  • After this, tactical DDD patterns must be applied within the bounded context, so that they can be helpful in defining patterns, aggregations, and domain services.
  • The last step is to use the outcome from the previously performed step to identify the app’s microservices.

This shows that the DDD approach enables developers to design every microservice within a particular boundary and helps avoid the trap of letting organizational boundaries or technology choices dictate the design. It also enables the development team to keep a close watch on how the code evolves.

Step 2: Identify Boundaries and Define the Microservices

After setting the bounded contexts for the application and analyzing the domain model in the first step, now is the time to jump from the domain model to the application model. And for this, there are some steps that are required to be followed in order to derive services from the domain model. The below-listed steps help the development team to identify the microservices from the domain model. 

  • To start with this process, first check the functionality required of the service and confirm that it doesn’t span more than one bounded context. By definition, a bounded context marks the boundary of a particular domain model. In general, a microservice should not mix different domain models; if it appears to, the domain analysis must be refined so that each microservice can carry out its tasks within a single context.
  • After that, the team needs to look at the domain’s aggregates. Generally, aggregates are good candidates for services, and when they are well-defined, they have these characteristics –
    1. High functional cohesion
    2. Derived from business requirements and not technical concerns
    3. Loosely coupled
    4. Boundary of persistence
  • Then the team needs to check domain services. These services are stateless operations that are carried out across various aggregates.
  • The last step is to consider non-functional requirements. Here, one needs to check factors like data types, team size, technologies, requirements, scalability, and more. The reason behind checking these factors is that they lead to the further decomposition of microservices into smaller versions.

After checking and identifying the microservices, the next thing to do is validate the design of microservices against some criteria to set boundaries. For that, check out the below aspects – 

  • Every service must have a single responsibility.
  • There should not be any chatty calls between the microservices.
  • There should be no inter-dependencies that require two or more services to be deployed in lock-step.
  • A service should be small and simple enough that a small, independent team can build it.
  • Services can evolve independently as they are not tightly coupled.

Step 3: Approaches to Build Microservices

The next step is to use any of the two widely used approaches to create microservices. Let’s go through both these approaches.

Service Orchestrators

A service orchestrator handles tasks related to deploying and managing a set of services. These tasks include:

  • Health monitoring of the services 
  • Placing services on nodes 
  • Load balancing 
  • Restarting unhealthy services 
  • Service discovery 
  • Applying configuration updates
  • Scaling the number of service instances. 

Some of the most popular orchestrators that one can use are Docker Swarm, Kubernetes, DC/OS, and Azure Service Fabric.

When the developer is working on the Microsoft Azure platform for microservices, consider the following options: 

Service Description
Azure Container Apps
  • A managed service built on Kubernetes.
  • It abstracts the complexity of container orchestration and other tasks. 
  • It simplifies the deployment process of containerized apps and microservices in the serverless environment. 
Azure Service Fabric
  • A distributed systems platform to manage microservices.
  • Microservices can also be deployed to service fabric as containers, or reliable services.
Azure Kubernetes Services (AKS)
  • A managed K8s (Kubernetes) service. 
  • Azure hosts and manages the Kubernetes control plane, provisions and exposes the Kubernetes API endpoints, and performs automated patching and upgrades; you manage only the agent nodes.
Docker Enterprise Edition
  • It can run in the IaaS environment on Azure.

Serverless

Serverless architecture makes it easier for developers to deploy and host code without managing the underlying VMs; the platform takes care of placing and executing that code. It is an approach in which the application’s functions are coordinated through event-based triggers. For instance, when a message is placed in a queue, the function that reads from the queue and processes the message is triggered.

In this case, Azure Functions is the serverless compute service, and it supports various function triggers such as Event Hubs events, HTTP requests, and Service Bus queues.
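As a hedged sketch (in-process model with the Microsoft.NET.Sdk.Functions package; the function name and route are illustrative), an HTTP-triggered Azure Function looks roughly like this:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class OrderFunctions
{
    // Runs whenever a POST request arrives at /api/orders; the platform scales it on demand.
    [FunctionName("OrderPlaced")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "orders")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("Order received.");
        return new OkObjectResult("Order accepted");
    }
}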

Orchestrator or Serverless?

Some of the factors that give a clear idea of the differences between an orchestrator approach and a serverless approach are –

Comparison Aspects Orchestrator Serverless
Manageability
  • Orchestrator: An orchestrator may force the development team to think about issues like networking, memory usage, and load balancing.
  • Serverless: Serverless applications are simpler to manage because the platform itself manages all resources, systems, and subsystems.
Flexibility
  • Orchestrator: An orchestrator offers good control over configuring and managing new microservices and clusters.
  • Serverless: With serverless, you may have to give up some degree of control because the details are abstracted away.
Application Integration
  • Orchestrator: Integrating applications can be easier because the orchestrator provides better flexibility.
  • Serverless: Building a complex application on a serverless architecture can be challenging because it requires good coordination between managed services and independent components.
Cost
  • Orchestrator: You pay for the virtual machines that are running in the cluster.
  • Serverless: You pay only for the actual compute resources consumed.

Step 4: Establish Communication between Microservices

When it comes to building stateful services or other microservices application architectures, communication plays an important role: communication between microservices must be robust and efficient for the application to run smoothly. In such applications, unlike traditional monolithic applications, various small and granular services interact to complete a single business activity, and that can be challenging. A few of these challenges are resiliency, load balancing, service versioning, distributed application tracing, and more. To learn more about these challenges and possible solutions, let’s go through the following table –

Challenges Possible Solutions
Resiliency 
  • Microservices can fail because of many reasons like hardware, VM reboot, or node-level failures.
  • To avoid this, resilient design patterns like retry and circuit breaker are used (a minimal code sketch follows after this table).
Load Balancing
  • When one service calls another, it must reach the running instance of the other service.
  • Kubernetes provides the stable ip address for a group of pods. 
  • This gets processed by iptable rules and a service mesh that offers an intelligent load-balancing algorithm based on observed latency and few other metrics. 
Service Versioning
  • Deploying a new service version must avoid breaking other services and external clients that depend on it.
  • Also, you may be required to run various versions of the service in parallel and route requests to a particular version. For the solution, go through the API design step below in detail.
Distributed Tracing
  • A single transaction can extend across various services, which makes it difficult to monitor the health and performance of the system. Therefore, distributed tracing is a challenge.
  •  For the solution, follow these steps:
    1. Assign unique external id to each external request
    2.  Include these external Ids in all log messages
    3. Record information about the requests.
TLS Encryption and Authentication
  • Security is one of the key challenges.
  •  For the same, you must encrypt traffic between services with TLS, and use mutual TLS authentication to authenticate callers.
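As a sketch of the retry and circuit-breaker patterns listed in the table above, here is one way to combine them using the Polly library (an assumed dependency; the inventory-service URL is illustrative):

using System;
using System.Net.Http;
using Polly;

// Retry transient failures with exponential back-off.
var retry = Policy
    .Handle<HttpRequestException>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

// Stop calling a failing service for a while after 5 consecutive failures.
var circuitBreaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

var httpClient = new HttpClient();
var response = await retry.WrapAsync(circuitBreaker)
    .ExecuteAsync(() => httpClient.GetAsync("http://inventory-service/api/v1/stock"));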

But with the right use of Azure and Azure SQL databases, strong communication can be established without any issue. And for this, there are two basic messaging patterns with microservices that can be used. 

  • Asynchronous communication: In this type of pattern, a microservice sends a message without getting any response. Here, more than one service processes messages asynchronously. 
  • Synchronous Communication: Another type of pattern is synchronous communication where the service calls an API that another microservice exposes with the use of protocols like gRPC and HTTP. Also the caller service waits for the response of the receiver service to proceed further.  
Synchronous vs. async communication
Source: Microsoft
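A hedged sketch of both styles (the service URL, queue name, and payload are illustrative; the message-based example assumes the Azure.Messaging.ServiceBus package):

using System.Net.Http;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class CommunicationExamples
{
    // Synchronous pattern: the caller sends a request and waits for the response before continuing.
    public static async Task<string> GetStockAsync(HttpClient http)
    {
        var response = await http.GetAsync("http://inventory-service/api/v1/stock/42");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }

    // Asynchronous pattern: the caller publishes a message and moves on without waiting for a reply.
    public static async Task PublishOrderPlacedAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("order-placed");
        await sender.SendMessageAsync(new ServiceBusMessage("{\"orderId\": 42}"));
    }
}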

Step 5: Design APIs for Microservices

 Designing good APIs for microservices is very important because all data exchange that happens between services is either through API calls or messages. Here, the APIs have to be efficient and effective to avoid the creation of chatty I/O. 

Besides this, as microservices are designed by teams working independently, the APIs created for them must follow semantic versioning schemes so that updates do not break any other service.

Design APIs for Microservices

Here the two different types of APIs that are widely designed –

  • Public APIs: A public API must be compatible with client apps which can be either the native mobile app or browser app. This means that the public APIs will mostly use REST over HTTP.
  • Backend APIs: Another type of API designed for microservices is the backend API, used for interservice communication. Here, network performance and the granularity of the services matter a great deal, since chatty interservice communication can generate significant network traffic.

Here are some things to think about when choosing how to implement an API.

Considerations Details
REST API vs RPC
  • REST is a natural way to express the domain and enables the definition of a uniform interface based on HTTP verbs.
  • REST has perfectly defined semantics in terms of side effects, idempotency, and response codes.
  • RPC is more oriented around commands and operations.
  • RPC interfaces look just like method calls and can lead developers to design chatty APIs.
  • For a RESTful interface, select REST over HTTP using JSON. For an RPC-style interface, use frameworks like gRPC, Apache Avro, etc.
Interface Definition Language (IDL)
  • It is a concept that is used to define the API’s parameters, methods, and return values.
  • It can be used to create client code, API documentation, and serialization.
Efficiency
  • In terms of memory, speed, and payload size efficiency is highly considered.
Framework and Language Support 
  • HTTP is supported in every language and framework.
  • gRPC and Apache Avro have libraries for C#, Java, C++, and Python.
Serialization
  • Objects are serialized through binary formats and text-based formats.
  • Some serialization also requires some compiling of a schema definition file and fixed schema.

Step 6: Use API Gateways in Microservices

Use API Gateways in Microservices

In a microservices architecture, a common question is how a client identifies which endpoints to call, and what happens if existing services are re-engineered or new services are introduced. An API gateway can be really helpful here as it addresses these challenges. An API gateway resides between the clients and the services, acting as a reverse proxy that routes requests from the client side of the application to the services. It also performs many cross-cutting tasks like rate limiting, SSL termination, and authentication.

Without an API gateway in the microservices architectural approach, clients have to send requests directly to front-end services. Here are some of the issues that can occur when services are exposed directly to the application’s client side –

  • There can be complexity in the client code for which the client will have to keep a track of various endpoints and also resiliently handle failures.
  • When a single operation is carried out, it might require calls to multiple services, and this can result in multiple network round trips between the client and the server.
  • Exposing services can create coupling between the front end (client-side) and the back end. And in this case, the client requires to have knowledge about the process of individual services being decomposed. This makes it difficult to handle the client and refactor services. 
  • Services that have public endpoints are at the attack surface and require to be hardened. 
  • Services must expose protocols like WebSocket or HTTP that are client-friendly and this limits the communication protocol choices. 

But with the right use of an API gateway, the team can meet business needs, because a gateway helps address these issues by decoupling clients from services. Gateways can perform various functions, and these can be grouped into the following design patterns –

Gateway Design Patterns  What do They do? 
Gateway Aggregation
  • It is used to aggregate various individual requests into one single request.
  • Gateway aggregation is applied by the development team when a single operation requires calls to various backend services.
Gateway Routing
  • This is a process where the gateway is used as a reverse proxy for routing requests to a single or more backend service with the use of layer 7 routings.
  • Here, the gateway offers clients a single endpoint and decouples clients from services.
Gateway Offloading
  • Here, the usage of the gateway is done to offload functionalities from various individual services.
  • It can also be helpful in consolidating the functions in one place rather than making all the services in the system responsible for the implementation of the functions.

Some of the major examples of functionalities that can be offloaded by the development team to a gateway are – 

  • Authentication
  • SSL termination
  • Client rate limiting (throttling)
  • IP allow list or blocklist
  • Response caching
  • Logging and monitoring
  • GZIP compression
  • Web application firewall

For implementing an API gateway to the application, here are some major options – 

API Gateway Implementation  Approach 
Azure Application Gateway
  • A load-balancing service with the capability to perform SSL termination and layer-7 routing.
  • This gateway also offers a web application firewall (WAF).
Azure API Management 
  • This implementation approach is known as the turnkey solution which can be used while publishing APIs to both internal and external customers. 
  • Features of this approach are beneficial in handling public-facing APIs like authentication, IP restrictions, rate limiting, and more.
Azure Front Door
  • Azure Front Door is a modern cloud Content Delivery Network that offers secure, reliable, and fast access between the application’s dynamic and static web content and users.
  • It is an option used for content delivery using a global edge network by Microsoft.

Step 7: Managing Data in Microservices Architecture

The next step is managing data in a microservices architecture. Here one can use the single responsibility principle: each service is responsible for its own data store, which is private and cannot be accessed by other services. The main reason behind this rule is to avoid unintentional coupling between microservices, which can result from shared underlying data schemas. When a change occurs to a shared data schema, it must be coordinated across each service that depends on that database.

Basically, if there is isolation between the service’s data store, the team can limit the scope of change and also safeguard the agility of independent deployments. Besides this, each microservice might have its own queries, patterns, and data models which cannot be shared if one wants to have an efficient service approach.

Now let’s go through some tools that are used to make data management in microservices architecture an easy task with Azure –

Tools Usage
Azure Data Lake
  • With Azure Data Lake, developers can easily store data of any shape, size, and speed for processing and analytics across languages & platforms.

  • It also removes the complexities of storing and ingesting data by making it faster to run the data with streaming, batch, and interactive analytics.
Azure Cache
  • Azure Cache adds a caching layer for high-traffic applications that need to handle thousands of simultaneous users at speed, with all the benefits of a fully managed service.
Azure Cosmos DB 
  • Azure Cosmos DB is a cloud-based, fully managed NoSQL database.
  • It offers a response time of single-digit milliseconds and guarantees speed & scalability (a short usage sketch follows after this table).
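To illustrate the Azure Cosmos DB row above, here is a hedged sketch using the Microsoft.Azure.Cosmos SDK (the database, container, item, and partition-key names are illustrative):

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public record CatalogItem(string id, string name, string category);

public static class CosmosExample
{
    public static async Task SaveItemAsync(string endpoint, string key)
    {
        using var client = new CosmosClient(endpoint, key);
        Container container = client.GetContainer("catalog-db", "items");

        // The partition key value must match the container's partition key path (assumed here to be /category).
        var item = new CatalogItem("1", "Blue T-Shirt", "apparel");
        await container.CreateItemAsync(item, new PartitionKey(item.category));
    }
}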

Step 8: Microservices Design Patterns

The last step in creating a microservices architecture using Azure is applying design patterns. They help increase the velocity of application releases by decomposing the app into various small services that are autonomous and deployed independently. This process might come with some challenges, but the design patterns defined here can help mitigate them –

Microservices Design Patterns
  • Anti-Corruption Layer: It can help in implementing a facade between legacy and new apps to ensure that the new design is not dependent on the limitations of legacy systems.
  • Ambassador: It can help in offloading common client connectivity tasks like routing, monitoring, logging, and more.
  • Bulkhead: It can isolate critical resources like CPU, memory, and connection pool for each service and this helps in increasing the resiliency of the system.
  • Backends for Frontends: It helps in creating separate backend services for various clients like mobile or desktop.
  • Gateway Offloading: It helps each microservice to offload the service functionalities that are shared.
  • Gateway Aggregation: It aggregates single service requests to various individual microservices so that chattiness can be reduced.
  • Gateway Routing: It helps in routing requests to various microservices with the use of a single endpoint.
  • Strangler Fig: It enables incremental refactoring by slowly replacing the functionalities of an application.
  • Sidecar: It helps in deploying components to offer encapsulation and isolation.

4. Conclusion

As seen in this blog, microservices architecture is very beneficial to develop large applications by accommodating various business needs, and using cloud infrastructure like Azure can enable easy migration from legacy applications. Azure and various tools can help companies to create and manage well-designed microservices applications in the competitive market.

5. Frequently Asked Questions about Azure Microservices:

1. What are Azure Microservices?

Microservices refers to the architectural methodology employed in the development of a decentralized application, wherein multiple services that execute specific business operations and interact over web interfaces are developed and deployed independently.

2. What are the Different Types of Microservices in Azure?

  • Azure Kubernetes Service (AKS): The Kubernetes control plane is hosted and maintained by Azure Kubernetes Service (AKS), which is a managed Kubernetes service.
  • Azure Container Apps: The Azure Container Apps service simplifies the often-tricky processes of container orchestration and other forms of administration.
  • Service Fabric: Microservices can be put together, deployed, and managed with the help of Service Fabric, which is a distributed systems platform. 

3. How do I Use Microservices in Azure?

Step-1: Domain analysis

The use of domain analysis to create microservice boundaries helps designers avoid several errors. 

Step-2: Design the services

To properly support microservices, application development must shift gears. Check out Microservices architecture design to learn more.

Step-3: Operate in production

Distributed microservices architectures need dependable delivery and monitoring mechanisms.

4. Is an Azure Function a Microservice?

Azure functions have the capability to be employed within a microservices architectural framework; however, it is important to note that Azure functions do not inherently possess the characteristics of microservices.

5. How Microservices are Deployed in Azure?

Create an Azure Container Registry in the same region as your microservices and connect it to a resource group for deployment. Container instances deployed from your registry will run on a Kubernetes cluster.

The post Building Microservices Architecture Design on Azure appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/azure-microservices/feed/ 0
Key Kubernetes Challenges and Solutions https://www.sysgenpro.com/blog/kubernetes-challenges/ https://www.sysgenpro.com/blog/kubernetes-challenges/#comments Fri, 30 Jun 2023 11:13:56 +0000 https://www.sysgenpro.com/blog/?p=11222 With the increasing use of containers for application deployment, Kubernetes, one of the best portable, open-source, and container-orchestration platforms, has become very popular for deploying and managing containerized applications.

The post Key Kubernetes Challenges and Solutions appeared first on sysgenpro Blog.

]]>
With the increasing use of containers for application deployment, Kubernetes, one of the best portable, open-source, and container-orchestration platforms, has become very popular for deploying and managing containerized applications.

The main reason behind it is that early on, companies ran applications on physical servers and had no defined resource boundaries for the same. For instance, if there are multiple apps that run on a physical server, there are chances that one app takes up the majority of the resources and others start underperforming due to the fewer resources available. For this, containers are a good way to bundle the apps and run them independently. And to manage the containers, Kubernetes is majorly used by Cloud and DevOps Companies. It restarts or replaces the failed containers which means that no application can directly affect the other applications.

Basically, Kubernetes offers a framework that runs distributed systems and deployment patterns. To know more about Kubernetes and key Kubernetes challenges, let’s go through this blog.

1. Kubernetes: What It can Do & Why Do You Need It?

Traditionally, companies used to deploy applications on physical servers which had no specific boundaries for resources and this led to an issue in which the performance of apps might suffer because of other apps’ resource consumption. 

To resolve this, bundling the apps and running them efficiently in containers – is a good way. This means that, in the production environment, the software development team needs to manage the containers that run the applications, and after that, the developers need to ensure that there is no downtime. For instance, if the container of the application goes down, another one needs to start. And handling this behavior can be really easy with Kubernetes. It offers a framework that can run distributed systems flexibly. 

Kubernetes platform takes care of the app failover & scaling, offers deployment patterns, and much more. Besides this, Kubernetes has the capability to provide:

Area of Concern How can Kubernetes be helpful?
Self-healing
  • Kubernetes is self-healing: it can restart containers, kill containers that don’t respond properly, replace containers, and avoid advertising containers to clients until they are ready (a minimal health-endpoint sketch follows after this table).
Storage Orchestration
  • Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
Automate Bin Packing
  • Kubernetes offers automatic bin packing with a cluster of nodes.
  • This can be used to run containerized tasks. Basically, the development team needs to tell Kubernetes how much memory(RAM) and CPU each container needs.
  • It will help them to fit containers onto the nodes to make the perfect use of the available resources.
Automated Rollbacks and Rollouts
  • Automated rollbacks and rollouts can be handled by Kubernetes.
  • Kubernetes can create new containers for the deployment process, and remove existing containers & adopt all the resources to the new ones.
  • This means that one can describe the desired state for the containers that are deployed using Kubernetes and it can help in changing the actual state to the desired state.
Service Discovery and Load Balancing
  • Kubernetes can expose containers with the use of the DNS name or IP address. This means that if there is high traffic to a container, it can distribute the network traffic and balance it so that the deployment is stable.
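Self-healing works best when each service exposes a health endpoint that Kubernetes liveness and readiness probes can call. A minimal ASP.NET Core sketch (the /healthz path is just an illustrative choice):

var builder = WebApplication.CreateBuilder(args);

// Register the built-in health-check services.
builder.Services.AddHealthChecks();

var app = builder.Build();

// Kubernetes probes can GET /healthz; a 200 response means the pod is healthy.
app.MapHealthChecks("/healthz");

app.Run();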

2. Key Kubernetes Challenges Organizations Face and Their Solutions

Many organizations have started adopting the Kubernetes ecosystem, and along with that they are facing many Kubernetes challenges. They need a robust change program, that is, a shift in mindset and software development practices, viewed from two important standpoints: the organization’s viewpoint and the development teams’ viewpoint. Let’s go through the key Kubernetes challenges and possible solutions:

2.1 Security

One of the biggest challenges users face while working with Kubernetes is security. The main reasons are the software’s complexity and its potential vulnerabilities: if the platform is not properly monitored, vulnerabilities can go undetected. Let’s look at where K8s security best fits within the entire software development process:

Kubernetes Security Architecture

Basically, when multiple containers are deployed, detecting vulnerabilities becomes difficult, and this makes it easier for hackers to break into your system. One of the best-known examples of a Kubernetes break-in is the Tesla cryptojacking attack, in which hackers infiltrated Tesla’s Kubernetes admin console, which led to cryptocurrency mining on Tesla’s AWS cloud resources. To avoid such Kubernetes security challenges, here are some recommendations you can follow:

Infrastructure Security

Area of Concern Recommendations
Network access to API Server
  • Kubernetes control plane access should not be allowed publicly on the internet.
  • It must be controlled by network access control lists limited to specific IP addresses.
Network access to Nodes
  • Nodes must not be exposed on the public internet directly.
  • It should be configured to accept connections only from the control plane on the specified ports, and to accept connections for Kubernetes services of type NodePort and LoadBalancer.
Kubernetes access to cloud provider API
  • Grant only the required cloud provider permissions to the Kubernetes control plane and nodes.
Access to the database (etcd)
  • It should be limited to the control plane only.
Database encryption
  • It’s best practice to encrypt the storage wherever possible for security.

Containers Security

Area of Concern Recommendations
Container Vulnerability
  • Scan all the containers for known Vulnerabilities.
Image signing
  • Sign container images to maintain system reliability.
Disallow privileged users
  • Consult the documentation properly, and create users inside the containers that have only the operating system privileges they need.
Storage isolation
  • Use container runtime with storage isolation.
  • For that, select container runtime classes.
Individual Containers
  • Use separate containers so that the private key is hidden and maximum security measures can be maintained.

Other Recommendations

Area of Concern Recommendations
Use App Modules
  • Use Security modules like AppArmor and SELinux to improve overall security.
Access Management
  • Make Mandatory Role Based Access Control (RBAC) to every user for authentication.
Pod Standards
  • Make sure that Pods meet predefined security standards.
Secure an Ingress with TLS
  • By specifying a Secret that contains a TLS private key and certificate, you can secure an Ingress.

2.2 Networking

Kubernetes networking allows admins to move workloads across public, private, and hybrid clouds. Kubernetes is mainly used to package the applications with proper infrastructure and deploy updated versions quickly. It also allows components to communicate with one another and with other apps as well.

The Kubernetes networking model is quite different from other networking platforms. It is based on a flat network structure that eliminates the need to map host ports to container ports. There are four key communication concerns that one should focus on:

  1. Pod to Pod Communication
  2. Pod to Service Communication
  3. External to Service Communication
  4. Highly-coupled container-to-container Communication

Besides this, there are some areas that might also face multi-tenancy and operational complexity issues, and they are:

  • During the time of deployment, if the process spans more than one cloud infrastructure, using Kubernetes can make things complex. The same issue arises when the same thing happens from different architectures with mixed workloads. 
  • Users face complexity when Kubernetes workloads, whose pods draw from a constantly changing pool of IP addresses and ports, have to work alongside systems that rely on static IP addresses. For instance, implementing IP-based policies becomes complex and challenging because pods use an effectively unlimited number of IPs.
  • One of the Kubernetes challenges with traditional networking approaches is multi-tenancy. This issue occurs when multiple workloads share resources. It also affects other workloads in the same Kubernetes environment if the allocation of resources is improper.

In these cases, the Container Network Interface (CNI) plug-in model helps solve Kubernetes networking challenges. It enables Kubernetes to integrate easily with the underlying infrastructure so that users can effectively access applications on various platforms. A service mesh, an infrastructure layer that handles network-based communication between services through APIs, can also help, and it gives developers an easier networking and deployment process.

Basically, these solutions can be useful to make container communication smooth, secure, seamless, and fast, resulting in a seamless container orchestration process. With this, the developers can also use delivery management platforms to carry out various activities like handling Kubernetes clusters and logs.

2.3 Interoperability

Interoperability can also be a serious issue with Kubernetes. When developers enable interoperable cloud-native applications on this platform, communication between the applications can get tricky. It starts affecting cluster deployment because an application may have issues executing on an individual node in the cluster.

This means Kubernetes doesn’t work as well at the production level as it does in development, staging, and QA. Besides this, when it comes to migrating an enterprise-class production environment, there can be many complexities like governance, interoperability, and performance. For this, there are some measures that can be taken into consideration and they are:

  • For production problems, fuel interoperability. 
  • One can use the same command line, API, and user interface. 
  • Leveraging collaborative projects in different organizations to offer services for applications that run on cloud-native can be beneficial. 
  • With the help of Open Service Broker API, one can enable interoperable cloud-native applications and increase the portability between providers and offers. 

2.4 Multi Cluster Headaches

Every developer or every team starts working with a single cluster in Kubernetes. However, many teams expect that the size or number of clusters will increase next year. And the number of clusters indeed increases with time. 

The reason behind such a dramatic increase in Kubernetes clusters is that the teams break down the development, staging, and production environment clusters. Developers can handle working on 3-4 clusters by using various types of open-source tools like k9s, and more. 

But the work becomes overwhelming even for a team when the number rises beyond two dozen clusters. You can definitely solve complex problems using multiple Kubernetes clusters but managing a high number of clusters is challenging. Even when each cluster grows, its complexities also increase which makes it even tougher to manage one cluster. 

Every Kubernetes cluster is responsible for various operations like updating, maintaining, and securing the cluster, addition or removal of the node, and more. These operations are of a dynamic nature and when they scale, they require a well-arranged Kubernetes multi-cluster management system in place. 

Such a system enables you to monitor each cluster and everything happening inside it. The operation team’s SREs can maintain the uniformity of your Kubernetes architecture and the performance of your application by monitoring the health of the cluster environment. 

Multi-cluster management is the solution to multi-cluster headaches. Developers should use a declarative approach to explain the configuration rather than individually connecting with the clusters using open-source tools. 

This description should determine the future state of the cluster which includes everything from app workloads to infrastructure. Providing such a detailed description helps recreate the entire cluster stack from the ground up whenever the need arises. 

2.5 Storage

Storage is one of the key Kubernetes challenges that many organizations face. When larger organizations try to use Kubernetes on their on-premises servers, Kubernetes starts creating storage issues. The main reason behind it is that the companies are trying to manage the whole storage infrastructure of the business without using cloud resources. This leads to memory crises and vulnerabilities. 

The permanent solution is to move the data to the cloud and reduce the storage from the local servers. Besides this, there are some other solutions for the Kubernetes storage issues and they are:

  1. Persistent Volumes:
    • Stateful apps rely on data being persisted and retrieved to run properly, so the data must survive pod, container, or node termination or crashes. This requires persistent storage – storage that outlives any individual pod, container, or node.
    • If a stateful application runs without persistent storage, its data is tied to the lifecycle of the node or pod; if the pod terminates or crashes, all of that data can be lost.
    • As a solution, follow these three simple storage rules (see the manifest sketch after this list):
      1. Storage must be available from all pods, nodes, or containers in the Kubernetes Cluster.
      2. It should not depend upon the pod lifecycle.
      3. Storage should be available regardless of any crash or app failure.
  2. Ephemeral Storage:
    • Ephemeral storage is volatile, temporary storage attached to an instance for its lifetime and used for data such as buffers, swap volumes, caches, and session data.
    • Containers can use a temporary filesystem (tmpfs) to read and write files, but when a container crashes, that tmpfs is lost.
    • This makes the container start again with a clean slate.
    • Also, multiple containers cannot share the same temporary filesystem.
  3. Some other Kubernetes challenges can be solved through:
    • Decoupling pods from the storage
    • Container Storage Interface (CSI)
    • Static provisioning and dynamic provisioning with Amazon EKS and Amazon EFS
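To make the persistent-storage rules above concrete, here is a minimal sketch of a PersistentVolumeClaim and a pod that mounts it. The claim name, storage class, image, and size are illustrative assumptions rather than values from this article.

# Minimal persistent storage sketch: the claim requests storage that outlives
# any single pod, and the pod mounts it like an ordinary volume.
# Names, the storage class, the image, and the size are hypothetical.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: stateful-app
spec:
  containers:
    - name: app
      image: registry.example.com/team/stateful-app:1.0.0
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data   # data survives pod restarts and rescheduling

If the pod crashes or is rescheduled to another node, a replacement pod can reattach the same claim and continue working with the same data.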

2.6 Cultural Change

Not every organization succeeds simply by adopting new processes and team structures or by implementing easy-to-use development technologies and tools. Organizations also need to continuously improve their portfolio of systems, which can affect work quality and timelines. For this reason, businesses must establish the right culture and come together to ensure that innovation happens at the intersection of business and technology.

Companies should also promote autonomous teams that are free to define their own work practices and internal goals. That autonomy motivates teams to produce high-quality, reliable output.

Once reliable output is in place, companies should focus on reducing uncertainty and building confidence, which comes from encouraging experimentation. An organization that is constantly active, trying new things, and reassessing is better placed to create an adaptive architecture that can be rolled back easily and extended with little risk.

Automation is the most essential ingredient of an evolutionary architecture. Adopting a new architecture, and handling the changes it brings effectively, means provisioning end-to-end infrastructure as code with tools such as Ansible and Terraform. Automation also extends to software development lifecycle practices such as version control and testable infrastructure under continuous integration. Companies that nurture this kind of environment tend to create iterative cycles, safety nets, and space for exploration; a small infrastructure-as-code sketch follows.
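As a minimal infrastructure-as-code sketch, the Ansible playbook below prepares a group of hosts to run Kubernetes workloads. The inventory group name and package choices are hypothetical and only illustrate the idea of versioned, repeatable provisioning.

# Minimal infrastructure-as-code sketch with Ansible.
# The "k8s_nodes" inventory group and the packages below are assumptions.
- name: Prepare Kubernetes worker nodes
  hosts: k8s_nodes
  become: true
  tasks:
    - name: Ensure the container runtime is installed
      ansible.builtin.apt:
        name: containerd
        state: present
        update_cache: true

    - name: Ensure the kubelet service is enabled and running
      ansible.builtin.service:
        name: kubelet
        state: started
        enabled: true

Because the playbook lives in version control, it can be reviewed, tested in CI, and replayed whenever a node has to be rebuilt.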

2.7 Scaling

Every business aims to expand its operations over time, but not all of them succeed, because their infrastructure may be poorly equipped to scale, and that is a major disadvantage. For companies running Kubernetes microservices, the platform’s complexity produces a huge amount of material to deploy, diagnose, and fix, which can be daunting. Without automation, scaling therefore becomes difficult, and for any company running mission-critical or real-time applications, outages are very damaging to user experience and revenue. The same is true for companies offering customer-facing services through Kubernetes.

If these issues persist, an open-source container management platform for running Kubernetes can be a solution, since it helps manage and scale applications across multi-cloud environments or on-premises. Such platforms typically handle functions like: 

  • Easy scaling of clusters and pods.
  • Infrastructure management across multiple clouds and clusters.
  • A user-friendly interface for deployment. 
  • Easy management of RBAC, workloads, and project policies.

Scaling can be done vertically as well as horizontally.

  • Suppose an application requires roughly 3 GiB of RAM and there is not much else going on in the cluster; a single node is enough. To be safe, you can provision a 5 GiB node, of which 3 GiB is in use and the remaining 2 GiB is held in reserve. When your application starts consuming more resources, that extra 2 GiB is immediately available, and you effectively pay for those resources only when they are used. This is vertical scaling.
Vertical Scaling
  • Kubernetes also supports horizontal auto-scaling. New nodes are added to the cluster whenever CPU, RAM, disk storage, or I/O reaches a defined usage limit, and they are added according to the cluster topology. The following image, and the pod-level sketch after it, illustrate the idea: 
Horizontal Scaling
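At the pod level, the same horizontal-scaling idea is commonly expressed with a HorizontalPodAutoscaler, which adds or removes replicas based on observed utilization. The sketch below is a minimal example; the Deployment name, replica bounds, and 70% CPU target are illustrative assumptions, not values from this article.

# Minimal HorizontalPodAutoscaler sketch (autoscaling/v2).
# The target Deployment name and the numbers are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app
  minReplicas: 2          # never scale below two replicas
  maxReplicas: 10         # cap the scale-out
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%

Combined with a cluster autoscaler, replicas that no longer fit on existing nodes are what trigger the node-level scaling described above.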

2.8 Deployment

The last challenge on our list is container deployment.

Companies sometimes face issues while deploying Kubernetes containers because the deployment process relies on various manual scripts. The work becomes harder when the development team has to perform many deployments through hand-written scripts, and this slows the process down. Some Kubernetes best practices for such issues are:

  • A Spinnaker pipeline for automated end-to-end deployment.
  • Built-in deployment strategies.
  • Continuous verification of new artifacts.
  • Fixing container image configuration issues.
  • Resolving node availability issues.

Common Errors Related to Kubernetes Deployments and Their Possible Solutions:

Common error: Invalid container image configuration
  • To identify this error, check the pod status. If it shows ErrImagePull or ImagePullBackOff, there was an error while pulling the image.
  • Use the kubectl describe command on the pod; the event log will contain a “Failed to pull image” message with the details.
  • For private registries, specify credentials in Kubernetes by adding an imagePullSecrets section to the manifest file (see the sketch after this table).
Common error: Node availability issues
  • The simplest check is to run the kubectl get nodes command and see whether any node is reporting a problem.
  • For resource overutilization issues, define requests and limits.
Common error: Container startup issues
  • To troubleshoot startup issues, use the kubectl describe pod command and examine the pod configuration and events.
  • Look at the Containers section and review the state of the container. If the state is Terminated and the Reason is Error, the application itself has a problem.
  • The Exit Code is a good starting point, since it indicates how the process ended.
Common error: Missing or misconfigured configuration
  • There are multiple ways to debug configuration problems, and one of them is the kubectl describe pod command.
  • The event log will reveal which configuration reference is failing, which makes the issue straightforward to resolve.
Common error: Persistent volume mount failures
  • Check the event log of the pod; it will indicate which volume caused the mount failure and the properties of that volume.
  • With that information you can review the volume configuration, create a volume definition that matches those properties, and retry the deployment.
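To illustrate the first two rows above, here is a minimal pod sketch that pulls from a private registry via imagePullSecrets and declares resource requests and limits. The secret name, image reference, and resource values are hypothetical.

# Minimal pod sketch: private-registry credentials via imagePullSecrets,
# plus requests/limits to avoid node overutilization.
# "regcred", the image reference, and the numbers are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  imagePullSecrets:
    - name: regcred   # a docker-registry secret created ahead of time
  containers:
    - name: demo-app
      image: registry.example.com/team/demo-app:1.0.0
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"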

3. How have Big Organizations Benefited after Using Kubernetes?

Here are some of the instances that show that big organizations have benefited from Kubernetes:

Booking.com
  • Challenge: After Booking.com migrated to the OpenShift platform, accessing infrastructure became easy for developers, but because Kubernetes was abstracted away, scaling support was hard for the platform team.
  • Solution: After a year on OpenShift, the team built its own vanilla Kubernetes platform and asked developers to learn Kubernetes itself.
  • Impact: Creating a new service used to take a few days; with containers, it can be done within 10 minutes.

AppDirect
  • Challenge: AppDirect, an end-to-end eCommerce platform, was deployed on Tomcat infrastructure with a process made complex by many manual steps, so better infrastructure was needed.
  • Solution: The team needed a way to deploy services faster, and after trying various technologies it chose Kubernetes, which enabled roughly 50 microservices running across 15 Kubernetes clusters.
  • Impact: Kubernetes supported a 10x growth of the engineering team and gave the company a platform with additional features that boost delivery velocity.

Babylon
  • Challenge: Babylon’s products leveraged AI and ML, but providing enough computing power in-house was difficult, and the company also wanted to expand.
  • Solution: The best option was to integrate the applications with Kubernetes and move the team toward Kubeflow.
  • Impact: Teams could run clinical validations within minutes on the Kubernetes infrastructure, and it helped Babylon roll out its cloud-native platform in multiple countries.

4. Conclusion

As seen in this blog, Kubernetes is an open-source container orchestration solution that helps businesses manage and scale their containers, clusters, nodes, and deployments. Because it is such a distinctive system, developers face many different Kubernetes challenges while working with its infrastructure and while managing and scaling workloads across cloud providers. But once they master the tool, Kubernetes offers a simple, declarative model for programming even complex deployments.

FAQs:

1. What is the biggest disadvantage of Kubernetes?

Understanding Kubernetes is very difficult. It takes time to learn how to implement Kubernetes effectively. Moreover, setting up and maintaining Kubernetes is complex, especially for small teams that have very limited resources.

2. What are the challenges of adopting Kubernetes?

Some of the biggest challenges you may face while adopting Kubernetes are network visibility and operability, cluster stability, security, and storage issues. 

3. What makes Kubernetes difficult?

Google created Kubernetes to handle large clusters at scale. So, its architecture is highly distributed and built to scale. With microservices at the core of its architecture, Kubernetes becomes complex to use.

The post Key Kubernetes Challenges and Solutions appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/kubernetes-challenges/feed/ 1
CI/CD Implementation with Jenkins https://www.sysgenpro.com/blog/ci-cd-jenkins/ https://www.sysgenpro.com/blog/ci-cd-jenkins/#comments Tue, 06 Jun 2023 10:19:13 +0000 https://www.sysgenpro.com/blog/?p=11060 In DevOps, CI/CD (Continuous Integration / Continuous Delivery) is a process used by software development companies to frequently update and deploy any app by automating the integration and delivery process. There are various tools available by which one can achieve this objective.

The post CI/CD Implementation with Jenkins appeared first on sysgenpro Blog.

]]>
In DevOps, CI/CD (Continuous Integration / Continuous Delivery) is a process used by software development companies to frequently update and deploy any app by automating the integration and delivery process. There are various tools available by which one can achieve this objective. 

Jenkins has built-in functionality for implementing the CI/CD process. Using a Jenkins CI/CD pipeline alongside your project will substantially accelerate your software development process.

You’re in the right place if you want to learn more about the Jenkins CI/CD pipeline. Read on, and in this article you’ll discover how to build a CI/CD pipeline using Jenkins.

1. What is Continuous Integration (CI)/Continuous Delivery (CD)?

“Continuous Integration” refers to the practice of frequently integrating new code into a shared repository. As a preventative measure, it employs automated checks to spot issues before they become major. While CI can’t guarantee bug-free code, it speeds up the process of locating and fixing problems.

With the “Continuous Delivery” process, code changes are prepared and kept releasable ahead of deployment. During this stage, the team determines what will be released to users and when; deployment is the crucial final step of the whole process.

When these two methods are combined, the resulting process is called continuous integration/continuous delivery (CI/CD), since all the stages are automated. With CI/CD in place, the team can deliver code more rapidly and reliably, and the whole process makes the team more nimble, productive, and confident.

The most popular DevOps tool for CI/CD pipelines is Jenkins, so let’s look at how to create CI/CD pipelines using Jenkins. 

2. What is Jenkins?

Jenkins is a free and open-source automation server. The tool streamlines the process of incorporating code changes into a project, and it performs Continuous Integration with the aid of plugins.

Jenkins’s adaptability, approachability, plugin ecosystem, and user-friendliness make it an ideal tool for constructing a CI/CD pipeline.

Here are some of the advantages of Jenkins:

  • Jenkins can be used as a simple continuous integration server or as a Continuous Delivery (CD) hub for any project. 
  • Jenkins runs on any major operating system, such as Windows, Linux, and macOS. 
  • It also has an intuitive web interface for installation and configuration, with real-time error checking and contextual guidance, so setup is seamless.
  • Jenkins is compatible with nearly every tool or plugin used in CI/CD.
  • Jenkins can also be extended with a wide range of plugins. 
  • Tasks can easily be distributed across multiple machines, so development, testing, and deployment run faster. 

2.1 Why use Jenkins for CI/CD?

Using Jenkins for CI/CD can be very beneficial. Here are the top three advantages:

1. Costs

Jenkins is open source, which gives it an edge over other tools: because it is free, there are no procurement costs. Additionally, developers can optimize infrastructure and cloud costs by applying an infrastructure-as-code (IaC) approach and using the available resources efficiently with Jenkins.

2. Plugins

Another valuable trait of Jenkins is the variety of plugins it offers, which lets teams set up CI processes in far less time. On top of that, Jenkins comes with a set of common tools and services that can be used both in the cloud and on-premises. Some of the most popular Jenkins plugins are the Build Pipeline plugin, test analysis plugins, and Dashboard View.

3. Faster integration and deployment 

Building and testing on each commit creates a fast-paced environment in which bugs are found and resolved quickly, so new features and releases can be delivered sooner. Previously, code integration and debugging were manual and complex, and a functional version only emerged after working through many commits. With Jenkins, integration happens after every commit, ensuring that the program’s functionality stays operational and available.

3. Building CI/CD Pipeline with Jenkins

3.1 Prerequisites

In order to follow the instructions and scenarios in this article, you will need the following:

  1. Git SCM. The most up-to-date 64-bit Windows version of Git (2.40.0) is used in this article.
  2. A repository on Github that can be integrated with the Jenkins CI CD system.
  3. Jenkins. 

Now, let’s first create a new GitHub repository that we will use to build CI/CD pipelines on Jenkins. 

3.1.1 Preparing the GitHub Repository

1. Open your browser, sign in to your GitHub account, and go to the repositories section.

GitHub Repository

2. Now, create a new repository by filling in details such as the repository name and description; some fields are optional, so you can skip them if you wish. You can also choose who can see your repository on the internet by making it public or private. Once you are done filling in the details, click on “Create Repository”.

Create Repository

3. Next, add a new file to the repository named “Jenkins”. This file (conventionally called a Jenkinsfile) is a text file that contains the definition of a Jenkins Pipeline, and it belongs in the repository alongside your code.

Under your repository tab, select Add file —> Create new file.

Quick Setup

4. Now, copy the script below and paste it into the Jenkins file.

A pipeline is a user-defined series of steps, expressed as code, that models your continuous delivery process.

The stages block groups the individual stage blocks of a pipeline, describing the flow of the Jenkins pipeline.

A step is a single task that performs a specific action at a particular point in a stage; a pipeline is therefore made up of a succession of stages, each containing steps.

pipeline {
    agent any

    stages {
        stage('Build') {
            steps {
                echo 'Building Example'
            }
        }
        stage('Test') {
            steps {
                echo 'Testing Example'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deploying Example'
            }
        }
    }
}

After creating the code, the file will appear much like the snapshot below.

Jenkins Pipeline Code

5. Go to the end of the page and click the “Commit new file” button.

Commit new file

6. Lastly, copy the GitHub repository URL. To do so, select the Code button and click the Copy icon on the right-hand side of the link. You will need this URL when constructing the Jenkins CI/CD pipeline.

Resultant URL

3.2 Creating a Jenkins CI/CD Pipeline

Creating any new feature requires several modifications and additions to the code, and each update involves tasks such as committing changes, compiling, testing, and packaging. 

Jenkins and similar automation platforms make these procedures quick and reliable. In this section we will learn to create CI/CD pipelines with Jenkins.

Step:1 Install Jenkins

The first step is installing Jenkins. If it is already installed on your PC, you can move on to the next step. 

Step:2 Creating a Jenkins CI-CD Pipeline

1. Start a web browser, enter your Jenkins URL (http://localhost:8080/) into the address bar, and sign in. Use “admin” as the username. You will find the initial password in the file at Drive C > ProgramData > Jenkins > .jenkins > secrets > initialAdminPassword.

Login to Jenkins

2. Create a new Jenkins Job: Select the New Item icon on the left side of the Jenkins Dashboard.

New Jenkins Job

3. Type a name for the new pipeline. Choose the “Pipeline” template and label it “pipelines-demo” for this article. Finally, click OK. Here, 

“Pipeline” enables a developer to define the entire application lifecycle as code. 

Choose Template

Here, other available options are Freestyle Project and Multi-configuration project. 

  • Jenkins Freestyle project allows developers to automate jobs like testing, packaging apps, creating apps, producing reports, etc. 
  • With a Multi-configuration project, one can create a single job with many configurations. 

4. On the configuration screen, select the General tab and check the GitHub project box. Then set the Project URL to the repository URL you copied earlier.

GitHub Project URL

5. Select the “GitHub hook trigger for GITScm polling” in the Build Triggers section, as seen in the picture below.

Build Triggers
Configure a Pipeline Job with SCMScript

6. If you are configuring the pipeline from a script stored in SCM, go to the Pipeline section and make the following selections there:

  • In Definition box, select: Pipeline script from SCM
  • SCM box, select: Git
  • Repository URL box, enter: Your git repository URL
  • In Branches to Build section, write “*/*” in the branch specifier field. 
Pipeline Job with SCMScript

7. After that, write the file name “Jenkins” in the Script Path field. The configuration is now complete, so click the “Save” button. 

Configure
Configure a Pipeline Job through a Direct Script

6. If you want to configure the pipeline directly without Git, select the “Pipeline script” option in the “Definition” field and write the pipeline creation script there. Check the image below for reference.

After writing the script, click the “Save” button. 

Pipeline Job through a Direct Script

3.3 Configuring a Webhook in GitHub

The Jenkins project requires a webhook to be set up in the GitHub repository before you run a new job. Whenever a push is made to the repository, the Jenkins server is notified via this webhook.

Set up the webhook by following the steps below: 

1. Open your GitHub repository’s “Settings” menu, then select the “Webhooks” tab. Choose “Add webhook” on the “Webhooks” page.

Settings
Add Webhooks

2. Then set the Payload URL field to your Jenkins URL followed by /github-webhook/. For instance, http://<your Jenkins URL>/github-webhook/.

Also change the Content type to “application/json”.

Add the Payload URL

3. Under “Which events would you like to trigger this webhook?”, click the “Let me select individual events” option.

Events to Trigger the Webhook

4. Check the “Pull request reviews”, “Pushes”, and “Pull requests” boxes at the bottom of the page. When these events occur, GitHub will send a payload to Jenkins.

Trigger the sending of a payload to Jenkins

5. Click the Add webhook button at the end of the page to save and validate the webhook. If the webhook was validated successfully, a message like the one in the snapshot appears at the very top of the page.

Checked out with the webhook

3.4 Executing the Jenkins CI/CD Pipeline Job

How do you verify that the pipeline works once it is set up? You can see the current state of your pipeline with Jenkins’ Pipeline Stage View plugin.

1. Click the “Build Now” option to begin the build procedure and create the initial build data. 

Build Now
Create Initial Build Data

2. Put the pipeline through its paces by adding a dummy file to the repository. Simply navigate back to your GitHub repository and select Add File —> Create new file to start.

Create a new file with the name “demofile” and fill it with any demo text.

Demo file

After you finish, select “Commit new file” at the bottom of the page.

3. If you return to the status page for your Jenkins pipeline, you will find a new build entry with a single commit, as seen below. This is the result you get when the pipeline job is configured with a script from SCM.

Stage View SCMScript

4. If you configured the pipeline job through a direct script, the result below is what you will see when you click the “Build now” option. Once you add or update anything in GitHub, it will appear here. 

Stage View Direct Script

4. Conclusion

In this article, we learned how to implement a Jenkins CI/CD pipeline to automate the software development lifecycle. As a bonus, you now know how to use Jenkins to keep a CI/CD chain in a software project running smoothly.

You can also find a wealth of additional information online to deepen your understanding of pipelines. So, how likely are you to implement a Jenkins CI/CD pipeline in your projects? Let us know in the comments!

FAQs

What is the role of Jenkins in CI/CD?

Jenkins is an open-source automation server used in DevOps to execute CI/CD workflows called pipelines. 

What is the difference between CI and CD? 

In the process of code development and deployment, CI is the first phase which focuses on preparing the code for build/test or release. Meanwhile, CD is the second phase which includes the actual release/deploy or deployment of the code. 

Why is CI CD pipeline used?

The CI/CD pipeline is used to automate the entire software delivery process. It builds a codebase, runs tests, and deploys the new version of the app. CI/CD pipelines allow quick iterations, eliminate manual errors, and offer a standardized feedback loop to developers. 

Is Jenkins a free tool?

Yes, Jenkins is a free and open-source automation software DevOps tool. So, developers don’t have to deal with any additional procurement costs to use Jenkins. 

The post CI/CD Implementation with Jenkins appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/ci-cd-jenkins/feed/ 1