Docker Best Practices

Although Docker has become almost synonymous with containers and is used by many software development companies during development, several other container tools and platforms have evolved to facilitate building and operating containers. Securing container-based apps built with those technologies follows security principles similar to Docker's. To help you create more secure containers, we have assembled some of the most important Docker best practices into one blog post, making it one of the most thorough pieces of practical advice available. Shall we begin?

1. Docker Development Best Practices

In software development, adhering to best practices while working with Docker can improve the efficiency and reliability of your software projects. The best practices mentioned below help you optimize images, improve security in the Docker container runtime and the host OS, and ensure smooth deployment processes and maintainable Docker environments.

1.1 How to Keep Your Images Small

Small images are faster to pull over the network and load into memory when starting containers or services. There are a few rules of thumb to keep the image size small:

  • Begin with a suitable basic image: If you require a JDK, for example, you might want to consider using an existing Docker Image like eclipse-temurin instead of creating a new image from the ground up.
  • Implement multistage builds: For example, you may develop your Java program using the maven image, then switch to the tomcat image, and finally, deploy it by copying the Java assets to the right place, all within the same Dockerfile. This implies that the built-in artifacts and environment are the only things included in the final image, rather than all of the libraries and dependencies.
  • If you have to use a version of Docker without multistage builds, try to reduce the number of layers in your image by minimizing the number of separate RUN lines in your Dockerfile. You can do this by using your shell’s built-in capabilities to merge several commands into a single RUN line. Consider these two snippets: the first produces two image layers, while the second produces just one.
RUN apt-get -y update
RUN apt-get install -y python

or

RUN apt-get -y update && apt-get install -y python
  • If you have multiple images with a lot in common, consider creating your base image with the shared components, and basing your unique images on that. Docker only needs to load the common layers once, and they are cached. This means that your derivative images use memory on the Docker host more efficiently and load faster.
  • To keep your production image lean but allow for debugging, consider using the production image as the base image for the debug image. Additional testing or debugging tools can be added on top of the production image.
  • Whenever deploying the application in different environments and building images, always tag images with useful tags that codify version information, intended destination (prod or test, for instance), stability, or other useful information. Don’t rely on the automatically created latest tag.
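For example (the registry, image name, and versions are placeholders), tag each build with explicit version and environment information instead of relying on the automatic latest tag:

docker build -t registry.example.com/myapp:1.4.2 .
docker tag registry.example.com/myapp:1.4.2 registry.example.com/myapp:1.4.2-prod
docker push registry.example.com/myapp:1.4.2
docker push registry.example.com/myapp:1.4.2-prod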

1.2 Where and How to Persist Application Data

  • Keep application data out of the container’s writable layer and away from storage drivers. Compared to utilizing volumes or bind mounts, storing data in the container’s writable layer makes it larger and less efficient from an I/O standpoint.
  • Alternatively, use volumes to store data.
  • When working in a development environment, bind mounts can be useful for mounting directories such as source code or newly produced binaries into containers. For production, use a volume instead, mounted at the same location where you used a bind mount during development.
  • During production, it is recommended to utilize secrets for storing sensitive application data that services consume, and configs for storing non-sensitive data like configuration files. You may make use of these capabilities of services by transforming from standalone containers to single-replica services.
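A minimal docker-compose sketch combining these ideas (the service name, image, and secret file are placeholders):

services:
  web:
    image: myapp:1.0
    volumes:
      - app_data:/var/lib/app   # named volume for persistent application data
    secrets:
      - db_password             # sensitive value, kept out of the image

volumes:
  app_data:

secrets:
  db_password:
    file: ./db_password.txt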

1.3 Use CI/CD for Testing and Deployment

Use Docker Hub or another continuous integration/continuous delivery pipeline to automatically build, tag, and test Docker images whenever you make a pull request or check in changes to source control.

Make the pipeline even more secure by requiring images to be signed before they are sent to production, so that the development, quality, and security teams each test and approve an image before it is released.
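As a minimal sketch (the registry, repository, and tag are placeholders), Docker Content Trust can be enabled so that images are signed on push and verified on pull:

# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# Pushing now signs the image with your content-trust keys
docker push registry.example.com/myteam/myapp:1.4.2

# Pulling verifies the signature and fails for unsigned images
docker pull registry.example.com/myteam/myapp:1.4.2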

2. Docker Best Practices for Securing Docker Images

Let’s have a look at the best practices for Docker image security.

2.1 Use Minimal Base Images

When creating a secure image, selecting an appropriate base image is the initial step. Select a small, reputable image and make sure it’s constructed well.

Over 8.3 million repositories are available on Docker Hub. Among them are Official Images, a collection of open-source and drop-in solution repositories published by Docker. Images from verified publishers are also available on Docker Hub.

Organizations that work with Docker produce and maintain these high-quality images, with Docker ensuring the legitimacy of their repository content. Keep an eye out for the Verified Publisher and Official Image badges when you choose your base image.

Pick a simple base image that fits your requirements when creating your image using a Dockerfile. A smaller base image doesn’t only make your image smaller and faster to download, but also reduces the amount of vulnerabilities caused by dependencies, making your image more portable.

As an additional precaution, you might want to consider creating two separate base images: one for development and unit testing, and another for production and beyond. Compilers, build systems, and debugging tools are needed while you build and test, but they are usually unnecessary in the production image. Using a minimal image with few dependencies is one way to reduce the attack surface.

2.2 Use Fixed Tags for Immutability

Versions of Docker images are often managed using tags. As an example, the most recent version of a Docker image may be identified by using the “latest” tag. But since tags are editable, it’s conceivable for many images to have the same most recent tag, which can lead to automated builds acting inconsistently and confusingly.

To make sure tags can’t be changed or altered by later image edits, you can choose from three primary approaches:

  • If an image has many tags, the build process should choose the one with the crucial information, such as version and operating system. This is because more specific tags are preferred.
  • A local copy of the images should be kept, maybe in a private repository, and the tags should match those in the local copy.
  • Using a private key for cryptographic image signing is now possible with Docker’s Content Trust mechanism. This ensures that both the image and its tags remain unaltered.
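As an illustration (the image name and digest are placeholders), you can pin a base image to a specific version tag, or even to an immutable content digest, instead of relying on latest:

# Avoid: the "latest" tag can point to different images over time
# FROM alpine:latest

# Better: pin a specific version tag
FROM alpine:3.19

# Strictest: pin an immutable content digest (placeholder digest shown)
# FROM alpine:3.19@sha256:<digest>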

2.3 Use of Non-Root User Accounts

Recent research has shown that the majority of images, 58% to be exact, are using the root user ID (UID 0) to run the container entry point, which goes against Dockerfile recommended practices. Be sure to include the USER command to alter the default effective UID to a non-root user, because very few use cases require the container to run as root.

In addition, Openshift necessitates extra SecurityContextConstraints, and your execution environment may automatically prohibit containers operating as root.

To run without root privileges, you might need to add a few lines to your Dockerfile, such as:

  • Verify that the user listed in the USER instruction is indeed present within the container.
  • Make sure that the process has the necessary permissions to read and write to the specified locations in the file system.
# Base image
FROM alpine:3.12

# Create a user 'app' and assign ownership and permissions
RUN adduser -D app && chown -R app /myapp-data

# ... copy application files

# Switch to the 'app' user
USER app

# Set the default command to run the application
ENTRYPOINT ["/myapp"]

It is possible to encounter containers that begin as root and then switch to a regular user using the gosu or su-exec commands.

Another reason containers might use sudo is to execute certain commands as root.

Although these two options are preferable to operating as root, they might not be compatible with restricted settings like Openshift.

3. Best Practices for Local Docker

Let’s discuss Local Docker best practices in detail.

3.1 Cache Dependencies in Named Volumes

Install code dependencies as the container starts up, rather than baking them into the image. Using Docker’s named volumes to store a cache significantly speeds things up compared to installing each gem, pip, and yarn library from scratch every time the services are restarted (hello NOKOGIRI). A typical docker-compose configuration for this might look like:

services:
  rails_app:
    image: custom_app_rails
    build:
      context: .
      dockerfile: ./.docker-config/rails/Dockerfile
    command: ./bin/rails server -p 3000 -b '0.0.0.0'
    volumes:
      - .:/app
      - gems_data:/usr/local/bundle
      - yarn_data:/app/node_modules

  node_app:
    image: custom_app_node
    command: ./bin/webpack-dev-server
    volumes:
      - .:/app
      - yarn_data:/app/node_modules
      
volumes:
  gems_data:
  yarn_data:

To significantly reduce startup time, put the built dependencies in named volumes. The exact locations to mount the volumes will differ for each stack, but the general idea remains the same.

3.2 Don’t Put Code or App-Level Dependencies Into the Image

When you start a docker-compose run, the application code will be mounted into the container and synchronized between the container and the local system. The main Dockerfile, where the app runs, should only contain the software that is needed to execute the app.

You should only include system-level dependencies in your Dockerfile, such as ImageMagick, and not application-level dependencies such as Rubygems and NPM packages. When dependencies are baked into the image at the application level, rebuilding the image every time new ones are added becomes tedious and error-prone. Instead, we incorporate the installation of such requirements into a startup routine.

3.3 Start Entrypoint Scripts with set -e and End with exec “$@”

Using entrypoint scripts to install dependencies and handle additional setups is crucial to the configuration we’ve shown here. At the beginning and end of each of these scripts, you must incorporate the following elements:

  • Next to #!/bin/bash (or something similar) at the start of the code, insert set -e. If any line returns an error, the script will terminate automatically.
  • Put exec "$@" at the end of the file. Without it, the command you pass to the container (for example, the command: directive in docker-compose) will never be executed. A minimal sketch of such a script follows this list.
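A minimal sketch of such an entrypoint script (the dependency commands are placeholders for whatever your stack needs):

#!/bin/bash
set -e  # stop immediately if any command fails

# Install or refresh app-level dependencies at startup (placeholder commands)
bundle check || bundle install
yarn install --check-files

# Hand control over to the command passed by Docker
exec "$@"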

4. Best Practices for Working with Dockerfiles

Here are detailed best practices for working with Dockerfiles.

4.1 Use Multi-Stage Dockerfiles

Now imagine that you have some project contents (such as development and testing tools and libraries) that are important for the build process but aren’t necessary for the application to execute in the final image.

Again, the image size and attack surface will rise if you include these artifacts in the final product even though they are unnecessary for the program to execute.

The question then becomes how to partition the construction phase from the runtime phase. Specifically, how can the build dependencies be removed from the image while remaining accessible during the image construction process? In that case, multi-stage builds are a good option. With the functionality of multi-stage builds, you can utilize several temporary images during the building process, with only the most recent one becoming the final artifact:

Example:-

# Stage 1: Build the React app
FROM node:latest as react_builder
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# Stage 2: Create the production image
FROM nginx:stable
COPY --from=react_builder /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

4.2 Use the Least Privileged User

Now, which OS user will be utilized to launch the program within this container when we build it and run it?

Docker will use the root user by default if no user is specified in the Dockerfile, which may pose security risks. On the other hand, running containers with root rights is usually unnecessary. This creates a security risk as containers may get root access to the Docker host when initiated.

Therefore, an attacker may more easily gain control of the host and its processes, not just the container, by exploiting an application that runs inside the container with root capabilities. This is especially dangerous when the application itself is susceptible to attacks.

The easiest way to avoid this is to execute the program within the container as the specified user, as well as to establish a special group in the Docker image to run the application.

Using that username with the USER directive, you can then run the program as a non-root user.

Tips: You can utilize the generic user that comes packaged with some images, so you may avoid creating a new one. For instance, the official Node.js image already includes a generic user named node that you can use to run the application inside the container.
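A minimal sketch, assuming the official Node.js image and an application copied to /usr/src/app (the entry point server.js is a placeholder):

FROM node:20-alpine

WORKDIR /usr/src/app
COPY --chown=node:node . .
RUN npm ci --omit=dev

# Reuse the non-root "node" user that ships with the official image
USER node

CMD ["node", "server.js"]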

4.3 Organize Your Docker Commands

  • Combine commands into a single RUN instruction whenever possible; for instance, you can run many instructions using the && operator.
  • To minimize the amount of file system modifications, arrange the instructions in a logical sequence. For instance, group operations that modify the same files or directories together.
  • If you find yourself using many separate commands, reevaluate your current approach.
  • One way to achieve this is to reduce the number of separate COPY commands, for example by copying a whole directory in a single instruction instead of copying its files one by one.

5. Best Practices for Securing the Host OS

Below are a few best practices for securing the host OS with Docker.

5.1 OS Vulnerabilities and Updates

After selecting an operating system, it is critical to establish consistent procedures and tools for validating the versioning of packages and components within the base OS. Take into consideration that a container-specific operating system may still have components that are vulnerable and need fixing. Regularly scan and check for component updates using tools offered by the operating system vendor or other reputable organizations.

To be safe, always upgrade components when the vendor suggests it, even if the OS package does not contain any known security flaws. You can also choose to reinstall an updated operating system if that is more convenient. Just as containers should be immutable, the host running containerized apps should also be kept immutable, and application data should not be persisted on the host OS itself. Following this practice prevents drift and drastically lowers the attack surface. Finally, container runtime engines like Docker frequently ship updates with new features and bug fixes; applying the latest patches helps reduce vulnerabilities.

5.2 Audit Considerations for Docker Runtime Environments

Let’s examine the following:

  • Container daemon activities
  • These files and directories:
    • /var/lib/docker
    • /etc/docker
    • docker.service
    • docker.socket
    • /etc/default/docker
    • /etc/docker/daemon.json
    • /usr/bin/docker-containerd
    • /usr/bin/docker-runc
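A minimal sketch of Linux audit rules covering these paths using auditctl (exact paths and key names may differ by distribution):

# Watch Docker daemon files and directories for changes (-w path, -k key)
auditctl -w /var/lib/docker -k docker
auditctl -w /etc/docker -k docker
auditctl -w /etc/default/docker -k docker
auditctl -w /etc/docker/daemon.json -k docker
auditctl -w /usr/bin/docker-containerd -k docker
auditctl -w /usr/bin/docker-runc -k docker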

5.3 Host File System

Ensure that containers operate with the minimum required file system permissions. Containers should not be able to mount sensitive directories on the host’s file system, particularly folders containing OS configuration data. This is dangerous because the Docker service runs as root, so an attacker who compromises it could gain control of the host system.
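For example (the image and paths are placeholders), if a container genuinely needs host configuration data, mount it read-only rather than writable:

# Mount a host directory into the container as read-only
docker run --rm -v /host/config:/app/config:ro myapp:1.0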

6. Best Practices for Securing Docker Container Runtime

Let’s follow these practices for the security of your Docker container runtime.

6.1 Do Not Start Containers in Privileged Mode

Unless absolutely necessary, avoid privileged mode (--privileged) because of the security risks it poses. Running in privileged mode gives containers access to all Linux capabilities and lifts the restrictions normally enforced by cgroups, so they can access many features of the host system.

Using privileged mode is rarely necessary for containerized programs. Applications that require full host access or the ability to control other Docker containers are the ones that use privileged mode.
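Instead of --privileged, grant only the specific capabilities a workload needs. A minimal sketch (the image name and capability are placeholders for your actual requirements):

# Drop every capability, then add back only what the app needs
docker run --rm \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  myapp:1.0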

6.2 Vulnerabilities in Running Containers

You may add items to your container using the COPY and ADD commands in your Dockerfile. The key distinction is ADD’s suite of added functions, which includes the ability to automatically extract compressed files and download files from URLs, among other things.

There may be security holes due to these supplementary features of the ADD command. A Docker container might be infiltrated with malware, for instance, if you use ADD to download a file from an insecure URL. Thus, using COPY in your Dockerfiles is a safer option.
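To illustrate the difference (the archive name is a placeholder), ADD silently extracts local archives, while COPY only copies exactly what you give it:

# ADD auto-extracts local tar archives into the destination
ADD app.tar.gz /opt/app/

# COPY does exactly one thing: copy files or directories as-is
COPY app.tar.gz /opt/app/app.tar.gz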

6.3 Use Read-Only Filesystem Mode

Run containers with read-only mode enabled for their root filesystems. This lets you quickly monitor writes to explicitly designated folders, and it is one way to make containers more secure. Furthermore, avoid writing data inside containers: treat them as immutable and direct writes to a dedicated volume instead.
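A minimal sketch (the image, volume, and paths are placeholders):

# Root filesystem is read-only; only /tmp (tmpfs) and the data volume are writable
docker run --rm \
  --read-only \
  --tmpfs /tmp \
  -v app_data:/var/lib/app \
  myapp:1.0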

7. Conclusion

With its large user base and many useful features, Docker is a strong fit for the cloud-native ecosystem and will likely remain a dominant player in the industry. It offers significant benefits for programmers, and many companies aspire to adopt DevOps principles. Many developers and organizations continue to rely on Docker for developing and releasing software. For this reason, familiarity with the Dockerfile creation process is essential. Hopefully, this post has given you enough knowledge to create a Dockerfile that follows best practices.

React Design Patterns- A Comprehensive Guide

Traditional web development was once complicated, but with the arrival of React, the process has been simplified significantly. React also offers great ease of use thanks to its reusable components and extensive ecosystem.

The React ecosystem provides a large range of tools; some are used to fulfill various development requirements, whereas others help resolve different types of issues. React design patterns are quick and reliable solutions to typical development problems.

In ReactJS, you can find a large number of design patterns leveraged by Reactjs development companies and each serves a unique purpose in a development project. This article talks about some “need-to-know” design patterns for React developers. 

But before diving into the topic, it is important to know what are the design patterns in React and how they are useful for app development. 

1. What is a Reactjs Design Pattern?

ReactJS design patterns are solutions to common problems that developers face during a typical software development process. They are reusable and help reduce the size of the app code. There is no need to use any duplication process to share the component logic when using React design patterns.

While working on a software development project, complications are bound to arise. But with reliable solutions from Design patterns, you can easily eliminate those complications and simplify the development process. This also enables you to write easy-to-read code.

2. Benefits of Using React Design Patterns in App Development 

If you have any doubts about the effectiveness of React development, you may simply not have looked at the benefits of React design patterns yet. Their advantages are among the factors that make React development more effective.

2.1 Reusability

React Design Patterns offer reusable templates that allow you to build reusable components. Therefore, developing applications with these reusable components saves a lot of your time and effort. More importantly, you don’t have to build a React application from the ground up every time you start a new project.

2.2 Collaborative Development

React is popular for providing a collaborative environment for software development. It allows different developers to work together on the same project. If not managed properly, this can cause some serious issues. However, the design patterns provide an efficient structure for developers to manage their projects effectively. 

2.3 Scalability

Using Design patterns, you can write React programs in an organized manner. This makes app components simpler. So, even if you are working on a large application, maintaining and scaling it becomes easy. And because every component here is independent, you can make changes to one component without affecting another. 

2.4 Maintainability 

Design patterns are referred to as solutions for typical development issues because they provide a systematic approach to programming. They keep the code simple, which helps not only in developing the codebase but also in maintaining it, even when you are working on large React projects.

React Design Patterns make your code more decoupled and modular which also divides the issues. Modifying and maintaining a code becomes easy when dealing with small chunks of code. It is bound to give satisfactory results because making changes to one section of the code will not affect other parts in a modular architecture. In short, modularity promotes maintainability. 

2.5 Efficiency 

Alongside its component-based design, React provides fast loading times and quick updates thanks to the Virtual DOM. As an integral aspect of the architecture, the Virtual DOM improves the overall efficiency of the application, which also helps deliver an enhanced user experience.

Moreover, patterns such as memoization cache the results of expensive computations and renders so that unnecessary re-renders are avoided. Re-rendering takes time, but if the results are already cached they can be delivered immediately upon request, which improves the app’s performance.
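As a minimal sketch (the component and function names here are illustrative, not from the article), React.memo and useMemo can be combined to cache an expensive result:

import React, { useMemo, useState } from "react";

// A stand-in for an expensive calculation
const slowSquare = (n) => {
  for (let i = 0; i < 10000000; i++) {} // simulate heavy work
  return n * n;
};

// Re-renders only when `value` changes, not on every parent re-render
const SquareDisplay = React.memo(({ value }) => {
  return <p>Square: {value}</p>;
});

const Calculator = () => {
  const [number, setNumber] = useState(2);
  const [, setUnrelated] = useState(0);

  // useMemo caches the expensive result until `number` changes
  const square = useMemo(() => slowSquare(number), [number]);

  return (
    <div>
      <button onClick={() => setNumber((n) => n + 1)}>Next number</button>
      <button onClick={() => setUnrelated((c) => c + 1)}>Unrelated update</button>
      <SquareDisplay value={square} />
    </div>
  );
};

export default Calculator;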

2.6 Flexibility

Due to its component-based design, applying modifications to the React apps is convenient. Using this approach allows you to try out various combinations of the components to build unique solutions. Components design patterns also allow you to craft a suitable user interface. Your application needs such flexibility to succeed in the marketplace.

Unlike other famous web development frameworks, React doesn’t ask you to adhere to specific guidelines or impose any opinions. This offers ample opportunities for developers to express their creativity and try to mix and match various approaches and methodologies in React development. 

2.7 Consistency

Adhering to React design patterns provides a consistent look to your application and makes it user-friendly. The uniformity helps offer a better user experience whereas simplicity makes it easy for users to navigate across the app which increases user engagement. Both of these are important factors to boost your revenues.

3. Top Design Patterns in React that Developers Should Know 

Design patterns help you resolve issues and challenges arising during development projects. With so many efficient design patterns available in the React ecosystem, it is extremely difficult to include them all in a single post. However, this section sheds some light on the most popular and effective React design patterns.  

3.1 Container and Presentation Patterns

Container and presentation patterns allow you to reuse the React components easily. Because this design pattern divides components into two different sections based on the logic. The first is container components containing the business logic and the second one is presentation components consisting of presentation logic. 

Here, the container components are responsible for fetching data and carrying out necessary computations. Meanwhile, presentation components are responsible for rendering the fetched data and computed value on the user interface of the application or website. 

When using this pattern for React app development, it is recommended that you start with presentation components only. This will help you spot when you are passing down props that the intermediate components do not use and merely forward to the components below them.

If you are facing this problem then you have to use container components to separate the props and their data from the components that exist in the middle of the tree structure and place them into the leaf components. 

The example for container and presentation pattern:

import React, { useEffect, useState } from "react";
import UserList from "./UserList";

const UsersContainer = () => {
  const [users, setUsers] = useState([]);
  const [isLoading, setIsLoading] = useState(false);
  const [isError, setIsError] = useState(false);

  const getUsers = async () => {
    setIsLoading(true);
    try {
      const response = await fetch(
        "https://jsonplaceholder.typicode.com/users"
      );
      const data = await response.json();
      setIsLoading(false);
      if (!data) return;
      setUsers(data);
    } catch (err) {
      setIsError(true);
    }
  };

  useEffect(() => {
    getUsers();
  }, []);

  return <UserList users={users} isLoading={isLoading} isError={isError} />;
};

export default UsersContainer;



// the component is responsible for displaying the users

import React from "react";

const UserList = ({ isLoading, isError, users }) => {
  if (isLoading && !isError) return <div>Loading...</div>;
  if (!isLoading && isError) return <div>Error occurred. Unable to load users.</div>;
  if (!users) return null;

  return (
    <>
      <h2>Users List</h2>
      <ul>
        {users.map((user) => (
          <li key={user.id}>
            {user.name} (Mail: {user.email})
          </li>
        ))}
      </ul>
    </>
  );
};

export default UserList;

3.2 Component Composition with Hooks

Hooks were introduced in React 16.8, released in 2019, and quickly gained popularity. Hooks are functions designed to fulfill the requirements of components: they give functional components access to state and to the React component lifecycle. The state hook, the effect hook, and custom hooks are some examples.

Using Hooks with components allows you to make your code modular and more testable. By tying up the Hooks loosely with the components, you can test your code separately. Here is an example of Component composition with Hooks: 

// creating a custom hook that fetches users

import { useEffect, useState } from "react";

const useFetchUsers = () => {
  const [users, setUsers] = useState([]);
  const [isLoading, setIsLoading] = useState(false);
  const [isError, setIsError] = useState(false);
  const controller = new AbortController();

  const getUsers = async () => {
    setIsLoading(true);
    try {
      const response = await fetch(
        "https://jsonplaceholder.typicode.com/users",
        {
          method: "GET",
          credentials: "include",
          mode: "cors",
          headers: {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": "*",
          },
          signal: controller.signal,
        }
      );
      const data = await response.json();
      setIsLoading(false);
      if (!data) return;
      setUsers(data);
    } catch (err) {
      setIsError(true);
    }
  };

  useEffect(() => {
    getUsers();
    return () => {
      controller.abort();
    };
  }, []);

  return [users, isLoading, isError];
};

export default useFetchUsers;

Now, we have to import this custom hook and use it within the UsersContainer component.

import React from "react";
import UserList from "./UserList";
import useFetchUsers from "./useFetchUsers";

const UsersContainer = () => {
  const [users, isLoading, isError] = useFetchUsers();

  return <UserList users={users} isLoading={isLoading} isError={isError} />;
};

export default UsersContainer;

3.3 State Reducer Pattern 

When you are working on a complex React application with several pieces of state that rely on complex logic, it is recommended to use the state reducer design pattern with your custom state logic and an initialState value. The initial value can be null or some object.

Instead of changing the state of the component directly, you pass a reducer function when you use the state reducer design pattern in React. The reducer receives the current state and an action, and based on that action it returns a new state.

The action consists of an object with a type property. The type property either describes the action that needs to be performed or it mentions additional assets that are needed to perform that action. 

For example, the initial state of an authentication reducer may be an empty object, and a login action may be dispatched when the user signs in. In that case, the reducer returns a new state containing the logged-in user.

The code example for the state reducer pattern for counter is given below:

import React, { useReducer } from "react";

const initialState = {
  count: 0,
};

const reducer = (state, action) => {
  switch (action.type) {
    case "increment":
      return { ...state, count: state.count + 1 };
    case "decrement":
      return { ...state, count: state.count - 1 };
    default:
      return state;
  }
};

const Counter = () => {
  const [state, dispatch] = useReducer(reducer, initialState);

  return (
    <div>
      <p>Count: {state.count}</p>
      {/* dispatch the actions handled by the reducer above */}
      <button onClick={() => dispatch({ type: "increment" })}>Increment</button>
      <button onClick={() => dispatch({ type: "decrement" })}>Decrement</button>
    </div>
  );
};

export default Counter;

3.4 Provider Pattern 

If you want to avoid prop drilling, that is, passing props down through nested components in the tree that do not use them, you can use the provider design pattern. React’s Context API offers you this pattern.

import React, { createContext } from "react";
import "./App.css";
import Dashboard from "./dashboard";

export const UserContext = createContext("Default user");

const App = () => {
  return (
    <UserContext.Provider value="Logged-in user">
      <Dashboard />
    </UserContext.Provider>
  );
};

export default App;

// Dashboard component
import React, { useContext } from "react";
import { UserContext } from "../App";

const Dashboard = () => {
  const userValue = useContext(UserContext);
  return <div>{userValue}</div>;
};

export default Dashboard;

The above provider pattern code shows how you can use context to pass a value directly to deeply nested components. Both the provider and the consumer of the state have to use the same context. In the above code, the Dashboard component using UserContext is your consumer and the App component is your provider.

Take a look at the visual representation given below for better understanding.

Provider Pattern

When you don’t use the provider pattern, you have to pass props from component A to component D through prop drilling where components B and C act as intermediary components. But with the provider pattern, you can directly send the props from A to D. 

3.5 HOCs (Higher-Order Components) Pattern 

If you want to reuse the component logic across the entire application then you need a design pattern with advanced features. The higher-order component pattern is the right React pattern for you. It comes with various types of features like data retrieval, logging, and authorization. 

HOCs are built upon the compositional nature of the React functional components which are JavaScript functions. So, do not mistake them for React APIs. 

Any higher-order component in your application behaves like a JavaScript higher-order function. These functions are pure, with zero side effects. And just like JavaScript higher-order functions, HOCs act as decorator functions.

The structure of a higher-order React component is as given below: 

const MessageComponent = ({ message }) => {
  return <>{message}</>;
};

export default MessageComponent;



const WithUpperCase = (WrappedComponent) => {
  return (props) => {
    const { message } = props;
    const upperCaseMessage = message.toUpperCase();
    return <WrappedComponent {...props} message={upperCaseMessage} />;
  };
};

export default WithUpperCase;


import React from "react";
import "./App.css";
import WithUpperCase from "./withUpperCase";
import MessageComponent from "./MessageComponent";

const EnhancedComponent = WithUpperCase(MessageComponent);

const App = () => {
  return (
    <EnhancedComponent message="hello from the higher-order component" />
  );
};

export default App;

3.6 Compound Pattern

The collection of related parts that work together and complement each other is called compound components. A card component with many of its elements is a simple example of such a design pattern.

Compound Design Patterns

The functionality provided by the card component is a result of joint efforts from elements like content, images, and actions. 

import React, { useState } from "react";

const Modal = ({ children }) => {
  const [isOpen, setIsOpen] = useState(false);

  const toggleModal = () => {
    setIsOpen(!isOpen);
  };

  return (
    <div>
      {React.Children.map(children, (child) =>
        React.cloneElement(child, { isOpen, toggleModal })
      )}
    </div>
  );
};

const ModalTrigger = ({ isOpen, toggleModal, children }) => (
  <button onClick={toggleModal}>{children}</button>
);

const ModalContent = ({ isOpen, toggleModal, children }) =>
  isOpen && (
    <div>
      <span onClick={toggleModal}>×</span>
      {children}
    </div>
  );

const App = () => (
  <Modal>
    <ModalTrigger>Open Modal</ModalTrigger>
    <ModalContent>
      <h2>Modal Content</h2>
      <p>This is a simple modal content.</p>
    </ModalContent>
  </Modal>
);

export default App;

Compound components also offer an API that allows you to express connections between various components. 

3.7 Render Prop Pattern

React render prop pattern is a method that allows the component to share the function as a prop with other components. It is instrumental in resolving issues related to logic repetition. The component on the receiving end could render content by calling this prop and using the returned value. 

Because they use child props to pass the functions down to the components, render props are also known as child props. 

When several components need the same functionality, it is difficult for a single component in the React app to own that function or requirement. Such a situation is called a cross-cutting concern.

As discussed, the render prop design pattern passes the function as a prop to the child component. The parent component also shares the same logic and state as the child component. This would help you accomplish Separation of concerns which helps prevent code duplication. 

Leveraging the render prop method, you can build a single component that manages a concern, such as authentication, on its own and shares its logic and state with other components of the React application.

This allows any component that requires the authentication functionality and its state to access them, so developers don’t have to rewrite the same code for different components.

Render Prop Method Toggle code example:

import React from "react";

class Toggle extends React.Component {
  constructor(props) {
    super(props);
    this.state = { on: false };
  }

  toggle = () => {
    this.setState((state) => ({ on: !state.on }));
  };

  render() {
    return this.props.render({
      on: this.state.on,
      toggle: this.toggle,
    });
  }
}

export default Toggle;



import React from "react";
import Toggle from "./toggle";

class App extends React.Component {
  render() {
    return (
      <div>
        <h1>Toggle</h1>
        <Toggle
          render={({ on, toggle }) => (
            <div>
              <p>The toggle is {on ? "on" : "off"}.</p>
              <button onClick={toggle}>Toggle</button>
            </div>
          )}
        />
      </div>
    );
  }
}

export default App;

3.8 React Conditional Design Pattern

Sometimes while programming a React application, developers have to create elements according to specific conditions. To meet these requirements, developers can leverage the React conditional design pattern. 

For example, if you add an authentication process to your app, you need both a login and a logout button. Rendering one or the other based on the user’s state is known as conditional rendering. The login button would be visible to users who have not yet signed in.

The most common conditional constructs used in this pattern are the if statement and the if/else statement. The if statement is used when only a single condition needs to be checked, while if/else is used when both outcomes of a condition must be handled. When there are several possible cases, a switch/case statement is often the cleaner choice.

Here is an example using an if/else statement (LoginButton and LogoutButton are placeholder components):

const MyComponent = ({ isLoggedIn }) => {
  if (isLoggedIn) {
    return <LogoutButton />;
  } else {
    return <LoginButton />;
  }
};

export default MyComponent;



And here is an example using a switch statement (the returned components are placeholders):

const MyComponent = ({ status }) => {
  switch (status) {
    case "loading":
      return <LoadingSpinner />;
    case "error":
      return <ErrorMessage />;
    case "success":
      return <SuccessContent />;
    default:
      return null;
  }
};

export default MyComponent;

4. Conclusion

The React design patterns discussed in this article are some of the most widely used during development projects. You can leverage them to bring out the full potential of the React library. Therefore, it is recommended that you understand them thoroughly and implement them effectively. This would help you build scalable and easily maintainable React applications.

Kubernetes Deployment Strategies- A Detailed Guide

Key Takeaways

  1. To deliver resilient apps and infrastructure, shorten time to market, create deployments without downtime, release features & apps faster and operate them with greater flexibility, choosing the right Kubernetes deployment strategies is important.
  2. All the K8s deployment strategies use one or more of these use cases:
    • Create: Roll out new K8s pods and ReplicaSets.
    • Update: Declare new desired state and roll out new pods and ReplicaSets in a controlled manner.
    • Rollback: Revert the K8s deployment to its previous state.
    • Scale: Increase the number of pods and Replica Sets in the K8s deployment without changing them.
  3. One should consider the following factors while selecting any K8s deployment strategy:
    • Deployment Environment
    • How much downtime you can spare
    • Stability of the new version of your app
    • Resources availability and their cost
    • Project goals
  4. Rolling or Ramped deployment is Kubernetes’ default rollout method. It scales down old pods only after the new pods become ready, and you can pause or cancel the deployment without taking the whole cluster offline.
  5. Recreate Deployment, Blue/Green Deployment, Canary Deployment, Shadow Deployment, and A/B Deployment are other strategies one can use as per requirements.

Kubernetes is a modern-age platform that enables business firms to deploy and manage applications. This container orchestration technology enables developers to streamline infrastructure for microservice-based applications, which eventually helps in managing workloads. Kubernetes supports different deployment resources for updating, constructing, and versioning CI/CD pipelines. Because Kubernetes applications are updated frequently, it becomes essential for the deployment team to use well-chosen approaches for delivering the service.

For this, software development companies have to choose the right deployment strategy, as it is important for deploying production-ready containerized applications into Kubernetes infrastructure. Common options include rolling updates, blue/green deployments, and canary releases, among others covered below. Kubernetes helps in deploying and autoscaling the latest apps by rolling new code changes into production environments. To learn more about Kubernetes deployment and its strategies, let’s go through this blog.

1. What is Kubernetes Deployment?

A Deployment describes an application’s life cycle: which images to use, how many pods are required, and how they should be updated. In other words, a Deployment in Kubernetes is a resource object used to specify the desired state of the application. It is declarative, meaning developers describe the target state rather than the steps to reach it; the Deployment controller then works to reach that state efficiently.

2. Key Kubernetes Deployment Strategies

Kubernetes deployments are configured declaratively in YAML files that specify the deployment strategy, the life of the application, and how it will be updated over time. When deploying applications to a K8s cluster, the selected deployment strategy determines how running applications are updated from an older version to a newer one. Some Kubernetes deployment strategies involve downtime, while others introduce testing concepts and enable user analysis.

2.1 Rolling Deployment Strategy

Rolling Deployment Strategy
  • Readiness probes

Readiness probes let the rollout detect when a new pod is ready to serve traffic; if the probe fails, no traffic is sent to that pod. This approach is mostly used when an application needs specific initialization steps before it goes live. It also safeguards a pod from receiving traffic it cannot yet handle, for example while it is still warming up.

Once the readiness probe detects that the new version of the application is available, the older version gets removed. If any issues arise, the rollout can be paused and the previous version rolled back to avoid downtime across the Kubernetes cluster. Because each pod in the application is replaced one by one, the deployment can take some time on larger clusters. If a new deployment is triggered before the previous one has finished, the previous rollout is superseded and the new version is rolled out according to the new deployment.

When there is something specific to the pod and it gets changed, the rolling deployment gets triggered. The change here can be anything from the environment to the image to the label of the pod. 
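A minimal readinessProbe sketch (the path, port, and timings are assumptions to adapt to your app), added under the pod template’s spec:

containers:
  - name: e-commerce-container
    image: sysgenpro/e-commerce:latest
    ports:
      - containerPort: 80
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10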

  • MaxSurge

It specifies the maximum number of extra pods that can be created above the desired replica count while the rollout is in progress.

  • MaxUnavailable

It defines the maximum number of pods that are allowed to be unavailable while the rollout is in progress.

In this example:

  • replicas: 3 indicates that there are initially three replicas of your application running.
  • rollingUpdate is the strategy type specified for the deployment.
  • maxUnavailable: 1 ensures that during the rolling update, at most one replica is unavailable at a time.
  • maxSurge: 1 allows one additional replica to be created before the old one is terminated, ensuring the desired number of replicas is maintained.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
        - name: e-commerce-container
          image: sysgenpro/e-commerce:latest
          ports:
            - containerPort: 80
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1

2.2 Recreate Deployment

Recreate Deployment

Here is a visual representation of the Recreate deployment. It is a two-step process: all old pods are deleted, and once that is done, new pods are created. It may lead to downtime, as users have to wait until the old pods are deleted and the new ones are created. However, this strategy is still supported by Kubernetes for performing deployments.

Recreate deployment strategy helps eliminate all the pods and enables the development team to replace them with the new version. This recreate deployment strategy is used by the developers when a new and old version of the application isn’t able to run at the same time. Here, in this case, the downtime amount taken by the system depends on the time the application takes to shut down its processing and start back up. Once the pods are completely replaced, the application state is entirely renewed. 

In this example, the strategy section specifies the type as Recreate, indicating that the old pods will be terminated before new ones are created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
        - name: e-commerce-container
          image: sysgenpro/e-commerce:latest
          ports:
            - containerPort: 80
  strategy:
    type: Recreate

2.3 Blue-Green Deployment

In the Blue-Green Kubernetes deployment strategy, you can release new versions of an app to decrease the risk of downtime. It has two identical environments, one serves as the active production environment, that is, blue, and the other serves as a new release environment, that is, green.

Blue-Green Deployment

A Blue-Green deployment is one of the most popular Kubernetes deployment strategies. It lets developers run the new application version (green) alongside the old one (blue). When they want to direct traffic from the old version to the new one, they update the load balancer, which in Kubernetes typically takes the form of the Service selector. Blue/Green Kubernetes deployments are costly, as they require double the resources of a normal deployment process.

Define Blue Deployment (blue-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue-e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue-e-commerce
  template:
    metadata:
      labels:
        app: blue-e-commerce
    spec:
      containers:
        - name: blue-e-commerce-container
          image: sysgenpro/blue-e-commerce:latest
          ports:
            - containerPort: 80

Define Green Deployment (green-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green-e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: green-e-commerce
  template:
    metadata:
      labels:
        app: green-e-commerce
    spec:
      containers:
        - name: green-app-container
          image: sysgenpro/green-e-commerce:latest
          ports:
            - containerPort: 80

Define a Service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: blue-e-commerce  # or green-e-commerce, depending on which environment you want to expose
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Define an Ingress (ingress.yaml):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: e-commerce-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: e-commerce-service
                port:
                  number: 80

Now, the blue environment serves traffic. To switch to the green environment, update the Service selector to match the green deployment’s labels. After this update is made, Kubernetes will begin routing traffic to the green environment.
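One way to perform the switch (a sketch; the label values follow the manifests above) is to patch the Service selector:

# Point the Service at the green deployment's pods
kubectl patch service e-commerce-service -p '{"spec":{"selector":{"app":"green-e-commerce"}}}'

# Roll back by pointing it at blue again
kubectl patch service e-commerce-service -p '{"spec":{"selector":{"app":"blue-e-commerce"}}}'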

2.4 Canary Deployment

In this strategy, you can route a small group of users to the latest version of an app, running in a smaller set of pods. It tests functions on a small group of users and avoids impacting the whole user base. Here’s the visual representation of the canary deployment strategy.

Canary Deployment

A Canary deployment is a strategy that Kubernetes app developers can use when they aren’t fully confident about the functionality of a new version of the application. The canary approach runs the new application version alongside the old one: the previous version serves the majority of the application’s users, while the newer version serves a small number of test users. If the canary proves successful, the new version is then rolled out to the rest of the users.

For instance, in the Kubernetes cluster with 100 running pods, 95 could be for v1.0.0 while 5 could be for v2.0.0 of the application. This means that around 95% of the users will be directed to the app’s old version while 5% of them will be directed to the new one. 

Version 1.0 Deployment (v1-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce-v1
  template:
    metadata:
      labels:
        app: e-commerce-v1
    spec:
      containers:
        - name: e-commerce-v1-container
          image: sysgenpro/e-commerce-v1:latest
          ports:
            - containerPort: 80

Version 2.0 Deployment (v2-deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v2-deployment
spec:
  replicas: 1  # A smaller number for canary release
  selector:
    matchLabels:
      app: e-commerce-v2
  template:
    metadata:
      labels:
        app: e-commerce-v2
    spec:
      containers:
        - name: e-commerce-v2-container
          image: sysgenpro/e-commerce-v2:latest
          ports:
            - containerPort: 80

Service (service.yaml):

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: e-commerce-v1  # Initially pointing to the version 1.0 deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Gradually Shift Traffic to Version 2.0 (canary-rollout.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v1-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce-v1  # Initially pointing to the version 1.0 deployment
  template:
    metadata:
      labels:
        app: e-commerce-v1
    spec:
      containers:
        - name: e-commerce-v1-container
          image: sysgenpro/e-commerce-v1:latest
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-v2-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: e-commerce-v2  # Gradually shifting to the version 2.0 deployment
  template:
    metadata:
      labels:
        app: e-commerce-v2
    spec:
      containers:
        - name: e-commerce-v2-container
          image: sysgenpro/e-commerce-v2:latest
          ports:
            - containerPort: 80

This example gradually shifts traffic from version 1.0 to version 2.0 by updating the number of replicas in each Deployment. Note that for the Service to balance traffic across both versions, its selector must match a label shared by both pod templates. Adjust the parameters based on your needs, and monitor the behavior of your application during the canary release.

2.5 Shadow Deployment

Shadow deployment is a strategy where a new version of an app is deployed alongside the current production version, primarily for monitoring and testing.

Shadow deployment is a variation of canary deployment that allows you to test the latest release of the workload. This deployment strategy mirrors production traffic to the new version alongside the current one, without users even noticing it.

When the performance and stability of the new version meet the predefined requirements, operators trigger a full rollout.

One of the primary benefits of shadow deployment is that it can help you test the new version’s non-functional aspects like stability, performance, and much more.

On the other hand, it has a downside as well. This type of deployment strategy is complex to manage and needs two times more resources to run than a standard deployment strategy.
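Kubernetes has no built-in traffic-mirroring primitive, so shadowing is usually implemented with a service mesh. A minimal sketch assuming Istio, with a DestinationRule defining the v1 and v2 subsets (the host and subset names are placeholders):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: e-commerce-shadow
spec:
  hosts:
    - e-commerce-service
  http:
    - route:
        - destination:
            host: e-commerce-service
            subset: v1          # users keep getting responses from v1
      mirror:
        host: e-commerce-service
        subset: v2              # a copy of the traffic is sent to v2
      mirrorPercentage:
        value: 100.0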

2.6 A/B Deployment

Just like Canary deployment, the A/B deployment strategy helps you to target a desired subsection of users based on some target parameters like HTTP headers or cookies.

It can distribute traffic among different versions. This technique is widely used to test which version of a given feature converts best, so that the best-performing version can then be rolled out to everyone.

In this strategy, data is usually collected based on the user behavior and is used to make better decisions. Here users are left uninformed that testing is being done and a new version will be made available soon.

This deployment can be automated using tools like Flagger, Istio, etc.
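A minimal sketch of header-based A/B routing, again assuming Istio and a DestinationRule that defines the subsets (the header name and subsets are placeholders):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: e-commerce-ab-test
spec:
  hosts:
    - e-commerce-service
  http:
    - match:
        - headers:
            x-user-group:
              exact: "beta"
      route:
        - destination:
            host: e-commerce-service
            subset: v2      # targeted users get the new version
    - route:
        - destination:
            host: e-commerce-service
            subset: v1      # everyone else stays on the current version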

3. Resource Utilization Strategies

Here are a few resource utilization strategies to follow:

3.1 Resource Limits and Requests

Each container in a Kubernetes pod can define resource requests and limits for both memory and CPU. These settings are crucial for resource allocation and isolation.

Resource Requests:

  • The amount of resources that Kubernetes guarantees to the container.
  • If the container tries to use more than it requested, the extra resources are not guaranteed and may not be available when the node is under pressure.

Resource Limits:

  • It sets an upper bound on the amount of resources that a container can utilize.
  • If this limit is exceeded, the container may be throttled (for CPU) or terminated (for memory).

So, it is necessary to set these values appropriately to ensure fair resource allocation among various containers on the same node.

Ex. In the following example, the pod specifies resource requests of 64MiB memory and 250 milliCPU (0.25 CPU cores). It also sets limits to 128MiB memory and 500 milliCPU. These settings ensure that the container gets at least the requested resources and cannot exceed the specified limits.

apiVersion: v1
kind: Pod
metadata:
  name: e-commerce
spec:
  containers:
  - name: e-commerce-container
    image: e-commerce:v1
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

3.2 Horizontal Pod Autoscaling (HPA)

With this feature, the number of replicas of a running workload is adjusted automatically based on particular metrics.

Target Metrics:

  • HPA can scale based on different metrics such as memory usage, CPU utilization, and custom metrics.
  • The target average utilization setting (averageUtilization in the autoscaling/v2 API) determines the required average utilization for CPU or memory.

Scaling Policies:

  • Define the maximum and minimum number of pod replicas for the scaled Deployment.
  • Scaling decisions are made based on whether the metrics cross the configured thresholds.

HPA is useful for handling varying loads and ensuring efficient resource utilization by adjusting the number of pod instances in real time.

Ex. This HPA example targets the Deployment named e-commerce-deployment and scales based on CPU utilization. It is configured to maintain a target average CPU utilization of 80%, scaling between 2 and 10 replicas.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: e-commerce-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

3.3 Vertical Pod Autoscaling

While HPA adjusts the number of replicas, VPA will focus on dynamically adjusting the resource requests of each pod.

Observation and Adaptation:

  • VPA observes the real resource usage of pods and adjusts resource requests based on that.
  • It optimizes both memory and CPU requests based on historical information.

Update Policies:

  • With the updateMode field, you can determine how aggressively VPA should update resource requests.
  • With options like Auto, Off, and Recreate, the update policy lets users control whether and when VPA applies its recommendations.
  • It also helps in fine-tuning resource allocation to adapt to the actual runtime behavior.

Ex. This VPA example targets the Deployment named e-commerce-deployment and is configured to automatically adjust the resource requests of the containers within the deployment based on observed usage.

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: e-commerce-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: "Deployment"
    name: "e-commerce-deployment"
  updatePolicy:
    updateMode: "Auto"

3.4 Cluster Autoscaler

The cluster autoscaler is responsible for dynamically adjusting the node pool size in response to the resource requirements of your workloads.

Node Scaling:

  • When a node lacks resources and cannot accommodate new pods, the Cluster Autoscaler adds more nodes to the cluster.
  • Conversely, when nodes are not fully utilized, the Cluster Autoscaler scales down the cluster by removing unnecessary nodes.

Configuration:

  • Configuration parameters such as minimum and maximum node counts vary depending on the cloud provider or underlying infrastructure.

The Cluster Autoscaler plays a crucial role in ensuring an optimal balance between resource availability and cost-effectiveness within a Kubernetes cluster.

Ex. This example includes a simple Deployment and Service. Cluster Autoscaler would dynamically adjust the number of nodes in the cluster based on the resource requirements of the pods managed by the Deployment.

apiVersion: v1
kind: Service
metadata:
  name: e-commerce-service
spec:
  selector:
    app: e-commerce
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: e-commerce-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: e-commerce
  template:
    metadata:
      labels:
        app: e-commerce
    spec:
      containers:
      - name: e-commerce-container
        image: e-commerce:v1
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

4. How to Deploy Kubernetes

Most Kubernetes deployments and functions are specified in YAML (or JSON) files. These are applied using 'kubectl apply'.

For example, consider an Nginx deployment where the YAML is called 'web-deployment' and defines four replicas. This will look like the code given below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.0
        ports:
        - containerPort: 80

In the above example, the metadata shows that 'web-deployment' is created with four copies of pods which are replicas of each other (replicas: 4), and the selector defines how the Deployment finds the pods using the label (app: nginx). Besides this, here the container (nginx) runs its image at version 1.17.0, and the deployment opens port 80 for the pod's usage.

In addition to this, the environment variables for the containers get declared with the use of the 'envFrom' or 'env' field in the configuration file. After the deployment is specified, it gets created from the YAML file with: kubectl apply -f https://[location/web-deployment.yaml]

5. Update Kubernetes Deployment

When it comes to Kubernetes deployment, the developers can use the set command to make changes to the image, configuration fields, or resources of an object. 

For instance, to update the deployment's nginx image from version 1.17.0 to 1.22.1, the following command can be used:

$ kubectl set image deployment/web-deployment nginx=nginx:1.22.1

6. Final Words

In this blog, we saw that there are multiple ways a developer can deploy an application. When the developer wants to publish the application to development/staging environments, a recreate or ramped deployment is a good choice. But when production is considered, a blue/green or ramped deployment is an ideal choice. If the developer isn't sure about the stability of the platform, it is worth surveying all the different strategies before choosing one. Each deployment strategy comes with its pros and cons, but which one to choose depends on the type of project and the resources available.

FAQs:

1. What is the Best Deployment Strategy in Kubernetes?

There are mainly 8 different Kubernetes deployment strategies:

Rolling Deployment, Ramped Slow Rollout, Recreate Deployment, Blue/Green Deployment, Best-Effort Controlled Rollout, Canary Deployment, A/B Testing, and Shadow Deployment.

You can choose the one that’s most suitable to your business requirements.

2. Which Tools are Used to Deploy k8s?

Here's a list of tools used by Kubernetes professionals for deployment:

  • Kubectl
  • Kubens
  • Helm
  • Kubectx
  • Grafana
  • Prometheus
  • Istio
  • Vault
  • Kubeflow
  • Kustomize, and many more

3. What is the Difference Between Pod and Deployment in Kubernetes?

A Kubernetes pod is the smallest deployable unit in Kubernetes. It is a group of one or more containers that share the same storage and network resources.

On the other hand, a Kubernetes Deployment manages the app's life cycle, including the pods of that app. It's a way to communicate your desired state to Kubernetes.

4. What is the Life Cycle of Kubernetes Deployment?

The major steps in a Kubernetes deployment life cycle include:

  • Containerization
  • Container Registry
  • YAML or JSON writing
  • Kubernetes deployment
  • Rollbacks and Rolling Updates
  • Scaling of the app
  • Logging and Monitoring
  • CI/CD

The post Kubernetes Deployment Strategies- A Detailed Guide appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/kubernetes-deployment-strategies/feed/ 0
How to Use Typescript with React? https://www.sysgenpro.com/blog/typescript-with-react/ https://www.sysgenpro.com/blog/typescript-with-react/#comments Thu, 04 Apr 2024 08:21:11 +0000 https://www.sysgenpro.com/blog/?p=12891 Although TypeScript is a superset of JavaScript, it is a popular programming language in its own right. Developers tend to feel confident programming with TypeScript as it allows you to specify value types in the code. It has a large ecosystem consisting of several libraries and frameworks. Many of them use TypeScript by default but in the case of React, you are given the choice whether to use TypeScript or not.

The post How to Use Typescript with React? appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. Writing TypeScript with React.js is a lot like writing JavaScript with React.js. The main advantage of using TypeScript is that you can provide types for your component’s props which can be used to check correctness and provide inline documentation in editors.
  2. TypeScript enables developers to use modern object-oriented programming features. TypeScript Generics allow the creation of flexible React components that can be used with different data structures (see the sketch after this list).
  3. With TypeScript in React Project, one can perform more compile-time checks for errors and some features become easier to develop.
  4. For TypeScript with React, one can refer to community-driven CheatSheet – React TypeScript covering useful cases and explanations in depth for various modules.
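To make the generics point concrete, here is a minimal sketch of a generic list component. The names GenericList, items, and renderItem are illustrative and not taken from the article.

import React from 'react';

// Props are parameterized by the item type T, so the same component can
// render users, products, or any other data structure.
type GenericListProps<T> = {
  items: T[];
  renderItem: (item: T) => React.ReactNode;
};

function GenericList<T>({ items, renderItem }: GenericListProps<T>) {
  return (
    <ul>
      {items.map((item, index) => (
        <li key={index}>{renderItem(item)}</li>
      ))}
    </ul>
  );
}

// Usage: the item type is inferred from the data passed in.
const names = ['Cherry Rin', 'Lein Juan'];
export const NamesList = () => (
  <GenericList items={names} renderItem={(name) => <span>{name}</span>} />
);

export default GenericList;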

Although TypeScript is a superset of JavaScript, it is a popular programming language in its own right. Developers tend to feel confident programming with TypeScript as it allows you to specify value types in the code. It has a large ecosystem consisting of several libraries and frameworks. Many of them use TypeScript by default but in the case of React, you are given the choice whether to use TypeScript or not. 

React is a JS-based library that enables you to create UIs using a declarative, component-based method. Using TypeScript with React has proven to be very effective. You can observe that the offerings of a reputed software development company include React services that focus on crafting elegant user interfaces. In contrast, TypeScript is leveraged to identify code errors, improving the project’s overall quality.

This tutorial will guide you on how to use TypeScript with React. But first, let’s clear our basics. 

1. What is Typescript?

Microsoft created a high-level programming language called TypeScript. It is statically typed. So, it automatically introduces static type checking to your codebase to improve its quality.

Although React projects mostly use JavaScript, certain features like type annotations, interfaces, and static types that are necessary to detect code errors early on are only available with TypeScript.

Utilizing TypeScript's types brings many perks, such as improved collaboration on large-scale projects, increased productivity, and improved code quality. Because it offers the means to define the format and structure of data, the language ensures type safety and helps prevent runtime errors.

The conjunction of React and TypeScript enables the developers to build strongly typed React components. It also enforces the type checking in state and props which helps make your code more robust and reliable. 

TypeScript offerings such as documentation and advanced code navigation features further simplify the work of developers. You can develop robust and easy-to-maintain React applications effortlessly by leveraging the React-TypeScript integration capabilities.

Further Reading on: JavaScript vs TypeScript

2. React: With JavaScript or TypeScript? Which one is Better? 

JavaScript is a popular scripting language. More often it’s the first programming language developers learn for web development. React is a JS-based library that developers prefer to use for building UIs. 

On the other hand, TypeScript is JavaScript’s superset. So, it offers all the benefits of JavaScript. On top of that, it also provides some powerful tooling and type safety features. Using these languages has its own merits and demerits. So, the choice of whether to use TypeScript or JavaScript with React in your project boils down to your requirements.

JavaScript features such as high speed, interoperability, rich interfaces, versatility, server load, extended functionality, and less overhead make it a perfect candidate for building fast-changing apps, browser-based apps and websites, and native mobile and desktop applications.  

TypeScript features like rich IDE support, object-oriented programming, type safety, and cross-platform and cross-browser compatibility allow you to build complex and large-scale applications with robust features.

3. Why Do We Use Typescript with React?

To avail a multitude of advantages. Some of them are: 

  • Static Type Checking: Static typing was introduced for React projects through TypeScript. It helps developers identify errors early on. This early detection of potential errors and prevention of runtime errors is possible because of the type annotations the language enforces. It makes your code robust and reliable (see the sketch after this list).
  • Improved Code Quality: Typescript language allows the developers to define the return values, function parameters, and strict types for variables. This helps in writing a clean and self-documenting code. The Type system of the language encourages the developers to write more structured and high-quality code. So, your code becomes easy to read and maintain. 
  • Enhanced Developer Productivity: The code editors of TypeScript come with features like real-time error checking, type inference, and auto-completion as a part of advanced tooling support. This helps developers code quickly, find mistakes, implement better coding suggestions, minimize the debugging time, and in the end enhance productivity. 
  • Better Collaboration: It might pique your interest to know that apart from providing coding advantages, TypeScript offers collaboration advantages as well. It provides contract definitions and clear interfaces through type annotations and interfaces. It helps developers understand how different modules and components interact. This improves the overall project understanding. 
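As a minimal illustration of the static type checking point above, the following sketch reuses the article's sample user data; the sendWelcomeEmail function is illustrative, not from the article.

type User = { name: string; email: string };

// The compiler checks every call site against this signature.
function sendWelcomeEmail(user: User): string {
  return `Welcome, ${user.name} <${user.email}>`;
}

sendWelcomeEmail({ name: "Cherry Rin", email: "cherryrin@xyz.com" }); // OK

// @ts-expect-error -- the missing email property is flagged at compile time,
// long before the code ever runs.
sendWelcomeEmail({ name: "Cherry Rin" });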

4. Create a React Application with Typescript

Explore the steps mentioned below to create a React application with Typescript.

4.1 Prerequisites

Before we get into action, make sure you are prepared for it. What you will need includes:

  • One Ubuntu 20.04 server. Make sure it is firewall-enabled and comes with a non-root user with sudo privileges. Ubuntu 20.04 offers an initial server setup guide that will help you get started.
  • Node.js and npm installed. Use a NodeSource PPA with apt to install them on Ubuntu 20.04 before getting started.

4.2 Create React App

Create React App v2.1 and later come with TypeScript support by default. While setting up a new project with CRA, you can pass TypeScript as a parameter.

npx create-react-app hello-tsx --typescript

Tsconfig.json is generated when TypeScript is used for project setup. 

4.3 Install TypeScript in the React App

To add a TypeScript version to an existing application, you have to install it with all the necessary types. Execute the following code: 

npm install --save typescript @types/node @types/react @types/react-dom @types/jest

Next, rename your files to use the .ts and .tsx file extensions. After that, you can start your server, which will automatically produce a JSON file named tsconfig.json. Once that happens, you can start writing React in TypeScript.

It is important to note that adding TypeScript to existing projects doesn’t mean it will affect the app’s JS code. It would work fine and if you want, you can migrate that code to TypeScript as well. 

4.4 How to Declare Props?

The following example depicts how you can use TypeScript to type props in a React component. 

import React from 'react';

type DisplayUserProps = {
    name: string,
    email: string
};

const DisplayUser: React.FC<DisplayUserProps> = ({ name, email }) => {
  return (
    <div>
      <p>{name}</p>
      <p>{email}</p>
    </div>
  );
};

export default DisplayUser;

A custom type for DisplayUserProps is defined in the code above. It includes the name and email. The React.FC generic type is used to define DisplayUser, taking the predefined DisplayUserProps as its type argument.

So, wherever we use the DisplayUser component, we will get the data back as the props. It helps confirm the predefined types such as name and email. The component is called inside App.tsx. 

const App = () => {
  const name: string = "Cherry Rin";
  const email: string = "cherryrin@xyz.com";
  return (
    <DisplayUser name={name} email={email} />
  );
};

export default App;

After reloading the React app, when you check the UI, it renders the name and email as shown in the example below:

Cherry Rin

The TypeScript shows an error when you pass the data that isn’t part of the data structure. 


Your console will display the error message as follows:

error message

It is how TypeScript ensures that defined types are adhered to when props are passed to the DisplayUser component. This also helps enforce error checking and type safety in the development process. 

4.5 How to Declare Hooks?

useRef and useState are the two most common React hooks used in TypeScript. 

Typing the UseState hook

Without types, the useState hook will look similar to the following:

const [number, setNumber] = useState(0)

With types, the useState hook will look like:

const [number, setNumber] = useState<number>(0)

And just like that, you can simply declare the state value's type. The type is given in <number>. If you or anyone else tries updating the state with a value of another type, the action will be prevented with an error message.

error message

But if you want your state to hold values of different types, then you have to declare it as:

const [number, setNumber] = useState<number | string>(0);

After running this command, you can enter any string or number and it won’t count as an error. 

The following example depicts how you can use useState in a React component,

import DisplayUsers from './components/DisplayUsers';
import { useState } from 'react';

const App = () => {
  const [name, setName] = useState('Cherry Rin');
  const [email, setEmail] = useState('cherryrin@xyz.com');
  return (
    <DisplayUsers name={name} email={email} />
  );
};
export default App;

Typing the UseRef hook

In React, a DOM component is referenced using the useRef hook. The example given below shows how you can implement it in TypeScript with React.

import { useRef } from 'react';

const App = () => {
  const nameRef = useRef<HTMLInputElement | null>(null);

  const saveName = () => {
    if (nameRef && nameRef.current) {
      console.log("Name: ", nameRef.current.value);
    }
  };
  return (
    <>
      <input type="text" ref={nameRef} />
      <button onClick={saveName}>Save</button>
    </>
  );
};

export default App;

Output:

Output

We will be starting with the ref variable as null and declare HTMLInputElement | null as its type. While using the useRef hook with TypeScript, you can assign it to either null or to declared type. 

The benefit of declaring the useRef hook's type is that you won't be reading data or taking actions from unmatched types, as TypeScript will be there to prevent you. You will get the following errors if you try to declare the ref with the type number.

useRef hook

It helps you avoid making silly errors which saves time that you would have been otherwise spending on debugging. While working on large projects, where multiple people are contributing to the codebase, using TypeScript for React apps provides you with a more ordered and controlled work environment. 

4.6 How to Declare and Manage States?

If you have working experience with React or even a basic understanding of it, you would know that building your React app entirely out of simple, stateless components isn't really possible. To hold state values in your app component, you have to define state variables.

The state variable is defined with a useState hook as shown below:

import React, { useState } from "react";

function App() {
  const [sampleState, setSampleState] = useState("Ifeoma Imoh");
  return <div>Hello World</div>;
}

export default App;


In this case, the type of the state is inferred automatically. By declaring the type your state accepts, you can increase its safety.

import React, { useState } from 'react';

function App() {
  const [sampleState, setSampleState] = useState("Ifeoma Imoh");
  //sampleStateTwo is a string or number
  const [sampleStateTwo, setSampleStateTwo] = useState<string | number>("");
  //sampleStateThree is an array of strings
  const [sampleStateThree, setSampleStateThree] = useState<string[]>([""]);
  return (
    <div>Hello World</div>
  );
}

export default App;

4.7 React Functional Component

In TypeScript, a functional (stateless) component is defined with the following code:

type DisplayUserProps = {
    name: string,
    email: string
};

const DisplayUsers: React.FC<DisplayUserProps> = ({ name, email }) => {
  return (
    <div>
      <p>{name}</p>
      <p>{email}</p>
    </div>
  );
};

export default DisplayUsers;

With React.FC<DisplayUserProps>, we define the expected props object structure. Using a type alias is just one of many ways to define the props; you can also create an interface:

interface DisplayUserProps {
    name: string;
    email: string;
}

const DisplayUsers: React.FC<DisplayUserProps> = ({ name, email }) => {
  return (
    <div>
      <p>{name}</p>
      <p>{email}</p>
    </div>
  );
};

4.8 How to Display the Users on the UI?

Now, what shall we do to display the user's name and email on the screen? Each user object has name and email properties, and we can use them to display the user's name and email.

Now, you have to replace the content in the DisplayUsers.tsx file with the following:

import React, { useState } from 'react';
import { UserList } from '../utils/UserList';

export type UserProps = {
    name: string,
    email: string
};

const DisplayUsers = () => {
    const [users, setUsers] = useState<UserProps[]>(UserList);

    const renderUsers = () => (
        users.map((user, index) => (
            <li key={index}>
                <p>{user.name}</p>
                <p>{user.email}</p>
            </li>
        ))
    );

    return (
        <div>
            <h1>User List</h1>
            <ul>{renderUsers()}</ul>
        </div>
    );
};

export default DisplayUsers;

The users array is looped over using the array map method, and the name and email properties of each user object are read and rendered. Here, the user's name and email will be displayed on the screen as an unordered list.

Now is the time to define a user as an object that has properties like name and email. For every property, we also have to define a data type. Next, you have to change the useState users array from:

const [users, setUsers] = useState([]);

to :

const [users, setUsers] = useState<UserProps[]>(UserList);

This specifies users as an array of our declared UserProps type. So, if you check your files now, you will see that they no longer have any TypeScript errors.

specify users as an array of type users’ objects

As a result, you can see that the screen is displaying a list of 5 random users.

user list

4.9 Run App

Until now, we discussed the basic yet most important concepts for React app development using TypeScript. This section will show how you can leverage them to create a simple user list app.

Create a new React-TypeScript project like we did at the beginning of the article. After that, you have to run the code given below at the app’s root level to start the development server. 

npm start

Now, open the App.tsx file. It consists of a code which you have to replace with the following: 

import DisplayUser, { DisplayUserProp as UserProps } from './components/DisplayUser';
import React, { useEffect, useState } from 'react';
import { UserList } from './utils/UserList';

const App = () => {
  const [users, setUsers] = useState<UserProps[]>([]);

  useEffect(() => {
    if (users.length == 0 && UserList) setUsers(UserList);
  }, [])

  const renderUsers = () => (
    users.map((user, index) => (
      <li key={index}>
        <DisplayUser name={user.name} email={user.email} />
      </li>
    ))
  );

  return (
    <div>
      <h1>User List</h1>
      <ul>{renderUsers()}</ul>
    </div>
  );
};

export default App;

Here, User data comes from the UserList.ts file which is inside the utils folder. Add the below-mentioned data to the UserList.ts file:

import { DisplayUserProp as UserProps } from "../components/DisplayUser";
export const UserList: UserProps[] = [
    {
        name: "Cherry Rin",
        email: "cherryrin@xyz.com"
    },
    {
        name: "Lein Juan",
        email: "leinnj23@xyz.com"
    },
    {
        name: "Rick Gray",
        email: "rpgray@xyz.com"
    },
    {
        name: "Jimm Maroon",
        email: "jimaroonjm@xyz.com"
    },
    {
        name: "Shailey Smith",
        email: "ssmith34@xyz.com"
    },
];

To render individual User items in your app, you have to import a DisplayUser component. We will be creating it next. But first, go to your app’s src directory. Open a new components folder. In there, create a DisplayUser file with the tsx extension. Now add the following code to it.

import React from 'react';

export type DisplayUserProp = {
    name: string,
    email: string
};

const DisplayUser: React.FC<DisplayUserProp> = ({ name, email }) => {
    return (
        <div>
            <p>{name}</p>
            <p>{email}</p>
        </div>
    );
};

export default DisplayUser;

After saving all the modifications, test your React app in the browser.

user list output on browser

5. Conclusion

In this article, we have surfed through the fundamentals of using TypeScript with React development. These are the concepts that are widely used in typical TypeScript-React projects. We also saw how you can integrate these concepts to build a simple React application. You can apply them similarly in other projects with little modification to suit your requirements.

If you have any queries or input on the matter, feel free to share them with us in the comments section below. We will get back to you ASAP! 

FAQs

Can You Use TypeScript with React?

When it comes to adding type definitions to JS code, developers prefer to use TypeScript. By adding @types/react and @types/react-dom to your React project, you get complete type support for JSX and React for the web.

Is TypeScript better for React?

With static typing, TypeScript helps the compiler detect code errors early in the development stage. On the other hand, because JavaScript is dynamically typed, the compiler has a hard time detecting code errors.

The post How to Use Typescript with React? appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/typescript-with-react/feed/ 1
Detailed Guide to React App Testing https://www.sysgenpro.com/blog/react-app-testing/ https://www.sysgenpro.com/blog/react-app-testing/#respond Wed, 06 Mar 2024 09:36:08 +0000 https://www.sysgenpro.com/blog/?p=12822 React is a prominent JavaScript library that is used by app development companies to create unique and robust applications. It comes with a declarative style and gives more emphasis on composition. With the help of this technology, every React app development company in the market can transform their client's business by creating modern web applications.

The post Detailed Guide to React App Testing appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. React app testing is crucial for delivering secure, high-performing, and user-friendly applications.
  2. React Apps are created using different UI components. So it is necessary to test each component separately and also how they behave when integrated.
  3. It is essential to involve Unit Testing, Integration Testing, End-to-End Testing, and SnapShot Testing for a React app as per the requirements.
  4. React Testing Library, Enzyme, Jest, Mocha, Cypress, Playwright, Selenium, JMeter, Jasmine, and TestRail are some of the key tools and libraries used for testing React apps.

React is a prominent JavaScript library that is used by app development companies to create unique and robust applications. It comes with a declarative style and gives more emphasis on composition. With the help of this technology, every React app development company in the market can transform their client’s business by creating modern web applications. As businesses grow, the size and complexity of the web application grow as well, so the development team has to write tests that help avoid bugs.

Though testing React apps isn’t an easy task, some frameworks and libraries can make it possible for the development teams. In this article, we will go through such libraries.

1. Why do We Need to Test the Web App?

The main reason behind testing applications is to ensure that the apps work properly without any errors. Along with this, in any application, several features might require some attention from the developers as they might cause expensive iterations if not checked frequently. Some of the areas where testing is a must are:

  • Any part of the application that involves getting input from the user or retrieving data from the application’s database and offering that to the user.
  • The features of the application that are connected with call-to-action tasks where user engagement becomes necessary, need to be tested.
  • When sequential events are rendered and their elements trigger further functions, testing is required.

2. What to Test in React App?

Developers often get confused about what to test in a React application. The reason behind this confusion is that applications are generally dealing with simple data but sometimes they are quite sophisticated. In any case, developers need to set their priorities for testing the applications. Some of the best things to start the testing process with are:

  • Identifying the widely used React components in the applications and start testing them.
  • Identifying application features that can help in adding more business value and adding them for testing.
  • Executing edge-case scenarios in high-value features of the React application.
  • Performance and stress testing applications if they are serving a large number of users like Amazon or Netflix.
  • Testing React hooks (see the sketch right after this list).
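For the last point, hooks can be tested in isolation. Here is a minimal sketch assuming a recent version of React Testing Library, which exports renderHook (older setups used the separate @testing-library/react-hooks package); useCounter is an illustrative hook, not from the article.

import { renderHook, act } from "@testing-library/react";
import { useState } from "react";

// A tiny illustrative hook so the test is self-contained.
function useCounter(initial = 0) {
  const [count, setCount] = useState(initial);
  const increment = () => setCount((c) => c + 1);
  return { count, increment };
}

test("useCounter increments the count", () => {
  const { result } = renderHook(() => useCounter());

  act(() => {
    result.current.increment();
  });

  expect(result.current.count).toBe(1);
});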

3. Libraries and Tools Required

React test libraries and frameworks can be beneficial to offer the best application to the end users. But all these frameworks have their specialty. Here we will have a look at some of these React testing libraries and tools for React application testing.

3.1 Enzyme

EnzymeJS

Enzyme is a React testing library that enables React app developers to traverse and manipulate a component's rendered output at runtime. With the help of this tool, developers can render components, find elements, and interact with them. As Enzyme is designed for React, it offers two types of testing methods: mount testing and shallow rendering (a shallow rendering sketch follows the list below). This tool is used with Jest.

Some of the benefits of Enzyme are:

  • Supports DOM rendering.
  • Shallow rendering.
  • React hooks.
  • Simulation during runtime against output.
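As a minimal sketch of the shallow rendering support listed above, the test below assumes an App component like the one built in section 4.1 and a React version covered by an official Enzyme adapter (enzyme-adapter-react-16 here); Enzyme does not officially support the newest React versions.

import React from "react";
import { configure, shallow } from "enzyme";
import Adapter from "enzyme-adapter-react-16";
import App from "./App";

// Enzyme needs an adapter matching the React version in use.
configure({ adapter: new Adapter() });

test("renders the Users heading without mounting child components", () => {
  // shallow() renders App only one level deep.
  const wrapper = shallow(<App />);
  expect(wrapper.text()).toContain("Users:");
});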

3.2 Jest

JestJS

Jest is a popular React testing framework or test runner suggested by the React community. The testing team prefers this tool to test applications for large-scale companies. Firms like Airbnb, Uber, and Facebook are already using this tool.

Some of the benefits of Jest are:

  • Keeps track of large test cases.
  • Easy to configure and use.
  • Snapshot-capturing with Jest.
  • Ability to mock API functions.
  • Conduct parallelization testing method.

3.3 Mocha

MochaJS

It is also a widely used testing framework. Mocha runs on Node.js and testers use it to check applications that are developed using React. It helps developers to conduct testing in a very flexible manner.

Here are some of the benefits of Mocha:

  • Easy async testing.
  • Easy test suite creation.
  • Highly extensible for mocking libraries.
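As a small illustration of the async testing support listed above, here is a minimal Mocha sketch using Node's built-in assert module; the fetchGreeting helper is illustrative, not from the article.

import { strictEqual } from "assert";

// A tiny illustrative async helper so the spec is self-contained.
async function fetchGreeting(name: string): Promise<string> {
  return `Hello, ${name}`;
}

// Mocha exposes describe/it as globals when the spec is run with the mocha CLI.
describe("fetchGreeting", () => {
  it("resolves with a greeting for the given name", async () => {
    const greeting = await fetchGreeting("jc");
    strictEqual(greeting, "Hello, jc");
  });
});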

3.4 Jasmine

Jasmine

Jasmine is a simple JavaScript testing framework for browsers and Node.js. Jasmine comes with a behavior-driven development pattern which means that using this tool can be a perfect choice for configuring an application before using it. Besides this, third-party tools like Enzyme can be used while working with Jasmine for testing React applications. 

Some of Jasmine’s benefits include:

  • No DOM is required.
  • Asynchronous function testing.
  • Front-end and back-end testing is possible.
  • Inbuilt matcher assertion.
  • Custom equality checker assertion.

Despite its many benefits, Jasmine isn’t the perfect testing framework for React apps. It doesn’t offer support for testing snapshots. For this, it requires the usage of third-party tools.

4. How to Test React Applications?

Here are the steps that can help you test a React application. 

4.1 Build a Sample React App

First of all, we will create a minimal application that displays users’ information from an API. This application will be then tested to see how React app testing works. 

Here, as we only have to focus on the front end of the application, we will use JSONPlaceholder user API. First of all, the developer needs to write the following code in the App.js file:

import { useEffect, useState } from "react";
import axios from "axios";
import { getFormattedUserName } from "./utility";
import "./App.css";


function App() {
  const [users, setUsers] = useState([]);


  // Fetch the data from the server
  useEffect(() => {
    let isMounted = true;
    const url = "https://jsonplaceholder.typicode.com/users";
    const getUsers = async () => {
      const response = await axios.get(url);
      if (isMounted) {
        setUsers(response.data);
      }
    };
    getUsers();
  }, []);


  return (
    <div className="App">
      <h2>Users:</h2>
      {users.length > 0 ? (
        <ul>
          {users.map((user) => {
            return (
              <li key={user.id}>
                {user.name} --{" "} ({getFormattedUserName(user.username)})
              </li>
            );
          })}
        </ul>
      ) : (
        <p>Loading user details...</p>
      )}
    </div>
  );
}

export default App;

Then, it is time to create a file in the src folder. Name the file utility.js and write the following function in it:

export function getFormattedUserName(username) {
  return "@" + username;
}

Now, run the application using this command:

npm start

Once you run the application, you will see the following output:

User List- Output

4.2 Unit Testing

Now, let’s start testing the application that we have just developed. Here we will start with unit testing. A unit test checks individual software units or React components separately. A unit in an application can be anything: a routine, function, module, method, or object. Whatever the test objective, the testing team defines specific criteria to decide whether the unit under test produces the expected results or not. A unit test module contains a series of methods provided by test runners like Jest to specify the structure of the test.

To carry out unit testing, developers can use methods like test or describe, as in the example below:

describe('my function or component', () => {
 test('does the following', () => {
   // add your testing output
 });
});

In the above example, the test block is the test case and the describe block is the test suite. Here, the test suite can hold more than one test case, but a test case doesn’t need to be present in a test suite.

When any tester is writing inside a test case, he can include assertions that can validate erroneous or successful processes. 

In the below example, we can see assertions being successful:

describe('true is truthy and false is falsy', () => {
 test('true is truthy', () => {
   expect(true).toBe(true);
 });

 test('false is falsy', () => {
   expect(false).toBe(false);
 });
});

After this, let us write the first test case, targeting the getFormattedUserName function from the utility module. For this, the developer will have to create a file called utility.test.js. All the test files use this naming pattern: {file}.test.js, where {file} is the name of the module file that needs to be tested.

In this code, the function takes a string as input and returns the same string as output, just adding an @ symbol at its beginning. Here is an example of the same:

import { getFormattedUserName } from "./utility";


describe("utility", () => {
  test("getFormattedUserName adds @ at the start beginning of the username", () => {
    expect(getFormattedUserName("jc")).toBe("@jc");
  });
});

As seen in the above code, any tester can easily specify the module and test case in the code so that if it fails, they can get an idea about the things that went wrong. As the above code states, the first test is ready, so the next thing to do is run the test cases and wait for the output. For this, the tester needs to run a simple npm command:

npm run test

After this, one will have to focus on running tests in a single test with the use of the following command:

npm run test -- -t utility

This is useful when there are other tests created by create-react-app. If everything goes well when running the above commands, you will be able to see an output like this:

Unit test Output

Successful output.

Here, in the output, you can observe that one test passed successfully. Next, a new test needs to be added to the utility test suite to cover another case. For this, the below-described code can be useful:

test('getFormattedUserName does not add @ when the username already starts with @', () => {
    expect(getFormattedUserName('@jc')).toBe('@jc');
  });

This covers a different situation: if the username already has an @ symbol at the start of the string, then the function should return the username as provided, without adding another symbol. Here is the output for the same:

Failed Unit Test

Failed test output.

As anticipated, the test failed because the output did not match the expected value. Here, once the tester detects the issue, he can fix it by using the following code:

export function getFormattedUserName(username) {
  return !username.startsWith("@") ? `@${username}` : username;
}

The output of this code will be:

Unit Test Successful

As you can see, the test is a success.

4.3 SnapShot Testing

Now, let us go through another type of testing which is Snapshot testing. This type of test is used by the software development teams when they want to make sure that the UI of the application doesn’t change unexpectedly.

Snapshot testing is used to render the UI components of the application, take its snapshot, and then compare it with other snapshots that are stored in the file for reference. Here, if the two snapshots match, it means that the test is successful, and if not, then there might have been an unexpected change in the component. To write a test using this method, the tester needs the react-test-renderer library as it allows the rendering of components in React applications. 

The very first thing a tester needs to do is install the library. For this, the following command can be used:

npm i react-test-renderer

After this, it’s time to edit the file to include a snapshot test in it.

import renderer from "react-test-renderer";

// ...

test("check if it renders a correct snapshot", async () => {
  axios.get.mockResolvedValue({ data: fakeUserData });
  const tree = renderer.create(<App />).toJSON();
  expect(tree).toMatchSnapshot();
});


// ...
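One prerequisite the snippet above leaves implicit: axios.get.mockResolvedValue only works if Jest has replaced the real axios module and fakeUserData has been defined somewhere in the test file. A minimal sketch of that setup (the fixture contents are illustrative):

import axios from "axios";

// Replace the real axios module with Jest's auto-mock so that axios.get
// in the test above can be stubbed with mockResolvedValue.
jest.mock("axios");

// Illustrative fixture shaped like the JSONPlaceholder user objects.
const fakeUserData = [
  { id: 1, name: "Leanne Graham", username: "Bret" },
];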

When the tester runs the above code, he will get the following output.

SnapShot Test Output

When this test runs, a test runner like Jest creates a snapshot file and adds it to the snapshots folder. Here is how it will look:

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`check if it renders a correct snapshot 1`] = `
<div
  className="App"
>
  <h2>
    Users:
  </h2>
  <p>
    Loading user details...
  </p>
</div>
`;

Now, if one modifies the App component, even by changing a single text value, the snapshot test will fail because there will be a change in the rendered output.

Renders Correct Snapshot

To make the test pass again, the tester needs to inform a tool like Jest that the changes were intentional and update the stored snapshot. This can be easily carried out when Jest is in watch mode. The snapshot shown below will then be taken:

SnapShot Test Successful

4.4 End-to-end Testing

Another popular type of React application testing is end-to-end testing. In this type of testing, the entire system is covered, with all its complexities and dependencies taken into account. In general, UI-level end-to-end tests are difficult and expensive, which is why they are usually reserved for the critical flows of the application, while unit tests cover the rest. This type of testing comes with various approaches (a minimal Cypress sketch follows the list):

  • Usage of platform for automated end-to-end testing.
  • Automated in-house end-to-end testing.
  • Usage of platform for manual end-to-end testing.
  • Manual in-house end-to-end testing.
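As one possible automated approach, here is a minimal Cypress sketch. It assumes the sample user-list app from section 4.1 is running locally on port 3000 and that Cypress has been added to the project (Cypress is listed among the tools in the key takeaways, but its setup is not covered here).

// cypress/e2e/users.cy.ts
describe("user list end-to-end", () => {
  it("shows the users fetched from the API", () => {
    // Assumes the sample app from section 4.1 is served locally.
    cy.visit("http://localhost:3000");

    // The heading rendered by App.js should be visible.
    cy.contains("Users:");

    // JSONPlaceholder returns 10 users, so 10 list items are expected.
    cy.get("li").should("have.length", 10);
  });
});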

4.5 Integration Testing

Now, we will go through integration testing which is also considered an essential type of react application testing. It is done to ensure that two or more modules can work together with ease.

For this, the software testing team will have to follow the below-given steps:

First of all, one needs to install the dependencies with yarn by using the following command:

yarn add --dev jest @testing-library/react @testing-library/user-event jest-dom nock

Or if you want to install it with npm, use this command line:

npm i -D jest @testing-library/react @testing-library/user-event jest-dom nock

Now, it’s time to create an integration test suite file named viewGitHubRepositoriesByUsername.spec.js; Jest will pick it up automatically.

Now, import dependencies using the code:

import React from 'react'; // so that we can use JSX syntax
import {
 render,
 cleanup,
 waitForElement
} from '@testing-library/react'; // testing helpers
import userEvent from '@testing-library/user-event' // testing helpers for imitating user events
import 'jest-dom/extend-expect'; // to extend Jest's expect with DOM assertions
import nock from 'nock'; // to mock github API
import {
 FAKE_USERNAME_WITH_REPOS,
 FAKE_USERNAME_WITHOUT_REPOS,
 FAKE_BAD_USERNAME,
 REPOS_LIST
} from './fixtures/github'; // test data to use in a mock API
import './helpers/initTestLocalization'; // to configure i18n for tests
import App from '../App'; // the app that we are going to test

After that, you can set the test suite by following this code

describe('check GitHub repositories by username', () => {
 beforeAll(() => {
   nock('https://api.github.com')
     .persist()
     .get(`/users/${FAKE_USERNAME_WITH_REPOS}/repos`)
     .query(true)
     .reply(200, REPOS_LIST);
 });

 afterEach(cleanup);

 describe('if a user of GitHub has public repositories', () => {
   it('Users can view the list of public repositories by entering their GitHub username.', async () => {
     // arrange
     // act
     // assert
   });
 });


  describe("when a user on GitHub doesn't have any public repos", () => {
   it('The user is informed that the login provided for GitHub does not have any public repositories associated with it.', async () => {
     // arrange
     // act
     // assert
   });
 });

 describe('when logged in user does not exist on Github', () => {
   it('user is presented with an error message', async () => {
     // arrange
     // act
     // assert
   });
 });
});

5. Conclusion

As seen in this blog, to conduct React app testing, the testing team needs to have a proper understanding of the React testing libraries and tools. After that, they need to find out what React testing is needed for. This can help them choose the right tool from the above-listed ones.

FAQs

How is React testing conducted?

React app testing is typically conducted with a test runner like Jest together with libraries such as React Testing Library or Enzyme. This supports the development of high-quality software, helps developers reduce the time they spend verifying changes, and helps testers create a dependable test suite.

Is React testing library Jest?

React Testing Library and Jest are two different things. Jest is a test runner that helps in writing and running tests for any JavaScript application, while React Testing Library provides utilities specifically for testing React components.

Where can a React code be tested?

To test a React code, testers can use any tool like React Testing Library (RTL) and Enzyme. The choice depends on the application type and testing that is required for it.

The post Detailed Guide to React App Testing appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/react-app-testing/feed/ 0
Guide to Deploy React App on Various Cloud Platforms https://www.sysgenpro.com/blog/deploy-react-app/ https://www.sysgenpro.com/blog/deploy-react-app/#respond Wed, 28 Feb 2024 09:32:00 +0000 https://www.sysgenpro.com/blog/?p=12618 For any app development company, the most crucial part of the development process is deployment. This is why the development teams need to understand the different options of deployment that are available in the market and learn how to use them to ensure that the deployment process is carried out smoothly.

The post Guide to Deploy React App on Various Cloud Platforms appeared first on sysgenpro Blog.

]]>
For any app development company, the most crucial part of the development process is deployment. This is why the development teams need to understand the different options of deployment that are available in the market and learn how to use them to ensure that the deployment process is carried out smoothly. In this blog, we will go through some of the most popular platforms that can be used by React app development companies to quickly and efficiently deploy React applications for their clients. 

Let’s discuss step by step deployment of React applications using different platforms like AWS Amplify, Vercel, Heroku and more.

1. AWS Amplify

One of the most popular platforms to deploy and host modern React applications is AWS Amplify Console. It provides custom domain setup, globally available CDNs, password protection, and feature branch deployments.

Core Features

  • Authentication
  • DataStore
  • Analytics
  • Functions
  • Geo
  • API
  • Predictions

Pricing

  • AWS Amplify enables users to start creating the backend of an application for free, and then they can start paying for specific functionalities if required. Besides this, hosting an application with AWS Amplify is free for the first 12 months for up to 1,000 build minutes per month; after that, it charges $0.01 per build minute.

Deploy React App with AWS Amplify

For React app development companies, deploying an application can sometimes be a daunting task. But with the right tools, React app developers can carry out the process very smoothly. For this, one of the most effective options is Amazon Web Services (AWS). It provides a cost-effective and simple solution for hosting static web apps. Here, we will have a look at a step-by-step process that a developer can follow while deploying a React app on AWS.

Prerequisites:

Before starting the deployment process, here are a few prerequisites that are required:

  • A React application: The development team must have experience in working with React applications.
  • Amazon Web Services (AWS) Amplify Account: An account for AWS Amplify is required if one doesn’t have it.

Step 1: Create React App Project

The very first step of this guide is to create a React application with the use of npm. npm is an excellent tool for scaffolding and managing modern web projects, enabling a faster and more efficient app development experience.

Here, the React developers need to open the terminal and run the following command to create a new project setup: 

npx create-react-app react-app-demo

Now, the developer has to push the application to a version control platform like Bitbucket or GitHub. This is done so the developed app can be connected directly to the hosting platform.

Step 2: Select Amplify into AWS Account

The next step is to choose Amplify in the AWS account and for this, the developer needs to first log in to the AWS Account. To get started with Amplify, click the “AWS Amplify” button as shown in the below image.

select amplify

Now, click on the “Get Started” Button to start the process.

Get Started with Amplify

Now, under the Amplify Hosting option, click on the “Get Started” button to host the application.

Host app

Step 3: Choose Your React App Repo

The next step is to select the application Repo, for which choose the “GitHub” Button to link your GitHub Account to the AWS Amplify Account.

select repo

Now, in the Add repository branch from the drop-down, choose the repository that you want to use to deploy an application on the AWS Amplify Account.

choose repo

Step 4: Configure Your React App

After selecting the repository, one needs to check the configuration of the application and verify the branch code that will be deployed. As seen in the below image, the app name will default to the project name; if you want to change it, this is the time. After that, the developer needs to check the settings in the “Build and test settings” section and make changes if required.

choose branch

After checking everything, the developer can click on the “Next” button.

Step 5: Deploy a React App

check build command and name

Now, you will be diverted to the review page where you can check the repository details along with the app setting details. And if all looks fine, you need to click on the “Save and deploy” button to deploy the app.

save and deploy

These steps will deploy the application successfully. If the React developer wants to check the deployed application, he can do so by clicking on the link shown in the below image.

build and check preview

Step 6: Preview of Your Deployed React App

While reviewing the application, the developer can check the URL and can change it if desired by configuring the website domain to an AWS Amplify account. Besides URLs, developers can configure different things like Access Control, Monitoring, Domain, Build Settings, and more.

preview
build and check preview in Amplify

Congratulations on a successful deployment! Your application is now live on AWS Amplify and accessible to the world. Share the link as needed to showcase your application.

2. Vercel

Another popular service for the deployment of React applications is Vercel. It is a new approach that developers follow to simplify the deployment process along with team collaboration while creating the React application. This is a service that supports importing source code from GitLab, GitHub, and Bitbucket. With the help of Vercel, developers can get access to starter templates that can help in creating and deploying applications. Besides this, it also offers HTTPS, Serverless functions, and continuous deployment.
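For instance, a serverless function can be added to a React project deployed on Vercel by placing a file in an api/ directory. A minimal sketch, assuming the @vercel/node types are installed (the file name api/hello.ts is illustrative):

// api/hello.ts
import type { VercelRequest, VercelResponse } from '@vercel/node';

// Vercel deploys each file under /api as its own serverless function.
export default function handler(req: VercelRequest, res: VercelResponse) {
  res.status(200).json({ message: 'Hello from a Vercel serverless function' });
}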

Core Features

  • Infinite Scalability
  • Observability as Priority
  • Intelligent Edge Caching
  • Image Optimization
  • Automatic Failover
  • Multi-AZ Code Execution
  • Atomic Deploys

Pricing

  • When it comes to hosting Hobby sites on Vercel, there is no charge but for commercial use, the Vercel platform charges from $20 per month per seat.

Deploy React app with Vercel

In the world of web development, developers can use different types of tools to deploy a React app. Here we will go through the step-by-step process of deploying a React app on Vercel.

Prerequisites:

Before any React app developer starts with this process, here are a few prerequisites for the same:

  • A React application: The development team must have experience in working on a React application that needs to be deployed.
  • Vercel Account: An account in Vercel is required.

Step 1: Build Your Application

Step 2: Login into the Vercel Account

The developer needs to log in to the Vercel Account. For this, they have to click on the “Continue with GitHub” button to log in with the help of the GitHub account. 

Vercel Login

Step 3: Choose Your React App Git Repository

Once the developer logs in, he will be asked to choose a git repository from his GitHub account. Here, one needs to click on the “Import” button next to the repository from which the application needs to be deployed.

Import Repo

Step 4: Configure Your React App

Now it’s time to check all configurations of the application. Here the developer needs to check the branch code and make changes in it if required. 

Then, as seen in the below image, the project name will be set to a default value by the system; if the developer wants to set a different React project name in the Vercel account, he can do so here.

Similarly, the default Build Settings commands will be set in the “Build and Output Settings” section, one can change them as per the requirements. 

Besides this, from the same page, one can set multiple environment variables in the app. For this, there is a section named “Environment Variables”.
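Inside the React code, a Create React App project reads such variables at build time through process.env, but only when their names start with REACT_APP_. A minimal sketch (REACT_APP_API_URL is an illustrative name, not from the article):

import React from 'react';

// Only variables prefixed with REACT_APP_ are embedded into a CRA build.
const apiUrl: string | undefined = process.env.REACT_APP_API_URL;

export const ApiInfo = () => (
  <p>API endpoint in use: {apiUrl ?? "not configured"}</p>
);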

After checking all the configurations and making the required changes, it’s time to deploy the application. For this, the developer needs to click on the “Deploy” button. 

Set Env

Step 5: Deploy React App

After that, you will see a page saying “Congratulations – You just Deployed a New React Project to Vercel”. This means that your application has been deployed. From this page, you can get an instant review of the application by clicking on the “Instant Preview” option.

deployed React app to Vercel

 Step 6: Preview of Your Deployed React App

On the application preview page, the developer can check the URL of the deployed application and can make changes to it by configuring the website domain in the Vercel account. To make changes, the developer needs to go to the “Settings” tab on Vercel. From here, a developer can also make changes to the security settings and environment variables.

preview
Change Settings

Congratulations on a successful deployment! Your React app is now live on Vercel and accessible to the world. Share the link as needed to showcase your application.

3. Firebase

Firebase is a widely used platform for developing and scaling React applications. This Google product offers services like application hosting, Cloud Firestore, authentication, cloud functions, and more.

Core Features

  • Realtime Database
  • Authentication
  • Cloud Messaging
  • Performance Monitoring
  • Test Lab
  • Crashlytics

Pricing

  • For the Spark Plan, Firebase doesn’t charge anything but this plan offers limited data storage, users, and functions. Another plan is the Blaze Plan for which Firebase charges as per the project type and its requirements.

Deploy React App with Firebase

Now, we will have a look at the React development process using Firebase after going through tools like Vercel and AWS Amplify. Here is the step-by-step process of deploying your React app on Firebase.

Prerequisites:

Before the developer starts working with Firebase, here are a few prerequisites:

  • A React application: The development team working on Firebase to deploy the React application must have experience in working on the same application. 
  • Firebase Account: An account in Firebase will be required.

Step 1: Build Your React App

Step 2: Create a project into Firebase Account

To login to the Firebase account, the developer needs to click the “Create a project” button.

create a project in Firebase

Now, one will have to type the project name and click on the “Continue” Button to start the process.

set project name

Now, click on the “Continue” button for the next step.

step 2

Select a country from the dropdown, check all checkboxes, and click on the “create project” button.

step 3

Step 3: Enable Hosting on Your App

Now, to enable the hosting setup of the application, the developer will have to select the “Hosting” tab from the left sidebar in the Firebase account and click on the “Get Started” button.

select hosting and started

Step 4: Install Firebase on Your React App

After getting started with the hosting process, the developer will have to follow an installation process. 

1. To install firebase-tool, the developer will have to use the command “npm install -g firebase-tools”.

install firebase

2. Now, log in to your Firebase account from your React application using the “firebase login” command.

google login

3. Initialize Firebase in your React application using the “firebase init” command.

firebase init

Now, the developer will have to select the “Hosting” option.

select hosting with optional

Now, the development team will have to choose the “Use an existing project” option.

use existing project

Here, one will have to choose a newly created project from the options.

select app

The application is created in the build folder by default, so here the same will be used as the public directory.

options

After this, to build the React app, the developer will have to run the “npm run build” command.

run build

Step 5: Deploy React App

After this, it’s time to deploy Firebase hosting sites. For this, the developer will have to run the command “firebase deploy”.

Deploy React app to Firebase

Step 6: Preview of Your Deployed App

Once the application is deployed, the developer can preview it and configure the website domain if required from the Firebase account.

Preview of your Deployed App
preview links on cmd

Congratulations on a successful deployment! Your app is now live on Firebase and accessible to the world. Share the link as needed to showcase your application.

4. Netlify

The next popular service to deploy a React application in our list is Netlify. This is an easy-to-use service. Developers can import projects from Bitbucket, GitHub, and GitLab. They can create multiple project aliases using this service and deploy it. 

Core Features

  • Modernize Architecture
  • Faster Time-to-Market
  • Multi-cloud Infrastructure
  • Robust Integrations
  • Effortless Team Management
  • Advanced Security
  • Seamless Migration

Pricing

  • The basic plan of Netlify is free, then it offers a Pro plan that charges $19 per month for each member, and the enterprise plan is custom-made for the companies as per their requirements and project types. 

Deploy React App with Netlify

Here, to simplify the otherwise daunting task of React deployment, we will use Netlify. This is the step-by-step process of deploying your React app on Netlify.

Prerequisites:

Here are a few prerequisites for working with Netlify:

  • A React application: A working React application that is ready to be deployed.
  • Netlify Account: An account on Netlify is required.

Step 1: Build Your ReactJS App

Step 2: Login into Netlify Account

To log into the Netlify account, the developer will have to click the “Log in with GitHub” button on the Netlify home page.

Netlify login

Step 3: Choose Your React App Repo

Now, to import the existing project, the developer must click on the “Import from Git” Button to link the GitHub Account to the Netlify Account.

Import from GIT

After this, by clicking on the “Deploy with GitHub” option, the developer will be able to import the app repositories from GitHub Account.

click Deploy with GitHub

From the list, the developer will have to select the Git repository of the application they want to deploy from their GitHub account.

select repo

Step 4: Configure Your React App

After importing the application repository, the developer can look at all the application configurations. Here, the developer can check the code of the application and make changes to it if required. 

Netlify sets a default project name, which the developer can change. Similarly, the build command is set by default and can also be changed in the “Build Command” section.

Besides this, from the same page, the developers can also set multiple environment variables in the React app. This can be done from the “Environment Variables” section.

Now, to deploy the application after configuring it, the developer needs to click on the “Deploy reactapp-demo” button.

Add variable
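As an alternative to setting these values in the dashboard, the same configuration can be kept in a netlify.toml file at the root of the repository. Below is a minimal sketch; the build command, publish directory, and environment variable are assumptions and should match your own project:

[build]
  command = "npm run build"
  publish = "build"

[build.environment]
  REACT_APP_API_URL = "https://api.example.com"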

Step 5: Deploy React App

Now, the developer will have to go to the “Deploys” section and click on the “Open production deploy” option to get a preview of the deployed React app.

check deployed preview

Step 6: Preview of Your Deployed App

While reviewing the application, the developers can also change the URL of the application from the website domain configuration option in the Netlify account.

Preview
team overview

Congratulations on a successful deployment! Your React app is now live on Netlify and accessible to the world. Share the link as needed to showcase your application.

5. Heroku

Heroku is used by a large developer community to deploy React applications. This service offers support for various programming languages along with features like a custom domain, a free SSL certificate, and Git integration.

Core Features

  • Modern Open-Source Languages Support
  • Smart Containers
  • Elastic Runtime
  • Trusted Application Operations
  • Leading Service Ecosystem
  • Vertical and Horizontal Scalability
  • Continuous Integration and Deployment

Pricing

  • When someone wants to launch hobby projects, Heroku doesn’t charge anything. But for commercial projects, one will have to pay $25 per month, as it gives advanced features like SSL, memory selection, and analytics.

Deploy React App with Heroku using Dashboard

After going through various React deployment platforms, we will now go through the step-by-step process of deploying the React application on Heroku.

Prerequisites:

Before starting with Heroku, here are a few prerequisites:

  • A React application: A working React application that is ready to be deployed.
  • Heroku Account: The development team must have an account in Heroku.

Step 1: Build Your React App

Step 2: Install Heroku on the System

The developer needs to install the Heroku CLI on the system. For this, the command “npm install heroku -g” must be run in the terminal.


To check whether the Heroku CLI is already installed on the system, run the “heroku -v” command.

success install heroku

Step 3: Log in to Your Heroku Account on the System

Now, after installing the Heroku CLI, it’s time to log in to the platform. For this, the “heroku login” command can be used; it opens a browser window where the developer can authenticate with their Heroku account.

Heroku login

After login, you can check whether the login is successful or failed.

done login

Step 4: Create React App on Heroku

Now, the developer can create a new app on the Heroku platform by choosing the “Create New App” button.

create new app

The developer will have to enter the application name and then click on the “Create app” button.

create a app

Now, we need to connect the Git repository to the Heroku account.

After creating an app, the developer needs to find the option “Connect to GitHub” and choose that option.

choose gitHub

After clicking on the “Connect to GitHub” button, you will get a popup where you can write your login credentials for GitHub.

connect github

Now choose the Git repository of the application that needs to be deployed and click on the “Connect” button to connect that repository.

connect repo

After that, select the branch of the code that needs to be deployed and then click on the “Deploy Branch” button to deploy the application.

select branch and deploy
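Note that Heroku builds the app with its Node.js buildpack, so the repository needs a way to serve the production build at runtime. One common approach (an assumption, not shown in the screenshots above) is to add the serve package and a start script to package.json so Heroku can serve the static build on the port it assigns:

{
  "scripts": {
    "build": "react-scripts build",
    "start": "serve -s build -l $PORT"
  },
  "dependencies": {
    "serve": "^14.0.0"
  }
}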

Step 5: Preview of Your Deployed React App

Once the application is deployed, you can open its preview, check the URL of the application, and change the URL if required by configuring a custom domain.

preview

Congratulations on a successful deployment! Your React app is now live on Heroku and accessible to the world. Share the link as needed to showcase your application.

6. AWS S3

AWS S3 (Simple Storage Service) is an object storage service used to store and retrieve data and information over the internet. It offers its services via a web services interface.

Core Features

  • Lifecycle Management
  • Bucket Policy
  • Data Protection
  • Competitor Services
  • Amazon S3 Console

Pricing

  • AWS S3 Standard, the general-purpose storage class, starts at $0.023 per GB; S3 Intelligent-Tiering, an automatic cost-saving approach for data, also starts at $0.023 per GB; and the other S3 storage classes start from $0.0125 per GB.

Deploy React App to AWS S3

The last React app deployment tool in our list is Amazon S3 (Simple Storage Service). It is a simple and cost-effective solution for hosting applications and storing data. Here, we will go through the step-by-step process of deploying your React app on AWS S3.

Prerequisites:

Before starting with AWS S3, here are some of the prerequisites:

  • A React application: A working React application that is ready to be deployed.
  • AWS Account: An account in AWS is required to access AWS services.

Step 1: Build Your React App

In this guide, we’ll create a React app using Vite, a fast modern build tool. First of all, we will create a React application, and for that, the developer will have to run the following command in the terminal.

 npm create vite@latest demo-react-app

Now, to create a production-ready build, one needs to run the below-given command:

 npm run build

This command will help the developer to create an optimized production build in the “dist” directory.

Step 2: Create an AWS S3 Bucket

Now, log in to the AWS Management Console, open the S3 service, and click on the “Create bucket” button.

Create an AWS S3 Bucket

The developer will have to choose a unique name for the bucket and then select the region that is closest to the target audience of the business for improved performance.

create bucket

Step 3: Configure S3 Bucket to Host a Static Website

Enable Static Website Hosting


Now, after entering the bucket, the next thing to do is click on the “Properties” tab from the top of the page. After that scroll down to the “Static website hosting” section inside the Properties tab and click on the “Edit” button next to the “Static website hosting” section.

From here, choose “Host a static website” and set index.html as both the Index document and the Error document of the project.
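The same configuration can also be applied from the terminal with the AWS CLI, assuming the CLI is installed and configured with credentials; the bucket name below is a placeholder:

aws s3 website s3://your-bucket-name --index-document index.html --error-document index.html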

Step 4: Configure Settings and Permissions

After this, it’s time to configure the permissions and settings of the AWS S3 bucket so that the application can be accessed publicly by users. For this, the developer will have to follow the below-given steps:

Disable Block Public Access Permissions

  1. Inside the “Permissions” tab of the bucket, find the “Block public access” section as these settings specify if the business owner wants the application to be accessed by the public or not.
  2. Then click on the “Edit” button to access the settings that can help in blocking public access.
  3. Disable all the “Block public access” settings to make the app publicly accessible; if you don’t want public access, leave these settings enabled.

Besides, when you are inside the “Permissions” tab, find the “Bucket Policy” section. In this section, click on the “Edit” button if you want to create a policy that allows public access to the files of the application. Then copy and paste the below-given JSON policy, adjusting it to your requirements (replace the bucket name in the Resource ARN with your own).

 {
	"Version": "2012-10-17",
	"Statement": [
	 {
		"Sid": "PublicReadGetObject",
		"Effect": "Allow",
		"Principal": "*",
		"Action": "s3:GetObject",
		"Resource": "arn:aws:s3:::www.sysgenpro.demo.com/*"
	 }
	]
}
edit bucket policy

By applying the above-given settings and permissions instructions, the AWS S3 bucket will be ready to serve your React application to the public with some controls. 

Step 5: Publish React App to S3 then Access it with a Public Link

Now, the developer will have to publish the application and for that, the following steps will be useful.

Upload the Contents of Your Build Folder to AWS S3

First, the developer will have to click on the “Upload” button to begin the process. Then select all the content present in the React application’s “dist” folder but not the folder itself. After that, the developer will have to commence the upload to copy these contents to the AWS S3 bucket.

Upload the Contents of Your Build Folder to AWS S3
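If you prefer the terminal, the AWS CLI can upload the build output in one step; this sketch assumes a Vite build in the dist folder and a placeholder bucket name:

aws s3 sync dist/ s3://your-bucket-name --delete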

Use the Public Link to Access Your React App

Now, after the uploading process is completed, the developer will have to return to the “Properties” tab of the S3 bucket, find the public link in the “Static website hosting” section, and click the link to open the React application in a web browser.

Static website hosting

Congratulations on a successful deployment! Your React app is now live on AWS S3 and accessible to the world. Share the link as needed to showcase your application.

7. Conclusion

In this blog, we had a look at some amazing services that can be used to host and deploy React applications. All these platforms have their own pros and cons along with different methods and approaches to deploy an application. React app development companies can choose any of these platforms as per the application’s type, complexity, and requirement. 

FAQs:

Can I deploy a React app without a server?

Yes, it is possible to deploy your React App without a server. Just bundle your app during development by allowing the build tool to bundle and minify the JS code, CSS, and all dependencies in separate files. Those JS and CSS files contain necessary app code and are referred to by index.html. Now that you have everything in your app bundled together, you don’t need a server or an NPM module and can just serve your app as a static website. 
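For example, assuming a Create React App style build, the bundled output can be generated and previewed locally with the serve package from npm (an assumption; any static file host works) before uploading it to a static hosting service:

npm run build
npx serve -s build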

Is Firebase free for deployment?

Yes, you can use Firebase for free deployment. However, its no-cost tier plan is limited to a certain level of products. To use other high-level products, you have to subscribe to its paid-tier pricing plan. 

Which is better: AWS or Firebase?

Both platforms fulfill distinct project requirements, so there is no direct competition between AWS and Firebase. If you want to speed up app development, minimize deployment time, and have seamless hosting, then Firebase is the right pick for you. But if you are working on a more sophisticated project that demands extensive custom programming and server-level access, then AWS is the way to go.

Is Netlify better than Vercel?

Serverless functions are supported in both Netlify and Vercel. What makes Vercel an excellent choice for serverless applications is that it is built on a serverless architecture. However, integrating serverless workflows into your project is also seamless with Netlify, as it supports AWS Lambda functions.

The post Guide to Deploy React App on Various Cloud Platforms appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/deploy-react-app/feed/ 0
Kubernetes Best Practices to Follow https://www.sysgenpro.com/blog/kubernetes-best-practices/ https://www.sysgenpro.com/blog/kubernetes-best-practices/#respond Tue, 06 Feb 2024 11:53:40 +0000 https://www.sysgenpro.com/blog/?p=12583 Kubernetes is one of the most widely used and popular container orchestration systems available in the market. It helps software development companies to create, maintain, and deploy an application with the latest features as this platform is the de-facto standard for the modern cloud engineer.

The post Kubernetes Best Practices to Follow appeared first on sysgenpro Blog.

]]>

Key Takeaways

  • When working with Kubernetes and its best practices, developers often struggle to decide which practice helps in which circumstance. To clear up this confusion, this blog walks through some of the top Kubernetes practices, and here is what a Kubernetes developer will take away from it:
    1. Developers will learn that security isn’t an afterthought in the Kubernetes app development process; DevSecOps emphasizes the importance of integrating security at every phase.
    2. These Kubernetes practices, combined with authorization controls, help developers create modern applications.
    3. Though there are many security professionals in the app development field, knowing how to secure software is the key factor behind strong development, automation, and DevSecOps practices.
    4. These best practices can also help software supply chains address emerging security issues.

Kubernetes is one of the most widely used and popular container orchestration systems available in the market. It helps software development companies to create, maintain, and deploy an application with the latest features as this platform is the de-facto standard for the modern cloud engineer.

This is how Kubernetes master, Google staff developer advocate, and co-author of Kubernetes Up & Running (O’Reilly) Kelsey Hightower acknowledges it:

“Kubernetes does the things that the very best system administrator would do: automation, failover, centralized logging, monitoring. It takes what we’ve learned in the DevOps community and makes it the default, out of the box.” – Kelsey Hightower

In a cloud-native environment, many of the more common sysadmin duties, such as server upgrades, patch installations, network configuration, and backups, are less important. You can let your staff focus on what they do best by automating these tasks with Kubernetes. The Kubernetes core already has some of these functionalities, such as auto scaling and load balancing, while other functions are added via extensions, add-ons, and third-party applications that utilize the Kubernetes API. There is a huge and constantly expanding Kubernetes ecosystem.

Though Kubernetes is a complex system to work with, there are some of its practices that can be followed to have a solid start with the app development process. These recommendations cover issues for app governance, development, and cluster configuration. 

1. Kubernetes Best Practices

Here are some of the Kubernetes best practices developers can follow:

1.1 Kubernetes Configuration Tips

Here are the tips to configure Kubernetes:

  • The very first thing to do while defining Kubernetes configurations is to specify the latest stable API version.
  • The configuration files should be stored in version control before being pushed to the Kubernetes cluster. This allows the development team to quickly roll back a configuration change, and it also helps with the restoration and re-creation of a cluster.
  • Another tip is to group related objects into a single file whenever possible, since one file is easier to manage than several (see the sketch after this list).
  • The developer should write the application configuration files using YAML rather than JSON. These formats can be used interchangeably in most situations, but YAML is more user-friendly.
  • Another tip is to keep related configuration files in a directory, so that many kubectl commands can be called on the directory at once.
  • The developer should put object descriptions in annotations to allow better introspection.
  • Avoid specifying default values unnecessarily; minimal, simple configurations are less error-prone.
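As a small illustration of grouping related objects, here is a sketch of a single manifest file containing a Service and a Deployment separated by ---, with a description added as an annotation (the names and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-demo-service
  annotations:
    description: "Exposes the demo app inside the cluster"
spec:
  selector:
    app: my-demo-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-demo-app
  template:
    metadata:
      labels:
        app: my-demo-app
    spec:
      containers:
        - name: my-demo-app
          image: nginx
          ports:
            - containerPort: 80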

1.2 Use the Latest K8s Version

Another best practice of Kubernetes is to use the latest version. Developers are often hesitant to upgrade because a new Kubernetes version brings features they are unfamiliar with, or that have limited support or are incompatible with the current application setup.

Even so, the most important thing to do is to keep Kubernetes updated to the latest stable version, as it brings performance improvements and security fixes for known issues. Besides this, if any issues are faced while using the latest version, the developers can turn to community-based support.

1.3 Use Namespaces

Using namespaces in Kubernetes is also a practice that every Kubernetes app development company must follow. The developers should use namespaces to organize the objects of the application and create logical partitions within the Kubernetes cluster, which improves security. In addition to this, Kubernetes comes with three namespaces out of the box: kube-public, kube-system, and default. RBAC can be used to control access to specific namespaces whenever the team needs to reduce a group’s access and limit the blast radius.

Besides this, LimitRange objects can be configured against namespaces in the Kubernetes cluster. This is done to specify the standard size of containers that are deployed in the namespace. Here, the developers can also use ResourceQuota objects to limit the namespace’s total resource consumption (a sketch of both follows the YAML example below).

YAML Example:
# This YAML creates a namespace
apiVersion: v1
kind: Namespace
metadata:
  name: my-demo-namespace
  labels:
    name: my-demo-namespace
---
# This YAML creates a pod in the namespace created above
apiVersion: v1
kind: Pod
metadata:
  name: my-demo-app
  namespace: my-demo-namespace
  labels:
    app: my-demo-app
spec:
  containers:
    - name: my-demo-app
      image: nginx
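To make the LimitRange and ResourceQuota points above concrete, here is a minimal sketch for the namespace created above; the actual limits are illustrative assumptions and should be tuned per team:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: my-demo-quota
  namespace: my-demo-namespace
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "4"
    limits.memory: 8Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: my-demo-limits
  namespace: my-demo-namespace
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 256Mi
      defaultRequest:
        cpu: 250m
        memory: 128Mi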

1.4 Avoid Using HostPort and HostNetwork

Avoiding the use of hostPort and hostNetwork is another Kubernetes best practice. Here is what can be done instead. First of all, the developers must create a Service before the Deployments or ReplicaSets, and before any workloads that need to access it. When Kubernetes starts a container, it provides environment variables pointing to all the Services that were running when the container was started. For instance, if a Service called “foo” exists, all containers will get the below-specified variables in their environment:

FOO_SERVICE_HOST=
FOO_SERVICE_PORT=
  • This does imply an ordering requirement: any Service a Pod wants to access must be created before the Pod itself, otherwise the environment variables will not be populated. DNS does not have this restriction.
  • Besides this, an optional (though strongly recommended) cluster add-on is a DNS server. The DNS server watches the Kubernetes API for newly created Services and creates a set of DNS records for each of them.
  • Another best practice is to avoid the use of hostNetwork, and to avoid specifying a hostPort for a Pod. Binding a Pod to a hostPort limits the number of places the Pod can be scheduled, because each host IP, host port, and protocol combination must be unique.
  • The developer should also consider using headless Services for service discovery when kube-proxy load balancing is not needed (see the sketch after this list).
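A headless Service is simply a Service with clusterIP set to None, so DNS returns the individual Pod IPs instead of a single virtual IP. A minimal sketch (names and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: my-demo-headless
  namespace: my-demo-namespace
spec:
  clusterIP: None   # headless: no virtual IP and no kube-proxy load balancing
  selector:
    app: my-demo-app
  ports:
    - port: 80
      targetPort: 80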

1.5 Using kubectl

Using kubectl is also a practice that can be considered by Kubernetes developers. Here, the development team can make use of the following things:

  • First of all, the developers must consider using kubectl apply -f <directory>. This applies the configuration found in all .yaml, .yml, and .json files in <directory>.
  • Then, use kubectl create deployment and kubectl expose to quickly create single-container Deployments and Services.
  • Use label selectors for get and delete operations instead of specific object names (example commands are shown below).
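A few example commands for the points above (the directory, deployment, and label names are hypothetical):

# Apply every .yaml, .yml, and .json manifest in a directory
kubectl apply -f ./manifests/

# Quickly create a single-container Deployment and expose it as a Service
kubectl create deployment my-demo-app --image=nginx
kubectl expose deployment my-demo-app --port=80

# Operate on objects by label selector instead of by name
kubectl get pods -l app=my-demo-app
kubectl delete pods,services -l app=my-demo-app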

1.6 Use Role-based Access Control (RBAC)

Using RBAC is another best practice that helps develop and operate Kubernetes applications safely. The general approach while working with Kubernetes is that minimal RBAC rights must be assigned to service accounts and users: only grant permissions that are explicitly required for an operation. Every cluster is different, but some general rules apply to all of them:

  • Whenever possible, the development team should avoid granting wildcard permissions to all resources. Because Kubernetes is an extensible system, wildcard rights granted over all object types of the current version will also apply to object types added in future versions.
  • Permissions should be assigned at the namespace level where possible, using RoleBindings instead of ClusterRoleBindings so that rights are scoped per namespace.
  • The development team must avoid adding users to the system:masters group, as any member of this group can bypass all RBAC rights.
  • Administrators should not use cluster-admin accounts unless strictly required. Giving a lower-privileged account impersonation rights instead helps avoid accidental modification of cluster resources.
YAML Example:
# This YAML creates a role named "pod-reading-role"
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reading-role
rules:
- apiGroups: [""]      # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
# This YAML creates a role binding that allows user "demo" to read pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: demo-user-read-pods
  namespace: default
subjects:
- kind: User
  name: demo    # name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reading-role   # this must match the name of the Role
  apiGroup: rbac.authorization.k8s.io

1.7 Follow GitOps Workflow

Another Kubernetes best practice is to follow a GitOps workflow. To have a successful Kubernetes deployment, developers must give thought to the workflow of the application. A Git-based workflow is an ideal choice as it enables automation through CI/CD (Continuous Integration / Continuous Delivery) pipelines, which makes application deployment faster and more efficient. In addition, CI/CD provides an audit trail of the software deployment process.
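As a rough illustration only, a Git-based pipeline can apply the manifests stored in the repository on every push to the main branch. The sketch below uses a GitHub Actions style workflow and assumes kubectl is already authenticated against the cluster (in practice you would add a credentials step, or use a dedicated GitOps operator such as Argo CD or Flux):

# .github/workflows/deploy.yaml (hypothetical)
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Apply Kubernetes manifests
        run: kubectl apply -f k8s/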

1.8 Don’t Use “Naked” Pods

Not using naked pods is another best practice that must be considered. Here, there are a few points to look out for and they are as below:

  • Naked Pods (Pods not managed by a Deployment or ReplicaSet) should be avoided wherever possible, as they will not be rescheduled if a node fails.
  • A Deployment creates a ReplicaSet to make sure that the required number of Pods is always available, and it specifies a strategy for replacing Pods when needed; naked Pods provide none of this.
YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-deployment
  namespace: my-demo-namespace
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-demo-app
  template:
    metadata:
      name: my-demo-app
      labels:
        app: my-demo-app
    spec:
      containers:
        - name: my-demo-app
          image: nginx

1.9 Configure Least-Privilege Access to Secrets

Configuring least-privilege access to Secrets is also a best practice, as it makes developers plan the access control mechanism, such as Kubernetes RBAC (Role-Based Access Control), up front. In this case, the developers must follow the below-given guidelines for access to Secret objects (a minimal RBAC sketch follows the recommendations below).

  • Humans: Restrict get, watch, and list access to Secrets. Cluster administrators are the only ones that should be allowed such access.
  • Components: Restrict list and watch access to only the most privileged, system-level components.

Basically, in Kubernetes, a user who has access to create a Pod that uses a Secret can also see the value of that particular Secret. Even if the cluster’s default policies don’t allow the user to read the Secret directly, the same user can get access to the Secret by running such a Pod. To limit the impact of Secret data exposure, the following recommendations should be considered:

  • Implementation of audit rules that alert the admin on some specific events.
  • Secrets that are used must be short-lived.
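A minimal sketch of what least-privilege access to a single Secret can look like with RBAC; the Secret name and namespace are hypothetical, and note that list and watch are deliberately left out:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-secret-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["app-db-credentials"]   # access to this one Secret only
  verbs: ["get"]                          # no list or watch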

1.10 Use Readiness and Liveness Probes

Readiness and Liveness probes are known as the most important parts of the health checks in Kubernetes. They help the developers to check the health of the application.

A readiness probe ensures that requests are only routed to a Pod when it is ready to serve them; if the Pod is not ready, traffic is directed elsewhere. A liveness probe, on the other hand, checks whether the application is still running as expected; if the health check fails, the kubelet restarts the container.

YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-demo-deployment
  namespace: my-demo-namespace
spec:
  selector:
    matchLabels:
      app: my-demo-app
  template:
    metadata:
      name: my-demo-app
      labels:
        app: my-demo-app
    spec:
      containers:
        - name: my-demo-app
          image: nginx:1.14.2
          readinessProbe:
            httpGet:
              path: /ready
              port: 9090
            initialDelaySeconds: 30
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 9090
            initialDelaySeconds: 30
            periodSeconds: 5

1.11 Use a Cloud Service to Host K8s

Hosting a Kubernetes cluster on your own hardware can be a bit complex, but cloud providers offer Kubernetes as a managed platform, for example EKS (Amazon Elastic Kubernetes Service) on Amazon Web Services and AKS (Azure Kubernetes Service) on Azure. This means that the infrastructure of the application can be handled by the cloud provider, and tasks like adding and removing nodes from the cluster can also be taken care of by the cloud service.

1.12 Monitor the Cluster Resources

Another Kubernetes best practice is to monitor the cluster’s control plane components to keep resource usage under control. As the control plane is the core of Kubernetes, these components keep the system up and running. The control plane and its supporting components include the Kubernetes API server, kubelet, etcd, controller-manager, kube-dns, and kube-proxy.
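As a starting point, the commands below give a quick view of resource usage and of the system components; kubectl top assumes the metrics-server add-on is installed in the cluster:

# Resource usage per node and per pod (requires metrics-server)
kubectl top nodes
kubectl top pods --all-namespaces

# Check that the control plane and system components are healthy
kubectl get pods -n kube-system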

1.13 Secure Network Using Kubernetes Firewall

The last item in our list of Kubernetes best practices is securing the network by putting a firewall in front of the Kubernetes cluster and using network policies to restrict internal traffic. A firewall in front of the cluster helps restrict the resource requests that are sent to the API server.
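Network policies are the Kubernetes-native way to restrict internal traffic. A common starting point is a default-deny policy per namespace, sketched below for the hypothetical namespace used earlier; specific allow rules are then added on top of it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-demo-namespace
spec:
  podSelector: {}    # applies to every pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules are defined, so all inbound traffic is denied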

2. Conclusion

As seen in this blog, many different best practices can be considered to design, run, and maintain a Kubernetes cluster. These practices help developers ship modern applications to the world. Which practices to put into action, and which will help the application become a success, is something the Kubernetes app developers have to decide, which is why the engineers involved need solid Kubernetes expertise.

3. FAQs:

What is the main benefit of Kubernetes?

Some of the main benefits of Kubernetes are efficient use of namespaces, robust security through firewalls and RBAC, and monitoring of control plane components.

How do I improve Kubernetes?

To improve Kubernetes performance, the developer needs to focus on using optimized container images, defining resource limits, and more. 

What is a cluster in Kubernetes?

In Kubernetes, the cluster is an approach that contains a set of worker machines called nodes. 

What is the biggest problem with Kubernetes?

The biggest issue with Kubernetes is its vulnerability and complexity.

The post Kubernetes Best Practices to Follow appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/kubernetes-best-practices/feed/ 0
Microservices Testing Strategies: An Ultimate Guide https://www.sysgenpro.com/blog/microservices-testing-strategies/ https://www.sysgenpro.com/blog/microservices-testing-strategies/#respond Tue, 23 Jan 2024 05:41:36 +0000 https://www.sysgenpro.com/blog/?p=12295 In today's time, software development companies prefer to use the latest approaches for application development, and using microservices architecture is one such initiative. Developers use microservices architecture and divide functional units of the application that can work as individual processes. These singular functions can easily address user traffic and still remain lightweight.

The post Microservices Testing Strategies: An Ultimate Guide appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. Formulating an effective testing strategy for microservices is a challenging task. The key is a combination of testing strategies and the right tools that provide support at each layer of testing.
  2. After services are integrated, the risk of failures and the cost of correction are high, so a good testing strategy is required.
  3. Tools like Wiremock, Goreplay, Hikaku, VCR, Mountebank, and many others are used for microservices testing purposes.
  4. For an effective approach, there should be a clear consensus on the test strategy, and the required amount of testing should be applied at the right time with suitable tools.
  5. The microservices architecture leaves scope for unit testing, integration testing, component testing, contract testing, and end-to-end testing, so the team must utilise these phases properly as per the requirements.

In today’s time, software development companies prefer to use the latest approaches for application development, and using microservices architecture is one such initiative. Developers use microservices architecture and divide functional units of the application that can work as individual processes. These singular functions can easily address user traffic and still remain lightweight. But they need to be frequently updated to scale up the application. Besides this, the developers also have to carry out microservices testing strategies to make sure that the application is performing the way it is expected to.

Let’s first understand what types of challenges developers are facing while using a microservices architecture.

1. Challenges in Microservices Testing

Microservices and monolithic architectures have many differences. It also comes with some challenges that every developer should know before testing microservices.

Challenge Description
Complexity
  • Though individual services are very simple, the microservices system as a whole is complex, which means developers need to be careful in choosing and configuring the databases and services in the system.
  • Even testing and deploying each service can be challenging as this is a distributed nature.
Data Integrity
  • Microservices come with distributed databases, which is problematic for data integrity, as business applications require updates over time and database upgrades become compulsory for each service.
  • This is the case when there is no data consistency. So, testing becomes more difficult.
Distributed Networks
  • Microservices can be deployed on various servers in different geographical locations, adding latency and forcing the application to account for network disruptions. In this case, tests that rely on the network will fail if there is any fault, and this will interrupt the CI/CD pipeline.
Test Area
  • Every microservice usually points to many API endpoints which means that testable surfaces increase and developers have to work on more areas which is a bit time-consuming task.
Multiple frameworks used for development
  • Though the developers choose the best-suited microservices frameworks and programming languages for each microservice when the system is big, it becomes difficult to find a single test framework that can work for all the components.
Autonomous
  • The app development team can deploy microservices anytime but the only thing they need to take care of is that the API compatibility doesn’t break.
Development
  • Microservices can be independently deployable, so extra checks are required to ensure they function well. The boundaries also need to be set correctly for microservices to run perfectly fine.

2. Microservices Testing Strategy: For Individual Testing Phases

Now let us understand the testing pyramid of microservices architecture. This testing pyramid is developed for automated microservices testing. It includes five components. 

Microservices Testing Strategies

The main purpose of using these five stages in microservices testing is: 

Testing Type Key Purpose
Unit Testing
  • To test the various parts (classes, methods, etc.) of the microservice.
Contract Testing
  • To test API compatibility. 
Integration Testing
  • To test the communication between microservices, third-party services, and databases. 
Component Testing
  • To test the subsystem’s behavior. 
End-to-End Testing
  • To test the entire system. 

2.1 Unit Testing

The very first stage of testing is unit testing. It is mainly used to verify the function’s correctness against any specification. It checks a single class or a set of classes that are closely coupled in a system. The unit test that is carried out either runs with the use of the actual objects that are able to interact with the unit or with the use of the test doubles or mocks.

Basically, in unit tests, even the smallest piece of the software solution is tested to check whether it behaves the way it is expected to or not. These tests are run at the class level. Besides this, unit tests differ in whether the unit under test is isolated from its collaborators or not. The tests are written by developers with the use of regular coding tools; the only difference lies in the two styles shown below.

Solitary Unit Testing: 

  • Solitary unit tests ensure that the methods of a class are tested.  
  • It ensures that the test result is always deterministic.
  • In this type of unit testing, collaborations and interactions between an object of the application and its dependencies are also checked.
  • For external dependencies, mocking or stubbing to isolate the code is used.
Solitary Unit Testing

Sociable Unit Testing: 

  • These tests are allowed to call other services. 
  • These tests are not deterministic, but they provide good results when they pass. 
Sociable Unit Testing

Basically, as we saw here, unit tests used alone do not offer a guarantee of the system’s behavior. The reason is that unit testing covers the core of each module but does not cover the modules when they work in collaboration. Therefore, to make sure that the unit tests run successfully in isolation, developers make use of test doubles and ensure that each module works correctly on its own.

2.2 Integration Testing

The second stage of microservices testing is Integration tests. This type of testing is used when the developer needs to check the communication between two or more services. Integration tests are specially designed to check error paths and the basic success of the services over a network boundary. 

In any software solution, there are different components that interact with one another as they may functionally depend on each other. Here, the integration test will be used to verify the communication paths between those components and find out any interface defects. All the test modules are integrated with each other and they are tested as a subsystem to check the communication paths in this testing method.

There can be three types of communications that happen in the microservices architecture: 

  1. Between two different microservices
  2. Between Microservice and third-party application
  3. Between Microservice and Database
Integration Testing

The aim of integration testing is to check the modules and verify if their interaction with external components is successful and safe. While carrying out such tests, sometimes it becomes difficult to trigger the external component’s abnormal behavior like slow response time or timeout. In such cases, special tests are written by the developers to make sure that the test can respond as expected. 

Basically, integration tests come with the goal of providing assurance that the coding schema matches the stored data.

2.3 Component Testing

Component tests are popular when it comes to checking the full function of a single microservice. When this type of testing is carried out, if there are any calls made by the code to the external services, they are mocked. 

Basically, a component is a coherent, well-encapsulated, and independently replaceable part of any software solution. But when it comes to a microservice architecture, these component tests become a service. This means that developers perform component tests in microservices architecture to isolate any component’s complex behavior.

Besides this, component tests are more thorough than integration tests, as they can exercise many more paths. For instance, here we can check how the component responds to malformed network requests. This process can be divided into two parts.

In-process Component Testing

  • Test runners exist in the same process or thread as microservices.
  • Microservices can be started in an offline test mode.
  • This testing works only with single-component microservices.
In-process Component Testing

Out-process Component Testing

  • Appropriate for any size of components.
  • Components here are deployed unaltered in a test environment.
  • All dependencies in microservices are stubbed or mocked out.
Out-of-Process Component Test

2.4 Contract Testing

This type of testing is carried out when two microservices communicate via an interface and need a contract that specifies all the possible transactions and their data structures. Here, even the possible side effects of the inputs and outputs are analyzed to make sure that there is no security breach in the future. This type of testing can be run by the consumer, the producer, or both (a simplified contract document is sketched at the end of this subsection).

Consumer-side 

  • The downstream (consumer) team writes and executes the tests.
  • The tests run against a mocked version of the producer service.
  • The consumer is checked to see whether it can consume the producer’s API as expected.

Producer-side 

  • Producer-side contract tests run in the upstream (producer) service.
  • The clients’ API requests are checked against the details of the producer’s contract.
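To make this concrete, here is a simplified sketch of what a consumer-driven contract document (in the style of a Pact JSON file) can look like; the service names, path, and fields are hypothetical:

{
  "consumer": { "name": "web-frontend" },
  "provider": { "name": "inventory-service" },
  "interactions": [
    {
      "description": "a request for the availability of item 123",
      "request": { "method": "GET", "path": "/items/123" },
      "response": {
        "status": 200,
        "headers": { "Content-Type": "application/json" },
        "body": { "id": 123, "available": true }
      }
    }
  ],
  "metadata": { "pactSpecification": { "version": "2.0.0" } }
}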

2.5 End-to-End Testing

The last type of testing in our list is End-to-End Testing. This approach is used for testing microservices completely. This means that end-to-end testing checks the entire microservices application. It checks whether the system meets the client’s requirements and helps in achieving the goal. When this test is carried out by the developers, it doesn’t bother about the internal architecture of the application but just verifies that the system offers a business goal. In this case, when the software is deployed, it is treated as a black box before getting tested.

End-to-End Testing

Besides this, as end-to-end testing is more about business logic, it checks the proxies, firewall, and load balancer of the application because generally they are affected by the public interference from API and GUIs. In addition to this, end-to-end testing also helps developers to check all the interactions and gaps that are present in microservice-based applications. This means that testing microservices applications completely can be possible with end-to-end testing.

Now, let’s look at various scenarios and how these phases can apply. 

3. Microservices Testing Strategies for Various Scenarios

Here we will go through various microservices testing strategies for different scenarios to understand the process in a better way. 

Scenario 1: Testing Internal Microservices Present in the Application

This is the most commonly used strategy to test microservices. Let’s understand this with an example. 

For instance, there is a social media application that has two services like

  1. Selecting Photos and Videos
  2. Posting Photos and Videos 

Both services are interconnected with each other as there is close interaction between them in order to complete an event. 

Testing Internal Microservices Present in the Application
Testing Scopes  Description
Unit Testing
  • For each individual microservice, there is a scope of unit testing.
  • We can use frameworks like JUnit or NUnit for testing purposes.
  • First, one needs to test the functional logic.
  • Apart from that, internal data changes need to be verified.
  • For Example: If Selecting Photos and Videos Service returns a selectionID then the same needs to be verified within the service.
Integration Testing
  • Both the microservices are internally connected in our case.
  • In order to complete an event, both need to be executed in a perfect manner.
  • So, there is a scope for Integration testing.
Contract Testing
  • It is recommended to use testing tools that enable consumer-driven contract testing, such as Pacto, Pact, and Janus. In this testing, the data passed between services needs to be validated and verified; for this, one can use tools like SoapUI.
End-to-End Testing
  • End to End Testing, commonly referred to as E2E testing, ensures that the dependency between microservices is tested at least in one flow.
  • For example, an event like making a post on the app should trigger both the services i.e. Selecting Photos and Videos and Posting Photos and Videos.

Scenario 2: Testing Internal Microservices and Third-party Services

Let’s look at the scenario where third-party APIs are integrated with Microservices. 

For example, in a registration service, direct registration through the Gmail option is integrated. Here registration is modelled as a microservice and interacts with the Gmail API that is exposed for authenticating the user.

Testing Internal Microservices and Third-party Services
Testing Scopes Descriptions 
Unit Testing
  • The developers can perform unit tests to check the changes that happened internally.
  • Frameworks like xUnit are used to check the functional logic of the application after the change.
  • The TDD approach can also be considered whenever possible.
Contract Testing
  • The consumer microservice’s expectations are checked in a way that decouples it from the external API.
  • Test doubles can be created here using Mountebank or Mockito to define Gmail API.
Integration Testing
  • Integration tests are carried out if the third party offers a sandbox API. This type of testing checks whether the data is passed correctly from one service to another and whether the services are integrated as required.
End-to-End Testing
  • With end-to-end testing, the development team ensures that there are no failures in the workflow of the system.
  • One checks the dependencies between the microservices and ensures that all the functions of the application are working correctly.

Scenario 3: Testing Microservices that are Available in Public Domain

Let’s consider an e-commerce application example where users can check the items’ availability by calling a web API.  

Testing Microservices that are Available in Public Domain
Testing Scopes Descriptions
Unit Testing
  • Here, the development team can carry out unit testing to check all functions of the application that the services have defined.
  • This testing helps to check that all the functions of the services work perfectly fine as per the user’s requirements.
  • It also ensures that the data persistence is taken care of.
Contract Testing
  • This testing is essential in such cases.
  • It makes sure that the clients are aware of the contracts and have agreed upon them before availing of the facilities provided by the application.
  • Here, the owner’s contracts are validated, and later consumer-driven contracts are tested.
End-to-end Testing
  • Here we can test the workflow using End-to-end Testing. It enables software testing teams to make sure that the developed application offers facilities as per the requirement. End-to-end testing also ensures that the integration of services with external dependencies is secured.

4. Microservices Testing Tools

Here are some of the most popular Microservices testing tools available in the market.

  • Wiremock: It is a very popular simulator that is used by developers when they want to do integration tests. Unlike a general-purpose mocking library, WireMock works by running an actual HTTP server that the code under test can connect to as if it were a real web service (a sample stub mapping is shown after this list).
  • Goreplay: It is an open-source tool for network monitoring. It helps in recording live traffic of the application and this is why it is used by the developers to capture and replay live HTTP traffic.
  • Mountebank: It is a widely used open-source tool that enables software development companies to carry out cross-platform and multi-platform test doubles over the wire. With the help of Mountebank, the developers can simply replace the actual dependencies of the application and test them in the traditional manner.
  • Hikaku: It is a very popular test environment for microservices architecture. It helps the developers to ensure that the implementation of REST-API in the application actually meets its specifications. 
  • VCR: Developers use the VCR tool to record the tests that they carry out on the suite’s HTTP interactions. This recording can be later played for future tests to get accurate, fast, and reliable test results.
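For illustration, a WireMock stub is usually defined as a JSON mapping like the sketch below, which tells the embedded HTTP server how to answer a matching request; the URL and body are hypothetical:

{
  "request": {
    "method": "GET",
    "url": "/items/123"
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "jsonBody": { "id": 123, "available": true }
  }
}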

5. Conclusion

Microservices testing plays a very important role when it comes to modern software development tactics. It enables developers to offer applications that have greater flexibility, agility, and speed. There are some essential strategies that need to be carried out by the development teams when it comes to testing microservices applications in order to deploy a secure application and some of those microservices testing strategies are discussed in this blog. These automated tests enable the developers to easily cater to customer requirements by offering a top-notch application.

The post Microservices Testing Strategies: An Ultimate Guide appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/microservices-testing-strategies/feed/ 0
Staff Augmentation vs Managed Services https://www.sysgenpro.com/blog/staff-augmentation-vs-managed-services/ https://www.sysgenpro.com/blog/staff-augmentation-vs-managed-services/#respond Wed, 03 Jan 2024 06:01:53 +0000 https://www.sysgenpro.com/blog/?p=12404 Today, more than before, businesses are looking for ways to outsource their IT operations. The process of finding and employing in-house IT personnel may be lengthy, difficult, and expensive, not to mention unpleasant in the case of a fast-growing business or a temporary project.

The post Staff Augmentation vs Managed Services appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. In the comparison of staff augmentation vs managed services, the Staff Augmentation Model mainly offers an extension to the existing team, whereas in Managed Services the company outsources certain functions or projects to an experienced third-party organization.
  2. Staff Augmentation is often utilized alongside the Managed Services Model for specific services at certain points in time.
  3. The Staff Augmentation Model may become risky, costly, and less productive when the resources are inexperienced and there is a lack of time for training and development.
  4. IT companies utilizing the Staff Augmentation Model are ultimately sourcing external resources for their work. Alternatively, they can adopt an effective Managed Services Model to maximize value.
  5. For short-term goals, it is advisable to go with the Staff Augmentation Model, whereas for long-term initiatives that require a large team, the Managed Services Model is preferable.

Today, more than before, businesses are looking for ways to outsource their IT operations. The process of finding and employing in-house IT personnel may be lengthy, difficult, and expensive, not to mention unpleasant in the case of a fast-growing business or a temporary project.

IT staff augmentation vs managed services has always been an evergreen debate in the IT industry and these are the two most common types of IT outsourcing models. Both approaches are viable substitutes for hiring employees full-time, but which one works best for you will depend on the nature and scope of your projects.

With the help of the staff augmentation model, you may outsource a variety of different jobs and responsibilities. Under the managed services model, the client gives the entire problem and its resolution to an outside company. While managed services excel at the long-term management of operations like architecture and security, staff augmentation is excellent for the short-term scaling of specific operations.

In order to establish which option is preferable, we will compare staff augmentation vs managed services and explore the benefits and drawbacks of each.

1. What is IT Staff Augmentation?

In this context, “staff augmentation” refers to the practice of adding a new member to an organization’s existing team. A remote worker or team is hired for a limited time and for specific tasks, rather than being hired full-time.

They are not fixed workers, but they are completely incorporated into the internal team. Companies interested in adopting this strategy for project development will save significant time and resources.

When a corporation hires a permanent employee, it must formally register the worker, go through a lengthy onboarding procedure, pay taxes, and sit through extensive interviews, and parting ways with that employee later is even more difficult.

Indeed, firing an employee is a complicated process in many Western nations. In Europe, it is normal procedure to present evidence that an employee lacks the necessary level of qualification, and a specialized commission will rule on whether you may proceed. Think about all the time and effort you may waste dealing with this bureaucratic system. Staff augmentation is advantageous here because it enables you to satisfy your demand for a super-specialist with far less effort and expense. Let’s take a closer look at its benefits and drawbacks now.

Further Reading on:
IT Staff Augmentation vs Offshoring

Staff Augmentation vs Outsourcing

1.1 Pros of IT Staff Augmentation

Pros of IT Staff Augmentation

Effortless Teamwork
Your existing team continues to function normally alongside the additional resources; as a working arrangement, it is rock-solid reliable.

Staffing Adaptability
As needed, staffing levels may be quickly adjusted up or down. Furthermore, individuals sync up more efficiently and rapidly than disjointed teams.

High Proficiency at Low Cost
Adding new individuals to your team helps make up for any gaps in the expertise you may have. Because you are hiring people for their specialized expertise, you won’t have to spend time teaching them anything new.

In-house Specialist Expertise
You can put your attention where it belongs on growing your business and addressing IT needs by using staff augmentation to swiftly bridge skill shortages that arise while working on a software project that requires specialized knowledge and experience.

Reduce Management Issues
By using a staffing agency, you may reduce your risk and save money compared to hiring new employees. You have access to the big picture as well as any relevant details, are able to make decisions at any point in the procedure, and are kept in the loop the whole time.

Internal Acceptance
Permanent workers are more likely to swiftly adapt to working with temporary workers than they would be with an outsourced team, and they are less likely to worry about losing their employment as a result.

Keep to the Deadlines
When you need to get more tasks done in less time, but don’t have enough people to do it, staff augmentation can help. It can aid in the timely completion of tasks and the efficient execution of the project as a whole.

1.2 Cons of IT Staff Augmentation

Cons of IT Staff Augmentation

Training is Essential
As a result, it is imperative that you familiarize your temporary workers with the company’s internal procedures, as they will likely vary from the methods they have used in the past.

Lack of Managerial Resources
Bringing new team members up to speed can be a drain on the existing team’s time and energy, but this is only a problem if you lack the means and foresight to effectively oversee your IT project.

Acclimatization of New Team Members
It’s possible that your team’s productivity will dip temporarily as new members learn the ropes of the business.

Temporary IT Assistance
Hiring an in-house staff may be more cost-effective if your business routinely requires extensive IT support.

1.3 Process of IT Staff Augmentation

In most organizations, there are three main phases to the staff augmentation process:

Determining the Skill Gap
You should now be able to see where your team is lacking in certain areas of expertise and have the hiring specialists to fill those voids with the appropriate programmers.

Onboarding Recruited Staff
Experts, once hired, need to be trained in-house to become acquainted with the fundamental technical ideas and the team. Additionally, they need to be included in the client’s working environment.

Adoption of Supplemental Staff
At this point, it’s crucial that the supplementary staff actively pursues professional growth. The goal of hiring new team members is to strengthen the organization as a whole so that they can contribute significantly to the success of your initiatives.

1.4 Types of Staff Augmentation

Let us delve into the various staff augmentation models and their potential advantages for companies of any size:

Project-Based Staff Augmentation 
Designed for businesses that have a sudden demand for a dedicated team of software engineers or developers to complete a single project.

Skill-Based Staff Augmentation 
Fills skill-specific staffing shortages in industries like healthcare and financial technology with temporary developers.

Time-Based Staff Augmentation 
The time-based approach is the best choice if you want the services of external developers for a specified duration.

Hybrid Staff Augmentation 
The goal is to provide a unique solution for augmenting staff and assets by integrating two or more of the primary methods.

Onshore Staff Augmentation 
Recruiting information technology experts from the same nation as the business is a part of this approach. If your team and the IT department need to communicate and work together closely, this is the way to go.

Nearshore Staff Augmentation 
Nearshore software development makes use of staff augmentation, which involves recruiting a development team from a neighboring nation that shares the same cultural and time zone characteristics.

Offshore Staff Augmentation 
This term describes collaborating with IT experts located in a faraway nation, usually one with an extensive time difference. The best way to save money while adding staff is to hire developers from outside the country.

Dedicated Team Augmentation 
If you want highly specialized knowledge and experience, it’s best to engage a dedicated payroll and staffing services provider that works exclusively for your company.

2. What is Managed Services?

With managed IT services, you contract out certain IT tasks to an outside company, known as a “managed service provider” (MSP). Almost any area can be covered by such a service, including cybersecurity, VoIP, backup and recovery, and more. When businesses don’t have the resources to build and run their own IT departments, they often turn to outsourcing for help.

Having a reliable MSP allows you to put your attention where it belongs on running your business rather than micromanaging its information technology systems.

Even so, if you pick the wrong provider, you may be locked into a long-term service-level agreement that doesn’t meet your company’s needs, which can cause a lot of trouble down the road. Therefore, it is important to take the MSP screening process seriously.

2.1 Pros of Managed Services

Pros of Managed Services

Efficient Use of Time and Money
You don’t need to buy new equipment or pay regular salaries, so both time and money are used more effectively.

Skills and Knowledge
If you outsource your business’s needs to qualified specialists, you can take advantage of their extensive knowledge and experience to give your company a leg up on the competition.

Security
When you outsource your IT, the service provider takes responsibility for keeping your company secure enough to prevent data breaches.

Flexibility
Managed IT service providers, in contrast to in-house teams, are available around the clock, which boosts efficiency.

Monitoring
The service assumes control of the entire project or any part of the project, and acts as project manager, keeping tabs on all project activities and securing all required resources.

Outcome
In most cases, the managed services provider will analyze the potential dangers and propose the best course of action.

2.2 Cons of Managed Services

Cons of Managed Services

Actual Presence
Due to the distant nature of the IT managed services organization, you will be responsible for resolving any issues that develop at the physical location.

Additional Expenditure
A complete set of low-cost tools and resources is not always available, and some providers charge considerably more than others.

Security and Control
When you engage a service provider, you are essentially giving them permission to view your company’s most private files.

Inconvenient Costs of Switching
It might be detrimental to your organization if your managed IT services provider suddenly shuts down without warning.

Changing IT Needs
Your company’s productivity and growth potential will be stunted if you have to work with an IT service provider that doesn’t adapt to your changing requirements.

2.3 Process of Managed Services

An attitude of partnership is essential to the success of the managed services (outsourcing) model. It’s noteworthy that the idea of long-term partnerships with excellent suppliers has been more easily accepted in other sectors of an organization than in IT. Managed service providers base their whole business models on providing exceptional service to their customers, which is why they put so much effort into developing and maintaining their service delivery capabilities.

Partnership with a reliable managed services provider frees up time and resources for IT management to concentrate on maximizing technology’s contribution to the company’s bottom line. The biggest obstacle is the mistaken belief that delegating day-to-day operations means giving up control when, in reality, you retain control through your relationships and contracts.

IT departments that have grown to rely on staff augmentation firms might reap substantial economic and service benefits by transitioning to a managed services (outsourcing) model.

Managed service (outsourcing) models emphasize delivering “outcomes” (service levels and particular services tied to a volume of activity) for a fixed fee rather than “inputs” (resources). The client benefits from the assurance of fixed costs, while the supplier takes on the risk involved with making good on the promise of delivery.

Because the cost of meeting service-level obligations can exceed the agreed price if it is not accurately estimated or managed, the outsourcing provider has a strong incentive to adopt productivity tools and operational “hygiene” practices that maintain operational health, each of which ultimately delivers value to the customer.

Managed services (outsourcing) models are advantageous to the provider because they allow for more time for long-term planning, resource management, workload balancing between employees, and job allocation across a global delivery model.

2.4 Types of Managed Services

Security and Networking Solutions
Here, a managed services company often handles all network-related duties, such as setting up your company’s local area network (LAN), wireless access points (WAPs), and other connections. Options for data backup and storage are also managed by the MSP. These firms also provide reliable and faster networking and security solutions.

Security Management
This remote security infrastructure service in managed services models includes backup and disaster recovery (BDR) tools and anti-malware software, and it updates all of these tools regularly.

Communication Services
Messaging, Voice over Internet Protocol (VoIP), data, video, and other communication apps are all managed and monitored by this service. Providers can sometimes fill the role of a contact center in your stead.

Software as a Service
The service provider provides a software platform to businesses in exchange for a fee, typically in the form of a membership. Microsoft’s 365 suite of office applications, unified messaging and security programs are a few examples.

Data Analytics
Data analytics is a requirement if you’re seeking monitoring services for data management. This offering incorporates trend analysis and business analytics to plan for future success.

Support Services
In most circumstances, this choice includes everything from basic problem-solving to complex scenarios requiring IT support.

3. IT Staff Augmentation vs Managed Services

The contrasts between IT staff augmentation and managed services are summarized in the following table.

Key Parameters | IT Staff Augmentation | Managed Services
Advantages | Effortless teamwork; workforce scalability; easier and cheaper skill expansion; proficient in-house specialists; fewer management problems; internal acceptance; meets deadlines | Cost-effectiveness with quick results; competence and know-how; security and adaptability; rapidly observable outcomes
Disadvantages | New team members must be integrated and trained; best suited to temporary tech support situations | In-person presence required for on-site issues; potentially higher expenses; reduced control and security; switching costs when changing providers; challenges keeping up with ever-evolving IT needs
Processes | Responsibilities and operations delegated to third parties (inputs) | Management and solutions outsourced (outputs)
Billing | Time and materials, billed regularly (usually every two weeks) | Retainer fee, typically billed annually
Forms of Projects | Highly adaptable and scalable; ideal for projects with a short yet intense growth phase | Strong foundation; ideal for long-term IT administration
Hiring | Staff employed by the vendor | Team assembled by the vendor for the engagement
Office Facilities | Vendor | Vendor
Administration | Customer | Vendor
Engagement | Full-time engagement | Full-time or part-time
Overhead Expenses | Vendor | Vendor
Payroll | Vendor | Vendor
Employee Benefits | Vendor/client | Vendor only
Payroll Analysis | Customer | Vendor
Ratings Evaluation | Vendor | Vendor
Communication | Direct communication | Through the vendor’s PM
Best Use Cases | Short-term requirements; small projects; projects requiring adaptability | Long-term initiatives; outsourcing complete projects; cost savings that grow over time

4. Conclusion

At their core, the two models serve different horizons: staff augmentation supplies outside specialists for short-term needs, while managed services take over functions and positions for the long term.

Both staff augmentation and managed services can help you implement your business ideas profitably. However, the two approaches differ significantly, and it is hard to tell which is superior at a glance. Your requirements are the primary factor in determining the answer.

Staff augmentation is the way to go if you need a quick fix that involves bringing in skilled workers to fill in the gaps for a limited time. You may get the desired degree of adaptability and savings using this approach. The managed services approach is ideal if you want to outsource the entire project. Your project will be managed by a group of people who are solely responsible for it. You may save money in the long run by establishing a consistent budget for your IT outsourcing.

With the help of staff augmentation, you may outsource a variety of different jobs and responsibilities. Under the managed services model, the client gives the entire problem and its resolution to an outside company. While managed services excel at the long-term management of operations like architecture and security, staff augmentation is excellent for the short-term scaling of specific operations.

In a nutshell, it’s necessary to identify your needs before jumping to a suggested conclusion since every project has its own distinctive needs and objectives.

The post Staff Augmentation vs Managed Services appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/staff-augmentation-vs-managed-services/feed/ 0
AWS Cost Optimization Best Practices https://www.sysgenpro.com/blog/aws-cost-optimization/ https://www.sysgenpro.com/blog/aws-cost-optimization/#respond Wed, 20 Dec 2023 09:39:17 +0000 https://www.sysgenpro.com/blog/?p=12373 In today’s tech world where automation and cloud have taken over the market, the majority of software development companies are using modern technologies and platforms like AWS for offering the best services to their clients and to have robust in-house development.

The post AWS Cost Optimization Best Practices appeared first on sysgenpro Blog.

]]>

Key Takeaways

  1. AWS Cloud is a widely used platform and offers more than 200 services. These cloud resources are dynamic in nature and their cost is difficult to manage.
  2. There are various tools available like AWS Billing Console, AWS Trusted Advisor, Amazon CloudWatch, Amazon S3 Analytics, AWS Cost Explorer, etc. that can help in cost optimization.
  3. AWS also offers flexible purchase options for each workload. So that one can improve resource utilization.
  4. With the help of Instance Scheduler, you can stop paying for resources during non-operating hours.
  5. Modernize your cloud architecture by scaling microservices architectures with serverless products such as AWS Lambda.

In today’s tech world where automation and cloud have taken over the market, the majority of software development companies are using modern technologies and platforms like AWS for offering the best services to their clients and to have robust in-house development.  

In such cases, if one wants to stay ahead of the competition and offer services to deliver efficient business values at lower rates, cost optimization is required. Here, we will understand what cost optimization is, why it is essential in AWS and what are the best practices for AWS cost optimization that organizations need to consider.

1. Why is AWS More Expensive?

AWS is a widely used platform among software development companies, offering more than 200 services to clients. These cloud resources are dynamic in nature, which makes their costs unpredictable and difficult to manage. Besides this, here are some of the main reasons AWS can become an expensive platform for an organization.

  • Unused resources such as Elastic Block Store (EBS) volumes, load balancers, and snapshots keep incurring charges whether or not they are actually used.
  • Some businesses pay for compute instance services like Amazon EC2 but do not utilize them properly.
  • Reserved or Spot Instances, which generally offer discounts of 50-90%, are not used where they would be appropriate.
  • Sometimes the auto-scaling configuration is not optimal for the business. For instance, demand increases, you scale up to meet it, but you overshoot and end up with many redundant resources, which also adds cost.
  • AWS Savings Plans are not used properly, so the total spend on AWS is never minimized.

2. AWS Cost Optimization Best Practices

Here are some of the best practices of AWS cost optimization that can be followed by all the organizations opting for AWS.  

2.1 Perform Cost Modeling

One of the top practices for AWS cost optimization is performing cost modeling for your workload. Understand each component of the workload clearly, then model its cost to balance resources and find the right size for each resource in the workload at the required level of performance.

Besides this, a proper understanding of cost considerations can enable the companies to know more about their organizational business case and make decisions after evaluating the value realization outcomes.  

There are multiple AWS services one can use with custom logs as data sources for efficient operations for other services and workload components. For example:  

  1. AWS Trusted Advisor 
  2. Amazon CloudWatch 

This is how AWS Trusted Advisor works:

Now, let’s look at how Amazon CloudWatch Works:  

how Amazon CloudWatch Works
Source: Amazon

These are some of the recommended practices one can follow:  

  1. Every metric associated with a CloudWatch alarm incurs cost, so remove unnecessary alarms.
  2. Delete dashboards that are not necessary; ideally, keep three or fewer.
  3. Check your Contributor Insights rules and remove any that are not required.
  4. Evaluate logging levels and eliminate unnecessary logs to reduce ingestion costs.
  5. Turn off custom metrics when they are not needed; this also reduces unnecessary charges (example cleanup commands follow this list).
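As a minimal sketch of that cleanup, the AWS CLI can list alarms and dashboards and delete the ones you no longer need. The alarm and dashboard names below are hypothetical placeholders; substitute your own.

# List alarm names, filtered to those stuck in INSUFFICIENT_DATA (often a sign the underlying resource is gone)
aws cloudwatch describe-alarms --state-value INSUFFICIENT_DATA --query "MetricAlarms[].AlarmName" --output text
# Delete an alarm you no longer need (hypothetical name)
aws cloudwatch delete-alarms --alarm-names dev-cpu-alarm-old
# List dashboards, then delete any that are unnecessary (hypothetical name)
aws cloudwatch list-dashboards --query "DashboardEntries[].DashboardName" --output text
aws cloudwatch delete-dashboards --dashboard-names unused-team-dashboard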

2.2 Monitor Billing Console Dashboard

AWS billing dashboard enables organizations to check the status of their month-to-date AWS expenditure, pinpoint the services that cost the highest, and understand the level of cost for the business. Users can get a precise idea about the cost and usage easily with the AWS billing console dashboard. The Dashboard page consists of sections like –  

  • AWS Summary: Here one can find an overview of the AWS costs across all the accounts, services, and AWS regions.  
  • Cost Trend by Top Five Services: This section shows the most recent billing periods.  
  • Highest Cost and Usage Details: Here you can find out the details about top services, AWS region, and accounts that cost the most and are used more. 
  • Account Cost Trend: This section shows the trend of account cost with the most recent closed billing periods. 

In the billing console, one of the most commonly viewed pages is the billing page where the user can view month-to-date costs and a detailed services breakdown list that are most used in specific regions. From this page, the user can also get details about the history of costs and usage including AWS invoices.  

In addition to this, organizations can also access other payment-related information and also configure the billing preferences. So, based on the dashboard statistics, one can easily monitor and take actions for the various services to optimize the cost. 

2.3 Create Schedules to Turn Off Unused Instances

Another AWS cost optimization best practice is to create schedules that turn off instances that are not used on a regular basis. Here are some things to take into consideration.

  • At the end of every working day, over weekends, and during vacations, unused AWS instances should be shut down.
  • Evaluate the usage metrics of the instances to determine when they are actually in use; this lets you build an accurate schedule that always stops the instances when they are not needed.
  • Optimizing non-production instances is essential; once done, plan the system’s on and off hours in advance.
  • Check whether you are paying for EBS volumes and other related resources while the instances are not in use, and address it.

Now let’s analyze the different scenarios for AWS CloudWatch alarm actions; an example CLI command follows the table.

Scenario | Description
Add Stop Actions to AWS CloudWatch Alarms
  • We can create an alarm that stops an EC2 instance when a threshold is met.
  • Example:
    • Suppose you forget to shut down a few development or test instances.
    • You can create an alarm here that triggers when CPU utilization percentage has been lower than 10 percent for 24 hours indicating that instances are no longer in use.
Add Terminate Actions to AWS CloudWatch Alarms 
  • We can create an alarm that terminates an EC2 instance when a certain threshold is met.
  • Example:
    • Suppose any instance has completed its work and you don’t require it again. In this case, the alarm will terminate the instance.
    • If you want to use that instance later, then you should create an alarm to stop the instance instead of terminating it.
Add Reboot Actions to AWS CloudWatch Alarms
  • We can create an alarm that monitors EC2 instances and automatically reboots the instance.
  • In case of instance health check failure, this alarm is recommended.
Add Recover Actions to AWS CloudWatch Alarms
  • We can create an alarm that monitors EC2 instances and automatically recovers the instance if it becomes nonfunctional due to hardware failure or any other cause.
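For the stop-action scenario above, a minimal AWS CLI sketch looks like the following. It assumes a hypothetical instance ID and the us-east-1 region; the alarm fires after average CPU utilization stays below 10 percent for 24 hours (24 one-hour evaluation periods) and then stops the instance.

# Stop a (hypothetical) dev/test instance after 24 hours of CPU utilization below 10%
aws cloudwatch put-metric-alarm \
  --alarm-name stop-idle-dev-instance \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 3600 \
  --evaluation-periods 24 \
  --threshold 10 \
  --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:automate:us-east-1:ec2:stop

Swapping the action ARN for arn:aws:automate:us-east-1:ec2:terminate or arn:aws:automate:us-east-1:ec2:recover gives the terminate and recover variants from the table.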

2.4 Supply Resources Dynamically

When an organization moves to the cloud, it pays only for what it needs, but it must supply resources that match workload demand at the time of need. This reduces the cost of overprovisioning. An organization can also modify demand, using buffering, throttling, or queueing, to smooth its processes’ demand on AWS and serve it with fewer resources.

Just-in-time supply must be balanced against the need for high availability, tolerance of resource failures, and provisioning time. Whether demand is fixed or variable, use automation and metrics so that the management overhead stays minimal. In AWS, supplying resources dynamically in this way is considered a cost optimization best practice. The table below summarizes the main approaches, followed by an example of scheduled scaling.

Practice | Implementation Steps
Schedule Scaling Configuration
  • When the changes in demand are predictable, time-based scaling can help in offering a correct number of resources.
  • If resources cannot be created and configured quickly enough to respond to new demand, scheduled scaling can be used.
  • Workload analysis can be configured using AWS Auto Scaling and even predictive scaling can be used to configure time-based scheduling.
Predictive Scaling Configuration
  • With predictive scaling, one can increase instances of Amazon EC2 in the Autoscaling group at an early stage.
  • Predictive analysis helps applications start faster during traffic spikes.
Configuration of Dynamic Automatic Scaling
  • Auto scaling configures scaling according to the active workload in the system.
  • Auto scaling launches the correct level of resources after analysis and then verifies that the workload scales within the required timeframe.
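As a sketch of schedule-based scaling, the commands below scale a hypothetical Auto Scaling group named dev-asg down every weekday evening and back up every weekday morning (times are in UTC).

# Scale in after working hours (20:00 UTC, Monday-Friday)
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name dev-asg \
  --scheduled-action-name scale-in-after-hours \
  --recurrence "0 20 * * 1-5" \
  --min-size 0 --max-size 1 --desired-capacity 0
# Scale out before working hours (08:00 UTC, Monday-Friday)
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name dev-asg \
  --scheduled-action-name scale-out-working-hours \
  --recurrence "0 8 * * 1-5" \
  --min-size 2 --max-size 4 --desired-capacity 2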

2.5 Optimizing your cost with Rightsizing Recommendations

One of the best practices of cost optimization in AWS is rightsizing recommendations. It is a feature in Cost Explorer that enables companies to identify cost-saving opportunities. This can be carried out by removing or downsizing instances in Amazon EC2 (Elastic Compute Cloud). 

Rightsizing recommendations analyze the Amazon EC2 resources in your AWS accounts and check their usage to find opportunities to lower spending. You can review underutilized Amazon EC2 instances across member accounts in a single view, see how much you could save, and then take action.
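The same recommendations are also exposed through the Cost Explorer API. A minimal sketch with the AWS CLI, assuming Cost Explorer and rightsizing recommendations are already enabled for the account:

# Fetch EC2 rightsizing recommendations (terminate or downsize) for the account
aws ce get-rightsizing-recommendation --service AmazonEC2

The response lists underutilized instances along with an estimated monthly saving for each recommendation.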

2.6 Utilize EC2 Spot Instances

Utilizing Amazon EC2 Spot Instances is another AWS cost optimization best practice every business should consider. In the AWS cloud, Spot Instances let companies take advantage of unused EC2 capacity.

Spot Instances are generally available at up to a 90% discount compared to On-Demand prices. They are well suited to stateless, flexible, or fault-tolerant applications such as CI/CD, big data, web servers, containerized workloads, high-performance computing (HPC), and more.

How spot instances work Amazon EC2
Source: Amazon

Besides this, as Spot Instances are closely integrated with AWS services such as EMR, Auto Scaling, AWS Batch, ECS, Data Pipeline, and CloudFormation, a company can choose how to launch and maintain the applications running on Spot Instances. The following aspects are worth taking into consideration.

  • Massive scale: Spot Instances offer major advantages at massive operating scale on AWS, letting you run hyperscale workloads at significant cost savings or accelerate workloads by running many tasks in parallel.
  • Low, predictable prices: Spot Instances can be purchased at up to 90% less than On-Demand prices. This lets a company provision capacity across Spot, Reserved, and On-Demand Instances with EC2 Auto Scaling to optimize workload cost.
  • Easy to use: Spot Instances are easy to launch, scale, and manage through AWS services such as ECS and EC2 Auto Scaling (see the sketch after this list).
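As a minimal sketch, a Spot Instance can be requested directly through run-instances by adding market options; the AMI ID below is a hypothetical placeholder.

# Launch a one-time Spot Instance instead of an On-Demand instance
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type m5.large \
  --count 1 \
  --instance-market-options '{"MarketType":"spot","SpotOptions":{"SpotInstanceType":"one-time"}}'

For production use, mixed-instances policies in EC2 Auto Scaling or Spot Fleet are usually a better fit, since they diversify across instance types and handle interruptions more gracefully.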

2.7 Optimizing EC2 Auto Scaling Groups (ASG) Configuration

Another best practice of AWS cost optimization is to configure EC2 Auto Scaling groups. An ASG is a collection of Amazon EC2 instances treated as a logical group for automatic scaling and management. ASGs can take advantage of Amazon EC2 Auto Scaling features such as custom scaling and health check policies based on an application’s metrics.

Besides this, an ASG lets you dynamically add or remove EC2 instances based on predetermined rules applied to the load. ASGs also enable scaling EC2 fleets as required, which conserves the cost of the processes. In addition, you can view all scaling activities in the Auto Scaling console or with the describe-scaling-activities CLI command. Here are some ways to optimize scaling policies and reduce the cost of scaling processes up and down.

  • When scaling up, add instances less aggressively so you can monitor the application and confirm that nothing is adversely affected.
  • When scaling down, reduce instances to the minimum needed to maintain the current application load (a sample scaling policy follows this list).
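A minimal sketch of a dynamic scaling policy for a hypothetical group named web-asg, using target tracking to hold average CPU at 50 percent, plus the command for reviewing scaling activities:

# Target tracking policy: keep the group's average CPU utilization at 50%
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name web-asg \
  --policy-name cpu-target-50 \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'
# Review recent scaling activities for the group
aws autoscaling describe-scaling-activities --auto-scaling-group-name web-asg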

This is how AWS auto scaling works:  

AWS Auto scaling works
Source: Amazon

2.8 Compute Savings Plans

Compute Savings Plans are very beneficial when it comes to cost optimization in AWS. They offer the most flexibility to businesses using AWS and can help reduce costs by up to 66%. A Compute Savings Plan applies automatically to EC2 instance usage regardless of instance size, OS, family, or region. For example, you can change instances from C4 to M5, or move a workload from EC2 to Lambda or Fargate, and the plan still applies.

Below is a snapshot of how AWS Savings Plans rates are computed:

AWS Saving plans rates
Source: Amazon
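Before purchasing, Cost Explorer can recommend a Compute Savings Plans commitment based on your recent usage. A minimal sketch with the AWS CLI; the term, payment option, and lookback window shown are just example values:

# Get a Compute Savings Plans purchase recommendation based on the last 30 days of usage
aws ce get-savings-plans-purchase-recommendation \
  --savings-plans-type COMPUTE_SP \
  --term-in-years ONE_YEAR \
  --payment-option NO_UPFRONT \
  --lookback-period-in-days THIRTY_DAYS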

2.9 Delete Idle Load Balancers

One of the best practices of AWS cost optimization is to delete idle load balancers. Start by reviewing your Elastic Load Balancing configuration to see which load balancers are not being used. Every load balancer in the system incurs cost, and one with no backend instances or network traffic is not in use and simply costs the company money. To identify unused load balancers, you can use AWS Trusted Advisor, which flags load balancers with a low number of requests. After identifying a load balancer with fewer than 100 requests in a week, you can remove it to reduce cost.
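A minimal sketch of that check with the AWS CLI: list your load balancers, sum a week of RequestCount for a suspect one, and delete it if it is truly idle. The load balancer name and ARN below are hypothetical, and the dimension format shown applies to Application Load Balancers.

# List Application/Network Load Balancers
aws elbv2 describe-load-balancers --query "LoadBalancers[].{Name:LoadBalancerName,Arn:LoadBalancerArn}" --output table
# Sum requests per day over one week for a suspect ALB (hypothetical dimension value)
aws cloudwatch get-metric-statistics \
  --namespace AWS/ApplicationELB \
  --metric-name RequestCount \
  --dimensions Name=LoadBalancer,Value=app/my-alb/0123456789abcdef \
  --start-time 2024-06-01T00:00:00Z --end-time 2024-06-08T00:00:00Z \
  --period 86400 --statistics Sum
# Delete the load balancer once you have confirmed it is idle (hypothetical ARN)
aws elbv2 delete-load-balancer \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/0123456789abcdef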

2.10 Identify and Delete Orphaned EBS Snapshots

Another best practice for AWS cost optimization is to identify and remove orphaned EBS snapshots. To understand this and learn how to delete the snapshots, let’s go through the below points and see how AWS CLI allows businesses to search certain types of snapshots that can be deleted.  

  • The first step is to use the describe-snapshots command. It returns the snapshots available to you, which includes snapshots you own as well as private and public snapshots owned by other Amazon Web Services accounts for which you have create-volume permissions. To filter the results to older snapshots, add a JMESPath expression as shown in the command below.
aws ec2 describe-snapshots \
--query "Snapshots[?(StartTime<=`2022-06-01`)].[SnapshotId]" --output text
  • Now it’s time to find old snapshots. For this, one can add a filter to the command while using tags. In the below example, we have a tag named “Team” which helps in getting back snapshots that are owned by the “Infra” team. 
aws ec2 describe-snapshots --filter Name=tag:Team,Values=Infra \
--query "Snapshots[?(StartTime<=`2020-03-31`)].[SnapshotId]" --output text
  • After this, as you get the list of snapshots associated with a specific tag mentioned above, you can delete them by executing the delete-snapshot command. 
aws ec2 delete-snapshot --snapshot-id <snapshot-id>

Snapshots are incremental: if you delete a snapshot whose data is still referenced by another snapshot, that data is not removed but is retained by the other snapshot. Deleting a single snapshot therefore may not reduce storage by its full size; only blocks that are no longer referenced by any snapshot are actually freed.

2.11 Handling AWS Chargebacks for Enterprise Customers

The last practice on our list is handling AWS chargebacks for enterprise customers. As AWS product portfolios and features grow, enterprise customers migrate existing workloads to new AWS products, and keeping cloud charges low becomes difficult. The complexity grows when resources and services are not tagged correctly. To normalize these processes and keep costs down as AWS evolves, implement automated billing and transparent chargebacks. The following steps should be taken into consideration.

  • First of all, a proper understanding of blended and unblended costs in consolidated billing files (Cost & Usage Report and Detailed Billing Report) is important. 
  • Then an AWS account vending machine must be used to create AWS accounts and keep the account details and reservation-related data in separate database tables.
  • After that, to help the admin to add invoice details, a web page hosted on AWS Lambda or a web server is used. 
  • Then, to begin the billing transformation process, a trigger is added to the S3 bucket to push messages into Amazon Simple Queue Service (SQS); the billing transformation then runs on Amazon EC2 instances (a sketch of the S3-to-SQS trigger follows this list).
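As a sketch of that trigger, an S3 event notification can push a message to SQS whenever a new billing file lands in the bucket. The bucket name and queue ARN below are hypothetical, and the queue’s access policy must allow S3 to send messages to it.

# Notify an SQS queue whenever a new object (e.g., a billing report) is created in the bucket
aws s3api put-bucket-notification-configuration \
  --bucket billing-cur-bucket \
  --notification-configuration '{"QueueConfigurations":[{"QueueArn":"arn:aws:sqs:us-east-1:123456789012:billing-transform-queue","Events":["s3:ObjectCreated:*"]}]}'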

3. AWS Tools for Cost Optimization

Now, after going through all the different practices that can be taken into consideration for AWS cost optimization, let us have a look at different tools that are used to help companies track, report, and analyze costs by offering several AWS reports.  

  • Amazon S3 Analytics: It enables software development companies to automatically carry out analysis and visualization of Amazon S3 storage patterns which can eventually help in deciding whether there is a need to shift data to another storage class or not. 
  • AWS Cost Explorer: This tool lets you check spending patterns in AWS, view current spending, project future costs, and observe Reserved Instance utilization and coverage, among other things (see the example query after this list).
  • AWS Budgets: It is a tool that allows companies to set custom budgets that trigger alerts when costs exceed the pre-defined budget.
  • AWS Trusted Advisor: It offers real-time identification of business processes and areas that can be optimized. 
  • AWS CloudTrail: With this tool, users can log activity across their AWS infrastructure, continuously monitor operations, and retain a record of all actions performed in the account, which supports better decisions that can reduce cost.
  • Amazon CloudWatch: It enables companies to collect and track metrics, set alarms, monitor log files, and react automatically to changes in AWS resources.
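As a small example of putting these tools to work, Cost Explorer’s API can break down a month’s unblended cost by service; the dates below are placeholders.

# Monthly unblended cost grouped by service for one billing month
aws ce get-cost-and-usage \
  --time-period Start=2024-05-01,End=2024-06-01 \
  --granularity MONTHLY \
  --metrics UnblendedCost \
  --group-by Type=DIMENSION,Key=SERVICE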

4. Conclusion

As seen in this blog, there are many different types of AWS cost optimization best practices that can be followed by organizations that are working with the AWS platform to create modern and scalable applications for transforming the firm. Organizations following these practices can achieve the desired results with AWS without any hassle and can also stay ahead in this competitive world of tech providers.  

The post AWS Cost Optimization Best Practices appeared first on sysgenpro Blog.

]]>
https://www.sysgenpro.com/blog/aws-cost-optimization/feed/ 0