
Integrating New Services and Applications

This page provides a guide on how to integrate a new service or application into the platform. It is mainly aimed at developers who are working on a tool or web application and are wondering how to go from their local development environment to having their service deployed in the Kubernetes cluster that runs on the platform's cloud infrastructure.

In this integration process, there are two distinct actors:

  • The developer of the web application or service: This developer possesses a deep understanding of their own code and has a clear vision of its requirements in terms of services, databases, and other components. However, they may not be familiar with the specific workings of the platform or the configuration of the Kubernetes cluster.
  • The administrator of the platform: This actor perceives the application provided by the developer as a "black box" and possesses the knowledge required to deploy this application in a way that enables it to function alongside existing services (such as databases) and be accessible to end users of the platform.

The contact email for the platform administrator can be found in the MODERATE GitHub organization.

Steps for Integration

The following diagram shows the steps of the development and integration process:

Figure: Collaboration approach between the developer and the administrator

👩‍💻 Step 1: Development

Who: Developer

Develop the web application or service using whatever technology stack you prefer.

The application should expect the credentials and configuration parameters for external services, such as databases, message queues, and object storage services, to be provided as environment variables. These variables should be clearly documented in the repository's README file, as they will serve as the basis for configuring the application in the platform.

About configuration parameters

It is crucial to avoid hardcoding configuration parameters, such as database credentials and connection URIs, directly in the application or relying on manual updates in a configuration file once the application or service is deployed. Please use environment variables instead.
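For instance, the README could document the expected environment variables along these lines (the variable names below are purely illustrative):

  • MY_SERVICE_POSTGRES_URI: Connection URI of the PostgreSQL database.
  • MY_SERVICE_BROKER_URI: Connection URI of the message broker.
  • MY_SERVICE_STORAGE_BUCKET: Name of the object storage bucket used by the service.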

Although it is fine to start in a private repository, development should eventually be centralized in a repository within MODERATE's GitHub organization.

📦 Step 2: Dockerfile

Who: Developer

Write a Dockerfile and test that the containers work properly in a local environment.

For example, you can take a look at the Dockerfile used to build the moderate-docs image, which is the documentation website you are currently reading.

The Dockerfile should live in the root directory of the application or service repository and be named Dockerfile.
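For illustration, a minimal sketch of what such a Dockerfile could look like for a hypothetical Python web service is shown below. The python:3.11-slim base image, the requirements.txt file and the uvicorn entry point are assumptions made for this example; adapt them to your own stack.

FROM python:3.11-slim

WORKDIR /app

# Install the dependencies first to take advantage of Docker's layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code into the image
COPY . .

# Configuration (e.g. database credentials) is read from environment variables at runtime
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]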

👷‍♂️ Step 3: Continuous Integration

Who: Developer or Administrator

Create a GitHub Actions workflow in the repository so that the image is built on each push to the main branch and then uploaded to MODERATE's image registry. Workflows are defined in YAML files located in the .github/workflows directory of the repository.

For example, the following is the Action workflow configuration file for the MODERATE HTTP API. Please note the following details:

  • The workflow configuration file for your own application should be mostly the same. The only parameter that should change is the image_name, which is the name of the image in the image registry. This name needs to be unique across the entire MODERATE platform.
  • It simply calls a reusable workflow that is already defined in the moderate-docs repository.
  • All the variables and secrets (e.g. secrets.WIF_PROVIDER) have already been configured at the organization level by the administrator. Note that the administrator needs to manually enable the secrets for a particular repository before they are available.
Example of a workflow file to build and push an image to MODERATE's image registry
name: Build and push an image to Google Artifact Registry (GAR)

on:
  push:
    branches:
      - main

jobs:
  call-build-push-artifact-registry:
    uses: MODERATE-Project/moderate-docs/.github/workflows/reusable-build-push-gar.yml@main
    with:
      project_id: ${{ vars.DEFAULT_GAR_PROJECT_ID }}
      gar_location: ${{ vars.DEFAULT_GAR_LOCATION }}
      gar_repo: ${{ vars.DEFAULT_GAR_REPOSITORY }}
      image_name: moderate-api
    secrets:
      wif_provider: ${{ secrets.WIF_PROVIDER }}
      wif_service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}

🏗️ Step 4: Terraform Resources

Who: Administrator

Create the Terraform resources that define the Kubernetes resources, which, in turn, represent the deployment of the web application or service.

Once the application is defined as Terraform resources within the moderate-infrastructure repository, it can seamlessly integrate into the platform's life cycle. This enables deployment, destruction, and re-creation with minimal effort.
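As a rough sketch, and assuming the Kubernetes provider for Terraform, the deployment of a service could be described along the following lines. The resource names, namespace, image path and secret references are purely illustrative and do not necessarily reflect the actual contents of the moderate-infrastructure repository.

resource "kubernetes_deployment" "my_service" {
  metadata {
    name      = "my-service"
    namespace = "moderate"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "my-service"
      }
    }

    template {
      metadata {
        labels = {
          app = "my-service"
        }
      }

      spec {
        container {
          name = "my-service"

          # Image previously pushed to the registry by the CI workflow (illustrative path)
          image = "europe-west1-docker.pkg.dev/example-project/example-repo/my-service:latest"

          # Configuration is injected as environment variables, as documented in the README
          env {
            name = "MY_SERVICE_POSTGRES_URI"

            value_from {
              secret_key_ref {
                name = "my-service-secrets"
                key  = "postgres-uri"
              }
            }
          }
        }
      }
    }
  }
}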

☁️ Step 5: Deployment

Who: Administrator

Deploy these Terraform resources to MODERATE's cloud platform.
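In practice, this follows the standard Terraform workflow run against the moderate-infrastructure configuration. The following is only a minimal sketch; the backend and variable configuration are omitted.

# Initialize the working directory and download the required providers
terraform init

# Review the planned changes before applying them
terraform plan

# Create or update the resources in MODERATE's cloud platform
terraform apply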

Frequently Asked Questions

Am I restricted in the technology stack that I can use?

No. You can use any technology stack that you want. However, you should be aware that the administrator won't know the internal details of your application, and will thus be unable to help you with issues outside of integration with the platform and deployment.

Are there any examples of applications or services that have already been integrated into the platform?

Sure! Take a look at the moderate-platform-api repository.

What should I do to deploy the databases or other services that my application depends on?

The administrator will take care of deploying the databases and other services that your application depends on. You should simply document the configuration parameters that your application expects to find in the environment variables.

You can get in touch with the administrator to discuss the requirements of your application and the best way to integrate it into the platform.

Do I have to do something specific for my container to run on Kubernetes?

No. The containers that you build with your Dockerfile will run on Kubernetes without any modifications. The administrator will take care of creating the Kubernetes resources that will run your containers.

Will new versions of my application be automatically deployed?

Not for the time being. The administrator will take care of manually deploying new versions of your application. However, we will probably move to a Continuous Deployment model in the future.

My application is offline. What should I do?

Please note that the cloud platform will be intermittently unavailable during the development phase. If you need to test your application for an extended period of time, please contact the administrator to ensure that the platform is available.