
A guide: setting up a Git-based internal developer portal & extending the data model with Kubernetes

Mor Paz
Jul 9, 2023

Introduction

How do you define the initial software catalog for a developer portal, and how do you set up the first developer self-service actions? How does it all work together, for both portal and platform?

This post is a hands-on discussion of that. 

  • Using Port, we will create an initial software catalog. The data model will be fairly “classic” at first, but we will then extend it with additional blueprints. Extending blueprints makes the data model richer and more nuanced, and is at the heart of a good initial setup of a software catalog. It’s also one of the reasons we think the flexibility that comes with blueprints is so important.
  • We will also show how to extend the data model even when your portal is already used by hundreds of developers every day.
  • Once the data model is in place, we’ll show how to test and deploy a microservice to an intermediate environment or the development Kubernetes cluster, and then promote that service to production. This will complete the developer self-service part of the internal developer portal.

The ability to define and extend the data model, and to add developer self-service actions, creates a fully-fledged internal developer portal and, at least from the developer’s point of view, a fully operational internal developer platform.

The 5 steps to creating an operational internal developer platform

  1. Creating an initial software catalog. In this case we’ll use Port’s GitHub app and a Git-based template to populate it. 
  2. Extending the initial data model beyond the basic blueprints.
  3. Setting up the first developer self-service action: in this case, scaffolding a new microservice.
  4. Extending the data model with more blueprints and Kubernetes data, and then letting developers use more self-service actions, so that they can deploy to a test environment and then promote the service to production.
  5. Adding scorecards and dashboards to provide developers with a good understanding of what’s going on, as well as to support quality and other initiatives within the organization.


Creating the initial software catalog

In this case, we begin by installing Port’s GitHub app and providing it with access to all repositories. 

This will create an initial set of blueprints, based on the Port template for Git providers. This template creates a classic microservice catalog that provides visibility into the software development lifecycle. The Git data will populate Port blueprints, creating software catalog entities that show developers the information they need, in context, with the right abstractions, permissions and more. The resulting microservice catalog will be made up of:

  • Workflow runs - representing runs of our GitHub workflow pipelines. This part of the catalog lets us filter the pipelines to only those related to services we own.
  • Pull requests - representing PRs in our GitHub organization; these can be used to create custom views with PRs that are assigned to a specific developer.
  • Issues - representing issues in our GitHub organization.
  • Workflows - which can be used to explore the pipelines and workflows currently defined in our GitHub organization (and also make it easier to create Port self-service actions that directly trigger a workflow in GitHub); and
  • Microservices - represented by repositories and the contents of monorepos in the GitHub organization.
Microservice catalog template
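Under the hood, each catalog item is an entity that conforms to its blueprint. As an illustration only (the exact blueprint identifiers and properties depend on the template version), a microservice entity populated from a GitHub repository might look roughly like this; the names used here are hypothetical:

{
  "identifier": "checkout-service",
  "title": "Checkout Service",
  "blueprint": "microservice",
  "properties": {
    "url": "https://github.com/acme-org/checkout-service",
    "language": "Python",
    "readme": "# Checkout Service ..."
  },
  "relations": {}
}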

Extending the data model: system and domain

The five basic blueprints can and should be extended. In this case, we’ll extend the data model by adding two blueprints: domain and system. This will bring us close to the Backstage C4 model. The reason I suggest this is that we want our data model to reflect the organization’s engineering architecture. A domain is typically a high-level engineering function, maybe a major service or functionality in a product. In this case, let’s call the domain “subscription”. The system is used to represent a collection of microservices that contribute to a piece of functionality provided by the domain.

domain data model
system
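For illustration, the “subscription” domain and a system under it could be represented by entities like the following; the identifiers are hypothetical, and the full Domain and System blueprint definitions appear in the JSON examples at the end of this post:

{
  "identifier": "subscription",
  "title": "Subscription",
  "blueprint": "Domain",
  "properties": {},
  "relations": {}
}

{
  "identifier": "billing",
  "title": "Billing",
  "blueprint": "System",
  "properties": {},
  "relations": {
    "domain": "subscription"
  }
}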

The beauty of Port is that you can create the blueprints and relations that map to what your organization does. You can extend the data model in whichever way makes sense for your org.

Setting up the first developer self-service action

Let’s assume we want to let developers scaffold a new microservice with a self-service action. The “portal” part is the developer self-service action UI. The “platform” part is a GitHub workflow that makes the scaffolding happen. This is what developers will see:

When the platform engineer sets up the self-service action, they not only select the backend process that the action triggers; they also set up the actual UI of the developer self-service form. By controlling what’s in the form and what can be done with it, along with permissions, the golden path or guardrails are defined. For instance, if we only work with Node.js and Python in our organization, we can limit the inputs allowed in the language field, mark the language field as required, or define permitted language versions.
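As a rough, hypothetical sketch (treat the field names as assumptions and consult Port’s documentation for the exact self-service action schema), the user-input portion of such a scaffolding action could constrain the language field like this:

{
  "identifier": "scaffold_microservice",
  "title": "Scaffold a new microservice",
  "userInputs": {
    "properties": {
      "service_name": {
        "type": "string",
        "title": "Service name"
      },
      "language": {
        "type": "string",
        "title": "Language",
        "enum": ["Node.js", "Python"]
      }
    },
    "required": ["service_name", "language"]
  }
}

The backend side of the action would then point at the GitHub workflow that performs the scaffolding.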

When the developer executes the self-service action in the portal, they will be shown a link to the corresponding workflow run in GitHub. The developer will see the logs flowing in and see when the action completes successfully, so they never have to leave the portal or deal with tools they aren’t familiar with.

The page depicting a specific run is meant to keep the developer in the loop, without having to leave Port. This experience is enabled by the option to stream logs from the actual workflow run in GitHub, while also filtering the specific logs sent, making sure the developer understands the steps that are running without seeing overly verbose output that would confuse them.

Extending blueprints with Kubernetes data

Up until now, our data model has been basic: a Backstage-style C4 model with domain and system blueprints, plus some additional information about our available pipelines and the history of their runs.

Real-life enterprises have more than that, especially when they use Kubernetes. Let’s add blueprints to bring some Kubernetes information into the developer portal’s software catalog.

Let’s use the Port Kubernetes template. This template uses Port’s Kubernetes exporter or ArgoCD. The resulting portal takes the different Kubernetes resources (deployments, pods, namespaces, etc.) and maps them into blueprints for a Kubernetes catalog, creating Kubernetes entities. You can then configure Kubernetes abstractions for developers, to reduce cognitive load.

Extending blueprints with Kubernetes data

The cluster blueprint, when populated, will show all cluster entities. In this case we have two clusters: a test cluster and a production cluster. The workload entities show all the different things running in our clusters, such as deployments, ReplicaSets, etc.

Cluster
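To make this concrete, a workload entity created by the exporter might look roughly like the following; the identifier, property names and relation are illustrative assumptions, not the exporter’s exact output:

{
  "identifier": "checkout-service-deployment-prod",
  "title": "checkout-service (Deployment)",
  "blueprint": "workload",
  "properties": {
    "kind": "Deployment",
    "namespace": "checkout",
    "replicas": 3,
    "healthStatus": "Healthy"
  },
  "relations": {
    "cluster": "production-cluster"
  }
}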

To tie everything together, let’s create another blueprint; this time it’s going to be an environment blueprint. This will give us the option to distinguish between different environments in the same system, and also to create a high-level view that gives us visibility into everything running in a given environment (including microservices, CI/CD pipelines, etc.). In this case, we’ll have a production environment and a test environment.

test environment
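A minimal environment blueprint, following the same structure as the blueprint JSON examples at the end of this post, could look something like this (the identifier and enum values are assumptions for this walkthrough):

{
  "identifier": "environment",
  "title": "Environment",
  "icon": "Cloud",
  "schema": {
    "properties": {
      "type": {
        "type": "string",
        "enum": [
          "test",
          "production"
        ]
      }
    },
    "required": []
  },
  "mirrorProperties": {},
  "formulaProperties": {},
  "calculationProperties": {},
  "relations": {}
}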

Let’s create a relation between the workload blueprint and the microservice blueprint. This will help us map every workload and know which microservice it belongs to. A workload is an actual running instance of a microservice. This shows us the microservice and all the different environments it’s running in, so we can get a bird’s-eye view of what is happening and where. We can see whether the service is healthy, which cluster it is deployed on, etc. If we want the software catalog to track a version of a service in a given environment, what we want to track is the “running service”: a “live” version of a service running in a specific environment. It will include references to the service, environment, and deployment, as well as real-time information such as status, uptime, and any other relevant data.

Linking blueprints is like linking tables with a foreign key that uses the same name on both tables. You can select exactly what information you want to see for every entity or every blueprint, so you can model those blueprints according to your needs. Relations can be one-to-one or one-to-many. In this case, the relation from workload to microservice is a one-to-one relation, because every workload can only be one deployment of one microservice. In the case of microservice dependencies, you’re likely to use a one-to-many relation.
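In blueprint JSON, adding such a relation is a small change. Following the relation format used in the blueprint examples at the end of this post, the relations section of a workload blueprint might look roughly like this (the blueprint identifiers are assumptions for this walkthrough):

"relations": {
  "microservice": {
    "target": "microservice",
    "required": false,
    "many": false
  },
  "cluster": {
    "target": "cluster",
    "required": false,
    "many": false
  }
}

The cluster-to-environment and workflow-run-to-workload relations described next follow the same pattern.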

Let’s also create a new relation from cluster to environment, to see where our clusters are running. We can also extend this to a cloud environment or a cloud region, whatever applies. 

Let’s also relate microservice to system, and workflow run to workload. This will let us see, for every workload, the workflow run that deployed it. In addition, this will let us see all of the different microservices that make up a specific system in our architecture.

Now that we’ve finished setting up the data model, let’s add a self-service action that will deploy a new microservice to the test environment. In the microservice blueprint we will add an action called “Deploy to test”, with a cluster input in the form. The input will include a filter that only displays clusters tagged as test clusters, in case you have multiple test clusters and also want to prevent developers from deploying an unvalidated service to a production cluster. Once we click execute, the action starts running right away, the logs flow in, and a new service is deployed to the Kubernetes cluster, which will also appear as a new entity in the software catalog.

Microservice Actions
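As another rough, hypothetical sketch (the exact schema and filter syntax are Port’s to define, so treat the field names here as assumptions), the cluster input of such a deploy action could be an entity input restricted to test clusters:

{
  "identifier": "deploy_to_test",
  "title": "Deploy to test",
  "userInputs": {
    "properties": {
      "cluster": {
        "type": "string",
        "format": "entity",
        "blueprint": "cluster",
        "dataset": {
          "combinator": "and",
          "rules": [
            {
              "property": "type",
              "operator": "=",
              "value": "test"
            }
          ]
        }
      }
    },
    "required": ["cluster"]
  }
}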

Now let’s add a day-2 action to promote a service to production. In this case, we’ll add a manual approval step. We can also specify how we want to receive that approval; in this case, we chose Slack. In addition, as part of our extensions of the data model, we also added a calculated property that provides the Swagger URL of a running service, making it easy to access a service’s API documentation and try out its usage, without having to search for it outside of the portal.

Self-service hub
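The blueprint definitions in the JSON examples at the end of this post each contain an (empty) calculationProperties section; a Swagger URL calculated property on the running-service blueprint could fill it in roughly as follows, assuming the blueprint already has a url property to build on (the jq-style expression and property names are assumptions):

"calculationProperties": {
  "swagger_url": {
    "title": "Swagger URL",
    "type": "string",
    "format": "url",
    "calculation": ".properties.url + \"/swagger\""
  }
}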


Scorecards and dashboards: tying it all together

When we speak to customers, scorecards and the ability to define dashboards are of great importance to them, since they are a way of pushing initiatives and driving engineering quality.

Showing the service maturity and engineering quality of the different services in a domain is the basic set of capabilities required to drive internal initiatives and to see how far we are from a complete, production-ready service that is up to the organization's standards.

Visualizations provide a high-level managerial overview of the state of our domain, for example by creating a Tech Radar view that gives us insight into the different pieces that make up the organization's tech stack and how common each piece is.

Conclusion

While industry pundits may discuss the difference between a portal and a platform, in real life they are clearly demarcated. One is about the underlying infra and the backend that makes developer self-service actions happen. The other is about helping developers take control of their needs through the software catalog and developer self-service actions, and about showing the entire organization the health of services and infra, using scorecards and visualizations.

This post is based on the detailed webinar video here. Check it out!

{{dropdown}}


Order Domain

{
  "properties": {},
  "relations": {},
  "title": "Orders",
  "identifier": "Orders"
}

Cart System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Cart",
  "title": "Cart"
}

Products System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Products",
  "title": "Products"
}

Cart Resource

{
  "properties": {
    "type": "postgress"
  },
  "relations": {},
  "icon": "GPU",
  "title": "Cart SQL database",
  "identifier": "cart-sql-sb"
}

Cart API

{
 "identifier": "CartAPI",
 "title": "Cart API",
 "blueprint": "API",
 "properties": {
   "type": "Open API"
 },
 "relations": {
   "provider": "CartService"
 },
 "icon": "Link"
}

Core Kafka Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Kafka Library",
  "identifier": "CoreKafkaLibrary"
}

Core Payment Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Payment Library",
  "identifier": "CorePaymentLibrary"
}

Cart Service JSON

{
 "identifier": "CartService",
 "title": "Cart Service",
 "blueprint": "Component",
 "properties": {
   "type": "service"
 },
 "relations": {
   "system": "Cart",
   "resources": [
     "cart-sql-sb"
   ],
   "consumesApi": [],
   "components": [
     "CorePaymentLibrary",
     "CoreKafkaLibrary"
   ]
 },
 "icon": "Cloud"
}

Products Service JSON

{
  "identifier": "ProductsService",
  "title": "Products Service",
  "blueprint": "Component",
  "properties": {
    "type": "service"
  },
  "relations": {
    "system": "Products",
    "consumesApi": [
      "CartAPI"
    ],
    "components": []
  }
}

Component Blueprint

{
 "identifier": "Component",
 "title": "Component",
 "icon": "Cloud",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "service",
         "library"
       ],
       "icon": "Docs",
       "type": "string",
       "enumColors": {
         "service": "blue",
         "library": "green"
       }
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "system": {
     "target": "System",
     "required": false,
     "many": false
   },
   "resources": {
     "target": "Resource",
     "required": false,
     "many": true
   },
   "consumesApi": {
     "target": "API",
     "required": false,
     "many": true
   },
   "components": {
     "target": "Component",
     "required": false,
     "many": true
   },
   "providesApi": {
     "target": "API",
     "required": false,
     "many": false
   }
 }
}

Resource Blueprint

{
 "identifier": "Resource",
 "title": "Resource",
 "icon": "DevopsTool",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "postgress",
         "kafka-topic",
         "rabbit-queue",
         "s3-bucket"
       ],
       "icon": "Docs",
       "type": "string"
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

API Blueprint

{
 "identifier": "API",
 "title": "API",
 "icon": "Link",
 "schema": {
   "properties": {
     "type": {
       "type": "string",
       "enum": [
         "Open API",
         "grpc"
       ]
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "provider": {
     "target": "Component",
     "required": true,
     "many": false
   }
 }
}

Domain Blueprint

{
 "identifier": "Domain",
 "title": "Domain",
 "icon": "Server",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

System Blueprint

{
 "identifier": "System",
 "title": "System",
 "icon": "DevopsTool",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "domain": {
     "target": "Domain",
     "required": true,
     "many": false
   }
 }
}
{{tabel-1}}

Microservices SDLC

  • Scaffold a new microservice

  • Deploy (canary or blue-green)

  • Feature flagging

  • Revert

  • Lock deployments

  • Add Secret

  • Force merge pull request (skip tests in a crisis)

  • Add environment variable to service

  • Add IaC to the service

  • Upgrade package version

Development environments

  • Spin up a developer environment for 5 days

  • ETL mock data to environment

  • Invite developer to the environment

  • Extend TTL by 3 days

Cloud resources

  • Provision a cloud resource

  • Modify a cloud resource

  • Get permissions to access cloud resource

SRE actions

  • Update pod count

  • Update auto-scaling group

  • Execute incident response runbook automation

Data Engineering

  • Add / Remove / Update Column to table

  • Run Airflow DAG

  • Duplicate table

Backoffice

  • Change customer configuration

  • Update customer software version

  • Upgrade / downgrade plan tier

  • Create / delete customer

Machine learning actions

  • Train model

  • Pre-process dataset

  • Deploy

  • A/B testing traffic route

  • Revert

  • Spin up remote Jupyter notebook

{{tabel-2}}

Engineering tools

  • Observability

  • Tasks management

  • CI/CD

  • On-Call management

  • Troubleshooting tools

  • DevSecOps

  • Runbooks

Infrastructure

  • Cloud Resources

  • K8S

  • Containers & Serverless

  • IaC

  • Databases

  • Environments

  • Regions

Software and more

  • Microservices

  • Docker Images

  • Docs

  • APIs

  • 3rd parties

  • Runbooks

  • Cron jobs
