The Four Pillars of Internal Developer Portals

February 19, 2023


Introduction

Internal developer portals are made of four main parts: a software catalog, a scorecard layer, a developer self-service actions layer and a workflow automation layer. All four elements interplay with one another. 

This post will take a closer look at these four elements and why they matter for internal developer portals. As we go along, we’ll add additional elements that need to be in an internal developer portal, from search capabilities through RBAC and the need for a loosely coupled catalog. 

Some terminology first

Internal developer portals are typically used as part of a platform engineering initiative. The past two years have seen intense interest in platform engineering, which seeks to make software development simpler through reusable elements, in order to grow developer productivity and reduce software sprawl. So what is the platform, what is the portal, and how do they relate?

The platform, in a very general sense, is what helps engineers get their job done. At its foundation are CI/CD, cloud, IaC, GitOps and more. It is about creating reusable elements developers can use without unnecessary cognitive load. A key principle of platform engineering is that it takes a product-management perspective: understanding what developers need, how best to provide it to them, and how to evolve the platform itself.

The portal is the platform’s interface, and as such it helps developers access the underlying heterogeneous SDLC resources and consume the developer self-service actions the platform sets out to expose. The portal, according to analyst firm Gartner, is the most mature technology in the platform engineering space. A lot of the product thinking in platform engineering is reflected in the portal’s developer-facing interface.

To illustrate, consider a developer who wants to add a cloud resource to a microservice. This may require deep knowledge of Terraform or IaC, expertise the developer most likely doesn’t have. Instead of having developers open ticket after ticket for DevOps to add those cloud resources, a self-service action for adding the resource can simply appear in the developer portal, with exactly the information the developer needs and no more. The developer doesn’t need IaC or Terraform expertise; the DevOps team’s expertise is leveraged once, when the action and its underlying infrastructure are created.

The portal isn’t used just by developers. In a world full of software sprawl, hundreds of microservices and DevOps tools, operations people want to use the internal developer portal too, especially its software catalog. In that sense, the portal and the software catalog at its heart are more than a “UI on top of the platform”. Eventually, the portal will become a platform API, which will be crucial to the success of platform engineering. We’ll explain this idea below.

The first pillar: the software catalog

The software catalog shows developers the data they need about software and its underlying resources, so they can see the relevant information in context (how a running service behaves in a specific environment, which packages are in which service, and so on). While catalogs should provide simple abstractions for developers, they can and should contain more than on-call, ownership and microservice information: everything from infrastructure and pipelines to resources and more. By covering microservices, resources, environments and custom assets, the software catalog becomes more than a point solution for developer self-service; it becomes a general-purpose metadata repository with stateful information.

How exactly do you define what goes into the software catalog, and how should it be structured? While there are common taxonomies, such as Backstage’s C4 model, software catalogs shouldn’t be opinionated at the data-model level; they should let you bring your own data model. You are the one who should be opinionated about the data model, defining exactly what you need in the software catalog. You can create elements (called blueprints in Port) that reflect packages, services deployed in environments, Kubernetes objects and more. Getting the data model right will drive the success of your platform engineering as a whole, since the software catalog becomes the guide for developers and operations into what exists where. You can read more about creating the software catalog data model taxonomy here; it isn’t as complex as it sounds. In short, the software catalog should be neutral enough that you can build an opinionated data model in it, reflecting your own environment.
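
To make this concrete, here is a minimal sketch of what a blueprint for a runtime environment might look like, following the same structure as the blueprint examples at the end of this post (the identifier and the region property are hypothetical, chosen for illustration only):

{
  "identifier": "Environment",
  "title": "Environment",
  "icon": "Cloud",
  "schema": {
    "properties": {
      "region": {
        "type": "string"
      }
    },
    "required": []
  },
  "mirrorProperties": {},
  "formulaProperties": {},
  "calculationProperties": {},
  "relations": {}
}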

A well-built software catalog will help you manage software, Kubernetes and ArgoCD, temporary environments, packages, anything really. This means that although the software catalog should provide abstracted views to developers, the more data is in it, the better. In that sense it acts as a general-purpose metadata store for software and resources. Data may be abstracted away for developers, but it remains in the catalog for other users, such as operations teams, or to be used for automated decisions from CI/CD pipelines.
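
As a hedged illustration of what such stateful catalog data can look like, here is a hypothetical entity for a service running in production; the blueprint name, properties and relations are made up for the example and follow the same entity format shown at the end of this post:

{
  "identifier": "cart-service-production",
  "title": "Cart Service (Production)",
  "blueprint": "RunningService",
  "properties": {
    "version": "1.4.2",
    "replicas": 3,
    "healthStatus": "Healthy"
  },
  "relations": {
    "service": "CartService",
    "environment": "production"
  }
}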

The second pillar: scorecards

Scorecards are an important part of platform engineering’s reason to exist: they drive quality and standards for the engineering organization. Using scorecards you can define metrics for quality, production readiness and even developer and DevOps productivity. Scorecards can be set for each individual element in the software catalog (an “entity” in Port), so that the metrics and overall score can be seen in context, for the individual entity, helping with package management, Kubernetes health metrics and more. Tracking scorecards in context (for a microservice, an environment, a cluster or any other entity) also communicates standards and helps drive visibility.
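
As a rough, illustrative sketch (not a reference for Port’s exact scorecard schema), a production-readiness scorecard for a service entity might look something like this, with rules grouped into levels; the property names on_call and coverage are hypothetical:

{
  "identifier": "productionReadiness",
  "title": "Production Readiness",
  "rules": [
    {
      "identifier": "hasOnCall",
      "title": "Has an on-call owner",
      "level": "Bronze",
      "query": {
        "combinator": "and",
        "conditions": [
          {
            "operator": "isNotEmpty",
            "property": "on_call"
          }
        ]
      }
    },
    {
      "identifier": "highTestCoverage",
      "title": "Test coverage is at least 80%",
      "level": "Gold",
      "query": {
        "combinator": "and",
        "conditions": [
          {
            "operator": ">=",
            "property": "coverage",
            "value": 80
          }
        ]
      }
    }
  ]
}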

Scorecards can also be used to make automated decisions sourced from the CI/CD pipeline, such as subscribing to a scorecard that indicates a degradation of a service or resource and then acting upon it using workflow automation.

The third pillar: developer self-service

On the developer self-service side, developers should be able to do almost anything through the portal, with a simple UI and a product-like interface. You can read why Jenkins and similar solutions won’t work for developer self-service. Self-service actions cover everything from microservice scaffolding to temporary permissions, ephemeral environments and more. This is what saves DevOps teams work. The user interface for self-service actions should be no-code and simple for the platform team to configure. Timers for actions with a TTL (ephemeral environments, permissions) are required for good self-service, as are manual approvals.
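
For illustration, a self-service action for spinning up an ephemeral environment might be defined along these lines; the field names, inputs and webhook URL below are illustrative assumptions, not an exact schema:

{
  "identifier": "createEphemeralEnvironment",
  "title": "Create Ephemeral Environment",
  "trigger": "CREATE",
  "userInputs": {
    "properties": {
      "name": {
        "type": "string",
        "title": "Environment name"
      },
      "ttl": {
        "type": "string",
        "title": "Time to live",
        "enum": [
          "1 day",
          "3 days",
          "5 days"
        ]
      }
    },
    "required": [
      "name",
      "ttl"
    ]
  },
  "invocationMethod": {
    "type": "WEBHOOK",
    "url": "https://example.com/provision-environment"
  },
  "requiredApproval": true
}

The TTL input and the approval flag correspond to the timer and manual-approval requirements mentioned above.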

Developer self-service should be loosely coupled from the underlying infrastructure and automations. Platforms evolve all the time, regardless of whether they were haphazardly built over time or are a result of a new technology stack. If the underlying cloud resources, CI or CD systems are replaced, the developers should have the same experience and this only works with loose coupling. 

The fourth pillar: workflow automation

On the workflow automation side, the portal acts as a single interface that both machines and humans can interact with. Remember the software catalog? It holds the entire context and state of the platform, and the scorecards built on top of it indicate the health, quality or readiness of any element in the software catalog. And when workflows, such as CI, check the status of entities in the software catalog, the catalog has become the API for the platform. This is what we mean by workflow automation.
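
As a hedged sketch of the idea, a CI job could fetch an entity from the catalog and gate a deployment on the result; the response shape below is purely illustrative (a hypothetical payload, not Port’s actual API response):

{
  "entity": {
    "identifier": "CartService",
    "blueprint": "Component",
    "properties": {
      "type": "service"
    },
    "scorecards": {
      "productionReadiness": "Gold"
    }
  }
}

If the service hasn’t reached the required scorecard level, the pipeline can block the deployment or open a task for the owning team.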

{{cta_3}}

Additional features

IDPs need more than the four pillars. No-code RBAC is a big deal, since you need to control who sees what (by developer and team) and who can change which elements. Dashboards also matter, since insights and reporting are needed, as are robust search capabilities up and down dependency graphs.

Exporters and integrations that bring all the data in matter too, from Kubernetes exporters to GitHub and Bitbucket.

Summary

The interplay between the different elements in the internal developer portal is what drives its applicability and robustness. Lean software catalogs without stateful data, or limited developer self-service capabilities, won’t do the job. Eventually, IDPs will evolve to become a platform API. For that to happen, the basic design of the IDP needs to be set in place, helping developers and operations work better together.

{{cta_1}}

Check out Port's pre-populated demo and see what it's all about.

Check live demo

No email required

{{cta_2}}

Contact sales for a technical product walkthrough

Let’s start
{{cta_3}}

Open a free Port account. No credit card required

Let’s start
{{cta_4}}

Watch Port live coding videos - setting up an internal developer portal & platform

Let’s start
{{cta-demo}}
{{reading-box-backstage-vs-port}}


Order Domain

{
  "properties": {},
  "relations": {},
  "blueprint": "Domain",
  "title": "Orders",
  "identifier": "Orders"
}

Cart System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "blueprint": "System",
  "identifier": "Cart",
  "title": "Cart"
}

Products System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "blueprint": "System",
  "identifier": "Products",
  "title": "Products"
}

Cart Resource

{
  "properties": {
    "type": "postgres"
  },
  "relations": {},
  "blueprint": "Resource",
  "icon": "GPU",
  "title": "Cart SQL database",
  "identifier": "cart-sql-sb"
}

Cart API

{
 "identifier": "CartAPI",
 "title": "Cart API",
 "blueprint": "API",
 "properties": {
   "type": "Open API"
 },
 "relations": {
   "provider": "CartService"
 },
 "icon": "Link"
}

Core Kafka Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "blueprint": "Component",
  "title": "Core Kafka Library",
  "identifier": "CoreKafkaLibrary"
}

Core Payment Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "blueprint": "Component",
  "title": "Core Payment Library",
  "identifier": "CorePaymentLibrary"
}

Cart Service JSON

{
 "identifier": "CartService",
 "title": "Cart Service",
 "blueprint": "Component",
 "properties": {
   "type": "service"
 },
 "relations": {
   "system": "Cart",
   "resources": [
     "cart-sql-sb"
   ],
   "consumesApi": [],
   "components": [
     "CorePaymentLibrary",
     "CoreKafkaLibrary"
   ]
 },
 "icon": "Cloud"
}

Products Service JSON

{
  "identifier": "ProductsService",
  "title": "Products Service",
  "blueprint": "Component",
  "properties": {
    "type": "service"
  },
  "relations": {
    "system": "Products",
    "consumesApi": [
      "CartAPI"
    ],
    "components": []
  }
}

Component Blueprint

{
 "identifier": "Component",
 "title": "Component",
 "icon": "Cloud",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "service",
         "library"
       ],
       "icon": "Docs",
       "type": "string",
       "enumColors": {
         "service": "blue",
         "library": "green"
       }
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "system": {
     "target": "System",
     "required": false,
     "many": false
   },
   "resources": {
     "target": "Resource",
     "required": false,
     "many": true
   },
   "consumesApi": {
     "target": "API",
     "required": false,
     "many": true
   },
   "components": {
     "target": "Component",
     "required": false,
     "many": true
   },
   "providesApi": {
     "target": "API",
     "required": false,
     "many": false
   }
 }
}

Resource Blueprint

{
 "identifier": "Resource",
 "title": "Resource",
 "icon": "DevopsTool",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "postgres",
         "kafka-topic",
         "rabbit-queue",
         "s3-bucket"
       ],
       "icon": "Docs",
       "type": "string"
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

API Blueprint

{
 "identifier": "API",
 "title": "API",
 "icon": "Link",
 "schema": {
   "properties": {
     "type": {
       "type": "string",
       "enum": [
         "Open API",
         "grpc"
       ]
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "provider": {
     "target": "Component",
     "required": true,
     "many": false
   }
 }
}

Domain Blueprint

{
 "identifier": "Domain",
 "title": "Domain",
 "icon": "Server",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

System Blueprint

{
 "identifier": "System",
 "title": "System",
 "icon": "DevopsTool",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "domain": {
     "target": "Domain",
     "required": true,
     "many": false
   }
 }
}
{{tabel-1}}

Microservices SDLC

  • Scaffold a new microservice

  • Deploy (canary or blue-green)

  • Feature flagging

  • Revert

  • Lock deployments

  • Add Secret

  • Force merge pull request (skip tests in a crisis)

  • Add environment variable to service

  • Add IaC to the service

  • Upgrade package version

Development environments

  • Spin up a developer environment for 5 days

  • ETL mock data to environment

  • Invite developer to the environment

  • Extend TTL by 3 days

Cloud resources

  • Provision a cloud resource

  • Modify a cloud resource

  • Get permissions to access cloud resource

SRE actions

  • Update pod count

  • Update auto-scaling group

  • Execute incident response runbook automation

Data Engineering

  • Add / remove / update a column in a table

  • Run Airflow DAG

  • Duplicate table

Backoffice

  • Change customer configuration

  • Update customer software version

  • Upgrade / downgrade plan tier

  • Create / delete customer

Machine learning actions

  • Train model

  • Pre-process dataset

  • Deploy

  • A/B testing traffic route

  • Revert

  • Spin up remote Jupyter notebook

{{tabel-2}}

Engineering tools

  • Observability

  • Tasks management

  • CI/CD

  • On-Call management

  • Troubleshooting tools

  • DevSecOps

  • Runbooks

Infrastructure

  • Cloud Resources

  • K8S

  • Containers & Serverless

  • IaC

  • Databases

  • Environments

  • Regions

Software and more

  • Microservices

  • Docker Images

  • Docs

  • APIs

  • 3rd parties

  • Runbooks

  • Cron jobs

Starting with Port is simple, fast and free.

Let’s start