
5 steps to whiteboard your software catalog taxonomy


Introduction

When getting started with Backstage or another software catalog, DevOps and platform engineers may feel intimidated by the need to create a “taxonomy” for what’s in the software catalog and how to show the relationships and dependencies between microservices and the resources they run on. In reality, creating a software catalog isn’t that difficult. In this post, we’ll walk through five steps to do just that.

Why software catalog? Why an internal developer portal?

Platform engineering and developer experience are here to stay. At their core is a better developer experience: less cognitive load on developers and less context switching. This should increase developer productivity and help make sense of software sprawl.

One of the core concepts in the world of platform engineering is the internal developer portal. It consists of a software catalog with service maturity in mind and a portal for developer self-service actions. 

  • The software catalog shows all software, infrastructure, cloud assets and more, along with the dependencies between them. This creates a context-rich way of understanding the software development lifecycle and the ability to query it and build KPIs on top of it, from DORA metrics to health and more (Backstage calls this “Soundcheck”; other vendors call it “service maturity” or “scorecards”). 
  • The second part is a developer self-service capability that delivers on the promise of platform engineering: creating a product-like experience when interacting with devtools and the operations side. Using developer self-service, developers can spin up ephemeral environments, request temporary permissions, provision a cloud resource, perform day-two operations and scaffold a microservice. 

The concept of the software catalog is also intimidating to some. How can you possibly map all that’s in the scope of software, resources, cloud and devtools? This post aims to show that it’s actually pretty simple, and that the results (if done right) are powerful and achievable.

It’s important to note that most engineering orgs already have some ad-hoc software catalog. Data about services and environments is usually stored somewhere, whether in CSV files, tags in the cloud or some sort of documentation. You can use these sources of information when going through these steps.

Let’s get started.

Step 1: Identify your central entity and map it first

You can’t create a software catalog without identifying the central entity that serves as its focal point. In most cases this is a microservice. This may seem obvious, since a software catalog is also called a “microservice catalog”, but this isn’t always the case. Sometimes the core requirement is mapping all environments or cloud resources. 

Assuming the service is the central entity, you should begin by mapping all the information you want to track and associate with any given service. Forget about dependencies (we’ll get to them later). Think about this first unit and what needs to be known about it. In Port’s internal developer portal this is called a blueprint. In Backstage, a service is modeled as a Component, one of the five “entity kinds” in Backstage’s adaptation of the C4 model, but the same principles apply everywhere. List the services you have, and then add the data that describes them: relevant links, the team that manages them, the language they are written in. You can add the Slack channel, APIs, a health metric and a GitHub link.
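
To make this concrete, here is a minimal sketch of a service blueprint, written in the same JSON style as the blueprint examples at the end of this post. The specific properties (language, Slack channel, repository link, health) and the Team relation are illustrative assumptions, not a required schema; model whatever actually describes your services.

Service Blueprint (sketch)

{
 "identifier": "Service",
 "title": "Service",
 "icon": "Microservice",
 "schema": {
   "properties": {
     "language": {
       "type": "string",
       "enum": ["Python", "Go", "Node"]
     },
     "slackChannel": {
       "type": "string",
       "format": "url"
     },
     "githubUrl": {
       "type": "string",
       "format": "url"
     },
     "healthStatus": {
       "type": "string",
       "enum": ["Healthy", "Degraded", "Unhealthy"]
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "team": {
     "target": "Team",
     "required": false,
     "many": false
   }
 }
}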

 

Step 2: Define the questions you want answered

Now ask which questions you want to answer in your software catalog. This will help you identify the data that needs to be populated in it.

In most cases this list will be along the lines of:

  • What’s the current running version in production for a given service?
  • Which services does a particular service depend on?
  • What is the general status of a k8s cluster? 
  • Who owns this microservice and where can I find API docs?
  • Which Kubernetes clusters exist in which cloud provider?
  • Why did this deploy fail?
  • Who is on-call?
  • Is this version production-ready?
  • What are the DORA metrics for a given team, service or developer?
  • Which resources run in different regions across different cloud providers (multi cloud)?
  • Can I deploy a new version now?
  • What is the production status, security, cost and production readiness of services?
  • Which services have inter-dependencies?

Different questions apply to different personas. A front-end developer has different questions than a data engineer. An architect also thinks about the world differently than a team lead. Make sure you cover them all since this is what will influence what data you include in the software catalog.
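
To see how a question translates into catalog data, here is an illustrative mapping of a few of the questions above to the entities and properties that would answer them. The entity and property names are assumptions made for the sake of the example, not a required schema.

[
  {
    "question": "What's the current running version in production for a given service?",
    "answeredBy": "a deployment config entity with a version property and relations to a service and an environment"
  },
  {
    "question": "Who is on-call?",
    "answeredBy": "an on-call property on the service entity, synced from your incident management tool"
  },
  {
    "question": "Which Kubernetes clusters exist in which cloud provider?",
    "answeredBy": "a cluster entity with a relation to a cloud account entity"
  }
]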

Step 3: Determine what is deployed where

Answering the questions listed in the previous step requires thinking about the data that needs to be in the software catalog, and this is where modeling the software catalog comes in. 

Backstage’s visualization of software for architecture diagrams is based on the C4 model. This approach works well because the result is much more than a microservice catalog: it catalogs all the elements related to the software (microservices, resources, deployments) and the dependencies and relationships between them. 

In its adaptation of the C4 model, Backstage uses three core entities: 

  1. API
  2. Component; and
  3. Resource

It then adds two additional entities:

  1. Systems; and
  2. Domains

While we believe Backstage’s adaptation of the C4 model is excellent, let’s try to apply it to a microservice-oriented architecture and be more specific about the implications with Port.

Port’s take on this comes with a slight variation compared to Backstage. When thinking about “what is deployed where”, remember that the software catalog is closely tied to the software development lifecycle (SDLC):

  1. Services
  2. Environments
  3. Deployments

To understand this, let’s examine this ontology:

 

  • Service is a microservice, or any other software architecture (including a monolith).
  • Environment is any production, staging, QA, DevEnv, on-demand, or any other environment type.
  • Deployment Config is a representation of the current “live” version of a service running in a specific environment. It includes references to the service, environment, and deployment, as well as real-time information such as status, uptime, and any other relevant metadata (an example entity follows this list).
  • Deployment Service Pod is an instance of a service. It includes a reference to the deployment config and details such as the specification and runtime status of containers. This provides us with information about how the service is running now on a specific pod.
  • Deployment is an object representing a CD job. It includes the version of the deployed service and a link to the job itself. Unlike other objects, the deployment is an immutable item in the software catalog. It is important to keep it immutable to ensure the catalog remains reliable.
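
Here is a sketch of a deployment config entity for a service running in production, in the same entity JSON style used later in this post. The blueprint name, properties and values are illustrative assumptions rather than a prescribed Port or Backstage schema.

Deployment Config (sketch)

{
 "identifier": "cart-service-production",
 "title": "Cart Service in Production",
 "blueprint": "DeploymentConfig",
 "properties": {
   "version": "1.4.2",
   "status": "Healthy",
   "uptime": "99.98%",
   "url": "https://cart.internal.example.com"
 },
 "relations": {
   "service": "CartService",
   "environment": "production"
 }
}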

What we’re suggesting is that storing data about deployments in a developer portal is essential for a number of reasons. 

  • First and foremost, it allows developers to track and review the deployment process for their applications. This includes access to information such as the link to the deployment, the data used in the deployment, the user who initiated the deployment, the status of the deployment (e.g. successful or unsuccessful), the duration of the deployment job, and the version of the service that was deployed.
  • Having this information readily available in a developer portal helps developers quickly identify any issues or problems that may have occurred during the deployment process, and allows them to troubleshoot and resolve these issues more efficiently. It also helps developers stay up-to-date on the status of their deployments and understand the impact that any changes or updates may have on their applications.

Overall, storing data about deployments in a developer portal helps to improve the transparency and accountability of the deployment process, and helps to ensure that applications are deployed smoothly and efficiently. Most importantly, taking Backstage’s C4-model variation (or the Port approach, which is slightly different) will enable you to answer most of the questions listed in the previous step.
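
As a sketch of the immutable deployment object described above, here is what a single deployment entity might look like. The field names and values are illustrative assumptions; the point is that it captures who deployed, which version, with what outcome, and a link to the CD job.

Deployment (sketch)

{
 "identifier": "cart-service-deploy-1784",
 "title": "Deploy CartService v1.4.2 to production",
 "blueprint": "Deployment",
 "properties": {
   "version": "1.4.2",
   "status": "Successful",
   "triggeredBy": "jane.doe",
   "duration": "6m 12s",
   "jobUrl": "https://ci.example.com/job/1784"
 },
 "relations": {
   "deploymentConfig": "cart-service-production"
 }
}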

Here are two smaller points comparing the Port approach to Backstage’s C4-model variation:

  • For resources, you want a hierarchy, such as: this namespace belongs to this cluster, and this cluster belongs to this cloud environment. The hierarchy provides additional context that is valuable in several software catalog use cases. Without it, you cannot ask questions like which Lambda functions are running in a specific region, or which services are deployed on a given cluster (rather than only which cluster a given service is deployed on). This capability isn’t available today in Backstage, but is supported in Port; a sketch of such a hierarchy follows this list. 
  • A deployed service should provide information about how the service behaves in a specific environment. This includes operational data, such as who made a given deployment and where the list of all historical deployments is, so that code changes that matter can be detected. It should also reflect the pull requests that triggered the deployment, runtime information, and so on. In this respect it is important to include both metadata AND live data. Catalogs that contain metadata only (usually as a result of limitations of their API, which is currently the case with Backstage) do not reflect the much-needed live data about what is deployed. As a result, such software catalogs do not reflect data about versions, packages or alerts, and cannot serve as a software catalog whose data can be consumed by automated workflows and pipelines (Port, by contrast, contains both metadata and live data).
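
Here is a minimal sketch of that resource hierarchy expressed as blueprint relations, trimmed to the relevant parts. The blueprint and relation names are illustrative assumptions; the idea is simply that each level points to its parent, so questions can be asked at any level.

Namespace Blueprint (sketch)

{
 "identifier": "Namespace",
 "title": "Namespace",
 "schema": {
   "properties": {},
   "required": []
 },
 "relations": {
   "cluster": {
     "target": "Cluster",
     "required": true,
     "many": false
   }
 }
}

Cluster Blueprint (sketch)

{
 "identifier": "Cluster",
 "title": "Cluster",
 "schema": {
   "properties": {},
   "required": []
 },
 "relations": {
   "cloudAccount": {
     "target": "CloudAccount",
     "required": true,
     "many": false
   }
 }
}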

 

Step 4: Use role-based access control (RBAC) to define who sees what

The data in the software catalog can be endless. Certain personas want to keep it simple, while others need more depth. Does each developer need (or want) to see all the services in all the domains? Is it too much information? 

In the past, the question of who gets to see what had to be resolved early on, and the types of data to include were decided up front. But in most systems, including Backstage and Port, you can just connect integrations, “stuff” all the information into the catalog, and then use RBAC rules to define who sees what. 

With regard to services, it’s pretty clear how to set RBAC: the farther a service is from a team or developer, the less they get to see. It’s more difficult to decide on RBAC for the “where” (namespaces, cloud accounts and environments). A sketch of what such rules might look like follows.
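
As a sketch only (this is not Port’s or Backstage’s actual permission syntax), an RBAC rule for a front-end team might express the principle above: full visibility into the services they own and depend on, but limited visibility into the “where”.

{
  "role": "frontend-developers",
  "catalogVisibility": {
    "services": "owned-and-direct-dependencies",
    "environments": ["production", "staging"],
    "namespaces": "team-owned-only",
    "cloudAccounts": "hidden"
  }
}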

 

Step 5: Your next step is developer self-service

It’s easy to complete the setup of the software catalog, perhaps set KPIs and track them for each element in the catalog, and say “that’s all, folks!”. But you aren’t done. The next step, which is no less important, is to define self-service actions and curate those experiences for developers, reducing cognitive load, preventing context switching and enabling better developer productivity. A sketch of one such action appears below.
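
To give a feel for what a self-service action definition can look like, here is a hedged sketch of a “scaffold a new microservice” action: developer inputs plus an invocation method that triggers your pipeline. The exact schema varies by tool, so treat the field names and the webhook URL as assumptions.

Scaffold a new microservice (sketch)

{
 "identifier": "scaffold_microservice",
 "title": "Scaffold a new microservice",
 "trigger": "CREATE",
 "userInputs": {
   "properties": {
     "serviceName": {
       "type": "string"
     },
     "language": {
       "type": "string",
       "enum": ["Python", "Go", "Node"]
     }
   },
   "required": ["serviceName"]
 },
 "invocationMethod": {
   "type": "WEBHOOK",
   "url": "https://ci.example.com/webhooks/scaffold-service"
 }
}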

Example JSON blueprints and entities

The following examples model an Orders domain with Cart and Products systems, using the Domain, System, Component, Resource and API blueprints discussed in step 3.

Order Domain

{
  "properties": {},
  "relations": {},
  "title": "Orders",
  "identifier": "Orders"
}

Cart System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Cart",
  "title": "Cart"
}

Products System

{
  "properties": {},
  "relations": {
    "domain": "Orders"
  },
  "identifier": "Products",
  "title": "Products"
}

Cart Resource

{
  "properties": {
    "type": "postgress"
  },
  "relations": {},
  "icon": "GPU",
  "title": "Cart SQL database",
  "identifier": "cart-sql-sb"
}

Cart API

{
 "identifier": "CartAPI",
 "title": "Cart API",
 "blueprint": "API",
 "properties": {
   "type": "Open API"
 },
 "relations": {
   "provider": "CartService"
 },
 "icon": "Link"
}

Core Kafka Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Kafka Library",
  "identifier": "CoreKafkaLibrary"
}

Core Payment Library

{
  "properties": {
    "type": "library"
  },
  "relations": {
    "system": "Cart"
  },
  "title": "Core Payment Library",
  "identifier": "CorePaymentLibrary"
}

Cart Service JSON

{
 "identifier": "CartService",
 "title": "Cart Service",
 "blueprint": "Component",
 "properties": {
   "type": "service"
 },
 "relations": {
   "system": "Cart",
   "resources": [
     "cart-sql-sb"
   ],
   "consumesApi": [],
   "components": [
     "CorePaymentLibrary",
     "CoreKafkaLibrary"
   ]
 },
 "icon": "Cloud"
}

Products Service JSON

{
  "identifier": "ProductsService",
  "title": "Products Service",
  "blueprint": "Component",
  "properties": {
    "type": "service"
  },
  "relations": {
    "system": "Products",
    "consumesApi": [
      "CartAPI"
    ],
    "components": []
  }
}

Component Blueprint

{
 "identifier": "Component",
 "title": "Component",
 "icon": "Cloud",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "service",
         "library"
       ],
       "icon": "Docs",
       "type": "string",
       "enumColors": {
         "service": "blue",
         "library": "green"
       }
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "system": {
     "target": "System",
     "required": false,
     "many": false
   },
   "resources": {
     "target": "Resource",
     "required": false,
     "many": true
   },
   "consumesApi": {
     "target": "API",
     "required": false,
     "many": true
   },
   "components": {
     "target": "Component",
     "required": false,
     "many": true
   },
   "providesApi": {
     "target": "API",
     "required": false,
     "many": false
   }
 }
}

Resource Blueprint

{
 "identifier": "Resource",
 "title": "Resource",
 "icon": "DevopsTool",
 "schema": {
   "properties": {
     "type": {
       "enum": [
         "postgres",
         "kafka-topic",
         "rabbit-queue",
         "s3-bucket"
       ],
       "icon": "Docs",
       "type": "string"
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

API Blueprint

{
 "identifier": "API",
 "title": "API",
 "icon": "Link",
 "schema": {
   "properties": {
     "type": {
       "type": "string",
       "enum": [
         "Open API",
         "grpc"
       ]
     }
   },
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "provider": {
     "target": "Component",
     "required": true,
     "many": false
   }
 }
}

Domain Blueprint

{
 "identifier": "Domain",
 "title": "Domain",
 "icon": "Server",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {}
}

System Blueprint

{
 "identifier": "System",
 "title": "System",
 "icon": "DevopsTool",
 "schema": {
   "properties": {},
   "required": []
 },
 "mirrorProperties": {},
 "formulaProperties": {},
 "calculationProperties": {},
 "relations": {
   "domain": {
     "target": "Domain",
     "required": true,
     "many": false
   }
 }
}
Examples of developer self-service actions

Microservices SDLC

  • Scaffold a new microservice

  • Deploy (canary or blue-green)

  • Feature flagging

  • Revert

  • Lock deployments

  • Add Secret

  • Force merge pull request (skip tests on crises)

  • Add environment variable to service

  • Add IaC to the service

  • Upgrade package version

Development environments

  • Spin up a developer environment for 5 days

  • ETL mock data to environment

  • Invite developer to the environment

  • Extend TTL by 3 days

Cloud resources

  • Provision a cloud resource

  • Modify a cloud resource

  • Get permissions to access cloud resource

SRE actions

  • Update pod count

  • Update auto-scaling group

  • Execute incident response runbook automation

Data Engineering

  • Add / Remove / Update Column to table

  • Run Airflow DAG

  • Duplicate table

Backoffice

  • Change customer configuration

  • Update customer software version

  • Upgrade / Downgrade plan tier

  • Create / Delete customer

Machine learning actions

  • Train model

  • Pre-process dataset

  • Deploy

  • A/B testing traffic route

  • Revert

  • Spin up remote Jupyter notebook

Examples of what to include in the software catalog

Engineering tools

  • Observability

  • Tasks management

  • CI/CD

  • On-Call management

  • Troubleshooting tools

  • DevSecOps

  • Runbooks

Infrastructure

  • Cloud Resources

  • K8S

  • Containers & Serverless

  • IaC

  • Databases

  • Environments

  • Regions

Software and more

  • Microservices

  • Docker Images

  • Docs

  • APIs

  • 3rd parties

  • Runbooks

  • Cron jobs
