The Practical Guide to Internal Developer Portals

Scorecards

Chapter 5

Introduction

Scorecards provide a structured, real-time view that enables teams to track, assess, and maintain engineering excellence. They do for standards what dashboards do for performance: they offer a repeatable framework for tracking key metrics and keeping teams and projects aligned, which benefits engineering leaders and platform teams alike.

Scorecards give developers clarity that they’re adhering to standards and following golden paths right from the beginning. This helps prevent duplicated work and the delays caused when standards are not followed. Crucially, there’s no guesswork: developers get a clear definition of each standard together with its measurement. While a written standard might say “fix important security issues”, a scorecard shows exactly how many important security issues need to be fixed, and an internal developer portal identifies which issues they are and enriches this with information about how to fix them.

Scorecards are a core feature of an internal developer portal because many engineering teams struggle to define, establish, and enforce engineering standards (not for lack of trying). They know that software needs standards; after all, standards are a definition of done, a definition of what good looks like. But they lack the tools to communicate and track those standards effectively. As a result, they fall back on legacy methods, such as tracking production readiness or AppSec in spreadsheets, which involve a lot of manual labor. For developers, this means slowing down releases for tedious reviews and reworking code because standards were unclear.

In this section, we’ll define scorecards, explain why they matter, give examples of the most common types of scorecards, describe how scorecards are used in practice, and provide a step-by-step “quickstart” guide for new portal users implementing scorecards.

What is a scorecard?

A scorecard is a structured way to define, measure, and track key metrics for each service or entity within your internal developer portal. Each scorecard comprises a set of rules, each specifying one or more conditions that must be met. Scorecards include tiered levels—such as gold, silver, and bronze—to indicate varying levels of compliance or maturity.
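
To make this concrete, here is a minimal sketch of how a scorecard with tiered rules could be modeled in Python. The Rule and Scorecard classes, field names, and tier logic are illustrative assumptions, not Port's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """One pass/fail condition, e.g. 'test coverage >= 80%'."""
    title: str
    level: str       # tier this rule counts toward: "bronze", "silver", or "gold"
    check: callable  # takes a catalog entity (dict) and returns True if it passes

@dataclass
class Scorecard:
    title: str
    rules: list

    def tier(self, entity: dict) -> str:
        """Return the highest tier for which every rule up to it passes."""
        achieved = "none"
        for level in ("bronze", "silver", "gold"):
            level_rules = [r for r in self.rules if r.level == level]
            if level_rules and all(r.check(entity) for r in level_rules):
                achieved = level
            else:
                break
        return achieved

# Illustrative usage with a hypothetical service entity from the catalog
service = {"has_readme": True, "test_coverage": 85, "has_oncall": False}
maturity = Scorecard(
    title="Service maturity",
    rules=[
        Rule("Has a README", "bronze", lambda e: e["has_readme"]),
        Rule("Coverage >= 80%", "silver", lambda e: e["test_coverage"] >= 80),
        Rule("Has an on-call team", "gold", lambda e: e["has_oncall"]),
    ],
)
print(maturity.tier(service))  # -> "silver": gold fails without an on-call team
```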

Scorecards help teams assess production readiness, code quality, migration progress, operational performance, and more. They can be used to track DORA metrics and AppSec compliance, and even to enforce working agreements, like PR review times. They can also drive initiatives, like a security team initiative to eradicate all old instances of a vulnerable open source package (more on initiatives later in this chapter).

Scorecards complement the internal developer portal’s software catalog, which serves as a central system of record for your entire software development environment. They provide teams with a reliable framework for defining, tracking, and maintaining engineering standards.

Why should you use scorecards?

There are four main reasons organizations use scorecards: 

1. Communicating and enforcing standards

Many developers struggle with unclear expectations: only 15% feel they fully understand required standards, and nearly half actively disagree that standards are easy to understand. Scorecards help domain owners define, communicate, and enforce these standards effectively, reducing inconsistencies, improving compliance, and mitigating security risks.

By integrating scorecards into an internal developer portal, teams gain real-time tracking, better collaboration, and a structured approach to maintaining engineering excellence.

2. Standardizing measurement and alignment

Scorecards shift teams away from vague wording around standards by tying them to actual measurements. They enable teams to define clear, consistent standards for evaluating key metrics like code quality, documentation, and operational performance. By setting baselines, they ensure teams work toward shared expectations. For example, a scorecard can enforce a minimum requirement for test coverage or documentation completeness. Because the requirement is quantitative, developers know clearly whether they are passing or failing compliance, and why. Qualitative data gathered via surveys can then help engineering leaders understand whether particular obstacles are preventing compliance.
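
As a small illustration of tying a standard to a measurement, the sketch below turns “services should be well tested” into a concrete pass/fail check that also tells the developer how far off they are. The threshold and field names are hypothetical.

```python
# A written standard might say "services should be well tested"; a scorecard
# rule pins that to a number. The 80% threshold here is illustrative.
COVERAGE_THRESHOLD = 80.0  # percent

def check_coverage(entity: dict) -> dict:
    """Return a pass/fail verdict plus the context a developer needs to act."""
    coverage = entity.get("test_coverage", 0.0)
    return {
        "rule": f"Test coverage >= {COVERAGE_THRESHOLD}%",
        "passing": coverage >= COVERAGE_THRESHOLD,
        "actual": coverage,
        "gap": max(0.0, COVERAGE_THRESHOLD - coverage),
    }

print(check_coverage({"name": "payments", "test_coverage": 72.5}))
# {'rule': 'Test coverage >= 80.0%', 'passing': False, 'actual': 72.5, 'gap': 7.5}
```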

3. Enhancing visibility and prioritization

By continuously tracking service scores, scorecards highlight areas needing attention before they become critical. Automated alerts can notify teams when a service falls below thresholds—such as uptime dropping under 99%—enabling proactive issue resolution and ensuring critical improvements are addressed first.
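
A threshold alert like the uptime example could be implemented roughly as below. The service data is made up, and a real portal would route the alerts to Slack, email, or a ticketing system rather than printing them.

```python
UPTIME_THRESHOLD = 99.0  # percent, matching the example above

def check_thresholds(services: list[dict]) -> list[str]:
    """Collect an alert message for every service below the uptime threshold."""
    alerts = []
    for svc in services:
        if svc["uptime"] < UPTIME_THRESHOLD:
            alerts.append(
                f"ALERT: {svc['name']} uptime {svc['uptime']:.2f}% "
                f"is below the {UPTIME_THRESHOLD}% threshold"
            )
    return alerts

services = [
    {"name": "checkout", "uptime": 99.95},
    {"name": "search", "uptime": 98.70},
]
for message in check_thresholds(services):
    print(message)  # in practice, routed to the team's chosen channel
```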

4. Driving accountability and quality improvements

Scorecards assign ownership to specific teams, ensuring responsibility for maintaining standards. Engineering leaders can group scorecards under strategic initiatives, aligning team efforts toward coordinated improvements in focus areas and fostering continuous enhancement.

{{cta_survey}}

Types of scorecards

Below are some of the most common types of scorecards we see our customers use.

Operational readiness scorecards: These scorecards check whether services are production-ready by looking at factors like ownership, documentation, runbooks, and infrastructure setup. 

At Port, we use readiness scorecards to ensure services meet standards before they are launched or pushed to production. For example, you can require that a runbook, alert setup, and monitored uptime are created for all of your top production services.

Documentation scorecards: Libertex uses scorecards to improve their documentation standards. After gathering feedback from developers about their pain points with writing and maintaining documentation, they implemented scorecards to track documentation compliance across their systems. They categorized compliance into tiers—gold, silver, and bronze—allowing them to see where improvements were needed. 

Service maturity scorecards: Service maturity scorecards grade factors like code coverage, versioning, README quality, and other code health metrics. For example, Port tracks service maturity with a scorecard that checks for ownership, test coverage, documentation, and more.

Operational maturity scorecards: For services already in production, these scorecards measure operational health. Example metrics include test coverage, on-call activity, mean time to recovery, and deployment frequency.

DORA metrics scorecards: Scorecards can track DORA metrics like deployment frequency, lead time for changes, and change failure rate. This provides insight into engineering efficiency and quality, helping you understand your team’s effort-to-output ratio per deployment.
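
For illustration, the sketch below computes deployment frequency, average lead time, and change failure rate from raw deployment records. The records and time window are invented; a portal would pull this data from CI/CD and incident tooling.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when each change was committed and deployed
deployments = [
    {"committed": datetime(2025, 3, 1, 9),  "deployed": datetime(2025, 3, 3, 14), "failed": False},
    {"committed": datetime(2025, 3, 2, 11), "deployed": datetime(2025, 3, 4, 10), "failed": True},
    {"committed": datetime(2025, 3, 5, 8),  "deployed": datetime(2025, 3, 6, 16), "failed": False},
]

window_days = 7
deployment_frequency = len(deployments) / window_days  # deploys per day
lead_times = [d["deployed"] - d["committed"] for d in deployments]
average_lead_time = sum(lead_times, timedelta()) / len(lead_times)
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Average lead time:    {average_lead_time}")
print(f"Change failure rate:  {change_failure_rate:.0%}")
```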

Space International uses scorecards to gain better visibility into their DORA metrics and engineering performance. While they were already deploying multiple times per day with a lead time of five days, they needed a clearer picture of trends and bottlenecks. They decided to use Port to visualize historical data through line charts and dashboards. This data was then used to generate scorecards for each team, allowing them to track and improve their own performance over time. Each team can access its own dashboard daily, making it easier to monitor trends and take action.

Migration scorecards: Migration scorecards provide visibility into progress, blocking issues, and other key metrics as you migrate services to new platforms. For example, you can use scorecards to track your completion percentage and stalled tasks during a move to Kubernetes.

Health scorecards: Health scorecards give you high-level visibility into the overall status of your running services, often using a simple red/yellow/green scoring system. For example, services with no errors or incidents in the past week are green.

Code quality scorecards: Code quality scorecards analyze code on dimensions like test coverage, readability, extensibility, and reproducibility. For instance, a code quality scorecard can track pylint scores over time.

Cloud cost scorecards: Scorecards focused on cloud cost management (FinOps) track metrics like spend by team and time-to-live for cloud resources. Scorecards make it easy to identify any unused S3 buckets or overprovisioned resources and can show you where to take action to right-size resources.

Security scorecards: Security scorecards monitor for misconfigurations, unprotected resources, provisioning issues, and other risks. For instance, you can flag unencrypted RDS instances or insecure bucket policies.

Microservice scorecards

Microservice scorecards help you monitor various aspects of a microservice, such as security, reliability, documentation, and production readiness. For example, a production readiness scorecard might require a service to:

  • Be linked to a business domain
  • Have a defined on-call team
  • Use a supported programming language
  • Be updated within the last year
  • Have minimal open incidents
  • Pass SonarQube code scans

By setting thresholds and assigning scores, teams can enforce quality standards and even automate actions—such as preventing the deployment of a service that does not meet production readiness criteria.
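
A minimal sketch of such a deployment gate, assuming hypothetical catalog fields that mirror the checklist above:

```python
import sys
from datetime import datetime, timedelta

# Each check mirrors one requirement from the list above; fields are illustrative.
READINESS_RULES = {
    "linked to a business domain": lambda s: s.get("domain") is not None,
    "has a defined on-call team": lambda s: bool(s.get("oncall_team")),
    "uses a supported language": lambda s: s.get("language") in {"go", "java", "python"},
    "updated within the last year": lambda s: datetime.now() - s["last_updated"] < timedelta(days=365),
    "minimal open incidents": lambda s: s.get("open_incidents", 0) <= 2,
    "passes SonarQube scans": lambda s: s.get("sonarqube_passed", False),
}

def gate_deployment(service: dict) -> None:
    """Evaluate the readiness rules and block the deploy if any fail."""
    failures = [name for name, check in READINESS_RULES.items() if not check(service)]
    if failures:
        print(f"Deployment blocked for {service['name']}; failing rules:")
        for name in failures:
            print(f"  - {name}")
        sys.exit(1)
    print(f"{service['name']} is production-ready; proceeding with deployment.")

gate_deployment({
    "name": "orders",
    "domain": "commerce",
    "oncall_team": "team-orders",
    "language": "go",
    "last_updated": datetime.now() - timedelta(days=30),
    "open_incidents": 1,
    "sonarqube_passed": True,
})
```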

Running service scorecards

Once a service is live, it’s crucial to track its health continuously. Running service scorecards monitor metrics such as availability, active issues, monitoring coverage, throughput, and error rates. Internal developer portals make it easy to track running services in context, ensuring that teams always have up-to-date insights into service performance.

API scorecards

APIs are critical assets used both internally and externally, and their quality and reliability directly impact developers and consumers. API scorecards help assess compliance with discoverability, standardization, and security requirements. For example, an API might need to:

  • Have clear documentation
  • Follow a standardized naming convention
  • Implement security best practices such as authentication and rate limiting

By using scorecards, teams can quickly determine whether an API meets quality standards before adoption.
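
As an illustration, the sketch below scores an API against the three requirements listed above. The metadata fields and rule logic are assumptions made for the example.

```python
# Hypothetical API metadata; the rules mirror the requirements listed above.
API_RULES = {
    "has documentation": lambda a: bool(a.get("docs_url")),
    "follows naming convention": lambda a: a["name"].islower() and "-" in a["name"],
    "requires authentication": lambda a: a.get("auth") in {"oauth2", "api_key"},
    "enforces rate limiting": lambda a: a.get("rate_limit_per_minute", 0) > 0,
}

def api_score(api: dict) -> float:
    """Print each rule's verdict and return the fraction of rules passing."""
    results = {name: check(api) for name, check in API_RULES.items()}
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}  {name}")
    return sum(results.values()) / len(results)

score = api_score({
    "name": "billing-api",
    "docs_url": "https://example.internal/docs/billing",
    "auth": "oauth2",
    "rate_limit_per_minute": 0,  # no rate limiting, so one rule fails
})
print(f"Score: {score:.0%}")  # -> 75%
```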

What are initiatives in Port?

An initiative in an internal developer portal is a strategic project that aligns developer workflows and activities with business goals and outcomes.

Initiatives are concerted, organized efforts to improve engineering quality along a specific dimension. They contain collections of related scorecards representing priority areas, such as improving reliability, security, velocity, or visibility, and drive developers to adopt practices and tooling that support those goals. For example, an initiative to improve reliability may contain scorecards for crash-free releases, mean time to recovery, and end-to-end testing coverage.

How do initiatives and scorecards work together?

Initiatives differ from individual scorecards in both their goals and scope: 

  • Scorecards allow teams to benchmark themselves and get guidance on incremental steps to improve their scores. Scorecards may track automation, testing, monitoring, documentation, and other best practices.
  • Initiatives align developer workflows to business KPIs. An initiative to accelerate your release velocity could track your deployment frequency, lead time, change failure rate, and other metrics, all in one place.
  • Initiatives connect dispersed efforts around a common cause. Developers have a clear, purposeful path to achieve a goal rather than perform ad-hoc tasks that lack clear meaning. Platform teams streamline tooling and workflows to support the initiative.
  • By rallying the organization around targeted initiatives, internal developer portals can drive lasting cultural change in your engineering organization, not just one-off optimizations. Initiatives create clarity, urgency and focus.

Ultimately, initiatives represent strategic programs to achieve business outcomes by guiding developers, project teams, and platform organizations towards practices and behaviors that matter most. 

If you’re launching a specific initiative, you can create an initiative dashboard like the one below:

A dashboard in Port

How to use engineering scorecards in practice

Organizations use scorecards and initiatives in a number of ways. 

As an automated check

Using the software catalog, users can automatically check entities against a set of rules. For instance, does a repository have a CI pipeline? Does a service have an on-call rotation?
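
A simplified sketch of what these automated checks could look like, with hypothetical catalog entries and per-kind rules:

```python
# Hypothetical software catalog entries; a portal would sync these from
# Git providers, CI/CD, and on-call tools.
catalog = [
    {"name": "repo-a", "kind": "repository", "has_pipeline": True},
    {"name": "repo-b", "kind": "repository", "has_pipeline": False},
    {"name": "svc-auth", "kind": "service", "oncall": "team-identity"},
    {"name": "svc-email", "kind": "service", "oncall": None},
]

CHECKS = {
    "repository": [("has a CI pipeline", lambda e: e["has_pipeline"])],
    "service": [("has an on-call rotation", lambda e: e["oncall"] is not None)],
}

for entity in catalog:
    for rule_name, check in CHECKS.get(entity["kind"], []):
        status = "PASS" if check(entity) else "FAIL"
        print(f"{status}  {entity['kind']}/{entity['name']}: {rule_name}")
```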

Each scorecard has an owner (typically a manager) who can ensure all requirements are being met across all teams. For example, the security team lead is in charge of the security scorecard.

Managers can visualize compliance with scorecards in a dashboard that can be grouped by teams.

A dashboard for Services standards in Port

They can also monitor their scorecard distribution for an at-a-glance view of teams’ compliance.

A visual representation of scorecard compliance in Port

As a periodic review of initiatives

Managers should periodically review scorecards to check their teams’ progress toward goals and identify areas that need attention. For example, a CTO might review code quality scorecards across teams quarterly.

Managers can also look out for patterns by using trend lines.

A production readiness dashboard

As alerts and automation triggers

  • Users can be alerted via their chosen communication channel, such as Slack, when preset thresholds are crossed. For example, a low operational readiness score could trigger an email to the service owner.
  • Alerts can kick off automated workflows to resolve issues surfaced by scorecards. At Port, we’ve built workflows to auto-assign tasks to managers when scores for their services drop below those thresholds.
  • CI/CD pipelines can check the relevant scorecard (e.g., code coverage, security policy adherence) for the service being deployed and stop the deployment if the scorecard tier is too low, as sketched below.
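
Here is a minimal sketch of such a CI gate. The portal URL, response shape, and tier names are hypothetical rather than a real Port API contract; the point is the pattern of querying the current tier and failing the build when it falls short.

```python
import json
import sys
import urllib.request

PORTAL_URL = "https://portal.example.internal/api/scorecards/production-readiness"
REQUIRED_TIER = "silver"
TIER_ORDER = ["none", "bronze", "silver", "gold"]

def current_tier(service: str) -> str:
    """Fetch the service's scorecard tier from the (hypothetical) portal API."""
    with urllib.request.urlopen(f"{PORTAL_URL}?service={service}") as resp:
        return json.load(resp)["tier"]

def main(service: str) -> None:
    tier = current_tier(service)
    if TIER_ORDER.index(tier) < TIER_ORDER.index(REQUIRED_TIER):
        print(f"{service} is at tier '{tier}' but '{REQUIRED_TIER}' is required.")
        sys.exit(1)  # a nonzero exit code stops the pipeline
    print(f"{service} meets the '{REQUIRED_TIER}' bar; continuing the deployment.")

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else "checkout")
```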

Distributing an initiative across multiple teams

If a security leader notices a large number of critical cloud security alerts that need to be fixed, they can create an initiative to ensure that no service has a critical cloud security issue by the end of the quarter, and apply it to all relevant teams at once.

An action item can be created automatically for each team with the relevant due date, and a description can be included in the initiative dashboard to communicate the importance of the issue. The security leader can then use their cloud security dashboard within the portal to track the initiative and check progress per team and domain.
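
A sketch of fanning an initiative out as per-team action items. The team list, task shape, and creation step are illustrative; in practice the portal would create tickets or portal tasks.

```python
from datetime import date

teams = ["payments", "identity", "search"]  # hypothetical teams in scope
initiative = {
    "title": "No critical cloud security issues",
    "due": date(2025, 6, 30),  # end of quarter
    "description": "Resolve every critical cloud security alert on services you own.",
}

action_items = [
    {
        "team": team,
        "title": f"[{initiative['title']}] {team}",
        "due": initiative["due"].isoformat(),
        "description": initiative["description"],
    }
    for team in teams
]

for item in action_items:
    # Stand-in for creating a ticket or portal task per team
    print(f"Created action item for {item['team']}, due {item['due']}")
```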

Making exceptions for users and teams

There are times when teams can’t meet certain requirements, and with good reason. For example, an update may not be relevant for one team, there may not be enough time left in the quarter to meet a specific requirement, or there may be no available fix for a vulnerability. In these cases, a team can request an extension or an exception from the team that established the requirement. Each scorecard can have its own approval chain, ensuring that the right stakeholders are involved.

What an engineering scorecard looks like

A microservice scorecard can monitor documentation, security, reliability, production readiness, health, and more. 

How to use scorecards in a portal

  1. Define quality and standards
    • Identify software assets (e.g., services, APIs, applications) to monitor.
    • Align on quality expectations and a “definition of done”.
  2. Configure checks
    • Set up automated checks and metrics (e.g., code coverage for a code quality scorecard).
  3. Establish thresholds
    • Define thresholds for status levels (e.g., bronze/silver/gold, or best/at risk/critical).
    • Example: if code coverage drops below a set percentage, its status changes (see the sketch after this list).
  4. Create dashboards
    • Group scorecard data visually to track compliance and identify gaps.
  5. Automate and act
    • Trigger actions when thresholds are breached (e.g., create Jira tickets, send Slack alerts).
    • Establish escalation processes and categorize services into tiers.
  6. Continuously improve
    • Regularly review and refine scorecard metrics and thresholds.
    • Adjust classification levels and use initiatives to drive change.
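
As a worked example of step 3, the sketch below maps a measured value (test coverage) to a status level. The cutoffs are illustrative; each team would set its own.

```python
# Status thresholds, checked from best to worst; the cutoffs are made up.
THRESHOLDS = [
    (80.0, "best"),
    (60.0, "at risk"),
    (0.0, "critical"),
]

def coverage_status(coverage: float) -> str:
    """Return the first status whose minimum coverage is met."""
    for minimum, status in THRESHOLDS:
        if coverage >= minimum:
            return status
    return "critical"

for coverage in (92.0, 71.5, 40.0):
    print(f"{coverage:5.1f}% coverage -> {coverage_status(coverage)}")
# 92.0% -> best, 71.5% -> at risk, 40.0% -> critical
```
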
Further Reading:
Read: Why "running service" should be part of the data model in your internal developer portal
{{cta_1}}
