Two fundamental goals of an internal developer portal are to foster a culture of engineering excellence and to encourage end-to-end developer ownership: a culture of “you build it, you own it.”
Scorecards, a core element of a portal, provide clear standards of quality and maturity that drive alignment across engineering teams – and a toolkit for driving a variety of initiatives, from continuous improvement to production readiness and beyond.
In this section, we’ll define scorecards, explain why they matter, provide examples of the most common types of scorecards, articulate how scorecards are used in practice, and offer a step-by-step “quickstart” guide for new portal users implementing scorecards.
What is a scorecard?
A scorecard is a way to measure and track the health and progress of each service and application within your software catalog. Scorecards establish metrics to grade production readiness, code quality, migration quality, operational performance, and more.
For example, a service maturity scorecard could track code coverage, ownership, and documentation for a critical customer-facing service. This provides visibility into whether practices are improving month-over-month.
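To make the idea concrete, here is a minimal sketch of a scorecard as a set of pass/fail rules evaluated against a catalog entity. This is not Port's actual API; the service fields, rule names, and thresholds are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    """A single pass/fail check against a service's properties."""
    name: str
    check: callable  # takes a service dict, returns True if the rule passes

@dataclass
class Scorecard:
    name: str
    rules: list

    def score(self, service: dict) -> float:
        """Return the fraction of rules the service passes, 0.0 to 1.0."""
        passed = sum(1 for rule in self.rules if rule.check(service))
        return passed / len(self.rules)

# Hypothetical service entity from a software catalog
service = {
    "owner": "payments-team",
    "coverage": 0.82,
    "has_readme": True,
}

maturity = Scorecard("Service maturity", [
    Rule("has an owner", lambda s: bool(s.get("owner"))),
    Rule("coverage >= 80%", lambda s: s.get("coverage", 0) >= 0.8),
    Rule("has a README", lambda s: s.get("has_readme", False)),
])

print(maturity.score(service))  # 1.0 - all three rules pass
```

Re-scoring the same service each month against the same rules is what makes month-over-month improvement visible.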
Scorecards drive alignment, prioritization, and quality
There are three main reasons organizations use scorecards:
- Organizational alignment: Scorecards allow you to set clear, established standards and baselines for code quality, documentation, operational performance, and other metrics. This ensures consistency across teams and services. For example, you may set a standard for minimum code coverage across all services.
- Alerting and prioritization: By monitoring service scores against thresholds, you can be alerted when a service drops below acceptable baseline levels. This allows you to proactively identify services that need attention before problems occur. Scorecards enable you to prioritize which services need help first based on their scores. For example, an alert could notify the owning team if a service's uptime drops below 99%. Such alerts can also be tied to automations.
- Driving quality improvements: Initiatives (see “Initiatives roll it up,” below) are groups of scorecards that all fit within a strategic focus area. Engineering leaders rely on initiatives to set organizational priorities – and invest team energy and focus in concerted improvements in a given area.
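The alerting pattern above can be sketched in a few lines. This is an illustration, not a portal feature; the service records and the 99% baseline are invented for the example:

```python
def check_thresholds(services, min_uptime=0.99):
    """Return alert messages for services whose uptime is below the baseline."""
    alerts = []
    for svc in services:
        if svc["uptime"] < min_uptime:
            alerts.append(f"ALERT: {svc['name']} uptime {svc['uptime']:.2%} "
                          f"is below the {min_uptime:.0%} baseline")
    return alerts

# Hypothetical catalog snapshot
services = [
    {"name": "checkout", "uptime": 0.999},
    {"name": "search", "uptime": 0.985},
]

for alert in check_thresholds(services):
    print(alert)
```

In practice the same check would feed a notification channel (email, Slack) or trigger an automation rather than print to the console.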
Capturing health and maturity throughout the SDLC
There are many types of scorecards that provide visibility into different aspects of your services and systems:
- Operational readiness: These scorecards check if services are production-ready by looking at factors like ownership, documentation, runbooks, and infrastructure setup. At Port, we use readiness scorecards to ensure services meet standards before launch. For example, requiring a runbook, alert setup, and monitored uptime.
- Service maturity: Service maturity scorecards grade factors like code coverage, versioning, README quality, and other code health metrics. For example, Port tracks service maturity with a scorecard that checks for ownership, test coverage, documentation, and more.
- Operational maturity: For services already in production, these scorecards measure operational health. Example metrics include test coverage, on-call activity, mean time to recovery, and deployment frequency.
- DORA metrics: Scorecards can track DORA metrics like deployment frequency, lead time for changes, and change fail rates. This provides insight into engineering efficiency and quality.
- Migrations: Migration scorecards provide visibility into progress, blocking issues, and other key metrics as you migrate services to new platforms. For example, tracking completion percentage and stalled tasks during a move to Kubernetes.
- Health: Health scorecards give high-level visibility into the overall status of services, often using a simple red/yellow/green scoring system. For example, services with no errors or incidents in the past week are green.
- Code quality: These scorecards analyze code on dimensions like test coverage, readability, extensibility, and reproducibility. For instance, tracking pylint scores over time.
- Cloud cost: Scorecards focused on cloud cost management (FinOps) track metrics like spend by team and time-to-live for cloud resources. For example, identifying unused S3 buckets or overprovisioned resources.
- Security: Security scorecards monitor for misconfigurations, unprotected resources, provisioning issues, and other risks. For instance, flagging unencrypted RDS instances or insecure bucket policies.
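As one worked example from the list above, two of the DORA metrics (deployment frequency and change failure rate) can be derived from a plain deployment log. The data layout and field names here are invented for illustration:

```python
from datetime import date

# Hypothetical deployment log for one service: (deploy day, succeeded?)
deployments = [
    (date(2024, 6, 3), True),
    (date(2024, 6, 5), True),
    (date(2024, 6, 7), False),
    (date(2024, 6, 10), True),
]

def dora_snapshot(deploys):
    """Compute deployment frequency (per week) and change failure rate."""
    total = len(deploys)
    failures = sum(1 for _, ok in deploys if not ok)
    span_days = (deploys[-1][0] - deploys[0][0]).days or 1
    return {
        "deploys_per_week": round(total / span_days * 7, 1),
        "change_failure_rate": failures / total,
    }

print(dora_snapshot(deployments))  # {'deploys_per_week': 4.0, 'change_failure_rate': 0.25}
```

A scorecard would then grade these numbers against tiers (e.g., daily deployments score higher than weekly ones).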
Initiatives roll it up
An "initiative" in the internal developer portal context refers to a strategic focus area that aligns developer workflows and activities towards business goals and outcomes.
Initiatives differ from individual scorecards in their goals and scope:
- Scorecards allow teams to benchmark themselves and get guidance on incremental steps to improve their scores. Scorecards may track automation, testing, monitoring, documentation, and other best practices.
- Initiatives align developer workflows to business KPIs. An "Accelerate Release Velocity" initiative could track deployment frequency, lead time, change failure rate, and other metrics.
- Initiatives connect dispersed efforts around a common cause. Developers have a purposeful path rather than ad-hoc tasks. Platform teams streamline tooling and workflows to support the initiative.
- By rallying the organization around targeted initiatives, internal developer portals drive lasting culture change, not just one-off optimizations. Initiatives create clarity, urgency and focus.
Ultimately, initiatives represent strategic programs to achieve business outcomes by guiding developers, project teams, and platform organizations towards practices and behaviors that matter most.
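Since an initiative groups several scorecards under one strategic goal, its progress can be summarized by rolling their scores up. A minimal sketch, with an averaging roll-up and scorecard names assumed for illustration:

```python
def initiative_progress(scorecard_scores: dict) -> float:
    """Roll up an initiative's progress as the average of its scorecard scores."""
    return sum(scorecard_scores.values()) / len(scorecard_scores)

# Hypothetical "Accelerate Release Velocity" initiative grouping three scorecards
velocity = {
    "deployment-frequency": 0.9,
    "lead-time": 0.6,
    "change-failure-rate": 0.75,
}

print(f"Initiative progress: {initiative_progress(velocity):.0%}")  # 75%
```

A real roll-up might weight scorecards by importance instead of averaging them equally; the choice depends on what the initiative is meant to optimize.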
Where the rubber meets the road
Organizations use scorecards in two ways:
As part of a periodic review of initiatives
Managers periodically review scorecards to check progress towards goals and identify areas that need attention (see below). For example, a CTO might review code quality scorecards across teams quarterly.
As alerts and automation triggers
- Alerts notify users via email, SMS, Slack, or other channels when thresholds are crossed. For example, a low operational readiness score could trigger an email to the service owner.
- Alerts can kick off automated workflows to resolve issues surfaced by scorecards. At Port, we've built workflows to auto-assign tasks to managers if scores for their services drop too low.
- CI/CD pipelines can check the relevant scorecard (e.g., code coverage, security policy adherence) for the service being deployed – and stop the deployment if the scorecard tier is too low.
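A deployment gate of this kind can be sketched as a small pipeline step. The tier names and the required tier are assumptions for the example, not fixed portal values:

```python
import sys

# Hypothetical scorecard tiers, ordered from lowest to highest
TIERS = ["Basic", "Bronze", "Silver", "Gold"]

def deployment_gate(service_tier: str, required_tier: str) -> bool:
    """Allow a deployment only if the service's scorecard tier is high enough."""
    return TIERS.index(service_tier) >= TIERS.index(required_tier)

# In a real CI job, a non-zero exit code stops the deployment
if not deployment_gate("Bronze", required_tier="Silver"):
    print("Deployment blocked: scorecard tier below Silver")
    # sys.exit(1)  # uncomment when running inside a pipeline
else:
    print("Deployment allowed")
```

The fetch of the service's current tier (here hard-coded as "Bronze") would come from the portal's API in a real pipeline.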
Create your own scorecard
Scorecards are created through a four-step process.
The final word
By providing standards, visibility, and automatic alerts, scorecards help engineering teams achieve consistency, improve practices, and proactively identify issues. They are an essential capability of portals like Port.
Further reading: Why "running service" should be part of the data model in your internal developer portal