Introduction
Given its role as a “single pane of glass,” it’s no surprise that an internal developer portal is only as valuable as the data it contains. A portal on its own offers developer self-service, insight into the SDLC, documentation, and more. But its value multiplies when integrated with cloud resources, Kubernetes, CI/CD data, and data coming from FinOps, AppSec, incident management, and similar tools.
By centralizing metadata from these tools into a unified catalog, developers and technical leaders gain a holistic picture of the development lifecycle and the status of services and applications. Integrations also broaden the range of developer self-service actions; for instance, a developer can execute a runbook when an incident occurs.
But how does one actually implement robust integrations to unite all these disparate data sources?
More data, more value
Out of the box, a portal provides core services for documentation and self-service. But the range of possible integrations is endless:
- CI/CD platforms to surface build statuses, deployments, and releases
- Issue/project trackers to link tickets and epics to codebases and services
- Monitoring systems to associate alerts, traces, and telemetry data
- Cloud infrastructure to visualize storage and functions (e.g., S3, lambdas)
- Internal custom and legacy systems that provide additional context
This requires a flexible integration framework that can extract metadata from these systems and accurately sync it to the portal’s searchable catalog in real time. More importantly, the metadata should be stored in one place, so that visualizations, reporting, and automation all work as they should.
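As a rough sketch of what such metadata extraction looks like, the snippet below maps a raw CI/CD build payload onto a unified catalog entity. The field names (`pipeline`, `status`, `commit_sha`) and the `CatalogEntity` shape are hypothetical; a real integration would map whatever schema the source system emits onto the portal’s own data model.

```python
from dataclasses import dataclass


@dataclass
class CatalogEntity:
    """A minimal catalog entity: one record per object, keyed by identifier."""
    identifier: str
    kind: str
    properties: dict


def build_to_entity(build: dict) -> CatalogEntity:
    """Map a raw CI/CD build payload onto the catalog's schema.

    The input keys used here are illustrative stand-ins for whatever
    the source CI/CD platform actually sends.
    """
    return CatalogEntity(
        identifier=f"build-{build['id']}",
        kind="ciBuild",
        properties={
            "pipeline": build.get("pipeline"),
            "status": build.get("status"),
            "commit": build.get("commit_sha"),
        },
    )


entity = build_to_entity(
    {"id": 42, "pipeline": "checkout-service", "status": "success", "commit_sha": "a1b2c3"}
)
print(entity.identifier)              # build-42
print(entity.properties["status"])    # success
```

Once every source is normalized into one entity shape like this, visualizations, reporting, and automation can all query the same store instead of each tool’s private schema.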
For example, when you integrate alerts into your software catalog, you get a single pane of glass for everything alerts-related, in-context within the relevant software catalog entities, complete with all the information you need (like the service or resource owner). Beyond the convenience of not needing to check multiple alert tools, the fact that alerts are in context significantly reduces the cognitive load on developers. Each alert is linked to its origin, such as a production issue tied to a specific service, and it can even be associated with day-2 operations that help resolve the underlying problem.
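To illustrate the enrichment step, here is a minimal sketch that attaches catalog context (owner, tier) to an incoming alert. The in-memory `CATALOG` dictionary and its fields are assumptions for the example; a real portal would resolve this lookup against its stored entities.

```python
# Hypothetical in-memory slice of the software catalog: service -> metadata.
CATALOG = {
    "payments-service": {"owner": "team-payments", "tier": "production"},
    "search-service": {"owner": "team-search", "tier": "staging"},
}


def enrich_alert(alert: dict) -> dict:
    """Attach catalog context (owner, tier) to a raw alert payload,
    so the alert appears next to the entity it belongs to rather
    than in a separate alerting tool."""
    context = CATALOG.get(alert.get("service"), {})
    return {
        **alert,
        "owner": context.get("owner", "unknown"),
        "tier": context.get("tier", "unknown"),
    }


alert = enrich_alert(
    {"service": "payments-service", "severity": "critical", "title": "High error rate"}
)
print(alert["owner"])  # team-payments
```

The enriched alert now carries everything a responder needs in one place, which is exactly the cognitive-load reduction described above.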
Real-time, flexible, and secure-by-design matter
Every internal developer portal must define its own approach to extensibility. But broadly, every framework must solve the following challenges in a reliable, performant, and secure manner:
- Real-time continuous sync - Data must be continuously streamed from sources to the software catalog to provide an accurate, up-to-date system of record rather than just periodic snapshots.
- Bi-directional sync - Beyond ingesting data, the portal should be able to propagate changes back to source systems after actions are taken, completing the loop. For example, the software catalog should get updated when I make changes in the source environment (e.g., if a pod in K8s is deleted, I need to make sure that the replica count is updated in my IDP). Equally, the software catalog should also get updated when I run a developer self-service action initiated from the IDP itself (e.g., provisioning an ephemeral environment).
- Reconciliation - The integration framework should handle reconciliation by treating the source system as the “ground truth” and seamlessly syncing additions, updates, and deletions. For example, if I want to display all the services from PagerDuty as part of the catalog, I also need an automatic process that removes a service from the portal’s catalog if that service no longer exists in PagerDuty (thus keeping the catalog synced with the data in PagerDuty, or any other integration).
- Secure by design - A robust integration framework should eliminate the need to share sensitive credentials or secrets, as well as avoid the requirement to whitelist IPs. Data filtering and security measures should be implemented on the customer's side, ensuring data privacy and protection.
- Flexible protocols - The framework should support common protocols like webhooks, APIs, and message queues to integrate with any source system.
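The reconciliation challenge above can be sketched as a simple diff: treat the source system as ground truth and compute what to create, update, and delete in the catalog. This is an illustrative, in-memory version under assumed dictionary shapes; a production framework would batch these operations and apply them through the portal’s API.

```python
def reconcile(source: dict[str, dict], catalog: dict[str, dict]) -> dict:
    """Diff the source system (ground truth) against the portal catalog.

    Returns the identifiers to create, update, and delete so the catalog
    converges on the source, including removal of entities that no longer
    exist upstream (e.g. a service deleted in PagerDuty).
    """
    to_create = [k for k in source if k not in catalog]
    to_update = [k for k in source if k in catalog and source[k] != catalog[k]]
    to_delete = [k for k in catalog if k not in source]
    return {"create": to_create, "update": to_update, "delete": to_delete}


plan = reconcile(
    source={"svc-a": {"owner": "alice"}, "svc-b": {"owner": "bob"}},
    catalog={"svc-a": {"owner": "alice"}, "svc-c": {"owner": "carol"}},
)
print(plan)  # {'create': ['svc-b'], 'update': [], 'delete': ['svc-c']}
```

Running this diff on every sync cycle (or on each webhook event) is what keeps the catalog from accumulating stale entries that no longer exist in the source.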
Why go with an open-source extensibility framework
While core integrations can be provided out-of-the-box, open sourcing an internal developer portal’s integration framework unlocks community extensibility.
Open source principles align seamlessly with the concept of portal extensibility. Open source fosters collaboration, encourages contributions, and empowers developers to customize and extend the platform according to their unique requirements. An open source integration framework offers several advantages:
- Community contributions: An open source framework invites developers from diverse backgrounds to contribute integrations that cater to a wide array of use cases.
- Flexibility: Open source allows for the creation of custom integrations tailored to specific needs, giving organizations the freedom to enhance their portal’s functionality.
- Rapid innovation: By sharing integrations as open source projects, the development community can collectively drive innovation and expand the capabilities of the portal.
Finally, transparent open source code enables greater trust and customization to meet each company’s security policies, rather than opaque closed solutions.
The final word
Extensibility is a core driver of the value of an internal developer portal. And an open-source approach to integrations – paired with a framework that prioritizes real-time bi-directional connectivity, reconciliation, and security – ensures that a portal can integrate with critical data sources in a flexible and scalable way.
Further Reading:
Why "running service" should be part of the data model in your internal developer portal