Introduction
If you want self-sufficient developers working in a “you build it, you own it” world, you need an internal developer portal. Developers need security information, in context, and AppSec teams can use the power of platform engineering to provide it.
This blog will discuss how internal developer portals help developers own AppSec by creating better context for dealing with vulnerabilities and misconfigurations.
Securing the entire software development life cycle, made easy for developers
Snyk’s State of Open Source Security report tells us that, on average, enterprises use more than 9 security tools. This places quite a cognitive load on developers and the AppSec teams that serve them.
There are many tools for AppSec. Some focus on the different software pillars, such as infrastructure, apps, containers, namespaces, pods, networks, etc. Others focus on the different development phases. Developer portals can show all this data in one place, reducing cognitive load, providing context and reducing developer dependency on AppSec.
With the plethora of security tools available, from Static Application Security Testing (SAST) to Dynamic Application Security Testing (DAST), it is challenging to rely on any one tool to cover all aspects of security. What’s more, the expertise around these tools is siloed, making it even more difficult to provide context for developers.
By consolidating and showcasing vulnerabilities and misconfigurations from various tools and stages of the development life cycle in one place, the internal developer portal becomes a centralized hub for security information. This gives developers a holistic view of potential risks, enabling them to understand the security state of a resource or microservice within a certain context and to address vulnerabilities and misconfigurations proactively.
Internal developer portals and AppSec
A core underlying tool of platform engineering, internal developer portals are built from several pillars that make vulnerability and misconfiguration data easy to track, unifying data from different tools and different stages of the software development lifecycle:
- The software catalog can be designed with a data model that shows all vulnerabilities and misconfigurations in context
- Developer self-service actions can be used (as day-2 actions) to remediate such issues
- Scorecards can provide a simple way to communicate security expectations and status
- Automations can be used to drive the resolution of issues
Let’s take a look at how this works in real life.
Using the power of blueprints for vulnerabilities and misconfigurations
A blueprint is a detailed plan, outline, or design that serves as a guide for creating or constructing something. In software development, blueprints help ensure consistency and efficiency in deploying and managing complex software systems and infrastructure. In the case of internal developer portals, blueprints are one of the single most important concepts to grasp, since their flexibility lies at the core of making good internal developer portals.
In Port, a blueprint, also known as a custom entity definition, serves as a data model that specifies the metadata linked to software catalog entities. Blueprints are the fundamental building blocks of Port: data is ingested according to blueprints and then appears as actual catalog entities. Blueprints are versatile enough to represent a wide range of assets in Port, including microservices, environments, packages, clusters, databases, and more. Once blueprints are defined, the platform engineer also defines the relationships between them. This is how dependencies and related entities are expressed, and it will prove very useful for vulnerabilities, misconfigurations and really anything else in the software catalog.
In Port, blueprints are how we define the data model for the software catalog and the internal developer portal. The main idea is that organizations have different engineering infrastructures, and a rigid data model won’t create a valuable software catalog. For instance, if you want to abstract Kubernetes for developers, you want to be able to define what they’ll see, and be free to show that data in any context needed. Here’s an example of a Kubernetes cluster blueprint, whose properties determine what is ingested into the catalog, creating catalog entities.
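As a rough sketch, such a blueprint might look something like the following; the identifier and property names here are illustrative assumptions for the example, not a canonical schema:

```json
{
  "identifier": "k8sCluster",
  "title": "Kubernetes Cluster",
  "icon": "Cluster",
  "schema": {
    "properties": {
      "version": {
        "type": "string",
        "title": "Kubernetes Version"
      },
      "cloudProvider": {
        "type": "string",
        "title": "Cloud Provider",
        "enum": ["AWS", "GCP", "Azure"]
      },
      "region": {
        "type": "string",
        "title": "Region"
      },
      "nodeCount": {
        "type": "number",
        "title": "Node Count"
      }
    },
    "required": []
  },
  "relations": {}
}
```

Whatever properties the platform engineer defines here are exactly what developers will see for each cluster entity in the catalog - no more, no less.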
This blog focuses on how to use vulnerability and misconfiguration blueprints, but there are other examples, too. One is managing packages in the internal developer portal, using a set of blueprints (packages, package versions, etc.). Here’s an example of how the software catalog shows package entities in Port’s demo environment. Another example is using blueprints for alert management, which can also work for AppSec issues.
Defining a vulnerability blueprint in the internal developer portal
By integrating vulnerabilities and misconfigurations into the software catalog in the internal developer portal, developers will be able to immediately understand the impact of security issues and how to resolve them. Being able to tell whether a vulnerability affects a service running in production or not, for instance, allows for precise and automated actions, such as sending notifications to specific Slack channels, opening relevant Jira issues, and triggering appropriate responses by the respective teams. Handling these issues in a unified way and in context fosters efficient collaboration between developers, security experts, and other stakeholders.
The flexibility of blueprints is what allows you to go beyond simplistic representations of “microservices” and “resources” and track, in context and in a meaningful way, what you really want to see (compare this to creating a vulnerability scorecard at the microservice level, discussed at the end of this post). In this case, let’s use a simple definition - all the vulnerabilities developers should care about - and see what we come up with in terms of a blueprint definition.
The vulnerability blueprint shown below is a representation of a generic vulnerability that can be sourced from various security tools. The beauty here is that it is one schema representing vulnerability properties coming from different tools.
Here it is (note that you also define self-service actions and scorecards in the blueprint):
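The sketch below is illustrative rather than the exact blueprint from the demo; the property names, enums and relation target are assumptions:

```json
{
  "identifier": "vulnerability",
  "title": "Vulnerability",
  "icon": "Alert",
  "schema": {
    "properties": {
      "severity": {
        "type": "string",
        "title": "Severity",
        "enum": ["Critical", "High", "Medium", "Low"]
      },
      "status": {
        "type": "string",
        "title": "Status",
        "enum": ["Open", "In Progress", "Resolved", "Dismissed"]
      },
      "source": {
        "type": "string",
        "title": "Scanning Tool"
      },
      "cveId": {
        "type": "string",
        "title": "CVE ID"
      },
      "description": {
        "type": "string",
        "title": "Description"
      },
      "link": {
        "type": "string",
        "format": "url",
        "title": "Link to Finding"
      }
    },
    "required": ["severity", "status"]
  },
  "relations": {
    "runningService": {
      "title": "Running Service",
      "target": "runningService",
      "many": false,
      "required": false
    }
  }
}
```

The point of a single schema like this is normalization: whether a finding comes from Snyk or Trivy, it lands in the catalog with the same severity, status and source fields.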
Let’s see what types of software catalog entities are created based on this blueprint.
The software catalog now contains vulnerability entities created according to the blueprint schema. Their sources are varied: Snyk, Trivy, Dependabot, Kics, SonarQube, StackHawk and more. To learn more, check out the vulnerability entities in Port’s live demo.
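For illustration, a single vulnerability entity mapped to the blueprint sketched above might look like this (the identifiers, CVE number and values are made up for the example):

```json
{
  "identifier": "vuln-snyk-12345",
  "title": "Prototype pollution in transitive dependency",
  "blueprint": "vulnerability",
  "properties": {
    "severity": "High",
    "status": "Open",
    "source": "Snyk",
    "cveId": "CVE-XXXX-XXXXX",
    "description": "Prototype pollution reachable via a transitive dependency",
    "link": "https://example.com/finding/12345"
  },
  "relations": {
    "runningService": "payment-service"
  }
}
```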
When blueprints are defined, we also set the relationships among them. This allows us to search and understand dependencies along the graph. In this case, scrolling down the entity page shows us the related entities, which, in turn, were also defined in blueprints:
- Package versions
- Running services (this blog explains the running service blueprint in detail)
- Developer environments
- Deployments
Misconfigurations
Ideally, misconfigurations would be tracked together with vulnerabilities, but in this case, let’s examine a sample misconfiguration blueprint on its own.
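Here’s an illustrative sketch of what such a blueprint could contain; as before, the property names and relation target are assumptions for the example:

```json
{
  "identifier": "misconfiguration",
  "title": "Misconfiguration",
  "icon": "Alert",
  "schema": {
    "properties": {
      "severity": {
        "type": "string",
        "title": "Severity",
        "enum": ["Critical", "High", "Medium", "Low"]
      },
      "policy": {
        "type": "string",
        "title": "Violated Policy"
      },
      "resourceType": {
        "type": "string",
        "title": "Resource Type"
      },
      "source": {
        "type": "string",
        "title": "Scanning Tool"
      },
      "status": {
        "type": "string",
        "title": "Status"
      }
    },
    "required": ["severity"]
  },
  "relations": {
    "workload": {
      "title": "Workload",
      "target": "workload",
      "many": false,
      "required": false
    }
  }
}
```

The relation to a workload is what later lets us ask whether a misconfigured resource is actually running in production.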
Here are some of the corresponding software catalog entities (you can check them out in Port’s demo).
Actions and automations
You may be reading this and nodding your head - great data model, great dependencies in context - but is this actionable?
The power of Port’s graph-based software catalog comes into play here, again. For example, for a misconfiguration on a specific Kubernetes workload, we can tell from the catalog whether this workload is in the production environment and who is responsible for it. This can drive an automated message to the relevant Slack channel, or open a Jira issue and automatically assign it to the relevant team.
There is a choice here. For example, the action “create a new Jira issue for each identified vulnerability” can be:
- Exposed as a developer self-service action, which requires the developer to make a decision in the internal developer portal and create the issue
- Implemented as an automation that creates the Jira ticket automatically, or uses the power of the graph in Port to tell where to send the alert in Slack (the right team, channel and/or developer)
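As a sketch of the first option, a day-2 self-service action for opening a Jira issue might be defined along these lines; the identifier, user inputs and webhook URL are illustrative assumptions, and the webhook endpoint that actually talks to Jira is left to your implementation:

```json
{
  "identifier": "create_jira_issue",
  "title": "Create Jira Issue",
  "description": "Open a Jira issue for this vulnerability and assign it to the owning team",
  "trigger": "DAY-2",
  "userInputs": {
    "properties": {
      "priority": {
        "type": "string",
        "title": "Priority",
        "enum": ["Highest", "High", "Medium", "Low"]
      }
    },
    "required": ["priority"]
  },
  "invocationMethod": {
    "type": "WEBHOOK",
    "url": "https://example.com/hooks/create-jira-issue"
  }
}
```

The automation variant would skip the human decision entirely, using the entity’s relations (vulnerability → running service → team) to route the ticket or Slack message to the right owner.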
Parting words
Ops and AppSec teams don’t need to be bottlenecks for developers. Representing vulnerabilities and misconfigurations in the internal developer portal lets developers understand them in one place, with the right data, context and permissions. They won’t need to juggle many tools - the single view in the internal developer portal will give them all they need.
Check out Port's pre-populated demo and see what it's all about. No email required.
Contact sales for a technical product walkthrough
Open a free Port account. No credit card required
Watch Port live coding videos - setting up an internal developer portal & platform