Kubernetes’ popularity stemmed from the increased usage of container technologies, which proved the perfect host for microservices - so much so that applications now comprise hundreds or even thousands of containers. But managing these containers across multiple environments using scripts and home-grown tools can be very challenging. In short, open source container orchestration tools like Kubernetes provide a way to manage this complexity.
However, tackling this complexity comes at a cost of its own: Kubernetes requires knowledge that takes time to acquire. And considering developers already have an increasing number of responsibilities (with no sign of this slowing down anytime soon), it’s unfair to expect them to learn the intricacies of Kubernetes - or to assume they can create, deploy and manage applications on Kubernetes without making costly errors.
Kubernetes solves a lot of infrastructure complexity, but it has also increased the cognitive load on developers.
Some of the challenges developers face include:
- Having to understand more about the Kubernetes ecosystem - things like pods, nodes, clusters and namespaces.
- Having to learn how to use kubectl to interact with Kubernetes clusters.
- Having to write and manage Kubernetes manifests such as ConfigMaps using YAML files and Helm charts, which can be complicated (an illustrative example follows this list).
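To give a sense of the boilerplate involved, here is a minimal sketch of the kind of YAML a developer might otherwise have to write and maintain by hand; every name and value below is purely illustrative:

```yaml
# Illustrative only: a ConfigMap for a hypothetical "orders" service,
# which would normally sit alongside a Deployment, a Service and often
# Helm templating that the developer also has to manage.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config      # hypothetical service name
  namespace: orders        # kept out of the default namespace
data:
  LOG_LEVEL: "info"
  FEATURE_FLAGS: "new-checkout=false"
```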
It’s no wonder that developers find creating and managing applications on Kubernetes a tough gig, and many engineering organizations resort to telling their developers to file every Kubernetes-related question or request as a ticket on an ever-growing DevOps backlog.
So it seems as if the options for developers are:
- Learning everything about Kubernetes - which is unrealistic.
- Trying to work with Kubernetes without the required knowledge, and hoping that costly errors won’t be made.
- Waiting on DevOps tickets to be resolved - causing stress for both developers and DevOps, and slowing down the entire software development life cycle.
But there is a fourth option: abstracting away the complexity of Kubernetes for developers, while enabling them to carry on developing applications. An internal developer portal provides the basis for this.
Using a developer portal to abstract Kubernetes
Create, deploy and manage a K8s application using self-service
The underlying infrastructure shouldn’t matter to developers - whether it’s Kubernetes, ECS, Terraform or VMs - what matters are the actions they need to take when developing applications.
Using an internal developer portal, platform engineers can create self-service forms that enable Kubernetes self-service for developers, effectively creating a golden path while also ensuring that these actions adhere to standards. Crucially, developers only have to select from a number of options in the self-service form to perform the action they need - they don’t need knowledge of Kubernetes, or even of what runs on the backend. That is taken care of by platform engineers, and so Kubernetes is abstracted away from developers.
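As a rough sketch of what this might look like behind the scenes (the schema, field names and values below are hypothetical, not any specific portal’s API), a platform engineer could define a self-service action where developers only pick from pre-approved options:

```yaml
# Hypothetical self-service action definition, maintained by platform engineers.
action: scaffold-new-service
title: Scaffold a new application
inputs:
  language:
    type: string
    enum: [nodejs, python, go]        # developers choose from approved options
  team:
    type: string
    enum: [payments, orders, growth]
backend:
  type: ci-workflow                   # the Kubernetes details live here,
  workflow: scaffold-service.yaml     # hidden from the developer
```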
Self-service actions can be used to:
Scaffold a new application
If a developer wants to scaffold a new service, they simply click ‘scaffold a new application’, provide the relevant inputs in the self-service form, and watch as the action runs and the scaffolding process is reflected in the portal, including its status and logs. In the background (abstracted away from the developer), a payload that includes the user inputs and the relevant action metadata is sent to the desired CI workflow. A job is triggered and the user gets a continuous indication of its progress. Self-service drastically reduces the time it takes to scaffold a new application and cuts down on tickets.
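The payload itself can be quite small. As an illustration only (the field names are hypothetical, and the exact shape depends on the portal and the CI tool), it might look something like this:

```yaml
# Hypothetical payload sent from the portal to the CI workflow when the
# developer submits the "scaffold a new application" form.
action: scaffold-new-service
run_id: "42f1c9"                      # lets the portal report status and logs back
requested_by: "jane.doe@example.com"
inputs:
  service_name: "orders"
  language: "nodejs"
  team: "payments"
```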
Deploy
After creating new applications, developers usually have to go to DevOps and ask for a CI/CD pipeline to build and deploy their app. This can take a while for DevOps to provide - and it can still leave developers to piece the pipeline together. With a portal’s self-service, developers can easily deploy their apps to the cloud, and they can also add resources like databases in just a few clicks.
Scale
Sometimes, developers need to scale workloads in or out, for example when maintenance is required or in response to an unexpected condition. This can also be accomplished using developer self-service in the portal, as shown in the sketch below.
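Behind the button, the platform team decides what "scale" actually does. A minimal sketch, assuming the action simply adjusts a Deployment’s replica count, could boil down to a patch as small as this (names and values are hypothetical):

```yaml
# Illustrative strategic-merge patch applied by the automation behind the
# "scale" action, roughly equivalent to:
#   kubectl scale deployment/orders --replicas=5 -n orders
spec:
  replicas: 5            # value taken from the self-service form
```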
Portals that provide self-service actions speed up deployment velocity while creating a much better developer experience. What’s more, even if the organization wants to switch to a different technology, the change will be seamless for developers, meaning there’s no vendor lock-in.
The portal should then provide all the information developers need, including architecture diagrams, visibility into what was deployed and created, and more.
Better Kubernetes visibility and monitoring for developers using the software catalog
The portal’s foundation is a software catalog (otherwise known as a service catalog), which enables developers to see where their services are running and the health metrics behind them. While raw Kubernetes data can be overwhelming, the portal shows only the important details in a way that’s easy for developers to digest. Typical Kubernetes UIs are overloaded with data, and many developers don’t have the expertise to work with kubectl to get the information they need.
Once the portal is set up by platform engineers, developers can see their clusters, namespaces and workloads right in the portal. The software catalog also shows additional information, such as runtime details, images and their health status.
Platform engineers make developers’ lives easier by choosing which Kubernetes metadata is shown to developers. That way, they can create personalized views for different personas, so managers, SREs and developers each see a dashboard tailored to them.
Enforcing Kubernetes best practices and standards using the portal
One of the big challenges for engineering organizations is improving communication between developers and the SRE team responsible for managing production. By implementing scorecards, developers and the SRE team can assess whether a service meets the criteria for code quality and production readiness.
From a Kubernetes perspective, scorecards can be used:
- To check that containers are set up correctly, by validating container resource configurations such as memory requests and limits, and by ensuring that liveness and readiness probes are configured for all containers.
- To ensure that workloads aren’t deployed in the default Kubernetes namespace, preventing potential issues that may arise from interfering with system components.
- To ensure high availability by requiring a minimum number of replicas, so services keep running smoothly even if something goes wrong (an illustrative manifest that would pass these checks is sketched below).
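As an illustration, a Deployment that would pass checks like these could look roughly as follows (names, ports and thresholds are hypothetical):

```yaml
# Illustrative Deployment that satisfies the scorecard checks above:
# non-default namespace, resource requests and limits, probes, and
# more than one replica.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
  namespace: orders                  # not "default"
spec:
  replicas: 3                        # minimum replica count for availability
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0.0
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
```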
Concluding thoughts
While Kubernetes has transformed the way we manage and deploy microservices, it also introduces significant complexity that can overwhelm developers. By leveraging a developer portal, organizations can abstract away this complexity, allowing developers to focus on what they do best—building great applications. With self-service capabilities, simplified deployment processes, and enhanced visibility through a software catalog, a developer portal streamlines interactions with Kubernetes, reduces cognitive load, and speeds up the development lifecycle. Ultimately, the portal not only improves developer productivity but also ensures that best practices are enforced consistently across the organization, paving the way for software that is more reliable and scalable.
Book a demo right now to check out Port's developer portal yourself