How to upgrade your start-up to an enterprise?

September 12, 2022

Ready to start?

Let’s start with a few introductions. Welcome to Port.

After years of pain working in siloed, chaotic Developer teams and infrastructures, we created the solution we should’ve had all along. Port is a Developer Portal that brings everyone together. It serves as a one-stop-shop for engineering teams to get a complete view of their environment. 

We’ve interviewed over 150 companies with different backgrounds and profiles to learn how they handled DevOps. Combining those insights with our own lived experience, we arrived at an idea that would solve the pain Devs around the world were experiencing. And so, Port was born.

{{cta}}

From humble beginnings

Port started as a POC back in 2021. This POC, its codebase, and infrastructure were our space to improve and polish the original idea. Its increasing capabilities brought in our first testers, clients, and, most importantly, our design partners. 

Design partners help refine the vision of the product. They are critical in the early stages of product development, providing feedback on existing features, telling you what else they would like to see, and keeping you on track to implement these changes. Essentially, our design partners were our reference point for market requirements and needs.

In return, design partners get to impact the product roadmap and prioritize certain features for development. They also have a dedicated team working on a solution that’s closely tailored to their needs and requirements—a win-win for everyone.

We designed our POC as a starting point, not as the foundation of our product for years to come. Once we’d nailed down our Product-Market Fit and spotted a tangible gap in the market, we knew we’d need a new architecture: one that was well thought out and could serve us in a reliable and scalable way.

The challenges of transitioning to a new architecture

We began to rewrite the system, incorporating some fairly significant changes.

Among the changes we made:

  • Changed our core platform language from Python to TypeScript
  • Moved from a polyrepo pattern to a standardized monorepo (more on that in a future blog post!)
  • Migrated from MongoDB to Redis as our main datastore
  • Invested heavily in a brand new testing infrastructure using Jest for both the frontend and the backend
  • Rewrote our documentation using Docusaurus 2
  • Wrote standard, generic workflows for GitHub Actions as our primary CI/CD

Note: None of the tools, platforms, or languages we had previously used had caused us any issues; we simply did our research and decided that transitioning to a new set of tooling would allow us to move faster and deliver a better product to our customers. Stay tuned for future blog posts explaining which products we are using and how they help us super-power our platform.

Of course, this is just a brief outline of the changes we made; keep reading to understand how all of this comes together.

In addition, every feature from the original POC had to be accounted for and, in some cases, modified to fit into the matured realization of our vision.

All the while, the original POC was still alive and serving existing customers as a product. We needed to balance our time between fixing bugs, providing customer support for the original platform, and writing our new system. To streamline our workload, we considered each bug/task/issue and whether it was worth developing for the old system or just implementing in the new one. A delicate balance to achieve when existing client satisfaction is mission-critical.

A change in infrastructure

After that came the infrastructure changes. We use AWS for our cloud infrastructure and already had a working deployment for our POC. The production-ready environment would be similar to the original but have a greater capacity to scale, ready for future growth.

We went with two main infrastructure architectures: 

  • For the frontend (see the sketch after this list):
      • S3 - for file storage and hosting
      • CloudFront - for CDN services and efficient file serving
      • Route53 - for friendly and recognizable URLs
  • For the backend:
      • Elastic Container Registry (ECR) - for container image storage
      • AWS App Runner - for a hosted, managed, and scalable container environment that gives us speed, performance, and flexibility

Note: We had previously used AWS Lambda for the backend of our platform, but decided that we needed more control over our deployed image - AWS App Runner gives us exactly that.
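
To make the frontend side of that architecture a little more concrete, here is a minimal sketch of what an S3 + CloudFront + Route53 stack can look like in AWS CDK (TypeScript). This is illustrative only - the domain, hosted zone ID, and construct names are placeholders, and we are not showing our actual infrastructure code.

import * as cdk from 'aws-cdk-lib';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as origins from 'aws-cdk-lib/aws-cloudfront-origins';
import * as route53 from 'aws-cdk-lib/aws-route53';
import * as targets from 'aws-cdk-lib/aws-route53-targets';

export class FrontendStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // S3 bucket holding the compiled frontend assets
    const siteBucket = new s3.Bucket(this, 'SiteBucket', {
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
    });

    // CloudFront distribution serving those assets through a CDN
    const distribution = new cloudfront.Distribution(this, 'SiteDistribution', {
      defaultBehavior: { origin: new origins.S3Origin(siteBucket) },
      defaultRootObject: 'index.html',
    });

    // Route53 alias record pointing a friendly URL at the distribution
    // (placeholder hosted zone and domain)
    const zone = route53.HostedZone.fromHostedZoneAttributes(this, 'Zone', {
      hostedZoneId: 'Z0000000000000',
      zoneName: 'example.com',
    });
    new route53.ARecord(this, 'SiteAlias', {
      zone,
      recordName: 'app',
      target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
    });
  }
}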

For our datastore, we decided on Redis Cloud. As one of the best-known in-memory data stores on the market, we knew it would give us the level of performance needed for a world-class platform. It is also a very versatile platform - an important quality for a fast-moving startup - combining RedisGraph, RedisJSON, and RediSearch all in one place.
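
As a rough illustration of how RedisJSON and RediSearch complement each other (a hypothetical sketch using the node-redis client, not our actual data model): entities can be stored as JSON documents and queried back through a search index.

import { createClient, SchemaFieldTypes } from 'redis';

async function main() {
  // Placeholder connection URL for a Redis Cloud instance
  const client = createClient({ url: 'redis://localhost:6379' });
  await client.connect();

  // Create a RediSearch index over JSON documents prefixed with "entity:"
  await client.ft.create(
    'idx:entities',
    {
      '$.title': { type: SchemaFieldTypes.TEXT, AS: 'title' },
      '$.type': { type: SchemaFieldTypes.TAG, AS: 'type' },
    },
    { ON: 'JSON', PREFIX: 'entity:' }
  );

  // Store an entity as a RedisJSON document
  await client.json.set('entity:cart-service', '$', {
    title: 'Cart Service',
    type: 'service',
  });

  // Query it back through the index
  const results = await client.ft.search('idx:entities', '@type:{service}');
  console.log(results.documents);

  await client.quit();
}

main().catch(console.error);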

Testing and documentation got a big makeover. New, automated pipelines guarantee that our customers always receive fully functional features accompanied by clear, up-to-date docs. Tests are now based on Jest, Docker Compose, and GitHub workflows, while documentation is built with ReDoc and Docusaurus and deployed using AWS Amplify.
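
To give a flavor of what that setup enables (a purely hypothetical example - the endpoints, ports, and status codes are made up), an integration test can assume Docker Compose has already brought the service up, for instance inside a GitHub Actions job, and exercise it end to end with Jest:

// entities.integration.test.ts
// Assumes `docker compose up` has started the API on localhost:3000
// and Node 18+, so that the global fetch API is available.
describe('entities API', () => {
  it('creates an entity and reads it back', async () => {
    const created = await fetch('http://localhost:3000/entities', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ identifier: 'cart-service', title: 'Cart Service' }),
    });
    expect(created.status).toBe(201);

    const fetched = await fetch('http://localhost:3000/entities/cart-service');
    const body = await fetched.json();
    expect(body.title).toBe('Cart Service');
  });
});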

The final flourish was to invest heavily in GitHub workflows for quick and easy deployments. These workflows are under the Developer’s control: they choose when a new version of the code goes live - no DevOps assistance needed. Remember that our mission is to make Developers happy - this is the level of power and independence they can get with Port.
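
In practice, "under the Developer’s control" means a deployment is just a workflow run the developer triggers themselves. As a sketch (the repository and a workflow_dispatch-enabled deploy.yml are made-up examples), a developer or script can kick one off through the GitHub API using Octokit:

import { Octokit } from '@octokit/rest';

async function triggerDeploy(version: string) {
  const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

  // Trigger the (hypothetical) deploy.yml workflow on the main branch
  await octokit.rest.actions.createWorkflowDispatch({
    owner: 'my-org',
    repo: 'my-service',
    workflow_id: 'deploy.yml',
    ref: 'main',
    inputs: { version },
  });
}

triggerDeploy('1.4.2').catch(console.error);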

Speaking of which, part of our internal integration process for the new version included using Port ourselves: new deployments of the different microservices are reported back to the system, so every developer can tell exactly what is deployed and where.
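
As a rough illustration of that reporting step, a post-deploy job could upsert an entity through Port’s REST API once a new version goes live. The blueprint and property names below are hypothetical, and the exact endpoint and token details should be checked against Port’s API documentation.

// report-deployment.ts - hypothetical post-deploy step
const PORT_API = 'https://api.getport.io/v1';

async function reportDeployment(service: string, version: string, environment: string) {
  // Exchange client credentials for an access token
  const auth = await fetch(`${PORT_API}/auth/access_token`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      clientId: process.env.PORT_CLIENT_ID,
      clientSecret: process.env.PORT_CLIENT_SECRET,
    }),
  });
  const { accessToken } = await auth.json();

  // Upsert a deployment entity under a hypothetical "deployment" blueprint
  await fetch(`${PORT_API}/blueprints/deployment/entities?upsert=true`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      identifier: `${service}-${environment}`,
      title: `${service} (${environment})`,
      properties: { version, environment },
    }),
  });
}

reportDeployment('cart-service', '1.4.2', 'production').catch(console.error);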

After some intense and motivating development sprints, we successfully moved to a stable production environment. One that could serve our growing customer base and deliver a better product faster.

{{ebook}}

Migration time

Time to put the pieces together. We used a staging environment to deploy all the new code, perform integration testing, and run predefined test scenarios - an ongoing, repeating process. This meant we could keep moving forward with new feature development as existing features were being validated. Then finally, the whole company got onto the new system to put it through its paces - precisely what our customers would do.

There were just two more steps to the finish line: data migration and customer migration. 

For data migration, we developed a script to take data from our old MongoDB, convert it into our new data format, and ingest it into our new Redis. To be certain that no data got lost in the process, we also added a Kafka cluster to store all intermediate data and ensure 100% data consistency between the old system and the new one. Customers are always working with our system, new data is constantly being ingested, and Port is used as a source of truth, so data integrity, reliability, and consistency couldn’t be overlooked.

This is a simple but critical process that had to be validated. Maintaining customer trust is vital, and missing data would undermine confidence in the new system. This couldn’t happen.
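
The migration script itself isn’t shared in this post, but its general shape is straightforward. Here is a simplified sketch - the collection names, key format, and field mapping are hypothetical, and the Kafka intermediate step is omitted for brevity:

import { MongoClient } from 'mongodb';
import { createClient } from 'redis';

async function migrate() {
  // Source: the old MongoDB datastore (placeholder connection strings)
  const mongo = new MongoClient('mongodb://localhost:27017');
  await mongo.connect();
  const entities = mongo.db('port').collection('entities');

  // Target: the new Redis datastore
  const redis = createClient({ url: 'redis://localhost:6379' });
  await redis.connect();

  // Stream every document, convert it to the new format, and write it as JSON
  for await (const doc of entities.find()) {
    const entity = {
      identifier: doc._id.toString(),
      title: doc.name,            // hypothetical old field name
      properties: doc.props ?? {} // hypothetical old field name
    };
    await redis.json.set(`entity:${entity.identifier}`, '$', entity);
  }

  await mongo.close();
  await redis.quit();
}

migrate().catch(console.error);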

Once we were sure the script was working as intended, it was “go time” - otherwise known as customer migration. We scheduled the move to the new system with our customers, sharing new URLs to our new deployments and rerouting existing traffic to the new infrastructure. 

Cue wild celebrations! 

{{cta_1}}

The (first) finish line

Now that customers had the new system, feedback inevitably started flowing in. Slight fixes were needed, and there will be many more features to introduce before we achieve our vision. 

As I mentioned, we’re serious about making Developers happier. So we’re continuing to give Developers and DevOps teams the best Developer Platform they could hope for. One that offers them observability, control, monitoring, and execution in a single, convenient platform: Port.

And that’s how we took Port from POC to Enterprise-Grade. We hope you’ll join us as we continue on this journey!

{{cta_1}}
