Enabling developer independence: delivering world-class documentation

November 8, 2022


Introduction

In this blog post, I will show you how we got our docs up and running, from zero to hero. I will also let you in on our internal discussions and considerations before choosing our documentation infrastructure.

By the end of this blog post, you will have all the information you need to build docs that look like ours: https://docs.getport.io (and yes, dark mode is supported, too).

In this blog post, we’re going to take an in-depth look at our documentation infrastructure: how we write, validate and ship new documentation, and how we keep the process as simple as possible so that everyone in the company can pitch in and help deliver the best documentation possible.

Let’s start by reviewing the different options that companies and products often face when choosing a documentation solution.

Choosing a docs solution

When companies, Open Source Software (OSS) libraries, and development teams need to choose a documentation solution, they usually face three options:

  1. Off-the-shelf products.
  2. Open Source documentation platforms.
  3. Writing a custom solution.

Each solution has its advantages and disadvantages, which I’ll briefly outline:

Off-the-shelf products - Notable examples include Atlassian’s Confluence and Notion. These solutions usually offer excellent availability and collaborative tools, and take away most of the need to manage the infrastructure, deployment, hosting and distribution of documentation. But because they follow strict templates and use a well-known design, they can be difficult to customize, which makes it hard to give the documentation a look that is unique and specific to your product.

Open Source documentation platforms - Notable examples include Docusaurus (which we use!) and Read the Docs. These solutions are usually highly customizable, look great and have the support of the open source community. They might require more work to get started with, and as with any open source project, if the maintainers stop working on it at some point, you could be forced to either transition to a different platform or continue maintaining it yourself.

Writing a custom solution - This approach offers the highest level of customizability and control, but it is also the hardest to implement and maintain. By developing a custom solution, you are in charge of everything, from the architecture to the content to the serving of the documentation. With this approach, the documentation is essentially an entire additional product, and that added overhead can be too high for companies and teams to justify.

Now that we are aware of the different options available to us, let’s see what the Port team ended up choosing and why.

Why Docusaurus?

In order to make documentation development as simple as possible, we chose to use Docusaurus as the framework for our documentation. Docusaurus is an open source site generation framework developed by Meta. It is highly customizable, fast, looks very good out of the box and is very simple to work with. It also has a built-in documentation feature, which is exactly what we use to power our own docs.

Docusaurus is also very modular, and we can extend its capabilities (such as adding analytics, live code editors, etc.) using plugins provided by the Docusaurus team as well as by the open source community.
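
To make this concrete, here is a minimal sketch of what such a configuration might look like. This is an illustration rather than our exact config; the site title, URL and tracking ID are placeholders, and it assumes a Docusaurus version that supports TypeScript config files:

// docusaurus.config.ts - a minimal sketch, not our actual configuration.
// The title, URL and tracking ID below are placeholders.
import type {Config} from '@docusaurus/types';

const config: Config = {
  title: 'Example Docs',             // placeholder site title
  url: 'https://docs.example.com',   // placeholder production URL
  baseUrl: '/',
  presets: [
    [
      '@docusaurus/preset-classic',  // bundles the docs plugin and default theme
      {
        docs: {
          routeBasePath: '/',        // serve the docs at the site root
          sidebarPath: './sidebars.ts',
        },
        blog: false,                 // docs-only site, no blog
      },
    ],
  ],
  // Extra capabilities come from plugins and themes:
  plugins: [
    // official analytics plugin; the tracking ID is a placeholder
    ['@docusaurus/plugin-google-gtag', {trackingID: 'G-XXXXXXXXXX'}],
  ],
  themes: [
    '@docusaurus/theme-live-codeblock', // live code editors in code blocks
  ],
  themeConfig: {
    colorMode: {respectPrefersColorScheme: true}, // follow OS dark mode
  },
};

export default config;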

Docusaurus is also a very popular framework, both well supported by the open source community (with 39K stars on GitHub as of the publishing of this blog post) and widely adopted by many companies as their documentation framework.

In practice, Docusaurus allows us to generate a static website for our documentation, which means it is blazing fast and gives our users the best possible experience.

The fact that Docusaurus is so easy to work with is critical for us: our documentation is supposed to make it very easy to understand how to work with Port and how to utilize all of its features in a way that works best for all of our users.

In addition, having a documentation framework that is easy to work with lowers the barrier to entry for our own developers to contribute to our docs.

To deploy our documentation we use AWS Amplify, which I will discuss in greater depth later in this blog post. Before that, let’s go over the process of adding a new article to our docs.

Lifecycle of a documentation article

The Port platform is evolving quickly, and new features and capabilities are added frequently. This means that the documentation also has to keep up and include all of the latest information; as a result, developers need to update the documentation often.

We believe it’s important for our developers to write the docs themselves, and the reasoning behind this philosophy is:

  • As the developers of the platform, they know its ins and outs better than anyone.
  • Actively working on the documentation helps our developers know exactly what content exists in the docs, which allows them to better help users and quickly send them references to the relevant documentation.
  • Port is a product made for developers and DevOps professionals, so our developers know their customers best and know what information the documentation needs to include to be effective.

So a developer has developed a new feature and written a first draft documenting it. Now what? They open a pull request in our documentation repository. The new pull request automatically triggers a preview deployment of the documentation, meaning others on the R&D team can see what the new draft looks like in a preview environment, without exposing the new documentation to users before it is ready.

Once the pull request is open and the preview version is up, our content team performs another pass on the article, fixing any grammatical errors and making sure the article is clear and comprehensive.

Now that the new article is ready to be published, all that’s left is to merge the pull request. After the merge, AWS Amplify takes care of the deployment automatically, so no one needs to manually trigger a deployment process, or even verify that one was triggered; all of that is handled by Amplify.

And of course, just as we validate the builds of the microservices in our production platform, if there is any issue in the build process of a new docs version, it cannot be published until the issue is fixed, making sure the documentation is protected from broken builds and unexpected errors.
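
Docusaurus gives us some of this protection at build time. As a minimal sketch (these are standard Docusaurus options, though not necessarily our exact settings), the config can be told to fail the build instead of shipping a site with broken links:

// Fragment of docusaurus.config.ts - build-time safety switches.
// Shown as an example of failing the build on bad content rather
// than publishing it.
const config = {
  onBrokenLinks: 'throw',          // a broken internal link aborts the build
  onBrokenMarkdownLinks: 'throw',  // same for links inside markdown files
};

export default config;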

Now let’s take a look at the final piece of the puzzle - AWS Amplify - in depth, and see how it helps streamline our documentation development process.

AWS Amplify as a force multiplier

For those of you who don’t know AWS Amplify, it is a complete solution for quickly and easily developing full-stack apps, building and deploying them using integrated CI/CD pipelines, and serving them securely, efficiently and reliably using Amazon CloudFront. (For those of you using GCP or Azure, you might be familiar with Firebase or Static Web Apps, respectively.)

We use AWS Amplify to build, host and deploy our documentation. Amplify greatly streamlines our documentation development process: we simply connect Amplify to the documentation Git repository, and every time a change is merged to the main branch, a build and deployment of the new version is triggered completely automatically, without a developer having to manually initiate the deployment.
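
Connecting the repository is mostly a one-time setup in the Amplify console, but the same wiring can be sketched in code. Here is a rough example using the AWS SDK for JavaScript v3; the app name, repository URL, token and region are placeholders, not our real values:

// A sketch of connecting a docs repository to Amplify with the AWS SDK v3.
import {
  AmplifyClient,
  CreateAppCommand,
  CreateBranchCommand,
} from '@aws-sdk/client-amplify';

const client = new AmplifyClient({region: 'us-east-1'}); // placeholder region

async function connectDocsRepo() {
  // Create the Amplify app and point it at the documentation repository.
  const {app} = await client.send(new CreateAppCommand({
    name: 'docs-site',                           // placeholder app name
    repository: 'https://github.com/acme/docs',  // placeholder repository
    oauthToken: process.env.GIT_TOKEN,           // token with repo access
    enableBranchAutoBuild: true,                 // build on every new commit
  }));

  // Track the main branch: every merge triggers a build and a deployment.
  await client.send(new CreateBranchCommand({
    appId: app?.appId,
    branchName: 'main',
    enableAutoBuild: true,
  }));
}

connectDocsRepo().catch(console.error);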

In addition, as mentioned in the previous section, we also use an Amplify feature called PR previews:

With PR previews, every time a new pull request is opened in the documentation repository, Amplify deploys a preview version matching the pull request to a unique URL, allowing us to preview the latest version of the documentation without exposing drafts or work-in-progress articles to our users.
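
Enabling the feature is a single setting on the tracked branch. Here is a sketch using the AWS SDK v3 (the Amplify console exposes the same toggle; the app ID and region are placeholders):

// A sketch of enabling PR previews on the branch Amplify tracks.
import {AmplifyClient, UpdateBranchCommand} from '@aws-sdk/client-amplify';

const client = new AmplifyClient({region: 'us-east-1'}); // placeholder region

async function enablePrPreviews() {
  await client.send(new UpdateBranchCommand({
    appId: 'd1a2b3c4example',      // placeholder Amplify app ID
    branchName: 'main',
    // Every pull request targeting this branch gets its own preview
    // deployment at a unique URL, removed when the PR is closed.
    enablePullRequestPreview: true,
  }));
}

enablePrPreviews().catch(console.error);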

PR previews also make it easier for our content team and other less technical staff members to see a deployed version of the documentation before it goes live, without having to install an IDE or use tools such as Git, which they are unfamiliar with and rarely need in their daily work. This allows the content team to suggest fixes and point out issues quickly and easily.

Conclusion

In this blog post we reviewed Port’s documentation architecture, our documentation development process, and how we use cloud services such as AWS Amplify to streamline our work.

Port’s documentation website is essential to the success of both the platform and its users. Our goal was to deliver world-class documentation from day 1, and we continue to constantly improve our docs by adding content, polishing existing articles and making sure our documentation is the most comprehensive resource to learn about the Port platform.

