Architecting a Cloud Native API Solution
In this video, Ms. Whitney Lee, Customer Success Manager at IBM Cloud, discusses an efficient and robust way to architect an API solution in the cloud.

At 0:21, the presentation begins with the concept of source control, also referred to here as an artifact repository. The most popular source control system is Git. All the artifacts related to the system can be stored in the repository.
The repository can contain server configuration files and configuration files for the development, test, and prod environments. If the system involves APIs, the API definition files can be stored in the artifact repository as well. In addition, the pipeline run file must be defined in source control.
Infrastructure as Code
At 2:22, Ms. Whitney mentions that this approach is popularly known as 'Infrastructure as Code', since the infrastructure of the system is defined before the system is implemented. If any part of the system fails, it can be rebuilt easily from the definition files.
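The "rebuild from definitions" idea can be sketched as a comparison of desired state (the version-controlled definition files) against what is actually running; anything missing or drifted gets recreated. This is a minimal illustration of the concept, not any particular tool's implementation:

```python
# Minimal sketch of the Infrastructure as Code rebuild idea: desired state
# lives in definition files under source control, so a failed or drifted
# component can be identified and recreated. All names are illustrative.

def reconcile(desired: dict[str, dict], running: dict[str, dict]) -> list[str]:
    """Return the components that must be (re)created to match the definitions."""
    to_create = []
    for name, spec in desired.items():
        # Recreate anything that is missing or has drifted from its definition.
        if running.get(name) != spec:
            to_create.append(name)
    return sorted(to_create)
```

For example, if the definitions describe dev, test, and prod environments but test is missing and prod has drifted, `reconcile` reports both so they can be rebuilt from the repository.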
At 2:51, the presenter explains the Kubernetes cluster and its role. The cluster provides the physical resources: memory and CPU for the nodes, and physical disk space for storage. At 3:23, the speaker notes that the development environment can be built according to the specifications already defined in the repository.
Once the API is developed, the developer pushes its definition file to source control. This triggers a webhook, which in turn triggers a pipeline build.
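The push-to-webhook step can be sketched as a handler that inspects the push event and decides whether a pipeline build should start. The payload shape below loosely follows common Git hosting webhooks, but the field names and the `apis/` path convention are assumptions:

```python
import json

# Hypothetical handler logic for a Git push webhook: trigger a pipeline
# build only when an API definition file changes on the main branch.
# The payload shape and the "apis/" prefix are illustrative assumptions.

def should_trigger_build(payload: bytes) -> bool:
    """Decide whether a push event warrants a pipeline build."""
    event = json.loads(payload)
    if event.get("ref") != "refs/heads/main":
        return False  # ignore pushes to other branches
    changed = [
        f
        for commit in event.get("commits", [])
        for f in commit.get("added", []) + commit.get("modified", [])
    ]
    # Only rebuild when an API definition was added or modified.
    return any(f.startswith("apis/") for f in changed)
```

In practice this decision usually lives in the CI system's trigger configuration rather than hand-written code, but the logic is the same.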
At 5:16, the speaker talks about the steps that follow once the pipeline is triggered. The pipeline build promotes the API from the dev environment to the test environment. A canary environment can also be built by defining it in the repository; ideally, the canary environment is an exact replica of the production environment.
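The promotion flow can be sketched as a fixed ordering of environments that an API moves through one stage at a time. The stage names below match the environments discussed in the video, but the ordering logic itself is an illustrative assumption:

```python
# Sketch of the promotion order an API follows through the environments.
# The inclusion of "canary" as an explicit stage is an assumption here.
STAGES = ["dev", "test", "canary", "prod"]

def promote(current: str) -> str:
    """Return the next environment in the promotion order."""
    i = STAGES.index(current)  # raises ValueError for unknown environments
    if i == len(STAGES) - 1:
        raise ValueError("already in prod; nothing to promote to")
    return STAGES[i + 1]
```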
At 6:30, the speaker walks through an example in which an end user makes a call to the cluster. The call passes through the API gateway, which sends the traffic on to a load balancer. The load balancer decides whether the traffic goes to the prod or the canary environment. With the newly developed APIs deployed in the canary environment, their functionality can be tested against some real traffic.
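The load balancer's prod-versus-canary decision is typically a weighted split: most traffic goes to prod, and a small share goes to the canary. This is a minimal sketch of that decision; the 90/10 split is an illustrative choice, not from the video:

```python
import random

# Sketch of a weighted canary split: each request is routed to "canary"
# with a small probability and to "prod" otherwise. The 10% canary weight
# is an illustrative assumption.

def route_request(rng: random.Random, canary_weight: float = 0.10) -> str:
    """Pick a backend for one request: 'canary' with probability canary_weight."""
    return "canary" if rng.random() < canary_weight else "prod"
```

In a real cluster this split would be expressed in the ingress or load balancer configuration rather than application code, but the effect is the same: the canary receives a controlled slice of real traffic.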
Conclusion
At 7:28, Ms. Whitney talks about logging and metrics collection. Tools like Prometheus and Grafana can be used to collect metrics and publish them in a UI with graphs. Business analysts and operational managers are usually the people most interested in the system's metrics.
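As a sketch of what metrics collection looks like at the lowest level, the snippet below renders a counter in Prometheus' plain-text exposition format, which a scraper (and ultimately a Grafana graph) consumes. The metric name is an illustrative assumption; real services would use a client library such as `prometheus_client` instead of hand-rolling this:

```python
# Tiny sketch of metrics exposition: a counter rendered in Prometheus'
# text format. Metric names below are illustrative assumptions.

class Counter:
    def __init__(self, name: str, help_text: str):
        self.name = name
        self.help_text = help_text
        self.value = 0

    def inc(self, amount: int = 1) -> None:
        self.value += amount

def render(counters: list[Counter]) -> str:
    """Render counters in Prometheus' plain-text exposition format."""
    lines = []
    for c in counters:
        lines.append(f"# HELP {c.name} {c.help_text}")
        lines.append(f"# TYPE {c.name} counter")
        lines.append(f"{c.name} {c.value}")
    return "\n".join(lines) + "\n"
```

A metrics endpoint would serve this text at `/metrics`, Prometheus would scrape it on a schedule, and Grafana would graph the stored series.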
At 8:47, she notes that the pipeline build, load balancer, logging, and metrics collection tools are third-party tools that need to be installed and maintained separately from the Kubernetes cluster. At 9:09, Ms. Whitney mentions an alternative: OpenShift, which is built on top of Kubernetes. With OpenShift, the pipeline tools and load balancers are built into the platform.
She concludes that this solution is beneficial because Infrastructure as Code provides collaboration and visibility, and acts as a single source of truth for every piece of the system.