Deploy to Kubernetes Engine

Kubernetes Engine allows you to create a cluster of machines and deploy any number of applications to it. Kubernetes abstracts the details of managing machines and allows you to automate the deployment of your applications with simple CLI commands.

To deploy an application to Kubernetes, you first need to create the cluster. Then you need to add a configuration file for each application you will deploy to the cluster.
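
This lab creates the cluster from the console, but a cluster can also be created from the command line. A minimal sketch, assuming an illustrative cluster name and zone (the lab itself uses the console defaults):

# Create a 3-node GKE Standard cluster (name and zone are illustrative)
gcloud container clusters create my-cluster \
    --zone us-central1-a \
    --num-nodes 3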

On the Navigation menu (Navigation menu icon), click Kubernetes Engine. If a message appears saying the Kubernetes API is being initialized, wait for it to complete.

Click Create.

In the Create Cluster dialog box, to the right of the GKE Standard option, click Configure.

Accept all the defaults, and click Create. It will take a couple of minutes for the Kubernetes Engine cluster to be created. When the cluster is ready, a green check appears.

Click the three dots to the right of the cluster and then click Connect.

In the Connect to the cluster screen, click Run in Cloud Shell. This opens Cloud Shell with the connect command entered automatically.

Press Enter to connect to the cluster.
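
The command the console pre-fills is a gcloud get-credentials call, which fetches the cluster's credentials and configures kubectl to use them. A sketch of what it looks like, assuming an illustrative cluster name and zone:

# Point kubectl at the cluster (substitute your cluster name and zone)
gcloud container clusters get-credentials my-cluster \
    --zone us-central1-a \
    --project $DEVSHELL_PROJECT_ID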

To test your connection, enter the following command:

kubectl get nodes

This command lists the nodes (the virtual machines) in your cluster. If it returns the list of nodes, you're connected.
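
The output looks roughly like the following; node names, ages, and versions will differ in your cluster:

NAME                                       STATUS   ROLES    AGE   VERSION
gke-cluster-1-default-pool-1a2b3c4d-abcd   Ready    <none>   3m    v1.27.3-gke.100
gke-cluster-1-default-pool-1a2b3c4d-efgh   Ready    <none>   3m    v1.27.3-gke.100
gke-cluster-1-default-pool-1a2b3c4d-ijkl   Ready    <none>   3m    v1.27.3-gke.100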

In Cloud Shell, click Open Editor (Cloud Shell Editor icon).

Expand the training-data-analyst/courses/design-process/deploying-apps-to-gcp folder in the navigation pane on the left. Then, click main.py to open it.

In the main() function, change the title to Hello Kubernetes Engine as shown below:

@app.route("/")
def main():
    model = {"title" "Hello Kubernetes Engine"}
    return render_template('index.html', model=model)

Save your change.
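
For context, main.py is a small Flask app. A minimal sketch of what the file looks like after the change, assuming the app serves on port 8080 (the actual lab file may differ slightly):

from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def main():
    # The model dict is passed to the Jinja template, which renders the title
    model = {"title": "Hello Kubernetes Engine"}
    return render_template('index.html', model=model)

if __name__ == "__main__":
    # Listen on all interfaces on port 8080 to match the containerPort used later
    app.run(host='0.0.0.0', port=8080)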

Add a file named kubernetes-config.yaml to the training-data-analyst/courses/design-process/deploying-apps-to-gcp folder.

Paste the following code in that file to configure the application:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: devops-deployment
  labels:
    app: devops
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: devops
      tier: frontend
  template:
    metadata:
      labels:
        app: devops
        tier: frontend
    spec:
      containers:
      - name: devops-demo
        image: <YOUR IMAGE PATH HERE>
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: devops-deployment-lb
  labels:
    app: devops
    tier: frontend-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: devops
    tier: frontend

Note: The first section of the YAML file above configures a Deployment; in this case, it deploys 3 replicas of your Python web app. Notice the image attribute: you will update this value with your image path after you build the image in a later step. The second section configures a Service of type LoadBalancer. The load balancer gets a public IP address, and users will access your application through it.
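
The Service finds its backend pods by label: its selector must match the labels in the Deployment's pod template. Once the deployment is running (later in the lab), you can list exactly the pods the load balancer routes to:

# Pods matching the Service's selector; run this after deploying
kubectl get pods -l app=devops,tier=frontend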

For more information, see the Kubernetes documentation on Deployments and Services.

To deploy to Kubernetes Engine, your application must be packaged as a Docker image. Enter the following commands to use Cloud Build to build the image and store it in Container Registry:

cd ~/gcp-course/training-data-analyst/courses/design-process/deploying-apps-to-gcp
gcloud builds submit --tag gcr.io/$DEVSHELL_PROJECT_ID/devops-image:v0.2 .

When the previous command completes, the image name will be listed in the output. The image name is in the form gcr.io/project-id/devops-image:v0.2.
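
If you lose track of the image name, you can list the tags pushed for it in Container Registry (assuming the image name used above):

gcloud container images list-tags gcr.io/$DEVSHELL_PROJECT_ID/devops-image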

Highlight your image name and copy it to the clipboard. Paste that value in the kubernetes-config.yaml file, overwriting the string <YOUR IMAGE PATH HERE>.
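
If you prefer to make the substitution from the shell instead of the editor, a sed one-liner works as well (this assumes the v0.2 tag built above and that you run it from the deploying-apps-to-gcp directory):

sed -i "s|<YOUR IMAGE PATH HERE>|gcr.io/$DEVSHELL_PROJECT_ID/devops-image:v0.2|" kubernetes-config.yaml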

You should see something similar to below:

spec:
  containers:
  - name: devops-demo
    image: gcr.io/test-1-263611/devops-image:v0.2
    ports:
    - containerPort: 8080

Enter the following Kubernetes command to deploy your application:

kubectl apply -f kubernetes-config.yaml
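
If the file is applied successfully, kubectl reports both objects, similar to:

deployment.apps/devops-deployment created
service/devops-deployment-lb created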

In the configuration file, three replicas of the application were specified. Type the following command to see whether three instances have been created:

kubectl get pods

Make sure all the pods are ready. If they aren't, wait a few seconds and try again.
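
Rather than re-running the command, you can also wait for the deployment to finish rolling out:

kubectl rollout status deployment/devops-deployment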

A load balancer was also added in the configuration file. Type the following command to see whether it was created:

kubectl get services

The output lists the devops-deployment-lb service along with its type, cluster IP, external IP, and ports.

If the load balancer's external IP address says "pending", wait a few seconds and try again.

When you have an external IP, open a browser tab and make a request to it. It should return Hello Kubernetes Engine. It might take a few seconds to be ready.
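
You can also test from Cloud Shell with curl. The jsonpath expression below pulls the external IP from the service, assuming the service name from the configuration file:

curl http://$(kubectl get service devops-deployment-lb \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')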


Source: GCP
