In addition to the health of your application’s pods, OpenShift will watch the containers inside those pods. Let’s forcibly inflict some issues and see how OpenShift responds.
Choose a running pod and shell into it:
$ oc get pods
$ oc exec -it PODNAME -- /bin/bash
You are now executing a bash shell running in the container of the pod. Let’s kill our webapp and see what happens.
If the pod had multiple containers, we could pass "-c CONTAINER_NAME" to select the right one.
Inside the container, kill the Node.js web server:
$ pkill -9 node
This will kick you out of the container with an error like "Error executing command in container"
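Why does your shell die too? node is the container's main process, and signal 9 (SIGKILL) cannot be caught or ignored, so the process dies immediately and the container goes down with it. A quick local sketch of SIGKILL's effect (run this anywhere, not on the cluster; the background sleep is just a stand-in for the node process):

```shell
sleep 300 &             # stand-in for the node process
pid=$!
kill -9 "$pid"          # same signal that "pkill -9 node" sends
wait "$pid" 2>/dev/null
status=$?
echo "exit status: $status"   # 128 + 9 = 137, i.e. killed by SIGKILL
```

An exit status above 128 tells you the process died from a signal rather than exiting on its own, which is exactly the condition that triggers a container restart.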
Do it again: shell back into the pod and run the same command to kill node
Watch for the container restart; the pod's RESTARTS count will increment
$ oc get pods -w
If a container dies multiple times quickly, OpenShift puts the pod into a CrashLoopBackOff state. This ensures the system doesn’t waste resources trying to restart containers that are continuously crashing.
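The "back-off" part means OpenShift waits longer before each successive restart attempt. As a sketch (assuming the Kubernetes defaults: the delay starts at 10 seconds, doubles after each crash, and is capped at 5 minutes):

```shell
# Illustrative restart back-off schedule, assuming Kubernetes defaults:
# 10s starting delay, doubling per crash, capped at 300s (5 minutes).
delay=10
for crash in 1 2 3 4 5 6 7; do
  echo "crash $crash: wait ${delay}s before the next restart"
  delay=$((delay * 2))
  [ "$delay" -gt 300 ] && delay=300
done
```

Once the container stays up for a while, the back-off timer resets, and the pod returns to a normal Running state.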
Navigate to the pods list, and click on a running pod
In the tab bar for this pod, click on "Terminal"
Click inside the terminal view and type $ pkill -9 node
This is going to kill the node.js web server and disconnect you from the container’s terminal.
Click the refresh button (on your web browser) to reconnect to the terminal, then kill the process a couple more times
Go back to the pod overview
If the container dies multiple times this quickly, OpenShift will put the pod in a CrashLoopBackOff state. This ensures the system doesn’t waste resources trying to restart containers that are continuously crashing.
Let’s scale back down to 1 replica. If you are using the web console, just click the down arrow on the Deployment Configs Overview page. If you are using the command line, use the “oc scale” command, for example (replace DC_NAME with your deployment config’s name, which “oc get dc” will list):
$ oc scale dc/DC_NAME --replicas=1
In this lab we learned about replication controllers and how they can be used to scale your applications and services. We also tried to break a few things and saw how OpenShift responded to heal the system and keep it running.