Red Hat OpenShift Container Platform provides a feature-rich CLI based on the kubectl command. The CLI is invoked via the oc command.

The OpenShift CLI (oc) is mainly used to interact with the API and provides a large number of commands to work with applications and clusters.

oc and kubectl

Since OpenShift 4 is a certified Kubernetes distribution, it is also possible to interact with the cluster with the provided kubectl binary.

Together with OpenShift, Red Hat also provides the oc binary, which is kubectl with additional functionality: oc provides a number of extra commands (for example, new-app and login) that make interacting with the cluster smoother and more integrated with the OpenShift components.

In support of pre-existing workflows based on kubectl, the Kubernetes CLI is also supported.

Extending oc

kubectl, and consequently also oc, support a plugin mechanism allowing users to extend the CLI with customized commands.

The main oc commands (create, get, describe, among others) are the fundamental building blocks for interacting with a cluster. Plugins can be seen as additional components that use those blocks to add new features and integrations to the oc command.

The kubectl binary (and thus oc) adopted the same successful plugin mechanism as git: an oc plugin is nothing more than an executable file located in the PATH with a name starting with oc- or kubectl-.

The command oc plugin list provides a list of available plugins by searching the PATH for executables named oc-* or kubectl-*.

A plugin cannot implement a command that already exists as an oc “native” command. For example, a plugin cannot implement the get command; in that case, the oc plugin list will report an error:

$ oc plugin list
The following compatible plugins are available:

 - warning: oc-get overwrites existing command: "oc get"

For that reason, it is also not possible to extend any existing command, for example by creating an oc get foo plugin.

NOTE: oc will find all the plugins named oc-* or kubectl-*, but kubectl will not find a plugin starting with oc-*.

Writing a Plugin

You can extend the oc CLI with a programming language of your choice: it is possible to write a plugin using Go, Python, or even a shell script.

There is no need to load or initialize the plugin; as mentioned before, a plugin is just an executable dropped somewhere in the PATH.

For example, if you have the oc-hello binary in /usr/local/bin, oc-hello will be listed as a plugin and will be invoked with the command oc hello.

Hello World

Let’s start with a basic Hello plugin: create a simple Bash script named oc-hello with the following content in a folder on your PATH:

#!/usr/bin/env bash

echo "Hello"

echo "****** COMMAND ******"
echo $0 $@

Make it executable (chmod +x oc-hello) and check if it is listed:

$ oc plugin list
The following compatible plugins are available:

/home/pietro/bin/oc-hello

Now we can test our shiny new plugin:

$ oc hello
Hello
****** COMMAND ******
/home/pietro/bin/oc-hello

As you can see, the command oc hello spawns the oc-hello script.

It is fundamental to know that the plugin is executed via the execve(2) syscall: the oc process is replaced by the plugin. In this way, the plugin inherits the same environment as the original command (think about the KUBECONFIG variable).
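To see this inheritance in action, a throwaway plugin can simply print the variables it cares about (the plugin name oc-env here is just an illustration, not part of the article's examples):

```shell
#!/usr/bin/env bash
# oc-env: a hypothetical plugin. Because oc replaces itself with the plugin
# via execve(2), the plugin sees the very same environment as the oc
# process, including variables such as KUBECONFIG.
echo "KUBECONFIG=${KUBECONFIG:-unset}"
```

Running `KUBECONFIG=/tmp/kubeconfig oc env` would print `KUBECONFIG=/tmp/kubeconfig`, confirming the plugin runs with the caller's environment.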

Plugin Names

The plugin mechanism allows you to implement main and sub commands.

The command hierarchy is created using a dash (-) as a separator in the plugin name.

To better understand the concept, let’s create a new plugin named oc-hello-world. In this case, the main command is hello and the sub command is world.

The oc-hello-world plugin is created from oc-hello:

sed -e 's/Hello/Hello World/' ~/bin/oc-hello > ~/bin/oc-hello-world
chmod +x ~/bin/oc-hello-world

and execute it:

$ oc hello world
Hello World
****** COMMAND ******
/home/pietro/bin/oc-hello-world

It is also possible to implement command names containing dashes. In this case, the corresponding plugin file name uses underscores instead: for example, the command oc hello-world will invoke the plugin oc-hello_world.
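As a minimal sketch of the underscore mapping (the plugin and its message are hypothetical):

```shell
#!/usr/bin/env bash
# Save this file as oc-hello_world (note the underscore) in a folder on
# your PATH and make it executable (chmod +x oc-hello_world).
# It is then invoked with a dash in the command name: oc hello-world
echo "Hello from a dashed command"
```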

Command Arguments and Flags

Like any oc command, plugins can also accept arguments and flags. All the arguments after the command name are passed to the plugin:

$ oc hello world arg1 arg2 --flag1
Hello World
****** COMMAND ******
/home/pietro/bin/oc-hello-world arg1 arg2 --flag1
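Inside the plugin, those arguments arrive as ordinary positional parameters, so standard shell parsing applies. A minimal sketch of flag handling in a Bash plugin (the oc-greet plugin and its flags are hypothetical):

```shell
#!/usr/bin/env bash
# oc-greet: a hypothetical plugin showing basic flag/argument handling.
greet() {
    local name="world" loud=""
    # walk over the arguments passed by oc after the command name
    while [ $# -gt 0 ]; do
        case "$1" in
            -n | --name) name="$2"; shift 2 ;;
            --loud)      loud=1;    shift   ;;
            *)                      shift   ;;
        esac
    done
    local msg="Hello, $name"
    [ -n "$loud" ] && msg=$(echo "$msg" | tr '[:lower:]' '[:upper:]')
    echo "$msg"
}

greet -n OpenShift           # -> Hello, OpenShift
greet --name cluster --loud  # -> HELLO, CLUSTER
```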

Managing Conflicts

The plugin mechanism is designed to intuitively manage conflicts between plugin names and arguments:

  • duplicated plugins: if a plugin with the same name is placed in multiple locations, the one that comes first in the PATH variable is executed. This feature, named overshadowing, allows the user to override system plugins and choose which should be executed. The command oc plugin list reports a warning for all the overshadowed plugins.
  • longest matching name: the oc command will try to find the plugin with the longest match in the file name. All subsequent words are passed to the plugin as arguments. For example, the command oc hello world arg1 arg2 will execute the plugin named oc-hello-world with arg1 arg2 passed as arguments, while the command oc hello universe arg1 arg2 will execute the oc-hello plugin with universe arg1 arg2 as arguments.
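The longest-match rule can be reproduced with a few lines of Bash. The following sketch (function name and fake plugins are hypothetical, for illustration only) joins as many leading words as possible with dashes, looks for a matching executable, and treats the remaining words as arguments:

```shell
#!/usr/bin/env bash
# A sketch of oc's longest-match plugin lookup.
resolve_plugin() {
    local words=("$@") n=$#
    while [ "$n" -gt 0 ]; do
        local name
        # join the first n words with dashes: oc-hello-world, oc-hello, ...
        name="oc-$(IFS=-; echo "${words[*]:0:$n}")"
        if command -v "$name" >/dev/null 2>&1; then
            # print the resolved plugin followed by the leftover arguments
            echo "$name ${words[*]:$n}"
            return 0
        fi
        n=$((n - 1))
    done
    return 1
}

# Demo: two fake plugins on a temporary PATH
dir=$(mktemp -d)
printf '#!/bin/sh\n' > "$dir/oc-hello"
printf '#!/bin/sh\n' > "$dir/oc-hello-world"
chmod +x "$dir/oc-hello" "$dir/oc-hello-world"
PATH="$dir:$PATH"

resolve_plugin hello world arg1      # -> oc-hello-world arg1
resolve_plugin hello universe arg1   # -> oc-hello universe arg1
```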

A Real Case: Writing a Pod Network Debugging Plugin

Let’s get hands-on and write a more complex, real-world plugin.

The Idea

The plugin we want to develop should help us troubleshoot a pod by facilitating the task of capturing a dump of the network traffic in that pod.

In order to get a tcpdump on a specific pod, the steps should be:

  1. From the pod name, get the node where the pod is scheduled.
  2. Spawn a node debug pod on that host. This pod is a privileged pod scheduled on a specific node with the host filesystem mounted under /host.
  3. Use the CRI-O crictl tool to find the process of the pod.
  4. Use nsenter to run tcpdump in the same kernel network namespace as the identified process.

Testing the Procedure

To test the plugin, we first need to have a small playground where we can run all the tests.

First of all, we create a new project:

$ oc new-project test-tcpdump

Now we can deploy a sample new Ruby app:

$ oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

We expose the service and we check the route:

$ oc expose svc/ruby-ex
route.route.openshift.io/ruby-ex exposed
$ oc get routes
NAME      HOST/PORT                                PATH   SERVICES   PORT       TERMINATION   WILDCARD
ruby-ex   ruby-ex-test-tcpdump.apps.<cluster>             ruby-ex    8080-tcp                 None

Now after a bit of time, the build should be completed, and the application pod should be Running:

$ oc get pods -o wide
NAME               READY   STATUS      RESTARTS   AGE     IP             NODE                       NOMINATED NODE   READINESS GATES
ruby-ex-1-build    0/1     Completed   0          4m59s   <none>           <none>
ruby-ex-1-deploy   0/1     Completed   0          2m56s   <none>           <none>
ruby-ex-1-wbn4l    1/1     Running     0          2m50s   <none>           <none>

To make sure the application is up and running, try accessing the route URL with a browser.

To check the procedure, we first manually test all the actions that should be automated by the plugin:

  1. The debug pod needs to be spawned on the node where the pod ruby-ex-1-wbn4l is scheduled:
    $ oc debug node/
  2. The previous command provides us with a shell in the debug pod; with chroot /host we can execute commands on the node. Using crictl pods, we get the pod ID related to our pod:
    # chroot /host crictl pods -q --name ruby-ex-1-wbn4l --namespace test-tcpdump
  3. At this point, we should find the container ID. Please note that this command will return an ID for each container in the pod. Since the kernel namespace is shared by all the containers in a pod, it is fine to get just the first ID:
    # chroot /host crictl ps -q --pod 90e725733edc9a71e6693dae99704a576cbc592b86c8f36660e8b837a28b0058 | head -n 1

  4. Parsing the output of crictl inspect, we get the PID of the process running the container:
    # chroot /host crictl inspect e15bcd65b94f02cb9093d6ed5cc3bc6abe07ff0046b843af123c7431584453fb --output yaml | grep 'pid:' | awk '{print $2}'
  5. Now we can finally execute tcpdump in the same network namespace as our process:
    # nsenter -n -t 405853 -- tcpdump
  6. In another terminal let’s send an HTTP request to the route:
    $ curl >/dev/null
  7. On the debug pod, we should now see the data flowing.

Now that we know all the steps to get a network trace on a pod, we can see how to automate the process in a plugin.

The Pod Tcpdump Plugin

The following code snippet is quite self-explanatory and implements all the steps we performed previously.

Save it in a file named oc-pod-tcpdump, make it executable (chmod +x oc-pod-tcpdump), and move it into a folder on your PATH (e.g., mv oc-pod-tcpdump /usr/local/bin).

#!/usr/bin/env bash

# print an error and exit
exit_err() {
    echo >&2 "${1}"
    exit 1
}

# print a basic command help
usage() {
    local SELF
    SELF="oc pod-tcpdump"
    cat <<EOF
pod-tcpdump is a debugging tool helping to start a tcpdump on a running pod.

Usage:
  $SELF <pod> [-n|--namespace <namespace>] [-- <tcpdump options>]
EOF
}

# check if $pod is running otherwise exit
is_running() {
    local phase
    phase=$(oc get pod -n "$namespace" "$pod" -o jsonpath='{.status.phase}')
    if [[ "$phase" != "Running" ]]; then
        exit_err "Pod is not in Running phase ($phase)"
    fi
}

# print the node where $pod is scheduled
get_node() {
    oc get pod -n "$namespace" "$pod" -o jsonpath='{.spec.nodeName}'
}

main() {
    # at least the pod name is required
    [ $# -eq 0 ] && exit_err "You must specify a pod for dumping network traffic"

    # walk over the command arguments to set the namespace and the tcpdump options
    while [ $# -gt 0 ]; do
        case "$1" in
            -h | --help)
                usage
                exit 0
                ;;
            -n | --namespace)
                namespace="$2"
                shift 2
                ;;
            --)
                shift
                tcpdump_opts="$*"
                break
                ;;
            *)
                pod="$1"
                shift
                ;;
        esac
    done

    # if namespace is not provided use the current namespace
    if [[ "$namespace" == "" ]]; then
        namespace=$(oc config view --minify --output 'jsonpath={..namespace}')
    fi

    # check if the pod is running
    is_running "$pod"
    # get the node where the pod is scheduled
    node=$(get_node)

    echo "Dumping traffic on pod $pod in $namespace, pod is running on node $node"
    echo "Data gathered via 'tcpdump $tcpdump_opts'"

    # spawn the debug pod on the node, run nsenter -n on the target PID
    # Pod ID => Container ID => Process ID => nsenter -n -t $PID -- tcpdump
    cat <<EOF | oc debug node/"$node"
cid=\$(chroot /host crictl ps -q --pod \$(chroot /host crictl pods -q --name $pod --namespace $namespace) | head -n 1)
pid=\$(chroot /host crictl inspect \$cid --output yaml | grep 'pid:' | awk '{print \$2}')
nsenter -n -t \$pid -- tcpdump $tcpdump_opts
EOF
}

main "$@"

Now we can test the plugin. Let's start the tcpdump:

$ oc pod tcpdump ruby-ex-1-wbn4l --namespace test-tcpdump -- port 8080 -X -v

In another terminal or with a browser, we send the HTTP request to the application, and we should see the network data flowing from the tcpdump.

Best Practices

When writing a plugin, it is important to implement the -h/--help flag with basic instructions and usage information.

It is good practice to keep consistency with the oc and kubectl commands. For example, a plugin that takes a namespace should accept the same -n/--namespace flag used by oc and kubectl.

In case you are writing plugins in Go, there is a useful library, cli-runtime, providing the same kubectl command-line arguments, kubeconfig parser, Kubernetes API REST client, and printing logic.

Managing Plugins

As we have seen, plugin installation is really straightforward, but when it comes to using third-party plugins requiring a lifecycle (install / upgrade / remove), it is handy to have a tool to manage them.

The Kubernetes SIG-CLI group created Krew, a package manager for Kubernetes plugins.

Krew is itself a kubectl plugin and can be installed with a few easy steps on Linux, macOS, or Windows.

Once installed, Krew allows the user to search, install, upgrade, and remove plugins. Krew plugins are stored in an index hosted on GitHub.

Submitting a new plugin to the Krew index requires a peer review and acceptance of the plugin from the Krew community. This process should ensure the quality of the included plugins.

Because of the distribution-agnostic nature of Krew, plugins requiring the oc client cannot be accepted into the official Krew index; however, at the time of this writing, the Krew team has released the multi-index feature, which allows a user to build an OpenShift-specific custom index.

Security Considerations

As we highlighted, plugins are executed via the execve(2) syscall, so they run on the local machine with the privileges of the current user.

For that reason some considerations need to be kept in mind:

  • Never use the root user to run the oc or kubectl command.
  • Be careful using and installing third-party plugins.
  • Use the appropriate Kubernetes user with the minimum required privilege level.


Thanks to the power and flexibility of CLI plugins, it is possible to create a wide range of tools to simplify and automate cluster management tasks, all integrated with the native OpenShift oc command.

It is important to know that the oc CLI plugin feature is released by Red Hat as a Technology Preview feature.


