From: https://stackoverflow.com/questions/42793382/exec-commands-on-kubernetes-pods-with-root-access
I have one pod running with the name 'jenkins-app-2843651954-4zqdp'. I want to install a few software packages temporarily on this pod. How can I do this?
I am trying this: kubectl exec -it jenkins-app-2843651954-4zqdp -- /bin/bash
and then running apt-get install commands, but since the user I am accessing with doesn't have sudo access, I am not able to run the commands.
17 Answers
- Use kubectl describe pod ... to find the node running your Pod and the container ID (docker://...)
- SSH into the node
- Run docker exec -it -u root CONTAINER_ID /bin/bash
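Putting those steps together, a rough sketch of the flow (the pod name is from the question; the node name and container ID are placeholders, and a docker-based node is assumed):
# 1. Find the node running the pod and its container ID (the docker://<ID> part)
kubectl describe pod jenkins-app-2843651954-4zqdp | grep -E 'Node:|Container ID:'
# 2. SSH into that node (how depends on your provider, e.g. gcloud compute ssh <node-name>)
# 3. On the node, exec into the container as root
sudo docker exec -it -u root <CONTAINER_ID> /bin/bash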
- Nevermind, I found the answer myself. I am using Google Cloud. The commands to SSH into a node are gcloud compute instances list and gcloud compute ssh <your_instance_name> – Wenjing
- If it helps anyone, ID above means the docker container ID. AFAIK, kubectl won't show the correct docker container ID. We have to use docker ps to get the correct docker container ID.
- Kinda obsolete answer now, considering that Docker has been deprecated in K8s version 1.20.
There are some plugins for kubectl that may help you achieve this: https://github.com/jordanwilson230/kubectl-plugins
One of the plugins, called 'ssh', allows you to exec as the root user by running (for example) kubectl ssh -u root -p nginx-0
- Super! I can't believe this plugin hasn't become as popular as it deserves. However, the krew plugin for kubectl exec-as ... doesn't seem to be keeping up with the latest Kubernetes. See my comment in the follow-up answer below.
- This plugin is not working with a modern k8s version, like 1.22 for example, that is using containerd. See github.com/jordanwilson230/kubectl-plugins/issues/40.
Adding to the answer from henning-jay, when using containerd as the runtime:
Get the container ID via
kubectl get pod <podname> -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's,.*//,,'
The container ID will be something like 7e328fc6ac5932fef37f8d771fd80fc1a3ddf3ab8793b917fafba317faf1c697
Look up the node for the pod:
kubectl get pod <podname> -o wide
On the node, trigger runc. Since it's invoked by containerd, the --root has to be changed:
runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 <containerID> sh
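For reference, the same flow can be scripted from the workstation side (a sketch; the pod name is a placeholder, and the runc --root path may differ between containerd setups):
POD=<podname>
NODE=$(kubectl get pod "$POD" -o jsonpath='{.spec.nodeName}')
CID=$(kubectl get pod "$POD" -o jsonpath='{.status.containerStatuses[0].containerID}' | sed 's,.*//,,')
echo "container $CID runs on node $NODE"
# then, on that node (via SSH):
# sudo runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 $CID sh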
Building on @jordanwilson230's answer: he also developed a bash script called exec-as
which uses Docker-in-Docker to accomplish this: https://github.com/jordanwilson230/kubectl-plugins/blob/krew/kubectl-exec-as
When installed via kubectl plugin manager krew → kubectl krew install exec-as
you can simply
kubectl exec-as -u <username> <podname> -- /bin/bash
This only works in Kubernetes clusters which allow privileged containers.
- Maybe this exec-as plugin hasn't been maintained lately? It doesn't work on AWS EKS v1.21. It simply hangs there till error: timed out waiting for the condition. However, the original kubectl ssh -u root ... works as shown in @jordanwilson230's original answer above. That makes me think that the exec-as plugin version is falling behind that of the ssh plugin.
- This also seems to only work on clusters that use the docker runtime, or at least it didn't work on one that uses containerd. – Andrew
Just in case you come here looking for an answer for minikube: the minikube ssh command can actually work together with the docker command here, which makes it fairly easy:
- Find the container ID:
$ minikube ssh docker container ls
- Add the -u 0 option to the docker command (quoting the whole docker command is necessary):
$ minikube ssh "docker container exec -it -u 0 <Container ID> /bin/bash"
NOTE: this is NOT for Kubernetes in general, it works for minikube only. While we need root access quite a lot in a local development environment, it's worth mentioning in this thread.
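A possible one-liner variant of the same idea (a sketch: the name filter is a placeholder, and the ID returned by minikube ssh may carry a trailing carriage return that needs stripping):
$ CID=$(minikube ssh "docker container ls -qf name=jenkins" | tr -d '\r')
$ minikube ssh "docker container exec -it -u 0 $CID /bin/bash"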
For my case, I needed root access (or sudo) in the container to chown a specific mount path.
I cannot SSH to the machine because I designed my infrastructure to be fully automated with Terraform, without any manual access.
Instead, I found that initContainers does the job:
initContainers:
  - name: volume-prewarming
    image: busybox
    command: ["sh", "-c", "chown -R 1000:0 {{ .Values.persistence.mountPath }}"]
    volumeMounts:
      - name: {{ .Chart.Name }}
        mountPath: {{ .Values.persistence.mountPath }}
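A quick way to verify the init container did its job (a sketch; the pod name and mount path are placeholders standing in for the templated values above):
kubectl get pod <podname> -o jsonpath='{.status.initContainerStatuses[0].state.terminated.reason}'
# expected output: Completed
kubectl exec <podname> -- ls -ldn <mountPath>
# the numeric owner of the path should now be 1000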
I've also created a whole course about running production-grade Kubernetes on AWS using EKS.
- Hi Abdennour. I am running through a similar issue, however I am using a git-sync sidecar that I mount. Once the sidecar is mounted the owner of the volume becomes root. I have added a question here if you can help : ) stackoverflow.com/questions/65457870/…– alt-f4
If you're using a modern Kubernetes version it's likely running containerd instead of docker for its container runtime.
To exec as root you must have SSH access and SUDO access to the node on which the container is running.
- Get the container id of the pod. Example:
kubectl get pod cassandra-0 -n cassandra -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's/.*\/\///'
8e1f2e5907087b5fd55d98849fef640ca73a5ca04db2e9fc0b7d1497ff87aed9
- Use runc to exec as root. Example:
sudo runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 8e1f2e5907087b5fd55d98849fef640ca73a5ca04db2e9fc0b7d1497ff87aed9 sh
In case anyone is working on AKS, follow these steps:
- Identify the pod that is running the container
- Identify the node that is running that pod (kubectl describe pod -n <namespace> <pod_name> | grep "Node:", or look for it on the Azure portal)
- SSH to the AKS cluster node
Once you are inside the node, run these commands to get into the container:
- sudo su (you must get root access to use docker commands)
- docker exec -it -u root ID /bin/bash (to get the container ID, use docker container ps)
In the k8s Deployment configuration, you can set the container to run as root:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
        - image: my-image
          name: my-app
          ...
          securityContext:
            allowPrivilegeEscalation: false
            runAsUser: 0
Notice the runAsUser: 0 property. Then connect to the pod/container as usual and you will be authenticated as root from the beginning.
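A quick sanity check after applying a Deployment like the one above (assuming it is named my-app):
kubectl exec deploy/my-app -- id
# expected to print something like: uid=0(root) gid=0(root) groups=0(root)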
- Insecure. This would run the main service as root. I guess the question was about just debugging/administering as root.
- @RaúlSalinas-Monteagudo, yep, the question was about a temporary solution for installing extra software; after the debugging is done that change should be reverted. – humkins
Working with kubernetes 1.21, none of the docker and kubectl-plugin approaches worked for me (this k8s 1.21 cluster uses cri-o as the container runtime).
What did work for me was using runc:
- Get the containerID via
kubectl get pod <podname> -o jsonpath="{.status.containerStatuses[].containerID}" | sed 's/.*\/\///'
- containerID is something like
4ed493495241b061414b94425bb03b682534241cf19776f8809aeb131fa5a515
- Get the node the pod is running on
kubectl describe pod <podname> | grep Node:
Node: mynode.cluster.cloud.local/1.1.148.63
- SSH into the node
- On the node, run (you might have to use sudo):
runc exec -t -u 0 containerID sh
so something like:
runc exec -t -u 0 4ed493495241b061414b94425bb03b682534241cf19776f8809aeb131fa5a515 sh
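If it helps, the node name and container ID can also be fetched in one go (a sketch; the pod name is a placeholder):
kubectl get pod <podname> -o jsonpath='{.spec.nodeName}{"\n"}{.status.containerStatuses[0].containerID}{"\n"}'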
Let's summarize what I found here in posts, comments and links. This works for me:
# First get list of nodes:
kubectl get nodes
$ NAME STATUS ROLES AGE VERSION
$ node-control-plane Ready control-plane,master 4d16h v1.21.1
$ node-worker NotReady <none> 4d16h v1.21.1
$ node-worker2 Ready <none> 4d16h v1.21.1
# Start a pod based on ubuntu which will connect directly to the node:
kubectl debug node/node-worker -it --image=ubuntu
$ Creating debugging pod node-debugger-ip-10-0-5-223.eu-west-2.compute.internal-6gs8d with container debugger on node ip-10-0-5-223.eu-west-2.compute.internal.
$ If you don't see a command prompt, try pressing enter.
root@ip-10-0-5-223:/#
# Now you are connected to the debug pod and the content of the node filesystem is at /host
# Let's chroot there:
root@ip-10-0-5-223:/# chroot /host
sh-4.2#
# Now you are connected inside the node, so you can check used space (df -h), running processes (top) or install things with yum (yum install htop mc -y)
# Let's get root access to some pod on this node; you need to find its CONTAINER ID:
sh-4.2# docker ps
#or "docker ps | less" then move around with arrows to find correct CONTAINER ID and quit with q
$ CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
$ 0d82a8768c1e centos "/bin/bash" 2 minutes ago Up 2 minutes k8s_prometheus-ip-11-2-6-218.compute.internal-alksdjf_default_asldkfj-lsadkfj-lsafjdk
sh-4.2# docker exec -it -u root 0d82a8768c1e /bin/bash
$ root@centos:/#
# and here we are, inside the pod with the root account - good luck.
Sources: Open a shell to a node using kubectl and post above
To log in as a different user I use the exec-as plugin in Kubernetes. Here are the steps you can follow.
Make sure git is installed.
Step 1: Install the krew plugin manager (the snippet below is for the fish shell; see the krew docs for bash/zsh)
begin
set -x; set temp_dir (mktemp -d); cd "$temp_dir" &&
set OS (uname | tr '[:upper:]' '[:lower:]') &&
set ARCH (uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/') &&
set KREW krew-$OS"_"$ARCH &&
curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/$KREW.tar.gz" &&
tar zxvf $KREW.tar.gz &&
./$KREW install krew &&
set -e KREW; set -e temp_dir
end
Step 2: Install exec-as
kubectl krew install exec-as
Step 3: Try with root or a different user
kubectl exec-as -u root frontend-deployment-977b8fd4c-tb5pz
WARNING: You installed plugin “prompt” from the krew-index plugin repository. These plugins are not audited for security by the Krew maintainers. Run them at your own risk.
- Thanks for providing an easy way to use this plugin, but it has been recommended in previous answers before. And it’s not working with modern k8s using containerd instead of docker.
A colleague of mine found this tool: https://github.com/ssup2/kpexec
It runs a highly privileged container on the same node as the target container and joins into the namespaces of the target container (IPC, UTS, PID, net, mount)
[…] kpexec now supports the following container runtimes.
- containerd
- CRI-O
- Docker
[…] The cnsenter pod must be created with hostPID and Privileged Option
To get root you could run something like
kpexec -it jenkins-app-2843651954-4zqdp -- /bin/bash
That's all well and good, but what about new versions of Kubernetes that use containerd? Using nerdctl exec -u root -ti 817d52766254 sh, you don't get full-fledged root; part of the system stays in read-only mode.
After you get the node name, another way to get a shell on it is using:
kubectl debug node/<node_name> -it --image=<image>
eg:
kubectl debug node/my_node -it --image=ubuntu
and after that:
- chroot /host
- docker container ls to find the container ID
- docker exec -it -u root ID /bin/bash
- The question is about a kubernetes cluster. This solution does not work for a remote cluster. – eNca
We can exec into a kubernetes pod with the following command:
kubectl exec --stdin --tty pod-name -n namespace-name -- /bin/bash
- The post is asking about executing commands as root. I don't understand what you mean.