Dockerfile tips and tricks

Latest revision as of 09:23, 18 January 2025

Docker complains about the version of requests

ERROR: docker 7.0.0 has requirement requests>=2.26.0, but you'll have requests 2.22.0 which is incompatible

Install the right version of the requests package:

pip install -U urllib3 requests==<right version>

Set up OpenCL for GPUs on Docker

The following are the essential components needed to set up OpenCL inside Docker[1]:

FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y upgrade \
  && apt-get install -y \
    ocl-icd-libopencl1 \
    opencl-headers \
    clinfo \
    ;

RUN mkdir -p /etc/OpenCL/vendors && \
    echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility

Docker Performance and Resource Tuning

This page is a good resource for Docker resource/performance tuning.

Root password inside a Docker container

  • Set the container root password when you build the Docker image yourself:
RUN echo 'root:Docker!' | chpasswd
or
RUN echo 'Docker!' | passwd --stdin root
  • To create or change the root password in a running container:

docker exec -itu 0 {container} passwd

  • Use -u 0 to log in as root, overriding the USER setting, when the Docker image comes from a third party:

docker container exec -u 0 -it mycontainer bash

Failed to create NAT chain Docker as ...

Reason - the Docker package is broken

Solution - $ sudo apt update && sudo apt upgrade && sudo systemctl restart docker.service[2]

Run script at Container stop[3]

By default, Docker stops your container by sending the SIGTERM signal to the entrypoint process (normally process id 1 in the container). If the container is still running after 10 seconds, docker stop and docker-compose down send the SIGKILL signal, which removes the process from the OS scheduler.

This behavior can be changed depending on:

  • The ENTRYPOINT in your Dockerfile, and how it behaves when receiving a signal
  • The STOPSIGNAL in your Dockerfile (default: SIGTERM, but not every base image uses it; see php:8.0-fpm and nginx)
  • The stop_signal in your docker-compose.yml file
  • The stop_grace_period in your docker-compose.yml file
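For illustration, the two docker-compose.yml keys look like this (the service name and the values here are assumptions, not from the original):

```yaml
services:
  app:
    image: php:8.0-fpm
    stop_signal: SIGTERM       # override the SIGQUIT set by php:8.0-fpm
    stop_grace_period: 30s     # wait 30s instead of 10s before SIGKILL
```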


For example, the php-fpm and nginx containers use SIGQUIT instead of SIGTERM as their stop signal, so that the processes shut down gracefully and users are not affected by the shutdown.

$ docker inspect nginx:latest | jq '.[].Config.StopSignal'
"SIGQUIT"
$ docker inspect php:7.4-fpm | jq '.[].Config.StopSignal'
"SIGQUIT"

Container stops only after 10 seconds

#--- create the init.sh script
cat<<EOT > init.sh
#!/bin/bash

#We don't trap the signal, so this script cannot react to SIGTERM and exit
echo "This container will not stop immediately after SIGTERM, it uses SIGQUIT"

sleep infinity # a foreground sleep, so bash cannot run any handler until it returns
EOT

chmod 755 init.sh

#--- create the Dockerfile
cat<<EOT > Dockerfile
FROM php:8.0-fpm

COPY . /

ENTRYPOINT ["/init.sh"]
EOT

Container stops as soon as SIGTERM is received ($ docker stop <container>)

cat<<EOT > init.sh
#!/bin/bash

#--- add a function to exit nicely (perhaps kill a few processes and remove some temp files)
function exit_container_SIGTERM(){
  echo "Caught SIGTERM"
  exit 0
}

#--- trap the SIGTERM signal
trap exit_container_SIGTERM SIGTERM 

echo "This container will stop immediately after SIGTERM"

sleep infinity &
wait
EOT

Select which signal to use with the STOPSIGNAL keyword in the Dockerfile

cat<<EOT > Dockerfile
FROM php:8.0-fpm

COPY . /

#--- override the SIGQUIT used in php:8.0-fpm
STOPSIGNAL SIGTERM

ENTRYPOINT ["/init.sh"]
EOT

Handle signal correctly in the bash script

If you don't take care of how you sleep at the end of the (bash) script, it will not catch any signals sent to it, even if it has a trap.

This does not work:

function exit_script(){
  echo "Caught SIGTERM"
  exit 0
}

trap exit_script SIGTERM

#--- my init.sh script
./start/my/program &
sleep infinity

This works:

function exit_script(){
  echo "Caught SIGTERM"
  exit 0
}

trap exit_script SIGTERM

#--- my init.sh script
./start/my/program &

#--- send sleep into the background, then wait for it.
sleep infinity &
#--- "wait" will wait until the command you sent to the background terminates, which will be never.
#--- "wait" is a bash built-in, so bash can now handle the signals sent by "docker stop"
wait
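Docker aside, the difference between the two patterns can be reproduced in any Linux shell; the file names, the short sleeps, and the timings below are illustrative, not from the original:

```shell
#!/bin/sh
# Variant 1: trap + foreground sleep -- bash defers the trap.
cat > /tmp/fg.sh <<'EOT'
#!/bin/bash
trap 'echo trapped; exit 0' TERM
sleep 5          # foreground: the trap waits until sleep returns
EOT

# Variant 2: trap + background sleep + wait -- trap fires at once.
cat > /tmp/bg.sh <<'EOT'
#!/bin/bash
trap 'echo trapped; exit 0' TERM
sleep 5 &
wait
EOT
chmod +x /tmp/fg.sh /tmp/bg.sh

/tmp/fg.sh > /tmp/fg.out & fg_pid=$!
/tmp/bg.sh > /tmp/bg.out & bg_pid=$!
sleep 1                          # let both scripts install their traps
kill -TERM "$fg_pid" "$bg_pid"
sleep 1

if kill -0 "$fg_pid" 2>/dev/null; then
  echo "foreground variant: still running after SIGTERM"
  child=$(pgrep -P "$fg_pid")    # clean up the stuck script and its sleep
  kill -9 "$fg_pid" "$child" 2>/dev/null
fi
echo "background variant wrote: $(cat /tmp/bg.out)"
```

The foreground variant is still alive one second after SIGTERM, while the background+wait variant has already run its trap and exited.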

Install nvm in Dockerfile[4]

RUN mkdir -p $NVM_DIR && \
   curl https://raw.githubusercontent.com/creationix/nvm/v0.36.0/install.sh | bash && \
   . $NVM_DIR/nvm.sh && \
   nvm install $NODE_VERSION
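The snippet above assumes NVM_DIR and NODE_VERSION are defined earlier in the Dockerfile; a fuller sketch (the base image and version values are assumptions, not from the original) might be:

```dockerfile
FROM ubuntu:20.04
# Assumed values; pick the versions you actually need.
ENV NVM_DIR=/root/.nvm
ENV NODE_VERSION=14.15.0
RUN apt-get update && apt-get install -y curl ca-certificates
RUN mkdir -p $NVM_DIR && \
    curl https://raw.githubusercontent.com/creationix/nvm/v0.36.0/install.sh | bash && \
    . $NVM_DIR/nvm.sh && \
    nvm install $NODE_VERSION
# Make node/npm available without sourcing nvm.sh in every RUN:
ENV PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
```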

Docker cleanup

When working with Docker, you can end up piling up unused images, containers, and datasets that clutter the output and take up disk space. Beyond images, disk space can also be consumed by unused containers, volumes, and networks. These objects are generally not removed unless you explicitly ask Docker to do so, so they can cause Docker to use extra disk space.


Although Docker provides a prune command for each type of object, it also has a single command that cleans up all dangling resources: images, containers, volumes, and networks that are not tagged or connected to a container.

The following command prunes images, containers, and networks only. Volumes are not pruned by default unless you specify the --volumes flag:

# docker system prune


There are also many possible filter[5] options that you can check on the Docker site[6][7].

Dangling image vs unused image

Dangling images are images that do not have a tag and do not have a child image; they display "<none>" as their name when you run the docker images command. The main reason to keep them around is build caching, in case you need to build multiple different top images from some common Docker image layers, because they can be used as independent layers that have no relationship to any tagged image.


An unused image, however, is an image that has tags but is currently not being used by a container.

Of course, it is safe to delete them once the final Docker image is built and ready to use.

  • List dangling images
docker images -f dangling=true
  • Remove dangling images
docker rmi $(docker images -f dangling=true -q)

Or

docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi

Or

docker images -a | grep none | awk '{ print $3; }' | xargs docker rmi --force
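The --no-run-if-empty flag in the xargs variant above is what keeps the pipeline safe when there are no dangling images: GNU xargs then skips the command instead of running docker rmi with no arguments. A docker-free sketch of that behavior:

```shell
# Empty input plus --no-run-if-empty: the command is never invoked,
# so only "done" is printed.
printf '' | xargs --no-run-if-empty echo removing
echo done
```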

To make life easier, we can add alias commands to the .bashrc file, like:

alias docker_clean_images='docker rmi $(docker images -a --filter=dangling=true -q)'

alias docker_clean_ps='docker rm $(docker ps --filter=status=exited --filter=status=created -q)'

Adding a path to container

The docker client does not know about environment variables that exist inside the container, so prepending the variable on the command line won't work.

If this is just for an interactive session, this should probably work[8]:

docker run -it <container> bash -c 'exec env PATH=/home/app:$PATH bash'

or

docker run -it <container> /path-to-script/entrypoint.sh bash
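Outside Docker, the exec/env pattern from the first command can be checked in any shell; /home/app is just the example directory used above:

```shell
# Prepend /home/app to PATH for a child shell, then print the
# resulting PATH from inside that shell:
bash -c 'exec env PATH=/home/app:$PATH bash -c "echo \$PATH"'
```

The output starts with /home/app:, confirming the child shell sees the extended PATH.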

Reference