How To Execute A Shell Command Before The Entrypoint Via The Dockerfile
Solution 1:
Images are immutable
A Dockerfile defines the build process for an image. Once built, the image is immutable (it cannot be changed). Runtime variables are not something that can be baked into this immutable image, so the Dockerfile is the wrong place to address this.
Using an entrypoint script
What you probably want to do is override the default ENTRYPOINT with your own script, and have that script do something with environment variables. Since the entrypoint script executes at runtime (when the container starts), this is the correct time to gather environment variables and do something with them.
First, you need to adjust your Dockerfile to know about an entrypoint script. While Dockerfile is not directly involved in handling the environment variable, it still needs to know about this script, because the script will be baked into your image.
Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["npm", "start"]
Now, write an entrypoint script which does whatever setup is needed before the command is run, and at the end, exec the command itself.
entrypoint.sh:
#!/bin/sh

# Where $ENVSUBS is whatever command you are looking to run
$ENVSUBS < file1 > file2

npm install

# This will exec the CMD from your Dockerfile, i.e. "npm start"
exec "$@"
Here, I have included npm install, since you asked about this in the comments. I will note that this will run npm install on every run. If that's appropriate, fine, but I wanted to point out it will run every time, which will add some latency to your startup time.
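If that startup cost matters, one common workaround is to guard the install so it only runs when dependencies are missing. A minimal sketch of how the relevant part of entrypoint.sh could look, assuming a node_modules directory in the working directory is a good enough signal:

#!/bin/sh
# Only install dependencies when they are not already present, so that
# repeated container starts skip the npm install cost.
if [ ! -d node_modules ]; then
  npm install
fi

# Hand off to the CMD as before
exec "$@"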
Now rebuild your image, so the entrypoint script is a part of it.
Using environment variables at runtime
The entrypoint script knows how to use the environment variable, but you still have to tell Docker to import the variable at runtime. You can use the -e flag to docker run to do so.
docker run -e "ENVSUBS=$ENVSUBS" <image_name>
Here, Docker is told to define an environment variable ENVSUBS, and the value it is assigned is the value of $ENVSUBS from the current shell environment.
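If you have several variables to pass, the same thing can be done from a file instead of repeating -e flags. A rough sketch (the file name env.list is just an example):

# Write the variable(s) to a file, then point docker run at it
echo "ENVSUBS=$ENVSUBS" > env.list
docker run --env-file env.list <image_name>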
How entrypoint scripts work
I'll elaborate a bit on this, because in the comments, it seemed you were a little foggy on how this fits together.
When Docker starts a container, it executes one (and only one) command inside the container. This command becomes PID 1, just like init or systemd on a typical Linux system. This process is responsible for running any other processes the container needs to have.
By default, the ENTRYPOINT is /bin/sh -c. You can override it in the Dockerfile, in docker-compose.yml, or on the docker command line.
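For example, overriding it on the docker command line looks roughly like this (the image name is a placeholder):

# Run the container with a different entrypoint; everything after the
# image name is passed to the new entrypoint as arguments
docker run --entrypoint /bin/sh <image_name> -c 'echo "temporary override"'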
When a container is started, Docker runs the entrypoint command and passes the command (CMD) to it as an argument list. Earlier, we defined our own ENTRYPOINT as /entrypoint.sh. That means that in your case, this is what Docker will execute in the container when it starts:
/entrypoint.sh npm start
Because ["npm", "start"]
was defined as the command, that is what gets passed as an argument list to the entrypoint script.
Because we defined an environment variable using the -e flag, this entrypoint script (and its children) will have access to that environment variable.
At the end of the entrypoint script, we run exec "$@". Because $@ expands to the argument list passed to the script, this will run
exec npm start
And because exec runs its arguments as a command, replacing the current process with itself, npm start ends up as PID 1 in your container.
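If the effect of exec is hard to picture, you can see the same behaviour in an ordinary shell outside Docker; this is just an illustration:

# Without exec: sh stays alive as the parent and sleep runs as its child
sh -c 'sleep 30'
# With exec: sleep replaces sh and takes over its PID, just as npm start
# replaces the entrypoint script inside the container
sh -c 'exec sleep 30'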
Why you can't use multiple CMDs
In the comments, you asked whether you can define multiple CMD entries to run multiple things.
You can only have one ENTRYPOINT and one CMD defined. These are not used at all during the build process. Unlike RUN and COPY, they are not executed during the build. They are added as metadata items to the image once it is built.
It is only later, when the image is run as a container, that these metadata fields are read, and used to start the container.
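You can confirm this by inspecting the metadata of a built image, for example (the image name is a placeholder):

# Show the stored ENTRYPOINT and CMD without starting a container
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' <image_name>
# Prints something like: [/entrypoint.sh] [npm start]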
As mentioned earlier, the entrypoint is what is really run, and it is passed the CMD as an argument list. The reason they are separate is partly historical. In early versions of Docker, CMD was the only available option, and ENTRYPOINT was fixed as being /bin/sh -c. But due to situations like this one, Docker eventually allowed ENTRYPOINT to be defined by the user.
Solution 2:
Will the RUN command be executed when the env variable is available?
Environment variables set with the -e flag are set when you run the container.
The problem is that the Dockerfile is read at build time, so the RUN command will not be aware of those environment variables.
The way to have environment variables set at build time is to add an ENV line to your Dockerfile. (https://docs.docker.com/engine/reference/builder/#/environment-replacement)
So your Dockerfile may be:
FROM node:latest
WORKDIR /src
ADD package.json .
ENV A YOLO
RUN echo"$A"
And the output:
$ docker build .
Sending build context to Docker daemon 2.56 kB
Step 1 : FROM node:latest
---> f5eca816b45d
Step 2 : WORKDIR /src
---> Using cache
---> 4ede3b23756d
Step 3 : ADD package.json .
---> Using cache
---> a4671a30bfe4
Step 4 : ENV A YOLO
---> Running in 7c325474af3c
---> eeefe2c8bc47
Removing intermediate container 7c325474af3c
Step 5 : RUN echo "$A"
---> Running in 35e0d85d8ce2
YOLO
---> 78d5df7d2322
You can see on the second-to-last line that, when the RUN command launched, the container was aware the environment variable had been set.
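To contrast this with run time, you can build the image with a tag and then override the variable when running it; a rough sketch (the tag env-demo is made up):

# The build prints YOLO at Step 5 because of the ENV line
docker build -t env-demo .
# At run time the same variable can be overridden with -e
docker run -e A=OVERRIDDEN env-demo node -e 'console.log(process.env.A)'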
Solution 3:
For images with bash as the default entrypoint, this is what I do to allow myself to run some scripts before the shell starts, if needed:
FROM ubuntu
COPY init.sh /root/init.sh
RUN echo 'a=(${BEFORE_SHELL//:/ }); for c in ${a[@]}; do source $c; done' >> ~/.bashrc
and if you want to source a script at container login you pass its path in the environment variable BEFORE_SHELL. Example using docker-compose:
version: '3'
services:
  shell:
    build:
      context: .
    environment:
      BEFORE_SHELL: '/root/init.sh'
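For completeness, init.sh can be any script you want sourced before the shell starts; a purely hypothetical example:

# /root/init.sh -- sourced via .bashrc when BEFORE_SHELL points at it
export APP_ENV=dev        # hypothetical variable for illustration
alias ll='ls -al'
echo "init.sh was sourced"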
Some remarks:
- If BEFORE_SHELL is not set then nothing happens (we have the default behavior)
- You can pass any script path available in the container, including mounted ones
- The scripts are sourced so variables defined in the scripts will be available in the container
- Multiple scripts can be passed (use a : to separate the paths); see the example after this list
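For example, with plain docker run and two scripts (both paths and the image name are hypothetical):

docker run -it -e BEFORE_SHELL='/root/init.sh:/opt/extra.sh' <image_name>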
Solution 4:
I had an extremely stubborn container that would not run anything on startup. This technique worked well, and it took me a day to find, as every other technique I tried failed.
- Run docker inspect postgres to find the entrypoint script. In this case, it was docker-entrypoint.sh. This might vary by container type and Docker version.
- Open a shell into the container, then find the full path:
find / -name docker-entrypoint.sh
- Inspect the file:
cat /usr/local/bin/docker-entrypoint.sh
In the Dockerfile, use sed to insert a command at line 2 of the script (using 2i).
# Insert into Dockerfile
RUN sed -i '2iecho Run on startup as user `whoami`.' /usr/local/bin/docker-entrypoint.sh
In my particular case, Docker ran this script twice on startup: first as root, then as user postgres. You can add a test so the command only runs under root.
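A sketch of what that guard could look like, reusing the same sed trick; the exact condition is up to you, and this is only an illustration:

# Insert a line that only prints during the pass that runs as root
RUN sed -i '2i[ "$(whoami)" != "root" ] || echo Run on startup as root.' /usr/local/bin/docker-entrypoint.sh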