Issue
I'm looking into moving some of our web servers to docker containers. The jwilder/nginx-proxy image looks interesting, and seems to do what we want, but how would one properly deploy a flask application in a container, and have it work with the jwilder/nginx-proxy server? To be clear, the flask application would also be running in a docker container.
In a separate, but related question, how would one do this for a django app?
It looks like there's a popular tiangolo/uwsgi-nginx-flask image, and a similar dockerfiles/django-uwsgi-nginx image. In this setup, from what I understand, the nginx-proxy container would direct traffic to the uwsgi-nginx-flask or django-uwsgi-nginx container. Is this a common way to do this?
The main thought I had was that in such a setup, we're running extra instances of nginx - one for every python/django app. Is this common? Or is it possible/beneficial/common to somehow have the nginx-proxy talk directly to uwsgi within the python app container?
I see that the nginx-proxy image has a VIRTUAL_PROTO=uwsgi option that other containers can be started with. Is this something that can be used to make things more efficient? Or is it more effort than it's worth?
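For reference, jwilder/nginx-proxy discovers app containers through environment variables, and VIRTUAL_PROTO=uwsgi makes the generated config use uwsgi_pass instead of proxy_pass, so nginx speaks the uwsgi binary protocol straight to the app container. A sketch of how that looks (the hostname and app image name are placeholders):

```shell
# Run the proxy, watching the Docker socket for containers to route to
docker run -d -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  jwilder/nginx-proxy

# Run a uWSGI-based app container; nginx-proxy routes app.example.com to it
# over the uwsgi protocol rather than HTTP
docker run -d \
  -e VIRTUAL_HOST=app.example.com \
  -e VIRTUAL_PROTO=uwsgi \
  -e VIRTUAL_PORT=3031 \
  my-uwsgi-app
```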
Edit: Or is the nginx instance that accompanies the flask/django project beneficial, since it can be used to serve static content, without which, you would need to configure the nginx-proxy image with the location of every project's static files?
Solution
Personally, I prefer to have Django in one container, NGINX in a separate container, other applications in other containers, and so on. For that I use docker-compose. You can check out my implementation of Django + NGINX + PostgreSQL here. (I have not used jwilder/nginx-proxy; instead I used the official NGINX Docker image.)
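As a rough sketch of that one-concern-per-container layout, a minimal docker-compose.yml might look like the following (the service names, image tags, and the WSGI module name are illustrative, not taken from my repo):

```yaml
# Illustrative docker-compose.yml: app, database, and NGINX each in their own container
version: "3"

services:
  web:                      # the Django/Flask app itself
    build: .
    command: gunicorn --bind 0.0.0.0:8000 myproject.wsgi   # module name is a placeholder
    expose:
      - "8000"

  db:                       # database in its own container
    image: postgres:alpine

  nginx:                    # official NGINX image in front of the app
    image: nginx:alpine
    ports:
      - "80:80"
    depends_on:
      - web
```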
That said, putting NGINX and the Python server in the same container is not a bad option either. I have used lightweight Alpine-based images for deploying Python, for example:
FROM nginx:mainline-alpine
# --- Python Installation ---
RUN apk add --no-cache python3 && \
python3 -m ensurepip && \
rm -r /usr/lib/python*/ensurepip && \
pip3 install --upgrade pip setuptools && \
if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
if [ ! -e /usr/bin/python ]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
rm -r /root/.cache
# --- Work Directory ---
WORKDIR /usr/src/app
# --- Python Setup ---
ADD . .
RUN pip install -r app/requirements.pip
# --- Nginx Setup ---
COPY config/nginx/default.conf /etc/nginx/conf.d/
RUN chmod g+rwx /var/cache/nginx /var/run /var/log/nginx
RUN chgrp -R root /var/cache/nginx
RUN sed -i.bak 's/^user/#user/' /etc/nginx/nginx.conf
RUN addgroup nginx root
# --- Expose and CMD ---
EXPOSE 5000
# gunicorn is backgrounded with `&` while nginx runs in the foreground, so the
# container's lifetime follows nginx (if gunicorn alone crashes, the container stays up)
CMD gunicorn --bind 0.0.0.0:5000 wsgi --chdir /usr/src/app/app & nginx -g "daemon off;"
Although it looks a bit messy, it works fine. Please check out my full implementation here.
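For context on the CMD above: `gunicorn wsgi` loads a module named `wsgi` and serves the callable named `application` inside it (gunicorn's default lookup). A Flask or Django app object is exactly such a callable; here is a framework-free sketch of the interface gunicorn expects, with the response body being purely illustrative:

```python
# wsgi.py -- minimal WSGI callable; gunicorn's default target is `wsgi:application`.
# A Flask app object (or Django's get_wsgi_application()) fills the same role.

def application(environ, start_response):
    """Respond with a plain-text greeting to every request."""
    body = b"Hello from behind nginx\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```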
Depending on how you want to deploy your Docker images, you can use either approach, but using Docker Compose would be the best solution IMHO. And in both setups you can use NGINX to serve your static content (no need to configure it for each static file).
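For example, in the combined-container setup, the per-app nginx config (the config/nginx/default.conf copied in the Dockerfile above) can serve static files straight from disk and forward everything else to gunicorn. The listen port and filesystem paths below are assumptions, not taken from my repo:

```nginx
# Hypothetical default.conf: nginx handles static assets itself and
# proxies all other requests to the gunicorn process started in the CMD
server {
    listen 80;

    # Static assets served directly, bypassing the Python app entirely
    location /static/ {
        alias /usr/src/app/app/static/;   # path is an assumption
    }

    # Everything else goes to gunicorn on the port it binds to
    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```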
Answered By - ruddra Answer Checked By - Katrina (PHPFixing Volunteer)