PHPFixing
Showing posts with label docker-compose. Show all posts

Saturday, November 12, 2022

[FIXED] How to clear cache for Django with Docker?

 November 12, 2022     django, docker, docker-compose, memcached, python     No comments   

Issue

I have a Django project that I am using with memcached and Docker. When I use sudo docker-compose up in development I'd like the entire cache to be cleared. Rather than disabling caching wholesale while in development mode, is there a way to run cache.clear() as noted in this question on each re-run of sudo docker-compose up?

I am not sure whether this should go in:

  1. docker-entrypoint.sh
  2. Dockerfile
  3. docker-compose.yml
  4. Somewhere else?

docker-compose.yml:

version: "3"
services:
  redis:
    image: "redis:alpine"
    command: "redis-server --requirepass ${REDISPASS} --bind 0.0.0.0"
    ports:
      - '6379:6379'
  memcached:
    image: "memcached:latest"
    ports:
      - '11211:11211'
  nginx:
      image: nginx:latest
      volumes:
        - ./configuration/nginx/conf.d/:/etc/nginx/conf.d
        - ./configuration/nginx/realtime:/etc/nginx/realtime
        - ./static_cdn/static_root/static:/static
      ports:
        - 80:80
        - 443:443      
      depends_on:
        - app_main
        - app_async_app1
        - app_async_app2
  app_main:
      command: "djangoAppName.settings.prod ${SECRET_KEY} 1 ${DB_PASS}     ${REDISPASS}"      
      image: "django_image_name"
      ports:
        - 0003:0003
      volumes:
        - ./static_cdn/static_root/:/static_cdn/
      depends_on:
        - redis
        - memcached
  app_async_app2:
      command: "djangoAppName.settings.prod ${SECRET_KEY} 2 ${DB_PASS} ${REDISPASS}"      
      image: "django_image_name"
      ports:
        - 0002:0002
      depends_on:
        - redis
        - memcached
        - app_main
  app_async_app1:
      command: "djangoAppName.settings.prod ${SECRET_KEY} 3 ${DB_PASS} ${REDISPASS}"    
      image: "django_image_name"
      depends_on:
        - redis
        - memcached
        - app_main
      ports:
        - 0001:0001
  react:
      command: "npm run"
      image: "django_image_name"
      volumes:
        - ./static_cdn/static_root/:/static_cdn/
      depends_on:
        - memcached
        - app_main

Solution

As per this answer, you can add a service that's executed before the memcached service and clears out the cache. As it looks like you're using Alpine Linux, you can add this service to docker-compose.yml:

clearcache:
    command: [sh, -c, "python manage.py clear_cache"] 

and then add to memcached:

memcached:
    ...
    depends_on:
     - clearcache
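Putting this together, a fuller sketch of the clearcache service could look like the following (the image name is taken from the compose file in the question; treat it as an assumption about your setup):

```yaml
# One-shot service: runs the cache-clearing management command and exits.
# memcached waits for it to be started first via depends_on.
clearcache:
    image: "django_image_name"   # assumed: the Django app image from the question
    command: [sh, -c, "python manage.py clear_cache"]
```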

There's also an example there that does it inline in the same command rather than relying on a separate service (though personally I don't like that approach).

For the cache clearing command, this answer has some useful discussion and posts.

clear_cache.py:

from django.core.management.base import BaseCommand
from django.core.cache import cache

class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        cache.clear()
        self.stdout.write('Cleared cache\n')
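Note that for `python manage.py clear_cache` to be discovered, Django requires the command module to live in a management/commands package inside one of your installed apps; the app name below is a placeholder:

```
yourapp/
    management/
        __init__.py
        commands/
            __init__.py
            clear_cache.py
```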


Answered By - Nobilis
Answer Checked By - Cary Denson (PHPFixing Admin)

Wednesday, November 9, 2022

[FIXED] How use custom dkim selector on mailu?

 November 09, 2022     dkim, docker, docker-compose     No comments   

Issue

I have a server with mailu installed and would like to know how to use a specific dkim selector.

I tried putting a file inside mailu/overrides/rspamd/dkim.conf

selector = "dkim1";
path = "/var/lib/rspamd/dkim/$domain.$selector.key";

and also mailu/overrides/rspamd/dkim_signing.conf

dkim_signing {
allow_envfrom_empty = true;
allow_hdrfrom_mismatch = false; 
allow_hdrfrom_multiple = false; 
allow_username_mismatch = false; 
path = "/var/lib/rspamd/dkim/$domain.$selector.key"; 
selector = "dkim1"; 
sign_authenticated = true; 
sign_local = true; 
symbol = "DKIM_SIGNED"; 
try_fallback = true; 
use_domain = "header"; 
use_esld = true; 
use_redis = true; 
key_prefix = "DKIM_KEYS";
}

but apparently I was not successful.


Solution

I found the answer: just set the variable DKIM_SELECTOR in mailu.env.
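For example, assuming the selector you want is dkim1, the relevant line in mailu.env would be:

```
DKIM_SELECTOR=dkim1
```

As with any environment change, the containers need to be recreated (docker-compose up -d) for it to take effect.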



Answered By - Luam Navega Ribeiro
Answer Checked By - Marie Seifert (PHPFixing Admin)

Tuesday, November 8, 2022

[FIXED] How to docker-compose an image hosted in a Digital Ocean private repo?

 November 08, 2022     digital-ocean, docker, docker-compose     No comments   

Issue

I connected doctl to my account, logged into the private registry, and validated that it's successfully authorized, but when I try to docker-compose up -d an image it says that pull access is denied. What could be the reason?

> doctl account get
User Email                 Team       Droplet Limit    Email Verified    User UUID                               Status
user@domain.name           My Team    25               true              aa11a5d9-1913-4f8d-b427-005fa9e11be6    active

Docker daemon is logged into the registry:

> docker login registry.digitalocean.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/admin/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Connection with the registry is established:

> doctl registry login
Logging Docker in to registry.digitalocean.com

My linux user is part of the docker group:

> groups
sudo docker

Docker images are present locally:

> docker images
REPOSITORY                                          TAG       IMAGE ID       CREATED        SIZE
registry.digitalocean.com/project/strategy          latest    60d4796574fc   26 hours ago   1.96GB
registry.digitalocean.com/project/redis             latest    ee373138aeec   47 hours ago   177MB

But I'm unable to execute containers:

(strategy) admin@ubuntu-s-2vcpu-2gb-fra1:/var/www/strategy$ docker-compose up -d
[+] Running 0/5
 ⠿ flower Error           1.6s
 ⠿ celery_worker Error    1.6s
 ⠿ celery_beat Error      1.6s
 ⠿ django Error           1.5s
 ⠿ redis-4 Error          1.5s
Error response from daemon: pull access denied for strategy, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

docker-compose.yml

version: '3.8'

services:
  django:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: strategy
    command: /start
    volumes:
      - .:/app
    ports:
      - "8004:8004"
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
    networks:
      - mynetwork

  redis-4:
    build:
      context: .
      dockerfile: ./compose/local/redis/Dockerfile
    container_name: redis-4
    image: redis
    expose:
      - "6375"
    networks:
      - mynetwork

  celery_worker:
    image: strategy
    command: /start-celeryworker
    volumes:
      - .:/app:/strategy
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
      - strategy
    networks:
      - mynetwork

  celery_beat:
    image: strategy
    command: /start-celerybeat
    volumes:
      - .:/app:/strategy
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
      - strategy
    networks:
      - mynetwork

  flower:
    image: strategy
    command: /start-flower
    volumes:
      - .:/app:/strategy
    env_file:
      - strategy/.env
    depends_on:
      - redis-4
      - strategy
    networks:
      - mynetwork

networks:
  mynetwork:
    name: mynetwork

Solution

The names of the images are not correct:

strategy -> registry.digitalocean.com/project/strategy
redis -> registry.digitalocean.com/project/redis
...

If you don't specify a registry, Docker assumes the image comes from Docker Hub.
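Applied to the compose file above, the image references would use the registry-qualified names (a sketch showing only the relevant keys):

```yaml
services:
  django:
    image: registry.digitalocean.com/project/strategy
  redis-4:
    image: registry.digitalocean.com/project/redis
  celery_worker:
    image: registry.digitalocean.com/project/strategy
```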



Answered By - Mihai
Answer Checked By - Willingham (PHPFixing Volunteer)

Sunday, November 6, 2022

[FIXED] Why React app won't reload in Docker Compose container

 November 06, 2022     docker, docker-compose, reactjs     No comments   

Issue

I'm trying to start my dev React app in Docker and develop it with live reload.

My Dockerfile:

FROM node:16.8.0-bullseye

WORKDIR /usr/src/app
COPY ./package.json .

RUN npm install
RUN npm install -g nodemon

COPY . .

CMD npm run start

My docker-compose.yml:

version: '3'
services:
  app:
    build:
      dockerfile: ./Dockerfile.dev
    volumes:
      - ".:/usr/src/app"
      - "/usr/src/app/node_modules"
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true

It starts the server, but when I edit the source code, it doesn't reload. What am I doing wrong?

Start React app in Docker and live reload on file changes


Solution

Try using WATCHPACK_POLLING=true instead of CHOKIDAR_USEPOLLING=true. Newer versions of react-scripts (built on webpack 5) watch files through webpack's watchpack, so the chokidar variable no longer has an effect.
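In the compose file from the question, that would mean changing only the environment entry (sketch):

```yaml
services:
  app:
    environment:
      - WATCHPACK_POLLING=true
```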



Answered By - Layeb Mazen
Answer Checked By - Gilberto Lyons (PHPFixing Admin)

[FIXED] How to change the default character set of mysql using docker-compose?

 November 06, 2022     docker, docker-compose, mysql     No comments   

Issue

When I save a string of Chinese characters, MySQL raises the error "Exception Value:
(1366, "Incorrect string value: '\xE5\xB0\x8F\xE6\x98\x8E' for column 'name' at row 1")". I checked the character set variables of MySQL, and they show this:

mysql> show variables like 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | latin1                     |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)

And my docker-compose.yml is as follows:

web:
    image: yetongxue/docker_test:1.2
    links:
      - "db"
    ports:
      - "8100:8000"
    volumes:
      - "/Users/yetongxue/docker_v/docker_test/media:/root/media"
    restart: always

db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: qwerasdf
      MYSQL_DATABASE: docker_db
    restart: always
    volumes:
      - "/Users/yetongxue/docker_v/docker_test/db:/var/lib/mysql"

I know how to set the character set of MySQL with my.cnf, but how can I do this in docker-compose.yml? Does anybody know? Thanks!


Solution

You can either build your own MySQL image where you modify my.cnf, or modify the command that starts MySQL's daemon to pass --character-set-server=utf8mb4 and --collation-server=utf8mb4_unicode_ci:

web:
    image: yetongxue/docker_test:1.2
    links:
      - "db"
    ports:
      - "8100:8000"
    volumes:
      - "/Users/yetongxue/docker_v/docker_test/media:/root/media"
    restart: always

db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: qwerasdf
      MYSQL_DATABASE: docker_db
    restart: always
    volumes:
      - "/Users/yetongxue/docker_v/docker_test/db:/var/lib/mysql"
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']

I recommend using utf8mb4, as it can store up to "4 bytes per multibyte character" (https://dev.mysql.com/doc/refman/5.5/en/charset-unicode-utf8mb4.html).
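If you'd rather stick with the my.cnf approach you already know, the official mysql image also reads extra config from /etc/mysql/conf.d, so you can mount a file into the db service instead (the host path ./mysql-conf.d and the file name are assumptions):

```yaml
db:
    image: mysql:5.7
    volumes:
      - "/Users/yetongxue/docker_v/docker_test/db:/var/lib/mysql"
      - "./mysql-conf.d:/etc/mysql/conf.d"
```

```ini
# ./mysql-conf.d/charset.cnf
[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
```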



Answered By - Javier Arias
Answer Checked By - David Goodson (PHPFixing Volunteer)

[FIXED] Why are containers not restarted?

 November 06, 2022     docker, docker-compose     No comments   

Issue

I updated my docker-compose file. Then when I rebuild the containers, it seems they are not restarted. Why?

Here we can see that the last step is not cached and a new image was created:

$ BUILDKIT_PROGRESS=plain docker compose --verbose -p bot -f docker-compose.yml -f docker-compose.dev.yml --env-file etc/db_env.conf up --detach --build
...
#16 [my-perl 12/12] COPY . .
#16 DONE 0.0s

#17 exporting to image
#17 exporting layers 0.0s done
DEBU[0001] stopping session                             
#17 writing image sha256:f64d73e7d5c5d5baa69df94dfc083bac08fd8395ae86e25204bad45d20007134 done
#17 naming to docker.io/library/bot_app done
#17 DONE 0.1s

Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 2/0
 ⠿ Container bot-db-1   Running                                                      0.0s
 ⠿ Container bot-app-1  Running                            

Solution

Found answer: https://github.com/docker/compose/issues/9259

This is a bug that was fixed in Docker Compose 2.6.1; my version is 2.3.3.



Answered By - Eugen Konkov
Answer Checked By - Robin (PHPFixing Admin)

[FIXED] where should I put .dockerignore?

 November 06, 2022     docker, docker-compose     No comments   

Issue


I have a Dockerfile in each folder (admin, portal, webpai) and docker-compose files in the root only (proxy.yml, services.yml). Where should I put the .dockerignore file? In each folder, or in the root folder only?


Solution

As each build instruction uses the specified directory (the one containing the Dockerfile) as its build context, you need to put a .dockerignore into every folder.
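For example, a minimal .dockerignore in a Node-based subfolder might contain (entries are illustrative, not taken from your project):

```
node_modules
npm-debug.log
.git
```

Since each folder's file only applies to that folder's build context, the three files can differ.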



Answered By - PHagemann
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

Saturday, November 5, 2022

[FIXED] How do configure docker compose to use a given subnet if a variable is set, or choose for itself if it isn't?

 November 05, 2022     docker, docker-compose, docker-networking, environment-variables     No comments   

Issue

I have the following networks configuration in my docker compose file.

networks:
    default:
        ipam:
            driver: default
            config:
                - subnet: ${DOCKER_SUBNET}

When DOCKER_SUBNET is set, the subnet specified in that variable is used as expected. When the variable is not set I get: ERROR: Invalid subnet : invalid CIDR address: because the variable is blank (which is entirely reasonable).

Is there a way to configure the ipam driver such that when the DOCKER_SUBNET variable is not set, docker-compose will choose an available subnet as it would normally do if the ipam configuration was not given?


Solution

Compose will only choose an available subnet if you don't provide any ipam configuration for the network. Compose doesn't have advanced functionality to modify config on the fly.

You could make the decision outside of compose, either with multiple compose files or a template based system, in shell or some other language that launches the docker-compose command.

Separate the compose network config from the rest of the service config in files:

docker-compose-net-auto.yml

version: "2.1"
networks:
  default:

docker-compose-net-subnet.yml

version: "2.1"
networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: ${DOCKER_SUBNET}

Then create a script launch.sh that makes the choice of which network file to include.

#!/bin/sh
if [ -z "$DOCKER_SUBNET" ]; then
  docker-compose -f docker-compose.yml -f docker-compose-net-auto.yml up
else
  docker-compose -f docker-compose.yml -f docker-compose-net-subnet.yml up
fi


Answered By - Matt
Answer Checked By - Cary Denson (PHPFixing Admin)

[FIXED] How to properly send env variables to image?

 November 05, 2022     docker, docker-compose, dockerfile, environment-variables     No comments   

Issue

I wrote a Docker image which needs to read some variables, so I wrote in the Dockerfile:

ARG UID=1000
ARG GID=1000

ENV UID=${UID}
ENV GID=${GID}
ENV USER=laravel

RUN useradd -G www-data,root -u $UID -d /home/$USER $USER
RUN mkdir -p /home/$USER/.composer && \
    chown -R $USER:$USER /home/$USER

This code allows me to create a laravel user which has the ID of the user that starts the container.

So a user that pulls this image sets this content in the docker-compose section:

env_file: .env

which have:

GROUP_ID=1001
USER_ID=1001

For some weird reason that I don't understand, when I exec into the container with the pulled image, the user laravel is mapped to the ID 1000, which is the default value set in the Dockerfile.

Instead, if I test the image using:

build:
  context: ./docker/php
  dockerfile: ./Dockerfile
  args:
    - UID=${GROUP_ID:-1000}
    - GID=${USER_ID:-1000}

I can correctly see the user laravel mapped as 1001. So the questions are the following:

  1. is the UID variable not reading from env file?
  2. is the default value overwriting the env value?

Thanks in advance for any help

UPDATE:

As suggested, I tried to change the user ID and group ID in the bash script executed in the entrypoint; in the Dockerfile I have this:

ENTRYPOINT ["start.sh"]

then, at the start of start.sh I've added:

usermod -u ${USER_ID} laravel
groupmod -g ${GROUP_ID} laravel

the issue now is:

usermod: user laravel is currently used by process 1
groupmod: Permission denied.
groupmod: cannot lock /etc/group; try again later.


Solution

The Docker build phase and the run phase are the key distinction here. The new user is added in the build phase, hence it is important to pass the dynamic values while building the Docker image, e.g.:

docker build --build-arg UID=1001 --build-arg GID=1001 .

or the case which you have already used and where it works (i.e. docker image is re-created with expected IDs), in docker-compose file:

build:
  context: ./docker/php
  dockerfile: ./Dockerfile
  args:
    - UID=${GROUP_ID:-1000}
    - GID=${USER_ID:-1000}

In the run phase, i.e. when starting a container instance of an already built Docker image, passing environment variables does not overwrite the values from the build phase. Hence, in your case you can omit passing the envs when starting the container.



Answered By - m19v
Answer Checked By - Timothy Miller (PHPFixing Admin)

Monday, October 24, 2022

[FIXED] How to establish database connection from wordpress docker

 October 24, 2022     docker, docker-compose, mysql, wordpress     No comments   

Issue

I'm trying to run a docker-compose WordPress setup by following this guide: https://docs.docker.com/compose/wordpress/

This is the yaml file as described in the guide:

version: '3.3'
services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress
       WORDPRESS_DEBUG: "true"
volumes:
    db_data: {}

After I run my

"docker-compose up -d" 

command, I go to "http://localhost:8000/" in my browser and get a white page with "Error establishing a database connection". According to the guide, WordPress should already show me the 5-minute installation at this point. When I run the container with WordPress debug set to true, this error message is shown:

Warning: mysqli_real_connect(): (HY000/2002): Connection refused in /var/www/html/wp-includes/wp-db.php on line 1612

Connection refused

I now use

docker exec -it container_id /bin/bash

and type "mysql -p". When I enter the MYSQL_ROOT_PASSWORD from the docker-compose file, I get access denied ("Access denied for user 'root'@'localhost' (using password: YES)").

I am not sure what I did earlier, but at some point it worked and I listed the databases and the mysql.users and the db and user were there.

So I don't even know what the problem here is...

And why can I not access as root anymore? Does anyone know what to do?

EDIT: changed port back to 3306, I tried 3308 just to see if that may be a port issue


Solution

I found another post where they used this yaml. Still not sure why this works, but it does.

version: '3.3'
services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: wordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - 8000:80
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
volumes:
    db_data: {}


Answered By - Klaxx
Answer Checked By - Mildred Charles (PHPFixing Admin)

Saturday, October 22, 2022

[FIXED] How do I communicate with the docker daemon from inside a container using unix sockets?

 October 22, 2022     communication, docker, docker-compose, sockets, unix-socket     No comments   

Issue

A bit of background, I have a docker-compose based application with multiple services. Each service may have n number of instances. My inter-service communication strategy requires the redis service to be aware of the current state of the application, in terms of when there are new instances added, and when instances die or are removed.

From reading multiple blog posts and stack overflow questions, I'm aware that the solution involves communicating with the host docker daemon by binding the unix socket /var/run/docker.sock to a socket in the container, but I am unable to get any traction. Most resources I have come across give a fairly shallow explanation of what is going on, and there is definitely lacking any sort of ELI5 tutorial out there.

Currently, in my docker-compose.yml, I have the following config as part of one of my nodejs based services (no, it's not part of the redis service because I am just at a proof of concept stage at the moment)...

volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro

I've seen this exact snippet dozens of times in other posts and stack overflow questions, but the explanations usually end there.

In my nodejs/express service, I have an endpoint I created for the purpose of testing whether my setup is working or not. It is using got by Sindre Sorhus for its ability to work with unix sockets.

app.get('/dockersocket', async (req, res) => {
  const data = await got('unix:/var/run/docker.sock:/var/run/docker.sock')
  res.status(200).json({ data: data })
})

Needless to say, it does not work in its current form. When I wrap the snippet above in a try/catch and console.log the error, I receive the output below...

{
  HTTPError: Response code 404 (Not Found)
  at EventEmitter.emitter.on (/usr/src/app/node_modules/got/source/as-promise.js:74:19)
  at processTicksAndRejections (internal/process/next_tick.js:81:5)
  name: 'HTTPError',
  host: null,
  hostname: 'unix',
  method: 'GET',
  path: '/var/run/docker.sock',
  socketPath: '/var/run/docker.sock',
  protocol: 'http:',
  url: 'http://unix/var/run/docker.sock:/var/run/docker.sock',
  gotOptions: {
    path: '/var/run/docker.sock',
    protocol: 'http:',
    slashes: true,
    auth: null,
    host: null,
    port: null,
    hostname: 'unix',
    hash: null,
    search: null,
    query: null,
    pathname: '/var/run/docker.sock:/var/run/docker.sock',
    href: 'http://unix/var/run/docker.sock:/var/run/docker.sock',
    retry: {
      retries: [Function],
      methods: [Set],
      statusCodes: [Set],
      errorCodes: [Set]
    },
    headers: {
      'user-agent': 'got/9.6.0 (https://github.com/sindresorhus/got)',
      'accept-encoding': 'gzip, deflate'
    },
    hooks: {
      beforeRequest: [],
      beforeRedirect: [],
      beforeRetry: [],
      afterResponse: [],
      beforeError: [],
      init: []
    },
    decompress: true,
    throwHttpErrors: true,
    followRedirect: true,
    stream: false,
    form: false,
    json: false,
    cache: false,
    useElectronNet: false,
    socketPath: '/var/run/docker.sock',
    method: 'GET'
  },
  statusCode: 404,
  statusMessage: 'Not Found',
  headers: {
    'content-type': 'application/json',
    date: 'Sun, 31 Mar 2019 01:10:06 GMT',
    'content-length': '29',
    connection: 'close'
  },
  body: '{"message":"page not found"}\n'
}

Solution

The Docker daemon API can be communicated with using HTTP endpoints, and by default listens on a UNIX socket. That means you can communicate with it like any normal HTTP server, with just a bit of extra handling for when it's a socket.

You are getting an error because while you did send a request to the socket, you are requesting the wrong path. The syntax for a request is:

PROTOCOL://unix:SOCKET_PATH:ENDPOINT_PATH

For your code, that means:

const data = await got('unix:/var/run/docker.sock:/var/run/docker.sock')

// protocol      = http (default by library)
// socket path   = /var/run/docker.sock
// endpoint path = /var/run/docker.sock

To fix your issue, you should request a valid Docker Engine API endpoint (documentation for v1.39) as the HTTP path. Example to list containers:

await got('unix:/var/run/docker.sock:/containers/json')

If you have curl handy, you can test this from your shell:

$ curl --unix-socket /var/run/docker.sock http://localhost/containers/json


Answered By - hexacyanide
Answer Checked By - Clifford M. (PHPFixing Volunteer)

Thursday, October 20, 2022

[FIXED] How to pass environment variables from docker-compose into the NodeJS project?

 October 20, 2022     docker, docker-compose, docker-image, dockerfile, node.js     No comments   

Issue

I have a NodeJS application, which I want to dockerize.

The application consists of two parts:

  • server part, running an API which is taking data from a DB. This is running on the port 3000;

  • client part, which is doing a calls to the API end-points from the server part. This is running on the port 8080;

With this, I have a variable named "server_address" in my client part and it has the value "localhost:3000". But here is the thing: both projects should be dockerized with separate Dockerfiles and combined in one docker-compose.yml file.

For various reasons, I have to run the Docker containers via the docker-compose.yml file. So is it possible to connect these things somehow and pass the server address externally into the NodeJS project?

docker-compose.yml

version: "3"
services:
  client-side-app:
    image: my-client-side-docker-image
    environment:
      - BACKEND_SERVER="here we need to enter backend server"
    ports:
      - "8080:8080"
  server-side-app:
    image: my-server-side-docker-image
    ports:
      - "3000:3000"

Both of the Dockerfiles look like:

FROM node:8.11.1
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]

By having these files, I have this concern:

  • will I be able to use the variable BACKEND_SERVER somehow in the project? And if yes, how? I'm not referring to the Dockerfile, but to the project itself.

Solution

Use process.env in your node.js code, like this:

process.env.BACKEND_SERVER

Then mention your variable in the docker-compose file:

version: "3"
services:
  client-side-app:
    image: my-client-side-docker-image
    environment:
      - BACKEND_SERVER="here we need to enter backend server"
    ports:
      - "8080:8080"
  server-side-app:
    image: my-server-side-docker-image
    ports:
      - "3000:3000"
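In the Node code, the variable then shows up on process.env at runtime. A small sketch (the helper name and the localhost fallback are illustrative, not from the question):

```javascript
// Hypothetical helper: builds the API base URL from an environment object,
// falling back to localhost:3000 when BACKEND_SERVER is not set.
function getBackendUrl(env) {
  const backendServer = env.BACKEND_SERVER || 'localhost:3000';
  return `http://${backendServer}`;
}

// In the client-side service:
// const apiBase = getBackendUrl(process.env);
```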


Answered By - prisar
Answer Checked By - Dawn Plyler (PHPFixing Volunteer)

Tuesday, October 18, 2022

[FIXED] When running psql in a Docker container, how to do I reference my Postgres host in another Docker container?

 October 18, 2022     docker, docker-compose, postgresql, psql     No comments   

Issue

I have the following two containers in my docker-compose.yml file

  postgres:
    image: postgres:10.5
    ports:
      - 5105:5432
    ...
  web:
    restart: always
    build: ./web
    ports:           # to access the container from outside
      - "8000:8000"
    env_file: .env
    command: /usr/local/bin/gunicorn directory.wsgi:application --reload -w 1 -b :8000
    volumes:
    - ./web/:/app
    depends_on:
      - postgres

When I'm logged in to my "web" container (an Ubuntu 18 container), I'd like to be able to log in to the Postgres container. How do I do this? I tried this

root@0868cef9c65c:/my-app# PGPORT=5432 PGPASSWORD=password psql -h localhost -Uchicommons directory_data
psql: error: could not connect to server: Connection refused
    Is the server running on host "localhost" (127.0.0.1) and accepting
    TCP/IP connections on port 5432?
could not connect to server: Cannot assign requested address
    Is the server running on host "localhost" (::1) and accepting
    TCP/IP connections on port 5432?

but this doesn't seem to be working.


Solution

In a Docker container, localhost refers to the container itself.

By default, Docker compose creates a docker bridge network and connects each container to it. From a container on the bridge network, you can reach other containers using their service names. So to reach the database container, you'd use postgres as the host name, like this

PGPORT=5432 PGPASSWORD=password psql -h postgres -Uchicommons directory_data

On the bridge network, you use the native ports. So it's port 5432 for Postgres. If you only need to access a container from other containers on the bridge network, you don't need to map the port to a host port. Mapping to a host port is only needed if you need to access the container from the host computer.



Answered By - Hans Kilian
Answer Checked By - Clifford M. (PHPFixing Volunteer)

[FIXED] How can I delete all local Docker images?

 October 18, 2022     docker, docker-compose, python     No comments   

Issue

I recently started using Docker and never realized that I should use docker-compose down instead of ctrl-c or docker-compose stop to get rid of my experiments. I now have a large number of unneeded docker images locally.

Is there a flag I can run to delete all the local docker images & containers?

Something like docker rmi --all --force (the --all flag does not exist, but I am looking for something with a similar idea).


Solution

Unix

To delete all containers, including their volumes, use:

docker rm -vf $(docker ps -aq)

To delete all the images:

docker rmi -f $(docker images -aq)

Remember, you should remove all the containers before removing all the images from which those containers were created.
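Both one-liners rely on shell command substitution: `$(docker ps -aq)` expands to the bare container IDs, which become arguments to `docker rm`. The same pattern can be sketched with `printf` standing in for `docker ps -aq`, so it runs even without a Docker daemon:

```shell
# printf stands in for `docker ps -aq` (which prints one ID per line);
# the unquoted $ids lets the shell split the IDs into separate arguments.
ids=$(printf '%s\n' a1b2c3 d4e5f6)
echo rm -vf $ids
# → rm -vf a1b2c3 d4e5f6
```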

Windows - Powershell

docker images -a -q | % { docker image rm $_ -f }

Windows - Command Line

for /F %i in ('docker images -a -q') do docker rmi -f %i


Answered By - techtabu
Answer Checked By - Timothy Miller (PHPFixing Admin)

[FIXED] How to pass environment variable to docker-compose up

 October 18, 2022     docker, docker-compose, dockerfile     No comments   

Issue

I am trying to run a container. I already have the image uploaded to a private Docker registry. I want to write a compose file to download and deploy the image, but I want to pass the TAG name as a variable from the docker-compose run command. My compose file looks like below. How can I pass the value for KB_DB_TAG_VERSION as part of the docker-compose up command?

version: '3'
services:
   db:
    #build: k-db
    user: "1000:50"
    volumes:
      - /data/mysql:/var/lib/mysql
    container_name: k-db
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    image:  XX:$KB_DB_TAG_VERSION
    image: k-db
    ports:
      - "3307:3306"

Solution

You have two options:

  1. Create the .env file as already suggested in another answer.
  2. Prepend KEY=VALUE pair(s) to your docker-compose command, e.g:

    KB_DB_TAG_VERSION=kb-1.3.20-v1.0.0 docker-compose up
    

    Exporting it earlier in a script should also work, e.g.:

    export KB_DB_TAG_VERSION=kb-1.3.20-v1.0.0
    docker-compose up
    
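Both options work because Compose performs `${VAR}` substitution from the calling environment before parsing the file. The expansion itself is ordinary shell-style substitution, so it can be previewed without Compose at all (the image name `XX` is taken from the question):

```shell
export KB_DB_TAG_VERSION=kb-1.3.20-v1.0.0
# Compose would resolve `image: XX:${KB_DB_TAG_VERSION}` to:
echo "image: XX:${KB_DB_TAG_VERSION}"
# → image: XX:kb-1.3.20-v1.0.0
```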


Answered By - Jakub Kukul
Answer Checked By - David Goodson (PHPFixing Volunteer)

[FIXED] How to reduce output in `docker-compose` up command

 October 18, 2022     continuous-integration, docker, docker-compose     No comments   

Issue

I'm looking for a way to reduce the output generated by docker compose up.
When running in CI, all the interactive output for download and extract progress is useless and generates lots of noise.

docker has --quiet but I don't see the same for docker compose.


Solution

There is a --quiet-pull option that reduces the output generated by docker compose up and docker compose run:

docker compose up --quiet-pull



Answered By - user2176681
Answer Checked By - Marie Seifert (PHPFixing Admin)

[FIXED] How to customise docker-compose containing the odoo app and postgresql with non default database name, user name and password?

 October 18, 2022     default, docker, docker-compose, odoo, postgresql     No comments   

Issue

I have an application (Odoo, but my question probably applies to any app) and a database, with the default database name, user and password. Everything is launched with docker-compose. With these default values, it works great (I have copy/pasted only what is relevant):

  db:
    image: postgres:14
    user: root
    environment:
      - POSTGRES_PASSWORD=odoo
      - POSTGRES_USER=odoo
      - POSTGRES_DB=postgres
    volumes:
      - ./postgresql:/var/lib/postgresql/data
  web:
    image: odoo:15
    user: root
    depends_on:
      - db
    environment:
      - HOST=db
      - USER=odoo
      - PASSWORD=odoo
    volumes:
      - ./etc:/etc/odoo
      - ./odoo-data:/var/lib/odoo

There is also a config file etc/odoo.conf which needs to be updated and I did (here the default values):

db_name = postgres
db_user = odoo
db_password = odoo

If, for example, I set the password to 123, the user name to my and the database name to mydb, then remove the containers and delete ./postgresql to restart clean, I get the following error from the database host:

odoo-db-1   | 2022-09-18 15:08:17.420 UTC [908] FATAL:  password authentication failed for user "my"
odoo-db-1   | 2022-09-18 15:08:17.420 UTC [908] DETAIL:  Role "my" does not exist.

Of course, I have updated both the docker-compose and odoo.conf files with my values.

What could I miss? My investigations at this point have failed.

Here is my docker-compose file:

version: '2'
services:
  db:
    image: postgres:14
    user: root
    environment:
      - POSTGRES_DB_FILE=/run/secrets/postgresql_db
      - POSTGRES_USER_FILE=/run/secrets/postgresql_user
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgresql_password
    restart: always             # run as a service
    volumes:
        - ./postgresql:/var/lib/postgresql/data
    secrets:
      - postgresql_user
      - postgresql_password
      - postgresql_db

  web:
    image: odoo:15
    user: root
    depends_on:
      - db
    ports:
      - "10013:8069"
      - "20013:8072" # live chat
    tty: true
    command: --
    environment:
      - HOST=db
      - USER_FILE=/run/secrets/postgresql_user
      - PASSWORD_FILE=/run/secrets/postgresql_password
    volumes:
      - /etc/timezone:/etc/timezone:fr
      - /etc/localtime:/etc/localtime:fr
      - ./addons:/mnt/extra-addons
      - ./etc:/etc/odoo
      - ./odoo-data:/var/lib/odoo
    secrets:
      - postgresql_user
      - postgresql_password
    restart: always             # run as a service

secrets:
  postgresql_db:
    file: odoo_pg_db
  postgresql_user:
    file: odoo_pg_user
  postgresql_password:
    file: odoo_pg_pass

Here is my etc/odoo.conf:

[options]
addons_path = /mnt/extra-addons
data_dir = /etc/odoo
admin_passwd = "my_own_admin_password"
logfile = /etc/odoo/odoo-server.log
db_name = my_db
db_user = myself
db_password = "my_db_user_password"
dev_mode = reload

The entrypoint.sh:

#!/bin/bash

set -e

# set the postgres database host, port, user and password according to the environment
# and pass them as arguments to the odoo process if not present in the config file
: ${HOST:=${DB_PORT_5432_TCP_ADDR:='db'}}
: ${PORT:=${DB_PORT_5432_TCP_PORT:=5432}}
: ${USER:=${DB_ENV_POSTGRES_USER:=${POSTGRES_USER:='odoo'}}}
: ${PASSWORD:=${DB_ENV_POSTGRES_PASSWORD:=${POSTGRES_PASSWORD:='odoo'}}}

# install python packages
pip3 install pip --upgrade
pip3 install -r /etc/odoo/requirements.txt

# sed -i 's|raise werkzeug.exceptions.BadRequest(msg)|self.jsonrequest = {}|g' /usr/lib/python3/dist-packages/odoo/http.py

DB_ARGS=()
function check_config() {
    param="$1"
    value="$2"
    if grep -q -E "^\s*\b${param}\b\s*=" "$ODOO_RC" ; then       
        value=$(grep -E "^\s*\b${param}\b\s*=" "$ODOO_RC" |cut -d " " -f3|sed 's/["\n\r]//g')
    fi;
    DB_ARGS+=("--${param}")
    DB_ARGS+=("${value}")
}
check_config "db_host" "$HOST"
check_config "db_port" "$PORT"
check_config "db_user" "$USER"
check_config "db_password" "$PASSWORD"

case "$1" in
    -- | odoo)
        shift
        if [[ "$1" == "scaffold" ]] ; then
            exec odoo "$@"
        else
            wait-for-psql.py ${DB_ARGS[@]} --timeout=30
            exec odoo "$@" "${DB_ARGS[@]}"
        fi
        ;;
    -*)
        wait-for-psql.py ${DB_ARGS[@]} --timeout=30
        exec odoo "$@" "${DB_ARGS[@]}"
        ;;
    *)
        exec "$@"
esac

exit 1

And:

cat odoo_pg_db
my_db
cat odoo_pg_user
myself
cat odoo_pg_pass
my_db_user_password

And my project folder tree:

.
├── addons
│   └── readme.md
├── docker-compose.yml
├── entrypoint.sh
├── etc
│   ├── addons
│   │   └── 15.0
│   ├── odoo.conf
│   ├── odoo.conf.meld
│   ├── odoo-server.log
│   ├── requirements.txt
│   └── sessions
├── odoo-data
├── odoo_pg_db
├── odoo_pg_pass
├── odoo_pg_user
├── postgresql [error opening dir]
├── README.md
├── run.sh
└── screenshots
    ├── odoo-13-apps-screenshot.png
    ├── odoo-13-sales-form.png
    ├── odoo-13-sales-screen.png
    └── odoo-13-welcome-screenshot.png

The odoo:15 Dockerfile does not hard-code a database name or user; it only sets an app user named odoo, but that is for the web app. It uses a config file with the default database name, user and password as depicted above, but I provide the modified one with my values. The Dockerfile refers to it with ENV: ENV ODOO_RC /etc/odoo/odoo.conf. It then runs an entrypoint containing:

DB_ARGS=()
function check_config() {
    param="$1"
    value="$2"
    if grep -q -E "^\s*\b${param}\b\s*=" "$ODOO_RC" ; then       
        value=$(grep -E "^\s*\b${param}\b\s*=" "$ODOO_RC" |cut -d " " -f3|sed 's/["\n\r]//g')
    fi;
    DB_ARGS+=("--${param}")
    DB_ARGS+=("${value}")
}
check_config "db_host" "$HOST"
check_config "db_port" "$PORT"
check_config "db_user" "$USER"
check_config "db_password" "$PASSWORD"

With these check_config, it gets my values from the config file.
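One way to see what that grep/cut/sed pipeline actually extracts is to run it against a throwaway config file (the file name here is made up for the demo; the values come from the question's odoo.conf):

```shell
# Hypothetical mini odoo.conf, for demonstration only
cat > /tmp/odoo_rc_demo <<'EOF'
db_user = myself
db_password = "my_db_user_password"
EOF

# The same extraction check_config performs: field 3 of "param = value",
# with surrounding quotes stripped by sed
value=$(grep -E "^\s*\bdb_password\b\s*=" /tmp/odoo_rc_demo | cut -d " " -f3 | sed 's/["\n\r]//g')
echo "$value"
# → my_db_user_password
```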

The postgres:14 Dockerfile contains only the superuser name postgres, set with a useradd.

Note: of course, there is more to be done after this, like replacing the root user, but that's another topic.


Solution

The postgresql folder contents were not actually deleted by sudo rm -fr postgresql/* when the containers were removed, since they had been created by the odoo user. I had to run sudo chmod -R 777 postgresql first.

Now I can see in the docker console that the environment variables are correctly set. Other problems now, but this one is solved.



Answered By - lalebarde
Answer Checked By - David Marino (PHPFixing Volunteer)

[FIXED] How to access contents of some docker container in totally different docker container

 October 18, 2022     docker, docker-compose, docker-container     No comments   

Issue

I have the following project structure

  • client folder
    • package.json
    • Dockerfile
  • server folder
    • package.json
    • Dockerfile

client needs an npm run devserver command and server needs an npm run develop command. Running both commands simultaneously in two different terminals runs the application locally.

The client folder uses some files that are present in the server folder when running the devserver command. Now if I create separate Dockerfiles in the client and server folders, the devserver command won't be able to access the files in the server folder, and hence I'm unable to start my application.

Is there any way I can access the files with Docker, maybe using docker-compose? I'm not able to figure it out.


Solution

You can put docker-compose.yaml in the same folder as server and client, specify the build context as ., and provide an additional dockerfile option to meet your requirement. For example:

structure:

root@pie:~/20221015# tree
.
├── client
│   └── Dockerfile
├── docker-compose.yaml
└── server
    ├── Dockerfile
    └── file_in_server

2 directories, 4 files

docker-compose.yaml:

version: "3.7"

services:
  client:
    image: client_image
    build:
      context: .
      dockerfile: client/Dockerfile

client Dockerfile:

FROM alpine
COPY server/file_in_server /tmp

execution:

root@pie:~/20221015# docker-compose build --no-cache
Building client
Step 1/2 : FROM alpine
 ---> 9c6f07244728
Step 2/2 : COPY server/file_in_server /tmp
 ---> c5cc162bad75

Successfully built c5cc162bad75
Successfully tagged client_image:latest
root@pie:~/20221015# docker run --rm -it client_image ls /tmp/file_in_server
/tmp/file_in_server

You can see that the client Dockerfile successfully accesses the file in the server folder. See the Compose build definition documentation if you want to dig deeper.



Answered By - atline
Answer Checked By - Mary Flores (PHPFixing Volunteer)

[FIXED] How can I setup a docker container for NestJS in production?

 October 18, 2022     docker, docker-compose, dockerfile, nestjs, node.js     No comments   

Issue

I've been trying to setup a docker-compose for an nestjs application, mysql and redis for a while now. I already got the mysql, redis, and nestjs development containers to work fine. The issues come when I try to setup an additional container for nestjs in production, where I've been getting some problems along the way.

In a nutshell, the most common error I've been getting is that npm is not able to find the package.json in the current workspace, although I copied it before running the command that causes the error (either npm install or npm run build).

/docker-compose.yml

version: '3.8'

networks:
  nesjs-network:
    driver: bridge

services:
    redis:
      container_name: nestjs_redis
      image: redis
      environment:
        - ALLOW_EMPTY_PASSWORD=yes
      networks:
        - nesjs-network
      ports:
        - '${FORWARD_REDIS_PORT:-5003}:6379'
    dev:
        container_name: nestjs_dev
        image: nestjs-api-dev:1.0.0
        build:
            context: ./Docker
            target: development
            dockerfile: Dockerfile
        command: npm run start:dev
        ports:
            - 3000:3000
            - 9229:9229
        networks:
            - nesjs-network
        volumes:
            - .:/usr/src/app
            - /usr/src/app/node_modules
        restart: unless-stopped
        env_file: '.env'
        depends_on:
          - database
          - redis
        links:
          - database
          - redis
    prod:
        container_name: nestjs_prod
        image: nestjs-api-prod:1.0.0
        build:
            context: ./Docker
            target: production
            dockerfile: Dockerfile
        # command: npm run start:prod
        ports:
            - 3000:3000
            - 9229:9229
        networks:
            - nesjs-network
        volumes:
            - .:/usr/src/app
            - /usr/src/app/node_modules
        restart: unless-stopped
        env_file: '.env'
        depends_on:
          - database
          - redis
        links:
          - database
          - redis

    database:
      build:
        context: ./Docker
        dockerfile: mysql8.Dockerfile
      image: mysql/mysql-server:latest
      container_name: database
      restart: unless-stopped
      tty: true
      ports:
        - '${FORWARD_DB_PORT:-3306}:3306'
      env_file: '.env'
      command: --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci --authentication_policy=mysql_native_password --host_cache_size=0
      environment:
          MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
          MYSQL_DATABASE: '${DB_NAME}'
          MYSQL_USER: '${DB_USER}'
          MYSQL_PASSWORD: '${DB_PASSWORD}'
          MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
          TZ: '${APP_TIMEZONE-America/New_York}'
      networks:
        - nesjs-network
      volumes:
        - dbdata:/var/lib/mysql:rw,delegated

#Volumes
volumes:
  dbdata:
    driver: local

/docker/DockerFile

###################
# BUILD FOR LOCAL DEVELOPMENT
###################

FROM node:18-alpine AS development

WORKDIR /usr/src/app

COPY package*.json ./

# RUN apk add --nocache udev ttf-freefont chromium git
RUN apk add udev ttf-freefont chromium git
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
ENV CHROMIUM_PATH /usr/bin/chromium-browser

RUN npm install -g npm@8.19.2

RUN npm install glob rimraf

# RUN npm install --only=development
RUN npm ci

COPY . .

# Next line needs to be tested
EXPOSE 3000
EXPOSE 9229


###################
# PRODUCTION
###################

FROM node:18-alpine as production

ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}

WORKDIR /usr/src/app

COPY package*.json ./

RUN apk update \
 && apk add ca-certificates wget \
 && update-ca-certificates

# RUN apk add --nocache udev ttf-freefont chromium git
RUN apk add udev ttf-freefont chromium git
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD true
ENV CHROMIUM_PATH /usr/bin/chromium-browser

# RUN npm install -g npm@8.19.2

# RUN npm install glob rimraf

# RUN npm install --only=production
RUN npm ci

COPY . .

EXPOSE 3000
EXPOSE 9229

COPY --from=development /usr/src/app/dist ./dist
# RUN npm run build

CMD ["node", "dist/main"]

No matter how many tweaks I add or change, I keep getting the same kind of errors. I also tried using --chown=node:node every time I copy files, and changing to the node user (USER node), but nothing changes.

The most common error I get:

=> ERROR [build 7/8] RUN npm run build
> [build 7/8] RUN npm run build:
#0 0.518 npm ERR! code ENOENT
#0 0.519 npm ERR! syscall open
#0 0.519 npm ERR! path /usr/src/app/package.json
#0 0.520 npm ERR! errno -2
#0 0.521 npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json'
#0 0.521 npm ERR! enoent This is related to npm not being able to find a file.
#0 0.521 npm ERR! enoent
#0 0.522
#0 0.522 npm ERR! A complete log of this run can be found in:
#0 0.522 npm ERR!     /root/.npm/_logs/2022-10-15T19_02_39_449Z-debug-0.log
failed to solve: executor failed running [/bin/sh -c npm run build]: exit code: 254

Would anyone know what I might be doing wrong? All the containers work fine including the dev one for nestjs, but no luck with making the one for production.


Solution

Just as David suggested in the question's comments, using ./Docker in the docker-compose build context was causing these path issues inside the Dockerfile:

    prod:
        container_name: nestjs_prod
        image: nestjs-api-prod:1.0.0
        build:
            context: ./Docker
            target: production
            dockerfile: Dockerfile

Once I changed it to:

    prod:
        container_name: nestjs_prod
        image: nestjs-api-prod:1.0.0
        build:
            context: .
            target: production
            dockerfile: docker/Dockerfile

the error didn't happen again and npm was able to find the package.json file with no issues!
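The underlying rule: paths in COPY instructions (like `COPY package*.json ./`) resolve relative to the build context, not to the Dockerfile's own directory. With context `./Docker` there is no `package.json` to copy, so the later `npm run build` fails. A self-contained shell sketch of that path resolution, with made-up directory names and no Docker involved:

```shell
# Simulate the two build contexts without invoking Docker.
mkdir -p /tmp/ctx-demo/docker && cd /tmp/ctx-demo
echo '{"name":"demo"}' > package.json

# Context "." — package.json is visible to COPY:
[ -f ./package.json ] && echo "context . : found"
# Context "./docker" — COPY would look here and find nothing:
( cd docker && { [ -f ./package.json ] && echo "context ./docker : found" || echo "context ./docker : missing"; } )
# → context . : found
# → context ./docker : missing
```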



Answered By - Christian De Santis
Answer Checked By - Marie Seifert (PHPFixing Admin)

[FIXED] How can I check if docker compose plugin is installed?

 October 18, 2022     bash, docker, docker-compose     No comments   

Issue

I can check if docker is installed through the which docker or command -v docker commands. But I need to check if docker's compose plugin is installed (I will use it like docker compose up -d later).


Solution

Write on terminal:

$ docker compose --version

The output looks like:

Docker Compose version vX.Y.Z

Source: https://docs.docker.com/engine/reference/commandline/compose/
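For a non-interactive check in a script, the command's exit status is more useful than its output: `docker compose version` exits non-zero when the compose plugin is missing. A sketch of that guard, demonstrated with `true`/`false` stand-ins so it runs even on a machine without Docker:

```shell
# $@ is the command under test; only its exit status matters.
check_cmd() { "$@" >/dev/null 2>&1 && echo installed || echo missing; }

# Real usage would be:  check_cmd docker compose version
check_cmd true    # → installed
check_cmd false   # → missing
```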



Answered By - tremendows
Answer Checked By - Katrina (PHPFixing Volunteer)
Copyright © PHPFixing