PHPFixing
Showing posts with label gitlab-ci-runner. Show all posts

Sunday, October 9, 2022

[FIXED] Why is GitLab CI/CD ignoring certain files when passing them as an artifact?

 October 09, 2022     continuous-deployment, continuous-integration, deployment, gitlab, gitlab-ci-runner     No comments   

Issue

I have a simple CI/CD setup for my GitLab repo.

During the build job, the commands install the NPM dependencies and run a build script that uses Webpack to build the JS/SCSS assets.

Then, since I don't want any of the source files to be uploaded to my server, I remove them along with some other files that are not needed for deployment.

However, the parent directory htdocs is uploaded as an artifact so it can be used in the deploy job, and when I run ls -la and ls ./assets -la in the deploy job, I can still see all of the files that I removed earlier in the build job.

When I download the artifact ZIP file from the browser, everything is correct as all of my unneeded files are gone.

Why is it, then, that I can still see BOTH the source and the distributed files in the deploy job? Is this a bug?

At the moment this is causing all of my source files to be uploaded to my server which is not what I want. Can anyone help?

image: node:11.9.0

before_script:
  - cd ./htdocs

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - npm install --quiet
    - npm run build
    - rm -rf node_modules assets/src
    - rm -r .??* package.json package-lock.json README.md webpack.*.js
  artifacts:
    paths:
      - ./htdocs/

deploy:
  stage: deploy
  cache: {}
  dependencies:
    - build
  script:
    - ls -la
    - ls ./assets/ -la


Solution

Every job in GitLab by default clones the repository and then downloads any artifacts you set in the dependencies. You should see this at the start of the job log of the deploy stage.

To stop your source files from being uploaded in your deploy stage, you could set the GIT_STRATEGY to none.

Eg:

deploy:
  stage: deploy
  cache: {}
  variables:
    GIT_STRATEGY: none
  dependencies:
    - build
  script:
    - ls -la
    - ls ./assets/ -la


Answered By - Rekovni
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How can I securely push from a GitLab Runner KubernetesExecutor pod to a private container registry?

 October 09, 2022     continuous-integration, docker, gitlab-ci, gitlab-ci-runner, kubernetes     No comments   

Issue

Goal

Build a CI/CD pipeline that multiple GitLab repositories with a certain project structure can make use of. For this, a Docker container with Python code is built and subsequently pushed securely to Google Cloud's Container Registry.

Set up

  • KubernetesExecutor is installed on Kubernetes Engine using the Helm chart as provided by GitLab.
  • The base image for the build process (runners.image in values.yaml) is a custom one, as it helps to automatically containerise the provided repository. This is worth mentioning because it comes from the same private registry that the newly built image should be pushed to.
  • Right now, building the container from the repository runs successfully (see code below).

Problem

How can I push the image to the Container Registry without adding a service account key to a Docker image (otherwise, please convince me this isn't bad practice)?

Code

.gitlab-ci.yml

services:
  - docker:19.03.1-dind

stages:
  - build

build:
  stage: build
  script:
    - docker build -t ${CONTAINER_REGISTRY}/python-container-test:latest .
    # This line is where I'd need to use `docker login`, I guess.
    - docker push ${CONTAINER_REGISTRY}/python-container-test:latest

values.yaml (Helm)

It's worth mentioning that the following environment variables are set by the GitLab Runner:

runners:
  env:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
    CONTAINER_REGISTRY: eu.gcr.io/<project_id>

Direction of solution

I think I should be able to mount a secret from the Kubernetes cluster to the GitLab Runner build pod, but I can't seem to find a way to do that. Then, I should be able to add the following line into .gitlab-ci.yml:

cat mounted_secret.json | docker login -u _json_key --password-stdin https://eu.gcr.io

Setting up config.toml to use a secret volume should work. However, with a Helm chart this doesn't seem possible yet.
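If such a mount were available, the login step could sit directly in the job script. This is only a sketch of the idea from the question; the mount path /keys and the file name mounted_secret.json are assumptions:

```yaml
build:
  stage: build
  script:
    # assumes the cluster secret is mounted read-only at /keys in the build pod
    - cat /keys/mounted_secret.json | docker login -u _json_key --password-stdin https://eu.gcr.io
    - docker build -t ${CONTAINER_REGISTRY}/python-container-test:latest .
    - docker push ${CONTAINER_REGISTRY}/python-container-test:latest
```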

Notes

  • It is possible to set protected environment variables in GitLab CI, but I'd rather not, as they're harder to maintain.
  • I've investigated this answer, but this says I need to add a key to my Docker image.
  • Looked into the GitLab documentation on using a private container registry, but don't seem to get much further with that.
  • A similar problem would occur when, for example, it must connect to a database during the build process.

Solution

It's not possible with the default Helm chart as provided by GitLab. However, there is a workaround if you customise their chart.

In templates/configmap.yaml, it's possible to edit the entrypoint. At the very end, the runner is started as follows:

# Start the runner
exec /entrypoint run --user=gitlab-runner \
     --working-directory=/home/gitlab-runner

For this to work, a config.toml is generated based upon the provided values.yaml (with the right Runner token). This means that right before this step, we could edit the config.toml to our needs. In my case, I simply added:

echo "    [[runners.kubernetes.volumes.secret]]" >> ${CONFIG_FILE}
echo "      name = \"{{ .Values.secretName }}\"" >> ${CONFIG_FILE}
echo "      mount_path = \"/keys\"" >> ${CONFIG_FILE}
echo "      read_only = true" >> ${CONFIG_FILE}

Where ${CONFIG_FILE} is /home/gitlab-runner/.gitlab-runner/config.toml.
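With those lines appended, the generated config.toml should end up containing a section like the following sketch (the secret name shown is a placeholder for whatever .Values.secretName resolves to):

```toml
[[runners]]
  [runners.kubernetes]
    [[runners.kubernetes.volumes.secret]]
      name = "my-registry-secret"   # placeholder for .Values.secretName
      mount_path = "/keys"
      read_only = true
```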

Finally, you can deploy your GitLab Runner using:

$ helm install project_name -f values.yaml <path to chart>


Answered By - bartcode
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How do you execute a script after a job is cancelled in GitLab?

 October 09, 2022     continuous-integration, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

I am looking for a way to clean up the runner after a job has been cancelled in GitLab. We often have to cancel running jobs because the runner sometimes gets stuck in the test pipeline, and I can imagine a couple of other scenarios where you would want to cancel a job and run a clean-up script afterwards. I am looking for something like after_script, but only for the case when a job was cancelled. I checked the GitLab keyword reference but could not find what I need.

The following part of my gitlab-ci.yaml shows the test stage, which I would like to clean up gracefully by calling docker-compose down when the job is cancelled.

I am using a single gitlab-runner. Also, I don't use dind.

test_stage:
  stage: test
  only:
  - master
  - tags
  - merge_requests
  image: registry.gitlab.com/xxxxxxxxxxxxxxxxxxx
  variables:
    HEADLESS: "true"
  script:
  - docker login -u="xxxx" -p="${QUAY_IO_PASSWORD}" quay.io  
  - npm install
  - npm run test
  - npm install wait-on  
  - cd example
  - docker-compose up --force-recreate --abort-on-container-exit --build traefik frontend &
  - cd ..
  - apt install -y iproute2
  - export DOCKER_HOST_IP=$( /sbin/ip route|awk '/default/ { print $3 }' )
  - echo "Waiting for ${DOCKER_HOST_IP}/xxxt"
  - ./node_modules/.bin/wait-on "http://${DOCKER_HOST_IP}/xxx" && export BASE_URL=http://${DOCKER_HOST_IP} && npx codeceptjs run-workers 2
  - cd example
  - docker-compose down
  after_script:
  - cd example && docker-compose down 
  artifacts:
    when: always
    paths:
      - /builds/project/tests/output/
  retry:
    max: 2
    when: always
  tags: [custom-runner]

Solution

Unfortunately this is not currently possible in GitLab. There have been several tickets opened in their repos, with this one being the most up-to-date.

As of the day that I'm posting this (September 27, 2022), there have been at least 14 missed deliverables for this. GitLab continues to say it's coming, but has never delivered it in the six years that this ticket has been open.

There are mitigations for automatic job cancellation, but unfortunately they will not help in your case.

Based on your use case, I can think of two different solutions:

  1. Create a wrapper script that detects when parts of your test job are hanging
  2. Set a timeout on the pipeline (In GitLab you can go to Settings -> CI/CD -> General Pipelines -> Timeout)

Neither of these solutions is as robust as a native GitLab implementation would be, but they can at least prevent a job from hanging for an eternity and clogging up everything else in the pipeline.
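A minimal sketch of option 1, assuming coreutils `timeout` is available as the hang detector. The commands are placeholders, not the asker's real test step:

```shell
#!/bin/sh
# Hedged sketch: `sleep 5` stands in for a hang-prone test step such as
# `npx codeceptjs run-workers 2`; here it is killed after a 2-second limit.
timeout 2 sleep 5
STATUS=$?                  # coreutils timeout exits with 124 when the limit is hit
echo "test step exit: $STATUS"
# The cleanup step always runs, mirroring the intent of after_script:
echo "cleanup"             # placeholder for: cd example && docker-compose down
exit 0
```

In a real job the limit would be generous (e.g. `timeout 20m`) and the final step would be the actual `docker-compose down` call from the after_script.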



Answered By - Josh Correia
Answer Checked By - Candace Johnson (PHPFixing Volunteer)


Tuesday, September 27, 2022

[FIXED] how to run pipeline only on HEAD commit in GitlabCi runner?

 September 27, 2022     continuous-deployment, continuous-integration, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

We have a CI pipeline on our repository, hosted in GitLab.

We set up gitlab-runner on our local machine.

The pipeline runs 4 steps:

  • build
  • unit tests
  • integration tests
  • quality tests

The whole pipeline takes almost 20 minutes.

The pipeline triggers on each push to a branch.

Is there a way to configure the gitlab-runner so that, if the HEAD of the branch it is currently building changes, the running pipeline is automatically cancelled? Only the latest version matters.

For example, of the two runs shown in the (omitted) screenshot, the lower one is unnecessary.

gitlab-ci.yml

stages:
  - build
  - unit_tests
  - unit_and_integration_tests
  - quality_tests

build:
  stage: build
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile


unit_and_integration_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_and_integration_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"


unit_tests:
  variables:
    GIT_STRATEGY: clone

  stage: unit_tests
  except:
    - /^milestone-.*$/
  script:
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

quality_tests:
  variables:
    GIT_STRATEGY: clone
  stage: quality_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT_EVAL=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
  cache: {}

Edit after @siloko's comment:

I already tried the "Auto-cancel redundant, pending pipelines" option in the settings menu.

I want to cancel running pipelines, not pending ones.


Solution

After further investigation, I found that I had 2 active runners on one of my machines: one shared runner and another specific runner. When I pushed 2 commits one after another to the same branch, both runners picked up the jobs and executed them. That also explains why the

Auto-cancel redundant, pending pipelines

option didn't work: it only applies when the same runner has pending jobs.

Action taken to fix this problem: unregister the specific runner and leave the machine with only the shared runner.
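The fix can be sketched on the runner machine as follows; the runner name is a placeholder, and the `--name` selector is assumed here from the gitlab-runner CLI:

```shell
# List the runners registered on this machine, then remove the specific one
gitlab-runner list
gitlab-runner unregister --name "specific-runner"
```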



Answered By - Naor Tedgi
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

[FIXED] What are the best gitlab-ci.yml CI/CD practices and runners configs?

 September 27, 2022     continuous-deployment, continuous-integration, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

This is a bit theoretical, but I'll try to explain my setup as much as I can:

  • 1 server (instance) with a self-hosted gitlab
  • 1 server (instance) for development
  • 1 server (instance) for production

Let's say in my gitlab I have a ReactJs project and I configured my gitlab-ci.yml as following:

  • Job deploy_dev: upon pushing to the dev branch, the updates are copied with rsync to /var/www/html/${CI_PROJECT_NAME} (as a deployment to the dev server).

  • The runner that picks up the deploy_dev job is a shared runner installed on that same dev server I deploy to, and it picks up jobs with the tag reactjs.

The question is: If I want to deploy to production what is the best practice should I follow?

I managed to come up with a couple of options that I thought of but I don't know which one is the best practice (if any). Here is what I came up with:

  1. Modify gitlab-ci.yml adding a job deploy_prod with the same tag reactjs but the script should rsync with the production server's /var/www/html/${CI_PROJECT_NAME} using SSH?

  2. Set up another runner on production server and let it pick up the jobs with tags reactjs-prod and modify gitlab-ci.yml to have deploy_prod with the tag reactjs-prod?

  3. Do you have a better way than the two mentioned above?

Last question (related): where is the best place to install my runners? Is what I'm doing (having my runners on my dev server) actually OK?

If you can explain the best way (the one you would choose) with the reasons, as in pros and cons, I would be very grateful.


Solution

The best practice is to separate your CI/CD infrastructure from the infrastructure where you host your apps. This is done to minimize the number of variables which can lead to problems with either your applications or your runners.

Consider the following scenarios when you have a runner on the same machine where you host your application: (The below scenarios can happen even if the runner and app are running in separate Docker containers. The underlying machine is still a single point of failure.)

  1. The runner executes a CPU/RAM heavy job and takes up most of the resources on the machine. Your application starts experiencing performance problems.

  2. The Gitlab runner crashes and puts the host machine in an inoperable state. (docker panic or whatever). Your production app stops functioning.

  3. Your app breaks the host machine (it doesn't matter how; it can happen), your CI/CD stops working, and you cannot deploy a fix to production.

Consider having a separate runner machine (or machines; GitLab Runner can scale horizontally) that is used to run your deployment jobs to both the dev and production servers.
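Under that advice, both deploy jobs would run on the dedicated CI runner and push to the target servers over SSH. The following is only a sketch; the tag name, branch names, hosts, and paths are assumptions:

```yaml
# Both jobs run on a dedicated runner (tag `reactjs-ci`), keeping the
# CI/CD infrastructure separate from the dev and production servers.
deploy_dev:
  stage: deploy
  tags: [reactjs-ci]
  only: [dev]
  script:
    - rsync -a -e ssh build/ deploy@dev-server:/var/www/html/${CI_PROJECT_NAME}

deploy_prod:
  stage: deploy
  tags: [reactjs-ci]
  only: [master]
  script:
    - rsync -a -e ssh build/ deploy@prod-server:/var/www/html/${CI_PROJECT_NAME}
```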



Answered By - cecunami
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How to start newly built docker image when deploying?

 September 27, 2022     continuous-deployment, docker, docker-registry, gitlab-ci, gitlab-ci-runner     No comments   

Issue

I've created a gitlab-ci.yml pipeline to deploy a simple react.js application. I installed the gitlab-runner on EC2 (AWS) and inside my pipeline I build the image and push it to the registry.

This is my gitlab-ci.yml file:

image: docker:latest
services:
  - docker:dind

stages:
  - test
  - deploy

test-build:
  stage: test
  only:
    - master
  tags:
    - master
  script:
    - sudo docker build .

deploy-production:
  stage: deploy
  only:
    - master
  tags:
    - master
  before_script:
    # remove the offending package golang-docker-credential-helpers without removing all of docker-compose
    - sudo dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
    - sudo docker version
    - "sudo docker info"
    - "sudo docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
  script:
    - "sudo docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull ."
    - "sudo docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
    - "sudo docker run -it -d -p 80:80 --rm --name daruma.comunicato ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
  after_script:
    - "sudo docker logout ${CI_REGISTRY}"
  when: manual

The problem is: how do I run the latest pushed image? If I run docker run -it ... the pipeline fails with:

docker: Error response from daemon: Conflict. The container name "/app.test" is already in use by container "f8e888d6dff6fe5808d7577fe8fddc3e24cd8cc0e248a69d36e7ce51bf339ae6". You have to remove (or rename) that container to be able to reuse that name.

Because that container is already running and has the same name. I don't want to stop all containers when deploying, because I could have other Docker containers running. I also thought of doing docker stop app.test && docker rm app.test, but if for some reason the container is already down, that will give an error and the job won't deploy.

What is the correct way to handle this situation?


Solution

The error message means that you already have a container with that name, and Docker does not allow multiple containers with the same name. To fix it, remove the old container: run docker ps -a in the terminal to get a list of all your containers, identify the ID of the container with that name, and remove it with:

docker rm -f <container-id>
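Applied to the pipeline from the question, the deploy script could also be made idempotent so the job survives both cases (container running or already gone). A sketch, using `|| true` to swallow the error when no such container exists:

```yaml
  script:
    # Force-remove any previous container of the same name, ignoring the
    # error when none exists, then start the freshly pushed image.
    - sudo docker rm -f daruma.comunicato || true
    - "sudo docker run -d -p 80:80 --rm --name daruma.comunicato ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
```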


Answered By - ΦXocę 웃 Пepeúpa ツ
Answer Checked By - Timothy Miller (PHPFixing Admin)

Monday, August 22, 2022

[FIXED] How to Setup GitLab Environment Variables in a safe way?

 August 22, 2022     environment-variables, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

I don't want to put sensitive credentials (like API keys, passwords, etc.) into my branch. For this, GitLab (and other CI/CD services) can set environment variables, which are injected into the script during the deployment process.

I know about two ways for GitLab to set them:

  1. Via UI: Project ⇒ Settings ⇒ CI/CD ⇒ Variables
  2. Via .gitlab-ci.yml

In my opinion the first way is the more secure one, because no files with credentials are saved in the git repo, but it is also the more complicated way, because I have to set each variable by hand via the GitLab GUI.

With the second way, the issue is that .gitlab-ci.yml is saved in the GitLab repo, so the credentials are not secure.

Question: is there a way to define the env vars in a file and provide it to GitLab without putting them into the branch? Or is there another way to create those env vars easily and securely in GitLab?


Solution

Is there a way to define the env vars in a file?

Yes. In the UI settings you mentioned, you can set a variable's type to either variable (key: value) or file (the variable's key then holds the path to a temporary file whose content is taken from the value input).

So a file variable seems to be what you are looking for.

The README and the docs provide a good description of variables. Personally, I find two other options very useful: variable masking and protected state.
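As a sketch of how a file-type variable is consumed: if a variable named GCP_KEY (an assumed name) is defined in the UI with type file, GitLab writes its value to a temporary file and puts that file's path into the variable, so a job can use it directly:

```yaml
deploy:
  script:
    # $GCP_KEY holds the *path* to the file GitLab created from the value
    - cat "$GCP_KEY" | docker login -u _json_key --password-stdin https://eu.gcr.io
```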



Answered By - makozaki
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

Tuesday, January 4, 2022

[FIXED] GitLab-CI Multi Runner php composer cache

 January 04, 2022     composer-php, docker, gitlab, gitlab-ci-runner, php     No comments   

Issue

I'm using gitlab-ci-multi-runner with Docker containers. Everything is going fine, but the containers don't keep the Composer cache, so on every run Composer downloads the dependencies again and again, which takes a lot of time. Is there any way to configure the gitlab-ci-runner Docker container to keep the Composer cache, or to mount a volume on each run where the Composer cache is kept?


Solution

You could modify the Composer cache path and write the cached packages to a Docker volume.

That storage is persistent and can be shared across containers.

Referencing:

  • https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/advanced-configuration.md#volumes-in-the-runnersdocker-section
  • https://docs.docker.com/engine/admin/volumes/volumes/
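Putting the two references together, a minimal sketch of the runner host's config.toml could look like this; the volume name is an assumption, and the cache path assumes Composer's default location under root's home:

```toml
[[runners]]
  [runners.docker]
    # A named volume persists across job containers, so Composer's cache
    # (by default under ~/.composer/cache) survives between runs.
    volumes = ["composer-cache:/root/.composer/cache"]
```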


Answered By - Jens A. Koch
Copyright © PHPFixing