PHPFixing
Showing posts with label gitlab-ci.

Sunday, October 9, 2022

[FIXED] How to return an artifact from a child job to the parent pipeline?

 October 09, 2022     cicd, continuous-delivery, continuous-deployment, gitlab-ci     No comments   

Issue

I use a trigger to dynamically select a test job:

prepare_test:
  image: $CI_REGISTRY/platform/docker-images/vault:1.8
  variables:
    CONTEXT_TEST: |
      include:
      # PRODUCT
        - project: 'gitlabci/integration-test'
          ref: dev_v2
          file: 
            - 'spark/.base_integration_test.yml'
            - 'spark/.base_integration_test_with_kafka.yml'   
      integration_test:
        variables:
          COVERAGE_SOURCE: "./src"  
    INTEGRATION_TEST: |
      $CONTEXT_TEST
        extends: .base_integration_test
    INTEGRATION_TEST_WITH_KAFKA: |
      $CONTEXT_TEST 
        extends: .base_integration_test_with_kafka  
  stage: prepare_test
  script:
    - export CICD_KAFKA_HOST=$(cat test/fixtures.py | grep KAFKA_HOST)
    - >
      if [ "$CICD_KAFKA_HOST" != "" ]; then
        export CICD_KAFKA_HOST="true"
        echo "$INTEGRATION_TEST_WITH_KAFKA" >> test.yml
      else
        export CICD_KAFKA_HOST="false"
        echo "$INTEGRATION_TEST" >> test.yml
      fi
    - env | sort -f
  artifacts:
    paths:
      - test.yml
    expire_in: 6000 seconds


# --------------- Integration test --------------- ###
integration_test:
  stage: test 
  trigger:
    include:
      - artifact: test.yml
        job: prepare_test
    strategy: depend


After the child integration_test job completes, it creates coverage-report.xml.

How can I return coverage-report.xml to the parent pipeline?


Solution

You can use the GitLab API:

my-job:
  image: ...
  stage: ...
  script:
    # Get the id of the child (downstream) pipeline created by the bridge job
    - >
        export CI_CHILD_PIPELINE_ID=$(curl --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_PIPELINE_ID/bridges" | jq ".[].downstream_pipeline.id")
    - echo $CI_CHILD_PIPELINE_ID
    # Get the id of the job inside that child pipeline
    - >
        export CI_CHILD_JOB_ID=$(curl --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/pipelines/$CI_CHILD_PIPELINE_ID/jobs" | jq '.[].id')
    - echo $CI_CHILD_JOB_ID
    # Download and unpack that job's artifacts (contains coverage-report.xml)
    - 'curl --output artifacts.zip --header "PRIVATE-TOKEN: $GITLAB_USER_TOKEN" "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/$CI_CHILD_JOB_ID/artifacts"'
    - unzip artifacts.zip
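
After unzip, the child job's files (including coverage-report.xml) are available in the parent job's workspace. If the parent job should expose the recovered report itself, a minimal sketch of a continuation (nothing assumed beyond the file name from the question) is to declare it as an artifact of this job too:

my-job:
  # ... script from above ...
  artifacts:
    paths:
      - coverage-report.xml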


Answered By - Philippe GRANET
Answer Checked By - Pedro (PHPFixing Volunteer)

[FIXED] How to print/debug all jobs of includes in GitLab CI?

 October 09, 2022     continuous-integration, gitlab, gitlab-ci, nested-includes     No comments   

Issue

In Gitlab CI it is possible to include one or many files in a .gitlab-ci.yml file. It is even possible to nest these includes. See https://docs.gitlab.com/ee/ci/yaml/includes.html#using-nested-includes.

How can I see the resulting CI file all at once?

Right now, when I debug a CI cycle, I open every single include file and combine the resulting file structure by myself. There has to be a better way.

Example

Content of https://company.com/autodevops-template.yml:

variables:
  POSTGRES_USER: user
  POSTGRES_PASSWORD: testing_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://$CI_PROJECT_PATH_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  only:
    - master

Content of .gitlab-ci.yml:

include: 'https://company.com/autodevops-template.yml'

image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password

stages:
  - build
  - test
  - production

production:
  environment:
    url: https://example.com

This should result in the following file structure:

image: alpine:latest

variables:
  POSTGRES_USER: root
  POSTGRES_PASSWORD: secure_password
  POSTGRES_DB: $CI_ENVIRONMENT_SLUG

stages:
  - build
  - test
  - production

production:
  stage: production
  script:
    - install_dependencies
    - deploy
  environment:
    name: production
    url: https://example.com
  only:
    - master

→ How can I see this output somewhere?

Environment

  • Self-Hosted GitLab 13.9.1

Solution

As I continued to search for a solution, I read about a "Pipeline Editor" which was released in GitLab version 13.8 this year. Turns out that the feature I am looking for was added to this editor just some days ago:

Version 13.9.0 (2021-02-22): "View an expanded version of the CI/CD configuration" → see the Release Notes for a feature description and introduction video.

To view the fully expanded CI/CD configuration as one combined file, go to the pipeline editor’s »View merged YAML« tab. This tab displays an expanded configuration where:

  • Configuration imported with include is copied into the view.
  • Jobs that use extends display with the extended configuration merged into the job.
  • YAML anchors are replaced with the linked configuration.

Usage: Open project → Module »CI / CD« → Submodule »Editor« → Tab »View merged YAML«

[Screenshot: Pipeline Editor, merged configuration tab]



Answered By - pixelbrackets
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How can I securely push from a GitLab Runner KubernetesExecutor pod to a private container registry?

 October 09, 2022     continuous-integration, docker, gitlab-ci, gitlab-ci-runner, kubernetes     No comments   

Issue

Goal

Build a CI/CD pipeline that multiple GitLab repositories with a certain project structure can make use of. For this, a Docker container with Python code is built and subsequently pushed securely to Google Cloud's Container Registry.

Set up

  • KubernetesExecutor is installed on Kubernetes Engine using the Helm chart as provided by GitLab.
  • The base image for the build process (runners.image in the values.yaml) is a custom one, as this helps automatically containerise the provided repository. The reason this is worth mentioning is that it comes from the same private registry the image should be pushed to.
  • Right now, building the container from the repository runs successfully (see code below).

Problem

How can I push the image to the Container Registry without adding a service account key to a Docker image (otherwise, please convince me this isn't bad practice)?

Code

.gitlab-ci.yml

services:
  - docker:19.03.1-dind

stages:
  - build

build:
  stage: build
  script:
    - docker build -t ${CONTAINER_REGISTRY}/python-container-test:latest .
    # This line is where I'd need to use `docker login`, I guess.
    - docker push ${CONTAINER_REGISTRY}/python-container-test:latest

values.yaml (Helm)

It's worth mentioning that the following environment variables are set by the GitLab Runner:

runners:
  env:
    DOCKER_DRIVER: overlay2
    DOCKER_HOST: tcp://localhost:2375
    DOCKER_TLS_CERTDIR: ""
    CONTAINER_REGISTRY: eu.gcr.io/<project_id>

Direction of solution

I think I should be able to mount a secret from the Kubernetes cluster to the GitLab Runner build pod, but I can't seem to find a way to do that. Then, I should be able to add the following line into .gitlab-ci.yml:

cat mounted_secret.json | docker login -u _json_key --password-stdin https://eu.gcr.io

Setting up config.toml to use a secret volume should work. However, with a Helm chart this doesn't seem possible yet.

Notes

  • It is possible to set protected environment variables in GitLab CI, but I'd rather not, as they're harder to maintain.
  • I've investigated this answer, but this says I need to add a key to my Docker image.
  • Looked into the GitLab documentation on using a private container registry, but don't seem to get much further with that.
  • A similar problem would occur when, for example, it must connect to a database during the build process.

Solution

It's not possible with the default Helm chart as provided by GitLab. However, there is a workaround if you customise their chart.

In templates/configmap.yaml, it's possible to edit the entrypoint. At the very end, the runner is started as follows:

# Start the runner
exec /entrypoint run --user=gitlab-runner \
     --working-directory=/home/gitlab-runner

For this to work, a config.toml is generated based upon the provided values.yaml (with the right Runner token). This means that right before this step, we could edit the config.toml to our needs. In my case, I simply added:

echo "    [[runners.kubernetes.volumes.secret]]" >> ${CONFIG_FILE}
echo "      name = \"{{ .Values.secretName }}\"" >> ${CONFIG_FILE}
echo "      mount_path = \"/keys\"" >> ${CONFIG_FILE}
echo "      read_only = true" >> ${CONFIG_FILE}

Where ${CONFIG_FILE} is /home/gitlab-runner/.gitlab-runner/config.toml.
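
For reference, those echo lines append a section to the generated config.toml that looks roughly like this (the secret name comes from .Values.secretName in your values.yaml):

    [[runners.kubernetes.volumes.secret]]
      name = "my-gcp-sa-secret"
      mount_path = "/keys"
      read_only = true

With the secret mounted, the docker login line from the question becomes cat /keys/mounted_secret.json | docker login -u _json_key --password-stdin https://eu.gcr.io (assuming the secret's key is named mounted_secret.json).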

Finally, you can deploy your GitLab Runner using:

$ helm install project_name -f values.yaml <path to chart>


Answered By - bartcode
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How to fetch entire repository with GitLab CI/CD?

 October 09, 2022     continuous-integration, git, gitlab, gitlab-ci     No comments   

Issue

I'm currently setting up GitLab CI/CD. We use GitVersion in our project, which throws the following error:

/root/.nuget/packages/gitversiontask/5.3.7/build/GitVersionTask.targets(46,9): error : InvalidOperationException: Could not find a 'develop' or 'master' branch, neither locally nor remotely.

According to this blog, this happens when the CI server does not fetch the full repository (we have both a develop and a master branch, but I'm working on a different one). For Jenkins we solved this problem by expanding the checkout stage:

stage("Checkout") { gitlabCommitStatus(name: "Checkout") {
    
    // These are the normal checkout instructions
    cleanWs()
    checkout scm
    
    // This is the additional checkout to get all branches
    checkout([
      $class: 'GitSCM',
      branches: [[name: 'refs/heads/'+env.BRANCH_NAME]],
      extensions: [[$class: 'CloneOption', noTags: false, shallow: false, depth: 0, reference: '']],
      userRemoteConfigs: scm.userRemoteConfigs,
    ])

    sh "git checkout ${env.BRANCH_NAME}"
    sh "git reset --hard origin/${env.BRANCH_NAME}"
}}

I'm essentially looking for something equivalent to this for the .gitlab-ci.yml file.


Solution

By default, runners download your code with a 'fetch' rather than a 'clone' for speed's sake, but it can be configured a number of ways. If you want all jobs in your project's pipeline to be cloned rather than fetched, you can change the default in your CI Settings:

[Screenshot: CI/CD settings page]

If you don't want all your jobs to clone (since it's slower), you can change it per job in your .gitlab-ci.yml:

my_job:
  stage: deploy
  variables:
    GIT_STRATEGY: clone
  script:
    - ./deploy

You can read more about the GIT_STRATEGY variable here: https://docs.gitlab.com/ee/ci/runners/configure_runners.html#git-strategy

Note: You can also set this variable to none, which is useful if you don't need the code but only, say, an artifact created by a previous job. With this, the job won't check out any code and skips straight to your script.
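
If the underlying problem is a shallow fetch rather than the fetch/clone strategy, a variant worth trying (a sketch using GitLab's documented GIT_DEPTH variable; adjust to your job) is to disable shallow cloning so tools like GitVersion can see the full history:

my_job:
  stage: deploy
  variables:
    GIT_STRATEGY: clone
    GIT_DEPTH: 0  # 0 disables the shallow clone, fetching full history
  script:
    - ./deploy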



Answered By - Adam Marshall
Answer Checked By - Terry (PHPFixing Volunteer)

[FIXED] How to compile multiple/different .gitlab-ci.yml files into a single .gitlab-ci.yml file located in a separate repository?

 October 09, 2022     continuous-integration, devops, gitlab, gitlab-ci, gitlab-ci.yml     No comments   

Issue

Is it possible to include multiple, different .gitlab-ci.yml files into a single file located in another subgroup/repository? If yes, how can it be done?

What I want to do is have modular GitLab CI files which can be used by different subgroups and project repositories depending on what they need. I'm thinking from a scalability perspective; the reason I thought of this is to avoid hard-coding the CI file used for a specific app/source code, and to let each project pick the CI file it needs.

Here's how I think it can be visualized: [Diagram: GitLab structure]


Solution

You can include CI configuration files from other groups/projects irrespective of where your project lives in the hierarchy. You can do this using the include: directive.

# .gitlab-ci.yml in Repo D
include:
  - project: anygroup/anysubgroup/anyproject
    file: path/to/anyfile.yml

You can include many files from many different projects.
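
For example, a single .gitlab-ci.yml can pull modular templates from several projects at once (the project paths and file names below are hypothetical):

# .gitlab-ci.yml in Repo D
include:
  - project: 'group-a/ci-templates'
    ref: main
    file: 'build/node.yml'
  - project: 'group-b/subgroup/ci-templates'
    file:
      - 'test/unit.yml'
      - 'deploy/staging.yml'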

It is even possible to skip defining a .gitlab-ci.yml file in a particular project altogether by specifying a CI YAML file in another project in the project settings.



Answered By - sytech
Answer Checked By - Senaida (PHPFixing Volunteer)

[FIXED] How to block a gitlab merge until a script passes?

 October 09, 2022     continuous-integration, gitlab, gitlab-ci     No comments   

Issue

We are using GitLab and GitLab CI, which has build, test and package steps. We have written a script which checks whether a user has done some housekeeping for the feature they have added. The script returns 1 or 0 depending on whether the housekeeping is done. At the moment the script runs in the build step of GitLab CI, which means the build fails unless the housekeeping is done. This is good, as the developer won't be able to merge until the housekeeping is done. But it would be better if the build and tests continued to run and the user was simply blocked from completing the merge request, e.g. by adding some automated "approve" step that is approved when the script passes.

Is there an obvious, easy or best way to do this?

Thanks


Solution

Set the job to use the built-in .post stage and use needs: []. This causes the job to (1) run immediately with no delay, (2) not stop your build/test or any other jobs from starting, because it is in the last stage, and (3) still block the merge when it fails (provided you have the "Pipelines must succeed" merge request setting enabled).

housekeeping:
  stage: .post  # always the last stage in any pipeline
  needs: []  # start immediately with no dependencies
  script: "..."
  # ...


Answered By - sytech
Answer Checked By - Pedro (PHPFixing Volunteer)

[FIXED] How do you execute a script after a job is cancelled in GitLab?

 October 09, 2022     continuous-integration, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

I am looking for a way to clean up the runner after a job has been cancelled in GitLab. The reason is that we often have to cancel running jobs because the runner sometimes gets stuck in the test pipeline, and I can imagine a couple of other scenarios where you would want to cancel a job and have a clean-up script run afterwards. I am looking for something like after_script, but only for the case when a job was cancelled. I checked the GitLab keyword reference but could not find what I need.

The following part of my gitlab-ci.yaml shows the test stage, which I would like to cancel gracefully by calling docker-compose down when the job is cancelled.

I am using a single gitlab-runner. Also, I don't use dind.

test_stage:
  stage: test
  only:
  - master
  - tags
  - merge_requests
  image: registry.gitlab.com/xxxxxxxxxxxxxxxxxxx
  variables:
    HEADLESS: "true"
  script:
  - docker login -u="xxxx" -p="${QUAY_IO_PASSWORD}" quay.io  
  - npm install
  - npm run test
  - npm install wait-on  
  - cd example
  - docker-compose up --force-recreate --abort-on-container-exit --build traefik frontend &
  - cd ..
  - apt install -y iproute2
  - export DOCKER_HOST_IP=$( /sbin/ip route|awk '/default/ { print $3 }' )
  - echo "Waiting for ${DOCKER_HOST_IP}/xxxt"
  - ./node_modules/.bin/wait-on "http://${DOCKER_HOST_IP}/xxx" && export BASE_URL=http://${DOCKER_HOST_IP} && npx codeceptjs run-workers 2
  - cd example
  - docker-compose down
  after_script:
  - cd example && docker-compose down 
  artifacts:
    when: always
    paths:
      - /builds/project/tests/output/
  retry:
    max: 2
    when: always
  tags: [custom-runner]

Solution

Unfortunately this is not currently possible in GitLab. There have been several tickets opened in their repos, with this one being the most up-to-date.

As of the day that I'm posting this (September 27, 2022), there have been at least 14 missed deliverables for this. GitLab continues to say it's coming, but has never delivered it in the six years that this ticket has been open.

There are mitigations as far as automatic job cancelling goes, but unfortunately they will not help in your case.

Based on your use case, I can think of two different solutions:

  1. Create a wrapper script that detects when parts of your test job are hanging
  2. Set a timeout on the pipeline (In GitLab you can go to Settings -> CI/CD -> General Pipelines -> Timeout)

Neither of these solutions are as robust as if GitLab themselves implemented a solution, but they can at least prevent you from having a job hang for an eternity and clogging up everything else in the pipeline.
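
For option 2, besides the project-level setting, GitLab also supports a per-job timeout keyword, so a hanging test job can be killed earlier than the project-wide limit; a minimal sketch:

test_stage:
  stage: test
  timeout: 30 minutes  # the job is killed and marked failed after 30 minutes
  script:
    - npm run test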



Answered By - Josh Correia
Answer Checked By - Candace Johnson (PHPFixing Volunteer)


Tuesday, September 27, 2022

[FIXED] How to run a pipeline only on the HEAD commit in GitLab CI runner?

 September 27, 2022     continuous-deployment, continuous-integration, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

We have a CI pipeline on our repository, hosted in GitLab.

We set up gitlab-runner on our local machine.

The pipeline runs 4 steps:

  • build
  • unit tests
  • integration tests
  • quality tests

The whole pipeline takes almost 20 minutes, and it triggers on each push to a branch.

Is there a way to configure the gitlab-runner so that, if the HEAD of the branch the runner is currently running on changes, the pipeline auto-cancels the run? Because the latest version is what matters.

For example, in this run the lower (older) pipeline is unnecessary:

[Screenshot: two pipelines running for the same branch]

gitlab-ci.yml

stages:
  - build
  - unit_tests
  - unit_and_integration_tests
  - quality_tests

build:
  stage: build
  before_script:
    - cd projects/ideology-synapse
  script:
    - mvn compile


unit_and_integration_tests:
  variables:
    GIT_STRATEGY: clone
  stage: unit_and_integration_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"


unit_tests:
  variables:
    GIT_STRATEGY: clone

  stage: unit_tests
  except:
    - /^milestone-.*$/
  script:
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
    - "cat */target/site/jacoco/index.html"
  cache: {}
  artifacts:
    reports:
      junit:
        - "*/*/*/target/surefire-reports/TEST-*.xml"

quality_tests:
  variables:
    GIT_STRATEGY: clone
  stage: quality_tests
  only:
    - /^milestone-.*$/
  script:
    - export RUN_ENVIORMENT_EVAL=GITLAB_CI
    - export MAVEN_OPTS="-Xmx32g"
    - mvn test
  cache: {}

Edit, after @siloko's comment:

I already tried the "Auto-cancel redundant, pending pipelines" option in the settings menu.

I want to cancel running pipelines, not pending ones.


Solution

After further investigation, I found that I had 2 active runners on one of my machines: one shared runner and another specific runner. If I push 2 commits one after another to the same branch, both runners take the jobs and execute them. That also explains why the

Auto-cancel redundant, pending pipelines

option didn't work: it only applies when the same runner has pending jobs.

Action taken to fix this problem: unregister the specific runner and leave the machine with only the shared runner.



Answered By - Naor Tedgi
Answer Checked By - Candace Johnson (PHPFixing Volunteer)

[FIXED] What are the best gitlab-ci.yml CI/CD practices and runners configs?

 September 27, 2022     continuous-deployment, continuous-integration, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

This is a bit theoretical, but I'll try to explain my setup as much as I can:

  • 1 server (instance) with a self-hosted gitlab
  • 1 server (instance) for development
  • 1 server (instance) for production

Let's say in my GitLab I have a ReactJS project and I configured my gitlab-ci.yml as follows:

  • Job deploy_dev: upon pushing to the dev branch, the updates are copied with rsync to /var/www/html/${CI_PROJECT_NAME} (as a deployment to the dev server).

  • The runner that picks up the job deploy_dev is a shared runner installed on that same dev server that I deploy to, and it picks up jobs with the tag reactjs.

I managed to come up with a couple of options, but I don't know which one is the best practice (if any). Here is what I came up with:

  1. Modify gitlab-ci.yml, adding a job deploy_prod with the same tag reactjs, but whose script rsyncs to the production server's /var/www/html/${CI_PROJECT_NAME} over SSH?

  2. Set up another runner on the production server, let it pick up jobs with the tag reactjs-prod, and modify gitlab-ci.yml so deploy_prod uses the tag reactjs-prod?

  3. Do you have a better way other than the 2 mentioned above?

Last question (related): where is the best place to install my runners? Is what I'm doing (having my runners on my dev server) actually OK?

Please, if you can, explain the best way (the one you would choose) with the reasons, as in pros and cons; I would be very grateful.


Solution

The best practice is to separate your CI/CD infrastructure from the infrastructure where you host your apps. This is done to minimize the number of variables which can lead to problems with either your applications or your runners.

Consider the following scenarios when you have a runner on the same machine where you host your application: (The below scenarios can happen even if the runner and app are running in separate Docker containers. The underlying machine is still a single point of failure.)

  1. The runner executes a CPU/RAM heavy job and takes up most of the resources on the machine. Your application starts experiencing performance problems.

  2. The Gitlab runner crashes and puts the host machine in an inoperable state. (docker panic or whatever). Your production app stops functioning.

  3. Your app breaks the host machine (doesn't matter how; it can happen), your CI/CD stops working, and you cannot deploy a fix to production.

Consider having a separate runner machine (or machines. Gitlab runner can scale horizontally), that is used to run your deployment jobs to both dev and production servers.



Answered By - cecunami
Answer Checked By - Mildred Charles (PHPFixing Admin)

[FIXED] How to specify the submodule branch in gitlab-ci?

 September 27, 2022     continuous-deployment, continuous-integration, gitlab, gitlab-ci     No comments   

Issue

How do I specify a branch in .gitlab-ci.yml for a submodule (a different repo) in GitLab CI?


Solution

You don't. You specify it in the .gitmodules file of the project you are building.

[submodule "MyRepo"]
    path = MyRepo
    url = https://github.com/vendor/MyRepo.git
    branch = master
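
Note that .gitlab-ci.yml still controls whether the runner fetches submodules at all during a job. A common pairing (a sketch using GitLab's documented GIT_SUBMODULE_STRATEGY variable) is:

variables:
  GIT_SUBMODULE_STRATEGY: recursive  # clone/update submodules per .gitmodules before the job runs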


Answered By - Stefan van Gastel
Answer Checked By - Robin (PHPFixing Admin)

[FIXED] How should I deploy the result of the CI/CD pipeline on my production server

 September 27, 2022     autodeploy, continuous-deployment, continuous-integration, gitlab, gitlab-ci     No comments   

Issue

I have a GitLab CI/CD pipeline which builds, tests and pushes my project's container to the GitLab container registry successfully. But now I am wondering how I can automate the deployment stage too. Currently I do it manually: after each successful pipeline, I SSH to my server and run several commands to pull the latest images from the GitLab.com container registry and then run them. I would like to make this step automated as well, yet I don't know how. I have seen some examples of opening an SSH session from the CI/CD pipeline, but it doesn't feel secure enough. So I was wondering: is there a better way, or do I actually have to do this?

Note that I am using gitlab.com, so the GitLab server is not installed on my machine and I can't share assets between them directly.


Solution

There are many ways to achieve this, depending on your setup, other requirements, scale etc.
I'll just give you two options.

I. Kubernetes

  1. create cluster (ie control plane) somewhere
  2. add your cluster to GitLab (now GitLab can even create cluster for you in AWS and GCP, check this page)
  3. attach your target machine as a worker node to the cluster
  4. create Kubernetes YAML files / a Helm chart for your application and deploy the usual way, e.g. kubectl apply -f ... or helm install ..., or rely on Auto DevOps to do this step for you

This is quite complex but sort of "right" way of doing things.

II. Private GitLab runner

  1. go to Settings > CI/CD > Runners of your GitLab project or group
  2. obtain the registration token
  3. install your own GitLab runner right on the target machine, and register it on the GitLab server using registration token, see example
  4. give runner some specific tag
  5. use that tag in your .gitlab-ci.yml file, see documentation
  6. then deployment process is just a local process of docker pull... and docker run ... for your image

This is a lot simpler, but is the "wrong" way, as you are mixing CI/CD infrastructure with the target environment.
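
A minimal sketch of such a deploy job for option II (the runner tag, container name and port are assumptions):

deploy:
  stage: deploy
  tags:
    - prod-server  # the tag you gave the runner installed on the target machine
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest
    - docker rm -f myapp || true  # remove the old container if it exists
    - docker run -d --name myapp -p 80:80 $CI_REGISTRY_IMAGE:latest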



Answered By - Ivan
Answer Checked By - Senaida (PHPFixing Volunteer)


[FIXED] What is the best way to do CI/CD with AWS CDK (python) using GitLab CI?

 September 27, 2022     amazon-web-services, aws-cdk, continuous-deployment, gitlab, gitlab-ci     No comments   

Issue

I am using AWS CDK (with Python) for a containerized application that runs on Fargate. I would like to run cdk deploy in a GitLab CI process and pass the git tag as an environment variable that replaces the container running in Fargate. I am currently doing something similar with CloudFormation (aws cloudformation update-stack ...). Is anyone else doing CI/CD with AWS CDK in this way? Is there a better way to do it?

Also, what should I use as the base image for this job? I was thinking that I can either start with a Python container and install Node, or vice versa. Or maybe there is a prebuilt container somewhere that I haven't been able to find yet.

Here is a start that seems to be working well:

CDK:
  image: python:3.8
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -y install nodejs npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk diff
    - cdk deploy --require-approval never

Edit 2020-05-04:

CDK can build docker images during cdk deploy, but it needs access to docker. If you don't need docker, the above CI job definition should be fine. Here's the current CI job I'm using:

cdk deploy:
  image: docker:19.03.1
  services:
    - docker:19.03.5-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - pip3 -V
    - apk add nodejs-current npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk bootstrap aws://$AWS_ACCOUNT_ID/$AWS_DEFAULT_REGION
    - cdk deploy --require-approval never

The cdk bootstrap is needed because I am using assets in my cdk code:

        self.backend_task.add_container(
            "DjangoBackend",
            image=ecs.AssetImage(
                "../backend",
                file="scripts/prod/Dockerfile",
                target="production",
            ),
            logging=ecs.LogDrivers.aws_logs(stream_prefix="Backend"),
            environment=environment_variables,
            command=["/start_prod.sh"],
        )

Here's more information on cdk bootstrap: https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md


Solution

You definitely have to run cdk deploy inside the CI/CD pipeline if you have Lambda or ECS assets; otherwise, you could run cdk synth and pass the resulting CloudFormation to AWS CodeDeploy. That means a lot of your CI/CD time will be spent deploying, which might drain your free-tier build minutes or just mean you pay more (AWS CodeDeploy is free).

I do something similar with Golang in CircleCI. I use the Go base image and install Node.js and CDK. I use this base image to build all my Go binaries, the Vue.js frontend, and to compile the CDK TypeScript and deploy it.

FROM golang:1.13

RUN go get -u -d github.com/magefile/mage
WORKDIR $GOPATH/src/github.com/magefile/mage
RUN go run bootstrap.go

RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y nodejs
RUN npm i -g aws-cdk@1.36.x
RUN npm i -g typescript
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn

I hope that helps.



Answered By - Jonny Rimek
Answer Checked By - Mary Flores (PHPFixing Volunteer)

[FIXED] How to start newly built docker image when deploying?

 September 27, 2022     continuous-deployment, docker, docker-registry, gitlab-ci, gitlab-ci-runner     No comments   

Issue

I've created a gitlab-ci.yml pipeline to deploy a simple react.js application. I installed the gitlab-runner on EC2 (AWS) and inside my pipeline I build the image and push it to the registry.

This is my gitlab-ci.yml file:

image: docker:latest
services:
  - docker:dind

stages:
  - test
  - deploy

test-build:
  stage: test
  only:
    - master
  tags:
    - master
  script:
    - sudo docker build .

deploy-production:
  stage: deploy
  only:
    - master
  tags:
    - master
  before_script:
    # remove the offending package golang-docker-credential-helpers without removing all of docker-compose
    - sudo dpkg -r --ignore-depends=golang-docker-credential-helpers golang-docker-credential-helpers
    - sudo docker version
    - "sudo docker info"
    - "sudo docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
  script:
    - "sudo docker build -t ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest --pull ."
    - "sudo docker push ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
    - "sudo docker run -it -d -p 80:80 --rm --name daruma.comunicato ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"
  after_script:
    - "sudo docker logout ${CI_REGISTRY}"
  when: manual

The problem is: how do I run the latest pushed image? If I run docker run -it ... the pipeline fails with:

docker: Error response from daemon: Conflict. The container name "/app.test" is already in use by container "f8e888d6dff6fe5808d7577fe8fddc3e24cd8cc0e248a69d36e7ce51bf339ae6". You have to remove (or rename) that container to be able to reuse that name.

Because that container is already running and has the same name. I don't want to stop all containers when deploying because I could have other Docker containers running. I also thought of doing docker stop app.test && docker rm app.test, but if for some reason the container is down, that would give an error and fail the deploy.

What is the correct way to handle this situation?


Solution

The error message means that you already have a container with that name, and Docker does not allow you to have multiple containers with the same name. To fix that, remove the old container: run docker ps -a in the terminal to get a list of all the containers you have, identify the ID of the container with that name, then remove it with:

$ docker rm -f <container-id>
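
To avoid the failure-when-absent problem mentioned in the question (docker stop erroring when the container is already gone), a common idempotent pattern in the deploy script is docker rm -f with || true, so a missing container does not fail the job (container name taken from the question's run command):

  script:
    - "sudo docker rm -f daruma.comunicato || true"
    - "sudo docker run -it -d -p 80:80 --rm --name daruma.comunicato ${CI_REGISTRY}/${CI_PROJECT_PATH}:latest"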


Answered By - ΦXocę 웃 Пepeúpa ツ
Answer Checked By - Timothy Miller (PHPFixing Admin)

[FIXED] What are the advantages of using GitLab CI/CD to build and deploy vs doing everything from my own machine?

 September 27, 2022     continuous-deployment, continuous-integration, gitlab-ci     No comments   

Issue

CI/CD, from a Google search:

GitLab CI (Continuous Integration) is the part of GitLab that builds and tests the software whenever a developer pushes code to the application. GitLab CD (Continuous Deployment) is a software service that places every code change into production, resulting in everyday production deployments.

As a Visual Studio user, what have I been missing all these years? I build from C# and then deploy through FTP. Why even use GitLab for this?


Solution

You just need to take a look at the features page: https://about.gitlab.com/stages-devops-lifecycle/continuous-integration/

GitLab has too many features to summarize here.



Answered By - Sergio Tanaka
Answer Checked By - Senaida (PHPFixing Volunteer)

Monday, September 26, 2022

[FIXED] How to execute a command from a docker image in a gitlab job?

 September 26, 2022     continuous-deployment, continuous-integration, docker, gitlab, gitlab-ci     No comments   

Issue

I created a Docker image which stores an npm project. The npm project has an npm script which runs tests. I use GitLab for CI/CD, where I want to define a job which pulls my image and runs the npm script. This is the .gitlab-ci.yml:

stages:
  - test

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd packages/mypackage && npm run test:ci
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week

test-api-beta:
  extends: .test-api
  environment:
    name: some-env
  variables:
    CI_REGISTRY_IMAGE: my_image_name

The gitlab job fails with the error:

> mypackage@1.0.0 test:ci /builds/my-organization/my-project/packages/mypackage
> DEBUG=jest-mongodb:* NODE_ENV=test ts-node --transpile-only --log-error node_modules/.bin/jest --watchAll=false --detectOpenHandles --bail
sh: 1: ts-node: not found
npm ERR! code ELIFECYCLE
npm ERR! syscall spawn
npm ERR! file sh
npm ERR! errno ENOENT
npm ERR! mypackage@1.0.0 test:ci: `DEBUG=jest-mongodb:* NODE_ENV=test ts-node --transpile-only --log-error node_modules/.bin/jest --watchAll=false --detectOpenHandles --bail`
npm ERR! spawn ENOENT
npm ERR! 
npm ERR! Failed at the mypackage@1.0.0 test:ci script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-04-28T09_05_39_023Z-debug.log

The main issue is the warning npm WARN Local package.json exists, but node_modules missing, did you mean to install?. This means that the GitLab script is executed on the actual git repository of my project instead of on the image's copy of the code. Indeed, my repository doesn't contain node_modules, so the job fails. But why doesn't GitLab execute the script on the actual image?

The docker image has a CMD directive: CMD ["npm", "run", "start"]

Maybe the CMD somehow interferes with the gitlab script?

P.S. pulling the docker image manually and executing the npm script locally works.

This is my Dockerfile:

FROM node:14.15.1
COPY ./package.json /src/package.json

WORKDIR /src
RUN npm install
COPY ./lerna.json /src/lerna.json
COPY ./packages/mypackage/package.json /src/packages/mypackage/package.json
RUN npm run clean
COPY . /src
EXPOSE 8082
CMD ["npm" , "run", "start"]

EDIT: As per M. Iduoad's answer, if the script is changed as follows:

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd /src/packages/mypackage && npm run test:ci
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week

the npm script works. We need to cd /src/packages/mypackage because that is where the package lives inside the image, per the Dockerfile.


Solution

GitLab always clones your repo, checks out the branch you are triggering the pipeline against, and runs your commands on that code (in the folder CI_PROJECT_DIR).

So in order to use your version of the code (the one baked into the Docker image), you should move to the folder where it is located in the image:

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - cd /the/absolute/path/of/the/project/ && npm run test:ci

Doing this defies gitlab-ci's way of doing things: your job will always run on the same code (the one in the image) every time it is run, whereas gitlab-ci is a CI system intended to run your jobs against the code in your git repo.

So to summarize, I suggest you add a stage where you install your dependencies (node_modules):

stages:
  - install
  - test

install-deps:
  image: node:latest # or the version you are using
  stage: install
  script: 
    - npm install
  cache:
    key: some-key
    paths:
      - $CI_PROJECT_DIR/node_modules

.test-api:
  image: $CI_REGISTRY_IMAGE
  stage: test
  script:
    - npm run test:ci
  cache:
    key: some-key
    paths:
      - $CI_PROJECT_DIR/node_modules
  artifacts:
    paths:
      - packages/mypackage/test-report.html
    expire_in: 1 week

This will use gitlab-ci's cache feature, to store the node_module across you jobs and across your pipeline. You can control how to use and share the cache across pipelines and jobs, by changing the key. (read more about the cache on gitlab's docs)



Answered By - Iduoad
Answer Checked By - Cary Denson (PHPFixing Admin)

[FIXED] How to Deploy an already existing Gitlab release without building the project again?

 September 26, 2022     continuous-deployment, gitlab-ci     No comments   

Issue

I have a neat Gitlab pipeline which consists of 5 stages and >10 jobs which test, build, release and deploy the result on several environments.

Sometimes we need to roll back one environment to an older release. With Jenkins this was easily done with an extra job which only deployed the existing release from the repository, without building the project again.

On GitLab, it seems that I either have to create a new project with its own build pipeline for deployment only, or add a rule to each job saying: don't run the build and test jobs, only a deploy job, when the environment variable DEPLOY_OLD_RELEASE is set.

I don't like either way: creating a project just for the build pipeline seems like a huge overhead, and modifying all the jobs is a huge effort and not easy to maintain.


Solution

Once a GitLab job passes, you can re-run it any number of times. For example, if you have a deploy job that scp's a tar file to a remote server, then ssh's in to extract it, move it to a specific dir, change symlinks, etc., you can just re-run that specific job.

However, since it sounds like you have a pretty well-defined pipeline, it's likely that the tar file isn't built in your deploy job, but rather one like a "Package App" job. In this job, you might upload the final tar file as an artifact so that your deploy job can deploy it.

If this is the case, as long as your 'Package' job's artifacts still exist (since they expire either based on the server's default, or a time you specify in your job definition), you can simply re-run the Deploy job. If however those artifacts have expired, you'll have to rerun the Package job, then the Deploy job.

However, maybe your 'Package' job depends on some dependencies that aren't built in this job, for example running npm ci or composer install (for PHP projects). If these activities happen in other jobs in prior stages and are uploaded as artifacts that the 'Package' job uses, you may have to rerun these as well.

This may sound like it requires re-running a lot of jobs, but if your expire times are set appropriately for your purposes, it isn't bad. For example, I have my "build" jobs (those that install npm dependencies, etc.) expire in 30 minutes, since the "package" job runs immediately after they complete and I never need them directly again. Conversely, I keep my 'package' job artifact(s) for 2 weeks so that I can redeploy relatively quickly without rebuilding dependencies.

From my experience with my apps, I almost never re-deploy a build older than 2 weeks since there is usually a newer build to replace it. You should adjust the expire times for each project however.

Here's a Pipeline Definition similar to the one I use for my projects as an example. This is a PHP backend project with a JS/Vue frontend, so most jobs won't apply, but will give you a good idea how to proceed.

stages:
  - build
  - package
  - deploy

Run Composer Install:
  stage: build
  image: composer:latest
  only:
    - tags
  needs: []
  before_script:
    - echo ${COMPOSER_AUTH_JSON} > auth.json
  script:
    - composer clearcache; composer self-update --2
    - php -d memory_limit=-1 /bin/composer install --ignore-platform-reqs --no-dev
  artifacts:
    paths:
      - vendor
    when: on_success
    expire_in: 30 minutes

Npm Install:
  stage: build
  image: hub.comcast.net/code_common/spa_site_7.4:5
  only:
    - branches
    - tags
  needs: []
  script:
    - /usr/bin/npm ci
  artifacts:
    paths:
      - node_modules
    when: on_success
    expire_in: 30 minutes

Package Application:
  stage: package
  image: ... #same situation as the Package job for int environment
  dependencies:
    - Run Composer Install
    - Npm Install
  needs: ["Npm Install", "Run Composer Install"]
  only:
    - tags
  script:
    - npm run build
    - rm -rf .git* Vagrantfile vagrant_files .env.example .env.local CONTRIBUTING.md build.xml
    - mkdir -p bootstrap/cache
    - chmod -R go+r ./*
    - find ./ -type d -exec chmod ugo+x {} \;
    - php artisan package:discover
    - php artisan optimize
    - tar -C ../ --exclude "$CI_PIPELINE_ID.tar.gz" -zcpf $CI_PIPELINE_ID.tar.gz --warning=no-file-changed $(basename $CI_PROJECT_DIR) || [[ $? -eq 1 ]]
  artifacts:
    paths:
    - $CI_PIPELINE_ID.tar.gz
    when: on_success
    expire_in: 30 days

Deploy:
  stage: deploy
  image: my_image
  needs: ["Package Application"]
  dependencies: ["Package Application"]
  script:
    - ./deploy_placeholder.sh

The first jobs run the build steps (installing dependencies). The Package job then takes the artifacts uploaded in the previous jobs, runs things like npm run build, and ultimately creates the .tar.gz file, which is uploaded as an artifact. Finally, the Deploy job pulls down the .tar.gz file and deploys it.

With this hierarchy, depending on how much time has passed between initial deployment and the redeploy, you'll only need to rerun a few jobs, which you can do with a single button press for each job.



Answered By - Adam Marshall
Answer Checked By - Clifford M. (PHPFixing Volunteer)

[FIXED] How to start a gitlab pipeline job that fails when test coverage goes below a threshold

 September 26, 2022     continuous-deployment, continuous-integration, gitlab, gitlab-ci, jestjs     No comments   

Issue

We have Jest unit testing for our React app and need to enforce a threshold of 80% test coverage. I know that we can get the coverage report with npm test -- --coverage --watchAll=false, but I am now tasked with failing the pipeline if the coverage goes below 80%. There is a test pipeline stage which is commented out right now. I have the following script; I need to somehow get the coverage and compare it: if it is 80 or more, pass; otherwise fail the pipeline. What should I do?

test:
  stage: test
  image: node:16.13.1
  before_script:
    - npm i
    - npx node -v
    - npx npm -v
  script:
    - echo "running test coverage"
    - npm test -- --coverage --watchAll=false
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'

Solution

Within your script section you should be able to use a regex to retrieve the coverage value from the report.

Then you can use some shell scripting to strip the % sign from the output and compare the value with your required minimum. If the value is below it, exit with a non-zero status, which GitLab interprets as a job failure.

If it's at or above your minimum value, exit zero and the job passes.



Answered By - quizguy
Answer Checked By - Marilyn (PHPFixing Volunteer)

Monday, August 22, 2022

[FIXED] How to Setup GitLab Environment Variables in a safe way?

 August 22, 2022     environment-variables, gitlab, gitlab-ci, gitlab-ci-runner     No comments   

Issue

I don't want to put sensitive credentials (like API keys, passwords, etc.) into my branch. For this, GitLab (and other CI/CD services) can set environment variables, which are injected into the script during the deployment process.

I know about two ways for GitLab to set them:

  1. Via UI: Project ⇒ Settings ⇒ CI/CD ⇒ Variables
  2. Via .gitlab-ci.yml

In my opinion the first way is the more secure one, because no files with credentials are saved in the git repo, but it's also the more complicated way, because I have to set each single variable by hand via the GitLab GUI.

With the second way, the issue is that .gitlab-ci.yml is saved in the GitLab repo, so the credentials are not secure.

Question: is there a way to define the env vars in a file and provide it to GitLab without putting them into the branch? Or is there another way to create those env vars easily and securely in GitLab?


Solution

Is there a way to define the env vars in a file?

Yes. In the UI settings you mentioned, you can set a variable's type to either variable (key: value) or file (the key is exposed as the path to a temporary secret file whose content is the value you entered).

So a file variable seems like what you are looking for.
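
For example, with a file-type variable named GCP_SA_KEY (a hypothetical name), a job sees a path to a temporary file rather than the secret text itself:

deploy:
  script:
    # $GCP_SA_KEY expands to the path of a temp file containing the stored value
    - cat "$GCP_SA_KEY" | docker login -u _json_key --password-stdin https://eu.gcr.io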

The README and the docs provide a good description of variables. Personally I find the other options very useful too: variable masking and the protected state.



Answered By - makozaki
Answer Checked By - Candace Johnson (PHPFixing Volunteer)