Issue
I have a neat GitLab pipeline consisting of 5 stages and more than 10 jobs that test, build, release, and deploy the result to several environments.
Sometimes we need to roll back one environment to an older release. With Jenkins this was easily done with an extra job that only deployed an existing release from the repository, without building the project again.
On GitLab, it seems that I either have to create a new project with its own pipeline just for deployment, or add a rule to each job saying: don't run the build and test jobs, only the deploy job, when the environment variable DEPLOY_OLD_RELEASE is set.
I don't like either way: creating a project just for deployment seems like huge overhead, and modifying all the jobs is a lot of effort and not easy to maintain.
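Such a rule would have to be repeated on every single build and test job, roughly like this (the job name and script here are just placeholders):

Run Tests:
  stage: test
  # hypothetical job: skipped when DEPLOY_OLD_RELEASE is set, runs normally otherwise
  rules:
    - if: '$DEPLOY_OLD_RELEASE'
      when: never
    - when: on_success
  script:
    - ./run_tests.sh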
Solution
Once a GitLab job passes, you can re-run it any number of times. For example, if you have a deploy job that scps a tar file to a remote server, sshs in to extract it, moves it to a specific directory, changes symlinks, etc., you can just re-run that specific job.
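As a minimal sketch of what such a self-contained deploy script might look like (the host, user, paths, and release number are all made up for illustration):

#!/bin/sh
# Hypothetical deploy script: copy an already-built tarball to the server,
# unpack it into a versioned release directory, and flip the "current" symlink.
# Host, user, and paths are placeholders.
scp app.tar.gz deploy@app.example.com:/tmp/app.tar.gz
ssh deploy@app.example.com '
  mkdir -p /var/www/releases/1234 &&
  tar -xzf /tmp/app.tar.gz -C /var/www/releases/1234 &&
  ln -sfn /var/www/releases/1234 /var/www/current
'

Because a job like this only moves an existing tarball around, re-running it redeploys the exact same build without rebuilding anything.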
However, since it sounds like you have a pretty well-defined pipeline, the tar file likely isn't built in your deploy job but in something like a "Package App" job, which uploads the final tar file as an artifact so that your deploy job can deploy it.
If that's the case, then as long as your 'Package' job's artifacts still exist (they expire based either on the server's default or on a time you specify in the job definition), you can simply re-run the Deploy job. If those artifacts have expired, you'll have to re-run the Package job first, then the Deploy job.
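Re-running a job is a single click of the Retry button in the GitLab UI. If you'd rather script a rollback, GitLab's Jobs API also exposes a retry endpoint; the hostname, project ID, job ID, and token below are placeholders:

# Retry a previously-passed job by ID (host, IDs, and token are placeholders)
curl --request POST \
  --header "PRIVATE-TOKEN: $GITLAB_API_TOKEN" \
  "https://gitlab.example.com/api/v4/projects/123/jobs/45678/retry"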
However, maybe your 'Package' job depends on things that aren't built inside it, for example running npm ci or composer install (for PHP projects). If these steps happen in jobs in prior stages whose output is uploaded as artifacts that the 'Package' job consumes, you may have to re-run those jobs as well.
This may sound like it requires re-running a lot of jobs, but if you have your expiry times set appropriately for your purposes, it isn't bad. For example, I have my "build" jobs (those that install npm dependencies, etc.) expire in 30 minutes, since the "package" job runs immediately after they complete and I never need their artifacts directly again. Conversely, I keep my 'package' job artifact(s) for 2 weeks so that I can redeploy relatively quickly without rebuilding dependencies.
In my experience, I almost never redeploy a build older than 2 weeks, since there is usually a newer build to replace it. You should adjust the expiry times to suit each project, however.
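For reference, expire_in accepts human-readable durations, so the two policies above differ only in the value (these are fragments of a job's artifacts section, not complete jobs):

# build jobs: artifacts only need to survive until the package stage runs
artifacts:
  expire_in: 30 minutes

# package job: keep the tarball around long enough to redeploy it
artifacts:
  expire_in: 2 weeks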
Here's a pipeline definition similar to the one I use for my projects, as an example. This is a PHP backend project with a JS/Vue frontend, so most jobs won't apply to you directly, but it should give you a good idea of how to proceed.
stages:
  - build
  - package
  - deploy

Run Composer Install:
  stage: build
  image: composer:latest
  only:
    - tags
  needs: []
  before_script:
    - echo ${COMPOSER_AUTH_JSON} > auth.json
  script:
    - composer clearcache; composer self-update --2
    - php -d memory_limit=-1 /bin/composer install --ignore-platform-reqs --no-dev
  artifacts:
    paths:
      - vendor
    when: on_success
    expire_in: 30 minutes

Npm Install:
  stage: build
  image: hub.comcast.net/code_common/spa_site_7.4:5
  only:
    - branches
    - tags
  needs: []
  script:
    - /usr/bin/npm ci
  artifacts:
    paths:
      - node_modules
    when: on_success
    expire_in: 30 minutes

Package Application:
  stage: package
  image: ... # same situation as the Package job for int environment
  dependencies:
    - Run Composer Install
    - Npm Install
  needs: ["Npm Install", "Run Composer Install"]
  only:
    - tags
  script:
    - npm run build
    - rm -rf .git* Vagrantfile vagrant_files .env.example .env.local CONTRIBUTING.md build.xml
    - mkdir -p bootstrap/cache
    - chmod -R go+r ./*
    - find ./ -type d -exec chmod ugo+x {} \;
    - php artisan package:discover
    - php artisan optimize
    - tar -C ../ --exclude "$CI_PIPELINE_ID.tar.gz" -zcpf $CI_PIPELINE_ID.tar.gz --warning=no-file-changed $(basename $CI_PROJECT_DIR) || [[ $? -eq 1 ]]
  artifacts:
    paths:
      - $CI_PIPELINE_ID.tar.gz
    when: on_success
    expire_in: 30 days

Deploy:
  stage: deploy
  image: my_image
  needs: ["Package Application"]
  dependencies: ["Package Application"]
  script:
    - ./deploy_placeholder.sh
The two build-stage jobs install dependencies and upload them as artifacts. Next, the Package job takes those artifacts, runs things like npm run build, and ultimately creates the .tar.gz file, which is itself uploaded as an artifact. Finally, the Deploy job pulls down the .tar.gz file and deploys it.
With this hierarchy, depending on how much time has passed between the initial deployment and the redeploy, you'll only need to re-run a few jobs, each of which is a single button press.