How a Git Error Made Me Clean Unrelated Oracle Database Trace Files (2019)

One fine day, I was pulling the latest changes from a repository on our own instance of GitLab CE, which is deployed on our Docker cloud infrastructure, and was faced with a strange error. I checked with my colleagues and most of them had been hitting the same error at random; here's how it went.

[image: gitlab-docker]

TL;DR

If you are suddenly faced with the following error while pulling from a git repository on your own instance of a git solution, check your application storage; one of the likely causes of such errors is insufficient storage:

> git pull origin master
error: RPC failed: HTTP 500 curl 22 The requested URL returned error: 500 Internal Server Error
fatal: The remote end hung up unexpectedly

And if you are upgrading a GitLab Omnibus instance and are faced with the following error:

Symlink at /var/log/gitlab/redis/config (pointing to /opt/gitlab/sv/redis/log/config) exists but attempting to resolve it results in a nonexistent file

Follow the steps below to fix it.

Back to the story

Okay, so when faced with a strange error in git (or anything else for that matter), the first thing I do is google it :P…

What I found was that there are multiple causes for such errors in git:

  • If it happens while pushing a commit, you may be pushing a large file (a large db-mock file, for example); in that case, either exclude it (it shouldn't be pushed in the first place) or increase your git client's buffer (see the sketch right after this list). That wasn't my case.

  • There are permission-related configurations that can be overlooked on a fresh GitLab install, and that wasn't my case either. Our GitLab instance had been running for 2 years straight, happily-ever-after (untouched, needless to say 🙂 ).
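
For that large-file case, the usual workarounds look something like this (just a sketch: the file name is made up, and the 500 MB buffer value is arbitrary):

> git rm --cached big-db-mock.sql
> echo "big-db-mock.sql" >> .gitignore
> git config http.postBuffer 524288000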

All of them were unrelated to our situation, so all that was left for me was to dig deep into GitLab's fancy logs in its administration panel. Two particular streams are interesting there: production-logs and git-logs.

I wasn't able to find any HTTP-500-related errors, but both logs were showing errors related to git-repack, a module that is in charge of packing and compressing your repositories to be served to git clients, and it was crashing.

A google search wasn't helpful with the git-repack issues either, but I got an idea: let's do a quick df -h and check the server's mount points and storage… and there it was:

/dev/mapper/vg-Docker    96G   96G    0G     100% /var/lib/docker

Our Docker container storage was full. We're using overlay2 as the storage driver and it sets the default size to 100GB (about 96GB), so let's do some quick housekeeping and check whether the problem really is about storage:
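
The cleanup itself is Docker's built-in prune (a sketch; note that without the --volumes flag it leaves named volumes alone):

> docker system prune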

With docker system prune, you remove all stopped containers, unused networks, dangling images (unused images), and build caches. After checking that all critical containers were up, I ran the prune command and was able to reclaim about 5GB, enough to test the theory.

It really was about storage; I was able to pull successfully.

About 100GB of storage is used by our containers, even though we use volumes to set them up on a different storage device with 2TB of space. We have about 15 containers up at a time, spanning apps and databases (all in staging), but there is a greedy one among them (consuming storage outside its mounted volumes) and I have to find it; growing /var/lib/docker would require deleting all images and containers in the process, and that is not feasible for me right now.
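
Before digging around manually, something like docker system df can give a first per-image, per-container, and per-volume breakdown (a sketch; the exact columns vary by Docker version):

> docker system df -v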

I cd'ed into /var/lib/docker/volumes and ran a du -h to find this greedy one, and here is what I found interesting:

.....
15.0G ./_data/u01/app/oracle/diag/rdbms/orclcdb/ORCLCDB/trace
.....
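
For reference, a pipeline along these lines surfaces the biggest offenders under the volumes directory (my own sketch, not the exact command from that day):

> du -h --max-depth=6 /var/lib/docker/volumes | sort -hr | head -20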

We have an Oracle database container, a single instance for staging applications, and it has only two schemas with no production load on it at all. Oracle RDBMS was consuming over 30 GB of storage (besides its already-mounted volumes), half of it being logs…

When I set it up, I followed Oracle's documentation for running an Oracle RDBMS in a Docker container and setting up its data volumes as well. I guess that's not enough; you have to tweak the RDBMS configs yourself for storage thresholds. What a nightmare. .ORA files, here we go.
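
For the trace files specifically, one way to purge them and set a retention policy is Oracle's ADRCI tool, run from inside the database container (a sketch: the ADR home matches the du output above, and the purge age and retention values are only illustrative):

> adrci
adrci> set home diag/rdbms/orclcdb/ORCLCDB
adrci> purge -age 10080 -type trace
adrci> set control (SHORTP_POLICY = 168, LONGP_POLICY = 720)

Note that purge -age is in minutes (10080 is one week), while the SHORTP_POLICY and LONGP_POLICY retention thresholds are in hours.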

[image: oracle]

I cleared the trace files and set up the configs. And I thought, while I'm at it, let's upgrade the GitLab instance; what could possibly go wrong.

Upgrading Omnibus GitLab Docker Instance

The upgrade process for a GitLab Docker instance is straightforward, as long as you are using data volumes. You remove the container, pull the next version of its image, and run a container with the old volumes; it will migrate to the new version seamlessly.
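
In command form, one hop of that cycle looks roughly like this (a sketch: the container name, host volume paths, and published ports are assumptions about the setup, and the tag is just the first stop on the upgrade path below):

> docker stop gitlab && docker rm gitlab
> docker pull gitlab/gitlab-ce:10.4.5-ce.0
> docker run --detach --name gitlab \
    --publish 443:443 --publish 80:80 --publish 22:22 \
    --volume /srv/gitlab/config:/etc/gitlab \
    --volume /srv/gitlab/logs:/var/log/gitlab \
    --volume /srv/gitlab/data:/var/opt/gitlab \
    gitlab/gitlab-ce:10.4.5-ce.0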

The problem is, I had been neglecting the upgrades for 2 years, during which 2 major versions were released… extra work had to be done.

GitLab follows the well-known MAJOR.MINOR.PATCH versioning, meaning that on each increment of MINOR and MAJOR, breaking changes and database changes are introduced. So there is a required path of updates you have to take one by one to reach the target version. Our GitLab instance was sitting at version 10.1.1; here is the path I had to take:

10.1.1 -> 10.4.5 -> 10.8.7 -> 11.11.8 -> 12.0.12 -> 12.10.6 -> 13.0.0 -> 13.2.0

At the first few versions, I faced some errors about missing log configuration files for redis, gitaly, and postgres; here is an example:

Symlink at /var/log/gitlab/redis/config (pointing to /opt/gitlab/sv/redis/log/config) exists but attempting to resolve it results in a nonexistent file

All three errors on each upgrade are the same, missing config files, and a google search wasn't helpful. So I had an idea…

I pulled the latest image of GitLab and ran it on fresh volumes, as if installing GitLab for the first time. Then I inspected the files that my upgrade was complaining about from that fresh image, and they were as follows (all three of them):

s209715200
n30
t86400
!gzip

I created a file called config and pasted those lines into it (they are svlogd log-rotation settings: maximum log size in bytes, number of rotated files to keep, rotation interval in seconds, and a gzip post-processor), removed the fresh container, and was able to use the docker cp command to copy the file into the volumes of our gitlab instance while it was in a stopped state:

> docker cp ./config gitlab:/opt/gitlab/sv/redis/log/config
> docker cp ./config gitlab:/opt/gitlab/sv/gitaly/log/config
> docker cp ./config gitlab:/opt/gitlab/sv/postgresql/log/config

And since those were logging-related config files (fingers crossed), I hoped they wouldn't cause any breaking issues while gitlab reconfigure was running.
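
From there, resuming an upgrade step is just a matter of starting the container again and watching it reconfigure itself (a sketch, container name assumed):

> docker start gitlab
> docker logs -f gitlab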

I had to make such adjustments to the gitlab volumes for at least four of the releases on my path, and it was successful in bypassing those errors.

Sometimes we face issues where we can't find anything helpful on the web to solve them, especially with things as big as Docker and GitLab. I hope this helps anybody who faces such issues, and I hope this was a good read.
