GitLab

The GitLab instance is hosted on our cluster and comes with a shared runner capable of running 40 (!) concurrent pipeline jobs, freely usable and without any hard limitations. Whilst the rest of the GitLab services live on the regular cluster or on undisclosed nodes (e.g. storage nodes), the pipeline jobs are spread across four dedicated nodes with 12 CPU cores and 64 GB of RAM each, so that even mammoth projects can be handled with relative ease.

Without overriding the configuration, any given pipeline job is limited to one CPU core and 5 GB of RAM, which in theory lets each node run 12 concurrent jobs (capped both by its 12 cores and by 64 GB / 5 GB ≈ 12 jobs' worth of memory). Across four nodes that would be 48 jobs; leaving some margin for error results in the 40 mentioned above. For jobs with a heavy workload you can raise these limits, which may well cause them to be queued longer, since they remain pending until the necessary resources can be allocated. There are, however, only a handful of cases in which this is actually necessary - like compiling the entirety of the Unreal Engine from source.
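If the shared runner is backed by GitLab's Kubernetes executor with resource overwrites enabled - an assumption on our part, consistent with the per-job defaults above but not confirmed here - a job could request more resources directly in its `.gitlab-ci.yml` via the standard overwrite variables:

```yaml
# Hedged sketch: assumes a Kubernetes executor whose administrators
# have enabled CPU/memory overwrites for jobs on this runner.
build-unreal:
  stage: build
  variables:
    # Ask for more than the default 1 core / 5 GB.
    KUBERNETES_CPU_REQUEST: "8"
    KUBERNETES_CPU_LIMIT: "8"
    KUBERNETES_MEMORY_REQUEST: "32Gi"
    KUBERNETES_MEMORY_LIMIT: "32Gi"
  script:
    - ./build.sh   # placeholder for the actual heavy build step
```

These overwrite variables only take effect if the runner's configuration permits them (via the `*_overwrite_max_allowed` settings in its `config.toml`); otherwise the job simply runs with the defaults.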

There are no storage limits to speak of, as long as things are kept sane; LFS is supported, and we are happy to introduce people to the world of CI/CD pipelines and sanely maintained git repositories. However, at least where staff are concerned, this happens on a strict "Be helpful, not useful" basis - because we want to help, not to be used.
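For anyone taking us up on that introduction, a minimal `.gitlab-ci.yml` along these lines (a generic sketch, not an instance-specific template - the image and commands are placeholders for your project's toolchain) is enough to get a first pipeline running on the shared runner:

```yaml
# Minimal two-stage pipeline; each job fits comfortably within
# the default 1-core / 5 GB limits, so no overrides are needed.
stages:
  - test
  - build

run-tests:
  stage: test
  image: python:3.12-slim   # placeholder image
  script:
    - pip install -r requirements.txt
    - pytest

build-package:
  stage: build
  image: python:3.12-slim
  script:
    - pip install build
    - python -m build
  artifacts:
    paths:
      - dist/   # keep the built package for download
```

Commit this file to the root of a repository and the shared runner will pick up the pipeline on the next push.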