datatracker/helm/templates/configmap.yaml
Jennifer Richards 30a4a5a77b ci: run rabbitmq as non-root (#7362)
* ci: securityContext for rabbitmq

* ci: logs from rabbitmq as json to console

* ci: tmp volume for rabbitmq

Needed since rootfs is now read-only

* ci: fix permissions on /var/lib/rabbitmq vol

Rabbitmq needs to be able to write to the fs at
/var/lib/rabbitmq. It may be possible to get rid
of the initContainer and use fsGroup in the pod
securityContext to manage this, but that does not
work for the hostVolume mounts I use for dev.
The solution here moves the actual mount to the
rabbitmq/ directory in the rabbitmq-data volume
and uses an initContainer to set the permissions
on that. That should work for any volume type.
2024-05-13 21:41:36 -04:00
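
The permissions fix described in the commit could look roughly like the following in the rabbitmq deployment template. This is a minimal sketch, not the chart's actual manifest: the rabbitmq-data volume name, the /var/lib/rabbitmq mount path, the rabbitmq/ subdirectory, the writable /tmp volume, and the read-only root filesystem come from the commit message, while the image tags, UID 999 (the rabbitmq user in the official image), and the exact field layout are assumptions.

spec:
  initContainers:
    # Runs as root only to create the rabbitmq/ subdirectory and hand it to the
    # rabbitmq user; the main container never runs as root.
    - name: init-rabbitmq-data
      image: busybox:1.36
      command: ["sh", "-c", "mkdir -p /mnt/rabbitmq && chown -R 999:999 /mnt/rabbitmq"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: rabbitmq-data
          mountPath: /mnt
  containers:
    - name: rabbitmq
      image: rabbitmq:3.13
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
      volumeMounts:
        # Mount only the rabbitmq/ subdirectory whose ownership the initContainer
        # fixed, so the same approach works for hostPath volumes in dev and for PVCs.
        - name: rabbitmq-data
          mountPath: /var/lib/rabbitmq
          subPath: rabbitmq
        # Writable /tmp, needed because the root filesystem is read-only.
        - name: rabbitmq-tmp
          mountPath: /tmp
  volumes:
    - name: rabbitmq-tmp
      emptyDir: {}
    - name: rabbitmq-data
      persistentVolumeClaim:
        claimName: rabbitmq-data  # assumption; a hostPath volume is used for dev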


apiVersion: v1
kind: ConfigMap
metadata:
  name: django-configmap
data:
  settings_local.py: |-
    {{- .Files.Get "settings_local.py" | nindent 4 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: rabbitmq-configmap
data:
  definitions.json: |-
    {
      "permissions": [
        {
          "configure": ".*",
          "read": ".*",
          "user": "datatracker",
          "vhost": "dt",
          "write": ".*"
        }
      ],
      "users": [
        {
          "hashing_algorithm": "rabbit_password_hashing_sha256",
          "limits": {},
          "name": "datatracker",
          "password_hash": "HJxcItcpXtBN+R/CH7dUelfKBOvdUs3AWo82SBw2yLMSguzb",
          "tags": []
        }
      ],
      "vhosts": [
        {
          "limits": [],
          "metadata": {
            "description": "",
            "tags": []
          },
          "name": "dt"
        }
      ]
    }
  rabbitmq.conf: |-
    # prevent guest from logging in over tcp
    loopback_users.guest = true
    # load saved definitions
    load_definitions = /etc/rabbitmq/definitions.json
    # Ensure that enough disk is available to flush to disk. To do this, the memory
    # available to the container needs to be limited to something reasonable. See
    # https://www.rabbitmq.com/production-checklist.html#monitoring-and-resource-usage
    # for recommendations.
    # 1-1.5 times the memory available to the container is adequate for the disk limit
    disk_free_limit.absolute = 6000MB
    # This should be ~40% of the memory available to the container. Use an
    # absolute number because a relative limit would be proportional to the full
    # machine memory.
    vm_memory_high_watermark.absolute = 1600MB
    # Logging
    log.file = false
    log.console = true
    log.console.level = info
    log.console.formatter = json
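
For context, rabbitmq.conf above expects to be read from /etc/rabbitmq, and its sizing comments imply a container memory limit of roughly 4 GB (40% of 4 GB ≈ 1600 MB for the high watermark, 1-1.5x ≈ 6000 MB for the disk free limit). Below is a hedged sketch of how this ConfigMap and such a limit could be wired into the rabbitmq container; only the target paths come from the config itself, while the per-key subPath mount style and the 4Gi figure are assumptions, not the chart's actual deployment.

  containers:
    - name: rabbitmq
      volumeMounts:
        # Place the two keys of rabbitmq-configmap where rabbitmq.conf and
        # load_definitions expect to find them.
        - name: rabbitmq-config
          mountPath: /etc/rabbitmq/rabbitmq.conf
          subPath: rabbitmq.conf
        - name: rabbitmq-config
          mountPath: /etc/rabbitmq/definitions.json
          subPath: definitions.json
      resources:
        limits:
          # ~4 GB: 40% = 1600 MB watermark, 1-1.5x = 6000 MB disk free limit
          memory: "4Gi"
  volumes:
    - name: rabbitmq-config
      configMap:
        name: rabbitmq-configmap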