As per the Uyuni documentation, Uyuni is a solution for organizations that require robust control over maintenance and package deployment on their servers. It enables you to manage large sets of Linux systems and keep them up to date, with automated software management, asset management, and system provisioning.
- Uyuni Proxy caches packages and can be deployed as a containerised version. Steps to create the proxy are provided here: https://www.uyuni-project.org/uyuni-docs/en/uyuni/installation-and-upgrade/container-deployment/uyuni/proxy-container-setup-uyuni.html
In my case, the Uyuni proxy was working fine, but after a reboot it started throwing the error below in the Apache error logs.
Error
ssl_util_ocsp.c(120): (101)Network is unreachable: [client 10.70.34.166:36572] AH01974: could not connect to OCSP responder 'status.geotrust.com'
We don’t have an OCSP responder set up for our internal CA, yet the proxy was reaching out to the internet for the GeoTrust one. It was actually Apache trying to fetch the OCSP response for its own certificate to use with stapling.
My hypothesis was that disabling OCSP stapling would resolve the problem. The only catch: Uyuni Proxy is deployed from a pre-defined tar package, and its containers are created by the podman service. So disabling this parameter means changing the config inside the container and making sure the change survives a reboot, because the file where this parameter is defined is not mounted on an external volume.
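As a sanity check before changing anything, openssl can show whether the server attempts to staple an OCSP response during the TLS handshake (the host name below is a placeholder for the proxy's FQDN):
openssl s_client -connect uyuni-proxy.example.com:443 -status </dev/null 2>/dev/null | grep -A 2 "OCSP response"
A stapled certificate status shows up as an OCSP response block; "no response sent" means stapling is not in effect.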
Solution
First, I checked where this parameter is defined inside the container.
uyuni-proxy-pod:/ # grep -r "SSLUseStapling" /etc/*
/etc/apache2/ssl-global.conf: # the TLS handshake if SSLUseStapling is enabled. Configuration of a cache
/etc/apache2/vhosts.d/vhost-ssl.template: SSLUseStapling off
/etc/apache2/vhosts.d/ssl.conf: SSLUseStapling off
I realised that this parameter is set, together with the other parameters, by the configuration script inside the container:
/usr/bin/uyuni-configure.py
So I had to modify the script so that the generated config always has SSLUseStapling off, and then make sure the modified script survives a reboot.
Step 1
Modify the script. Since there is no vi editor in my container shell, I used sed.
uyuni-proxy-pod:/ # sed -i 's/SSLUseStapling on/SSLUseStapling off/g' /usr/bin/uyuni-configure.py
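To confirm the edit, grep the script again:
uyuni-proxy-pod:/ # grep -n "SSLUseStapling" /usr/bin/uyuni-configure.py
Since this script is what writes the vhost configuration, /etc/apache2/vhosts.d/ssl.conf will now always come out with SSLUseStapling off.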
Step 2
Now the hard part: making it survive a reboot 🙂 For that I had no option but to create a new image from the running container.
podman commit uyuni-proxy-httpd
podman tag 773d53e636dd ssloffv2
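Side note: podman commit can also take the target name directly, which folds the tag step into the commit:
podman commit uyuni-proxy-httpd localhost/ssloffv2
Either way you end up with a local image you can reference by name.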
ds-uyuni-proxy:/home/deploy # podman image ls
REPOSITORY                                     TAG               IMAGE ID      CREATED      SIZE
localhost/ssloffv2                             latest            773d53e636dd  9 days ago   258 MB
localhost/ssloff                               latest            7e682bcb7a47  9 days ago   258 MB
localhost/podman-pause                         4.9.5-1729598400  55b9a4e1e943  2 weeks ago  778 kB
registry.opensuse.org/uyuni/proxy-httpd        latest            d20d114caaec  4 weeks ago  256 MB
registry.opensuse.org/uyuni/proxy-salt-broker  latest            3bc06815baca  4 weeks ago  182 MB
registry.opensuse.org/uyuni/proxy-tftpd        latest            64852227fb79  4 weeks ago  195 MB
registry.opensuse.org/uyuni/proxy-squid        latest            6967765a161f  4 weeks ago  193 MB
registry.opensuse.org/uyuni/proxy-ssh          latest            b835a23a5e5c  4 weeks ago  188 MB
Now the image is ready to be used. I can save it as a tar file and pass it to the mgrpxy command, or load it into podman and run the container from it.
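A minimal sketch of the tar route (the file name is arbitrary; podman save and podman load are standard subcommands):
ds-uyuni-proxy:/home/deploy # podman save -o ssloffv2.tar localhost/ssloffv2
ds-uyuni-proxy:/home/deploy # podman load -i ssloffv2.tar
For the podman route, the run command for the httpd container just needs localhost/ssloffv2 as the image: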
/usr/bin/podman run --conmon-pidfile /run/uyuni-proxy-httpd.pid --cidfile /run/uyuni-proxy-httpd.ctr-id --cgroups=no-conmon --pod-id-file /run/uyuni-proxy-pod.pod-id -d --replace -dt -v /etc/uyuni/proxy:/etc/uyuni:ro -v uyuni-proxy-rhn-cache:/var/cache/rhn:z -v uyuni-proxy-tftpboot:/srv/tftpboot:z ${HTTPD_EXTRA_CONF} --name uyuni-proxy-httpd localhost/ssloffv2
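One caveat for reboot survival: the command above matches the pattern of a podman systemd unit's ExecStart, so the unit that starts the httpd container has to reference the new image as well. The paths and unit name below are assumptions from my setup; check where the image reference actually lives on yours (the unit file itself, or an environment file it sources):
ds-uyuni-proxy:/home/deploy # grep -rl "proxy-httpd" /etc/systemd/system/
ds-uyuni-proxy:/home/deploy # sed -i 's|registry.opensuse.org/uyuni/proxy-httpd:latest|localhost/ssloffv2|' /etc/systemd/system/uyuni-proxy-httpd.service
ds-uyuni-proxy:/home/deploy # systemctl daemon-reload && systemctl restart uyuni-proxy-httpd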
A couple of reboots confirmed the container survives with the configuration intact.
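To double-check, you can grep the Apache error log inside the container for the original error after a reboot (the log path is an assumption based on the SUSE Apache layout in the image):
uyuni-proxy-pod:/ # grep AH01974 /var/log/apache2/error_log
No output means Apache is no longer trying to reach the OCSP responder.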
“If not, then delete the app, JK 🙂 ”
