Mon 02 June 2025 in security by Regalis
Make your internal infrastructure offline!
Keeping your infrastructure isolated and carefully splitting hosts/VMs/interfaces into air-gapped security domains is crucial. It is one of the most effective techniques for protecting infrastructure against data leaks and remote takeovers.
Why is it so important?
- Eliminates entire classes of cyberattacks: attackers can’t hit what they can’t reach. Ransomware, zero-day exploits, and automated botnets typically require inbound/outbound connectivity to work.
- No silent data theft – if malware or a malicious insider tries to steal your data, they can’t call home without Internet access. Data stays inside your walls.
- Simplifies monitoring & forensics - in an offline network, any unexpected connection attempt is an instant red flag (vs. noisy Internet traffic).
- Future-proofs against unknown threats.
- Real Zero Trust (without the complexity).
Local Nginx caching proxy + Harbor
Isolating your infrastructure from the Internet prevents direct access to external package repositories and container registries. To maintain updates and deploy containers, you'll need to set up a local caching proxy to serve these resources internally. While this adds some setup, it ensures your environment remains secure and fully operational without external connectivity.
Key benefits:
- packages are downloaded only once - once fetched, they're distributed via the local network, dramatically speeding up multi-host updates,
- new host deployment (e.g. from templates) becomes instantaneous since all packages are already available locally,
- no tooling changes required - everything works exactly the same way (just the standard apt update && apt upgrade),
- container images are also cached (Docker/Podman/Kubernetes), using Harbor.
It’s not as hard as you think... Let's go! 🧑🏭
How do we keep most of our infrastructure offline?
We keep our critical systems offline and secure without productivity loss. Below is a high-level overview of an offline infrastructure.
Note that in this model we maintain a single virtual machine which is connected to the Internet. It can be monitored with special attention, and it is possible to filter its traffic using Squid Proxy or regular Nginx.
Of course the virtual machine of our main firewall is also connected to the Internet, but let's be honest - that's a critical system that was always going to need special treatment.
Internal architecture diagram and communication flow
Key points:
- in our setup, all communication between offline VMs happens within the local network,
- virtual machines only have access to package repositories and container registries - nothing else,
- infrastructure monitoring becomes drastically simpler,
- it’s worth deploying a few additional services (like a local DNS server or NTP server) in the network.
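To enforce the "repositories and registries only" rule at the network level, the firewall can whitelist just those destinations. A sketch in nftables syntax (all addresses, subnets and ports here are illustrative assumptions, not our actual topology):

```
# /etc/nftables.conf -- illustrative fragment
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;

        # offline VMs may talk only to the internal cache/registry VM
        ip saddr 10.0.10.0/24 ip daddr 10.0.20.5 tcp dport { 80, 443 } accept

        # allow the internal DNS and NTP helper services
        ip saddr 10.0.10.0/24 ip daddr 10.0.20.53 udp dport { 53, 123 } accept
    }
}
```

With a default drop policy, any other connection attempt from an offline VM shows up in the firewall logs - which is exactly the "instant red flag" property mentioned above.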
System repositories cache configuration
We use something as simple as Nginx. The configuration defines a proxy_cache_path with a cache zone keyed rt_debian, together with a server block that proxies and caches requests to the upstream Debian repositories.
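A minimal sketch of such a configuration (hostnames, paths, cache sizes and TTLs below are illustrative assumptions, not our original values):

```nginx
# /etc/nginx/conf.d/debian-cache.conf -- illustrative sketch
# Cache zone "rt_debian"; sizes and retention are assumptions.
proxy_cache_path /var/cache/nginx/rt_debian
    levels=1:2 keys_zone=rt_debian:16m
    max_size=50g inactive=14d use_temp_path=off;

server {
    listen 80;
    server_name debian.cache.example.internal;  # assumed internal name

    location / {
        proxy_pass http://deb.debian.org;
        proxy_set_header Host deb.debian.org;
        proxy_cache rt_debian;
        access_log /var/log/nginx/debian-cache.log cachelog;
        include /etc/nginx/include/regalis-tech-cache-common.conf;
    }
}
```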
We also use a custom log format (named cachelog) which includes the $upstream_cache_status variable.
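One way such a log format might be defined (a sketch; the exact field selection is an assumption):

```nginx
# http-level directive; $upstream_cache_status logs HIT/MISS/EXPIRED/etc.
log_format cachelog '$remote_addr [$time_local] "$request" '
                    '$status $body_bytes_sent '
                    'cache=$upstream_cache_status';
```

Grepping the access log for cache=MISS then gives a quick view of what is actually being fetched from upstream.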
The file /etc/nginx/include/regalis-tech-cache-common.conf contains multiple config directives (mostly proxy_cache_*) from the ngx_http_proxy_module.
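A sketch of what such a shared include could contain (directive values are assumptions, not the original file):

```nginx
# /etc/nginx/include/regalis-tech-cache-common.conf -- illustrative sketch
proxy_cache_valid 200 301 302 7d;   # keep successful responses for a week
proxy_cache_valid 404 5m;           # short TTL for misses
proxy_cache_lock on;                # collapse concurrent misses into one upstream fetch
proxy_cache_use_stale error timeout updating;  # serve stale content on upstream problems
proxy_ignore_headers Cache-Control Expires;    # repository metadata often disables caching
```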
Adding support for other distributions is relatively straightforward. In our network, we maintain several test machines running Arch Linux and Alpine, along with a cache for one external repository (Docker for Debian).
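For example, an Arch Linux mirror could be cached with an analogous server block (the mirror URL, server name and cache zone are assumptions):

```nginx
server {
    listen 80;
    server_name arch.cache.example.internal;   # assumed internal name

    location / {
        proxy_pass https://geo.mirror.pkgbuild.com;
        proxy_set_header Host geo.mirror.pkgbuild.com;
        proxy_cache rt_arch;   # separate zone, defined analogously to rt_debian
        include /etc/nginx/include/regalis-tech-cache-common.conf;
    }
}
```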
Docker/Podman/Kubernetes cache - Harbor
Harbor is an open source, CNCF-graduated implementation of the OCI Distribution Spec. It can serve as a proxy cache to upstream container registries.
Benefits of using Harbor as a proxy registry:
- your internal hosts can pull all images directly from Harbor,
- images are downloaded only once, then cached locally for repeated use,
- Harbor includes a built-in image scanner to identify vulnerabilities,
- OIDC support for integrating external authentication providers,
- Harbor can host your internal container images, improving privacy,
- you no longer need to worry about image pull rate limits imposed by public registries - Harbor takes care of that by caching once and serving locally!
Hosts configuration
All Debian GNU/Linux VMs have their sources.list configured to point at our proxy/cache VM.
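With an internal cache VM in place, a sources.list might look like this (the hostname and URL paths are assumptions that depend on how the proxy maps upstream repositories):

```
deb http://debian.cache.example.internal/debian bookworm main
deb http://debian.cache.example.internal/debian-security bookworm-security main
deb http://debian.cache.example.internal/debian bookworm-updates main
```

After this change, the standard apt update && apt upgrade works unmodified - only the download source differs.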
Regarding Docker or Kubernetes configuration, simply reference images through the local Harbor registry instead of the upstream one.
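Assuming a Harbor proxy-cache project named dockerhub-proxy (the project name and registry hostname here are assumptions), image references could look like this:

```
# Docker / Podman
docker pull harbor.example.internal/dockerhub-proxy/library/nginx:1.27

# Kubernetes pod spec fragment
spec:
  containers:
    - name: web
      image: harbor.example.internal/dockerhub-proxy/library/nginx:1.27
```

On the first pull Harbor fetches the image from the upstream registry and caches it; every later pull is served entirely from the local network.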
How to update hosts without access to your cache or repository?
You can use SSH
for that, I have written an aricle about this: