
Docker Bridge Networks and Container Isolation - A Deep Dive

2026-01-04

Docker Bridge Networks and Host Service Accessibility

This post documents a networking challenge I encountered while deploying pgAdmin in a Docker container to manage a PostgreSQL database accessible only through an SSH reverse tunnel. The investigation reveals fundamental Docker networking design decisions that aren't immediately obvious.

The Setup

I needed to deploy pgAdmin on a VPS to manage a PostgreSQL database running on a remote workstation. The workstation sits behind a corporate network with strict egress policies—all remote access goes through an SSL/TLS tunnel using stunnel (covered in my outbound tunneling post).

The architecture looked like this:

Workstation (PostgreSQL) 
    → stunnel (TLS encryption)
    → Internet (port 443)
    → VPS stunnel (TLS termination)
    → SSHD (reverse tunnel)
    → Tunnel endpoint: 172.19.0.1:5432
    → pgAdmin container: 172.19.0.5

The Problem

The SSH reverse tunnel successfully forwards PostgreSQL traffic and binds to 172.19.0.1:5432 on the VPS—the gateway IP of the Docker traefik_default network.

From the VPS host, everything works:

# From VPS host - SUCCESS
$ nc -zv 172.19.0.1 5432
Connection to 172.19.0.1 5432 port [tcp/postgresql] succeeded!

$ psql -h 172.19.0.1 -U abdallah -d postgres
# Connected successfully

But from inside the pgAdmin container:

# From pgAdmin container - FAILURE
$ nc -zv 172.19.0.1 5432 -w 3
nc: 172.19.0.1 (172.19.0.1:5432): Operation timed out

The container cannot reach a service that's literally bound to its own network's gateway IP. What's going on?

Understanding Docker Bridge Networks

Docker bridge networks create isolated network segments. Each bridge network has:

  1. Its own subnet (172.19.0.0/16 for traefik_default in this setup)
  2. A dedicated Linux bridge interface on the host, which carries the gateway IP (172.19.0.1 here)
  3. A veth pair per attached container, joining the container's network namespace to the bridge
  4. iptables rules that NAT outbound traffic and isolate the network from other bridges

The Gateway Misconception

Here's what I got wrong: the gateway IP (172.19.0.1) is not a general-purpose entry point to host services. It exists primarily for:

  1. Outbound NAT (containers accessing the internet)
  2. Docker's internal DNS resolution
  3. Inter-container routing
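These details are easy to confirm from the host; a quick sketch using this post's network name (adjust to yours):

```shell
# Subnet and gateway of the user-defined bridge network
docker network inspect traefik_default \
  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'

# The matching Linux bridge interface on the host carries the gateway IP
ip -brief addr show type bridge
```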

While Docker doesn't explicitly block container→gateway connections, the interaction between several layers creates the observed behavior:

  1. Network namespaces: the container has its own network stack; packets to the gateway IP must cross the veth pair and the bridge before they reach the host's stack.
  2. Host firewall rules: that traffic then traverses the host's iptables chains, including the ones Docker manages, which can drop it before it reaches a listener.
  3. Listener bind address: a service bound specifically to 172.19.0.1 only accepts connections arriving on the host's stack addressed to that exact IP, and SSH's forwarding behavior complicates this further.

The result: services bound to the gateway IP from the host's perspective may not be reachable from containers, even though they appear to be on the same subnet.

The host-gateway Gotcha

Docker provides host-gateway as a special DNS value:

extra_hosts:
  - "host.docker.internal:host-gateway"

I assumed this would solve everything. It didn't.

host-gateway is a design simplification: it resolves to the default bridge gateway (172.17.0.1, unless overridden with the dockerd host-gateway-ip option), regardless of which network the container is attached to.

For containers on custom networks like traefik_default, this creates a mismatch:

# Inside container on traefik_default (172.19.0.0/16)
$ cat /etc/hosts | grep host.docker
172.17.0.1      host.docker.internal  # Wrong network!
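Assuming the container is attached only to traefik_default, the mismatch shows up clearly inside it:

```shell
# The host-gateway entry Docker wrote
grep host.docker.internal /etc/hosts
# 172.17.0.1      host.docker.internal

# The gateway the container actually routes through
ip route show default
# default via 172.19.0.1 dev eth0
```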

What I Tried

Attempt 1: Direct Gateway Connection

Bind the SSH tunnel to the Docker gateway IP and have containers connect directly.

Result: Failed. The SSH tunnel bound to the gateway IP isn't reachable from within the container's network namespace.
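For reference, the tunnel in this attempt looked roughly like this; the user and host names are placeholders, not my real setup:

```shell
# On the workstation: bind the remote end of the reverse tunnel to the
# Docker gateway IP on the VPS (needs GatewayPorts clientspecified in
# the VPS's sshd_config to honor a specific bind address)
ssh -N -R 172.19.0.1:5432:127.0.0.1:5432 user@vps.example.com
```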

Attempt 2: Socat Bridge with host-gateway

Use a socat container with host-gateway to bridge traffic to the host.

Result: Failed. host-gateway resolves to default bridge (172.17.0.1), not our network's gateway (172.19.0.1).
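The attempt boiled down to something like this; the image and ports are illustrative:

```shell
# socat container on traefik_default, relaying to host.docker.internal -
# which host-gateway maps to 172.17.0.1, the wrong gateway for this network
docker run --rm --network traefik_default \
  --add-host host.docker.internal:host-gateway \
  alpine/socat TCP-LISTEN:5432,fork,reuseaddr TCP:host.docker.internal:5432
```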

Attempt 3: Dual Socat Bridge

Chain two socat instances: one in host network mode, one in the Docker network.

Result: Failed. The internal socat still can't reach the host-network socat via gateway IP—same isolation issue.
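Sketch of the chained setup (ports and image illustrative):

```shell
# Leg 1 - host network namespace: re-expose the tunnel endpoint
docker run --rm --network host alpine/socat \
  TCP-LISTEN:15432,fork,reuseaddr TCP:127.0.0.1:5432

# Leg 2 - on traefik_default: relay to leg 1 via the gateway IP
docker run --rm --network traefik_default alpine/socat \
  TCP-LISTEN:5432,fork,reuseaddr TCP:172.19.0.1:15432
# Fails the same way: 172.19.0.1 is not reachable from inside the network
```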

Attempt 4: Public IP Access

Have containers connect via the VPS's public IP where socat listens.

Result: Would work technically, but exposes PostgreSQL proxy to the public internet—unacceptable security risk.

The Working Solution

Sometimes Docker isn't the answer.

I ran pgAdmin directly on the host (without Docker), pointed it at 127.0.0.1:5432, and used Traefik's file provider to route HTTPS traffic to the local pgAdmin instance:

# pgAdmin connects directly to localhost
pgAdmin → 127.0.0.1:5432 → SSH Tunnel → Remote PostgreSQL

This eliminates the Docker networking layer entirely for this specific service.
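The Traefik side can be sketched as a file-provider fragment. Everything here is illustrative: the hostname, pgAdmin's port (5050), and the assumption that pgAdmin listens on all host interfaces so that Traefik's container can reach it via host.docker.internal — which works here precisely because the target binds 0.0.0.0, unlike the SSH listener above:

```yaml
# dynamic/pgadmin.yml - Traefik file provider (names and paths illustrative)
http:
  routers:
    pgadmin:
      rule: "Host(`pgadmin.example.com`)"
      entryPoints:
        - websecure
      tls: {}
      service: pgadmin
  services:
    pgadmin:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:5050"
```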

Key Takeaways

  1. Gateway accessibility is not guaranteed: Services bound to Docker bridge gateway IPs may not be reachable from containers due to network namespace routing and SSH tunnel binding behavior.

  2. host-gateway is a simplification: It always resolves to the default bridge gateway (172.17.0.1), not the container's actual network gateway—a design choice, not a bug.

  3. SSH tunnel bind addresses matter: -R 127.0.0.1:5432 vs -R 0.0.0.0:5432 have very different accessibility implications.

  4. GatewayPorts in sshd_config: Must be set to yes or clientspecified before remote forwards can bind to non-localhost addresses.

  5. Sometimes skip Docker: For services requiring complex host networking, native installation may be simpler than fighting container isolation.
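Takeaways 3 and 4 in concrete form; addresses and hostnames are illustrative:

```shell
# sshd_config on the VPS - let the client pick the remote bind address:
#   GatewayPorts clientspecified

# Loopback-only bind: reachable only from the VPS host itself
ssh -N -R 127.0.0.1:5432:127.0.0.1:5432 user@vps.example.com

# All-interfaces bind: reachable from other machines too - much wider exposure
ssh -N -R 0.0.0.0:5432:127.0.0.1:5432 user@vps.example.com
```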

Conclusion

This investigation revealed that the interaction between multiple networking layers—SSH reverse tunnels, network namespaces, and Docker bridge networks—can create unexpected connectivity challenges. The behavior emerged not from any single component's design, but from how these layers interact.

Understanding these boundaries helps inform better architectural decisions: not every service benefits from containerization, especially when complex host networking is required.