OpenShift 3.3 and later contain the functionality to route pod traffic to the external world via a well-defined IP address. This is useful, for example, if your external services are protected by a firewall and you do not want to open the firewall to all cluster nodes.
It works by creating an egress pod that sets up a macvlan interface inside the pod’s network namespace, connected to the default network. Traffic sent to the pod’s IP address is then forwarded to a specific destination IP (EGRESS_DESTINATION) via the macvlan interface.
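Conceptually, the setup performed inside the egress pod boils down to a few commands. The following is a simplified sketch, not the actual egress-router image’s script; the interface names and addresses are illustrative and correspond to the EGRESS_* environment variables:

```shell
# Sketch of what an egress router sets up inside the pod's network
# namespace (simplified; the real openshift3/ose-egress-router image
# does this based on the EGRESS_* environment variables).

# Assign the well-defined source IP to the macvlan interface and
# route outgoing traffic via the external gateway.
ip addr add 192.168.12.99/24 dev macvlan0   # EGRESS_SOURCE (illustrative)
ip route add default via 192.168.12.1       # EGRESS_GATEWAY (illustrative)

# DNAT anything arriving at the pod's SDN interface to the external
# destination, and SNAT it so replies return to EGRESS_SOURCE.
iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination 203.0.113.25
iptables -t nat -A POSTROUTING -o macvlan0 -j SNAT --to-source 192.168.12.99
```

These commands require root privileges inside the pod, which is why the pod spec below runs the container with `privileged: true`.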
You would typically front such an egress pod with a service declaration so that you do not have to hardcode the pod’s IP address (on the internal OpenShift SDN) but can simply reference it via the service name and resolve it using DNS.
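Such a service could look like the following sketch, which selects the egress pod via its `name` label (the service name and port are assumptions for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: egress-1-svc   # hypothetical name; clients resolve it via cluster DNS
spec:
  ports:
  - port: 80
  selector:
    name: egress-1     # matches the label on the egress pod
```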
An additional feature is the ability to define an egress policy per OpenShift project / Kubernetes namespace. Here you can explicitly state which (external) IPs a pod may access and which it may not.
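As a sketch, such an EgressNetworkPolicy could look like this (the CIDRs are illustrative; rules are evaluated in order, so the final Deny blocks everything not explicitly allowed):

```yaml
apiVersion: v1
kind: EgressNetworkPolicy
metadata:
  name: default
spec:
  egress:
  # Allow traffic to one external network ...
  - type: Allow
    to:
      cidrSelector: 192.168.12.0/24
  # ... and deny everything else.
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
```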
4 replies on “OPENSHIFT NETWORKING FROM A CONTAINER/WORKLOAD POINT OF VIEW – PART 6: CONTROLLING EGRESS TRAFFIC”
Thank you for posting this. I’m currently working on deploying egress routers in Origin 1.4.1 and have been reading and listening to how routing works in Docker to formulate a proper diagram. If you look at how macvlan works, it doesn’t use bridging and it splits your public interface. If you want a more accurate visual, I’d recommend adjusting your diagram.
Perhaps I should have phrased my initial reply differently and asked you to elaborate, because I see that the macvlan driver can have different modes. From what I understand, macvlan is used to bypass bridging for latency-sensitive applications and also so that the container can get exposed to the host network. Can you please elaborate?
Okay, last reply. There is no bridge as OpenShift uses macvlan in private mode:
https://github.com/openshift/origin/commit/620dae31c2a45de9db550c3137dc2abdb9d72bff
http://hicu.be/bridge-vs-macvlan
Hi canit00, thank you for your comment and research. What I meant to depict is that the egress pod has two NICs, one “default” veth pair whose endpoint is plugged into br0 and one macvlan NIC which connects directly to the network where the node’s eth0 is plugged into:
$ cat egress-pod.json
apiVersion: v1
kind: Pod
metadata:
  name: egress-1
  labels:
    name: egress-1
  annotations:
    pod.network.openshift.io/assign-macvlan: "true"
spec:
  containers:
  - name: egress-router
    image: openshift3/ose-egress-router
    securityContext:
      privileged: true
    env:
    - name: EGRESS_SOURCE
      value: 192.168.12.99
    - name: EGRESS_GATEWAY
      value: 192.168.12.1
    - name: EGRESS_DESTINATION
      value: 203.0.113.25
$ oc create -f egress-pod.json
pod "egress-1" created
$ oc rsh egress-1
sh-4.2# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
3: eth0@if99: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP
    link/ether b2:6c:50:38:8c:e1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.131.0.88/23 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b06c:50ff:fe38:8ce1/64 scope link
       valid_lft forever preferred_lft forever
4: macvlan0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 06:cc:02:a1:ce:d1 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.12.99/32 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::4cc:2ff:fea1:ced1/64 scope link
       valid_lft forever preferred_lft forever
sh-4.2#