In our last post about UDP hole punching we learned how to use UDP traffic to exploit the stateful nature of firewalls, thus bypassing blocks on incoming connections. That was a fun exercise, but of limited use in the real world.
In this post we’re going to turn it up a notch and encapsulate other IP traffic inside a UDP stream. The goal of this exercise is to establish seamless TCP connections between two hosts that have all TCP traffic blocked between them.
UDP hole punching is a widely documented firewall bypass technique. It doesn’t exploit any bugs or flaws; instead, it takes advantage of UDP’s sessionless nature and firewalls’ stateful behavior.
The technique relies on the fact that a UDP conversation has no session establishment, so in reality there is no concept of an inbound or outbound connection – only inbound or outbound packets. That means a firewall device has to rely on limited information to let returning packets through: in fact, it relies only on the UDP 5-tuple of source IP address, source port, destination IP address, destination port, and protocol.
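The traffic pattern behind the technique can be sketched in a few lines of Python. This is only an illustration on loopback, where there is no firewall in the way; the socket calls are the same when each peer sits behind its own stateful firewall:

```python
import socket


def udp_socket(port: int = 0) -> socket.socket:
    """Bind a UDP socket; port 0 picks a free ephemeral port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    return sock


def punch(sock: socket.socket, peer: tuple) -> None:
    """Send one outbound datagram toward the peer.

    A stateful firewall in front of `sock` records the 5-tuple
    (src IP, src port, dst IP, dst port, UDP) and from then on lets the
    peer's return packets through, even though no "connection" exists.
    """
    sock.sendto(b"punch", peer)


if __name__ == "__main__":
    # Both peers punch toward each other's known address, so each side's
    # firewall sees an outbound packet first and admits the reply traffic.
    a, b = udp_socket(), udp_socket()
    punch(a, b.getsockname())
    punch(b, a.getsockname())
    a.settimeout(2)
    data, src = a.recvfrom(1024)  # b's punch packet arrives at a
    print(data, src)
```

In a real scenario the hard part is learning the peer’s public IP and port in the first place, which is usually done through a third-party rendezvous server.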
One of Clarke’s laws says that any sufficiently advanced technology is indistinguishable from magic; and with container technologies I find that to be true more often than not.
Container networking might be one of those areas where magic seems to happen all the time. As you might know, the problem with magic is that you can’t troubleshoot it, so if something fails you are doomed to go through a costly and stressful process of trial and error.
One of the most valued skills in a technical position is the ability to slice a big, hairy, seemingly magical, logic-defying problem into tiny pieces and solve them one at a time. This process builds a knowledge foundation that eventually gives you a full understanding of a product, scenario or technology. The good news is that anyone can do it.
In this article we’re going to take container networking apart into tiny pieces and look at each of them individually. This will help me learn more about container networking as I’ll have to do research to write the article, but hopefully it will help you too!
We have established that, for some of us, container networking feels magical. This is because things just work when the container needs outbound connectivity, and there’s little to do (expose ports) when it needs inbound connectivity. This is true on a default Docker installation, but what does a default Docker installation entail on the networking side of things?
Many of you will know about Qualys SSL Labs and their comprehensive (and free!) SSL Test. If you didn’t, go check it out now; it’s amazing. It runs a series of security checks on a given SSL endpoint, including protocol version and ciphersuite compatibility.
Alright, now that we’re all on the same page, I want to share with you a little project I did for educational purposes. The project’s name is TLS Checker, and it’s a little online tool that checks which TLS protocol version and ciphersuite combinations a given site supports. It doesn’t check anything else, and that’s where any comparison with Qualys’ SSL Test ends.
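The core of such a check can be sketched with Python’s standard `ssl` module: pin a client context to a single protocol version and see whether the handshake succeeds. This is a sketch for illustration, not TLS Checker’s actual code; the host name is an example:

```python
import socket
import ssl

# Protocol versions to probe, oldest to newest.
CANDIDATES = [
    ssl.TLSVersion.TLSv1,
    ssl.TLSVersion.TLSv1_1,
    ssl.TLSVersion.TLSv1_2,
    ssl.TLSVersion.TLSv1_3,
]


def probe_version(host, version, port=443, timeout=5.0):
    """Return the negotiated ciphersuite name if `host` accepts `version`, else None."""
    ctx = ssl.create_default_context()
    # Pin both ends of the allowed range to a single protocol version.
    ctx.minimum_version = version
    ctx.maximum_version = version
    # The probe only cares about the handshake, not certificate validity.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.cipher()[0]  # (name, protocol, bits) -> name
    except OSError:  # covers ssl.SSLError, timeouts and refused connections
        return None


if __name__ == "__main__":
    for version in CANDIDATES:
        suite = probe_version("example.com", version)
        print(f"{version.name}: {suite or 'not supported'}")
```

One caveat: modern OpenSSL builds often refuse TLS 1.0/1.1 locally regardless of what the server supports, so a real checker would need to distinguish a local policy failure from a genuine server rejection.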
During these holidays I’ve spent some time setting up a VPN between my on-premises network and an Azure VNet. To set up the connectivity I used StrongSwan on Linux on the on-premises side and a VpnGw1 VPN Gateway in Routed/Dynamic mode on the Azure side.
| Setting | Value |
| --- | --- |
| VNet GW address | |
| VNet GW type | Routing / Dynamic |
| VNet IP address space | 10.11.0.0/16 and 10.12.0.0/16 |
| StrongSwan OS | Ubuntu 16.04 LTS |
| StrongSwan IP address space | 100.64.0.0/24 |
| StrongSwan version (“ipsec version”) | Linux strongSwan U5.3.5/K4.4.0-22-generic |

Note: The two VNet ranges above are not subnets from the same VNet, but separate address spaces.
Note: The Ubuntu 16.04 LTS on-premises machine was acting as a router between the VPN tunnel and my on-premises network (100.64.0.0/24).
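For reference, the on-premises side of such a setup boils down to a strongSwan connection definition along these lines. This is a sketch with placeholder values, not my exact configuration; the gateway’s public IP is a placeholder and the pre-shared key lives in `/etc/ipsec.secrets`:

```
# /etc/ipsec.conf -- illustrative sketch, values are placeholders
conn azure
    keyexchange=ikev2          # Azure route-based (Dynamic) gateways use IKEv2
    type=tunnel
    authby=secret              # pre-shared key, defined in /etc/ipsec.secrets
    left=%defaultroute
    leftsubnet=100.64.0.0/24   # on-premises network behind the StrongSwan box
    right=<VNet GW public IP>  # placeholder for the gateway address
    rightsubnet=10.11.0.0/16,10.12.0.0/16
    auto=start
```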
Azure Container Networking released CNM (libnetwork) and IPAM plugins for Docker (and CNI plugins for k8s and DC/OS), making containers first-class citizens in your Azure VNet. The days of doing NAT behind the host’s IP address would be numbered 🙂 if it weren’t for the fact that the plugin is still in public preview!
As I’ve mentioned in previous articles, I run my Docker development environment on a Windows 10 laptop (even though I mostly work with Linux containers).
At some point the list of stopped containers had grown so long that deleting them became a chore, so I resorted to PowerShell to help with that.
… or how to quickly deploy a “ntttcp for Linux” server as a container in Azure without any previous infrastructure deployed.
Do you need to quickly test the throughput you can push from a specific location to one of the Azure regions?
I have been in that situation more than once, and so far my technique has always been to deploy a VM in one of my Azure VNets, install whatever software I want to test with (e.g. ntttcp), open the required ports on the NSGs and then, finally, test. This is time consuming, and even if I deallocate the VM (so that next time I only have to boot it up and test), I might need to test in another region!
I have now found a better solution, which involves deploying an ACI running ntttcp for Linux in the region of my choice. Here are the instructions for Azure CLI:
How many times have you asked a customer to run a test with some specific tools, just to have to fight countless compatibility problems due to the particularities of the customer’s environment?
Yeah, I’ve been there too.
Containers should be designed to be immutable and run in the same way any time you deploy them, regardless of the environment.
This article will give you some guidelines to containerize any application. Once your application runs in a container, you no longer have to worry about compatibility issues, missing libraries or DLLs, and so on.
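As a taste of what that looks like, a containerized tool typically ships as an image built from a short Dockerfile. The example below is a hypothetical sketch for a small Python tool; the file names are placeholders:

```
# Hypothetical Dockerfile for a small Python tool; names are placeholders
FROM python:3.12-slim
WORKDIR /app
# Bake the dependencies into the image so the environment never varies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENTRYPOINT ["python", "tool.py"]
```

Whoever runs the resulting image gets the exact same libraries and runtime you built it with, which is precisely what makes the compatibility fights go away.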