I still don't know how to give a Docker container an IP address or a "physical" network card on my network.
I was able to do this very easily with Solaris zones, and even BSD jails, but every installation of Docker that is in any way integrated into packages seems unable to do this.
Perhaps I'm simply not using the right Google search terms.
I've found pipework (https://github.com/jpetazzo/pipework) still to be the easiest way to manage networking in containers when you know what you want. Start up your container with --net=none and then something like:
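For example (the interface name, image name, and addresses here are illustrative, not prescriptive):

```shell
# Start the container with no network stack attached
CID=$(docker run -d --net=none myimage)

# Have pipework attach the container to the physical NIC eth0 with a
# static address and default gateway (pipework's <ip>/<prefix>@<gateway> form)
sudo pipework eth0 "$CID" 192.168.1.50/24@192.168.1.1
```

The container then has a first-class address on the LAN, with no iptables NAT rules involved.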
Docker 1.9's enhanced networking functionality is supposed to make this easier, although I've had a hard time understanding how to use it on my preferred RHEL/CentOS platform (the doco I've read either glosses over details or assumes you use docker-machine & swarm, neither of which I want to use in a simple deployment).
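For what it's worth, the clustered key-value store is only required for the multi-host overlay driver; on a single host you can create a user-defined bridge network without it. A minimal sketch (network name and subnet are examples):

```shell
# Create a user-defined bridge network with an explicit subnet
docker network create -d bridge --subnet 172.25.0.0/16 mynet

# Attach a container to it; the network is NATed like docker0,
# but gets its own bridge and subnet
docker run -d --net=mynet myimage

# Show the address the container was assigned
docker network inspect mynet
```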
Weave is supposed to make this easy too (and offers some simplicity over Docker 1.9's requirement for a clustered key-value store), but I got frustrated by the iptables NAT rule it implicitly adds for newly created networks, and I haven't worked out how to stop that.
Is your intention to expose the ports of a Docker container? If so, search for how to "expose" ports; the docker run options for this are -p and -P.
Or to create a subnet with a custom IP range for the Docker daemon's bridge network? Use the -b option to the Docker daemon.
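A quick sketch of both options (port numbers, image name, and bridge name are illustrative):

```shell
# Publish a single container port on the host (host:container)
docker run -d -p 2525:25 my-smtp-image

# Publish all EXPOSEd ports on random high host ports
docker run -d -P my-smtp-image

# Or point the daemon at a pre-existing host bridge instead of docker0
docker daemon -b br0
```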
No, that's not my intention; my intention is to avoid dealing with iptables mangling packets and adding netfilter tags to everything.
Not to mention port collisions with services that must run on predefined ports (think SMTP, or pesky applications that keep redirecting you back to port 80).
I'm looking to expose an IP, similar to a bridged/open network in KVM.
I've had the same struggle. I'm moving my network to a VXLAN overlay for VMs, so I ended up building a VXLAN driver for Docker networking that lets me do the same thing for containers. That project is here: https://github.com/clinta/docker-vxlan-plugin
Docker seems really intent on NATing containers behind the host, which IMO is not acceptable from a security perspective when I want to firewall outbound access based on a container's role.
Kubernetes is another option; it gives each pod a unique IP on the same network as the host. Not as flexible as a VXLAN approach, where containers can be micro-segmented into specific networks, but more like the BSD jails you're used to.
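A libnetwork plugin like the VXLAN driver above is consumed through `docker network create`; as a sketch only, since I haven't checked the plugin's actual options (the `vxlan` driver name and the `vxlanid` option key below are assumptions, not taken from the plugin's docs):

```shell
# Create a network backed by the (assumed) vxlan driver, with a
# hypothetical driver option selecting the VNI
docker network create -d vxlan -o vxlanid=42 --subnet 10.10.42.0/24 vx42

# Containers joined to this network get addresses in the overlay
# subnet directly, with no NAT behind the host
docker run -d --net=vx42 myimage
```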
I haven't tried it yet with Docker, but I'm starting a networking service (pretty much SoftEther as a service) that might work in your case.
The idea is that your Docker containers connect to a central server (443/TCP outgoing traffic) and form a virtual private network among themselves through this "hub", so they have full access to each other. It's a layer 2 network, and in my offering I run DHCP by default to simplify things (100.64.0.0/24... naughty, I know :)). The communications are encrypted, so effectively you've got a sort of virtual private network between your containers.
As I said, I haven't tried it with Docker yet, but it's worth a shot. My service simplifies the process of getting up and running.
Edit to clarify: you can deploy each of the above images on different hosts located at different providers. There's no need for the hosts to have any visibility of each other at all.