A Docker Abstraction That Handles Container Security (nanobox.io)
61 points by technologyvault on July 25, 2017 | 19 comments


I decided to try nanobox last Friday, after some trouble firing up a Vagrant box on my laptop. One instance worked, another one didn't, nothing new...

Unfortunately, I realized that to download nanobox I have to register and log in, and I really don't understand why. I expected to be able to download a binary, write a configuration file, and build my service, which I'll never run on somebody else's cloud.

So this is not equivalent to Vagrant or Docker, which are unregistered downloads or even apt-gets. It's more like running a part of AWS locally in development, but I don't want any lock-in for this project.

I went back to Vagrant. It turned out that a halt of the failed box followed by an up fixed the problem. I still don't feel Vagrant is completely reliable or reproducible, but I'll write my docker-compose and Dockerfiles if I want to use something else.

I'd love to hear from nanobox about the reasons for the required registration. Not having to support people like me who won't buy their service would be perfectly fine. I wonder if there is some technical reason that applies even to the basic scenario of firing up a service locally.


It might be worth investigating Kubernetes network policies [0] and the CIS benchmark [1] for a similar solution.

0 - https://kubernetes.io/docs/concepts/services-networking/netw...

1 - https://www.cisecurity.org/benchmark/kubernetes/
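For a sense of what [0] looks like in practice, here is a minimal NetworkPolicy sketch in the spirit of the thread's "ingress-only webserver" example; the namespace, labels, and port are illustrative, not from any real deployment:

```yaml
# Hypothetical policy: pods labeled app=web may receive traffic on TCP/80,
# and all of their egress is denied. Names and labels are made up.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-ingress-only
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - ports:
        - protocol: TCP
          port: 80
  egress: []   # no egress rules listed, so all outbound traffic is denied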


Those solutions work well if you have solid devops skills. Nanobox is built more to empower developers who want to focus on just coding and not deal with infrastructure.


From watching the videos on their homepage, it looks to me like the nanobox CLI is a bunch of wrappers around Docker.

The cloud product sounds like a custom Docker container orchestrator. Worker nodes run on your cloud provider but management is tied to a control panel on their website. They recommend using nanobox over a PaaS in their video, but I fail to see how this is anything other than a PaaS.


Brings to mind Joyent Triton (open source), which exposes the Docker API abstraction at the availability-zone (datacenter) level, built on (Solaris) Zones, which also benefit from a Linux kernel API (the SmartOS LX brand).


It feels like the de facto network policy design methodology, which I have yet to see implemented in open source, is one in which CI/CD test processes observe network utilization in test environments and automatically implement restrictions for deployed instances.

For example, an ingress-only static content webserver would not require any outbound internet access.

The same approach could and should be used for other observable and manageable layers (filesystem access, syscalls, language interpreter-specific function call whitelisting, etc.).

I am waiting for a security-focused CI/CD tool to own this space. Even a light touch implementation would surely improve greatly on the status quo.
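The core of the idea above can be sketched in a few lines of Python. Everything here is hypothetical: the observed-flow format and the iptables-style output are assumptions for illustration, not any existing tool's behavior:

```python
# Sketch: derive an egress allow-list from traffic observed during CI test
# runs, then deny everything else. Flow format and rule syntax are made up.

def rules_from_observed_flows(flows):
    """flows: iterable of (proto, dst_port) tuples seen in the test env.
    Returns deduplicated ACCEPT rules plus a trailing default DROP."""
    allowed = sorted(set(flows))
    rules = [
        f"-A OUTPUT -p {proto} --dport {port} -j ACCEPT"
        for proto, port in allowed
    ]
    rules.append("-A OUTPUT -j DROP")  # deny anything tests never exercised
    return rules

# Example: a service whose tests only ever resolved DNS and called an HTTPS API
observed = [("udp", 53), ("udp", 53), ("tcp", 443)]
for rule in rules_from_observed_flows(observed):
    print(rule)
```

The fragility the replies point out lives exactly in that final DROP line: anything the test suite happened not to exercise is silently blocked in production.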


Sounds like this kind of setup would autogenerate subtle firewall misconfigurations that result in mysterious failure modes. Like blocking ICMP which breaks TCP path MTU discovery. Or blocking outbound 53/tcp.


Blocking outbound 53/* is often great. Many services don't need DNS (which is usually UDP, not TCP), and it is known to be a very frequent covert channel for attackers. AFAIK tcp/53 is most frequently used for zone transfers... not something you'd want to encourage on most production servers!

Restricted ICMP (certain message types only) could be a default permit. If you have an internal service running over known virtual infrastructure to an external-facing proxy, path MTU discovery may be unnecessary anyway.

It doesn't have to be draconian, just better than nothing.


Those were just some examples - I was trying to point out that this method will generally give you subtly broken firewall rules.

(Re DNS: the DNS protocol uses both TCP and UDP interchangeably; it switches to TCP when the message size crosses a threshold, and many kinds of DNS messages can be over the limit. See e.g. https://github.com/moby/moby/issues/24344)
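That fallback is signalled in the DNS header itself: a server that can't fit a reply into a UDP message sets the TC (truncated) bit, and the resolver retries over TCP. A small sketch of reading that flag, with the header layout per RFC 1035 and fabricated sample bytes:

```python
import struct

def dns_reply_truncated(message: bytes) -> bool:
    """True if the DNS header's TC bit is set, signalling that the
    client should retry the query over TCP (RFC 1035, section 4.1.1)."""
    _, flags = struct.unpack("!HH", message[:4])  # ID word, then flags word
    return bool(flags & 0x0200)  # TC is bit 9 of the 16-bit flags field

# Fabricated 12-byte headers: one truncated reply, one complete reply
truncated = struct.pack("!HHHHHH", 0x1234, 0x8380, 1, 0, 0, 0)
complete = struct.pack("!HHHHHH", 0x1234, 0x8180, 1, 0, 0, 0)
print(dns_reply_truncated(truncated), dns_reply_truncated(complete))
```

A firewall that only allows udp/53 makes the retry impossible, so oversized lookups fail in ways that look nothing like a DNS problem.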


You don't need to derive rules rigidly from test results alone; you can also embed received wisdom, defaults, and recommendations.

ICMP is a well-known gotcha and the only such protocol in its class.

I've personally never seen a fall-back-to-TCP requirement in the wild, but you could easily say "if DNS, then enable both protocols" as a general rule, and add <received wisdom>.

These are easy to add as policies and, if documented, will in many cases result in better rules than manually configured systems. Everyone's happy.
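The "if DNS, then enable both protocols" idea can be expressed as a small post-processing pass over observed flows. The rule set and names here are illustrative only, not from any real tool:

```python
# Sketch: widen test-observed flows with "received wisdom" defaults,
# e.g. DNS seen on either transport implies allowing both udp/53 and
# tcp/53, plus the ICMP types that path MTU discovery depends on.

ICMP_DEFAULTS = {("icmp", "fragmentation-needed"), ("icmp", "echo-reply")}

def apply_received_wisdom(flows):
    """flows: set of (proto, port) tuples observed in test runs.
    Returns the set widened by built-in policy knowledge."""
    widened = set(flows) | ICMP_DEFAULTS
    if ("udp", 53) in flows or ("tcp", 53) in flows:
        widened |= {("udp", 53), ("tcp", 53)}  # DNS may fall back to TCP
    return widened

observed = {("udp", 53), ("tcp", 443)}
print(sorted(apply_received_wisdom(observed)))
```

The observed set stays the source of truth; the wisdom pass only ever widens it, so a mistake in the defaults can't block traffic the tests proved necessary.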


Your comment reminded me of this talk https://youtu.be/BuFTHOgsgAY, especially the conclusion that something that relies on access patterns can't be trusted, because it's almost impossible to validate all code paths.


Perfection is not required.


Has there been a security audit yet?


There hasn't been a security audit yet. Nanobox is still pretty new. We released publicly in February.


How is this different from heroku?


The main difference is that the Nanobox PaaS is portable, making it what's been termed a "micro-PaaS". You can host with whatever cloud provider you want (AWS, DigitalOcean, Linode, Google Cloud) or set it up on-premises.

Businesses who want to (or have to for compliance reasons) use a different host (e.g. Vultr in Europe or Alibaba Cloud in Asia) can simply create their own adapters and deploy away: https://docs.nanobox.io/providers/create/



Well, to begin with, Heroku has a bare minimum of interactive functionality, and this is a containerized VM with a fancy cPanel successor. How are they similar?


Does Nanobox need root privileges on the host? If not, how does it provide networking?



