
Firewalls are just some stupid crap industry made up and went with. We've known since the Orange Book days that security had to be done holistically, involving every endpoint and network. Their standard for security was a strong TCB on the endpoint with trusted path (see EROS or Dresden's Nitpicker); a network card with onboard security kernel, firewall, and crypto (see GNTP + GEMSOS); connections between networks through high assurance guards (see Boeing SNS or BAE's SAGE); proxies + guard software for risky protocols such as email (see mail guards like SMG or Nexor). All of this collectively working together was what it took to enforce a fairly simple security policy (MLS). More flexible attempts happened in the capability model with KeyKOS + KeySAFE, the E programming language, the CapDesk desktop, and so on.

So, the above was the minimum that NSA et al would consider secure against adversaries on their level. Every security-critical component was carefully spec'd, the implementation mapped against the spec 1-to-1, analyzed for covert channels, pen-tested, and even generated on-site. Commercial industry, aiming at max profit and minimal time to market, just shipped stuff with security features but not assurance. Broke every rule in the field. Came up with firewalls (a knockoff of guards), AV, and so on to counter minor tactics. Of course that didn't work, as it doesn't solve the central security problem: making sure all states and flows in the system correspond to a security policy.

The best route is to put security in the endpoint along with E-like tools for distributed applications and hardware acceleration of the difficult parts. Within your trust domain, you just check data types and use that for information flow control (aka security). Outside the trust domain, you do input validation and checks before assigning types. The hardware will be like crash-safe.org or the CHERI processor in that it handles the rest. A security-aware I/O offload engine will help too. Fixing the root problem along with a unified model (capability-based, distributed) will make most security problems go away. At that point, firewalls will be about keeping out the riff raff and preventing DOS attacks.



If this observation is meaningful, shouldn't it also be the case that firewall deployments aren't meaningful to enterprise security?

Because: that seems intuitively not to be the case.

To wit: on an annual site-wide pentest of any major enterprise network (this is a project every security firm does for a couple clients a year), the moment the pentester gets "behind the firewall" (ie: code execution on any application server) is invariably game-over.

If firewalls were just some stupid crap the industry made up, shouldn't they make no difference at all? Shouldn't attackers just make a beeline for wherever the high-value information is, rather than scanning the perimeter and looking for some chink to use to get behind the firewall?

My argument would be: whether firewalls are "stupid crap" or not, they certainly do seem to matter right now.


'Game over': I think this is exactly the problem. In all the organizations I've been in, firewalls have been an excuse for negligence. 'We don't need to think about security because we are behind the firewall.'

Right now the compliance world is addicted to firewalls, to the detriment of reasonable appsec. In my fantasy world, I'd like the auditors to be telling companies 'in 5 years, you won't be allowed to firewall your business network, and if you aren't secure without the crutches, then no certification for you.' That would light a fire under management to care about software quality all over the place.


You're probably right that firewalls allow negligence elsewhere.

But if they can't secure their one firewall, what makes you think they can secure their complex network of a plethora of interdependent services running across many subdomains on a whole roomful of machines?

"Simple" is a key step to effective security, and I think the reason we've latched on to firewalls is they are often the simplest, most contained, and most standard way to reduce the attack surface of your network.
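To make "simple" concrete: a default-deny firewall policy can be expressed as a short allowlist that anyone can audit. A minimal sketch (the rule set, subnet, and ports are hypothetical, not from any product):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    dst_port: int
    proto: str  # "tcp" or "udp"

# Default-deny: only traffic matching an explicit allow rule passes.
ALLOW_RULES = [
    ("tcp", 443),  # HTTPS to the web tier
    ("tcp", 22),   # SSH, further restricted to the admin subnet below
]

ADMIN_SUBNET = "10.0.9."  # naive prefix check, for illustration only

def permit(pkt: Packet) -> bool:
    """Return True if the packet matches an allow rule; drop otherwise."""
    for proto, port in ALLOW_RULES:
        if pkt.proto == proto and pkt.dst_port == port:
            # SSH is additionally restricted to the admin subnet.
            if port == 22 and not pkt.src_ip.startswith(ADMIN_SUBNET):
                continue
            return True
    return False  # default deny

print(permit(Packet("198.51.100.7", 443, "tcp")))   # True: allowed service
print(permit(Packet("198.51.100.7", 22, "tcp")))    # False: not admin subnet
print(permit(Packet("198.51.100.7", 8080, "tcp")))  # False: no matching rule
```

The whole policy fits on a screen, which is exactly why it is attractive: the attack surface is whatever those few rules admit, nothing more.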


I think in many cases you will be right, and 'they' won't be able to secure it. This will force them to contract out those applications to someone who can. There are plenty of SaaS providers able to secure a network. Just because my incompetent I.T. guy can't properly harden a mail server doesn't mean we can't hire Rackspace or Microsoft or someone else who can. Let's incentivize competence, not hide incompetence.


Not all services are capable of "hardening", due to software quality. Not everything is written as tightly as qmail.


> In my fantasy world, I'd like the auditors to be telling companies 'in 5 years, you won't be allowed to firewall your business network, and if you aren't secure without the crutches, then no certification for you.' That would light a fire under management to care about software quality all over the place.

Your fantasy world also has auditors. What concerns me most is "self-auditing", mostly because it's a joke, partly because a lot of places don't take it seriously.


Re-read the comment and you'll see your answer. The choice of insecure endpoints, insecure protocols, insecure networking standards, and connections to an insecure internetwork of malice means that trusting security to a low-assurance filter at the internetworking layer is... a joke. Might be why having a firewall didn't reduce the odds of any of the major I.P. and data breaches I've read about.

You want network security? Use a guard [1] with additional security checks at the endpoints, working with software on the guard for protocols such as email or HTTP. Want to stop script kiddies all day long and get silently breached by the exact people that really worry you? Get a firewall: the cheap knockoff of guards, specifically designed to save the money they would've spent on real security. I hear they even come with OS's that brag about hundreds of CVE's under their belts. ;)

Oh, and you need to do the endpoint security, too. My posts on HN regularly mention prior work immune to many forms of malware by design. The DARPA-, NSF-, and EU-funded teams are cranking out one good hardware and software TCB after another with strong arguments against leaks, injection, and so on. At this point, unless the I.P. is withheld, there's no excuse for industry or FOSS not building clean-slate efforts on something like that.

[1] https://en.wikipedia.org/wiki/Guard_%28information_security%...

Note: To be clear, I'm not counting guards built on crap such as Linux. Many in the medium-to-high-assurance industry are doing the same cost-cutting crap as COTS. Sadly, they tell me the reason is "no demand for high security systems." I've heard that in the U.S. and U.K. Pre-Snowden, though. Maybe there's hope.


Hi Thomas, I recall you saying at one point that you are not a fan of static code analyzers for improving application security. Could you elaborate? "None of them found Heartbleed" might be one reason, I suppose, but it seems to me they do find a lot of more ordinary XSS, SQL injections, etc. Do you really think it's not worth using them at all?


The best way to use analysis tools is to code in a way that makes it easier for them. Old Orange Book B3/A1 systems had to be coded in a very layered, modular, and internally simple way to facilitate security analysis. Likewise, many of the analysis tools will get lost once you start coding a certain way. Each one also has its strengths and weaknesses.

So, my normal recommendation for people that can't waste time is to use tools with few to no false positives, combined with a coding style that makes their job easy. For instance, I coded in a structured programming style with relatively simple control flow, quite functional in each unit's structure (see the Cleanroom methodology), and avoided hard-to-analyze constructs. That made it easy for the tools.
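A small illustration of that style (my own example, not taken from any particular tool's documentation): the two functions below behave identically, but the second keeps control flow flat with explicit guard clauses and no mutation, which shallow static analyzers track far more easily than state threaded through nested branches:

```python
# Harder to analyze: a mutable result threaded through nested branches,
# so the analyzer must track every path that might (not) assign it.
def parse_port_v1(s):
    result = None
    if s:
        if s.isdigit():
            n = int(s)
            if n > 0:
                if n < 65536:
                    result = n
    return result

# Easier to analyze: flat guard clauses, one obvious success path,
# no mutation -- every exit condition is explicit and local.
def parse_port_v2(s: str):
    if not s or not s.isdigit():
        return None
    n = int(s)
    if not (0 < n < 65536):
        return None
    return n

print(parse_port_v1("443"), parse_port_v2("443"))      # 443 443
print(parse_port_v1("70000"), parse_port_v2("70000"))  # None None
```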

The cool thing is that tools such as Astree and SPARK can prove portions of your code immune to certain defects. Others can do this for concurrency problems. And plain old reviews of design, code, and configuration with a list of common issues can help a lot by themselves. That each of these methods has positive results for teams that use them speaks for itself. Together they can be quite powerful.


To be clear I'm not claiming that firewalls are irrelevant in the enterprise campus scenario, especially if they have DPI functions that are effective in discovering outbound control channels. Even huge corporate environments rarely have more than 10Gb/s of transit and those Palo Alto devices I talked about work fine in that scenario.

What I am saying is that hardware firewalls are not an option at scale and that Layer 3/4 protections are being pushed into the host for scale-out operators. Note that "into the host" does not necessarily mean "in the operating system". There has been great work by some operators to push these controls into the Ethernet firmware, although I'm unaware of a standards-based open way of doing such.

I'm enjoying this HN discussion, where people are disagreeing with a response to a misquote of an incorrect summary made by somebody who didn't watch the talk. :)


Stamos seemed to be making a point about the progression of, and resources being spent on, security solutions. Firewalls are deterrents, but there seems to be a general consensus that they are not feasible for the future, and that the closest approximation to a "fully secure" system comes from focusing on application security. During the video Stamos admits, and an audience member loudly agrees, that "we suck at appsec". Firewalls are stupid crap because we suck at appsec. If we didn't suck at appsec, firewalls wouldn't and shouldn't matter.


See my post here to Geer:

https://www.schneier.com/blog/archives/2014/04/dan_geer_on_h...

We just need to do what was done in the past, present, and ongoing in various circles: architect the hardware and tools in a way that makes security (esp integrity) an easy default rather than a nightmare. Then we work from there esp making that faster. I've seen Linux and FreeBSD run on such systems with little modification so I know it can be done. It's why I spread the word.


I should've added that I'm not saying they're useless: just not a solution to the root problems or even the best solution to their problem. I use them if I have nothing else or just want to reduce traffic (DDOS) on a guard. Hell, using an obscure processor architecture while removing anything in traffic IDing it got me more security than any AV or firewall. I retired that strategy after 5+ years of it working with eBay hardware lol...


   > My argument would be: whether firewalls are "stupid crap" or not, they certainly do seem to matter right now.
Which would be the hard nut of the matter. Just as security guards at a checkpoint to a military base provide some value, in at least awareness of a threat. But if we take the article at face value, then we have to believe that the role of the firewall will become greatly diminished to something more sentry-like, rather than something that aspires to be a portcullis. And even as the gateway infrastructure loses the ability to prevent an attack outright, it may still be able to raise the cost of mounting one: the network equivalent of ASLR on the stack, intentionally re-routing some parts of the network connection to avoid things like bogus source routing or confusing CBC ciphers in private protocols.


There is still a lot of software that uses network masks for authentication. When something like that can become a key component inside an enterprise, you can't really talk about enterprise security once inside the walls of the firewall. There simply isn't any.
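A sketch of that anti-pattern, assuming a hypothetical service that treats any source address inside a trusted subnet as authenticated. An outside attacker is blocked, but any compromised internal host passes with no credentials at all:

```python
import ipaddress

TRUSTED_NET = ipaddress.ip_network("10.0.0.0/8")

def is_authenticated(src_ip: str) -> bool:
    # Anti-pattern: network locality treated as identity.
    # No credentials, no keys -- just "are you behind the firewall?"
    return ipaddress.ip_address(src_ip) in TRUSTED_NET

print(is_authenticated("203.0.113.9"))  # False: external attacker blocked
print(is_authenticated("10.3.7.42"))    # True: any internal host "is" trusted
```

Once one box behind the firewall falls, every check like this one falls with it, which is the "game over" effect described upthread.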


It's because the entire experience of SSL sucks. Want to deal with some random, annoying, and recurring issues? Deploy SSL on your app and then try to figure out which arcane cert issue is causing verification to fail.


They are certainly useful. But all these users behind the firewall, frenetically clicking on every link they can find and opening any attachment from any email received, are effectively an army of trojan horses, which to me is an order of magnitude bigger problem than latency.


Pentesters always win. There isn't a network that is absolutely secure, and if you really need to, you can always spearphish the secretary.


If you opened by saying "Firewall MARKETING is just some stupid crap ...", people might hear your message better.

Yes there's huge complacency about security. But the problem is people, not firewalls.

Holistic security is important and a huge opportunity created by this mass hypnosis. There's never been a better time to raise money. Happy to discuss, contact info on my profile.


Fair enough, as that's a huge part of the problem. Yet, if firewalls are to be trusted, they need to meet these basic criteria:

1. Attention paid to firmware security and its ability to load the kernel.

2. Firewall TCB is strong in that it can prevent or contain compromises.

3. Each component is isolated with restricted interactions subject to believable security arguments, static analysis, or formal verification.

4. Every piece of every packet is inspected for foul play.

5. Covert storage and timing channel mitigation is in place.

6. Supports application-layer security for whatever it's being used for.

Can you name a single firewall that meets all these criteria? That's how guards were designed in the past, before firewalls got invented to ignore most of that. So, firewalls (in theory and practice) are technically incapable of doing their job unless the coders were nearly perfect. Then, they're marketed as doing much more than they can. So, why people demand firewalls instead of companies getting the cost of guards down is beyond me.
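To put the firewall/guard distinction in code: a toy contrast (my illustration, with a made-up message schema) between a port filter that never looks at content and a guard-style check that only releases well-formed messages conforming to a strict schema:

```python
import json

def port_filter(port: int, payload: bytes) -> bool:
    # Firewall-style: is this an allowed port? Payload never examined.
    return port in {443}

def guard_check(port: int, payload: bytes) -> bool:
    # Guard-style: parse and validate the content; release only
    # well-formed messages matching a strict (made-up) schema.
    if port not in {443}:
        return False
    try:
        msg = json.loads(payload)
    except ValueError:
        return False
    return isinstance(msg, dict) and set(msg) == {"sender", "subject"} \
        and all(isinstance(v, str) for v in msg.values())

good = b'{"sender": "alice", "subject": "hi"}'
bad = b"\x00\x01 exploit bytes riding an allowed port"

print(port_filter(443, bad))   # True: firewall waves it straight through
print(guard_check(443, bad))   # False: guard rejects malformed content
print(guard_check(443, good))  # True: schema-conforming message released
```

The real guards went much further (covert channel analysis, verified kernels), but the structural difference is the same: one checks the envelope, the other checks the letter.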

Here's an example of a real firewall that is more like a guard in practice:

http://www.sentinelsecurity.us/HYDRA/hydra.html

A nice architecture combining highly assured firewalls and SNS Server guard (15-20 years without compromise) with COTS enhancements for quite a security argument:

http://www.dtic.mil/dtic/tr/fulltext/u2/a425566.pdf

Once I've seen the real thing, esp seeing NSA pentesters achieve nothing against it, it's hard for me to make excuses for security engineers making the same mistakes for years despite being shown what works. I've sent just about every firewall vendor validation reports of what made it and why. They don't care, and that's why firewalls are some stupid crap industry trusts but shouldn't.


I have tried preaching a similar message while working for a C4I unit. I found it extremely hard to get anyone to understand what the actual point was, and even then I got mostly "but we're all COTS now" with a shrug.

That attitude, in netsec work, practically stands for abandoning sound principles and going for superficial compliance models. There is no real security architecture in place for most systems, there are no trusted paths for handling information, and the assurance level is at rock bottom. The result is scary when you put it in the context of your adversaries being hostile, active, and very well funded (typically state sponsored).

Actually, I considered elaborating on the above with examples from real life, but then I realized that stuff might be classified, so... Meh.


Appreciate the corroboration from the inside. I've suspected as much, given that even the "controlled interfaces" are usually EAL4 at best. Did you know Navy people built an EAL7 IPsec VPN? I'm sure you can immediately realize (a) how awesome that is and (b) what value it has for our infrastructure/military. Yet, it got canceled before evaluation because the brass said "no market for it." Virtually nobody in military or defense was interested in setting up highly secure VPN's.

Not a great state of affairs.


I was shocked to read an EAL4 summary for a product that I know to be extremely hard to secure.

But of course, if you follow all the steps that only the vendor knows, it can be EAL4. Just don't miss one of the 100s of settings... :/


Haha I feel you on that. It's very important for people to understand the basic way C.C. works: a security target or protection profile with the security features needed (can't leave anything out!); an EAL that shows they worked hard (or didn't) to implement them correctly. I'd explain what EAL4 means but Shapiro did a much better job below [1]. That most of the market has insufficient requirements with EAL4 or lower assurance shows what situation we're in. Hope you at least enjoyed the article as I haven't been able to do much about the market so far. ;)

[1] https://web.archive.org/web/20040214043848/http://eros.cs.jh...


EAL criteria are so operationally restrictive that useful work is effectively prevented from happening. No one needs worse security, we need better security.


A number of us have conformed to higher ones on a budget with small teams. The highest ones are indeed a ton of work to accomplish, yet there have been dozens of projects and several products with such correctness proofs. They figured by the 80's they needed their certified TCB to be re-usable in many situations to reduce the issue you mentioned. Firewalls, storage, communications, databases, and so on, all done with security dependent on the same component. Modern work like SAFE (crash-safe.org) takes this closer to the limit by being able to enforce many policies with the same mechanism.

So, your claim is understandable but incorrect. Useful work repeatedly got done at higher EAL's. It continues to get done. The real problem is (a) bad choice of mechanism for the TCB and (b) a bad evaluation process. Most of us skipped high-EAL evaluations in favor of private evaluations by people working with us throughout the project. Saves so much time and money while providing even more peer review.

They really need to improve the evaluation process itself so it's not so cumbersome and update their guidance on best mechanisms for policy enforcement. Probably sponsor via funding some of them like they did in the old days. Fortunately, DARPA, NSF, and EU are doing this for many teams so we can just leverage what they create.


> AV

How does Anti-Virus play into this as a counter to "minor tactics?" Are you expecting all end-users to personally verify all of their software? No matter how secure the network connection is, end-users need software to use their computers to do work/have fun/etc. Unless you have a completely closed system of 100% trusted software. If you're part of an organization like the NSA, that might be doable, but home users don't really have this luxury unless you advocate for a walled-garden type system.


re antivirus. It doesn't work: they dodge it constantly. They can also use it to improve their odds of beating it by tuning the malware against it. Need I say more about why it's barely a defense?

Back in 1961, Burroughs designed a mainframe [1] that anticipated all these problems. They tagged their memory with bits to protect pointers or differentiate code vs data. That's two bits per word of data with almost no performance overhead if it's all you use. That system was immune to almost every attack modern malware uses for code injection. It was very successful for a while but the market eventually chose against it in favor of IBM et al's systems that did dumb, fast, data crunching with hardly any security. Market as a whole went that way.
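A toy model of the tagging idea (a deliberate simplification, not the actual Burroughs encoding): every word carries a tag, only a privileged loader can mark words as code, and "execute" refuses anything not tagged as code, so injected data can never run:

```python
DATA, CODE = 0, 1  # one tag bit per word in this toy model

class TaggedMemory:
    def __init__(self):
        self.words = {}  # addr -> (tag, value)

    def store_data(self, addr, value):
        # Normal stores can only ever create DATA-tagged words.
        self.words[addr] = (DATA, value)

    def load_code(self, addr, value):
        # Only a privileged loader may create CODE-tagged words.
        self.words[addr] = (CODE, value)

    def execute(self, addr):
        tag, value = self.words[addr]
        if tag != CODE:
            # The "hardware" refuses to treat data as instructions.
            raise PermissionError("attempt to execute a data word")
        return f"ran {value}"

mem = TaggedMemory()
mem.load_code(0x100, "ADD r1, r2")
mem.store_data(0x200, "attacker-supplied bytes")  # e.g. network input

print(mem.execute(0x100))       # legitimate code runs
try:
    mem.execute(0x200)          # injected data can never execute
except PermissionError as e:
    print("blocked:", e)
```

Whatever a buffer overflow writes arrives through `store_data`, so it is data-tagged and unexecutable by construction; no blacklist or scanner is involved.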

So, the problem is code can be injected, the isolation mechanisms don't work, and the toolsets are insecure by design. Fix these to make security the easy default, with attackers working in a straitjacket. The CHERI [2] team and others are doing exactly that. Investments in such systems will increase their functionality. I've seen architectures that even do it with 2 bits like Burroughs did, albeit with a different model. It's compatible with the Windows architecture. What's lacking isn't technology or knowhow: it's the willingness of industry and FOSS to adopt methods that work instead of mainstream methods that don't. Always been the problem. Putting backward compatibility and no rewrites ahead of everything else is the other huge contributor to insecurity.

[1] http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

[2] http://www.cl.cam.ac.uk/research/security/ctsrd/cheri/


You may be aware already, but the LowRISC people are planning on putting tagged memory into their chip. Yay!


I saw that! I think they were also calling their cores minions. That's great lol. I forgot to send them a list of all the tagging schemes I know, esp the patent-immune ones. Might help them out.


I would argue a publicly auditable software stack would be a strong alternative to the self audited stack. I run a completely open source OS and run all non open software on a machine I don't trust.

If someone can't have that, then surely it would at least be good to have a system that doesn't autorun things automatically, and stops common attacks like bootloader viruses, email viruses, etc...

I think AV is meant to deal with "minor tactics" like stopping things from autorunning or blocking common kinds of self replicating code and perhaps stopping known bad things.

That blacklist approach most AV takes can never guarantee security, but maybe some of the time it helps.
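A sketch of why the blacklist model guarantees so little: detection by known-sample hash, where a one-byte change to the payload produces a new hash and sails through (hashes and payloads here are contrived):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

malware = b"\x90\x90\xcc original payload"
KNOWN_BAD = {sha256(malware)}  # "signature database" of known samples

def av_scan(sample: bytes) -> bool:
    """Blacklist-style AV: flag only samples whose hash is already known."""
    return sha256(sample) in KNOWN_BAD

variant = malware + b"\x00"    # trivially repacked: same behavior, new hash

print(av_scan(malware))        # True: the known sample is caught
print(av_scan(variant))        # False: one appended byte evades detection
```

Real AV engines add heuristics and behavioral checks, but the asymmetry stands: the defender must have seen the sample (or something close), while the attacker only has to produce one it hasn't.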


I would argue that almost all FOSS is insecure and many (OpenSSL) have had easy to spot vulnerabilities for years. The important part of closed or open software assurance is review. People also often focus on the open or closed part as if it's a dichotomy rather than a spectrum. To help, I wrote an essay illustrating the security levels offered at various points in spectrum of open vs closed source here:

https://www.schneier.com/blog/archives/2014/05/friday_squid_...

Here's what it takes to assure systems at every layer (skip to "the essence of security" para):

https://www.schneier.com/blog/archives/2013/01/essay_on_fbi-...

That's what security takes against even black hats these days. It can be simplified with a strong TCB, better hardware, and better languages + toolchains. The problem is that only a tiny few projects in FOSS are doing that, and not many more in commercial. Whitelisting, stack canaries, AV, firewalls... this is all just added complexity around the root problem that hackers bypass regularly. It isn't security except against the incompetent.

Getting the real thing might require throwing away a lot of code or apps. Or virtualizing it on secure architectures with crazy good interface protections. That's why the market as a whole won't do it. Good news is there are small players making such things: e.g. Turaya Desktop, GenodeOS, CheriBSD, Secure64 SourceT. We'll get more over time, but it would help if waves of FOSS coders invested in stuff that provably works instead of what holds them back. The GenodeOS, L4, and MirageOS communities are the only ones I know doing it at the endpoint these days.


I would argue that almost all software is insecure. Proprietary is not any better.


I'd agree with that argument for the general case. Yet, there have been proprietary systems that resisted attacks in their attack model (with source code!) for years, and all were designed with established methods for increasing assurance. There are dozens done that way, esp in the defense and smartcard markets. There are a few OSS projects with either good design or code review (medium assurance) that were done by pros and open-sourced. As far as the actual FOSS development model goes, there are zero high assurance security offerings done that way. That's despite decades of examples with details published in journals, on the web, etc to draw on. So, FOSS has never done high security, NSA pentesters did give up on a few proprietary offerings, and therefore FOSS is inferior to proprietary in high security because only one has achieved it. Matter of fact, the open-source, commercial MCP OS of Burroughs was immune to pointer manipulation and code injection in 1961 via two bits of tag. FOSS systems haven't equaled its security in five decades.

They need to catch up really quick because they could be the best thing for high assurance. The mere fact that there's tons of labor, they're free, and not motivated by commercial success avoids the main obstacles to high-assurance commercial development: the processes are labor-intensive, difficult to integrate with shoddy legacy stuff, and hard to sell. If FOSS ever groks it, they could run circles around the other projects and products in terms of assurance. The closest thing is the OpenBSD community, but they use low-assurance methods that lead to the many bugs they fix. Their dedication and numbers, combined with clean-slate architecture, coding, and tools, would produce a thing of beauty (and security).

And, yet, the wait for FOSS high assurance continues. If you know anyone wanting to try, Wheeler has a page full of FOSS tools for them to use:

http://www.dwheeler.com/essays/high-assurance-floss.html


> have had easy to spot vulnerabilities for years

And when they are found they are fixed and the community is always outraged.

When a closed source project has a bug in it, sometimes the knowledge of that bug is kept hidden. Maybe most of the time it is handled responsibly, but without oversight how can an outsider tell?


You mean for those few FOSS projects that both get plenty of code review and fix those bugs? Sure, those probably are better off than average proprietary. Much worse than the proprietary niche that's quality-focused. Yet the community isn't outraged enough to use low-defect processes to prevent the next set. Further, that both FOSS and proprietary focus on getting features out quickly with little review ensures plenty of bugs in both.

The trick to either is that the commitment to quality/security is real, each commit is reviewed before acceptance, and independent verification is possible. With proprietary, the confirmation can come from a trusted third party, several third parties (mutually suspicious), or source provided to customers (but still paid).


In the long run, the source being publicly available means the bug will be found.

> Much worse than proprietary niche

I disagree, but even if I didn't how can the average purchaser of software discern quality software from junk. If they had the source, they could pay an expert.

I agree a commitment to quality, and therefore security, is important. But I feel that if all other things are equal, the open source software will always have an advantage over the closed source software.


Reliability, determinism, and the track record of security vulnerabilities are a good start for the purchaser. For the reviewer, we already know what methods [1] historically improved the assurance of software. With every method they added, they found more bugs. That most proprietary and FOSS software uses little rigor is why it's insecure. Only a few proprietary or academic offerings, not community driven, had the rigor for the B3/A1/EAL6/EAL7 process. I give examples here [2] for those that want to see the difference in software lifecycle.

Can you name one FOSS product designed like that? Where every state, both success and failure, is known via design along with covert channels and source-to-object code correspondence? I've never seen it. Although, it has happened for a number of proprietary products whose claims were evaluated by NSA & other professional reviewers for years straight without serious flaws found. So, for high security, the "proprietary niche" that does that has beaten FOSS by far and mainstream FOSS is comparable to mainstream proprietary in quality (i.e. priorities of provider matters most).

FOSS can potentially outdo proprietary in highly assured systems given they have free labor. In practice, they do whatever they feel like doing and so far that's not using the best software/systems engineering practices available. So, I don't trust FOSS any more than proprietary except in one area: less risk of obvious subversion if I verified transport of the source and compiled it myself. Usually plenty of vulnerabilities anyway, though. Would love to see more high assurance efforts in FOSS.

[1] http://web.archive.org/web/20130718103347/http://cygnacom.co...

[2] https://www.schneier.com/blog/archives/2014/04/friday_squid_...


I would argue a publicly auditable software stack would be a strong alternative to the self audited stack. I run a completely open source OS and run all non open software on a machine I don't trust.

One word: Heartbleed.


It sat out there for a long time and was fixed. All parties involved were notified. There was never the opportunity for anything else to happen. This is the nature of open source, no room for deception in the long run.

If the same kind of bug (major impact, wide distribution, and a long exposed history) existed inside Microsoft, Apple, or Oracle code, no reasonable person would think that the company responsible would let that out with details on the impact level. The hit to stock prices would be enormous. They would silently issue a patch and hope no one noticed, and likely no one would, because there is no oversight. There is room for deception built in, even if it is not intended as such.

I am aware that companies do patch and do frequently notify, but they rarely let all the information out for public consumption. The larger the issue, the more they downplay it. For how many years did the buffer overflow in the IE6 address bar, or the Windows shatter privilege escalation attack, remain exploitable in Windows?

Shatter was first described in Windows XP before 2002 and was still present when Windows XP reached end of life. The people affected never had any say, and no one outside of Microsoft ever had any opportunity to fix it.


Bugs like Heartbleed are far more common in closed crypto software than in open.

One word: wrong.


Could you expand on your last para please - it seems to promise there is a solution to software security already available ...

E-language seems a bit outdated from the intro I can find. What's an I/O offload engine? What do you mean by a unified model (capability-based / distributed implies E-language again?)

Is this using strong data types to base security capabilities on? And how does hardware fit in here?

I ask out of interest, as your comments here and on Schneier imply a lot of knowledge under the surface, and I am running to catch up.


I covered a lot of ground in this counterpoint to Dan Geer on why our security sucks:

https://www.schneier.com/blog/archives/2014/04/dan_geer_on_h...

Read it, its two links, and whatever they link to for plenty of inspiration. If you want, I'll email you a list of my designs and essays on there. I use his blog to reach as many people as possible. I can't make money on high assurance without selling out to the enemy so I just posted my stuff online for free anyway. The discussions and peer review over there were grade A with a few high assurance guys as regulars. This site is good, too. I'll get you those links if you want.


That would be kind - please. I shall delve, kids holiday permitting !


> I'll email you a list of my designs and essays on there

Yes, please.


"Firewalls are just some stupid crap industry made up and went with." -- I can't even begin to unravel how short-sighted that comment actually is.

I'm not sure you really understand the state of the firewall industry at this point in time, if I'm allowed to be blunt. While I do think that traditional firewalling (L3/L4) has lost its overall efficacy, there are solutions on the market that address application control, identity, A/V, IPS, spyware, and malware in a single solution (not UTM) and that are stream based (single pass - again not UTM).

Firewalls at the enterprise level are FULLY required for business to operate in a relatively secure manner today. Controlling the applications' ingress and egress is not an option - it's a requirement. Greg (Etherealmind) has been very well known to be, well, a bit opportunistic in his early assessments. He mentions NSX in the East/West flows in SDN environments; however, what he fails to mention is that many customers implementing NSX have also been implementing purpose-built firewalls in NSX via the exposed placement of security services tied to the NSX and NetX APIs (http://www.networkworld.com/article/2169448/virtualization/v...).

Working for a security company in this space, let me refute the majority of his numbered points:

1) The majority of the customer verticals I deal with buying 10Gb+ firewalls buy A LOT of them. These are environments doing millions to, literally, billions of dollars of revenue per hour. A completely licensed, supported firewall rated at, say, 20Gb can be had for under $300k and maintained annually for less.

2) 6.7 nanoseconds is a myth - unless you're in financials and the HPC space. Between the conga lines of security products today, ill-conceived network architectures, and a thousand other things, a 6.7-nanosecond expectation is a unicorn. We typically get to microsecond levels, and customers (even financials) are often fine with those numbers in critical environments.

3) Yes you can. There are a lot of customers using NSX and OpenStack with fully supported, fully modern security solutions in production today. I've been involved in said projects - the best part about those environments is that deployment is actually easier, because it's software, and more and more platforms have fully exposed APIs and are built for automation and abstraction.

4) BS. Application security? For real? Most of the Global 2000 are NOT software companies. That means software development is not their forte. Which means that most will continue to have SQLi (and other trivial) problems well into the next decade.

5) Let's just say for a minute that the perimeter is collapsed - which I hope, at this point, it is for the majority of organizations who take network security seriously. That doesn't mean overlays can't have security insertion points, or that there can't be microsegmentation - both already exist today.

6, 7 and 8... make the least sense of any of the arguments, because they're so narrowly pointed and the least relevant across scenarios.

Sure - fixing the endpoint and the software involved is an awesome approach to security. But traditional firewalls never fixed that in the first place, all they controlled was access. However today's firewalls go well beyond that and provide much more granular application and user control as well as threat services on top to boot.

But I'm sorry - if firewalls provided no business value there would not be companies building and selling 10 & 100Gb firewalls for hundreds of thousands to millions of dollars to protect, segment, identify and inspect - well beyond what this is lumping all "firewalls" into.


> Firewalls at the enterprise level are FULLY required for business to operate in a relatively secure manner today.

They're also completely unsustainable, because "firewall traversal" will always be a thing. The result is a tit-for-tat arms race between firewalls and applications, with application protocols being encapsulated deeper and deeper, and firewalls trying to inspect packets deeper and deeper. The overall system complexity skyrockets, and we all know that complexity is antithetical to security.

I predict that within the next few years, we'll see attackers successfully targeting vulnerabilities in firewalls and antivirus software directly. Add BYOD to that and the entire mess will collapse in a decade or two---probably much sooner.

Firewalls are a temporary workaround for poor application security, nothing more. They are pollution---they hurt everyone by turning connectivity into a hard problem. Once we have good appsec (which we already know how to do; we just haven't done it), the cost of firewalls will vastly outweigh their benefits, and they'll quickly disappear.


Appsec does not solve netsec and vice versa. A lot of these comments are being posted by people who may know appsec rather well, but know very little about netsec. Firewall technology has come a long way - again, if you think that it's simply L3/L4 filtering, you're completely off base.

People have been targeting firewalls and A/V for years already - this is nothing new, nor is it about to change as stated. However, these systems are much easier to secure, given their generally small footprint and protected management access.

"they hurt everyone by turning connectivity into a hard problem" - again, sure - circa-90s technology. I'm not sure you're aware of the positive enforcement model that some vendors take today, focusing on allowing the applications that should be used and blocking those that shouldn't.

Firewalls are not temporary, they're like a lock and key on your house - they don't solve all security problems, but they're a key component within the system as a whole.

If you'd like to take a friendly wager I'll hold you to your last statement, because they're going to be around at least another two decades.


Enterprises already run very heterogeneous stacks/software and more often than not a large portion of that is proprietary or outside of their direct control in other ways. I don't see why any enterprise would take the risk of not having additional layers of security, layers that they can actually control.

I only see that going away if all software is reliably mechanically auditable for security.

Edit: actually, thinking of it, there are still many firewall features that one wouldn't want to reimplement app-level each time, like rate limiting, network access logging, or even basic routing - the list goes on. I'm not sure what definition of "firewall" you all are thinking about. To me it's any hardware or software appliance that processes incoming connections.
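
For illustration, a minimal token-bucket sketch of the kind of rate limiting a middlebox handles once so every app behind it doesn't have to; the class name and parameters are illustrative, not from any particular product:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: refills at `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start full, so bursts are allowed immediately
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

# 10 requests/sec sustained, bursts of 5: a rapid burst of 8 gets 5 through.
bucket = TokenBucket(rate=10, capacity=5)
results = [bucket.allow() for _ in range(8)]
print(results)  # first 5 allowed, last 3 rejected
```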


> there's still many firewall features that one wouldn't want to reimplement app-level each time like rate limiting, network access logging

One of the major things that was learned in the NCP->TCP/IP transition was that it's better to put complex logic in the endpoints, rather than in the network.

> basic routing

Routing isn't what a "firewall" does. Routing is what a "router" does.

> I'm not sure what definition of "firewall" you all are thinking about.

I'm talking about packet filtering that looks at more than the source & destination addresses, stateful packet filtering, "deep packet inspection", etc., especially when they're set up as default-deny.

Application developers shouldn't have to worry that their packets will succeed or fail to be delivered depending on their content.


In order to achieve 100Gbps at line rate, you have 6.7ns per frame. Not a myth, just simple arithmetic.
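
The arithmetic, assuming minimum-size 64-byte Ethernet frames plus the 20 bytes of on-wire overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap):

```python
# Per-frame time budget at 100 Gbps line rate with minimum-size frames.
LINE_RATE_BPS = 100e9
FRAME_BYTES = 64 + 20            # minimum frame + preamble/SFD/inter-frame gap

budget_ns = FRAME_BYTES * 8 / LINE_RATE_BPS * 1e9
print(f"{budget_ns:.2f} ns per frame")  # 6.72 ns per frame
```

So the 6.7ns figure is the worst case; larger average frame sizes relax the budget proportionally.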


That might be true if the system is monolithic ingress/egress, but it isn't true for any chassis-based firewall rated at, or above, 100Gb today (and there are quite a few).


Be blunt: I am, and it's true that there are huge chunks of the industry I rarely interact with. Might have missed plenty. I particularly appreciate you bringing the NSX security framework to my attention. However, most of what you're mentioning are features that firewalls support, where my post said they needed features + assurance (aka "guards," or firewalls with security inside). Most of the firewalls, if evaluated at all, stay at EAL4 or lower: certified to stop "casual or inadvertent attempts to breach security." They don't even get pen-tested by pros or a source review. Any pro taking time examining a unit will probably find a 0-day or bypass. Grimes's reviews showed many even had unknown services running, like FTP, without telling users. They're also prone to subversion, as only EAL6/7 reduces that, and the Snowden leaks confirmed it for many companies.

So, my comment and yours actually agree that network defense is necessary. I just added this in my original comment: (a) real endpoint security, (b) app/protocol-layer security, (c) the right features in the firewall, and (d) rigorous assurance and evaluation for each. These, combined, did resist strong attackers in the past and present. The Boeing SNS Server, for example, hasn't been compromised in 15 years despite multiple pen-tests by the NSA and private labs. That's high assurance, and the minimum rigor that stops nation states. Commercial firewalls are largely not designed like that. So, they have the features but not assurance of implementation or self-protection. Nor are they integrated enough with endpoints for enforcement to be split properly between the two. See below for an example of a stronger configuration:

http://www.dtic.mil/dtic/tr/fulltext/u2/a425566.pdf

Back to your peer review of his list, which I appreciate given you're an insider. No 1 I've seen myself and agree. No 2, yes, lol. No 3 I learned from you and will repeat to anyone else not aware of these things. No 4 is THE DUMBEST THING HE SAID, has never happened, and won't happen without the fundamental changes I preach about here. Enough said. No 5: if my perimeter collapses, they're seeing (a) encrypted traffic that tells them nothing or (b) plain traffic whose nodes resist their attacks. Perimeter to me is minor DLP, DOS prevention, and IDS mainly. No 6, 7, and 8: alright, that's 3 in his favor.

Your last point is the weakest one: companies regularly spend millions on inferior or non-solutions to problems because they don't know better. How much the IT industry spends on something tells us nothing about its security or quality. If you were right, then Windows, Oracle, SAP, and Cisco switches would be the highest quality and most secure things out there. (Checks the CVEs and news reports.) Nevermind...



