If decision makers want to improve security, there are readily available levers they can pull: give a shit about code quality, put five minutes of thought into the authorization decisions inside their applications, escape strings, take advantage of memory safety, upgrade unpatched legacy garbage, stop using and creating protocols that are trusting by default, etc. They don't do those things. They buy bolt-on antivirus suites and magic Cisco gateway boxes, and when a 12-year-old who's heard of SQL injection comes along they throw up their hands and go "APT nation-states, what could we have done?" They continue to believe in perimeter-based security, where network drops in unlocked conference rooms and hallways are inside the perimeter. They continue to laugh off email encryption and signing.
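And the SQL injection lever in particular costs almost nothing to pull. Here's a minimal sketch of the difference (hypothetical table and data, Python's built-in sqlite3 purely for illustration):

    import sqlite3

    # Hypothetical table purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, ssn TEXT)")
    conn.execute("INSERT INTO users (name, ssn) VALUES ('alice', '123-45-6789')")

    user_supplied = "nobody' OR '1'='1"  # the 12-year-old's payload

    # The lazy habit: string concatenation lets the input rewrite the query.
    vulnerable = "SELECT name, ssn FROM users WHERE name = '" + user_supplied + "'"
    print(conn.execute(vulnerable).fetchall())             # dumps every row

    # The boring fix: a parameterized query treats the input as data, not SQL.
    safe = "SELECT name, ssn FROM users WHERE name = ?"
    print(conn.execute(safe, (user_supplied,)).fetchall())  # returns nothing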
We aren't seeing sophisticated attacks, by and large. We're just seeing someone finally bothering to attack all the crap that was designed on the assumption of "who would ever want to attack this?", or on the assumption that pre-existing viruses with published signatures are the only relevant threats.
If you are not seeing sophisticated attacks, perhaps your detection systems are simply not good enough!
I recently found an unknown 64MB filesystem on a USB stick; it was buried in the middle of the stick's exFAT filesystem.
Considering the core of TinyOS is only 400KB before add-ons, that's small enough to hide on many systems and devices connected to the internet.
After all, the 64MB of code I found could easily store itself on a hard disk, rewrite the disk controller, hang out in the disk cache while the computer is on, and then write itself back to an unused part of the drive when it's switched off. Switch the computer back on and the disk controller reloads the malware.
You won't see it unless you use a hex editor, and who's going to spend 8 hours scanning a 2TB disk sector by sector when the OS driver is already compromised?
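For what it's worth, the sector-by-sector sweep doesn't have to be a human squinting at a hex editor. Something like the rough sketch below (standard published signature offsets, an example file path, and only the handful of magics I've listed) can crawl a raw image and flag plausible filesystem headers, though it still assumes the machine doing the reading isn't lying to you:

    import sys

    SECTOR = 512

    # Published boot-sector signatures and the offset where they sit inside a sector.
    SIGNATURES = {
        b"EXFAT   ": 3,    # exFAT boot sector, offset 3
        b"NTFS    ": 3,    # NTFS boot sector, offset 3
        b"FAT32   ": 82,   # FAT32 boot sector, offset 82
    }

    def scan(path):
        """Walk a raw image sector by sector and report anything that looks like a filesystem header."""
        with open(path, "rb") as disk:
            lba = 0
            while True:
                sector = disk.read(SECTOR)
                if len(sector) < SECTOR:
                    break
                for magic, offset in SIGNATURES.items():
                    if sector[offset:offset + len(magic)] == magic:
                        print(f"possible {magic.decode().strip()} header at LBA {lba}")
                lba += 1

    if __name__ == "__main__":
        scan(sys.argv[1])   # e.g. python scan.py usb.img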
Do you think disk controllers can't be updated even when Western Digital or PERC say they can't? Have you tried reading a RaidXYZ blade server in a hex editor when the controllers or OS are already hacked?
Are those sectors marked as damaged really damaged, or just a cover?
Who's going to check their data centre at that hex-editor level?
Anything that can be updated, i.e. software or hardware, has the potential to be hacked.
So, as Lizard Squad showed with hacked routers, who last checked their network printer cards for malware?
Who virus-scans their router? Who has virus-scanned their BIOS or any other hardware device? What about your mobile; are you sure that hasn't been hacked in a variety of ways?
Do you think your read-only mounted *nix systems are really read-only?
Remember Stuxnet? It took over a year to reverse-engineer the code before an AV company declared it was a virus. That's a lot of time in the wild, and since then we have had Duqu 1 and Duqu 2, among others. Of the AV companies, only Kaspersky has come closest to identifying Duqu 2.
It doesn't matter where you look, I can show you a hack to compromise a system.
Think that Linux distro is safe because you checked the hashes match? What if your ISP has been hacked and its switches reroute you to look-alike servers serving compromised distros with matching hashes? Unless you download the code and compile the source yourself, you have no peace of mind. But wait, didn't someone recently hack GitHub? How do you know the source code you have downloaded is not compromised?
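To make the point concrete, a hash check like the sketch below (made-up file name and placeholder hash) only proves the download matches whatever value you were handed. If the published hash travelled over the same compromised path as the ISO, it proves nothing, which is why you'd want a signature verified against a key obtained out of band:

    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        """Stream the file so a multi-gigabyte ISO never has to fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Only as trustworthy as wherever this value came from.
    expected = "replace-with-the-published-sha256"
    actual = sha256_of("distro.iso")
    print("match" if actual == expected else "MISMATCH")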
Even your CAs are a liability, which then compromises various hardware like, say, Cisco gear.
There is simply too much trust placed in other people.
Sure, buying a product offsets your liability, but it's all too easy to shut down a company and start up again once the liquidators have done their work.
So what liability have you really offloaded in the scheme of things? You can be sure the insurance company, if you're insured at all, will wriggle out of it.
Most tech people don't have a clue how vulnerable they really are, but then it's not illegal to withhold the fact that a company has been hacked, is it? Can't hurt the share price now, can we?
Google, Microsoft and Facebook are not obliged to announce the fact they have been hacked, if they even know they have been. Plus, considering how many devs management have sacked over the years, just how far ahead does one plan when hacking systems from the inside? A few days, weeks, months or even years?
But by and large, the decision makers can't make those decisions, because of dependencies on open source systems that are too large to audit, or closed source systems where all they can do is trust the vendor's salesperson who says "oh yeah, it's secure".
Vendors can be held to published standards. The industry seems to take operational/sysadmin standards seriously when required to (HIPAA, PCI, etc), including across the client/contractor boundary. Problem is, there don't seem to be equivalent standards in widespread use for programs themselves, only the environments they're running in. That could change, if the holders of the purse strings were to demand it.
Attacks on dependencies are tricky. But it seems like the dumbest, most high-profile attacks are against the core line-of-business applications. I'm sure AT&T has the very best firewalls money can buy, but it operated a consumer-facing application which did not check whether a user-supplied primary key actually belonged to the logged-in user before displaying private data! This is the disconnect I'm talking about.
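The check it skipped is about as cheap as security fixes get. A minimal sketch of the missing ownership test (hypothetical schema and names, obviously not AT&T's actual code):

    import sqlite3

    # Hypothetical schema; the point is the ownership test, not the storage.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner_id INTEGER, email TEXT)")
    conn.execute("INSERT INTO accounts VALUES (101, 1, 'alice@example.com')")
    conn.execute("INSERT INTO accounts VALUES (102, 2, 'bob@example.com')")

    def get_account(logged_in_user_id, requested_account_id):
        """Return the record only if it actually belongs to the logged-in user."""
        row = conn.execute(
            "SELECT id, owner_id, email FROM accounts WHERE id = ?",
            (requested_account_id,),
        ).fetchone()
        # The line that was apparently missing: is this record the caller's?
        if row is None or row[1] != logged_in_user_id:
            raise PermissionError("not your account")
        return row

    print(get_account(1, 101))          # fine: user 1 asks for their own record
    try:
        get_account(1, 102)             # user 1 guesses someone else's primary key
    except PermissionError as err:
        print("blocked:", err)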
If they're being outsourced to the lowest bidder, then maybe they shouldn't be. If they're being bought off the shelf, purchasers could either demand certifications that meaningfully eliminate at least the elementary classes of vulnerabilities or go in-house and do it right.