> assuming that a breach is inevitable or has already occurred
Sometimes I get the vibe that all our systems are already breached by multiple state actors, and I imagine them having their little digital wars inside obscure chips on our motherboards and none of us being the wiser.
I used to be very careful about my information and privacy.
Now it's impossible without living in a way that drives you nuts.
Now I try to raise awareness and move people around me in the right direction.
If governments continue to listen to the lobbies, and these lobbies are funded by big business, governments will only make laws that favour business interests over citizens' interests.
Same here. Aside from the technical challenge, the cognitive overhead of maintaining compartmentalization is huge[1].
Real spies have access to professionals helping to deal with the psychological issues that arise from compartmentalization.
> Now I try to raise awareness and move people around me in the right direction.
Would be interesting to hear what you would consider the right direction. Is it via a better threat model (a technical solution) or changing the routine/behavior, e.g. reducing screen time, etc.?
How I "solved" it is perhaps not compatible with the mainstream today, especially with people younger than me who have never experienced life before the mobile phone or the Internet. E.g. I quit social media altogether and simply don't carry a phone when I go out, and on the technical front everything is a minimalist set-up (which is only useful for a die-hard SW engineer with enough passion to justify all the yak-shaving).
While I can't speak to all our systems being breached, I can tell you that priority targets are often cohabited by multiple actors. I've seen networks destroyed because warring factions from the same country of origin were fighting each other as much as they were working on compromising the target.
It's crazy how all of it plays out, both in the government world and the public security world. You see it all.
Like this one time these hackers by the handles Acid Burn and Zero Cool breached a local television station and were using the tape robot to fight for control over what was being broadcast. That tape robot was never the same.
Isn't hashed data secure only by virtue of our belief in its cryptographic strength?
I know there are cryptographic and non-cryptographic hashes, but given a sophisticated enough state actor, it seems they'd find a way to produce a collision.
Is there a hashing mechanism (or similar technique) that wouldn't be at risk from collision attacks?
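For context: no practical collision attack is known against modern hashes like SHA-256; the concern is a work-factor one, since a generic birthday attack needs about 2^(n/2) evaluations for an n-bit digest. A toy sketch (my own, using a deliberately truncated hash) of why digest size is what matters:

```python
import hashlib

# Collisions are trivial when the digest is short, and the cost grows
# exponentially with digest size (the "birthday bound", roughly 2^(n/2)
# tries for an n-bit hash). The 16-bit truncation below is purely
# illustrative, not a real hashing scheme.

def short_hash(data: bytes, bits: int = 16) -> int:
    """Top `bits` bits of SHA-256 -- stands in for a weak/truncated hash."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_collision(bits: int = 16):
    """Brute-force a birthday collision; takes ~2^(bits/2) tries."""
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, "big")
        h = short_hash(msg, bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg
        i += 1

a, b = find_collision()                       # a few hundred tries for 16 bits
assert a != b and short_hash(a) == short_hash(b)
# the full 256-bit digests still differ -- finding a full SHA-256 collision
# would take ~2^128 work, which is infeasible even for a state actor
assert hashlib.sha256(a).digest() != hashlib.sha256(b).digest()
```

So the honest answer is: every hash is "at risk" in principle, but a well-chosen digest size pushes the attack cost beyond anyone's budget.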
Use casino dice to generate the entropy; don't trust the hardware. This is how I handle C-level authority keys in my company. Security is mostly about reducing the feasibility of an attack: there is no way to make your systems 100% secure, but you can design them so that your 500k USD system requires an attacker to spend 5 million USD to breach it.
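A minimal sketch of turning dice rolls into key material (my own illustration, not the parent's exact procedure): each fair d6 roll carries log2(6) ≈ 2.58 bits, so about 100 rolls cover a 256-bit key, and hashing the base-6 value whitens it into uniform bytes.

```python
import hashlib

def dice_to_key(rolls, key_bytes=32):
    """Derive key material from a sequence of fair d6 rolls (values 1-6)."""
    assert all(1 <= r <= 6 for r in rolls)
    # each roll contributes log2(6) ~= 2.58 bits of entropy
    assert len(rolls) * 2.58 >= key_bytes * 8, "not enough rolls"
    n = 0
    for r in rolls:
        n = n * 6 + (r - 1)          # interpret the rolls as a base-6 integer
    raw = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    # hash to spread the entropy uniformly over the output bytes
    return hashlib.sha256(raw).digest()[:key_bytes]
```

The hash step matters: the raw base-6 integer is not uniformly distributed over byte strings, but any 256 bits of true entropy fed through SHA-256 gives key material that is as good as uniform.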
If you had to have a unique address for every atom in the solar system (I avoided the Universe), what technique would you use? Still hash and casino dice for each one?
I'm pretty serious with this question given contemplation of designing a future-inclusive Operating System.
If you really need so many cryptographic keys and won’t trust hardware then you have to build it yourself from commonly available parts and connect it to an offline raspberry pi to take out the automatically generated keys.
I'm not sure it makes a big difference to me whether a private or a state-sponsored hacker breaches me, but it only takes looking at the logs of any of my personal servers to see that "multiple breach attempts a day" is an understatement. My SMTP server gets logon attempts from 5 to 20 distinct IPs a day, and I blacklist those IPs automatically, so it's different IPs every day. This is just a random server on the internet, with no particular reason to be targeted.
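The auto-blacklisting described above can be sketched in a few lines; the log format, regex, and threshold here are my guesses, not the parent's actual setup (which in practice is often just fail2ban):

```python
import re
from collections import Counter

# Matches a postfix-style "authentication failed" line ending in [ip].
# Both the pattern and the threshold are assumptions for illustration.
FAIL_RE = re.compile(r"authentication failed:.*\[(\d+(?:\.\d+){3})\]")

def ips_to_blacklist(log_lines, threshold=3):
    """Return IPs with at least `threshold` failed logins, sorted."""
    fails = Counter()
    for line in log_lines:
        m = FAIL_RE.search(line)
        if m:
            fails[m.group(1)] += 1
    return sorted(ip for ip, n in fails.items() if n >= threshold)

log = [
    "Jan 01 03:14:15 mail postfix/smtpd: SASL LOGIN authentication failed: [203.0.113.7]",
    "Jan 01 03:14:16 mail postfix/smtpd: SASL LOGIN authentication failed: [203.0.113.7]",
    "Jan 01 03:14:17 mail postfix/smtpd: SASL LOGIN authentication failed: [203.0.113.7]",
    "Jan 01 03:20:00 mail postfix/smtpd: SASL LOGIN authentication failed: [198.51.100.9]",
]
print(ips_to_blacklist(log))  # ['203.0.113.7'] -- the other IP is under threshold
```

The blacklist itself would then be fed to the firewall (e.g. an ipset or nftables set), which is exactly what fail2ban automates.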
Exactly. Instead of putting a backdoor only in the computers going to China, why not just install backdoors in ALL the chips? Cheaper, too. At this point, it wouldn't surprise me if every computer had a hardware-level backdoor along the lines of Spectre/Meltdown, not to mention encryption backdoors. Even a state-level player doesn't have the resources to take all of that out. Remember, when the Snowden leak came out, the Russian government switched all internal memos to typewriters; that should give you an idea.
And all phones have a modem, a closed source chip that likely provides numerous ways to get into your phone.
Ira Hunt (CIA chief technology officer) said in 2014 something like "we like the fitbit because it doesn't have a ... it doesn't ... well I can't say but we like the fitbit."
Because the average person isn't running a server on their laptop? It makes no sense to think that multiple foreign nations are currently connected to random individual's computers, and actively fighting each other in your silicon without anyone noticing widespread network connections.
Let's not talk about fantasy of what could happen, but whether it actually is happening right now on everyone's personal computers. No. It's not.
Though it's unlikely multiple state actors are actively fighting over your personal computer, that doesn't mean they aren't trying to breach it, or that you shouldn't assume a breach has occurred.
Perhaps not your personal computer, but your employer's systems are a different issue: the majority of attacks (both for-profit criminal attacks and state-actor attacks) target private companies, not government networks. For example, in the recent SolarWinds case government networks were targeted as well, but the majority of victims were non-government entities.
Speaking of Zero Trust and the NSA, isn't the NSA the agency that published the weak Elliptic Curve Cryptography constant that allowed a backdoor into SSL encryption?
> isn't the NSA the agency that published the weak Elliptic Curve Cryptography constant that allowed a backdoor into SSL encryption?
Assuming what you meant is closer to "Don't I remember something to do with elliptic curves, and the NSA and an SSL backdoor?" then sure, you do remember that confluence of topics.
The NSA proposed Dual_EC_DRBG, a cryptographically secure random number generator with weird properties, and successfully had it included in the NIST standard and in RSA's (the company, not the cryptosystem) BSAFE around 2004 or so.
Dual_EC_DRBG is clearly worse than the existing state of the art as a random number generator - the numbers aren't as random as you'd like - and it also possesses a potential backdoor. If the designers chose to do so, they could pick values such that they know a secret they can use to recover your seed values from your output. The NSA never explained how it picked the values used in Dual_EC_DRBG. In cryptography it's usual to pick "Nothing Up My Sleeve" numbers, such as decimal digits of Pi, when constants are necessary and you want to show that you have no nefarious motive for picking the ones used. This was not done for Dual_EC_DRBG. The New York Times claims to have (but has never shown anybody) smoking-gun evidence that the NSA deliberately picked values allowing them a backdoor.
Even without that evidence, we know this type of algorithm would be vulnerable to such things and we know it's not otherwise better than what we have, so it's weird they wanted people to use it.
If participants used Dual_EC_DRBG as their only or main source of randomness, then yes it's likely that the NSA knows the backdoor and could get back their seed values and thus get back the ephemeral secret keys they are using for SSL/TLS.
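The backdoor mechanics can be sketched with a toy analogue. This is NOT real Dual_EC_DRBG: it uses modular exponentiation in place of elliptic-curve point multiplication, skips the bit truncation, and all constants are made up. It only shows the algebraic shape: if the designer knows the secret d relating the two public constants, one raw output reveals the next internal state, and hence every output after it.

```python
# Toy model of the Dual_EC_DRBG backdoor structure (not the real algorithm).
p = 0xFFFFFFFFFFFFFFC5            # a prime (2^64 - 59); stand-in for the curve
Q = 3                              # public constant
d = 123456789                      # the designer's secret relating the constants
P = pow(Q, d, p)                   # public constant; users never learn d

def drbg_step(state):
    output = pow(Q, state, p)      # what the generator emits
    next_state = pow(P, state, p)  # internal state update
    return output, next_state

# The victim runs the generator normally:
s0 = 0xDEADBEEF
out1, s1 = drbg_step(s0)
out2, _ = drbg_step(s1)

# The attacker, knowing only d and the public output out1, recovers the
# next internal state: out1^d = Q^(s0*d) = (Q^d)^s0 = P^s0 = s1.
recovered_s1 = pow(out1, d, p)
predicted_out2, _ = drbg_step(recovered_s1)
assert recovered_s1 == s1 and predicted_out2 == out2
```

In the real construction the same relation holds between the curve points P and Q; the dropped 16 bits only force the attacker to try ~2^16 candidate points, which is trivial.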
It's absolutely certain that Dual_EC_DRBG was used as an NSA back door. The NSA went out of their way to force it into a bunch of products, which is weird enough. Many "experts" dismissed the risk, because they claimed that the only real "danger" was that if the internal state of the RNG were to leak out over the wire, it would be trivial for the NSA (given their private key) to crack encryption of VPNs. No leak, no risk.
Juniper had a vulnerability in their VPN products. That little fiasco involved Chinese hackers having replaced the Dual_EC_DRBG public key with their own backdoored public key! When that was disclosed many security researchers suddenly took a keen interest in the Juniper VPN code and discovered that it "accidentally" leaked just the right number of bytes of RNG state into the initial connection packets.
Look.
Just... sigh.
If you're outside of the United States and you purchase any security product made by a US company, just assume that device has an NSA back door and is being used to spy on you. Juniper VPNs definitely were. Similar VPN products from Cisco most likely were. RSA hardware tokens had an insane design where they used a "tree" of keys, in which RSA held a root key that could be used to emulate any key fob. Three guesses as to why they needed to do that instead of simply generating standalone, unrelated key pairs per fob.
Similarly, anything you store on Azure, AWS, Office 365 or G Suite should be considered compromised by the US. Encryption my ass. They have a copy of your encryption key!
In the past, before pervasive telemetry and HTTPS everywhere, it was possible to detect some kinds of spying. For large, complex corporate networks with a significant cloud presence, it is now impossible.
A Cisco backdoor was found by a paranoid system admin about a decade ago. He didn't trust the vendor firewalls, so he passed everything through a Linux firewall and compared the logs using a script. Every "flow" reported by the commercial firewall was matched against the packet log of the Linux firewall.
He noticed that when he called Cisco support, the logs diverged. There were suddenly inbound and outbound flows from his Cisco gear that it wasn't logging, only the Linux firewall logged the flow.
He captured the traffic while calling Cisco support and quickly figured out that there was a hard-coded back door.
When he raised a stink, Cisco released a patch... which just changed the password.
When pressed, they basically said in a press release: "We were forced to do this, other vendors have similar configurations, and we cannot legally say anything more."
Reading between the lines: Cisco support figured that it was oh-so-convenient to use the NSA-mandated back door for their own troubleshooting purposes.
There is some weak reasoning with Dual_EC_DRBG ... ignoring the possibility of power/timing/etc. side-channels, Dual_EC_DRBG is as provably difficult to break as Elliptic Curve Diffie-Hellman and at least as strong as ECDSA using the same curve.
If your system is already broken if the attacker can compute discrete logs over that particular elliptic curve, and you trust that the NSA didn't generate the two constants in Dual_EC_DRBG such that they know the scalar to multiply the one by to get the other... then using Dual_EC_DRBG doesn't introduce extra attack surface, while using another PRNG does introduce extra attack surface.
Now, that's weak reasoning, particularly since they only dropped 16 bits when outputting values where it seems much safer to drop half the bits when outputting a value. However, there is a narrow set of circumstances where Dual_EC_DRBG makes sense.
Also, if you're using ECDH for encryption, you're almost certainly using AES in counter mode or ChaCha20. In that case, you're best off using a PRNG based on AES in counter mode or ChaCha20, using an entropy-gathering seeding algorithm that gracefully recovers from state compromise (such as Fortuna).
I always considered "zero trust" to mean something like I generate and hold the key that encrypted the data before it was sent to your cloud.
Not this:
The Zero Trust model eliminates trust in any one element, node, or service by assuming that a breach is inevitable or has already occurred. The data-centric security model constantly limits access while also looking for anomalous or malicious activity.
I think this is the definition everyone uses. I know the phrase from Google's BeyondCorp, which seems to be the first implementation of a zero trust network. [0]
However, some providers, such as SpiderOak, claim to be zero trust compliant in a different sense: they cannot access your data even if compelled to. In that sense, data is encrypted/decrypted solely on your own hardware.
To be pedantic, the central example of a zero trust network is the internet. BeyondCorp popularized the idea of running corporate services the same way as internet facing services, instead of relying on the internal network as a security boundary.
I thought "Zero trust" meant nodes that are linked among the LAN should be treated the same as nodes linked among the WAN - e.g. ideally no unencrypted and unauthenticated traffic even on LAN links, and no lack of authentication, lack of encryption, or assignment of privilege just because your IP is in the same subnet or is in a private subnet.
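Concretely, that reading of zero trust means requiring authenticated, encrypted transport (e.g. mutual TLS) even for internal services. A server-side sketch in Python; the certificate paths and the internal-CA setup are placeholders:

```python
import ssl

def harden(ctx: ssl.SSLContext) -> ssl.SSLContext:
    """Apply the 'LAN is just another WAN' posture to a TLS context."""
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # reject any peer without a valid certificate, even from
    # "trusted" internal subnets -- no privilege from IP ranges
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx

def internal_server_context(ca_file: str, cert_file: str, key_file: str):
    """Build an mTLS server context; file paths are placeholders."""
    ctx = harden(ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER))
    ctx.load_cert_chain(cert_file, key_file)  # this service's own identity
    ctx.load_verify_locations(ca_file)        # internal CA issuing client certs
    return ctx
```

With this posture, being "on the LAN" buys a client nothing: every connection has to present a certificate chained to the internal CA, same as it would across the internet.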
That would require severe restrictions on users, to the point where security actually becomes a bottleneck for normal functionality. A better way is to reduce the cooperation to the minimum necessary, as opposed to eliminating it.
For example, my office MacBook has policies enforced via "security profiles". Now, being a privileged user, I have the ability to remove these, but knocking off my privileges would severely limit my ability to install new work-related software or manage my machine to suit my work, etc. They can monitor and flag if profiles are removed, but there is always this race condition that a user can exploit.
Ultimately it takes leadership buy-in. Even things like UAC or sudo/security profiles aren't foolproof. A lot of people will just type in their password or click yes if prompted while they are perceptively trying to "do something", even those that "work in tech" but not in security. I have had some good developers try to disable our security services, use proxies, etc. And realistically my teams work hard to nurture relationships with dev and give them any and all tools they need.
I tried, hard, to implement some basic account separation for people with admin access to backend infra. The goal was simply to reduce the possibility of EoP or lateral movement, and really only for those that work in backend infra (i.e. devs, ops, SQL admins, etc.).
Things like removing local admin from the normal account and using dedicated admin accounts for backend-infra escalation, so the normal user's email/browser can't possibly dump LSASS, etc.
I got a LOT of pushback, to the point of being called names. I was told I could not push further without possible repercussions and could not escalate the proposal to the C-suite. My org wasn't even open to step 1...
"Product" is military/intelligence-speak for synthesizing knowledge/information/an analysis into a document/artifact. "Customers" of that product are basically stakeholders or the people who requested that an intelligence unit _produce_ the product.
e.g. if you have a UAV flying over a region of interest for long periods of time, on a schedule, a product of this flight would be the analysis of traffic/footprints over the region over time, say as a heatmap over a satellite picture.
e.g. if you collect and analyze data from your company's data warehouse to answer a question about a possible implementation choice and the effect it would have on scaling your systems, and write this down in an architecture decision record (ADR) or something similar. In intelligence-speak, that ADR would be a product.
There are a number of approaches to Zero Trust from different vendors since Forrester coined the term. We at Saas Pass are of the opinion that you are always guilty until you prove your innocence to access any company application, regardless of whether you are inside or outside the corporate network. We require persistent multi-factor authentication, and other signals, even on the network.
To us that is the core philosophy. Don’t trust and still verify everywhere.
> There are a number of approaches to Zero Trust from different vendors since Forrester coined the term. We at Saas Pass are of the opinion that you are always guilty until you prove your innocence to access any company application, regardless of whether you are inside or outside the corporate network. We require persistent multi-factor authentication, and other signals, even on the network.
> To us that is the core philosophy. Don’t trust and still verify everywhere.
I recognize this is a pitch, but I don't understand what you're pitching.
Zero trust removes the concept of a "perimeter" or "boundary" with hardened entry points (VPNs, bastions) and replaces it with a variety of means of ensuring secure communication between users/services (think 2FA, mTLS, tokens, endpoint security scoring, etc.). They're basically saying they do Zero Trust, but kind of in a weird way, because if you're doing it "right" it's irrelevant whether a connection traverses a corporate network or not, since all connections more or less have to establish their legitimacy using a combination of the aforementioned methods.