I also want to raise an alarm about a current AV practice, not mentioned in the article:
AV products like Bitdefender will MITM your HTTPS connections by installing their own root certificates, by default and without warning.
In the name of "security", this undermines the very purpose of HTTPS, knowingly endangering their users.
And consider that I, a highly technical and security-conscious software developer, only noticed because I saw green icons appearing in my search results and then realized that Google's SSL certificate was now a fake. I caught it only because I know how this shit works and those green icons seemed suspicious.
And yes, I'm using the word "fake", because I doubt that companies like Bitdefender have to pass the same certifications as a certificate authority or that they have any deals whatsoever with Google. And it's a serious vulnerability, because their certificate can get stolen and used by malicious software, not to mention you now have to trust a third-party with all of your secure connections, which includes your Google searches exposing your most secret desires, your Facebook and Slack chats, your bank account, everything. A third-party that does not have the scrutiny of your open-source web browser.
That's just preposterous and these products only survive because users are gullible and technically illiterate.
Not true. Google collects your searches, but they don't sell them; they sell whatever they infer from them (your compiled and quite vague profile, and I know because I've interacted with their AdSense platform). They'd be stupid to sell your actual searches, since those are their most valuable property.
Does anybody else know your search history? Besides the NSA, which I assume has access to all US-hosted data, no. And not even Google knows my most sensitive searches, because my private mode is a Tor Browser connecting to DuckDuckGo, answering for my porn needs mostly.
And I trust Google to keep my data safe more than I trust shady AV companies, because Google has hired a lot of security researchers, at their size all eyes are watching them and their behavior has been acceptable compared with that of others like Facebook.
Information security is all about compartmentalization ;-)
It often irks me when people say things like "Google sells all of your data to advertisers and you are the product!"
Not because there are no potential issues to discuss around ad-funded free services and data aggregation, but more because it's like clickbait (in that it oversimplifies a complex issue for emotional effect and makes discussion of actual issues more difficult).
Using algorithms to build a general profile in order to increase relevance is not the same as "selling your data". They sell access to your eyeballs in much the same way as a broadcast TV station or free alt-weekly does. The main difference is that by getting some sense of who you are and what you might be interested in, they can decrease (in theory) the amount of irrelevant ads that end up on your screen versus the traditional methods of just blanketing things with general ads or using cruder demographic info.
Lots of companies flat-out do sell your data, either in aggregate and somewhat anonymized or in full. I've not found anything yet that leads me to believe this is how Google runs their advertising business. To this day, my main concern with Google isn't so much with Google as it is with a malicious third party somehow gaining access to the info Google has on me.
You're right, Google hoards your data, instead of selling it. They are the ones buying.
That's not the point. The point is there is a dark market trade in your personal identifying data and metadata, with the ultimate goal of knowing everything about you. The more they know about you, the better they can advertise to you.
Advertisements on the Internet are bought and sold algorithmically on high speed marketplaces. Omniscience leads to better decisions in this environment. Therefore your data is precious to whomever owns it.
I totally agree with that part, including the last sentence of your post. However, I don't quite see the difference between "you are the product" and "access to your eyeballs is the product", it's just describing the same thing differently. That others are worse doesn't really change that, the ones who are better are the standard as far as I'm concerned.
Once personal data is collected it could be abused or used in a way you object to. The abuse could be perpetrated by a third party who got access to the data legitimately or illegitimately, or by an employee of the company, or by an owner of the company.
As the target of such surveillance and data collection, you just never really know how that data will be used or who by. It's optimistic, to put it mildly, to expect that data will only be used for purposes you don't object to by good people with the best intentions.
It's disingenuous to compare the targeting that Google allows with the largely untargeted advertising on TV or in a newspaper. These traditional advertising media also don't perform the same intensive tracking that online advertising does. There is really no similarity, other than the fact they both result in ad impressions.
>and their behavior has been acceptable compared with that of others like Facebook.
If you have the time, would you mind expanding on why you consider Google's behavior better than Facebook's? I find myself very wary of FB but much less so of Google, but I can't really explain why.
I had used a fake name on FB since the first day I signed up, along with a photo of my favorite rock star. About a year ago, someone outed me and FB locked my account, saying that unless I emailed them a copy of my driver's license or some other form of identification proving who I was, they would keep my account locked.
I thought, "Whatever, I'll just fire up a new one."
This past month I signed up for a new account under a very generic name like "John Johnson". No problems. I never uploaded a photo, just connected with a handful of people (fewer than 5) and felt like, "OK, we're cool now".
Yesterday, I got the same message, and now the links you provided make a lot of sense as to why. FB really is after all of your data and essentially forces you to give it to them, otherwise they hold your account hostage. Both times, I really felt like my privacy was being trampled and this was a big intrusion to get at my personal information. No way am I going to give them any identifiable information about me. I already gave up an account I had used for the better part of a decade rather than cough up my personal information and photo.
So yeah, I'm done with FB - not that I was ever a huge fan, but this past week just confirmed what I always suspected.
Sure, that's scary. But it also sounds like anti-fraud. How could we distinguish the two? They're slammed if they demand authentic accounts; they're slammed if folks create large numbers of spam accounts. How do we suppose they could win in this scenario?
Just do like most of the social media accounts do.
Have algorithms that detect spam? Let users report on accounts being used to spam other users?
Surely, if someone is abusing the system, that should be easy to ferret out without making users surrender all their personal information and identifying markers just to keep a SOCIAL MEDIA platform free of spam.
I don't know this for a fact, but if I were FB I think I'd be fanatical about verifying real users to prevent people setting up social media PBNs. A holy grail of grey-hat SEO nowadays would be to control large networks of interlinked fake social media accounts, which could be used to promote content artificially. This kind of spam is potentially hard to detect since the networks could be very large and appear organic (to the point, theoretically, of having AI-driven "users" behind each one), so the first line of defence is to identify fake user accounts.
I have the same feeling, but I cannot find arguments for it: they make money by selling very similar material.
I think Google just markets its intrusions better than Facebook does... something to do with the public slogan "don't be evil". Indeed, "evil" is like "common sense": everybody has their own definition and understands it in whatever way comforts them. Seems like pure marketing.
Does anyone have an argument about whether this different feeling of privacy intrusion/protection between Google and Facebook reflects reality?
Actually, I still have a problem when all my "actual searches" become someone else's "most valuable property".
Of course there are some other less intrusive search engines (DuckDuckGo, maybe Qwant), but unfortunately they still are less efficient than Google for fine or rare searches.
Well, I have a problem with that too, but then I'm talking about the average user, who is never going to install Tor in order to connect to DuckDuckGo. And to tell you the truth, I don't trust DuckDuckGo that much either, as they could always turn around and start collecting data without me knowing it. But for us, the technically inclined and privacy-aware, there are always solutions.
But for the average user, until a better Google comes along, I think it's OK to trust Google with their searches. And compartmentalization is paramount to information security, my point being that trusting some other company besides Google with that data is not acceptable, which is why I find that intercepting HTTPS connections is simply wrong and evil, regardless of reasons. This besides the fact that intercepting HTTPS traffic increases the attack surface, making users less secure.
I see a distinct difference between DDG and Google.
We know Google does what it does. DDG's reason for existence is predicated on not doing so.
If DDG were found to be lying, I'd guess >80% of its customer base would evaporate overnight. It would mean destroying many years of branding, trust and relatively difficult cultivation of user browser defaults.
But that's worth gaming out - what would make it worth it to light all that on fire? About the only thing I can think of is a Lavabit-style conundrum, wherein our intelligence-overlords threaten someone's freedom. So, absolutely could happen, absolutely would come out.
So that's why I trust DDG to be less forthcoming with their logs.
When I started using DDG some years ago I had the same problem: not quite so relevant search results. I think that's no longer the case; the results are very good. And you don't have to worry about security.
If the search results aren't relevant, you can always add !g to the search and force it to go to Google. They may still not be as relevant, of course, because Google can't use their data about you to filter the results to what they think you wanted. Which I'm perfectly fine with. I'm happy with the allegedly sub-par results that are displayed because of my anonymity. Others may not be.
I agree with ysavir: I do not mind being exposed to ads related to my current search - I understand it is the price to pay for a free service... I mind my searches being stored and attached to my (not even anonymous) profile in a database.
And "not even anonymous" is not optional: to have a fully functioning phone, I can hardly escape declaring my full details to Google.
And this is clearly an "evil" choice by Google: I never had to create a Linux, Debian, Ubuntu or Mint account to keep my desktop computer up to date and loaded with additional apps.
It connects on-line and off-line searches, so it shows you the result in on-line locations. The underlying assumption was that users increasingly see on-line and off-line content as all part of the same world ("their content").
The commercial aspect was that it connected to places like Amazon. It made money for Canonical by using affiliate links if the user chose to make a purchase.
That is not the same as collecting all of the user's history, anonymising it, and then selling it to a third party or presenting adverts based on that data.
The default is off as users felt searches by default connecting to external services was an invasion of privacy - that's different to "selling searches".
Frankly, this closed-off the last viable manner for desktop Linux to secure a wider revenue stream of sufficient size to drive employing enough full time developers to keep up with the other platforms, in my personal opinion. FOSS doesn't change the dynamic that full-time developers cost real money.
Source: I worked at Canonical from the early days of the desktop, for ~10 years.
I'm sorry, but I think I trust Canonicals' privacy policy as a source more than you:
"Unless you have opted out, we will also send your keystrokes as a search term to productsearch.ubuntu.com and selected third parties so that we may complement your search results with online search results from such third parties including: Facebook, Twitter, BBC and Amazon. Canonical and these selected third parties will collect your search terms and use them to provide you with search results while using Ubuntu."
Source: Ubuntu's third party privacy policy.
* The default was not off in 12.10.
I'm fine if you want to make money this way! That's why you're a company, people need to make money. My argument was that some OSS software sells your search results, one way or another. I didn't take a position in this argument (but you can hopefully guess my position).
Now, you mention that your users complained, which is true. It caused a huge amount of backlash from your established user base, many of whom contributed to OSS themselves and have seen their contributions monetized by Canonical (which is fine too, no worries). But beyond the users, it was pressure from the EFF which caused Canonical to buckle [1].
So, I don't care what Canonical's underlying assumptions were, I don't care whether it is disabled now, I don't care whether Unity showed affiliate links or not. It's all just distracting from the main point: search terms entered in Unity were sent, by default, to third-party servers!
Which was four years ago. It's off now, and has been since 16.04 (the most recent LTS release, which shipped last year).
I agree the Amazon integration in the Dash was a mistake, but it's a mistake that has been fixed. It's simply not true anymore that "Ubuntu unity sells your searches in the desktop environment by default," and continuing to tell people so is deeply misleading.
Perhaps I wasn't clear enough with my point. You stated that Canonical sold data - "unity sells your searches". I was factually correcting you, because it doesn't and didn't. Whereas what you said in the previous comment - "search terms entered in Unity were sent, by default, to third-party servers!" - is a true statement, though there's far more nuance behind it. The two things are not the same (Canonical didn't sell the searches), and in the context of the wider thread I felt it was confusing.
The rest of your points are personal opinions on users and data privacy, I shouldn't have commented on that area, I apologise. I see no value in getting drawn into discussing the strong emotions associated with this area as it never ends well :-)
I agree it's not the same as collecting user history and selling it to third parties. However, it's still against the expectations of most users of FOSS. These things, if they must be there at all, must absolutely be opt-in, never opt-out. Canonical deserved the backlash.
Yeah, agreed it was against user expectation - it's a great demonstration of what happens when organisations don't prepare their audience well or read the emotional response.
It's a personal frustration and professional regret that desktop Linux is under-funded to compete on an equal footing - the business model challenge feels intractable. RedHat/SUSE, Mandriva and Canonical have tried different options. But, there's been no sustained success that can get desktop Linux over 5% of market. Perhaps Google will have more success with ChromeOS.
You're passionate and certain in your beliefs - probably from a strong philosophical basis; My opinions are formed by my practical "experiences" and hard work for 10 years in this space. There's never been a good quality discussion when philosophical certainty crashes against pragmatic experience!
You're clearly angry, but I don't deserve the implication of being called stubborn, arrogant or aloof.
I've already apologised to you for pushing your comment side-ways in the other thread - I'm not sure what else you'd like from me at this point.
Ubuntu is trying very hard to abide by the letter of Free Software but not the spirit. Is there a solution? Simple: don't use it. There are many other distros out there which do abide by the spirit as well as the letter.
Canonical have tried to do some very good things too, and deserve some credit for successfully making Linux more end-user-friendly. They're not terrible people, just folks with a different set of perspectives and incentives from mine.
It was a definite breach of trust. Back when it happened I had to get people to go through a lot of command line fiddling to fix it. I'm still mad that Canonical didn't publish an apology and a hotfix which made the behavior opt in.
Oops, my bad. But at least they did, which was my point to begin with. I said goodbye to Ubuntu and their data raking after the notorious community discussion, even before it was implemented. They lost my trust and goodwill with that move.
> _Everyone_ is collecting our data nowadays. Who's left to sell it to?
I confirm that he is probably right about _collecting_ data. Yes, this most definitely includes FOSS software. If your qualifier for FOSS is not using GA or anything like that, then you are right; however, most of us probably still count brew as FOSS. Hope that helps.
_Everyone_ may be collecting data (I'm collecting Steam player data for project pieces in alpha and beta stages), but not everyone has access to a lot of your data. I might be able to tie your player name to your identity and know your work and sleep schedule, but that brings me no closer to your browser history.
Everyone is collecting as much data as possible, but few are in position to get all of the users' browsing data. Even fewer (if any?) make it available for sale. So there certainly IS interest in such data.
We recently published a paper on this exact issue and quantify the degree to which AV / corporate middlebox systems degrade the security of HTTPS connections. The tl;dr is that we find an alarming amount of MITM on the public internet (5-10%), mostly due to AV/middleboxes, and they almost always degrade the security of the connection.
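For a concrete sense of how such measurement studies detect interception: the server compares the TLS ClientHello it actually received against the handshake that the browser named in the User-Agent would normally send. A minimal sketch of that heuristic in Python, with made-up fingerprint values for illustration (real studies use far richer handshake features than an ordered cipher list):

```python
# Hypothetical per-browser handshake fingerprints (ordered cipher-suite lists).
# Real deployments would populate this from captured handshakes of each browser.
BROWSER_FINGERPRINTS = {
    "Firefox/52": ("TLS_AES_128_GCM", "TLS_CHACHA20", "TLS_AES_256_GCM"),
    "Chrome/56": ("TLS_CHACHA20", "TLS_AES_128_GCM", "TLS_AES_256_GCM"),
}

def looks_intercepted(claimed_browser, observed_ciphers):
    """True if the observed handshake can't have come from the claimed browser,
    suggesting a middlebox re-originated the TLS connection."""
    expected = BROWSER_FINGERPRINTS.get(claimed_browser)
    if expected is None:
        return False  # unknown browser: can't judge
    return tuple(observed_ciphers) != expected
```

A mismatch (e.g. a "Firefox" connection offering a legacy cipher list) is exactly the signal that an AV product or corporate middlebox sits in the path.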
Many corporations do this on their networks so that they can inspect traffic for security purposes and outbound loss prevention. It's not uncommon today and seems to be gaining in popularity.
Edit: I don't mean to imply that it's the right or the wrong thing to do (it probably depends on the situation). Just stating what I have seen in industry.
That communication belongs to the company, the session is work product on a company owned device. Feels squeamish if you didn't think about it that way, but is implied by almost every employment agreement.
This is quite different than the AV vendor who does not own your communication from your own device.
I still wish most companies knew of a better "best practice" than just MITM interception certificates, because that approach is brittle and a threat to corporate security in its own right. If all of your machines are already MITMed, an attacker could gain access to the existing MITM certificate, and who would ever know?
I know I'm in a relative minority in the corporate IT world, but as a software developer downloading and uploading dependent libraries and the outputs of my development work, corporate MITM interception certificates absolutely scare me, both for my personal threat model and for the threat model of the projects I work on.
It is. I do work in a Fortune 500 occasionally, and have to use their MITM gateway (websense SSL intercept).
They haven't yet fixed the internal cert to not use SHA-1.
If you're using something other than a corporate windows desktop + browser, you have to install the root certificates manually.
They have to make manual exceptions for sites that do certificate pinning. When they miss a site, it creates issues. GitHub is broken for me... I have to use crazy workarounds.
If there were a movement to enable certificate pinning everywhere, it would be very disruptive for the Corporate MITM vendors.
Edit: They also have irritating "content filters". So, if I'm tasked with researching options for a project, say a VPN, I can't search from their network. It blocks pages talking about VPNs because there's a policy to block "websense proxy avoidance".
Similar anecdote: an internal intercept certificate that Firefox outright refused to install to a trusted store because the cert seemed suspicious/insecure. (Not caught by corporate IT because of a Chrome monoculture, which is a different problem.)
As someone who's worked on corporate web proxies, I can also tell you there's usually someone who knows what they're doing administering them. Besides the "implied consent" of being on an employer network, you also have people at the company who know how to ensure bad SSL from the proxy -> website will not be permitted.
If the MITM function of Bitdefender isn't advertised, how can anyone consent to it, or knowledgeably ensure it's still enforcing connection resets on bad SSL certs?
Don't forget that if their software is changing the certificates for every HTTPS site you visit, they're probably doing it on the fly. This means that the private key is on your computer. If they generate a per-install private key, that won't be a big issue; but if it's the same private key for all their installs, it could get pretty bad once someone extracts the key.
I am not disagreeing with you, but I want to point out that, usually, the certificate is generated locally during setup and then installed in the trusted certificates store.
So no one else should have that certificate.
I also assume there is an option somewhere to disable the MITM scanner.
> I also assume there is an option somewhere to disable the MITM scanner.
By default this is ON and users don't have the competence to recognize that this is in fact increasing the surface area for attacks and to disable it. The mere existence of a setting that is ON by default doesn't absolve such AV companies.
But speaking of Bitdefender in particular, I installed it on my wife's computer, disabled that option, confirmed that it survived a restart, then one month later I discovered that it is ON again, probably due to an automatic update. It's also an "admin" setting and my wife's user account does not have admin privileges to turn it on or off.
So even with a setting in place, it's untrustworthy.
When I was using bitdefender, this was the first setting I disabled after installing it. Bitdefender also has a slew of other issues, including BSODing after installation.
Bitdefender likes to revert to default settings frequently. I prefer Windows Firewall (Win7) and have to turn off the Bitdefender firewall usually at least once a week.
Any references? I don't know what you're talking about.
I'm not a Windows user, haven't been a Windows user since 2001, my AV experience has been with the PCs of my family, whom I'm trying to keep safe.
But even if I were a Windows user, if you can't trust Microsoft, you can't trust their OS, at which point it would be better to use something else because security really depends on how trustworthy that OS and its vendor are. I do trust Microsoft more than I trust an AV vendor though.
"Thursday's unscheduled update effectively blocks highly sensitive secure sockets layer (SSL) certificates covering 45 domains that hackers managed to generate after compromising systems operated by the National Informatics Centre (NIC) of India. That's an intermediate certificate authority (CA) whose certificates were automatically trusted by all supported versions of Windows"
I'd argue that's a problem in CA trust model, not MS. If you trust a certain CA, of course you trust their issued certificates by design. Currently, if some high tier CA f*cks up, there's no other way to invalidate their issued certificates than propagating CRLs and removing its certificate from the root CA stores manually (or by updates, as in MS case).
From Wikipedia: "TLS and SSL are cryptographic protocols that provide communications security over a computer network". Your host is not "the network" and it's expected to be your trusted asset.
If the AV software can't be trusted, that's another issue not addressed by TLS.
No, I don't think so and if it does, please tell me which browser does it so I can keep away from it, because that defeats the purpose of TLS.
AVs generally run with complete permissions, and can do everything up to and including injecting their own code inside your browser's running process. Providing them with an API doesn't weaken the security, it just reduces the chances they'll screw the browser up.
I have Bitdefender and I was wondering how I can check whether it does this on my PC and Mac, and how I can disable it (if possible). Could you point me in the right direction?
Open https://www.google.com/ and see what cert you've got.
If it's "Bitdefender something" instead of Google, then uncheck "Scan SSL" in Bitdefender and google how to remove a root cert from the trusted root cert store, in case the former doesn't do it for you.
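If you'd rather check from a script than from the browser UI, something like this (a sketch using only the Python standard library; `check_host` needs network access) prints the organization that issued the certificate your machine actually sees:

```python
import socket
import ssl

def issuer_org(cert):
    """Extract organizationName from the issuer field as returned by
    ssl.SSLSocket.getpeercert(): a tuple of RDNs, each a tuple of
    (name, value) pairs."""
    issuer = dict(rdn[0] for rdn in cert.get("issuer", ()))
    return issuer.get("organizationName", "")

def check_host(hostname="www.google.com"):
    """Print who issued the cert presented for hostname on this machine."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print(hostname, "cert issued by:", issuer_org(tls.getpeercert()))
```

If the printed issuer is your AV vendor rather than a public CA, your HTTPS traffic is being intercepted locally.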
Browsers grudgingly support local MITM as an ugly half-ass solution, mostly because banks require it as part of their data loss prevention measures. Since there is no OS-provided API, there is no alternative that makes corporate clients happy.
Depending on the AV vendor, the MITM implementation will "give AV access to your SSL traffic" or "allow everyone to intercept it" (Symantec).
I feel like, instead of MITM'ing all TLS connections, antivirus companies could implement this same thing in a browser extension. If good ad blockers can prevent requests for ads from being completed, an antivirus extension should be able to do something similar, without having to tamper with the TLS connection between the browser and the site.
That being said, users would probably be much safer if they skipped the antivirus and just installed a decent ad blocker.
At least with Chrome, the extension API doesn't allow you to "peek" into the content. You do have the ability to see the url before it's fetched[1], and block the fetch/redirect. But you can't see the data until it's too late.
Not to detract from the main point, but for Bitdefender specifically, SSL MITM looks like a paid feature. Which is ironic.
So, just use the free version. I am not sure whether it sends out all the plain URLs, though. If anybody knows for sure, please let us know.
Cert pinning ignores root certs. This is by design :(
>The Chromium browser disables pinning for certificate chains with private root certificates to enable various corporate content inspection scanners and web debugging tools (such as mitmproxy or Fiddler). The RFC 7469 standard recommends disabling pinning violation reports for "user-defined" root certificates, where it is "acceptable" for the browser to disable pin validation.
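For reference, an RFC 7469 pin is just the base64-encoded SHA-256 digest of a certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch of the pin computation and check (assuming you already have the SPKI bytes in hand; obtaining them from a live cert needs an ASN.1 parser):

```python
import base64
import hashlib

def spki_pin(spki_der):
    """RFC 7469 pin: base64(SHA-256(DER-encoded SubjectPublicKeyInfo))."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def pin_validates(chain_spkis, pinned_set):
    """Pinning passes iff at least one cert in the chain matches a stored pin.
    As the quote above notes, browsers skip this check entirely for chains
    anchored at a locally installed private root, which is the loophole
    corporate/AV MITM relies on."""
    return any(spki_pin(spki) in pinned_set for spki in chain_spkis)
```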
> In the name of "security", this undermines the very purpose of what HTTPS is about, knowingly endangering their users.
It doesn't have to be insecure. If the software that does the MITM checks the certificates correctly, I don't see how it would be worse than letting the browser handle it.
It actually is worse. The problem comes down to this: what does the interception do when it encounters an invalid certificate?
Take a self-signed cert, for example. Does it:
a) create a "valid" cert itself, hiding the error from the user? This is obviously dangerous
b) create an "invalid" self-signed cert. This is messy as a user will then see a self-signed cert from the A-V vendor, which they may be more or less inclined to trust
c) Pass the traffic through without inspection, missing any potential threats
And that's just one case. SSL/TLS interception is very hard to get right and easy to make the user's security worse as a result.
With Eset you get a message explaining the issue, similar to if your connection was blocked because malware was detected.
I don't think this practice is a big issue, because the local machine would have to be compromised for it to matter, in which case the game is over already. Also, the alternative is not scanning SSL traffic for malware, which has its own very real risks.
The issues I've mentioned (problems dealing with self-signed certs, Cert pinning and EV-SSL) don't have much to do with the client being compromised. They're examples of how SSL MITM (even assuming no implementation flaws) can damage user security by breaking the operation of SSL for user web access.
I think what the Blue Coat proxy at my work does in this case is just inspect the traffic and pass the invalid cert to the client, since it is invalid already. If the cert is valid, it replaces it with its own cert.
So to do that, it's going to stop the user's browsing session, redirect them to a local web page and then present something to let them make a decision about carrying on? Not the best user experience in the world...
But remember, like I said, that's just one example of why it's a bad idea; there are others, e.g. what do you do about EV-SSL certificates? You can't fake the browser UI element for them (remember, this is the case where the A-V product hasn't hooked the browser), so where you want to MITM an EV-SSL connection you have to downgrade it to non-EV.
Also, what do you do about certificate pinning (either built into the browser or via HPKP headers)?
Easy, you don't tamper with HTTPS traffic, it's innately a very bad idea.
Consider the goals being pursued. You're attempting to stop the user from downloading malicious content, or perhaps from getting hit with a browser exploit, or possibly you're trying to stop users going to a "bad" site.
The first one can be covered off with traditional on-access scanning of files.
The second one is much better addressed by improvements in browser sandboxing or general app. security.
The third one can be handled at a DNS level with reputation based block lists.
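The DNS-level approach is straightforward to reason about: a filtering resolver matches the queried name, or any parent zone, against a reputation list before answering. A toy sketch of that matching rule:

```python
def is_blocked(hostname, blocklist):
    """True if hostname or any parent domain appears on the blocklist,
    mirroring how DNS reputation filters block whole zones at once."""
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in blocklist for i in range(len(labels)))
```

So `is_blocked("cdn.evil.example", {"evil.example"})` is True, while ordinary names resolve untouched, and no TLS connection ever needs to be opened, let alone intercepted.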
That doesn't work for corporate communications, though. There are numerous use cases where a corporation must be able to penetrate HTTPS internally in order to comply with regulations, both for direct reasons such as regulations regarding corporate communications, and indirectly for things such as internal security, protection against insider threats, and a lot of other second-order issues like intrusion detection.
And, rolling back around to the main topic, preventing internal machines from being compromised by viruses, since a lot of people end up having to hit at least one website of some sort that has at least one tracking or advertising widget that could by three layers of indirection get compromised to serve viruses, which is, alas, not some sort of far out scenario nowadays, but just another day of the week on the web. (That is, even perfectly safe browsing habits can still get you owned on the modern web. And saying that a modern network can't count on "firewalls" and must have defense in depth still doesn't mean it's just peachy keen if an internal machine gets compromised.)
If you're going to insist those corporations can't penetrate HTTPS for compliance and security reasons, you're going to have to be willing to lift those restrictions and deal with it when their security fails. There's no two ways about this; either you grant them the necessary tools for compliance and security, or you stop complaining when they can't comply and aren't secure. (And at scale, let's be honest; the latter isn't on the table.)
I know corps need to do that, and they get to handle the trade-offs that it generates (although I'd argue that HTTPS interception doesn't in any way provide a panacea for the internal security issues you've mentioned).
The advantage is that they should have informed professional security people who can understand the trade-offs and make intelligent decisions about them.
Even then, this strategy fails against certificate pinning, which is becoming ever more common in both the mobile and web space, so corps need other solutions to those problems (likely endpoint-based).
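Certificate pinning, mentioned above as the thing that defeats this kind of interception, boils down to comparing a fingerprint of the certificate actually presented against a value shipped with the app. A minimal sketch under simplifying assumptions (real pinning usually hashes the SPKI rather than the whole certificate, and the certificate bytes below are stand-ins):

```python
import hashlib

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def pin_matches(der_bytes, pinned_fingerprints):
    """True only if the presented cert is one the app shipped with.

    An AV or corporate proxy that re-signs traffic presents a
    different certificate, so its fingerprint won't be in the set
    and the app refuses the connection.
    """
    return cert_fingerprint(der_bytes) in pinned_fingerprints

# In a real client the DER bytes come from
# ssl.SSLSocket.getpeercert(binary_form=True); these are stand-ins.
legit_cert = b"-- legitimate server certificate (DER) --"
mitm_cert = b"-- AV proxy's re-signed certificate --"
pins = {cert_fingerprint(legit_cert)}
assert pin_matches(legit_cert, pins)
assert not pin_matches(mitm_cert, pins)
```

This is why a pinned mobile app simply breaks behind an intercepting proxy: there is no way for the interceptor to produce a certificate whose fingerprint is in the app's pin set.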
However what we're talking about here is end-user A-V products and their use of HTTPS interception at a desktop level and the trade-offs that this forces on individual end users who are less equipped to handle this.
Realistically, the A-V product will likely choose to cause "less noise" for the user, so it won't present them with detailed technical information about the errors they're masking, potentially making the user's security worse.
If the alternative is not scanning SSL traffic for malware at all, then perhaps, if it's done correctly, it's not a bad compromise. For example, a broken upstream cert should just be treated the same as if malware were detected. I bet a good AV would update revocation lists more often than the OS and browser do, too.
I don't think you understand how this works. They install a root certificate on your machine and do a MITM "attack" so they can scan the URLs and block some attacks (I remember when some forum had embedded a PDF that carried an exploit, and the antivirus blocked it).
Also, you have installed an application that has root access to the PC; if it were malicious it could do a lot more damage. It is ultimately a question of trust.
I created and installed my own root certificate because I don't want to click through the exception every time I open a new incognito window; it's especially annoying for WebSocket connections.
If you MITM the connection locally, it triples the computational cost of both the encryption and handshake operations. The result is that fewer websites use TLS, because it's three times as slow for the user.
It also prevents you from using a good cipher suite when the MITM doesn't support it even though the browser and the server both do, again reducing security or performance or both. And it's very easy to screw this up the other way and have the browser show a good secure connection with strong primitives and forward secrecy while the MITM is actually communicating with the server using export ciphers or RC4.
The existence of a trusted root private key on your machine exposes you to KCI (key compromise impersonation) against every server. And key compromise is not even necessary if they use the same root private key for everyone, which has actually happened.
This is not a comprehensive list of the reasons why that is a bad idea.
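The downgrade problem described above can at least be spot-checked: a crude classifier over the negotiated cipher suite name flags the obviously broken ones. This is a sketch; the suite names are OpenSSL-style, and the marker list is illustrative, not exhaustive.

```python
# Substrings that mark a negotiated suite as unacceptable; this list is
# illustrative, not exhaustive (no 3DES-specific or static-RSA checks).
WEAK_MARKERS = ("RC4", "EXP", "EXPORT", "DES", "NULL", "MD5")

def is_weak_cipher(suite_name):
    """Flag obviously broken cipher suites by name (OpenSSL-style)."""
    name = suite_name.upper()
    return any(marker in name for marker in WEAK_MARKERS)

# With a live socket you'd feed this ssl.SSLSocket.cipher()[0]:
assert is_weak_cipher("EXP-RC4-MD5")          # export cipher
assert is_weak_cipher("RC4-SHA")              # RC4
assert not is_weak_cipher("ECDHE-RSA-AES128-GCM-SHA256")
```

The catch, of course, is exactly the one raised above: the browser can only inspect its own leg of the connection; what the MITM negotiates upstream is invisible to it.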
Compromising TLS is an infection vector. People regularly download programs from trusted websites and run them. Some apps automatically download updates from the vendor's site via TLS.
AV scanners do not have a 100% detection rate. Letting malware be where a trusted program is expected is how you get infected.
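One mitigation on the user's side of this infection vector is verifying downloads against a checksum published out-of-band, so a compromised TLS path alone can't swap in a tampered binary. A sketch (the installer bytes and digest here are stand-ins for a real vendor download and its published checksum):

```python
import hashlib

def verify_download(payload, expected_sha256):
    """Compare a downloaded blob against an out-of-band checksum.

    If anything between you and the vendor rewrote the bytes (an AV
    proxy, a compromised mirror), the digests won't match.
    """
    return hashlib.sha256(payload).hexdigest() == expected_sha256

installer = b"pretend these are installer bytes"
published = hashlib.sha256(installer).hexdigest()  # from the vendor's page
assert verify_download(installer, published)
assert not verify_download(installer + b"\x00", published)
```

This only helps if the checksum arrives over a channel the interceptor doesn't control, which is precisely what local TLS interception undermines.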
> so they can scan the urls, and block some attacks
The purpose of HTTPS is to provide a guarantee that your connection to Google is direct, with no intermediaries, such that (1) only Google knows your search query and (2) you get a guarantee that the received content is from Google.
And you get this guarantee from certificate authorities that have a good reputation and that are in business because they've proven they can keep their shit secure. And when one of them violates that trust, the OS / browser vendors can start to invalidate their certificates. AV companies are bypassing it all.
The blocking of attacks reasoning is kind of bullshit, because Google's Safe Browsing service and browser extensions maintained by a community like uBlock Origin are doing a better job of warning against potentially malicious websites. There are always vulnerabilities to exploit of course, though it's getting harder for those to pop up due to the modern sandboxing of browsers.
However I have yet to see evidence that AV software is doing a better job of catching those, because it's a whack-a-mole game and it's more likely that browser vendors find and fix those vulnerabilities faster than AV companies, because bugs get reported to browser vendors first. And sure, if you have Adobe Reader or Oracle Java installed as plugins in your browser, that's a huge risk, but it's actually easier to uninstall those and browsers have started disallowing plugins. Safari for example is disabling everything by default.
The problem with installing their own root certificate is precisely one of trust. Yes, you allow a piece of software to run with root permissions, but only for as long as you don't see it do stupid shit, like installing a root certificate; at that point all of that trust should be gone.
And that is because a custom root certificate that doesn't belong to a competent certificate authority cannot be trusted and will increase the attack surface. This is security 101.
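The way this kind of interception was spotted in the first place, checking whose name is on the certificate, can be done in a few lines. `ssl.SSLSocket.getpeercert()` returns the certificate the client actually received, and the issuer field gives the game away when an AV product has re-signed it. The dicts below mimic `getpeercert()`'s format; the vendor names are illustrative.

```python
def issuer_organization(peercert):
    """Pull organizationName out of the issuer field of the dict
    returned by ssl.SSLSocket.getpeercert()."""
    for rdn in peercert.get("issuer", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return None

def looks_intercepted(peercert, expected_orgs):
    """True if the cert was not issued by a CA we expect for this site."""
    return issuer_organization(peercert) not in expected_orgs

# Shapes mimic getpeercert(); contents are illustrative.
normal = {"issuer": ((("countryName", "US"),),
                     (("organizationName", "Google Trust Services"),))}
resigned = {"issuer": ((("organizationName", "Bitdefender Personal CA"),),)}
trusted = {"Google Trust Services", "GlobalSign"}
assert not looks_intercepted(normal, trusted)
assert looks_intercepted(resigned, trusted)
```

The browser's padlock won't warn you, because the AV's root is in your trust store; you have to look at the issuer yourself, which is exactly how the interception was noticed here.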
"The purpose of HTTPS is to provide a guarantee that your connection to Google is direct, with no intermediaries, such that (1) only Google knows your search query and (2) you get a guarantee that the received content is from Google."
This is a popular misconception, but false. SSL creates an encrypted tunnel so that nothing "in the middle" can penetrate the tunnel (in theory), but there is no contradiction in the two sides of the tunnel delegating out their trust. There had better not be, because in practice there are almost always intermediaries on the real web now. It is very common for a WAF or a load balancer to be the thing responsible for the SSL rather than the server generating the response, or you have CDNs or DoS prevention like Cloudflare doing the real work, etc. There is no particular problem with the user doing the same thing. Sure, they can be irresponsible with it... well, so can the HTTPS server side, so, well, yeah? If you're going to declare a particular encryption technique unusably flawed because it could be used incorrectly and insecurely, you're not going to be encrypting very many things.
In fact, the very web connection you are reading this on, if I understand it correctly, was not actually encrypted by YCombinator. It uses a cert they own, but they're not the ones terminating the SSL connection; that's been delegated to a trusted third-party.
Cloudflare et al don't terminate TLS using their own root certificates installed on the client, which means they don't expose you to KCI against every server.
And they don't MITM the TLS connection, they terminate it. The difference being that the performance is better rather than worse (so more people use TLS instead of fewer), the server is aware of this happening so it isn't fooled into thinking the connection is using more secure ciphers than it actually is, there is no third party forcing lowest common denominator security between the three, etc.
There are no intermediaries; the AV MITMs the traffic because otherwise it cannot scan the content or the URLs. If this happened remotely, on the AV vendor's server, then I would understand the objection, but it's doing it on your local machine, and you can disable it if you don't want it. If I had an antivirus installed on my PC and got infected by one of those drive-by attacks, I would be furious, because I thought I was protected.
I don't know where you live, but there is a TON of different JavaScript injections in the wild through ad networks that are not caught by browser vendors, or are caught too late. Google does not do a deep scan of the page or its files; it just has a blocklist of URLs. If I host my code on another URL or change a file slightly, only AV helps in this situation.
And remember, they only have to get it right one time.
I still don't see why AV should scan websites. Get an ad blocker + a JavaScript blocker (with a whitelist for trusted sites) and an AV to scan local files. MITM'ing TLS creates more trouble and potential danger than it solves.
>The blocking of attacks reasoning is kind of bullshit
>I have yet to see evidence that AV software is doing a better job of catching those
I wouldn't be so one-sided. Imagine a fresh new variety of ransomware starts spreading. No one can catch it at day zero, but a good AV can catch it by day one (OK, week one), and neither Google nor uBlock or the likes can.
Do you have any evidence for these claims? What's the concrete mechanism that allows AVs to observe and react to threats earlier than Google? (Since you allow up to 1 week of reaction time, I'll assume that you're not referring to heuristic detection methods.)
With cloud reputation service all AV user base (provided sufficiently large) turn into global sensor network, along with honeypots vendors maintain separately. This allows (at the cost of users' privacy) to detect new emerging threats within hours, then acquire samples, analyze them and deploy new signatures within days.
Google can of course react equally fast. But the "signal delay" may be much higher, as users report only URLs they can immediately link to their troubles, e.g. malware that crashes the browser.
And second, what Google can do right now is block only one attack vector, namely the web page.
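The sensor-network mechanism described above is, at its core, a hash lookup: endpoints report fingerprints of what they see, and the vendor pushes back verdicts once a sample is classified. A toy sketch of that loop (the real telemetry, sample acquisition and signature pipelines are of course far more involved, and this ignores the privacy cost mentioned above):

```python
import hashlib

class ReputationService:
    """Toy cloud reputation service: endpoints report file hashes,
    analysts mark hashes bad, and later lookups get a verdict."""

    def __init__(self):
        self.sightings = {}   # sha256 -> number of endpoints reporting it
        self.known_bad = set()

    def report(self, file_bytes):
        digest = hashlib.sha256(file_bytes).hexdigest()
        self.sightings[digest] = self.sightings.get(digest, 0) + 1
        return "malicious" if digest in self.known_bad else "unknown"

    def flag_bad(self, file_bytes):
        # Stand-in for the analyze-and-deploy-signatures step.
        self.known_bad.add(hashlib.sha256(file_bytes).hexdigest())

svc = ReputationService()
sample = b"brand new ransomware dropper"
assert svc.report(sample) == "unknown"    # day zero: nobody knows it yet
svc.flag_bad(sample)                      # vendor analyzes a sample
assert svc.report(sample) == "malicious"  # every endpoint now benefits
```

The point of the sketch is the asymmetry: one endpoint's sighting plus one analyst verdict protects the whole user base, which is the "day one, not day zero" claim above.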
Thinking rationally, chances are high that Google is seriously considering entering the AV business. They are in a highly advantageous position to do it successfully, given their user base, resources and AI tech.
Google owns VirusTotal, so they either have a strong set of tagged samples to work with, or an incentive not to disrupt their partnerships with existing vendors.
Good point. A huge, high quality dataset to train their own AI-based malware detection engine. Someone must be doing this already, at least as a research project.