Kaspersky OS (kaspersky.com)
449 points by itiman on Nov 18, 2016 | 285 comments


No word in the article on whether this is FLOSS or not, so I'm assuming it'll be something closed, which essentially renders the entire exercise moot from my POV. I also don't like how they mentioned Linux. They make it sound as if

(a) Linux is very insecure... I'm no expert, but I'd like to see them prove their system is more secure than a Linux distro dedicated to security.

(b) Linux is the only viable option. There are plenty of other operating systems, some of which even focus on security (starting with the list of existing microkernels).


> I also don't like how they mentioned Linux

I do. Every time I read an article on a new "operating system" someone has released, it turns out to basically just be another Linux distribution, or a new gui/runtime environment running on top of the Linux kernel. Kaspersky took a step back and thought about their requirements, and decided to go with a microkernel architecture because they felt that was a better approach for security (a choice I agree with).

It's refreshing to see genuinely new OSs built from the ground up (open source or not), because it shows that not everyone is stuck in the mindset of Linux when they want to innovate in the OS space. That's not to say that Linux is necessarily a bad choice for many scenarios, just that it's great to see projects that don't restrict themselves based on the design of Linux (and, more generally, Unix).


"Anticipating your questions: not even the slightest smell of Linux. All the popular operating systems aren’t designed with security in mind, so it’s simpler and safer to start from the ground up and do everything correctly."

It looks like a realistic assessment. General purpose operating systems ( at least the 3 most famous ones ) are built with ease of use in mind, not security. Even Torvalds admits it, saying that he sees performance as a top priority, and not security.

Whether Kaspersky OS is more secure or not, we will see in the future.


OpenBSD is pretty popular in the security community, and is as FLOSS as it gets.


Yet it's not widely used as an embedded OS. I've never seen a router or web camera running OpenBSD.

The underlying problem, IMO, is people. They just don't care about security; they want to deliver a working device.

Also, it's not clear how many of the vulnerabilities used in real-life attacks (like DDoS from IoT devices) are in the latest Linux kernel. Maybe the problem is not with Linux, but with custom software or a lack of updates.


The thing with a BSD project is that it is a kernel, a distribution of slightly modified and configured 3rd party applications, and a bunch of 1st party applications.

The OpenBSD project includes stuff like OpenSSH, time sync tools, and an SSL implementation, and these are very widely used. Are those devices OpenBSD based, though?

Similarly with BSD code, you can just use it, and you only need to credit it in the code, so lots of OpenBSD kernel code will be in other systems with no user-visible attribution. If the macOS kernel contains OpenBSD code, even big chunks, is it OpenBSD based?

With Linux, the GPL means it's kind of visible when someone is using it, and the GPL encourages you not to integrate your software with it too tightly, so you get an obviously Linux-based device with proprietary code on top. With BSD, you can integrate much more tightly, only ever need to give out a binary blob, and no one would know you were using it.

That said, OpenBSD is kind of server oriented, and FreeBSD would be a more popular starting point for a device, but you might well crib OpenBSD for best practices.


This defense contractor builds their stuff on OpenBSD:

https://www.genua.de/en/solutions.html

One can always strip out what's not necessary. One can also put it in user-mode on top of a secure microkernel with some services running directly on microkernel and some running in OpenBSD. That Kaspersky thought the choices were Linux-based or purely clean-slate shows limited knowledge of what's out there or a personal preference.


I think it's more likely that Kaspersky lumped OpenBSD and Linux together as Unix-likes (very valid, since they are talking about basic OS architecture choices in that section), and chose to say Linux instead of Unix-like because Linux is much more widely known.


It's possible, but I wouldn't say likely. OpenBSD's handling of UNIX architecture, low-level primitives, security features, and code quality stands out a lot. It's still UNIX-like enough to have UNIX weaknesses, but they mitigate quite a bit.

Kaspersky has a big enough ego that I suspect his main goal is just having one of his company's and country's own to brag about. Especially if it ends up with a better track record than something like Linux, which is very visible albeit not made for security. That will actually be straightforward if it's a Layer 2 or 3 operating system, given how little they need to get right with one.


These two statements contradict each other:

> The underlying problem, IMO, is people. They just don't care about security; they want to deliver a working device.

> This unassuming black box is [...] designed for networks with extreme requirements for data security.

You can claim that nobody will buy Kaspersky's device, or that they did poor market research. But you can't claim that they don't care about security.


Kaspersky certainly care about security, it's their business. I'm talking about people who build routers or web cameras. I doubt that Kaspersky built an OS for internal use; they want to license it to other manufacturers. But I'm not sure that other manufacturers will want to pay for this extra security (and if they did, they'd already have better options).


"Kaspersky certainly care about security, it's their business."

Their business is selling antivirus & other software, not security. They have no history whatsoever of building a piece of software immune to code injection by determined attackers. There are companies and academics that build stuff like that. Some were evaluated by pentesters of third parties. Kaspersky has neither that background nor evaluations by expert breakers. We should by default think they don't know what they're doing and the product is insecure until proven otherwise. Like always.

People often mistake "company in security industry selling security products" with "knows how to secure software or systems." They're two very different things.


Can't you sue a company selling security products if it ends up being insecure? This is different from ordinary software, because most ordinary software isn't marketed with security as its primary selling feature. And if they're not getting it tested by independent experts, that could be negligence.

If they're not liable in any way, then I agree it's nothing more than marketing to call it secure.


There's little to no liability in most situations. Security experts have been pushing for liability for a long time as a solution to this. Basically, we want a minimum level of responsibility like what exists in safety-critical industries. Schneier has a brief essay that explains it well:

https://www.schneier.com/blog/archives/2004/11/computer_secu...


I suppose the other economic option is insurance. That works best in general with a lawsuit, but certain companies that would suffer great damages with a breach should be motivated to buy insurance.

One example would be companies handling credit card information. If you leak a bunch of credit cards, Visa has to invalidate and reissue the cards, and the issuing banks have to spend a lot of money handling customer support. So they're motivated to punish companies when there are security breaches, and to write these punishments into the contracts.

Thanks for the article.


"I suppose the other economic option is insurance"

If they hack you, you can lose all your secrets, lose availability of your service in a way that makes customers leave, lose an election, or be implicated in crimes when your system is a proxy. I've never believed insurance would really compensate for such losses the way it would a stolen TV or fire damage to a building.

"If you leak a bunch of credit cards, Visa has"

That's actually a good example. Both the regulatory and lawsuit-related penalties over mishandling PII have led to a boom of vendors offering solutions to make it easier. Still plenty of BS in the market but solutions are there due to incentives.


You can care about security and still make totally wrong decisions.


But OSX, Windows, and full-blown Linux aren't used as embedded OSes either. Linux is used a bit, but there are a lot of other choices.


"A bit" is an understatement. The vast majority of non-server deployments of Linux are embedded systems, and embedded systems requirements are one of the most important driving factors in its development nowadays.


I'm not sure if it shares APIs with Win32, but Windows CE runs almost half of the ATMs I've ever used.


The other half seems to be powered by Windows XP.


Also cash machines.


ATM (Automated Teller Machine) is American English for a cash machine. A teller is a bank worker who handles cash.


It's UK English too. But not as common as cash machine/cash point/hole in the wall.


Yes, my first thought (after VMS) when he said no popular OS is designed for security.

Then of course, I realized that by "Popular" he meant Mac OSX, Windows, and Linux.

Linux of course, we all know is a security mess because Torvalds refuses to deal with security issues.


I don't think it's a fair statement to blame Linux's security problems on Linus.

Linux provides support for lots of security options, but the project's guiding philosophy is "don't break userland". This is 99% of the time what you see Linus cursing out other kernel contributors for. All of the possible options that Linus could _enforce_ would do just that. Heck, a lot of the security problems and blame have nothing to do with the kernel at all and lie squarely with systemd and how it's implemented. If these are dealbreaker features that you must have, Linux is not the tool for you. Your efforts are better spent working to improve OpenBSD.


Sure, but comments like: "Security people are often the black-and-white kind of people that I can't stand. I think the OpenBSD crowd is a bunch of masturbating monkeys, in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them."

"So LSM stays in. No ifs, buts, maybes or anything else. When I see the security people making sane arguments and agreeing on something, that will change. Quite frankly, I expect hell to freeze over before that happens, and pigs will be nesting in trees. But hey, I can hope."

He's typically very critical of security-related changes unless they are a massive improvement. I think characterizing him as "functionality over security" is entirely fair. He's not even wrong necessarily.


The market has sort of proved him right.

Companies that use Linux and need security will either get caught with their pants down or they won't. It's up to their level of preparedness and luck. I wouldn't underwrite that if I were an insurer though.

Those of us that care are already using something else.


Absolutely.

I would actually argue that security doesn't depend on any one product, but instead a mindset, methodology, and toolbox. Defense in depth, etc.


Of course it's fair. He built it just to have a personal UNIX that he enjoys working on. Lots of other people did the same. Security was never a priority for them. Whereas, by that time, there were already multiple OSes with strong levels of built-in security. He could've copied stuff from them. There's even more now. They're still keeping the broken model because of (a) their priorities, which put security low, and (b) avoiding rewriting anything that depends on the broken security architecture. All the container developments are the classic solution to (b), but their security is often similarly shoddy vs what it could've been.

So, yeah, blame Linus and others in that ecosystem who do everything but make a solid foundation. Contrast that to OpenBSD for monolithic or GenodeOS for microkernel approaches, where they bake security in at various levels. Or MINIX 3, for reliability levels they achieved 10x faster than monolithic UNIXes did. You get what you focus on. :)


Right. I'm saying give up on expecting the project to change. The project isn't for what you want it to be for and never will be.

You should be shouting at companies who use it for applications where security is a must. That's where the madness lies.

I use OpenBSD and so should you :)


I also use OpenBSD (and donate to it, since I have more money than time right now)... but damn, they do need a VM that I can use, so that it can be my regular desktop.


Are you saying a VMM to virtualize other software or a pre-configured VM containing OpenBSD you can run on top of other OS's with better hardware support? Or something else entirely?


A VM to virtualize other software. In this case, my desktop has to support some clients who have very specific VM images of Win7 to access them. VirtualBox and VMware are fine, but for QEMU it's just way too slow (for me, at the moment)


QEMU too slow?


Sadly yes. For several specific clients I have to run their Win7 VM in an emulator, and it can't quite manage on my (rather old) hardware right now.

Due for a hardware refresh soon, and I will obviously be trying it again :)


Well that's mostly true. I guess we have to shout at the cloud companies then. ;)


http://arpnetworks.com/ offers OpenBSD VPS, and has a great service record! I strongly recommend them.


Digital Ocean supports FreeBSD, but not OpenBSD. Which is close, but not quite the same. Perhaps with a little persuasion, they could be talked into it.


You can install and run OpenBSD on DO. Here's the guide by Derek Sivers: http://50pop.com/d.html


Thanks! I think I'll give that a try. That's a great idea.


Please do!

It's even possible to do modern web stuff on OpenBSD. Erlang and Elixir run on it, as does Postgres. Phoenix framework pretty much works out of the box.


What the hell does systemd have to do with kernel security issues?

Is this Hacker News or slashdot?


For the most part, they are not kernel security issues.

Those get fixed. There are places where the kernel could be further hardened, but would break software.

Most of the security issues that we talk about have to do with process permissions and runlevel. The runlevel of systemd and its various components is actually my #1 security concern. Has nothing to do with the kernel.


> Those get fixed.

This has been done, but poorly and only for an extremely select few things. KASLR is just one example of a poorly implemented feature that pales in comparison to alternatives (completely subsumed by features in the grsec implementation, for example, since the upstream version only randomizes things like .text addresses; maybe some of that got fixed). Features like kptr_restrict can easily be subverted by a number of trivial infoleaks unless you back it up with a lot of other protections, etc.

Other things like __ro_after_init for read-only post-init memory have only recently been incorporated into arch/x86 (in the past few months AFAICS, so likely kernel 4.8+ only), despite being available elsewhere for nearly a year on x86 at least. I'm not sure if they fixed the fact that __ro_after_init apparently didn't work on loadable modules. grsecurity's implementation is better anyway (allowing re-marking variables as writable for short windows), but I'll concede that extra power relies on KERNEXEC, which breaks some userspace things.

I don't think they botched the LATENT_ENTROPY plugin, at least...

If you actually look at the details, Linux has a pretty bad track record as far as "meaningfully implementing defenses" goes, considering how long this stuff has been around. It's better than literally nothing, I guess.

> There are places where the kernel could be further hardened, but would break software.

Uhhhh, sorry, but you need to do your research. There are plenty of places that could be improved without breaking userland at all, while still mitigating many classes of exploits. PAX_REFCOUNT is one example (a variant of which will probably soon go upstream in some weird way, of course). RANDSTRUCT, PAX_MEMORY_STACKLEAK, KSTACKOVERFLOW, JIT_HARDEN, RAND_THREADSTACK... If you're determined to fix the FPs, the SIZE_OVERFLOW plugin qualifies, too (it occasionally hits false positives, murdering an otherwise legitimate task, but this is a different scenario to a feature outright breaking userspace, and it's caught many bugs and can stop many actual exploits on its own).

There are probably at least a dozen major features in grsecurity that don't have to compromise userspace, and still aren't implemented in Linux, with no equivalent, and no timeline on the horizon. I'm not sure what to take away from this assessment of yours that the only remaining improvements will break things, other than you aren't really aware at all of what the current defense landscape looks like...



That's a nice link. It also has absolutely zero to do with any of the specific defenses that I mentioned, and the fact that upstream Linux doesn't have them -- again, my point is that there are tons of things upstream can still do to harden the kernel before getting to breaking userspace, despite your initial claim to the contrary. The point isn't "you must adopt the grsecurity patches" or the politics around why grsec won't help upstream. The point is that these meaningful improvements actually exist, and they are not available. It's simply factual that Linux is way behind what's possible.

Furthermore, I really don't see how that link is relevant, given that most of the more recent security features that went upstream, as well as many of the ones that will come in the future, all originate in part from grsecurity anyway. Apparently, your position is that the kernel developers have already done all they can, and any further improvements will break userspace, so systemd is definitely totes the #1 biggest problem now, everybody (non sequitur, but whatever). But when I bring up the defenses they could implement, but haven't yet, from the same source as the previous ones -- apparently I'm just doing some irrelevant posturing, or something? I find that funny. Maybe in 5 years, when they've poorly ripped off more features and are still behind, you'll be moving the goalpost and saying "Any further improvements would break userspace", and it still won't be true in the slightest. :)

PAX_REFCOUNT is a non-breaking addition that would have stopped CVE-2016-0728 completely, for example, had it actually existed upstream -- it will soon enough, at least, as someone is working on it by porting the grsecurity patches.

The reality is very simple, even if you don't like it: upstream Linux is just bad at meaningful exploit mitigation, in many ways, and they have trailed behind what's possible for literally years. I'd also argue that some kernel developers seem to just have a complete, fundamental misunderstanding of what the point of the mitigations are, which is damning. One guy on the dev list argued with Kees Cook that people shouldn't bother with this shit, because we don't need to help "those bastards with proprietary modules be more secure" (Kees works on Android, so proprietary modules are just a fact of life, grsec improving their security simply being tangential), or help the people with out of tree kernels like the grsecurity team. It had apparently never dawned on him that mitigation tech could stop exploits that appear in the future. Like PAX_REFCOUNT stopping an exploit that would only appear in the future as time went on. This is mind-blowing as a position, for a developer of what is ostensibly one of the most complex projects in the world.

Anyone who has followed the grsecurity project for a while is pretty well aware of why they don't bother with upstream, and aware that upstream mostly reinvents their work, poorly. In any case, the given link is still irrelevant.

I'm going to go out on a limb and say you don't actually know much about modern memory corruption defense, or the landscape of modern kernel security and how it has moved forward over time, if this non-reply is your only answer to the examples I listed...


Which is funny, because they say that popular operating systems are not designed with security in mind. They give this as a reason to start from the ground up. I am from the camp saying that a kernel alone does not an OS make. With this in mind, network equipment OSes are certainly not popular ones. Nevertheless, there are probably lots of less popular OSes, with less popular kernels, that are designed with security in mind. Some of them are probably good tries at such a goal. They could be used and extended, and that would probably be better security-wise, because of years of bug fixes that no new OS can have.


To support your point, the company below did a clean-slate OS tightly integrated with the hardware protection features of Itanium CPUs to give people hell trying to compromise their DNS servers. They also re-did the networking stack to assume a hostile instead of benign network. Way overdue there. The OS is available for licensing IIRC, but OEMs don't give a shit about security if dollars are on the line. ;)

http://www.secure64.com/secure-operating-system

An older one that was pentested by the NSA with positive results reduced attack surface by using a PPC embedded board, the INTEGRITY microkernel, and carefully-coded state machines. That a small team made this shows that both big companies and startups could be doing way better if they cared.

http://www.sentinelsecurity.us/HYDRA/hydra.html


The article is light on details, but based on the way they talked about it ("impossible to hack in principle") it's likely that they actually can _prove_ that it's more secure than a Linux distro dedicated to security insofar as they're able to prove anything at all. Since the Linux APIs are not formally specified, or verified, it's essentially impossible to _prove_ anything at all (again, in a formal sense). Formal verification can get esoteric quickly, but the state of the art is very impressive (see CompCert, seL4, Agda, Idris).

As someone who has been paying attention to this space for some time, I welcome anything new that increases exposure. It's cool stuff, and it's probably the future. However, there's a long way to go. Security doesn't stop at the operating system. In fact, the vast majority of security breaches occur due to misconfiguration, bugs, or good old fashioned social engineering. To really build a system that's "secure in principle" you can't stop at the operating system: you need to build a toolchain that makes it easy to build user space applications that are also "secure in principle" and you need to work on the HCI so the security model makes sense / isn't defeated by users. That's going to be a lot of work!


There's a difference between "we show you the proof, and you trust us as a service provider that the assumptions are correct" - namely that non-OSS software is loaded in the form it is shown in the proof - and "we show you the proof, and give you all information needed to audit it." Some people will buy based on only the first statement and the brand of the company.

Now, we're looking at a top-notch team of OS programmers; presumably, they all have access to do that audit themselves, and at least some of them must be ethical and proficient enough to notice and refuse to ship a backdoor, right? But the influence of a state actor given 14 years to compromise a team shouldn't be dismissed out of hand.

https://en.wikipedia.org/wiki/Kaspersky_Lab#Controversy


> Now, we're looking at a top-notch team of OS programmers

Kaspersky AV hobbles my core i7 to the point where it can't even respond to keypresses. I have no view on its anti-virus abilities, but my every experience of it (the client and the management tools) is that it's shoddy enterprise churnware.

And, of course, they also promote it as if it's the best thing ever.

I guess the OS team could be the best ever. But my starting position is "ha ha, as if".


> Since the Linux APIs are not formally specified, or verified

That said, they are apparently stable enough that, from another OS, you can emulate enough of Linux to simulate a Linux OS as a container and run software as if it's on Linux.[1] So, to my mind, that implies that while they may not be formally specified, it's not too hard to look at what's actually implemented and treat that as a sort of loose spec.

1: http://www.slideshare.net/bcantrill/illumos-lx


A formal spec would be great, but you're right. Plenty of people have done working layers of Linux in user mode, etc. The only thing I know of with formal specifications was filesystem APIs, for testing them. There have also been selective ones for things like robust drivers. Personally, I'd love to see one just to document all the state and covert channels. Especially visually. I bet it would be one big-ass, messy graph. :)


Yes I would not take their mention of Linux as a personal attack on your operating system of choice.

Treat it as someone saying "I don't like pizza" when you do like pizza - it means nothing to your enjoyment of pizza.


Linux is very insecure. Maybe you have not been following the news lately.


There's plenty of great quotes from Torvalds about why it's like that.

He places functionality over security, and assumes security will just 'happen' with code quality. I tend to disagree - but I am typing this from a Linux box, not an OpenBSD box. Because Linux is more functional as a desktop - the irony there isn't lost on me.


Exactly. Priorities = what you get out of your work. Far as Linux vs OpenBSD box, remember also that contributors (including corporate) are partly to blame here since they chose to put their investments into a project that doesn't care about security instead of one that bakes it in. Even if Theo et al weren't pleasant, they could've forked OpenBSD keeping any of their improvements while making what changes they absolutely needed. We'd have had an OpenBSD desktop in a few years as easy as OpenSUSE at the least.


A very valid point. IBM contributes a TON of code to Linux. They could easily have worked to improve security if they cared.

Or, improved BSD, and avoided all that GPL stuff if they wanted.


I was actually shocked they didn't contribute to FreeBSD instead, then rebrand their management or security customizations as their own enterprise OS. The Chinese ended up doing that with Kylan. Cambridge's CHERI team made it capability-secure with minimal modifications. IBM was in an ideal position to do that, too, given they had the legendary Paul Karger, who had already built high-assurance OSes and CPUs for them.

Let's just call it Another Missed Opportunity for the Big Blue. :)


> The Chinese ended up doing that with Kylan

Kylin, for others googling: https://en.wikipedia.org/wiki/Kylin_(operating_system)

It since switched to a Linux base and today powers Tianhe-1 and 2.


That's a big book right there!

I was really disappointed in how Apple treated BSD, but I am stupid and naive. I would have expected that from IBM though.


"I was really disappointed in how Apple treated BSD"

Would you go into more detail here? The BSD license is very liberal. That's no excuse for bad behavior, but I'm unfamiliar with bad actions on Apple's part wrt BSD, which I gather there were given your phrasing.


I don't believe they have in any way violated the letter or even spirit of the BSD license. It's extremely liberal, and should be.

However, they used to really flaunt their BSD roots on OSX, but seem to do very little to give back in the form of code, money, or even PR for that matter.

Here's an example of that: https://developer.apple.com/library/content/documentation/Da...

But now they've even removed any mention of this, and allowed projects like OpenDarwin and such to just plain die. I think it's pretty poor behavior from a company that's SO involved in intellectual property rights to just take something as valuable as a complete kernel and userland (not GUI) and then just sweep it under the carpet because no laws require them to do otherwise.

Puts a really bad taste in my mouth to see such important work treated like that. I'd have expected it from IBM, but not from Apple.


Gotcha. Thanks for taking the time to respond and fill in more of the blanks. I appreciate it.


Parent might be referring to how they just gobbled it up into Darwin and Mac OS X while contributing almost nothing back. At least, I'm aware of them making a bunch of money off Mac OS X and iPhones but not hearing about contributions to FreeBSD at the level IBM or Red Hat contribute to Linux.


Yeah, I was wondering about that, too. Granted, the BSD base is arguably one step removed from Apple (as that was NeXTSTEP, which Apple acquired), and both Darwin and OpenStep are open source. Apple continues to release the source of some of their software.[0] According to the "Myths" page at FreeBSD, "FreeBSD 9.1 and later include a C++ stack and compiler that were originally developed for OS X, with major parts of the work done by Apple employees",[1] so it hasn't completely been freeloading.

I don't know enough about the levels of contribution to either to make a strong comparison, and I'm more than willing to grant IBM and Red Hat make much larger contributions to Linux than Apple does to FreeBSD. And I sympathize with the desire to have more contributions. The "how Apple treated BSD" phrasing sounds like there is more active bad treatment rather than not contributing enough. Maybe the hiring of Jordan Hubbard is considered active bad treatment? (Honest question)

[0]: https://opensource.apple.com

[1]: https://wiki.freebsd.org/Myths#FreeBSD_is_Just_OS_X_Without_...


"so it hasn't completely been freeloading."

That's Clang/LLVM compiler that gets them off GCC's GPL codebase. It also allows them to keep more extensions proprietary if they choose. It's been beneficial to the OSS community but I'd say it's barely altruistic. An exception rather than the rule.

" The "how Apple treated BSD" phrasing sounds like there is more active bad treatment rather than not contributing enough"

Helps to remember that it's how a lack of contributions is commonly phrased. Freeloading is probably the dominant model for both users and companies far as FOSS. It's just how they word the gripe. I agree they should probably use clearer phrasing for others not deep into this subject matter.


You are correct, I should have been more clear in my phrasing.


Exactly - sorry, was busy :)



That is about the X server, which is not some Linux-specific component, as it is also used by all the other *nixes.

There is Wayland, which aims to solve the security aspects mentioned in the blog post; it will be shipping by default in the next Fedora release IIRC.


But comparatively speaking, is it still more secure than Windows or OS X?


Depends on who you ask. For example, taking the top 50 products with new vulnerabilities discovered in 2016[1], Windows 10 got fewer vulnerabilities than the Linux kernel and OS X.

This could either mean that Windows 10 has become more secure than its most popular competitors, or that researchers haven't invested enough resources to audit Windows 10 properly.

Taking into account the results from previous years and previous versions (like 8.1), my personal conclusion is that Windows has actually become more secure.

[1] https://www.cvedetails.com/top-50-products.php?year=2016


Well, there's also how serious the vulnerabilities are. The Linux kernel had 4 code execution vulns; Windows 10 had 44. Linux had 44 gain-privilege vulns; Windows 10 had 79. Linux seemed to have mostly DoS vulns, which is admittedly not great, but I'd rather a server go down than get compromised and used to take over the rest of the network. Then there's the fun stuff, like mimikatz, which has been around since Windows XP and can still pull passwords from Windows 10...


It is expected that the smallest attack surface will have fewer critical vulnerabilities; comparing an entire distro gives you a different picture, since categories like code execution get similar results.

The stark contrast is in the privilege escalation vulnerabilities from the Windows side vs the other categories on the Linux side.

I would assume that many people would prefer a server to go down, corrupt its data, and leak it, rather than get compromised. The fine print is that the leaked data may contain information to compromise the server[1].

[1] https://en.wikipedia.org/wiki/Heartbleed


Yeah, but now you've swung in the opposite direction. Look at Debian's 2016 code exec vulns and you'll see it's got things like Firefox and Chrome and Drupal and Mercurial... not exactly OS components... whereas the Windows 10 vulns are in Windows OS components. I'd personally be curious whether any of those "Debian vulns" would be equally applicable to the same software installed on Windows.


The biggest issue with Windows isn't its inherent security, it's the way it's used. Most users run everything with super admin rights. Most developers require that for installs.

OSX is right behind it, with a culture of laxness that undoes most of the benefits the designers tried to give them.


I've run into "security software" for domain computers that required every computer to have the same local admin password... and for it to be enabled on every computer.


Hehe, I once saw a corporate network where the root password on all Un*x systems was ${vendor}xyz, i.e. sunxyz for a Solaris box, ibmxyz for AIX, and so forth. I hope at least they disabled root login via ssh.
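A scheme like that collapses the search space to one guess per box type. A hypothetical sketch (vendor names assumed for illustration) of the entire "wordlist" an attacker would need:

```python
# Hypothetical sketch: a ${vendor}xyz root-password scheme means one
# candidate per box type; the whole fleet falls to a five-word list.
vendors = ["sun", "ibm", "hp", "sgi", "dec"]  # assumed vendor names
wordlist = [vendor + "xyz" for vendor in vendors]
print(wordlist)
```

No brute force required, which is why predictable schemes are barely better than a shared password.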


I've not seen that, but I have seen one where the root password was the vendor who supported it - like capgem123 ibm123 etc.

Not quite as bad, but still pretty weak.


Exactly. That's ridiculous thinking far beyond what the OS designer is responsible for.


What do you mean by secure? What kinds of OS functions and capabilities do you consider the purview of an OS from a security standpoint?


An OS without FOSS targeted at routers is a total meme.

The security problems with Linux I wouldn't want to dismiss, though. I've evangelized FOSS since the mid '90s, and we had a good laugh for 20 years whenever Windows messed up (remember trustworthycomputing.com?). It is hard to deny that being accused of insecurity for 25+ years has done nothing but improve their situation. Whatever the perceived quality of Linux, security is currently not its strongest feature.

Systems should become more resilient under pressure over time. But in the case of Linux the problem was a lack of external pressure, increasing arrogance, and the dismissal of many of its shortcomings. People are still way too touchy whenever somebody picks on Linux. The 0-day window of vulnerability is a pretty good indicator of how it compares. See thegrugq's comparison of OS security (slightly NSFW): https://grugq.github.io/presentations/COMSEC%20beyond%20encr...


What happened to Pond? The domain is gone: https://pond.imperialviolet.org/


This is proprietary.

Source: ex-employee.


It sounds very interesting, for sure, but the announcement is a little thin on details. The OS is apparently based around a microkernel. Which sounds good, but AFAIK, microkernels are comparatively popular in the embedded space (think QNX, L4) - so that choice is not in itself revolutionary.

They mention signatures, and it kind of sounds as if the OS will refuse to execute any non-signed code. Again, sounds like a good idea in principle, but remember how Stuxnet came with - IIRC - two valid signatures created with stolen keys. It might be better than nothing, but to an attacker with sufficiently deep pockets, no quantum computers are needed.

Of course, those are just random thoughts popping into my head. I should wait for more details before passing judgment. A (verifiably?) secure OS for embedded and "IoT" devices would be very desirable, that much is certain.


Yes, it was two valid signatures from two different stolen certs.

The more terrifying crypto demo, IMO, was the Flame rootkit, which used a new MD5 collision to forge its signature bona fides.


Stuxnet was signed, but not by any governmental entity (the certificates were stolen from hardware vendors). There would be no reason for Kaspersky to trust third party certificates, so unless Kaspersky manages to insecurely store their private key(s), it's a very good measure.

Of course, iOS has AMFI which is supposed to enforce code signing, but it doesn't always work...
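To illustrate the stolen-key point: signature verification only proves possession of a key, not the signer's intent. A toy sketch, using HMAC as a stand-in for the asymmetric signing a real OS would use (key and payload names are illustrative):

```python
import hashlib
import hmac

def sign(code: bytes, key: bytes) -> bytes:
    # Stand-in for real code signing; the trust logic is identical.
    return hmac.new(key, code, hashlib.sha256).digest()

def os_will_execute(code: bytes, sig: bytes, trusted_key: bytes) -> bool:
    # The OS only checks that the signature verifies under a trusted key.
    return hmac.compare_digest(sign(code, trusted_key), sig)

vendor_key = b"hopefully-kept-in-an-hsm"  # assumed/illustrative key
malware = b"stuxnet-like payload"

# Attacker without the key: rejected.
assert not os_will_execute(malware, b"\x00" * 32, vendor_key)

# Attacker WITH the stolen key: waved straight through.
stolen_sig = sign(malware, vendor_key)
assert os_will_execute(malware, stolen_sig, vendor_key)
```

In other words, the signing measure is only as strong as the key custody behind it.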


L4 is popular in embedded space? Can you provide some examples?


https://en.wikipedia.org/wiki/L4_microkernel_family#Commerci...

> OKL4 shipments exceeded 1.5 billion in early 2012, mostly on Qualcomm wireless modem chips. Other deployments include automotive infotainment systems.

> Apple mobile application processors beginning with the A7 contain a Secure Enclave coprocessor running an L4 operating system. This implies that L4 is now shipping on all iOS devices, the total shipment of which is estimated at 310 million for the year 2015.


So we have 2-3 very niche deployments. Does it really make L4 popular in embedded systems?

Also, OKL4, especially the version claimed to run on Qualcomm, is very different from the original L4 (I say "claimed" since multiple attempts to reverse engineer Qualcomm baseband firmware showed no traces of OKL4).

edit: if you think this is incorrect please provide a valid counterpoint. downvoting a post this way to hide its presence is not a valid response.


It runs on billions of baseband processors. It's popular by any sane definition of the word.


Following that logic, iOS is also popular in the embedded space.


That's not the embedded space; an iPhone is a full (small) computer.

iOS is quite popular, but embedded ≠ “runs on ARM”.


Things like a cash register or a media player are also considered "embedded systems", even if they use x86 CPUs. Embedded ≠ incomplete computer.


It also helps to remember that the embedded space is extremely fragmented, with many RTOSes and non-RTOSes to choose from. Developers also do custom stuff or just run on bare metal with runtimes. It's an interesting field due to the diversity of solutions in use. That one product hits a billion installs across a bunch of vendors, in a space where phones otherwise ran almost exclusively Win Mobile, Symbian, and Android, is significant.

Its popularity was just relative, though. In mobile phones, one microkernel was more popular than others. VxWorks and QNX were more popular than the L4s in the embedded space in general. I don't know the exact distribution.


Wikipedia says: "L4 is widely deployed. One variant, OKL4 from Open Kernel Labs, shipped in billions of mobile devices.", links to a press announcement: https://web.archive.org/web/20120211210405/http://www.ok-lab...

Okay, if I look more closely, OKL4 seems to be used as a hypervisor to host regular kernels. I am not sure how much that buys one, really, from a security point of view.

Also, the same Wikipedia article states: "Apple mobile application processors beginning with the A7 contain a Secure Enclave coprocessor running an L4 operating system. This implies that L4 is now shipping on all iOS devices, the total shipment of which is estimated at 310 million for the year 2015."


Not sure if the parent meant this specifically, but the commercial version of L4 was ported to ARM to host Android, and was then used in some commercial phones -- deployed in millions says Internet: http://linuxfr.org/nodes/88229/comments/1291183


It's thin on details, but if you read to the end, he says there are more details about the OS coming in a future post.


Just a few days ago I became aware of CertiKOS, which is apparently a formally verified kernel [0] [1]. I think these two aspects--formal verifiability and being open source--are key for a truly secure OS of the future.

[0] http://news.yale.edu/2016/11/14/certikos-breakthrough-toward...

[1] https://www.usenix.org/conference/osdi16/technical-sessions/...


My suspicion is that most attempts to create a better OS for IoT will fail for political reasons. AFAICT, one really important characteristic of Linux (and JavaScript, also) for large tech companies is that they can control their own stacks without having to license tech from another corporation, but still have the benefit of network effects. Samsung has Tizen, Google has ChromeOS, etc.

At the component level, Linux also empowers the chip vendors to build what they want on their own timescales and then address a large market. Even open Linux drivers are sometimes loaders for proprietary firmware, so they aren't really giving up the ability to ship code in a black box.

I don't see any of them willingly cooperating with a company that wants to manage and deliver the whole OS from the kernel up. TL;DR: The industry probably does not want a Microsoft for IoT.


Linux doesn't scale down very well though. I really don't need the full Linux API for a lightbulb. Most embedded OSs are microkernels for this reason. If any mega-trend has a chance of unseating Linux and generally disrupting the OS space it's IoT.


A lightbulb will have an RTOS, not a microkernel. While microkernels provide utmost isolation between processes and well-defined primitives for communications, a lightbulb might not even have separate supervisor and user modes.


A lightbulb won't even have an RTOS. It will be a state machine, or a series of them, written in C on the cheapest MCU imaginable, probably 8- or 16-bit. That keeps the cost at nickels and dimes.
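The kind of firmware meant here is typically just a table-driven event loop. A sketch of that logic, in Python for brevity (a real bulb would do this in a few dozen lines of C on bare metal; states and events are invented for illustration):

```python
# Sketch of the state-machine style meant above: no OS, just a
# (state, event) -> (next_state, action) table polled in a loop.
TRANSITIONS = {
    ("off", "button"):   ("on",  "led_full"),
    ("on",  "button"):   ("dim", "led_30pct"),
    ("dim", "button"):   ("off", "led_off"),
    ("on",  "overheat"): ("off", "led_off"),
    ("dim", "overheat"): ("off", "led_off"),
}

def step(state: str, event: str):
    # Unknown (state, event) pairs leave the state unchanged,
    # like an ignored interrupt.
    return TRANSITIONS.get((state, event), (state, None))

state = "off"
for event in ["button", "button", "overheat"]:
    state, action = step(state, event)
print(state)  # back to "off" after the overheat event
```

The whole thing fits in a few hundred bytes of flash, which is exactly why neither a microkernel nor an RTOS earns its keep at this scale.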


Here is my 50 cent. Zephyr OS from Linux foundation: https://www.zephyrproject.org/


Speaking of OS's for IoT - I think http://www.contiki-os.org/ deserves a mention.


> All the popular operating systems aren’t designed with security in mind

Kaspy OS runs on a switch, and they're talking about popular operating systems, so by the same logic, wouldn't OpenBSD count as a popular secure OS for e.g. routers?

But hey, I'd be really happy if they based it on seL4 and formally verified their security concepts. That would be a real game-changer. OTOH I'm really sceptical until they provide any relevant details.


Also, in a microkernel-based system, verifying the kernel itself is but a start.

To make a verifiably secure system for network infrastructure and IoT-devices, you need, at the very least, a provably correct IP stack. Want a nice web interface? Now you need to verify the HTTP server, too. Want to talk to other devices? You probably want a DNS resolver. And so forth...

Simply signing code and having the OS refuse to execute code without valid signatures is not going to be sufficient to convince a lot of people that it's a significant improvement security-wise.

(If, on the other hand, they make it open source and provide proofs of correctness for all these components, that would indeed be a significant step forward.)


But doesn't an isolation kernel enable you to limit the spread of malware between modules, and to detect when a module has been compromised and reset it? Isn't that a big improvement?


It certainly is an improvement.

But my point was that it still leaves a whole lot of attack surface. If you want a provably secure system, you basically need to verify much more than "just" the kernel.

On the upside, if one did so, it would have benefits beyond security.


You do. The good news is that it's not a lot of components. A good example would be what's going on in the DO-178B space, which requires strong verification, albeit not necessarily formal. They subset things like networking standards, then build partitioning and quite-static design into them. The resulting stacks still have a decent amount of features:

http://www.lynx.com/lcs-lynx-certifiable-protocol-stack/

The minimum I've seen is a few drivers, a bootloader, partitioning filesystem/storage, partitioning networking, and a language runtime (esp. Ada, or today Rust). That's not a lot to build given what's already available in FOSS or embedded. More than just a separation kernel, but not much more.


Only use it if you want to send all of your information to the FSB (the modern KGB). Evgeniy Kasperskiy has friends in the government, police, and FSB. He is also an apologist for state surveillance.


We should look at this as it really is: Russia is super paranoid and increasingly isolationist. Putin is also realizing that all of his technology comes from Western companies, and they are trying to build their own so that they aren't so reliant on Cisco routers, Intel CPUs, and Apple smartphones.

This will be a Russian OS, designed to allow Russian companies to buy routers, switches, and firewalls that are not made by Western companies.

It has to scare the shit out of Russia to think that if they did have a war with the West, they would lose their access to the technology needed to run their businesses. This is a first step toward building some type of technological independence from the West.


> Russia is super paranoid, and increasingly isolationist

And they have good reason to be. The US has really shot itself in the foot by intentionally compromising the systems we create. Hopefully we can use this (and similar actions by other countries) to turn that around.


Let's cautiously assume that China also does this.

Open-source hardware would be great at counteracting this (and for other reasons), but it's harder than open-source software.


That's why China built their own OS on top of an open-source BSD. :)

https://en.wikipedia.org/wiki/Kylin_(operating_system)


Nowadays it's Linux though, according to the article.


Ecosystem effects, most likely. Most users prefer new features over stability or security; theirs must be the same. They've followed in the U.S.'s footsteps, it seems, where the DoD and "Trusted UNIX" vendors started with stronger stuff, then gradually moved to weaker options (especially Linux-based) for kernel features or specific apps.


> Open-source hardware would be great at counteracting this (and for other reasons)

This is actually the crux, and to my knowledge it is non-existent.


Not just open hardware, you need trustworthy fabs:

https://lwn.net/Articles/688751/

Some folks trying to do open hardware:

http://riscv.org/ http://www.lowrisc.org/


There is no reason why both cannot be true. You should expect that there are FSB accessible backdoors/0-days.


Absolutely, but it's also helpful to understand why an FSB/KGB aligned company would be building their own OS from scratch anyway, when they could just adopt any BSD or Linux-based OS with a permissive license.


> This is a first step towards trying to build some type of technological independence from the west.

And this is not a step forward but a step to the side, since they're becoming technically dependent on the Chinese now.


I'd prefer the FSB to have access to my files rather than the NSA. What can the FSB do to me? Send me to Guantanamo?



How does it relate to my porn collection?


Blackmail?


This is a valid point. I've told people elsewhere that the best way to deal with subversion [aside from avoiding computers] is to assess one's threat model against various countries, identify which secrets need to be protected from which country, if at all, and then protect them using stuff from the most opposing country. The one you trust might compromise you, but the one that's truly a threat won't.

Outside of national politics, Russia is most likely to steal the I.P. of, say, an American user outside of it, whereas American intelligence and police organizations might lock up that same user for some bullshit. The situation is reversed for a Russian considering U.S. solutions. Some countries or organizations are unlikely to steal your I.P. or attack you; they become a better choice than the aggressive ones. Then there are multinational FOSS projects with a strong security focus at the high end. Gotta build them from carefully-acquired source, though. ;)


Guantanamo may seem like a 5-star resort compared to where the FSB could send you...


They can hand all of your files over to Wikileaks.


Any reliable source? This is quite the accusation


The Company Securing Your Internet Has Close Ties to Russian Spies: http://www.bloomberg.com/news/articles/2015-03-19/cybersecur...

Kaspersky Lab: Based In Russia, Doing Cybersecurity In The West: http://www.npr.org/sections/alltechconsidered/2015/08/10/431...

Russia’s Top Cyber Sleuth Foils US Spies, Helps Kremlin Pals: https://www.wired.com/2012/07/ff_kaspersky/

Global Cyber Security Firm Kaspersky Denies KGB Ties and Helping Russian Intelligence: https://themoscowtimes.com/articles/global-cyber-security-fi...



It's just more FUD. Keep in mind that Kaspersky Lab found the Equation Group; no one else could.


How many Russian state sponsored hacking teams have they uncovered?


How many Russian state-sponsored hacking teams has Symantec uncovered? They didn't even assign any specific state to the hacking teams; it can only be assumed from the targets. The fact that anyone spends time on and publishes such findings is commendable, and is more than I have seen from other companies.

PS. Perhaps I just missed some publications, so I would welcome any links.


I see 2 options:

1) there is none

2) they are too good at hiding


Or option 3: they covered it up for their Russian buddies, like the U.S. firms probably covered up the Equation Group. This should be the default assumption in what are essentially police states with surveillance- and hacking-prone intelligence services plus the ability to gag, bribe, or destroy commercial vendors. Best not to trust any of them in general, and especially not to trust them to turn on their domestic intelligence services.


What's the problem for Norton, Symantec, or McAfee to find them and do a nice research paper for everyone to read?


I just said the problem.


You could say the same for Cisco, Juniper, and Microsoft.

Even your anti-virus companies have worked with the govt to allow some govt-sponsored malware through.

Would be great if Kaspersky OS will be free and open source.


Come on... it's not like Oracle started as a project Larry Ellison worked on for the CIA...


Yes, but the quality of oracle means that a backdoor is your least concern ;)


This is the first thing I thought of tbh.

I've been slightly paranoid after it was revealed that Kryptowire discovered backdoors in some Chinese made smartphones:

http://www.nytimes.com/2016/11/16/us/politics/china-phones-s...

> It was not a bug. Rather, Adups intentionally designed the software to help a Chinese phone manufacturer monitor user behavior, according to a document that Adups provided to explain the problem to BLU executives.


Iran and other SCO countries will be customers since they are friendly with Russia. I wonder if they are running it on Elbrus chips. That would make it extra super secure and Russian.


Given that the photos show an obviously bodged-on VGA connector and AHCI-related messages with Intel's vendor ID on screen, I would assume that it is just a normal PC (i.e. a white-box switch).


> That would make it extra super secure and Russian.

And extra slow, considering Elbrus performance.


So should we also only use McAfee products if we want to send all of our information to the CIA?


What choice do we have? All other operating systems send your information either to NSA or China. State surveillance reigns supreme.


FreeBSD, OpenBSD, and Linux don't.


The Intel ME and AMD PSP are still executing proprietary code on an independent processor in your CPU package all the time, with full system access.

Linux cannot do anything about it.


Yeah, you can ftrace your entire networking stack and watch whether it ever sends/receives packets without your knowledge. Or use libpcap and accomplish the same task. Or use a user-space packet stack and disable your default network interface.

I get that not everything on the system is pure FOSS. But every binary blob isn't NSA spyware. If you assume that it is, you literally cannot use ANY computer.

FOSS OSes make the full trust-but-verify paradigm nothing but a question of work-hours.
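Whichever capture method you use (ftrace, libpcap, or a user-space stack), the verification step boils down to decoding raw frames and checking where traffic is headed. A minimal sketch of that decoding, using a hand-crafted IPv4 header instead of a live capture so it runs without root:

```python
import socket
import struct

def parse_ipv4(packet: bytes) -> dict:
    # First 20 bytes of an IPv4 header: version/IHL, TOS, total length,
    # ID, flags/fragment offset, TTL, protocol, checksum, src, dst.
    vihl, _tos, _length, _ident, _frag, _ttl, proto, _csum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", packet[:20])
    return {
        "version": vihl >> 4,
        "protocol": proto,  # 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# Hand-crafted header standing in for a frame pulled off the wire:
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("10.0.0.2"), socket.inet_aton("8.8.8.8"))
info = parse_ipv4(hdr)
print(info)  # flag anything whose dst isn't on your expected list
```

The same loop, fed from a pcap handle instead of a crafted buffer, is the "watch if it ever sends packets without your knowledge" check described above.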


If you can't trust your processor you can't trust any of your verification chain that runs on said processor.


Ahem http://rationalviews.com/t/presentation-on-fully-open-source...

You could go into that rabbit hole. I'd recommend against it... I lost days reading all the docs and playing with this


POWER8/9 isn't fully open. Read the license agreements. While you get access to a lot of stuff, there is a lot of fine print you are ignoring. A lot of the deep docs are behind paywalls.

To get access to POWER8/9 literature you sign away your rights to OpenPOWER. Also, if you make anything for POWER8/9 under a public license (which licenses you can and can't use is dictated by the license agreement), using docs obtained from an OpenPOWER member company, that member company may claim ownership of your code upon leaving OpenPOWER.

They'll really only let you use 3-clause BSD or Apache 2. Linux has the only exception, for GPLv2; GPLv3 is banned, and using it on a project can have your OpenPOWER membership revoked and your code ownership transferred to IBM, if they decide to pursue it.

OpenPOWER isn't open. The docs are free, but if you write anything too useful, a high-paying member can seize your software. The only protection from this is to buy in as a high-level enterprise member. OpenPOWER is downright predatory toward the research free-tier membership.


Yeah, I went down that rabbit hole a bit.

The point is - it's about the only way to do a real actual code audit on what your processor is doing.


You can pay a company to fab a Leon3 or Leon4 for you. Leon3 is GPL, with eASIC already supporting it in their Nextremes. There's also the Rocket RISC-V core, which was fabbed on a 45nm SOI process. Do it on the same node, with anything extra as an external, swappable component on the PCB for supplier diversity. Additionally, Cambridge has FreeBSD running on a capability-secure version of 64-bit MIPS on FPGAs. It's called the CHERI CPU and CheriBSD. One might put that processor on an ASIC.

There's been many options but basically little individual, non-profit, or corporate work to make them happen. (shrugs)


Unlikely to be much unless there's a prevailing need.

I think it's easier to do deep packet inspection if you're that concerned, honestly.


Deep-packet inspection doesn't help if they leak along RF or covert channels in legit traffic. Catching RF leaks outside the most common spectrum is also something requiring expensive equipment and talent. The RF methods are in the TAO catalog as pre-built tools.


Yes - if we're going to include ultra-sonics, air-gap spanning networks, things of that nature... yeah, it gets very quickly into the range of nearly impossible to catch.

Especially if it's intermittent or simply passive. Then you could have an embedded issue for years and never know (I've long suspected that this could eventually be a problem for Defense companies)


I'm for different threat profiles with different schemes targeting them. We already have regular, security researchers and black hats hitting manipulation of flash, RAM cells, sound/speakers, and I/O firmware. It has to be in the threat model at least on the software side. Unfortunately, esp given the speeds of these things, mitigation probably demands new hardware either in general (eg custom RAM) or for detection (eg verifier of RAM's expected behavior). My old scheme of diverse, triple-redundant hardware with voting algorithms just can't match performance needed of modern workstations and servers in software alone. Maybe not FPGA's either.


The thing that really scares me, as a fortunately ex-security guy, is the fact that everywhere I've worked for the last 10 years people are super casual about keyboards and mobiles.

Mobile phones are an amazing platform to do... well, almost anything. There are some areas where their possession is restricted, though I suspect a motivated party could sneak a stripped down mobile device into nearly anywhere.

Keyboards, on the other hand. Wow. I've seen even airgapped systems have random keyboards right off the pallet slapped onto them. These sit in racks for months or years, then get tossed usually to a recycler, a donation program, stolen, or just thrown into a dumpster. Considering how much tech is in a keyboard, and how much volume it has, you could place nearly anything in there and possibly go ages without catching on.

A scenario that I recently pointed out as a 'thought exercise' was a refitted USB keyboard with a microphone, pinhole camera, and simple keylogger+screenshot engine that contained an intermittent RF/wifi/bluetooth/ultrasonic network. Programmed to dump its payload whenever an individual passed nearby and triggered it remotely.

Such a trojan could sit in a datacenter or conference room for years, completely unnoticed, the data it captured transmitted only to the cleaning crew or whoever.

Worse yet, such a device could also pass instructions to the system it was attached to as an actual USB device.

You could fit a lot of horsepower in an innocuous Dell or MS or whatever mass-produced keyboard. Toss it into a top level conference room for corporate espionage, toss it into a data center for more direct trouble, whatever.

Scary thought, and I think part of why I still use the same keyboard I've had since 1997 ;)


"Considering how much tech is in a keyboard, and how much volume it has, you could place nearly anything in there and possibly go ages without catching on."

You're thinking along the right lines. I've thought of weaponizing them, too. The main reason most don't is that someone might look inside one. Even if it's not the target, finding something obvious could make the news, with the result that the attack no longer works. That's why the NSA weaponizes the USB connectors themselves. I do think there's room for doing what the NSA is doing in a mobile-style SoC that replaces the keyboard's main MCU with the same labeling. People would be none the wiser unless they carefully measured its electrical properties.

" and I think part of why I still use the same keyboard I've had since 1997 ;)"

Haha. I keep updating, but I stopped trusting the computers a while back. As far as subversion goes, most PC-level subversions seem to have started close to 2000, with NSA's programs kicking in around 2004. So, I recommend people use pre-2004 or pre-2000 tech. Plenty of usable stuff in that category.


Ahem what? That proves nothing. Here's a write-up I did on verifying hardware against subversion.

https://news.ycombinator.com/item?id=10468624

It's a hard problem. That's why DARPA is throwing tons of money and brains at it right now. Also why a number of defense contractors maintain their own fabs and packaging plants despite the technology aging.


And your point is? It's not a hard problem. You either trust your fabricator and tools, or you don't.

End of freaking story. If you don't, and/or you can't throw a fab plant at it, your options are limited. You can read all the docs on the open CPU stuff (as far as that goes), you can literally do everything from scratch, but unless you're a Nation State or a massive company, you're pretty much wasting your time.

/edit: I don't mean this as a criticism of your writeup (where you basically state the same thing), or even some of your other comments (where you state much the same thing). It's literally an issue where there are VERY few people in the entire world capable of doing cutting edge processor design, coding, and implementation. They cost phenomenal amounts of money, and even with the money, people, and the best of intentions - a motivated nation state actor can muck things up.

The only real defense we have, as regular people, is to basically see if anything we own is misbehaving. Even that is a specialized skill set and time investment beyond what most folks are interested in committing.


"you either trust your fabricator and tools, or you don't."

That equals trusting blindly or not trusting at all. There's a whole range of verifiability between those two. It's worth exploring.

"but unless you're a Nation State or a massive company, you're pretty much wasting your time."

There are smaller firms on the buying and supply side of the equation benefiting from simpler, easier-to-inspect stuff, especially for energy or cost savings. Examples include Moore's Forth processors, Java CPUs, Plasma MIPS (FOSS), 16-bitters in smartcards, etc. Those on 0.35 micron or up can have random samples inspected by eye with microscopes if the user wants to go the extra mile. Alternatively, they at least have black boxes they can analyze or test for conformance to the white-box designs they're supposed to be. Or more easily monitor at the analog or digital level for inconsistencies, with power shut off during such an event.

Much more to this topic than you're suggesting.


Even the aerospace/defense companies I work with use almost 100% off-the-shelf hardware pre-installed with OS and software by the vendors, or at least with some minimal IT work (often offshore).

I'm happy to hear that there are some smaller fabs and things that are easier to inspect, but I think that the commodity level of most hardware still makes it super unlikely.

If a company is willing to use Office 365 (which I can neither confirm nor deny some very large shops might use, but wouldn't be out of the usual), you cannot seriously expect them to pay proper attention to what their processors are doing.

I would hope that if someone worked in high-clearance, there would be MANY such measures in place. The ease with which a fully loaded laptop can walk out of Los Alamos wouldn't really lend credence to them patching the biggest hole... the people.

You are my favorite form of security guy - the insanely suspicious sort who is always looking for the weakest point. But when it comes right down to it, there's billions of weak points at much higher levels and much more easily compromised than a chipset or compiler. It's a good academic exercise though.


"but I think that the commodity level of most hardware still makes it super unlikely." "If a company is willing to use Office 365"

Oh I'm with you on this. Steve Walker's Computer Security Initiative and the Orange Book gave us lots of highly-secure stuff for defense, etc. They got rid of that for cheap, fast, fully-featured COTS. The same happens in business, aerospace, etc. The exceptions are usually pre-made appliances or, in aerospace, better components in the DO-178B Level A stuff. Much of it is shoddy. A good chunk of the defense fabs' business is probably replacing legacy parts in old equipment at prices guaranteed through corrupt contracts. They don't give a shit about security in general: just money. ;)

"But when it comes right down to it, there's billions of weak points at much higher levels and much more easily compromised than a chipset or compiler."

There are lots of weak points. Stopping code injection from all known vectors, with simple, proven techniques at the CPU and language levels, eliminates the whole malware problem if apps are whitelisted, built from source, and include no executable scripting/JIT. There are ways to conveniently enforce POLA within a system (e.g. CapDesk), do secure (even automated) configuration of networks (Boeing's Survivability Grammars), and so on. There are components for most use-cases just waiting to be productized, integrated, and sold to a larger audience. Given this is 90+% of attacks, it's certainly worth pushing to establish a stronger baseline for companies that want less loss of secrets, availability, data, etc.

There's only so much the traditional methods like coaching and monitoring can do if one can simply open a folder (not a file!) that immediately results in full-control of machine by malware since a thumbnail rendered. Endless crap like that exploiting underlying foundation of quicksand. The HW and SW of endpoints, at least at lowest layers, need to get in check for that other stuff to be meaningful. I'm also in favor of an integrated networking stack that makes different applications, even if using TCP or HTTP, look visibly different at the packet level so NIDS spots weird patterns more easily. Like the MLS extension but not MLS policy itself. Do it at application layers with stuff like Ethos's eTypes or security-enhanced ZeroMQ where developers don't worry about plumbing much.

" It's a good academic exercise though."

It's also an industry bringing in tens of millions of dollars at least. That's with costs that are too high, lack of key software support, and little to no advertising. I imagine it could be larger than tens of millions with such obstacles reduced or eliminated.


I think it would be an interesting exercise to attempt to build a reasonably modern device that's truly audited and secure head-to-toe.

If it could be done with proper 'design by contract', inspections, and at a cost that folks could swallow, you might have a new Apple 2 on your hands. Not in the corporate world, at least not immediately (where tomorrow's profits outweigh next-week's), but among security cautious folks and researchers.

I'd love to see if a laptop, for example, could be built to that standard. And if built, if it could actually accomplish real (not hobbyist) work. BlacktOPS or something catchy :)


> But every binary ball isn't NSA spyware. If you assume that is true, you literally cannot use ANY computer.

You can use one, but you can expect it's exploited. It might not be a happy fact, but we shouldn't deny it if it's true.

The NSA by itself has 40,000 employees, tens of billions in budget, the best tools and tech in the world, and a track record of doing such things. I expect that if they see a valuable vulnerability, they will develop an exploit.


Most of them aren't exploiting software. I like to instead give the dollar amount they put into backdooring, hacking, or tapping major software or networks: $200+ million a year. Still a staggering amount that supports your point. If they doubt it, just ask if they think whatever they're using is safe from all the black hats the NSA could hire for just 1% of that amount. And for how long?

Answer isn't optimistic...


By comparison, the UK government announces £300 million investment to teach children how to join a choir. https://www.gov.uk/government/news/thousands-of-children-to-...

Priorities. :|


Neither can Kaspersky OS, for that matter.


Well except that they have vulnerabilities that the aforementioned agencies can exploit. So pragmatically, it's the same thing.

Even disregarding vulnerabilities, these OSes are not secured by design. Keeping your OS secure involves a strong trade-off with convenience.

Although I guess that if you use OpenBSD, that's a strong signal that you want to go the extra mile in the direction of security.


I really do need to put more effort into using OpenBSD for my daily stuff.

I use it for my router, but my desktop is Linux (Steam. It's because of Steam. I'll be totally honest :D )


Great sweeping accusations require citations to back them up. Without any citations, the above statement is meaningless hyperbole.

There are quite a few open source operating systems that you can personally verify that they protect your information appropriately.


> Great sweeping accusations require citations to back them up. Without any citations, the above statement is meaningless hyperbole.

That's basically what I was always telling myself in the back of my mind during the latest US election, every time I heard that the Wikileaks DNC email leaks were originating from Russia :D


That's a bizarre accusation and very easy to fact-check for yourself. It's trivial to run a packet sniffer and see all the information being sent out of your network.

I know for sure that my apple and my linux boxes aren't making any network connections that I don't understand.
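You don't even need a full sniffer to get started: the kernel's own connection table already shows which remote endpoints a box is talking to. A toy Python sketch of decoding Linux's /proc/net/tcp format (my own illustration; the sample line is made up, and of course this only tells you what the kernel chooses to report):

```python
# Toy sketch: decode a line in the format of Linux's /proc/net/tcp to see
# which remote hosts a machine is talking to. This is the same data
# netstat shows. Note: if the kernel itself is lying, only a sniffer on a
# separate device will catch it.

def decode_endpoint(hex_addr: str) -> str:
    """Turn '0100007F:1F90' into '127.0.0.1:8080' (little-endian IPv4 + hex port)."""
    ip_hex, port_hex = hex_addr.split(":")
    # IPv4 address is stored as little-endian hex, so read byte pairs in reverse.
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return ".".join(octets) + ":" + str(int(port_hex, 16))

# A made-up sample line: local 127.0.0.1:8080 connected to 93.184.216.34:443.
sample = "  1: 0100007F:1F90 22D8B85D:01BB 01"
fields = sample.split()
local, remote = decode_endpoint(fields[1]), decode_endpoint(fields[2])
print(local, "->", remote)
```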


Unless you are running a sniffer on a separate device, you're likely far less sure of that than you think you are. There are multiple components in modern computers that have some level of system access and aren't terribly constrained by the kernel (possibly including the processor on your network card itself), and that's before you get into the kernel itself lying to you (either by design or through a module).


But how do you know your packet sniffer isn't compromised?

It's turtles all the way down.


Huh - I was merely repeating parents (admittedly ridiculous) accusation.


Although I thought the same when I saw it, I think that's unfair. You could just as easily say Symantec/Norton/Microsoft Defender/Windows/Google is CIA/NSA. Since everything's being watched, Kaspersky might be just as much FSB as it is NSA, or any other country that could get its mitts on it. Cisco was definitely completely NSA there for a while, because of the backdoor. China's got Lenovo and every smart appliance, phone, and TV made in China, CIA/NSA have Dell, Korea and Japan have all of their smart TVs, phones, appliances, etc.

I'm being a little sarcastic, as obviously, they aren't watching everyone all of the time, really, and most devices' backdoors, for those that have them, are unused. I'm fine with all of it, for the most part. Yeah, I'm not perfect, and I don't want all my info made public or sold, but we give up much, much more just by everything being online. Our banks, investments, to some extent our medical history, genealogy, likes/dislikes, actions, schedule... it's all there. Each of us could be simulated with all of the info they have at this point, but they can't fully- yet. Now they just have to keep and mine all the data- which they do, but it's selective; it'll be a lot less selective about what is analyzed as time goes on. Then one or more AIs will decide what will happen to all that information, and us, if we don't all kill our planet or each other before then.

Best thing to do? Use the hell out of Kaspersky OS. Use Red Flag Linux. Just take all of your banking info and give it to the Nigerian who's been asking for it. If we all just gave up on security, what would humanity do with all of that trust? Ok, maybe not such a good idea... maybe paranoia can help you be a little more secure, for now. However, if it's open source and you build it yourself, then at least you could look at it, if you wanted, and had time.


> You could just as easily say Symantec/Norton/Microsoft Defender/Windows/Google is CIA/NSA.

Isn't it, though? Apple seems to be the only company that has stood up to the three letter agencies. (I'm a US citizen.)


Actually, MS was the first to challenge National Security Letters. Moreover, if it were found that they had left a back door in, it would result in the loss of billions of dollars in business from Europe alone. Whereas Kaspersky very clearly will always have a Russian government/corporate client.


Are you sure Microsoft was? I thought Yahoo did. I can't find the reference now (Google is full of references to Yahoo being the first to disclose an NSL), but I seem to remember something from 2008-2009 era and Microsoft's challenge was in 2012-2013 era if memory serves. Not that this matters much in the grand scheme, but I'd still love to know which company really has that distinction. Links appreciated if you find one!


I suspect you're right, that said Yahoo has lost all respect from me after they've basically turned over everything without question after Mayer became CEO. In all honesty it's very hard to tell because the challenges are secret and sealed for the most part.


Apple only kind of stood up to the feds; they happily handed over the terrorist's iCloud account and everything associated with it. They only refused to be compelled to write software, knowing full well that the FBI already had the capability to break into the iPhone. I would say that the government having some sort of restricted access to this data could be useful and important to society IF, and only if, there were better civil liberties protections in place. DHS has way too much power, and we heard from Edward Snowden that our data is often irresponsibly used.


> Kaspersky might be just as much FSB as it is NSA, or any other country that could get its mitts on it

No, it's not the same. Despite its flaws, the United States government is not at all the same as Russia's.


Its spying powers are only much more capable, and its information-war powers equally so, which is why you defend it.


No, I defend it because one is a liberal democracy that has civil rights and has promoted liberty and democracy throughout the world (despite many flaws). The other is Russia.


>promoted liberty and democracy throughout the world

You sure about that?

https://wikispooks.com/wiki/US/Efforts_to_Suppress_Democracy...


No, there's a difference between companies doing it in secret and the leader of a company publicly declaring that surveillance is a must-have thing.


Will this be open source, or will we have to trust Kaspersky that it is secure?


I guess you'll have to trust him and his friends from the FSB (former KGB) :)


Do you make the same remarks when Google, Microsoft, Apple or Palantir releases software?


I have the impression that ever since Snowden happened, yes. Any mention of a non open source OS gets accompanied by remarks about NSA, FBI and co.


Open source projects are not shielded from this either. Plenty of paranoia, justified or not, going around these days.

https://igurublog.wordpress.com/2014/04/08/julian-assange-de...

http://www.theverge.com/2013/12/20/5231006/nsa-paid-10-milli...

https://www.quora.com/Is-there-any-backdoor-left-in-Unix-or-...

https://blog.cloudflare.com/how-the-nsa-may-have-put-a-backd...

On one hand, we've been privy to some of NSA's operations, on the other, there is still a lot left to be disclosed, most will of course, never be out in the open.


Let's not forget the great OpenBSD code audit caused when someone from the FBI claimed to have planted a backdoor. http://arstechnica.com/information-technology/2010/12/openbs...


Don't know about him but I know I would if they released closed source security-critical software. Would you use Windows to host your super critical top secret backend? I know I wouldn't.


If it is a proper open source project, no.


They don't have this kind of history with security agencies though.


Um... except they do. That's what the Snowden documents showed us.


No. The Snowden documents didn't show that Larry and Sergey are childhood buddies with the FBI, NSA, or CIA chiefs.

That's the problem with intellectual Americans - you guys know how bad things are in your country, but don't realize how much worse they are everywhere else.


No, I'll concede the specifics, but the impact is the same. They show that Google, FB, etc are all sending their data to the NSA and others.

> That's the problem with intellectual americans - you guys know how bad things are in your country, but not realize how worse they are everywhere else.

What? Just because some situations are worse in other countries means the US can't be doing anything wrong?


> No, I'll concede the specifics, but the impact is the same. They show that Google, FB, etc are all sending their data to the NSA and others.

Are you talking about National Security Letters? That's a complex topic. I would hope the companies did what they could to fight where they had room, but I also don't necessarily expect companies to break the law.

If you're talking about data, the NSA tapped private datacenter connections, and I'm under the impression that Google at least was working to mitigate this (encrypt all datacenter-to-datacenter traffic) before all this came to light. Or are you referring to something else?


> Are you talking about National Security Letters? That's a complex topic.

Indeed it is complex, but the end result is still the same. Sure, they don't have to fight the law, but I also don't have to trust them with my data. Nor does Russia, and I'd even argue that it's stupid for Russia to do so.


This is interesting, yes, but a system isn't secure just because we had a nice architectural idea. Security is a long-term process of review and improvement.

If they release this as open source I would be interested in looking and learning more, and in a year or two, if a community develops, maybe it's something worth deploying to production.

But a completely new OS? Not on day one I think


There are no real details about the OS in the article. Did anybody here work on the project?


From what I heard from people that work there, this company mistreats employees and has huge problems with management. I could provide a proof link, but it's in Russian.


It is preferable to include the link outright in the original post when making that sort of statement.

Without specific details, it sounds pretty much like any random Glassdoor report from an unhappy employee.



Go ahead and provide it, Russian or not.



I worked there in the Moscow HQ. Now it's just a generic BigCo with all its corresponding problems and Dilbertesque bureaucracy, with lots of meetings to get things done.

There is a history of the company suing ex-employees and vice versa. It's pretty ruthless if you break NDA - people get real jail time. I got reminded about that during my exit interview.

I left for financial reasons. They simply refused to give raises to virtually anyone after RUR tanked in 2014, even though 85% of the company's revenues are from foreign sources.


RMB (right mouse button) -> Translate to English.


One of their developers said they're using C code generators in Haskell in this OS:

https://youtu.be/f6TmB6Zw8MQ?t=1860
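For those unfamiliar with the approach: the idea of an embedded DSL that emits C (as in the Ivory/Tower work in the same spirit) is that the only output path is well-formed C, so whole classes of C mistakes can't even be written. A toy sketch in Python (my own illustration, nothing to do with Kaspersky's actual Haskell toolchain):

```python
# Toy sketch of "generate C from a high-level language": a tiny expression
# AST whose only output is syntactically well-formed C. The generator, not
# the programmer, owns the C syntax, so e.g. unbalanced parens or missing
# return statements are impossible by construction.

from dataclasses import dataclass

class Expr:
    pass

@dataclass
class Lit(Expr):
    value: int
    def to_c(self) -> str:
        return str(self.value)

@dataclass
class Add(Expr):
    left: Expr
    right: Expr
    def to_c(self) -> str:
        # Always fully parenthesized, so precedence bugs can't sneak in.
        return f"({self.left.to_c()} + {self.right.to_c()})"

def gen_function(name: str, body: Expr) -> str:
    """Emit a complete, compilable C function for the given expression."""
    return f"int32_t {name}(void) {{ return {body.to_c()}; }}"

print(gen_function("answer", Add(Lit(40), Lit(2))))
# -> int32_t answer(void) { return (40 + 2); }
```

Real systems in this style also generate bounds checks and restrict the DSL so the emitted C is free of undefined behavior, which is the actual payoff.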


> Meanwhile, all around this alchemy folks were fairly astonished: just what were we thinking? We’d decided to make an unhackable platform and ruin our other security business model?!

If a business wants to survive in the long term, it should always answer this question with another: would it be better if we built it, or if others did? Because someone will eventually build it.


Haha! I am curious how long it will take to have it completely cracked.


On the basis that the grander the announcement, the greater the motivation to break it, I reckon about 20 minutes.


Correct! Kaspersky has just placed a MASSIVE TARGET on his back.

Understandably, investors do want catchy stories and media buzz... :)


It likely comes pre-compromised. See other discussion on Kaspersky's "dodgy history".


I want to highlight how their "sign up to our newsletter" pop-up only popped up when I scrolled to the bottom of the article, and how it had a prominent "No thanks" button. So much better UI than all those pop-on-load-before-I've-read-anything ones out there.


Somebody finally did this. Good for them. Unbreakable boxes for DNS, BGP, and routers will be a good start. Those boxes don't need to do anything else, and contain no user data.

Looking forward to hearing more about their OS.


"Finally?"

Did you miss HYDRA, Secure64, or the MILS/DO-178B groups building certified, partitioning middleware?

https://news.ycombinator.com/item?id=12988808

http://www.lynx.com/lcs-lynx-certifiable-protocol-stack/

It's been going on. The OEMs just aren't buying or cloning any of it outside a few. Many of those that do are going out of business because customers vote against security with their wallets, even when the cost isn't much more. Outside of pure defense contractors, one of the only ones making it is Genua, selling OpenBSD-based solutions at Layers 2 and 3. Assuming they're still OpenBSD-based.

"Looking forward to hearing more about their OS."

I agree with this and your view on the need to develop more bare-bones, clean-slate solutions for these kinds of services. If we're lucky, Kaspersky's will both be more secure than Linux-based products and have more convenient, affordable licensing than its forerunners to get major adoption. However, as you probably know from high-assurance, a company with no history of making a system secure against strong attackers will probably fail to do so with its products. That's my default anyway. So, I also look forward to the details to compare and contrast against competing solutions in the medium- and high-assurance spaces.


Unbreakable?

From a company based in Russia?

Really?


Does Russia make backdoored hardware and software? Do you have proof of that?

But we do have proof that the USA does. See the Snowden leaks.


Recent Russian legislation mandates backdoors:

http://arstechnica.com/tech-policy/2016/06/russias-new-spy-l...


No leaks doesn't mean there's nothing to leak.


> "everything has been built from scratch... it’s simpler and safer to start from the ground up and do everything correctly. Which is just what we did."

Quite literally the opposite of any reasonable definition of security. "Security" isn't magic fairy dust, or a tagline; it is a social property that arises only _after_ many years of use and adaptation.


"In order to hack this platform a cyber-baddie would need to break the digital signature, which – any time before the introduction of quantum computers – would be exorbitantly expensive." I loled


Very "Secrets from the future" of him :D


A lot of OpenBSD mentions here, but what about MINIX 3? It's so underrated and ignored :( They made a nice microkernel-based UNIX-like system with a NetBSD userspace.

Also, Redox is switching to a microkernel architecture...


re MINIX 3. It's not made for high-security. It's about high-reliability. Safety is usually a precursor to security but security can take a lot more. The core has to be designed for it like Genode is doing.

re Redox. It's a nice project I've praised before. It's an alpha work-in-progress, changing rapidly, and unclear how much of its design is truly for security. A trimmed OpenBSD is probably a safer bet than it so far given the staggering amount of review that went into it. Rust's features and a microkernel only prevent so many kinds of problems.


For the uninformed, what problems would Rust's features and microkernel not protect against?


I can't even remember for Rust. I just know Rust team here admitted there was a cut-off point in terms of safety features it provides like any other language. For microkernels, all they do is memory isolation plus limit kernel-mode damage. Past that, you have to design extra capabilities into the microkernel, trusted code, or apps. You can even have concurrency errors in your apps with those if there's a shared-memory space allowed.
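To make the shared-memory point concrete, here's a toy illustration (mine, not from any particular OS): memory isolation between processes does nothing for concurrency bugs inside a region two apps have agreed to share. This replays a classic lost-update interleaving by hand, with no real threads, so the outcome is deterministic:

```python
# Two "processes" both increment a shared counter via read-modify-write.
# A microkernel isolates their private memory, but once they share this
# region, an unlucky schedule still loses an update.

shared = {"counter": 0}

def read(mem):
    """Step 1 of a read-modify-write: copy shared value to private state."""
    return mem["counter"]

def write(mem, value):
    """Step 3: publish the privately computed result back to shared memory."""
    mem["counter"] = value

a_local = read(shared)       # A reads 0
b_local = read(shared)       # B reads 0 before A writes back
write(shared, a_local + 1)   # A writes 1
write(shared, b_local + 1)   # B writes 1 -- A's increment is lost

print(shared["counter"])     # 1, not the 2 a sequential run would give
```

No amount of kernel-level isolation prevents this; you need synchronization (or a message-passing design) in the apps themselves.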



How is this more secure than any other L3 switch with an out of band (or otherwise hardened) management interface?

I'm not going to say no one is attacking switches, but that's definitely not where I'd start.


> Third, everything has been built from scratch. Anticipating your questions: not even the slightest smell of Linux.

Built from scratch and secure, no Linux input. It's the Holy Grail; move over, Theo.


Closed source OS from an anti virus company with a dodgy history? I will pass.


Could you expand on the "dodgy history"? I always thought that Kaspersky was one of the "good guys"


There are several controversies, you can Google it, but that is not really important. Only "anti virus company" clause would be enough for me to not touch it with a ten foot pole.


Kaspersky has a history of working with Russian security agencies, has a lot of buddies there, and a lot of people have over their careers moved from Kaspersky to these agencies and vice versa. If Russia needs something from Kaspersky, the government won't even need a warrant - he'll be happy to help.


The same can be said of most if not all US companies. They either co-operate willingly or are served 'secret letters' by 'secret courts' in 'secret processes'.

Not only is this co-operation extensive, documented and widely known there has been zero action, prosecutions or accountability from Snowden's leaks so far.

Kaspersky is an antivirus vendor with insignificant users and influence. Why care about Kaspersky when wide-scale surveillance and 'co-operation' with the NSA by most US companies carries on uninterrupted in spite of the Snowden leaks?

That seems far more 'dodgy' than anything Kaspersky and Russia could be up to.


Are there any news articles to substantiate the claim that Kaspersky readily gives information to the Russian government?


I don't have links at hand because for me personally it's firsthand knowledge from friends and classmates working there. It's not a big secret though.


Went to a KGB sponsored Uni and worked for the GRU :-)


Someone deep downthread had these links. I haven't vetted them so much as bringing them higher in thread to put more substance in discussion:

https://news.ycombinator.com/item?id=12986906


Well, it makes sense to anyone who fully trusts the Russian government. :-)


Dodgy history?


The same here!


No mention of verification like seL4 or CertiKOS?


I understand CertiKOS used Coq, so the verification was at least half-automated? How was seL4 verified -- what tools were available at the time? Verification still remains huge work but sounds less heroic nowadays.

Now that we have tools and methodologies for verification, the announcement of yet another secure OS suddenly sounds much less impressive.


Here is seL4 proof: https://github.com/seL4/l4v

To quote, "Most proofs in this repository are conducted in the interactive proof assistant Isabelle/HOL".
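For context, the top-level seL4 result is a refinement theorem, proved via forward simulation: with $\mathcal{R}$ a relation between concrete (C-level) states and abstract-specification states, the shape of the obligation is roughly (my paraphrase, not the literal Isabelle statement):

```latex
\forall c,\, a,\, c'.\;
  \mathcal{R}(c, a) \;\wedge\; c \xrightarrow{\;op\;} c'
  \;\Longrightarrow\;
  \exists a'.\; a \xrightarrow{\;op\;} a' \;\wedge\; \mathcal{R}(c', a')
```

i.e., every step the implementation can take corresponds to a step the abstract spec allows, so every observable behavior of the kernel is a behavior of the spec.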


^ I'm interested in this as well. Would it be faster to build a secure OS using similar tech from seL4/CertiKOS now than the 14 years it took them?


Gernot Heiser, the leader behind seL4, thinks so: https://microkerneldude.wordpress.com/2016/06/16/verified-so...

They built infrastructure to build verified OS stuff faster/cheaper. For example, they reimplemented ext2fs for Linux.


Security through obscurity. I thought we all have learned that it doesn't work. Well, good riddance, KasperskyOS!

"And then there are some details that will remain for certain customers’ eyes only forever, to ward off cyber-terrorist abuses."

https://eugene.kaspersky.com/2012/10/16/kl-developing-its-ow...


I agree. What I find interesting is that real-world security is also a function of popularity. You won't get many outside hackers to attack a platform if it's hardly used.

So any new platform would first need broad adoption, then a few years of maturity, in order for the outside world to assess whether it's more secure than current systems.

Obviously, security centric design helps a lot, but on the other hand Kaspersky is a relatively small player in comparison with the other OS-movements (whether capitalist or FOSS).


Popularity is not exactly what attracts most hackers nowadays; it's profit.

Yes, in the common case, the more popular the system, the more profit can be made by hacking it. But if a system is running on a few, but very strategic places, it will be an interesting target and thus attract a lot of effort.


>Security through obscurity. I thought we all have learned that it doesn't work

We've all learned that it can't be relied upon as a sole defence.


What about Firebrick? The company that manufactures it says it uses an in-house TCP/IP stack (I don't know if they run their own OS). http://www.firebrick.co.uk/


My thoughts went as follows: Interesting... Hmmm, not much real information... hope it's open source... is this for network appliances or for general use... again, if it isn't open source I really don't care.


> not even the slightest smell of Linux

Interesting to see if this catches on in the embedded / IoT space, I guess it's a bit of a leap writing drivers / software for a different OS


Since this is an OS designed with security in mind, and since they even mentioned Linux (albeit not too nicely), I missed a comparison with or mention of OpenBSD.


"not even the slightest smell of Linux."

So me too, I have a dream. In my dream, every shop that needs two threads running and one semaphore synchronizing will cease porting Linux and instead roll up its sleeves, write its own kernel 100% matching its needs, prove its validity via formal verification, and then use it.

In the world where this is possible, why would one go for a non-generic OS from a third party? It does look to me like Kaspersky might have that same vision of the future and is trying to leverage their assets while it is not too late.

But security-by-brand-name is not really better than security-by-verification, is it?


> writes their own kernel 100% matching their need, prove its validity via formal verification

It's hard enough for one organisation to do this, given the fairly specialised set of skills it requires. Let alone every IoT vendor. There's no reason to massively replicate this kind of work. People would be better off building an ecosystem around sel4.


Agree, I'd like exactly to look closer inside "hard enough".

First, it depends on each specification -- what if the hardware is much, much smaller (IoT) and the task to perform is well defined? It is hard today primarily because the required skill set becomes less and less current, but it is all demand-driven; it was not so some time ago.

Secondly, it could be replicated to some extent only -- for example, verified libraries for each device could come from each HW IP provider, instead of coming from the SoC vendor who integrated them. Say, Synopsys would provide a verified lib for the GbE controller -- to use with all SoCs that integrate it, etc. And the final integrator would take care of verifying the final integration, including his very small, fast, low-power-consuming, maintainable, and 100%-dedicated kernel.

But my main argument is that the alternative is not so good-looking either. Porting and validation of the Linux kernel is performed mostly by engineers who lack the required skill set (a managerial decision to save on hires since "we have Linux") -- this is also something that I wish weren't replicated among product makers but, unfortunately, is.


> what if the hardware is much-much smaller (IoT) and task to perform is well defined?

Anything IoT needs a full network stack at least, and usually a set of radio drivers for WiFi, 6lowpan, Zigbee, Bluetooth or whatever. That usually amounts to quite a lot of software, which in the case of the radio stuff is often proprietary and patent-encumbered.

Asking the hardware vendors is a dead end. You might as well ask for a pony while you're at it, you're not going to get it either.

"Verified libraries" would necessarily be written against a particular OS interface and its guarantees. I'm not even sure how this process would work in terms of formal verification; even sel4 is forced to make assumptions about hardware.

The reason why you get bad Linux ports with no source and universal default passwords is simply cost. Customers do not incorporate security into their purchasing decisions - or they wouldn't buy these things - so this is what we get.


In other words, you believe that Kaspersky OS has its place and future market?

> Anything IoT needs a full network stack at least ...

My point really is, every IoT device would need only its part of the full network stack, not all the protocols currently implemented in, say, Linux, and chances are some devices will require an extremely reduced subset of the network stack, especially at the lower layers and with respect to kernel interaction.

I am not a verification expert, but verifying a very reduced subset of well-defined specs seems at least feasible -- how would we verify something open-ended? Those pesky little theorems would become much more general and would come in even greater numbers.

Verifying a generic OS, good for all devices and all applications, if at all doable from a theoretical standpoint, looks like too much work for no particular profit for the verifier (very similar to the validation of the Linux kernel, which is passed on to the distro builders).

I wouldn't be as pessimistic either about the hardware vendors. After all, (some of them) already use formal verification in (some of) the silicon design, and verifying closer to the silicon seems simpler -- the closer to the metal the less genericity (assuming the verified silicon underneath).


Given that there are not only software bugs but hardware ones too, I wonder how secure it would be. I personally hate their software, but still, it would be nice to know.

P.S. Security without open-sourcing is impossible. I don't know about other countries, but here in Russia some people have a different point of view.

Some people believe that “opensource is insecure by design, because everyone can see the code”. Probably Kaspersky holds the same view.


Kaspersky is too smart to hold such a view, but their target markets might require different marketing.


> Security without open-sourcing is impossible

is just as documented an assertion as

> opensource is insecure by design

The two are completely orthogonal.


Those were two different opinions. First is mine, the second is my interviewer's.


Somehow this reminds me of John Draper's CrunchBox. Of course, it ran OpenBSD, an open source OS.


>"All the popular operating systems aren’t designed with security in mind"

I really disagree.


Anybody know which microkernel they are using? I was not able to find it


I would stay away from every snake oil product this KGB agent sells.


Microsoft is part of the Linux Foundation now, and you're telling us you have a powerful and secure CLOSED source OS!!! ... I think open source has already won the war, and you're about 14 years late...



Russia. Ha. No my friend. Kaspersky. Ha. No my friend. Closed source. Ha no, and you ain't ma friend.


Please stop posting unsubstantive comments.


Happy Pocky Day! (11-11)


Alas, the allusion was lost on some.


The marketing makes it sound a bit like seL4. https://sel4.systems/


Should probably be named "Unbreakable Linux"... oh wait, that name is already taken by Oracle.



