
> and why is it on by default?

Because Cortana would be useless without it and that's a big user-facing feature of Windows 10.



Something like Cortana could have been built to work locally, using your own resources. But it wasn't. This was an opportunity to say "we're not like Siri and Google Now, we respect your privacy", but instead they built something just like them.


I imagine the resource costs would be too high to really make that worthwhile. The choice is probably between bad privacy settings, a bad personal assistant, or bad battery life and disk usage. Microsoft chose the option that would likely upset the smallest group of people.


I don't buy it. My Windows 10 machines will be Ivy Bridge, Broadwell, and soon Skylake-powered desktop processors running with gobs of excess computing capacity. Voice recognition should be feasible with this hardware, as it has been in the past with Windows 7 and Windows 8's often-ignored voice transcription feature. Furthermore, even if the performance were slightly worse, I would gladly sacrifice some performance for local execution with local data.

Now, I expect that such a local agent would need to have quite a bit of fine-grained control to satisfy privacy concerns (e.g., do you agree to allow me to send your query about films in your zip code to the MSN Movies site to get showtimes?) But I feel the actual processing of the day-to-day personal assistant features is not only eminently feasible on my desktop, but most likely also on my Surface or laptop.
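A minimal sketch of what that fine-grained consent gating might look like, in Python with entirely hypothetical names: local intents are handled on-device, and anything that would send a query to an external service must be approved first, with the decision remembered per service.

```python
# Hypothetical sketch of a consent-gated local assistant.
# Local intents are handled on-device; anything that would send
# data to an external service must be explicitly approved first.

def ask_user(prompt):
    # Stand-in for a real consent dialog.
    return input(prompt + " [y/N] ").strip().lower() == "y"

class LocalAssistant:
    def __init__(self, ask=ask_user):
        self.ask = ask
        self.granted = {}          # remembered per-service consents

    def query_external(self, service, payload):
        if service not in self.granted:
            self.granted[service] = self.ask(
                f"Send '{payload}' to {service}?")
        if not self.granted[service]:
            return None            # consent denied: nothing leaves the machine
        return f"<response from {service}>"   # real network call would go here

    def handle(self, intent, payload):
        if intent == "set_reminder":
            return f"reminder set locally: {payload}"   # purely local
        if intent == "movie_showtimes":
            return self.query_external("MSN Movies", payload)
        return "unknown intent"
```

With a consent callback that always grants, the showtimes query would reach the external service; with one that always denies, nothing ever leaves the machine, while purely local intents keep working either way.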

The cloud is pernicious and voracious, its dominion grows quickly enough without needlessly exaggerating the necessity of offloading computation like this. Local computing devices—especially those that conventionally run Windows (desktop PCs and laptops)—are extremely capable.

Cortana is a cloud agent, but not because it requires cloud-scale processing power. The supposed deficiency of local processing is just a convenient justification for why it doesn't run locally.

But then, I am a strong advocate of personal compute servers and mainstreaming secure private networks. So I am obviously fringe in today's culture that embraces the centralized cloud.


I agree with you on everything except the expected performance of your desktop - you're underestimating the ever-growing bloat. Faster hardware is just an excuse for businesses to include more useless shi^H^H^Hvalue-added features, and a way to speed up delivery by caring about performance even less.

WRT the cloud, we've already reached the point of absurdity with the new generation of Internet-connected hardware. So many useless webapps (er, "value-added cloud analytics platforms"), and so many devices sitting centimeters from each other but communicating all the way around the world. There is absolutely no engineering reason for it to look this way - it's all just attempts to milk users by making them depend on cloud services.


> processors running with gobs of excess computing capacity

You're right about the CPU. However, it's possible that good voice recognition also requires gigabytes of data, which wouldn't work so well for a tablet. Or maybe the data center uses some custom hardware (like DSP chips)? I don't know, I'm just playing devil's advocate.

I do agree with your sentiments. I'm not about to opt in to this garbage. I came of age in the era of the mainframe and I despised the lack of personal control. I won't willingly return to that. Today's cloud is just yesterday's mainframes and time-sharing by another name.


You know they sell things like the Surface 3 using an Intel i3 processor, and still sell Windows tablets with Atom processors, right? Just because you have a 12-core i7 with 320GB of RAM doesn't mean every Windows machine does.


Well, like I said, Windows 7 and 8 had voice transcription support built-in and it worked well on my old computers from 2009, with plenty of CPU capacity to spare. I expect a modern i3 would probably match my desktop i7 from '09.


It actually does run locally, with access to APIs that you authorize. Just saying...


Interesting. Is it easy to "firewall" Cortana so that it does local voice processing and connects only to external services you authorize?


You can actually customize a large part of it and turn all of it off.

You can't firewall it off, but you can learn how it actually works and just turn it off.


> turn it off

For now. This crap is going to get a lot harder to avoid when the Intel SGX instructions are widely deployed and it becomes possible to extend the lockdown from SecureBoot to the kernel and kernel-authorized apps.

I suggest fighting it now, while it is still just an annoyance.


But the entire point of the service was to use data mining techniques so that you could use natural language directives to say "add a reminder to my team's calendar to update some presentation in O365, etc"...

Maybe you don't find it that useful, but I think that a lot of people would. It will, in a future release, be genuinely useful. It's getting there.


All of which could be done locally. There's certainly no shortage of processing power available to do so. All of those services have APIs.


That's actually how it works. You give it permission to use your O365 account. You give it permission to use your location either at setup or in the config settings at a later date.

A whole host of Cortana's functionality is local and interacts with online services via APIs that you authorize.

I don't think anything I say is going to change your mind, but you could check out some of the videos on Channel9 where they go into it in detail. Some of it's pretty good, and if you use headphones you can't hear your co-workers talk about stuff that makes you want to slap someone.


The claim that started this thread was that Cortana needed this data to be supplied to Microsoft, as controlled by the privacy settings mentioned in the linked article. If Cortana or similar services don't actually need this data, great; then they shouldn't ask for it or need to have privacy settings that allow it to be sent to Microsoft.


Josh, maybe you don't appreciate how difficult speech recognition is now that it comes as standard in your smartphone, but they use Google/Apple (delete as appropriate) servers for a reason. There's a reason people were amazed at the response time of Cortana - local speech recognition that doesn't hog the processor is a big deal.

And connecting to O365 calendars offline? Is that not a stupid concept?


Last time I checked, and it was a few years ago, analyzing voice locally was much faster than what phones do today - because, well, mobile networks have latency. The round trip to the cloud and back can easily take a second by itself.
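Some back-of-the-envelope arithmetic makes the point; all the figures below are illustrative assumptions, not measurements:

```python
# Illustrative latency arithmetic for cloud vs. local speech recognition.
# Every number here is a made-up assumption for the sake of comparison.

def cloud_latency_ms(rtt_ms, server_processing_ms):
    # One request/response round trip plus server-side work.
    return rtt_ms + server_processing_ms

def local_latency_ms(local_processing_ms):
    # No network involved at all.
    return local_processing_ms

# Hypothetical figures: 300 ms mobile round trip, 50 ms of processing
# on a beefy server, 200 ms of processing on a local CPU.
cloud = cloud_latency_ms(rtt_ms=300, server_processing_ms=50)   # 350 ms
local = local_latency_ms(local_processing_ms=200)               # 200 ms
```

With those made-up numbers the local path still wins, 200 ms vs. 350 ms, even though the server itself transcribes four times faster; the network round trip dominates.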


I'm well aware of how phones handle speech recognition; there are reasons they do so via services that have little to do with the computational difficulty of speech recognition. It's not by any means necessary to upload raw voice data to a server and process it there, especially if we're talking about full computers rather than just phones.

> And connecting to O365 calendars offline? Is that not a stupid concept?

I said "local", not "offline". Though in any case, you should likely have a locally synced cache of your calendar for efficiency and the ability to read it offline. Web apps are quite capable of working while offline.
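The cache-first pattern described here can be sketched in a few lines of Python (all names hypothetical): reads always come from a locally synced copy, offline edits are queued, and the network is only touched to refresh that copy.

```python
# Hypothetical cache-first calendar client: reads come from a locally
# synced copy; the network is only touched to refresh that copy.

class CachedCalendar:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote   # callable that hits the service
        self.cache = []                    # last-known events
        self.dirty = []                    # offline edits awaiting sync

    def events(self):
        return self.cache                  # works with no connectivity

    def add_event(self, event, online):
        if online:
            self.cache = self.fetch_remote([event])
        else:
            self.cache.append(event)       # apply locally, sync later
            self.dirty.append(event)

    def sync(self, online):
        if online:
            # Push queued offline edits, then refresh the local copy.
            self.cache = self.fetch_remote(self.dirty)
            self.dirty = []
        return self.cache
```

Offline reads and edits keep working against the cache; the only user-visible caveat, as noted above, is telling the user that an offline edit hasn't reached other devices yet (the `dirty` queue here).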


> we're talking about full computers

Worth mentioning: Windows 10 is not just for "full computers."


I'm aware, but the line is becoming increasingly blurred, and there's enough power on even the average phone to do speech recognition.


> And connecting to O365 calendars offline? Is that not a stupid concept?

Did we enter a new era where using your calendar offline is considered a special case? I would assume few people actively modify the same calendar, and it's pretty easy to tell a user that a calendar modified offline isn't yet synchronized to their other devices; is there really a need to make calendars online-first?


[deleted]


You misread their comment.


Ah, I did. Thanks for the heads up instead of just downvoting!


I know. I think it's actually pretty cool. (Though it doesn't seem to work well with some built-in mics.)

I really like the direction they are headed.


How about a box that shows up the first time you try to invoke Cortana, asking if you want to turn this feature on?
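That ask-on-first-use pattern can be sketched in a few lines (Python, hypothetical names): the opt-in prompt appears only on the first invocation, and the decision is remembered for every invocation after that.

```python
# Sketch of an ask-on-first-use gate (hypothetical names): the consent
# prompt appears only once, and the decision is remembered after that.

class FeatureGate:
    def __init__(self, prompt_user):
        self.prompt_user = prompt_user   # shows the opt-in dialog
        self.decision = None             # None = never asked yet

    def invoke(self, action):
        if self.decision is None:
            self.decision = self.prompt_user(
                "Turn on this feature? It sends data to the service.")
        if not self.decision:
            return "feature disabled"
        return action()                  # run only after an explicit yes
```

A real implementation would persist the decision and offer a settings toggle to reverse it, but the shape is the same: no data flows until the user has answered the box once.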


There are dialogs and UI hints that come up when the service is first accessed. Is it enough to placate someone who is seriously concerned with online privacy? Probably not. But it meets the minimum requirements to not be too sneaky.


I made a similar suggestion in a somewhat related topic concerning browsers and was told this is bad UX so it shouldn't be done. Informing people of what's going on and giving them power over their choices is bad UX. Somehow.


UX people can't seem to agree on much of anything. In environments where permissions must be explicitly granted (like iOS), I've seen articles arguing both ways: "ask for everything right at the start" and "ask immediately before use".


> UX people can't seem to agree on much of anything.

It's almost as if "UX people" isn't referring to "UX person." Go to Stack Overflow and the vast majority of questions have multiple answers, as if "programmer people" can't agree on much of anything.


If you have a local account, you have to sign into a Microsoft account (converting from the local account) to get Cortana.



