Contrary to public claims, Apple can read your iMessages (arstechnica.com)
144 points by shawndumas on Oct 17, 2013 | hide | past | favorite | 95 comments


This seems a bit disingenuous to me. If I understand it correctly, they are saying that the messages are encrypted using an RSA key unique to the sender, and that this is not secure because Apple could, if they wanted to or were ordered to, replace the certificates with ones that were not secret:

> Since Apple controls the entire infrastructure, there's nothing preventing company employees from swapping out the proper keys with ones controlled by Apple or other parties.

What is misleading here is that (if I understand it correctly) the cryptography is solid, it is the implementation being owned by Apple that raises the doubt. But this is NOT a statement about the cryptography used on the iMessages, it is a statement about the fact that Apple effectively has "root access" to their phones.

Imagine that you had a PERFECT and UNBREAKABLE code system. Perhaps it interfaces with angels who alter the laws of the universe to prevent anyone but the intended recipient from reading the messages. Such a system would still have the same vulnerability. All Apple needs to do is to alter the phone so that when you click on the icon to launch this perfectly secure messaging app it instead launches one that has the same UI but which actually sends a duplicate to the NSA (or whoever it is you don't want listening in).

ANY system where code can be deployed at will and the owner of the device is not in full control of the device will have this same level of vulnerability. Saying that a system is insecure because you could change the system doesn't tell you anything interesting about the system.


>(if I understand it correctly) the cryptography is solid

The issue is that iMessage outright ignores a long understood challenge in the APPLICATION of cryptography, namely knowing when to trust that someone is REALLY giving you their public key rather than being man-in-the-middled.

While this is a thorny challenge there are well understood, if imperfect, ways of addressing it that have been around for decades. At minimum it works like ssh, where you get a warning when an unfamiliar new key is presented and decide whether to trust it or not. More sophisticated is the certificate infrastructure around TLS/SSL where some effort is made to bring semi-trusted third parties into play.
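The ssh-style approach (trust on first use, then warn on change) can be sketched in a few lines. This is a toy model, not real ssh: `known_keys` stands in for the known_hosts file and all names are illustrative.

```python
import hashlib

# Toy trust-on-first-use (TOFU) check, like ssh's known_hosts.
known_keys = {}

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:16]

def check_key(peer: str, public_key: bytes) -> bool:
    fp = fingerprint(public_key)
    if peer not in known_keys:
        # First contact: warn, then pin the key for future sessions.
        print(f"WARNING: first contact with {peer}, pinning key {fp}")
        known_keys[peer] = fp
        return True
    if known_keys[peer] != fp:
        # Pinned key changed: possible man-in-the-middle.
        print(f"DANGER: key for {peer} has changed! Refusing to connect.")
        return False
    return True
```

Note that a user is still exposed on the very first contact, which is exactly the imperfection described above.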

These are all flawed solutions, the problem remains essentially unsolved. But any truly secure system based around public keys has some mechanism that addresses this challenge.

The issue with iMessage is that 1. Apple did not attempt to address the key exchange problem, which would maybe be fair enough on an "easy" consumer product with weak security claims but then 2. Apple made strong security claims very publicly and 3. even said a counterfactual, that it had no capability to intercept messages.

THAT is what is disingenuous: Apple implements public key cryptography, which since its inception and at its very heart raises difficult challenges around key exchange; Apple chooses to ignore these challenges; Apple then claims strong public key security. Either attempt to address the challenges like everyone else, or don't make the security claim, you can't do both.


I disagree.

The flawed systems are SMS and predecessor systems like alphanumeric pagers, where your messages are archived indefinitely in a database by the carrier. Remember the Wikileaks release of all of the pager traffic in lower Manhattan on 9/11?

In my mind, the NSA/PRISM stuff is a meta-problem that is just something that is hanging out in the background. I work in enterprise IT, I'm not a dissident, drug dealer or whistleblower, so I'm more worried about incompetent carriers and other hostile parties than the NSA.

So iMessage gives me encrypted message content that may be readable retrospectively by the NSA, and is almost certainly "tappable" by normal law enforcement.

The reality is, that's about as good as you are going to get outside of a defined community of interest. My employer has highly secure voice and text solutions available for key personnel, for instance. Perfectly secure communications won't be available to the masses, because the operators of those systems face liability.


This ignores the context of the system, which is everything in this controversy.

Apple did not say, "Here's iMessage, it is much more secure than SMS."

Apple said (in essence), "Here's iMessage, we couldn't tap it if we tried."

THAT is false, and all of the nice things you point out about iMessage aren't in dispute and don't have anything to do with the glaring inaccuracy of Apple's claim.


They didn't say that. Previously they said: "no one but the sender and receiver can see or read [FaceTime calls and iMessages]. Apple cannot decrypt that data." Now, they are saying "iMessage is not architected to allow Apple to read messages".

The key message is: Apple cannot decrypt or read your data. That's all they said.

They didn't say "Apple will not lawfully provide iMessage encryption keys to the government" nor "Apple will not provide the secret key for the iPhone CA to the government upon request/demand" nor "Apple will not provide iMessage data to the police."

When a secret court compels you to remain silent, what isn't said is critical.


What's the liability difference between the systems at work and the systems for the general public?

Strong crypto is, and should be, available to everyone.


The law.

Common carriers are required to have "tappable" systems if the system is deemed to replace phone conversations.


The biggest difference is ease of use. The more secure a system, the less user-friendly it tends to get, and that is why the general public doesn't get secret and top secret cryptographic products...


And so the challenge is similar to the TOR challenge, in a sense. If you want real security it requires a certain amount of inconvenience (such as having to use an OS and a device that you can verify is doing what you think it is, or using a protocol that slows down your already slow internet connection), and because inconvenience is not popular, this means using a tool that only people who are "afraid of being caught doing something" would want to use.

In a sense, you are an instant target for suspicion; you have created a "reasonable and probable cause for suspicion" by using such a device or software system.

I remember driving through my neighborhood in suburban Philadelphia and seeing a young man walking down the side of the road and being shocked by this act. Nobody walks in that neighborhood or along that road. Nobody bikes there. Frankly, it's ill advised because of the winding nature of the road and the absence of sidewalks or even a shoulder. I could feel myself become immediately suspicious of his presence and his actions.

All he was doing was going out for a walk. Frankly, I was the weird one.

At some point, as technology becomes embedded in our culture we expect people to use it and we look at those who do not as odd balls and are suspicious of them (think the Amish).

So if you are avoiding using the "latest and greatest" and opting to use a "secure system", people are going to look at you like you are crazy and be suspicious of your activities. This will probably drive you to keep it secret, which will only make it look more incriminating.

My point is: how do we make these things mainstream in such a way that ordinary people who aren't interested in the security aspects can and will use them, but those who want or need the added security can use them without garnering suspicion for doing so?


That's a really interesting problem.

1. Say I care about privacy. I decide that all the help I give people over the internet will be composed of a plain-text teaser and encrypted content. That's easy to imagine doing in private messages, and even in small groups. Maybe, using some technical trick, this could even work on publicly posted information (or at least slow the rate/chance of wiretapping). And in those cases I only reply to messages sent securely.

If enough people do this, soon enough there will be plenty of encryption.

2. Make it a habit to embed in many apps an encrypted background channel that always transmits and that, should the need arise, can be used to transmit info. "Hey, it's just a cool app, man." If enough such apps spread, suddenly everyone encrypts.

3. The same applies to anonymity.


I don't find it disingenuous. Per the article's title, it is refuting Apple's claim that it "cannot" access iMessages, and on that basis, I think it's entirely valid. Apple put out a statement earlier this year:

For example, conversations which take place over iMessage and FaceTime are protected by end-to-end encryption so no one but the sender and receiver can see or read them. Apple cannot decrypt that data. [1]

I didn't read the article as asserting that the encryption is broken or poorly designed, simply that if compelled, Apple could indeed eavesdrop on iMessages, which contradicts the above statement. The author explains this explicitly:

In fairness to Apple, most other commercial messaging systems are also vulnerable to man-in-the-middle or similar attacks mounted by insiders. The difference is that few if any of those other providers have issued public statements claiming the messages sent over their services can be read only by the sender and receiver.

[1] https://www.apple.com/apples-commitment-to-customer-privacy/


Even disregarding Apple's control over the software, the system depends on Apple as a trusted key distributor. This is the vulnerability which the article is concerned with, as far as I can tell.


Two corrections to your comment...

ANY system where code can be deployed at will and the owner of the device is not in full control of the device will have this same level of vulnerability. Saying that a system is insecure because you could change the system doesn't tell you anything interesting about the system.

1. The NSA vacuums up network traffic. Meaning they use man-in-the-middle attacks. Sometimes they do force businesses to deploy a malicious program, as you describe, to gather a user's password. This is called a pen register, or trap and trace device: https://ssd.eff.org/wire/govt/pen-registers However, this is the exception, not the norm.

Defending against man-in-the-middle attacks is a bare minimum requirement for any system that purports to be "secure". Defending against Pen Registers is an unsolved problem, but it's unrelated to the attack presented here.

If I understand it correctly, they are saying that the messages are encrypted using an RSA key unique to the sender, and that this is not secure because Apple could, if they wanted to or were ordered to, replace the certificates with ones that were not secret. [...] What is misleading here is that (if I understand it correctly) the cryptography is solid, it is the implementation being owned by Apple that raises the doubt. But this is NOT a statement about the cryptography used on the iMessages, it is a statement about the fact that Apple effectively has "root access" to their phones.

2. From http://blog.quarkslab.com/static/resources/2013-10-17_imessa... page 36:

- All iMessages are encrypted and signed using asymmetric cryptography

- Thus, there has to be a key directory

- iMessage client retrieves recipient’s public keys by querying Apple’s ESS server

It's that last point which is the most important. When someone sends you an iMessage, their device queries an Apple-controlled server for the recipient's public key. The server returns a "Public Keys Buffer". The buffer contains an RSA public key (1280-bit) to encrypt messages for the remote device.

Apple controls that server. They can send you whatever public key they want.

In other words, while I wouldn't say it's "trivial," it's at least "realistic" for the government to be able to order Apple to serve up an MITM'd public key. Combined with a copy of the network traffic, they can then decrypt all iMessages. And the government isn't the only adversary to worry about, either.

The point of cryptography is that you should be able to trust who you choose to trust. Apple is saying we can trust their claim of not being able to decrypt iMessages. But that claim rests on Apple serving up the proper public keys through Apple-controlled servers. This is called the key exchange problem, and it's why their iMessage claim can be accurately described as "completely bogus." They don't even attempt to address key exchange security.
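The attack boils down to the key directory answering with a different key. A toy model (all names invented; this is not Apple's actual protocol):

```python
from dataclasses import dataclass, field

# Toy model of the key-directory MITM described above.
@dataclass
class KeyServer:
    directory: dict                               # user -> genuine public key
    override: dict = field(default_factory=dict)  # MITM substitutions

    def lookup(self, user: str) -> str:
        # A compromised directory can return any key it likes; the
        # sender has no independent way to tell the difference.
        return self.override.get(user, self.directory[user])

honest = KeyServer({"bob": "bob_pub"})
compromised = KeyServer({"bob": "bob_pub"}, {"bob": "attacker_pub"})

assert honest.lookup("bob") == "bob_pub"
assert compromised.lookup("bob") == "attacker_pub"  # undetectable to the sender
```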

The takeaway is: iMessage shouldn't be trusted.


>Apple controls that server. They can send you whatever public key they want.

Isn't that also true (minus Apple) for the PGP keyservers?


Yes. But with PGP you are supposed to check the key fingerprint before trusting it, either directly offline or by someone you have checked offline signing it.
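A rough sketch of what that out-of-band check looks like. The hash and formatting here are illustrative stand-ins, not OpenPGP's actual V4 fingerprint algorithm (which hashes a specific key-packet encoding):

```python
import hashlib

def fingerprint(key_material: bytes) -> str:
    # PGP tools display the key's hash in short groups for easy
    # verbal/visual comparison; SHA-1 and 4-char groups are a stand-in.
    digest = hashlib.sha1(key_material).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# The user compares this string, obtained out of band (in person, over the
# phone, or via a signature from someone already trusted), against the
# fingerprint of whatever key the keyserver handed them:
print(fingerprint(b"key material fetched from the keyserver"))
```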


That's not really going to fly with a consumer level product like iMessage though.


Why? Just as an option for extra security?

And if you want to make it extra nice, you could add an automatic/semi-automatic back channel through Bluetooth. That could create interesting design options.


This says that, yes, Apple can read your iMessages, but it seems to me that the protocol is pretty much designed to be as secure as one can make it while still being usable. Would you really expect people who used iMessage to call their friends and have them read hex fingerprints over the phone?

As anti-Apple as I am, I think we have to give them this one, especially when the alternative (i.e. making iMessage completely unencrypted) would have been much, much easier.


As anti-Apple as I am, I think we have to give them this one, especially when the alternative (i.e. making iMessage completely unencrypted) would have been much, much easier.

I would be prepared to give them this one, were it not for the fact that they claimed the opposite. Make all the security vs usability trade offs you want, just don't lie about them!


They may very well not be able to read your messages after the initial key exchange.


This is known to be false; Apple will restore your iMessages if e.g. you replace your phone.


These come from your iDevice's encrypted backup in iCloud, not from iMessage servers. I guess if you don't use iCloud and back up to your local iTunes on your Mac/PC, your messages are a bit more "private".


The backups stored on iCloud are encrypted with your Apple ID password.


Your password can be reset and you don't lose your backups, so either Apple has your password in cleartext (so the encrypted backups can be viewed freely) or they have full access.


This seems to only be true if you are using iCloud backup. It is easy to not back up to iCloud if you choose.


Well - to be fair, BBM over BES is as secure as it gets. The only people who can read those messages are sender and recipient, and the owner of the org keys. Neither cell provider nor blackberry can decrypt them.


I think it's established that there are backdoors in BBM. E.g., http://www.zdnet.com/in/indias-blackberry-monitoring-system-...


Leaders from Saudi Arabia, India, and probably the UAE, Lebanon, Algeria and others all disagree with you on this one.


No contradiction. The telcos are the owners of the org keys.

BBM is only secure if you run your own BES.


Incorrect. They have access to consumer level, same as any other nation that permits BBM.

There is no access to BES - it's a private key owned by the companies.


How does the key exchange happen between the two parties? With a party in the middle that has an already existing relationship with both parties, no? That is where you sit and exchange fake data with both endpoints.


With enterprise server (BES) presumably the company which owns the phone acts as the certificate authority. So your employer could pull off a MITM attack, but then they already own the hardware anyway.


Yeah like Apple makes your iPhone hardware anyway.


Except that this would have to be done at each individual company...


As far as I know, the Indian government gave an ultimatum to BlackBerry: place servers in India and provide a mechanism to read messages, or leave the country. BlackBerry caved in and was forced to do so. So even BBM isn't as secure. Anytime you trust a third party with your data, there is always that element of risk involved.


That was for BIS - consumer-level BBM, essentially, and the same access is given (if not advertised) to governments anywhere.

Nobody has access to BBM over BES except each company running BES.


Note that BES communications are only secure within a community of interest. (Assuming whoever is running your BES isn't logging and transmitting your texts.) Once you talk to someone outside of your BES, RIM controls the key.


How does it exchange the keys? If you don't verify fingerprints, there's always the possibility of a MITM.


You can arrange a chat protocol, I think, where MITM isn't possible without being easily detectable in a low latency environment. The idea would be that when you see the "..." of the other person typing, you're receiving an encrypted copy of the letters of their message. When the message comes through it contains information needed to decrypt the message that was received over many seconds. You could always be chatting with an imposter, but a middle man would have to wait for the decrypted message before sending out the re-encrypted keys, which would add a big awkward pause before every response even started being typed.


Hmm, interesting idea, thanks.

EDIT: I found the flaw: You have to wait until you have a full block before you can encrypt. If you send that, it's immediately decryptable. You can't send half a block, because you don't know the rest. Besides, the MITM can just pretend you haven't been typing until they receive the full message, and then start sending it.


Another interesting idea against MitM attacks on key exchange is implemented in ZRTP: verbal cross-check of two code words displayed on both terminals (during the first call) and key continuity (during subsequent calls).

http://en.wikipedia.org/wiki/ZRTP#Authentication

Cross-check of code words is essentially a humanization of RSA key fingerprint cross-check. Only true geeks will speak hex digits over the phone, but normal people won't hesitate to tell a few words or small quizzes/stories/whatever (if they need added security for this call).
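A rough sketch of the idea. The word list and bit counts here are illustrative stand-ins; real ZRTP uses the PGP word lists and a hash-commitment scheme to keep the SAS short:

```python
import hashlib

# Stand-in word list; ZRTP actually uses the PGP even/odd word lists.
WORDS = ["apple", "banjo", "cloud", "delta", "eagle", "flute", "grape",
         "hotel", "igloo", "jumbo", "karma", "lemon", "mango", "noble",
         "ocean", "piano"]

def sas_words(shared_secret: bytes) -> str:
    # Hash the negotiated secret and map a few of its bits to
    # easy-to-say words.
    h = hashlib.sha256(shared_secret).digest()
    return " ".join(WORDS[b % len(WORDS)] for b in h[:2])
```

Both ends compute this from their copy of the negotiated secret and read it aloud. A MITM who negotiated different secrets on each leg of the call will almost certainly produce mismatching words.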


The brilliant part of ZRTP isn't the cross check, it's using hash commitment to shorten the SAS to only 16 bits, from 160+.

By the way, does anyone know how a birthday attack would be possible on the verification without hash commitment? It seems to me that the attacker has to generate a key whose fingerprint matches the fingerprint of the key they agreed on with the second recipient. However, this means that they have to generate a key to match a specific one, rather than multiple keys, hence no birthday attack is possible. What am I missing?


I don't understand the first flaw you found. You can encrypt single characters + salt.

You are correct that the MITM could wait until it's seen the whole block and pretend your partner hasn't started typing, but that results in big pauses with no typing after every question, which is detectable.


If you encrypt a single character, you can also decrypt that single character. What would that gain you?

Also, what does the salt have to do with the encryption?


Forget the salt. I thought you were possibly concerned with encrypting one character being a weakness.

As a recipient of my message you receive my message encrypted one character at a time. When I press enter you receive the key to decrypt all of these individual characters and can string them together into my message. I pick a new key and start sending the next message one character at a time. I'm thinking that any MITM attack will introduce noticeable latency into this scheme.


How are you going to send the key over the wire without anyone intercepting it?


You have a session key that encrypts the entire channel. A MITM attack would require negotiating two session keys: one for you and one for your human chat partner.

Next you have a message key for each message that is used to encrypt and send each character and is sent at the end of each message (encrypted with the session key). A key detail I forgot to mention is that message keys should be tied to session keys so that the MITM can't just forward on the characters encrypted as they arrive.
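A toy version of this scheme (all names invented; a simple XOR stream stands in for a real cipher):

```python
import hashlib, os

def derive_msg_key(session_key: bytes, counter: int) -> bytes:
    # Tie each message key to the session key, as described above, so a
    # MITM can't just forward the per-character ciphertext as it arrives.
    return hashlib.sha256(session_key + counter.to_bytes(4, "big")).digest()

session_key = os.urandom(32)
msg_key = derive_msg_key(session_key, 0)

# Sender streams one encrypted byte per keystroke...
plaintext = b"hello"
stream = [msg_key[i % len(msg_key)] ^ b for i, b in enumerate(plaintext)]

# ...and reveals msg_key (wrapped under the session key) only on "enter".
# The recipient can then decrypt the bytes that arrived while typing:
recovered = bytes(msg_key[i % len(msg_key)] ^ c for i, c in enumerate(stream))
assert recovered == b"hello"
```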


Ah, I see what you mean now. Then a MITM attack would basically double your typing time. Given that messing around/thinking about what you want to say/etc takes a variable (and pretty large) amount of time, I'm not sure that just doubling your typing time would be detectable.


FYI it's not just increased typing time. It's increased pauses with no typing.


I meant that the added pause will be equal to your typing time, so if you wait 10 seconds and type for 1, the other person will see an 11-second pause and 1 second of typing.


(Replying to your new message)

Yes, it would be something you'd have to know to look for. The good thing is that under suitable conditions it would let you know that you weren't being MITMed.

A more practical system would probably be to just display a session ID in each chat window that is tied to the session key and is supposed to match between chat partners -- a fact that can be checked later in person. That would prevent MITM attacks from being commonplace without everyone knowing about it.


That's what every system currently does, you're supposed to verify the fingerprints before talking to the other person. Phone systems are easier because you can verify that they're who they say from their voice (when it's reading you the fingerprint).


Does iMessage do this?


I'm not sure, I've never used it. I was referring to OTR, Silent Circle, etc.


This reads more like "Apple can't read your messages as-is, but controls all the iMessage key-exchange infrastructure, and thus could sabotage it if it chose to".


So if I get this right, the vulnerability is that Apple could sit in between you and your friend, impersonating each to the other while relaying and reading your messages.

This seems about as vulnerable as the certificate authority/public key infrastructure system used for SSL, code signing, etc... We're always delegating to another party the responsibility of authenticating the person on the other side of our messages/web browser/signed software that we run. In the case of iMessage, Apple is responsible for authenticating your friend. In the SSL or signed code case, the certificate authorities are responsible. Seems that both Apple and CA companies might be subject to the same legal pressures to eavesdrop on people.


Not quite. A CA signs your public key. You never give anyone the private key, because... well, it's private.


You're not giving anyone your private key here, either.

stellar678 is basically correct - Apple, like a CA, is vouching for the authenticity of users' public keys. The only difference is superficial: a SSL CA signs a certificate and the validation is done without needing to contact the CA, whereas with iChat the public key is validated by virtue of coming from Apple. Just as a CA can sign a malicious certificate, Apple can send you a malicious public key.
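The contrast between the two trust models can be made concrete with a toy sketch. HMAC stands in for a real signature scheme here, and all names are invented:

```python
import hashlib, hmac

CA_SECRET = b"ca-signing-key"  # stand-in for a CA's private signing key

def ca_sign(pubkey: bytes) -> bytes:
    return hmac.new(CA_SECRET, pubkey, hashlib.sha256).digest()

def verify_offline(pubkey: bytes, sig: bytes) -> bool:
    # A signed cert can be checked later, without contacting the CA again.
    return hmac.compare_digest(ca_sign(pubkey), sig)

def directory_lookup(server: dict, user: str) -> bytes:
    # A live directory offers no signature to audit: you simply get
    # whatever key the server chooses to serve today.
    return server[user]

cert = ca_sign(b"alice_pub")
assert verify_offline(b"alice_pub", cert)
assert not verify_offline(b"mallory_pub", cert)
```

Either way, trust bottoms out in the entity doing the vouching, which is the parent's point.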


I wouldn't call that a superficial difference. Sending a malicious public key from a central server is easier than injecting a public key into a network stream. The former requires just a single entity to be compromised. The latter requires a compromise in the network in addition to a compromise in the CA.


Except in the iMessage scenario, Apple generates (and keeps) a copy of the private key so you can recover a lost device by signing into iCloud on a new phone.


That's not what these researchers found. They found that each device has a distinct private key, and that the sender of an IM actually sends a separate message to each device, encrypted with the appropriate key. Do you have a source for your claim?


While Apple boasts of "end-to-end encryption" it's pretty clear that Apple itself holds the key -- because if you boot up a brand new iOS device, you automatically get access to your old messages. That means that (a) Apple is storing those messages in the cloud and (b) it can decrypt them if it needs to. From http://www.techdirt.com/articles/20130405/01485922590/dea-ac...


Since Apple controls the entire infrastructure, there's nothing preventing company employees from swapping out the proper keys with ones controlled by Apple or other parties.

Well internal controls could prevent that but since we don't know we'll assume they don't exist, print the headline:

Tim Cook is reading your iMessages right now!


Well, being prevented by internal controls is outside the scope of the question. Apple had claimed that they were incapable of intercepting the messages, not that they had policies against it. And Apple is known to be subject to secret US government demands that come with a gag order.


They said that they couldn't read iMessages as it is. One would think it was pretty obvious that they could change the system to be able to read them.


What we know is that it seems possible, when looking at the system externally, for them to perform MITM attacks. What we don't know is if their system internally makes it possible for anyone inside apple to intercept and read those messages, with or without an order from the NSA. They could very well have told us the truth!


No, what we know is that it seems impossible for them to perform MITM attacks at the moment, but they appear to have the ability to rewrite the app later if they wanted to perform MITM attacks. At least that's how I understood it.


"being prevented by internal controls is outside the scope of the question"

/shrug, when Gabriel W. says no one at Duck Duck Go can read your search history he's making a claim based entirely on internal controls that DDG has established that make this impossible. Similarly, Apple didn't claim it would be impossible for them to ever <rewrite everything> and one day be able to read your iMessages.


[The researchers] went on to say there's no technical measure stopping Apple employees, working under a secret court order or otherwise, from performing the same kind of attack and making it completely transparent to the parties exchanging iMessages.

Since Apple controls the entire infrastructure, there's nothing preventing company employees from swapping out the proper keys with ones controlled by Apple or other parties.

We have evidence of a secret court order for Verizon to hand over the meta-data associated with phone calls. IANAL, but this seems quite different from ordering Apple to perform a MITM attack against a user.

Under what kind of court order would Apple have to remove or compromise end-to-end encryption from a target's device?


The same kind of court order that told Lavabit to hand over their SSL key, one where the person of interest was communicating over iMessage. It would probably be accompanied by an NSL which would then prevent Apple from talking about it.


AFAIK, the Lavabit case is not legally decided yet:

  http://www.volokh.com/2013/10/11/lavabit-challenges-contempt-order/
I imagine Apple would put up quite a strong legal fight if they were issued the same order, especially considering their public stance.

A NSL can force Apple to not disclose it, but they can't force them to publically lie about it. And lying about that would be ridiculously stupid.


You also misunderstand a bit what happened in Lavabit's case (easy to do, I know). Basically the government got a court order for one of Lavabit's users' email [1]. Lavabit responded that they couldn't decrypt the email because they don't have the key. The government responded: OK, give us your SSL key so we can decrypt any traffic we may or may not have between you and your users. They stalled but eventually complied [2]. The court further forced them to hand over the key in digital form, which they also did. And then they shut down to prevent the government's possession of that key from giving them new data from their users.

There wasn't a "case" here; Lavabit was compelled and risked fines, jail time, or both if they did not comply. If you get a legal court order you can challenge it in court, but when the court upholds the legality of the order (as it did here) you either comply or go to jail. Apple will comply with any legal court order (although they may object to it first, as Google has done according to court records).

[1] Presumed to be Snowden.

[2] Printed out in 4pt font (which I thought was brilliant btw)


> A NSL can force Apple to not disclose it, but they can't force them to publically lie about it. And lying about that would be ridiculously stupid.

I'm not sure they'd be put in a position to lie. It would have to be a very direct, specific question to _require_ a lie. Otherwise, dodgy language would suffice.


They didn't use dodgy language: "conversations which take place over iMessage and FaceTime are protected by end-to-end encryption so no one but the sender and receiver can see or read them. Apple cannot decrypt that data." [1].

1. https://www.apple.com/apples-commitment-to-customer-privacy/


And per the article, that statement can be true at the same time this statement can be true:

"Apple can intercept the creation of new conversations between users on iMessage or FaceTime if compelled to do so by a valid court order."

English, being as fungible as it is, leads to this sort of mess.


I meant using dodgy language in future statements. They may not have received an NSL yet, so one doesn't know if their strong language could potentially be tempered by an NSL. I don't think one can conclusively argue this point either way.


Suppose Apple made the broad, unequivocal claim before they got an iMessage-related NSL. While the NSL can't compel Apple to lie, would Apple choose to tone down their language afterwards?


A subpoena?

I'm not aware of the protocol, but it may be that Apple could only do this the first time someone sent a message, when keys were established. It might not, I'm just speculating, but it sounds reasonable.

If Apple wanted the protocol to be insecure, they'd have just sent messages in plaintext over TLS and be done with it. The fact that Apple actively designed a reasonably secure, encrypted protocol is evidence that they wanted to protect users against themselves too (to the point where it didn't impact usability).


If a subpoena were used for this purpose I'd be quite surprised. The Federal Rules of Civil Procedure only allow subpoenas to issue for the following:

1. Compel a witness to testify;

2. Compel someone to produce a document or "tangible thing";

3. Require someone to allow the inspection of premises.

Fed. R. Civ. Proc. Rule 45(a)(1)(A)(iii).

I can't imagine the Government would attempt to use subpoena power to force a business to take any other action, and if they tried to do so, as an attorney I would definitely appeal it (and probably prevail).

That's not to say there aren't other means at the Government's disposal, but subpoena is definitely not one of them.


The police will make this sort of request for subjects of criminal investigations. It's a specific process, governed by the courts and warrants -- I believe it's called a pen register.

Whenever a new technology comes out without lawful intercept, it gets embraced by criminals. Remember Nextel/Boost phones with direct connect? You might recall that there were 3 resellers on a block in the hood selling these things in the early 2000's. DirectConnect wasn't tappable for a few years, and the addressing scheme was tied to a phone, not a user. Those users later moved to BlackBerry.


Hasn't there been some discussion that law enforcement sends phones to Apple to decrypt messages, and the whole thing takes forever because Apple needs to apply some amount of brute force to it?


Does that mean they can view private pictures sent via iMessage?

I've wondered the same thing about Dropbox. If I were to open my dropbox at work, could my employer view private things in the dropbox? Even if I don't open the file, an icon will still load in the browser view.


If I were to open my dropbox at work, could my employer view private things in the dropbox?

If your employer has physical access or the admin password to your work computer (very likely) then anything you store or do on it is potentially compromised. (including passwords, keys, etc.) Same for work-issued phones etc. The security of the protocol is irrelevant in this case as you're trusting a compromised client.


Your employer could MITM all your SSL connections just by installing their own cert on the machine, so yeah.


Indeed. This is why Apple sending your AppleID and password in the clear inside the SSL connection to Apple is troubling. That makes it relatively easy for the right IT people to steal those credentials. Hopefully this will be fixed soon.


Article's author is Dan Goodin. I'd take everything he says with a quarry full of salt. This guy is symptomatic of all that is wrong with Ars these days.


Is there any commercially viable messaging protocol that is 100% defensible against any and all attacks?


I don't see how something like this could conceivably work for 2 parties that do not yet know each other. The problem of establishing identity/trust between strangers can't be solved in any 100% manner over any medium that can be tampered with.

If both of you rely on some intermediate authority for authentication (as HTTPS works), it's by definition not 100%, because the intermediary can, in theory, be compromised.

If the two parties know each other and can authenticate via some basic questions, that may work. Or you can just exchange symmetric AES keys in person, or public certs through another channel.


How about a physical trust-creation protocol? Something like bumping phones to create key pairs and exchanging them via Bluetooth. That might work fine in theory.


No, nothing is defensible against all attacks. OTR, Silent Phone/Text, (I'm forgetting some here), GPG/PGP, they're all good enough if you verify the fingerprints.


Did anyone ever figure out how the iMessage for Android developer got his implementation to work?...


saurik said this on Twitter[1]

> A 3rd-party iMessage app was just released on Android; I believe all data sent from/to Apple is resent to/from China for processing: beware.

[1]https://twitter.com/saurik/status/382387551144652800


Imagine a giant cluster of VMs running desktop iMessage. Imagine the android client is actually a VNC client connecting to one of the VMs and screen scraping it. Sprinkle with magic dust and UI polish.


I don't mind that Apple is able to do it... as long as it's not the Chinese government...


Most likely, the Chinese government can/does tell Apple to decrypt certain messages or stop doing business in China, similar to what the Indian government told RIM.



