Here's a simple scenario for which I don't see any mitigation.
A site exists, let's say "search.com", and having your searches leaked is bad. A victim opens up a new browser (no HSTS from existing connections) and goes to search.com. An attacker MITMs this HTTP request, injects their own session cookies, and then redirects to https://search.com. At this point, let's assume full-domain HSTS kicks in and no further HTTP MITM is possible.
Now the victim is on https://search.com, lock icon and cert are a-OK. But the user is signed in as attacker. The user might not notice this, and makes searches, which are now available to the attacker, in the attacker's history.
Problem: Fix this, without overly relying on the victim noticing they are signed in as the attacker. After all, they just opened search.com for the first time and aren't expecting to be signed in (though they are wary of HTTPS and do notice the green URL bar).
Perhaps it is slightly contrived and the answer really is "Well, always look on a site to see if you're logged in as someone else". But otherwise it seems rather difficult to fix, outside of HSTS preload. (And this ignores all the other issues noted in the paper.)
Unfortunately no. An attacker can inject a signed cookie just as easily as an unsigned one. A signed cookie can prove integrity, that its data hasn't been tampered with; it can't prove that it is coming from the browser it was issued to.
That's a main point of the paper: taking the Attacker's legitimate session and shoving it into the Victim's browser, then being able to spy on the Victim even when the Victim is on HTTPS. Apps aren't handling this case well, as in the example of being signed into GMail as the Victim but being shown the Attacker's chat widget.
Exactly. A better, or at least more general, mitigation than enabling HSTS (though that's a good idea anyway) is to not design your web application in such a way that a modified cookie in the client creates a vulnerability. Since cookies are stored in the client, they are always going to be susceptible to malware on the user's machine. So trusting that the contents of a cookie were authored by the service receiving them is a bad idea in general. Cookies should be stored along with some additional information that verifies their authorship.
A relatively simple way to accomplish this is to have your application include an HMAC in the cookie contents and verify it whenever the cookie is received. E.g., if you are storing $session_id in a cookie, change your cookie contents to be "$session_id:$hmac_of_session_id", and verify the HMAC every time a cookie is presented.
Now a user, or malware, or a MITM, is not in a position to take over or modify a different user's session simply by altering the cookie, since they will not be able to produce a valid HMAC (the key is never shared with the user).
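A minimal sketch of that scheme in Python, assuming the "$session_id:$hmac_of_session_id" layout described above (the function names and the secret are placeholders; in a real deployment the key comes from secure configuration):

```python
import hashlib
import hmac

# Hypothetical server-side secret; never exposed to clients.
SECRET_KEY = b"server-side-secret"

def sign_session(session_id):
    """Build the cookie value "$session_id:$hmac_of_session_id"."""
    mac = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}:{mac}"

def verify_session(cookie_value):
    """Return the session ID if the HMAC verifies, else None."""
    session_id, _, mac = cookie_value.rpartition(":")
    if not session_id:
        return None
    expected = hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return session_id if hmac.compare_digest(mac, expected) else None
```

Any cookie whose MAC doesn't verify is simply treated as absent, so a tampered session ID never reaches the application.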
If even storing the key in your web frontends is too risky, you could use RSA or DSA signatures, only store the public key in the web frontend that verifies cookies, and store the private key in a more hardened cookie-signing service that isn't directly exposed to external networks. This service can be invoked when new sessions are created or upon user login, if applicable.
On top of this, if the client supports ChannelID, you should include the ChannelID in the message that is HMAC'd, so that stolen cookies cannot be reused on other machines.
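A sketch of that binding, assuming the server can read the client's ChannelID public key out of the TLS layer (the identifiers here are hypothetical; the point is just that the ChannelID goes into the MAC'd message):

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # hypothetical server-side key

def sign_bound_cookie(session_id, channel_id):
    # Bind the MAC to the TLS ChannelID so the cookie only verifies
    # on the machine holding the corresponding private key.
    msg = f"{session_id}|{channel_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify_bound_cookie(session_id, presented_channel_id, mac):
    expected = sign_bound_cookie(session_id, presented_channel_id)
    return hmac.compare_digest(mac, expected)
```

A cookie stolen from the victim's machine fails verification when replayed over any other TLS channel, because the attacker's ChannelID produces a different expected MAC.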
> Now a user, or malware, or a MITM, is not in the position to take over
I fail to see how this fixes this issue. I can just set my cookie to $their_session_id:$hmac_of_their_session_id, or I can set their cookie to $my_session_id:$hmac_of_my_session_id
Sure, I can't modify signed cookies. But I'm still in a position to take over their session.
> I can just set my cookie to $their_session_id:$hmac_of_their_session_id
If you can steal somebody else's cookies (which are not Channel-bound) then that's true. If you can only steal or predict somebody else's session IDs, the HMAC provides protection.
It's not atypical for session IDs to be simple counters that get incremented for each new session. If your session ID is 100042, it's a pretty good bet that 100041 and 100043 are valid session IDs as well, and without HMAC, a user could take over these sessions trivially.
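To make that concrete, a quick sketch (the counter values and the secret are illustrative):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical key, held only by the server

def mac_for(session_id):
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

# Without a MAC, guessing a neighbouring counter value is trivial:
my_session = "100042"
guess = str(int(my_session) - 1)  # "100041", very likely a valid session too

# With a MAC, the guess is useless unless the attacker can also produce
# mac_for(guess), which requires SECRET_KEY; a blind forgery won't match.
forged_mac = secrets.token_hex(32)
assert not hmac.compare_digest(forged_mac, mac_for(guess))
```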
An even better mitigation against cookie theft, which I also mention, is TLS ChannelID.
ChannelID creates a unique private/public keypair for each new TLS connection, and sends the public part along in the TLS handshake. Then, when you resume sessions from the same machine, you can prove that you have the private part and the server can accept your existing cookies. With this approach, cookies are no longer bearer tokens and stealing cookies becomes worthless.
This can be hardened even against local malware running as the same principal as the user doing the browsing if the browser's ChannelID implementation generates and stores the private key inside a TPM or HSM.
> If you can steal somebody else's cookies (which are not Channel-bound) then that's true. If you can only steal or predict somebody else's session IDs, the HMAC provides protection.
Session fixation. You don't need to steal any cookies. The attacker can plant his own session ID cookie in the victim's browser using the exploit from the OP. Using signed cookies doesn't change this attack at all.
Presumably this could be done transparently by the web server? Incoming cookies could be validated and then the underlying cookie value passed on to the app. Meanwhile, cookies set by the app get rewritten by the web server to include the signature. This would mean that no server-side code would need to be changed in order to support the cookie signing.