Talk:Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet
- 1 Is this statement really correct?
- 2 Confusion About Encrypted Token Pattern
- 3 CSRF Prevention via Alternative HTTP Methods
- 4 Identifying cross-origin requests
- 5 Origin/Referrer Check doesn't Work When the URL is Entered into the Browser
- 6 The link-presenter host with regard to the Referer/Origin check
Is this statement really correct?
This means that while an attacker can send any value he wants with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie.
This seems incorrect. An attacker can modify cookies from a sibling domain. See https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf for examples. Thus the double-submit CSRF cookie should also be checked for some tie to the logged-in session so that it can't just be pushed in.
Jim July 27, 2014: What about "This means that while an attacker can send any value he wants with a malicious CSRF request, the attacker would be unable to modify or read the value stored in the cookie other than via cross site scripting. (And cross site scripting resistance is a requirement for good CSRF defense to begin with.)"
--David Ohsie (talk) 09:47, 30 July 2014 (CDT) I don't think that this is 100% accurate either. Cookies can be written via XSS in a sibling domain, so even if the application itself is not vulnerable to XSS, its cookies can still be written if a sibling-domain application is vulnerable to XSS or is simply not trustworthy. Cookies can also be written via MITM attacks on HTTP. So the application in question can be completely "secure", but its cookies can be tampered with anyhow. That is why the paper referenced above suggests that the CSRF token must in some way be tied to the user's session or identity, and that the "naive" double-submit method is vulnerable. At the very least, with naive double submit, the application should check that there is only one CSRF token cookie value, as this will mitigate some attempts to write the cookie from another domain.
Jim July 30, 2014: Ok I got it. Can you make your change live in the document itself to clarify this? Or hey, the entire CSRF Defense cheat sheet needs to be fully re-written and made more concise and more of a "cheat" for developers. Are you interested in taking this over? Aloha! Ps: I'm email@example.com if you want to take this to email.
Jim July 31 2014, Thank you - any help is appreciated. I'm happy to discuss any changes you wish to make. I think this entire page needs a re-do, but happy to do that one change at a time. Aloha!
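To illustrate the session-tied variant this thread converges on, here is a minimal sketch. The names and the server-side secret are hypothetical, and deriving the cookie value as an HMAC of the session ID is just one common way to bind the double-submit cookie to the user's session; it is not the exact mechanism from the cheat sheet.

```python
import hmac
import hashlib

# Hypothetical server-side secret: known only to the server, never sent to clients.
SECRET_KEY = b"server-side-secret-keep-out-of-cookies"

def issue_csrf_token(session_id: str) -> str:
    """Derive the CSRF cookie value from the session ID, so a cookie planted
    from a sibling domain (whose author cannot compute this HMAC) will not
    validate against the victim's real session."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def validate_double_submit(session_id: str, cookie_token: str, form_token: str) -> bool:
    """Session-tied double submit: the cookie value and the request-body value
    must both equal the value derived from the logged-in session."""
    expected = issue_csrf_token(session_id)
    return (hmac.compare_digest(cookie_token, expected)
            and hmac.compare_digest(form_token, expected))
```

With this check, merely pushing an attacker-chosen cookie into the browser (the attack described above) is not enough, because the planted value will not match the HMAC of the victim's session.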
Confusion About Encrypted Token Pattern
Based on a discussion about the encrypted token pattern taking place on security.stackexchange.com, I think that the discussion of the encrypted token pattern should provide an explanation for the use of the nonce. Is the nonce supposed to be validated? If so, the validation section needs some additional text. Is the nonce there to provide some cryptographic protection? If so, what?
Neil Smithline, http://www.neilsmithline.com 14:45, 15 November 2015 (CST)
CSRF Prevention via Alternative HTTP Methods
Browsers permit cross-domain form submission, but only via the GET and POST methods. Modern browsers do not allow a cross-origin request to use any other method unless CORS is used. One can leverage this to prevent CSRF without the need for cryptographic tokens. If you code your site so that material changes to stored data (record updates and other writes) happen only via other HTTP methods (such as PUT, DELETE, or PATCH), then no other domain can submit those changes via a form. Of course, this means your site will need to use AJAX to submit changes, but this is already a very common development pattern. Definitely interested in any technical discussion on this idea, but if no one sees a problem with it, I think we should add it to the page as an option.
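As a rough sketch of this proposal (all names here are illustrative, not from the cheat sheet), a server could refuse to mutate state for any method that a cross-origin HTML form is able to produce:

```python
# Methods defined as side-effect free; handlers for these must never mutate state.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

# The only methods a cross-origin HTML form submission can use.
FORM_CAPABLE_METHODS = {"GET", "POST"}

def may_mutate(method: str) -> bool:
    """Allow state changes only via methods an HTML form cannot send
    (PUT, PATCH, DELETE, ...). A cross-origin caller would then need
    fetch/XMLHttpRequest, which is subject to the CORS preflight."""
    m = method.upper()
    return m not in SAFE_METHODS and m not in FORM_CAPABLE_METHODS
```

Note this only holds if the server's CORS policy does not grant the attacker's origin permission to send those methods with credentials.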
Identifying cross-origin requests
Thanks to the September 15, 2016 change, I see that CSRF can be reliably prevented by the action handlers checking the Origin header (or, in its absence, the Referer header). This works because the CSRF scenario abuses the authorization of unsuspecting users, and those users generally run an up-to-date browser.
Extra checks such as anti-CSRF tokens could help mitigate the case of allowing a broad list of UI origins. Perhaps this is what was meant by the phrase "we recommend a second check as an additional precaution to really make sure"? (The style and the title of the article seem to put the burden of proof on the reader. The existing prescriptive style encourages that, so I wonder if OWASP could use a descriptive style instead.) --Eelgheez (talk) 10:02, 27 March 2017 (CDT)
Origin/Referrer Check doesn't Work When the URL is Entered into the Browser
When a page is reached by entering or pasting a URL into the browser, there is no $_SERVER['HTTP_ORIGIN'] or $_SERVER['HTTP_REFERER'] value. In this case the recommendation to block the action wouldn't seem to make sense. --lindon (talk) 19:02, 29 May 2017 (EDT)
- Thank you, Lindon, for the remarks about these headers. In a CSRF context we want to protect requests that alter data, which is why the CSRF filter is applied to requests targeting the "backend service": it is invoked for services, not for UI URLs (e.g. when loading a form or the site home page). Perhaps I should mention this in the sample. Invoking the URL of the service directly (i.e. entering or pasting a URL into the browser, as you mention) is not normal behavior, and the protection should block the request, because the service is normally invoked via AJAX (the more common way in recent apps) or by submitting an HTML form. Thanks again for the talk; it helps to fine-tune the sample. --dominique (talk) 06:20, 30 May 2017 (CEST)
- Indeed, the abuse scenario CSRF normally focuses on involves luring a victim user to a malicious site, vulnerable blog, or forum and letting the user's browser execute requests against a CSRF-vulnerable target site on the user's behalf without the user's participation (or with the user clicking a form submit button aimed at the vulnerable target site). Luring a victim into pasting a link could be considered a less likely scenario; I guess it would rely on the target site interpreting a GET request, or its embedded requests, as actions. In that unlikely scenario I can see that checking the Origin and Referer headers will block the unexpected abuse, encouraging developers to rely on REST conventions by mirroring the page state in its address. (Changing the application's state on receiving GET requests would make it vulnerable to embedded foreign requests such as <img src="https://bank.test/my/transfer?to=GogAndMagog">.) --Eelgheez (talk) 09:11, 30 May 2017 (CDT)
I heard arguments for extending the whitelist to sites that could host links pointing to the application protected with a Referer/Origin check. This can be a slippery slope: sending a second-factor authentication link through email may leave the link hosted by a huge number of webmail providers. Besides, the argument had a design flaw in that the CSRF protection applied to both GET and POST requests.
Instead, I suggest mentioning an implementation detail that relies on the common practice of separating the UI interface from the API. That is,
- instead of sending a link whose click generates a GET request such as SITE/authorize?id=XXXXXXX (with some non-predictable GUID) that authorizes the user immediately,
I suggest avoiding CSRF protection for GET requests entirely and keeping actions that change user profiles in POST handlers. Therefore,
- send a link pointing to the UI service such as UISITE/authorize.html?id=XXXXXX. Clicking it can be handled safely, assuming that none of your GET request handlers implements CSRF protection. Once the user finds themselves on the unauthenticated page generated by the UI service, clicking a button in it will send a POST request to an API service such as SITE/authorize?id=XXXXX. The POST handler can safely apply the suggested Referer/Origin check, insisting that both of these headers, when received, contain white-listed hosts. Eelgheez (talk) 16:29, 12 November 2018 (CST)
- The second-factor authentication link scenario seems degenerate in the above scheme because it does not rely on cookies. Only authenticated actions would need to rely on their mapping to a UI URL in order to get around the Referer/Origin and GET limits when the link is passed via email or any other medium. Eelgheez (talk) 17:59, 12 November 2018 (CST)
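The GET/POST split suggested above might look like this in outline (hostnames, method set, and function names are hypothetical): GET handlers do no CSRF check at all, while state-changing handlers require every Origin/Referer header that was received to name a white-listed host.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts permitted to originate state-changing requests.
ALLOWED_HOSTS = {"app.example.org"}

# Only state-changing methods get the CSRF check; emailed UISITE/authorize.html
# links arrive as GET and must pass untouched.
PROTECTED_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def request_allowed(method: str, headers: dict) -> bool:
    """GET link-following is never blocked; for protected methods, every
    Origin/Referer header that is present must point at an allowed host,
    and at least one of the two must be present."""
    if method.upper() not in PROTECTED_METHODS:
        return True  # no CSRF protection on GET, per the suggestion above
    present = [headers[h] for h in ("Origin", "Referer") if h in headers]
    if not present:
        return False  # a browser-issued POST should carry at least one header
    return all(urlparse(value).hostname in ALLOWED_HOSTS for value in present)
```

Under this sketch, clicking the emailed UI link (a GET) always works, while the subsequent POST to the API is rejected unless both headers, when sent, agree on a white-listed host.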