My master's thesis [1] was about ways browsers can help protect users from deception-based risks. We concluded that we can't stop attackers from imitating or manipulating UI/UX elements, but browsers can be smarter about protection by paying closer attention to user interactions and subtle cues rather than relying on codified, absolute allow/block lists.<p>We discussed how most browser warnings currently fill the page below the line of death in a way that is easy for phishing sites to impersonate: the user clicks a spoofed "Back to Safety" button only to be taken to the actual phishing page.<p>One of the experiments we conducted was presenting browser warnings above the line of death by replacing security indicators with risk indicators, and even popping out a warning explanation upon a risky interaction.<p>Overall, subjects reported that they felt safer when the browser alerted them to abnormalities, rather than simply showing them when they were "secure" or having the browser make absolute trust decisions for them by blocking access to a page with a big warning.<p>[1]: <a href="https://scholarsarchive.byu.edu/etd/7403/" rel="nofollow">https://scholarsarchive.byu.edu/etd/7403/</a>
See the related browser-in-a-browser attack:<p><a href="https://news.ycombinator.com/item?id=30697329" rel="nofollow">https://news.ycombinator.com/item?id=30697329</a><p>The trusted-UI battle has been effectively lost. Or it was never much of a battle in the first place: the average consumer trusts anything with a lock icon on it, as UX researchers found in the 2000s and 2010s. WebAuthn and passwordless trust flows are our best hope to stop the phishcalypse.
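To make the WebAuthn point concrete: the credential is scoped to a relying-party ID, so even a pixel-perfect fake of the login UI on another domain can't obtain a usable assertion. A minimal sketch of an assertion request (the challenge and rpId here are placeholders; in a real flow both come from the server):<p><pre><code>// Minimal WebAuthn assertion request; all values are illustrative placeholders.
async function requestAssertion(): Promise<Credential | null> {
  const options: PublicKeyCredentialRequestOptions = {
    // In a real flow the challenge is a random value issued by the server.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    // The credential is bound to this relying-party ID; a look-alike phishing
    // domain can't produce a valid signature, however convincing its UI is.
    rpId: "example.com",
    userVerification: "preferred",
    timeout: 60000,
  };
  return navigator.credentials.get({ publicKey: options });
}
</code></pre>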
Seems to me a lot of this is possible because developers are lazy and want to shoehorn application delivery and runtime into a system originally designed for sharing documents.<p>Those same developers seem to heavily overlap with the group that loves to shit on FTP and DNS etc., because they were designed for a less adversarial internet. I'm not sure what to make of that cognitive dissonance.<p>But, maybe browsers as we know them should die and be replaced with something better.
"Security UI is hard". Yup.<p>It combines a lot of different aspects that make UI (which is always hard) more difficult:<p>* Catastrophic implications, but rare (in the typical user's experience). How often does the average user get phished or have their account taken over, compared to how often do they have to log in to Random App X to do their job?<p>* Can impede user's job, even when done right.<p>* Competes with functional features, sometimes directly. Why is there now a full window API? Because it is useful.<p>* People who work in the space are experts and will notice things that typical users will not (the example the author gives about Vista/XP)
Is the line of death actually a thing? I thought that users just trust everything that's on the screen tbh<p>A "line of death" sounds like something only technical users would notice
Using a bookmarks toolbar not only saves you time accessing frequently-used sites, but also makes the line of death a lot clearer and makes it harder to fake notification/permission popups.
I'm working on a project that aims to give a lot of freedom for user-generated content, and I've been wondering for a while how to protect against picture-in-picture attacks.<p>One way is to reserve a particular color for fields that request passwords or handle other sensitive data, and ban an entire color region around it in user-generated content. The problem with that, of course, is that it's too big a limitation.<p>But how about a pattern like a yellow/black checkerboard or stripes? This would require the parent to be able to analyze the child's rendering, and whenever the security pattern was detected, it would display some kind of warning that the content looks like a secured input without actually being the secured input...
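A minimal sketch of the color-band idea, assuming the parent can draw the untrusted content into a canvas it controls (a cross-origin frame would taint the canvas and block pixel access); the reserved color, tolerance, and threshold below are made-up values:<p><pre><code>// Scan a rendered frame of untrusted content for pixels inside the reserved
// "secure input" color band that only the host's own password fields may use.
const RESERVED = { r: 255, g: 204, b: 0 }; // hypothetical reserved yellow
const TOLERANCE = 24;    // allowed +/- deviation per channel
const THRESHOLD = 0.002; // fraction of matching pixels that triggers a warning

function looksLikeSecureInput(frame: ImageData): boolean {
  const { data, width, height } = frame;
  let hits = 0;
  for (let i = 0; i < data.length; i += 4) {
    if (
      Math.abs(data[i] - RESERVED.r) <= TOLERANCE &&
      Math.abs(data[i + 1] - RESERVED.g) <= TOLERANCE &&
      Math.abs(data[i + 2] - RESERVED.b) <= TOLERANCE
    ) {
      hits++;
    }
  }
  return hits / (width * height) >= THRESHOLD;
}

function checkUserContent(canvas: HTMLCanvasElement): void {
  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  if (looksLikeSecureInput(frame)) {
    // A real implementation would show a host-controlled warning banner here.
    console.warn("User content mimics the secure-input style.");
  }
}
</code></pre>The same scan generalizes to a checkerboard or stripe pattern by matching local pixel neighborhoods instead of single pixels, at the cost of a more expensive pass.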
Netscape Navigator 4.0 (NS4) would let a page open new browser windows, but if you wanted to hide the Navigator UI (the stuff above "The Line of Death" in this article), you needed to sign your scripts with your developer certificate.<p>The Netscape Security Team was worried about UI spoofing, the browser-in-a-browser attack.
- <a href="https://news.ycombinator.com/item?id=30722033" rel="nofollow">https://news.ycombinator.com/item?id=30722033</a><p>Alas, they need not have bothered. Users didn't notice fakes, and got mad if a web application was blocked. The whole apparatus to support public-key certification of web elements was pulled in later versions of Netscape.<p>25 years later, and essentially no one thinks about bad guys before dutifully typing their password.<p>Microsoft Windows tried. Windows shows a distinctive, full-screen alert if you want to do something with elevated privileges. Windows supports custom security policies and signed PowerShell scripts.<p>But the only way to prevent users from leaking authentication is to require credentials that can't be passed over a network: 2FA with a local (not remote) physical token.
See also, when this was first posted:<p><a href="https://news.ycombinator.com/item?id=13400291" rel="nofollow">https://news.ycombinator.com/item?id=13400291</a> - Jan 2017 (106 comments)
Can this not be mitigated by paying attention and having browser add-on buttons on the main interface, or a non-default config for the window? I see the bookmark bar has already been mentioned.<p>I think this affects me less because I use Linux and Firefox. The window manager on my distro draws the window decorations rather than Firefox, so if window-in-window spoofing happened it would look weird, because the fake wouldn't match my window manager.
Funny that he clearly has a lot of insight into secure UI design but <i>still</i> thought that some kind of "trustbadge" would help with full screen web pages.