One of the "features" (back in the day) of running a diskless system was that you could set change policy on the server hosting the file which was completely out of reach of the "client" machine that was running the program. For nearly all of the system files there was no reason for them to change. NetApp turned this into a huge win when they could use snapshots to support multiple VM images with just the small configuration changes.<p>Given the well known benefit there, and that the processor on your hard drive is about as powerful as your phone, why not have the drive set up files that are 'read only' unless allowed to change out of band. Here is how it would work.<p>Your disk works like a regular SATA drive, except that there is a new SATA write option which can write a block as 'frozen'. Once written that way the block can be read but not written. You add an out of band logic signal and wire it up to a switch/button that you can put on the front (and/or) back panel. When the button is pressed the disk lets you 'unfreeze' or write frozen blocks, when it it isn't pressed they can't be changed.<p>Now your hard drive, in conjunction with a locally operated physical switch, protects sensitive files from being damaged or modified.
Okay, so I know Windows probably doesn't actually work this way, but from a user interface perspective... what's the rationale for giving an app permanent access to the user's home folder directories? Don't most well-behaved apps have a file-open / folder-open dialog, which should be able to grant access to files at runtime? If the file-open dialog is provided and controlled by the operating system (I realize many, many legacy apps work differently in Windows) then the OS can silently grant permissions at the time of open, rather than letting apps have either free rein or no access at all.<p>I feel like this is the expected behavior anyway; power users may run utilities that need to touch the whole system, but most regular users are doing pretty well to juggle more than a handful of open files in their mental model of the machine while they're using it. The idea of file permissions is already pretty foreign to the average end user. Applications already have a designated area (%APPDATA%) where they can store their temporary files and such, so perhaps the documents folders <i>should</i> be more locked down by default.
I've always wondered why Windows and other OSes don't offer a 'cold storage' area where you need to thaw out files before editing. Files not modified within a selected time are frozen against further modification. I've got plenty of files that are archived that I'd never want to change, but it's a hassle to unmount/remount just to add a new file to an existing directory.
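A crude software-only approximation of this already works with ordinary permission bits (on Linux you could go further with the immutable flag via `chattr +i`). A minimal sketch, assuming an arbitrary 90-day threshold:

```python
import os
import stat
import time

# Freeze files untouched for 90 days (arbitrary threshold for the sketch).
FREEZE_AFTER = 90 * 24 * 3600

def freeze_old_files(root, now=None):
    """Strip all write bits from files not modified within FREEZE_AFTER."""
    now = now if now is not None else time.time()
    frozen = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            if now - st.st_mtime > FREEZE_AFTER:
                writable = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
                os.chmod(path, st.st_mode & ~writable)
                frozen.append(path)
    return frozen
```

Thawing would just be the reverse `chmod`, ideally gated behind an explicit confirmation so it can't be done silently by the same process that wants to overwrite the file.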
My concern is, first off, that this seems like it is going to break a massive number of applications. It also seems that they are pushing a layer of access management that doesn't have proper support on any platform but UWP.<p>I see this as Microsoft taking yet another step to force people to move to their new app-store model by choking off access to the operating system for every other platform, which I find really amusing because their own top-tier applications aren't built on these platforms (Office, Visual Studio, etc.).
The last ransomware we saw in the news actually tried to reboot the system and encrypt files before the OS loaded. So unless this new tech is also going to protect the MBR (which should be protected anyway), I'm not sure how it's going to stop encryption.
Completely unrelated, but am I the only one with the impression that MS has switched Windows to a rolling-release OS (like Gentoo or Arch) with the infinite updates of Windows 10? This would be a genius move to solve the issue of users remaining on an old unmaintained release like it was with XP, and like it is now with 7.
I always thought protecting users from malicious code they willingly download and run themselves was futile and a waste of developers' resources.<p>Am I missing something? Is this actually a viable security approach?
> If an app attempts to make a change to these files, and the app is blacklisted by the feature, you’ll get a notification about the attempt<p>So it's allow-by-default? That sounds useless.<p>We need a deny-by-default thing. Like Little Snitch, but for disk. Every time an app accesses a directory it hasn't accessed before, ask. (Skip the asking when files are opened via the system "Open file" dialog, for a bit less annoyance.)
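The deny-by-default policy described above is simple enough to sketch. Everything here is hypothetical (`DiskSnitch` and `ask_user` are illustrative names, not a real API); the consent dialog is injected as a callback so the policy logic is testable:

```python
# Deny-by-default directory access: first touch triggers a prompt,
# the answer is remembered, and the OS file picker implies consent.

class DiskSnitch:
    def __init__(self, ask_user):
        self.ask_user = ask_user   # callback: (app, folder) -> bool
        self.decisions = {}        # remembered (app, folder) verdicts

    def allow(self, app, folder, via_open_dialog=False):
        if via_open_dialog:
            # The user just picked this file in the system dialog,
            # which is itself an expression of intent -- skip the prompt.
            return True
        key = (app, folder)
        if key not in self.decisions:
            self.decisions[key] = self.ask_user(app, folder)
        return self.decisions[key]
```

Caching per (app, folder) pair rather than per file is what keeps the prompt volume tolerable; ransomware touching thousands of files in one folder generates exactly one question.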
I think that the most recent attack in Ukraine already showed how to overcome this obstacle. The attackers were able to use the in-place update system of a trusted software vendor to install their malicious code on the victim's computer. That software would almost certainly have had permissions even under this scheme, so it's not that effective.
How about using ML to detect profiles of file access and disallowing uncommon access patterns? If I only use VS Code to access my source, prevent win-malwr.sys from accessing that folder.
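Even without any ML, the core of this idea is an allowlist learned from observed behavior. A toy sketch, assuming a simple training window (real anomaly detection would score patterns rather than do exact matching):

```python
# Learn which (app, folder) pairs occur during normal use; after the
# training window closes, flag anything outside the learned set.

class AccessProfile:
    def __init__(self):
        self.learned = set()
        self.training = True

    def observe(self, app, folder):
        """Record (while training) or judge (afterwards) an access."""
        if self.training:
            self.learned.add((app, folder))
            return True
        return (app, folder) in self.learned
```

The hard part in practice is the long tail of legitimate but rare accesses (backup tools, installers, search indexers), which is exactly where an ML scorer would have to beat this naive exact-match baseline.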
I'm surprised Google hasn't run a Chromebook advertising campaign which just says "use a Chromebook and never care about ransomware again".
This sounds like a feature that will be painful to work with for regular apps, but that malware will easily work around.<p>I mean, I am no security expert at all, but you generally need administrative privileges to install malware in the first place, so why wouldn't it just keep those privileges and access all the folders it needs?
This seems like a good idea, and I'm pretty excited to see this step. Though I suspect if certain apps are whitelisted to edit in those folders, ransomware will simply turn to finding exploits in those apps. And most of your document and photo editing apps out there may not have been designed with security in mind, as they never expected to be gatekeepers of file access.<p>This will also probably be a UAC-level nightmare for getting old software to work on newer PCs, as today's software generally just assumes it can have file access to document folders.
How about we just have "copy-on-write" filesystems by default?<p>Something that then tries to "encrypt" your hard drive merely winds up creating another layer on top, which you wipe out to get back the original files. You only have to flip a "hardware switch" when your disk fills up or you get a catastrophe.<p>I cry every time I see something that IBM or DEC got right <i>40 years ago</i> that we <i>STILL</i> haven't adopted.
What are "end-to-end security features"? They mention it once but then never again.<p>As far as I know, the term end to end is about communications: an exchange between two or more parties, or endpoints, which can be encrypted "end to end". I'm afraid they just dropped it as another term nobody knows the meaning of, so we'll have to find a new term to describe why Signal and Wire are better than (non-PGP) email.
I'm skeptical. The cost of managing these permissions might outweigh the benefit. But hey, why not try it. As long as I can disable it when it ends up getting in my way...
Linux has had the same issue for the longest time: You need root or a capability to set the time, but any program you run can wipe your entire home directory.
Perhaps the place to implement countermeasures is in the disk drive (an SSD these days)?<p>e.g. arrange for the drive to never delete anything unless some key exchange has recently been done that depends on user input (biometrics, or a password).<p>From a user perspective you'd see this as:<p>All deletes (and file version changes) go to a recycle bin. Emptying the bin can only be done upon presentation of the secret.
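In miniature, the scheme is a soft-delete bin whose purge operation demands a fresh proof of user presence. A sketch, with a shared secret standing in for the real key exchange (class and method names are made up for illustration):

```python
import hashlib
import hmac

# Drive-side recycle bin: deletes only move data aside; physically
# purging requires the user's secret, which malware on the host lacks.

class GuardedBin:
    def __init__(self, secret):
        self._digest = hashlib.sha256(secret).digest()
        self.files = {}
        self.bin = {}

    def delete(self, name):
        self.bin[name] = self.files.pop(name)   # soft delete only

    def purge(self, secret):
        """Empty the bin, but only with a valid proof of user presence."""
        offered = hashlib.sha256(secret).digest()
        if not hmac.compare_digest(offered, self._digest):
            raise PermissionError("user presence not proven")
        self.bin.clear()
```

The same gate would have to cover overwrites of existing sectors, not just deletes, or ransomware would simply encrypt files in place rather than deleting them.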
I wonder if MS has given any thought to 'sealing' executable regions so no new instructions can leak into memory. IOW, once executed, a process can only reference instructions present in the binary itself. Basically, make running JIT-ed code, self-modifying code, etc. a special process privilege, which can then get a limited process context for I/O.
This seems like another strange workaround. We need to change the way the operating system behaves for the future. The problem is default-allow for untrusted code execution. Everyone recognises this as the problem, but no one wants to step forward and implement the change.<p>We do it for mobile, mostly; the desktop needs the same shift.
<i>"If an app attempts to make a change to these files, and the app is blacklisted by the feature, you’ll get a notification about the attempt,” Microsoft explains."</i><p>I don't understand. If they have a blacklist, why ask the user? Or is "blacklisted" used loosely here to include code flagged by heuristics?
The filesystem itself is a risk: with per-user default permissions, any application launched by a user can trash all of their files, which is scary. Even applications being able to access other installed applications is dangerous. I hope the industry finds a middle ground between all-closed (a la Apple) and all-open.
Or "Windows Will Protect Vulnerable Client Software With More Client Software".<p>Wouldn't it be much easier and more effective to offer a one-click low cost encrypted cloud backup-service? They could bundle this with Update or Defender to offer point in time recovery.
macOS already does this.<p>System Integrity Protection.<p><a href="https://support.apple.com/en-gb/HT204899" rel="nofollow">https://support.apple.com/en-gb/HT204899</a><p>[edit] apologies, indeed, SIP only protects system files, which is not what this article is about.
This seems like a rushed reaction to recent events - I think there will be problems as a result of the rushed implementation. I can only begin to imagine the embarrassment if this were the cause of the next zero-day attack.
The UI is not really explained. I hope this is not going to train more generations of Windows users to click "yes yes yes" in response to annoying dialogs.
How often are browsers affected by 0-day exploits these days?<p>If they are not, wouldn't using web applications and keeping your system up to date solve the whole issue?