I already don't want to have personal conversations on Teams. My tech-savvy colleagues, and the ones who can be convinced, are on Signal, where we talk about job offers and relationships. A few others do Instagram and get to see my art photography. And occasionally I'll bump into someone when we're both in the office and be able to say whatever I want without AI looking over my shoulder. There's a real chilling effect on getting to know people.
So now we can't use Teams to have the "water-cooler" moments that supervisors claim we need; instead we're having them on Signal or iOS, and they just can't measure that. Organizations really, really, really hate transparency.
This is nothing new: corporations have scanned instant messages and emails, and even recorded phone calls, for decades, and will fire you based on that evidence for violations of corporate policy. They will also sue you or call the cops if they detect potential crimes.<p>I’m kind of surprised so many people are shocked by this. I know of one company where dozens of people were fired because their email was scanned for external job interviews and the CIO had a report, which he used to prematurely cut staff when he needed to save budget.<p>The only difference now is that the tech is smarter and cheaper, so you don’t need to pay as many people to spy on their coworkers.<p>Your defence against this is to find a job where you’re too valuable for them to do anything, as in any jurisdiction with at-will employment.
"The leavers classifier detects messages that explicitly express intent to leave the organization, which is an early signal that may put the organization at risk of malicious or inadvertent data exfiltration upon departure".
In other words "how to promote and encourage paranoid behaviors from employers" :(
Maybe it's not the intent, but the practical result is to use the <i>private</i> sector to implement CCP-like social credit scores, isn't it? By doing everything in the private sector they get around all those pesky constitutional protections.
Submitted title was "Office 365 implementing AI to detect employees colluding, leaving and more". That broke the site guidelines: "<i>Please use the original title, unless it is misleading or linkbait; don't editorialize.</i>" - <a href="https://news.ycombinator.com/newsguidelines.html" rel="nofollow">https://news.ycombinator.com/newsguidelines.html</a><p>The proper place to include that sort of interpretation is by adding it in a comment in the thread. Then your interpretation is on a level playing field with everyone else's (<a href="https://hn.algolia.com/?dateRange=all&page=0&prefix=false&sort=byDate&type=comment&query=%22level%20playing%20field%22%20by:dang" rel="nofollow">https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...</a>). Also, a comment gives you room to actually substantiate your interpretation.<p>On the other hand, a thread like this probably wouldn't have gotten attention without the sensational title in the first place, so this kind of submission is a borderline case and at worst a venial sin. (We still change the title once it does make the frontpage though.)
I think if you have an E5 license there is already thoughtcrime functionality built in. I remember someone demoing this to me in a Teams user group, and no one seemed to think it was creepy at all. In addition to flagging keywords, it also used AI to detect undesirable thoughts and emotions, under the guise of anti-harassment and compliance. Unfortunately I can't remember the name of the feature, but I think it might be this:<p><a href="https://docs.microsoft.com/en-us/microsoft-365/compliance/communication-compliance?view=o365-worldwide" rel="nofollow">https://docs.microsoft.com/en-us/microsoft-365/compliance/co...</a><p>So I think if Microsoft existed in the world of 1984, they would easily be the preferred tech vendor for IngSoc.<p>Side note: do you think this would also detect the money laundering and bribery going on within Microsoft itself?<p><a href="https://www.theverge.com/2022/3/25/22995144/microsoft-foreign-corrupt-practices-bribery-whistleblower-contracting" rel="nofollow">https://www.theverge.com/2022/3/25/22995144/microsoft-foreig...</a><p>Side-side note: I think the reason that is allowed to keep going on, given that the SEC knows about it and there's ample evidence, has to do with national security.<p>It's extremely troubling that, given all this corporate authoritarian AI tech they've built, Microsoft is still trying to be the voice of reason about the dangers of AI.
There is no way they will be able to make an AI at this point that will:<p>A) Be accurate<p>B) Work across multiple contexts<p>C) Run efficiently on billions of messages<p>This will just result in many false positives and unnecessary eavesdropping on employees' personal conversations.<p>Once it's revealed that an organization is using this, people will quickly move all conversations to another platform, even if policy forbids that, potentially resulting in an even greater security risk.<p>And as per usual, if Microsoft gets someone fired (e.g. comes in looking for money laundering, finds out the staff member is making fun of their boss), there will be no repercussions.
Part of what makes stuff like this surprising is expectations of privacy. For example, if you start a video chat on Hangouts or Zoom, even or maybe especially on a work account, you don’t expect that meeting to be recorded or analyzed surreptitiously. I think in many places it would be illegal.<p>Because of this, one might feel the same standard applies to other one-on-one and small-group communication avenues, but it’s actually completely the opposite.
Reinforces that during interviews candidates should be determining what the company uses for internal communications and choose accordingly.<p>Anyone using Teams is already a red flag.
I honestly first thought this article was satire. It is so unreal to find myself in a world where this is acceptable. What's next? Installing cameras in restrooms to catch offline conversations?
I have zero confidence that this system is smart enough to differentiate between all these things and the legitimate variants thereof (e.g. collusion and cross-team collaboration are basically indistinguishable), variants that companies actually want people doing or discussing, and that likely outnumber the bad by orders of magnitude.
This seems to be Office 365 implementing monitoring of official communications of employees and contractors on the office account? I don’t think it extends to a personal Office 365 account; at least it didn’t seem to.<p>Why exactly is this newsworthy? Any communication through official channels is the property of the employer anyway. To collude, leave, and do other stuff, maybe use personal channels.
I am almost surprised it took this long to get to this point, but I suppose the recent resignation wave made it into a viable product offering. My last MBA class was an HR analytics class that, among other things, dealt with email sentiment analysis and the like. Part of me was thinking the average HR person won't touch this stuff, but if a company just happened to offer something that would do it for them...
I've always had a preference against working with Microsoft products, but this is getting to the point where I'd find a new gig instead of being subjected to this stuff.
What is the effect on creative expression and sociability, between co-workers, when they know they're being analyzed by a computer to figure out if they should be fired?
If you don’t feel like turning off your adblocking: <a href="https://archive.ph/3XVFT" rel="nofollow">https://archive.ph/3XVFT</a>
This is absolutely going to be used against unionizers, which is what's really meant by "colluding". In the US this is going to get a lot of people fired. In other parts of the world, it's going to get them killed. This kind of software is Zyklon B for the 21st century.
Fun fact: I used to work on this team.<p>We have come a long way now that we have these advanced classifiers. You would be surprised how low-tech the initial product was; by low-tech I mean devoid of any ML/AI. We went GA at the end of 2019.<p>We saw a lot of interesting use cases too, e.g. Japanese enterprises wanting to detect cases like suicide or intent to commit suicide; that is why we have multiple types of classifiers.<p>I worked on the infra side (not ML). That too was “low-tech”, or the more apt term would be “not the latest tech”. Core parts of the app were part of a monolith (think Exchange). We were also using a really old .NET Framework version for our MVC app. A lot of the storage technologies we used were very MS-specific as well. AFAIK, all of this is still valid today.
Seems to only apply to messages, for now. My understanding is that unless a call on Teams is explicitly recorded, there's no capability for the organization to monitor the content within.<p>Is this still accurate? Are there any features in the pipeline planning to change this?<p>Microsoft offering "communications compliance" within the same product is certainly chilling enough as it is. The reality where people lose their job as a result of previously-protected casual [voice] chat doesn't seem so crazy now. All it takes is missing a quietly-introduced feature update by a week before the organization flips the switch and doesn't tell anyone.
That link was not happy about my Pi-hole swallowing their ad links, so I could not read it.<p>I will say, however, that I don't use my personal phone to host any employer apps. It is my phone, not theirs; I pay the service fee.<p>So conversations I have on my phone are mine. My coworkers all operate the same way.
Sounds like it's time to set up a scheduled batch file that sends a bunch of messages around that would trigger watchdogs like this, as well as the NSA PRISM keywords, just for funsies.
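A minimal sketch of what that scheduled job's message generator could look like. The watchlist and templates below are purely illustrative placeholders (the real trigger lists, Microsoft's or the NSA's, are not public); a scheduler like cron or Task Scheduler would then post the output to the chat platform on a timer.

```python
import random

# Hypothetical watchlist: terms a naive compliance classifier might flag.
# These are made-up examples, not any vendor's actual list.
WATCHLIST = [
    "resignation letter", "job offer", "recruiter",
    "collusion", "severance", "competitor",
]

TEMPLATES = [
    "Reminder: discuss the {} before Friday's sync.",
    "Has anyone seen the doc about the {}?",
    "FYI, the {} came up in standup again.",
]

def make_decoy_message(rng: random.Random) -> str:
    """Build one innocuous-looking message seeded with a watchlist term."""
    return rng.choice(TEMPLATES).format(rng.choice(WATCHLIST))

def decoy_batch(n: int, seed: int = 0) -> list[str]:
    """Generate n decoy messages; seeded so a run is reproducible."""
    rng = random.Random(seed)
    return [make_decoy_message(rng) for _ in range(n)]
```

Run on a schedule with enough volume and every alert queue drowns in noise, which is exactly the point the comment is making.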
It's interesting to look at the way they try to sell this monitoring to employees as a positive thing[1]. At least the wider population can experience what it's like to live under DTEX[2].<p>[1]: <a href="https://www.microsoft.com/en-us/microsoft-viva/insights" rel="nofollow">https://www.microsoft.com/en-us/microsoft-viva/insights</a><p>[2]: <a href="https://www.dtexsystems.com" rel="nofollow">https://www.dtexsystems.com</a>
Oh great - AI thought police to make the corporate existence even bleaker.<p>Could someone head over to MS HQ and slap some sense into whoever thought blessing the world with this is a win?
I hope the people implementing all these policies and technologies are seriously weighing the consequences of their actions. I suspect that they are not.
MS Office Home > Admin > Exchange Admin Center > Mail Flow > Rules > Click the plus sign for New Rule > Create New Rule > Apply this rule if > Subject or Body includes > Specify a word or phrase<p>How good the AI is depends on the flood of false positives the current system generates. If MS is true to form, getting anything useful comes at great expense.<p>The #1 thing they search for is notably missing from the list.
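The false-positive flood is easy to see if you sketch the kind of plain substring rule that mail-flow UI builds. The flagged terms below are illustrative assumptions, not Microsoft's actual list:

```python
# Hypothetical flagged terms for a naive subject/body keyword rule.
FLAGGED_TERMS = ["leaving", "offer", "collude"]

def naive_flag(message: str) -> list[str]:
    """Return the flagged terms found in a message, case-insensitively."""
    lowered = message.lower()
    return [t for t in FLAGGED_TERMS if t in lowered]

# A genuine leaver and a harmless status update trip the same rule:
assert naive_flag("I got an offer and I'm leaving next month") == ["leaving", "offer"]
assert naive_flag("Leaving the standup early, dentist appointment") == ["leaving"]
```

Whether the AI classifiers are an improvement comes down to how much of that second kind of hit they can filter out.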
At one of my previous employers, they sold part of the company, employees included, to an outsourcing enterprise, and founded a new company to move the remaining employees into.<p>As part of the sold-off company, when I wanted to interview with the new company, my manager-to-be sent me his phone number and advised me not to use Teams for any sensitive conversation.
Why is <i>everything</i> duplicated in this announcement? The list of classifier descriptions effectively appears twice, the first time with the text of the "What you need to do to prepare" (which, btw, says exactly nothing on how to prepare) appended to each item.<p>What even is this site? It looks like grade A content rehashing from various MS sites...
There are good and bad sides to these features. The only rule of thumb: never conduct non-work-related matters in the workplace or on facilities provided by the company, no matter how good your performance is or how good your relationship with your superiors is.
Hmm, I think I have a new reason not to use <i>any</i> Microsoft products in the office. I can even claim an ethics issue with interacting with them now. Unfortunately, the existence of this feature breaks trust that my management <i>hasn't</i> abused it; the only way to avoid this is by not engaging with Microsoft offerings such as Word or Excel in the office.
Wish I could read the page, but apparently my ad blocker is too offensive. Well, I'd be fine with supporting the publisher through online ads, but I'm really not okay with the tracking those advertisers do. Ditch the tracking and the annoying ads, and I'll ditch the ad blocker. Until then, we'll have to agree to disagree; the Faustian bargain of internet advertising is untenable.
Employees get what they need and give what they can...<p>But seriously, I always found it amusing that once you step into a corporation you can get food, drinks, and other amenities for free, almost like it's a socialist society. But when those same employees step outside, they're first in line for the capitalist agenda...
The only way this sort of thing changes is with labor organization, i.e. unionization.<p>The government won’t save you from efforts like this. The government represents the interests of the capital-owning class.<p>The demonization of unions is one of the most successful cases of propaganda in the last century. It’s gone so far that people will die on the hill of defending Jeff Bezos against paying slightly more taxes, because everyone seems to think they’ll be Jeff Bezos one day.
If you've ever thought your employer isn't monitoring the chat, you're a fool. I'd go as far as to say that if you think there is any form of electronic communication that isn't being monitored on some level, you're also being foolish.
Has this been reported to EFF? Not seeing anything on their site <a href="https://www.eff.org/" rel="nofollow">https://www.eff.org/</a>