Your FICO score and detailed credit history already serve this purpose to a large degree. Everyone knows it affects whether you can get a loan (and thus purchase a car or home); on the surface that seems fair enough, but it is already fairly discriminatory. It's far worse than that, though: if you have a bad credit score, you can't even <i>rent</i> an apartment, and many employers won't hire you. It's not just used to calculate what interest rate you get charged; it's used to judge your value as a human being in general.
<i>"The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights."</i><p>Also, what if you disagree with me politically about illegal immigration - does that give you a "right" to ban me from your establishment? Imagine it on the other side as well for people who are against abortion.
<i>Honest</i> discussions and reporting around anything involving China is quite difficult to come by these days. I would very much recommend everyone take any Western reporting with a grain of salt, especially from obvious pro-U.S. sources. Much of the coverage of the situation in HK can be seen as evidence of this. On this topic specifically, Wired's recent article over this was a breath of fresh air[1] for the overall state of news reporting when it comes to China.<p>Some clarification: First, when I talk about this slant in reporting Chinese affairs, it's not from a pro-CCP or pro-China position. Discussions about Chinese media and bias are still essential to have, but has nothing to do with the point I'm making here. Next, this issue isn't limited to just US media. There is arguably an observable bias even when it comes to Western academics that study or cover China in some capacity.<p>I'm sure aspects of this system in China earns a healthy dose of criticism and skepticism. However, it's important to consider the way this may be reported in the West, especially as tensions heat up between China and the US. Just think, for example, that it would not be very difficult to cover the US's credit score system as authoritarian, racist, or Orwellian. In fact, such cases have been made in the past and have some weight to them.<p>Just a thought.<p>[1] <a href="https://www.wired.com/story/china-social-credit-score-system/" rel="nofollow">https://www.wired.com/story/china-social-credit-score-system...</a>
Why do you think Google is indexing all your online purchases?<p>Look, online tracking inevitably leads to behavioral profiling. Put two behavioral profiles side by side and you have scoring and ranking. Add some weights on behaviors, age, income, etc. and you have a social credit system.<p>Edit: the question is not whether big tech already has social scores on us. The questions are when and how they will use them.
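To make the "weights on behaviors" point concrete, here is a minimal, purely hypothetical sketch in Python; the feature names and weights are invented for illustration and are not drawn from any real system:

```python
# Purely hypothetical sketch: turning a behavioral profile into a single score.
# Feature names and weights are invented for illustration only.

def social_score(profile, weights):
    """Weighted sum of whatever behavioral signals a tracker has collected."""
    return sum(weights.get(feature, 0.0) * value for feature, value in profile.items())

weights = {"late_night_purchases": -0.5, "gym_checkins": 0.8, "political_posts": -0.2}

alice = {"late_night_purchases": 12, "gym_checkins": 3, "political_posts": 7}
bob = {"late_night_purchases": 1, "gym_checkins": 20, "political_posts": 0}

# Put two profiles side by side and you have a ranking.
ranking = sorted([("alice", social_score(alice, weights)),
                  ("bob", social_score(bob, weights))],
                 key=lambda pair: pair[1], reverse=True)
print(ranking)
```

That's the whole trick: once the profiles exist, the "score" is just an arbitrary choice of weights.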
Fully, fully agree.<p>Tech has opened up "channels of analysis" that the US spent a huge amount of time, effort, and social strife legislating against. Because of the hub-and-spoke model of tech and data, a lot of these channels have been re-opened in indirect but socially important ways.<p>What goes in one end for social media comes out the other end in your insurance rates. How do we think Rocket Mortgage generates an instantaneous rate, when mortgage lenders used to rely on a great deal of relationship building and management to do the same loan issuance (and they had credit scores back then, so it's not only an API into FICO that's changed)?<p>There are so many unknown unknowns here now. Previously, your mortgage rate bumped up if you lived in a red-lined neighborhood. This was legislated against. Now, is the same thing happening with online banking, online health insurance rate quotes, etc., if you have a history of social media locations that place you in minority neighborhoods? Odds are, I bet yes, or something very similar (a toy illustration of how that can happen is sketched below).<p>The more people realize that what goes into seemingly innocuous uses of tech - game apps, social media, food ordering - comes out the other end in things that really matter to us - banking (this guy's kid spends $1k a month on Candy Crush), insurance (see the article), politics (we all know this) - the more this can start to be legislated safely, or at least the more aware consumers will become.
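To illustrate the proxy worry, here is a tiny, entirely hypothetical sketch; nothing here reflects any real lender's model, and the feature names and coefficients are made up. The point is only that a rate model never needs to see race or a red-line map directly when a location-derived feature carries the same signal:

```python
# Hypothetical sketch of a location-derived feature acting as a proxy.
# Nothing here reflects any real lender's model; all values are invented.

def quoted_rate(base_rate, features, coefficients):
    """Adjust a base interest rate by a weighted sum of applicant features."""
    adjustment = sum(coefficients[name] * features.get(name, 0.0) for name in coefficients)
    return base_rate + adjustment

coefficients = {
    "credit_score_normalized": -0.010,            # looks like a legitimate underwriting signal
    "share_of_checkins_in_zip_cluster_7": 0.015,  # location-history feature that can stand in for a red-lined neighborhood
}

applicant = {"credit_score_normalized": 0.8, "share_of_checkins_in_zip_cluster_7": 0.9}
print(round(quoted_rate(0.045, applicant, coefficients), 4))  # quote nudged up partly by where the applicant has been
```

No one had to type "red-line" anywhere; the correlation does the work, which is exactly why it's so hard to audit from the outside.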
> That Instagram pic showing you teasing a grizzly bear at Yellowstone with a martini in one hand, a bucket of cheese fries in the other, and a cigarette in your mouth, could cost you. On the other hand, a Facebook post showing you doing yoga might save you money.<p>It will be a cold day in hell when any insurance company investigates you for potential fraud, decides you're not guilty, and then decides to <i>LOWER</i> your premiums as a result of their investigation. Premiums don't work that way. It's frustrating to see that float around as even a possibility from a blog like FastCompany that claims to know how the business world works. I find it incredibly frustrating anyone would even insinuate that insurance companies have a conscience.
Isn't this already done to a great extent through blackballing?<p>SV recruiters blackball candidates/potential employees (often for no reason other than petty vindictiveness);<p>SV incubators/VCs use blackball lists;<p>even insurance companies keep internal lists used to assess and deny claims based on nothing other than the "social credit" of the claimant rather than the merits of the claim itself. They go a step further and rate/rank a claimant's attorney, so you may have a good claim that gets denied because you have a low-ranking attorney, or a weak claim that gets approved because your attorney is highly ranked (probably has a number of jury verdicts in similar cases).
This makes me wonder which company made this "Social Credit Drone Strike List" [1]<p>> In 2014, former CIA and NSA director Michael Hayden said in a public debate, “We kill people based on metadata.”<p>> According to multiple reports and leaks, death-by-metadata could be triggered, without even knowing the target’s name, if too many derogatory checks appear on their profile. “Armed military aged males” exhibiting suspicious behavior in the wrong place can become targets, as can someone “seen to be giving out orders.” Such mathematics-based assassinations have come to be known as “signature strikes.”<p>1. rollingstone.com/politics/politics-features/how-to-survive-americas-kill-list-699334/
Fundamentally, for a service not to be discriminatory, it must be regulated like a utility (e.g. telephone or electric).<p>Using something that has the same functionality as a utility but is not regulated as such raises some questions.<p>Also, profiling social media is nothing new. It's not uncommon for insurance companies to hire private investigators to look into suspicious cases. One case I heard of involved a man claiming disability on the grounds that he was homebound, while the private investigator found evidence completely to the contrary. I fail to see how this is any different.
Comparing it to social credit is disingenuous, even though there are real problems with existing and proposed ratings and their applications.<p>Social credit is a wrong-headed authoritarian tool of control, enforced by the state, that has jack shit to do with actual creditworthiness. It only "works" because companies are pressured into it; otherwise they wouldn't give a shit, since they want to optimize for profits. If you tried to sell banks in the US or Europe a loan evaluation system based on how applicants treated their parents, they would tell you not to waste their time again.<p>There are problems with the current scoring systems, of course: the burden of proof for identity theft is utterly backwards, credit scores have major "how good of a cash cow are you" aspects mixed into what should be pure reliability, and idiots in recruiting use them for employment evaluation when they are utterly irrelevant.<p>Even if the system winds up unjust and stupid, there are large differences - the comparison isn't helpful.
Indeed so. That is one of the major reasons why I don't use social media, and go to great lengths to evade the pervasive spying that major tech companies have brought into fashion.
This could and should be a huge US election issue. Tulsi Gabbard is suing Google and has made some pretty strong statements about big tech. I fear all the other candidates are under the sway of the DC lobbyists on whom Google, FB, et al. spend tens of millions.
Insurers assessing risk based on publicly available information (social media posts) is in no way comparable to a social credit system.<p>PatronScan looks potentially more dangerous, but the law in the US and UK (for example, and AFAIK) is that you are free to refuse service to anyone you please as long as it is not illegal discrimination (e.g. in the UK, on the basis of race or sexual orientation).<p>Once again, what this highlights is the power gained by these online platforms. On the one hand, as private companies they have no obligation of universal service; on the other hand, some of them have so much power that being excluded has a real impact on people.<p>This reinforces my opinion that either these tech giants will effectively rule, or they will have to be controlled, in a way similar to what China does, in order to keep decisions on censorship, exclusion, and provision of service in public hands.