If you're an accredited or institutional investor, then you have only yourself to blame (unless the investment was misrepresented, in which case the blame lies with whoever sold or brokered the investment).<p>In this case, the investor was foolish to believe that a magic AI system could generate reliable positive returns. At the same time, it sounds like the performance of the system was misrepresented. Either way, if you're responsible for $1bn of assets, you had better do your homework really well.
Robots don't just spring into existence from nothing. <i>Someone</i> made the decisions necessary to make the robot do the work they're selling (even if that's just deciding what training data and biases to feed into something they choose to call AI). You can sue that person.
It seems like this hedge fund was run by people who didn't understand finance in the first place. This is how many quant funds work:<p>1. You develop an alpha factor that you believe is associated with outperformance.<p>2. You control for risk factors such as beta, volatility, the Fama-French three-factor model, etc.<p>3. Now you create a market-neutral L/S portfolio that has exposure only to your alpha factor while negating risk factors like beta and volatility.<p>4. Backtest it, run a paper portfolio for a while, etc.<p>5. Combine your new alpha factor with a bunch of other factors you have already developed; the idea is that layering these alpha factors on top of each other will cancel out some of the noise inherent to each one.<p>6. Make money, and cycle alpha factors in and out as they work or stop working.<p>It seems like this "hedge fund" was solely trading based on sentiment without creating a market-neutral L/S portfolio or layering on any other factors. This is a <i>very</i> bad idea. The quote from the guy who made the software is pretty damning:<p>"The signals we have been provided have a strong scientific foundation. I think we did a pretty decent job. I know I can detect sentiment. I’m not a trader."<p>This is a big red flag. A lot of AI/ML people have this arrogance that trading is easy and that you don't need any financial knowledge to make money. Maybe that was true in the 1980s, but at this point it requires an incredible level of expertise to generate and implement profitable quant trading strategies.<p>But all of this is beside the point: the Bloomberg article is garbage click-bait. You sue the General Partners, and I don't think anyone is confused about this.
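To make steps 1-3 concrete, here is a minimal toy sketch of turning an alpha score into dollar-neutral, beta-neutral long/short weights. The residual-regression approach, function name, and numbers are my own illustration, not any particular fund's method:

```python
import numpy as np

def neutral_ls_weights(alpha, beta):
    """Toy example: convert a per-stock alpha score into
    long/short weights that are dollar-neutral and beta-neutral.
    alpha, beta: 1-D arrays, one entry per stock."""
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    # Cross-sectional regression of alpha on beta (with intercept);
    # the residual is the part of the signal uncorrelated with
    # market exposure.
    X = np.column_stack([np.ones_like(beta), beta])
    coef, *_ = np.linalg.lstsq(X, alpha, rcond=None)
    resid = alpha - X @ coef
    # OLS residuals sum to ~0 (dollar-neutral) and are orthogonal
    # to beta (beta-neutral); scale to unit gross exposure.
    return resid / np.abs(resid).sum()

betas = np.array([1.2, 0.9, 1.0, 1.1])
w = neutral_ls_weights(alpha=[0.9, 0.2, -0.3, -0.8], beta=betas)
print(w.sum())    # ~0: dollar neutral
print(w @ betas)  # ~0: beta neutral
```

Positive weights are longs, negative weights are shorts; in a real fund you would neutralize many more risk factors (step 2) and combine many alpha factors (step 5) the same way.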
I'm surprised by comments that put all the blame on the investor and give the impression that the authors of these systems never do similar things (rely on systems they don't fully understand). The thing is, software is all around us; some of it is more or less harmless (a web browser) and some can create a lot of damage (a self-driving car, a buggy aircraft controller, bank software).<p>The thing is that we generally have absolutely no clue about the quality of the piece of code that is controlling our destiny right now. What if the airplane we are flying in right now decides to dive? What if a bank loses all our records? What if a Nest thermostat goes crazy and burns thousands of dollars on heating during your vacation? What if your 401k disappears because of an obvious bug in a bot's script? I can go on and on.<p>To be able to live in this world without going nuts, we have to trust that those systems are correct, and if something goes wrong we need a legal way to punish the responsible party (if the fault is on their side).<p>The weird thing is that we still treat software and real engineering differently. If you step onto a bridge that collapses under your feet because of a bad design, you will sue. But if you trust a company that sells a superhuman trading bot which makes silly decisions and loses your money, it is your problem. By that logic, don't step on a bridge without reviewing the design and making sure it is safe.
The founder/CEO of the company that built the trading system had previously "agreed to pay $17 million to the U.S. Securities and Exchange Commission to settle charges of defrauding investors at his mobile-payments company, Jumio Inc."<p>How could you trust him to trade $2.5B?
Salespeople often exaggerate to make a sale; it's not uncommon. Believing them with enthusiasm marks you as exactly the kind of person salespeople love to find. This guy is right to sue, but he's still a sucker who was taken in and then leveraged himself to the hilt based on a pitch he could not (or would not) verify. A prudent person would have tested it with a small amount to start with.
I'm sorry, but this sounds like a couple of idiots with too much money.<p>"Li eventually let K1 manage $2.5 billion—$250 million of his own cash and the rest leverage from Citigroup Inc."<p>Leverage = Loan<p>r/wallstreetbets is that-a-way.
If my microwave oven burns my popcorn, I don't sue the microwave oven. If the product (the robot) is not what the seller told you it was, you sue the seller. If the product (the robot) has a defect, you sue the maker.
This is a straightforward dispute about the extent to which the backtested performance claims were accurate, and whether the fund was following the strategy as declared to its client. The fact that it's a "robot" is irrelevant to the dispute: the same fact pattern would apply to a human fund manager.
This is an important aspect:<p>> Over the following months, Costa shared simulations with Li showing K1 making double-digit returns, although the two now dispute the thoroughness of the back-testing.<p>I wish the article linked to the filings, or at least discussed this more thoroughly.
Should have bought Old Glory Robot Insurance!<p><a href="https://www.youtube.com/watch?v=KXnL7sdElno" rel="nofollow">https://www.youtube.com/watch?v=KXnL7sdElno</a>
"People tend to assume that algorithms are faster and better decision-makers than human traders," said Mark Lemley ... "That may often be true, but when it's not ..."<p>Is it just me, or is that quote kind of disingenuous? "People tend to assume" and "That may OFTEN be true" make it sound like it isn't clear-cut, or that there's still doubt over whether such algorithms outperform humans.
Is that truly the case? I don't know much about trading but aren't algorithms doing most of the trading now?
It depends on the basis on which the algo trading was sold to the customer. Regardless of whether it's a human or a computer, anyone willing to give such an amount a go should be asking how risk is being hedged. What happens when the shit hits the fan? How fast can you exit the position? If it's forex, what stops are being used to minimise the losses, etc.? I'm sceptical that either side signed the deal without first going through some details on how it works, the risks, etc. Also, the legal department probably did their homework to cover as much as possible in such cases. The responsibility ultimately falls upon the institution utilising the software, not the software itself.
Haha, let the algo decide the stop-loss, genius. This guy was sold on a fancy robo-investor that couldn't beat VTSAX if it had twice as much money to start with.<p>If you don't know who the idiot is, it's you.<p>The dudes who put this thing together were using a theory from, get this, 2015 to do sentiment analysis. Cool 4-character domain, but they couldn't even be assed to get a Let's Encrypt cert for it. Why would you trust them to manage billions?<p><a href="http://42.cx/" rel="nofollow">http://42.cx/</a> The number of buzzwords is both overwhelming and inherently fishy.
The whole framing of this case is incorrect and stupid. It doesn't matter if the software used for trading is a bunch of if statements or machine learning, it's still just software. If it's misrepresented the case is pretty straightforward. The algos used don't matter.
If it was a bug that caused him to lose money, then sure, he should be compensated by the company that made/owns the robot. But he says that its capabilities were misrepresented, so the robot isn't actually at fault; he either just made a bad investment or was scammed.
If I buy a chainsaw and accidentally cut my finger off I wouldn't assume that I have any legal right to sue the chainsaw manufacturer or the person who sold it to me.<p>At least not where I'm from - is this not the case in the USA?
Maybe the concept of blame is meaningless when it comes to robots, and we’re clinging to a cultural and spiritual concept of blame that serves no purpose here.<p>If a robot doesn’t function properly, you fix it or take it out of service. Taking retribution against a robot (or, abstractly, against the civilization that created it) is of questionable value. The right question is: how do we build robots that don’t lose fortunes?<p>The fact that human beings get angry at robots and want retribution or recompense for malfunctions is an evolutionary adaptation for dealing with other humans. It is useless when it comes to dealing with non-sentient deterministic agents. Sure, if the robot was being controlled by a human, go after the human. If not, what’s the point?
Clearly it depends on the laws of the nanny state you live in.<p>In my nanny state, the government will appoint a team of therapists to manage the trauma, and employers must grant four weeks of mandatory paid pain-and-suffering leave. The government refunds your loss plus, potentially, the lost opportunity cost.<p>In the country next to us, they are savages. Can you believe each adult is responsible for the consequences of their decisions?
They have some meager protection against fraud and generally exploitative behavior. But if it’s a legitimate investment firm and you go in knowing what you are doing and they lose your money, THAT'S IT, your money is lost. Savagery.<p>Edit: this post is tongue in cheek, assuming there was no fraud; some of the comments suggest there could be subtle fraud involved.
Forgetting who's responsible for trading losses (the customer should know investing is never a sure thing), this sounds like a very crude algorithm. The best trading programs don't use machine learning and sentiment scraped from the internet, they use well defined strategies that are set by a competent human. The whole point of using AI is to use it for tasks humans aren't suited to; we're far better than machines at gauging sentiment over days and weeks. Computers are better at seeing numerical trends over shorter time periods, or identifying lesser known securities based on trading signals. Using a computer to try timing a major benchmark seems like a massive waste of time.
I don't get the whole issue with the who to blame conundrum when it comes to robots or AI. They're just glorified buttons, pressed by a human.
Pedantry: "Whom to sue", not "Who to sue".<p>(Pedantic tl;dr: I refuse to put the punctuation inside the quotes. I consider that an outright bug in English and it needs to be stamped out by rebellion.)
> "That may often be true, but when it’s not, or when they quickly go astray, investors want someone to blame"<p>Uh... don't get me wrong, but when you go to the casino and lose all your money, it's no one's fault but yours. Same way for stock markets.<p>Whoever is dumb enough to put money into essentially a gambling algorithm should be able to pay for the losses himself.
I'm glad it's not possible to sue like this in my country. It saves a lot of silly nonsense and makes people a bit more responsible about their actions, instead of doing random shit and suing willy-nilly when the shit hits the fan.