For all those people asking why the VCs did not catch this: what sort of diligence would you do?

I don't know the company or the details. Assume two scenarios:

1. The target company is willing to lie, fabricate code, mix in TensorFlow, etc.

2. The company will not outright lie, and will answer honestly. However, they are very optimistic about their chances, and about their ability to deliver some sort of AI-enabled solution.

Right now they have -- let's hypothesize -- some sort of funnel, and they route bits of code to different developers. They think they will replace some of it. They are using various AI libraries.

Suppose you believe that even if the AI won't eventually code the whole app from scratch, it will make huge strides in certain areas that we don't even know about yet. These strides will (you believe) eventually reduce the cost of making an app dramatically. Suppose you think that this company is basically an exploration of those areas.

In other words, be generous to whoever undertook the diligence.

Now, how would you know? What steps would you take that you suspect these people did not?

(Because this is the internet and no one knows for sure: this is a real question, not a rhetorical attack on people asking why VCs were "tricked".)
> Duggal “was telling investors that Engineer.ai was 80% done with developing a product that, in truth, he had barely even begun to develop.”

Due diligence? I mean, I understand VCs are not the smartest bunch, but if you are investing $30M, please do the due.
A couple of comments:

1. "Fake it 'til you make it" is pretty accepted in the startup world. It's only a problem if you don't actually make it. If you do, then you're a hero---even if you made wildly unrealistic projections initially [and got lucky]. It's kind of unfair, but nobody said life is fair :)

2. Most software people (like me) assume that due diligence goes deep into the software. I've been through DDs at several companies, including my own startup: it's not that deep. I would say growth metrics, financials, legal structure, and the executive team are more important.

3. If you haven't read the Theranos story, read it. It's a good example of what can happen in the extreme edge case.
Ha! This company tried to recruit me a little while ago. The CTO walked me through the business model, and it was pretty obvious they were just a typical agency. I pointed that out, and he got defensive and tried changing the subject.

In their defense, there is a slight twist: they subcontract to hundreds of other agencies when those agencies have spare capacity. Essentially, they arbitrage on that.

But, yeah, the pitch that they use AI to build apps -- it's pretty ridiculous. They don't. Even with a very open mind to that phrasing, it's still a huge stretch.
As someone who used to work at one of these places: shocking.

It pretty much always goes this way. They pretend it is AI; then, when it comes out that it is pretty much all humans, they pivot to admitting it is "human-assisted".

The humans were genuinely creating data that was being fed back in -- that wasn't a lie. Engineers would have to poke at the bot a bit to get it out of corners it would occasionally get itself into.

The big issue is the VC nature of the business. You are fighting a shot clock on an extremely hard problem. So you have to rush things out to get to the next step, then realize at the next step that all of the data you collected, oops, can't be used because there was a small issue.

Or maybe you realize a model was inaccurate and has to be rebuilt.

I truly don't think a VC-funded true AI company is possible, especially for hard and fairly unbounded problems (speech is one thing; engineering is just... that's insane).

If someone made a sustainable AI company that could run indefinitely, that company would have a huge shot precisely because of that financial position.
I recently wrote two blog posts that touch on this. I honestly think many people cannot tell real automation from "a box full of little elves with a tech interface." (I often compare it to the MIB2 scene where Will Smith opens the *automatic mail sorting machine* and reveals not robotic parts but a multi-armed alien rapidly flinging mail.)

Realizing that has made me less aggravated about certain things. It also makes me wonder whether founders are genuinely being intentionally deceptive or are just unclear where to draw that line themselves.

How much AI inside the box do you need to qualify as an AI company when advertising what you do and wooing VC money? I bet some people honestly don't know, and some of those people may be in decision-making positions at such companies.

Serious tech people may be clear on that, but most companies involve more than just tech people. If your PR people don't really get it, and your tech people don't have adequate power to insist "You cannot market the company this way," then it will get sorted out in ugly headlines and court cases and the like.
"The company claims its AI tools are “human-assisted,” and that it provides a service that will help a customer make more than 80 percent of a mobile app from scratch in about an hour"<p>By the 80/20 rule, that would no doubt be the 80 percent that takes only 20 percent of the time to write; the remaining 20 percent that the tools can't do is what takes 80 percent of the time to write.
The majority of AI startups start with a manual approach, to generate a training data set for future algorithms...

Plus, it is a faster way to validate demand for a given business model.
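A minimal sketch of what that manual-first loop can look like (the function, schema, and file name are hypothetical, invented purely for illustration):

    # Human-in-the-loop logger: a human does the task today, and each
    # decision is saved as an (input, label) example for a future model.
    import csv
    from datetime import datetime, timezone

    LOG_PATH = "training_data.csv"  # assumed output location

    def route_request(request_text: str, human_decision: str) -> str:
        """A human routes the request now; the logged pair trains tomorrow's model."""
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow(
                [datetime.now(timezone.utc).isoformat(), request_text, human_decision]
            )
        return human_decision

    # Usage: the operator picks a team; the choice doubles as a training label.
    route_request("Build me a photo-sharing app for iPhone", "ios_team")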
One thing I've noticed in fundraising is that many potential investors almost expect you to have some AI-driven solution.

Many don't even know what AI is, and wouldn't be able to sniff out bullshit no matter how much due diligence is involved. Dumb money is flowing in, as long as you have a great pitch and a sleek presentation.
Contrarian view: read the article top to bottom -- there is no fraud here. This is exactly how it should be done.

VCs don't know shit about AI, and you can't expect them to.

Anyone building a cutting-edge AI product SHOULD NOT build it before selling it.

First use humans to build/sell the product, and then in parallel train the AI to take over. Often the training phase is best done using the human taskers.

The CEO, Sachin Dev Duggal, is doing it exactly right. Anyone claiming otherwise, including the journalist who wrote this post, doesn't know what they are talking about.
I wouldn’t see it as a problem if human actions were systematically recorded into a structured dataset to be used as training data.

But it seems from the article that the labor is not used for this purpose at all.
So, question:

Why is this fraud, but Uber isn't?

This company claims they're using humans to build apps while they develop an AI platform out of hand-wavium.

Uber claims they're using humans to drive cars while they develop self-driving cars out of hand-wavium.

Seems like the same model to me.
LOL, obvious #MagicalPixieDust peddler is obvious. Real AI is currently three-to-eight years away, just as it has been for the last 40 years. They shoulda just said it uses "computers".

In the meantime, you know what *does* work here and now? Building up a domain-specific language to the level of that domain's expert users, empowering those users to tell their machines what they want without requiring a CS degree to do it (see the sketch below).

Small steps make Progress.
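A minimal sketch of that DSL idea in Python (the screen/field/button vocabulary is invented for illustration, not any real product's API):

    # A tiny internal DSL: domain experts compose app screens from domain
    # words (screen, field, button) instead of general-purpose code.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Screen:
        title: str
        widgets: List[Tuple] = field(default_factory=list)

        def text_field(self, name: str, hidden: bool = False) -> "Screen":
            self.widgets.append(("field", name, hidden))
            return self

        def button(self, label: str, action: str) -> "Screen":
            self.widgets.append(("button", label, action))
            return self

    # The "program" reads like the domain, not like computer science:
    login = (Screen("Login")
             .text_field("email")
             .text_field("password", hidden=True)
             .button("Sign in", action="auth.login"))
    print(login)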
Disclaimer: I'm a VP of Engineering at Engineer.ai.

We actually wrote a blog post a little while ago that might answer a lot of the questions I'm seeing here:

https://blog.engineer.ai/a-little-bit-about-ai-and-more-straight-from-the-builders-mouth/
I like where this is going. Almost daily now, we're seeing reports of "AI startups/companies/products/features" getting unmasked. Technical people knew it all along, but corporate-speak, prefabricated demos, half-baked products and puff pieces were slowly inflating that bubble. Glad it's bursting.
If I were using that company and found out after the fact that they were mostly people, I might feel a little misled, but I also kind of wouldn't care. AI is a hot buzzword, but what I really care about is: can I input resources (time, money, unpolished diamonds, whatever) into one end of your black box and get predictable results out the other end? If the answer is yes, do whatever you want (in an ethical manner). Whatever you're building, whether it's powered by people, software, IBM Watson, or free-range chickens pecking buttons for treats, I'm happy if it works at a price I care to pay.

Until we've truly built self-replicating machines, I just assume whatever you're selling me requires a lot of people to stay competitive anyway. There's no farm-to-table AI raised by AI farmers yet.
Any language or system sufficiently detailed to accurately describe the steps necessary to solve the problem turns into a programming language.

A very large number of companies have tried to automate software development, with little success.

What is supposed to make these folks special?
The AI part is probably in the configurator -- at most, spec generation. Development is still done by humans.

https://imgur.com/a/hlsALdj

So: a fancy new SAP, but with cheap consultants.
I'm pretty sure we all already knew they were using humans nine months ago. Take a look at the comments here:

https://news.ycombinator.com/item?id=18391280

It seems like they were fairly explicit about it, so I'm not sure the outrage is justified. komali2 even noted explicitly, "There doesn't appear to be AI involved. A very good business model, but no AI."
VC funding needs to stop. It's a complete cancer on the software industry: all this money going toward half-baked promises that are completely overvalued, only to fund companies for decades that never turn a profit.

Maybe I'll hire an animator or something, go to VC firms, and ask them for money by showing them an animation of a flashy new product I've never designed. Better than working for an honest living, it seems.
A similar approach is used by a lot of self-proclaimed self-driving car companies. They have a driver and an engineer in the front seats, but they can't stop themselves from saying "we have self-driving cars on the streets" :-)

Also, the same pattern shows up with "cloud-hosted" companies. It might be true these days, but back in the day a lot of them claimed to be hosted in the cloud to look cool when they were actually using colo data centers.
> The company was sued earlier this year by its chief business officer, Robert Holdheim, who claims the company is exaggerating its AI abilities to get the funding it needed to actually work on the technology. According to Holdheim, Duggal “was telling investors that Engineer.ai was 80% done with developing a product that, in truth, he had barely even begun to develop.”

Ouch.
This reminds me of a couple of KYC companies, the ones that help you check a user's passport and other docs.

They talk a lot about algorithms, but when they demoed it to me, it came out that they actually send my picture to India for a human to look at. There's literally a 24-hour service with real people there doing the "image recognition".
The HN title is "AI startup that raised $30m claims to automate app making just uses humans". That's a painful and confusing sentence. The real title is "This AI startup claims to automate app making but actually just uses humans". Can someone set a more grammatical and accurate title?
They contacted me about a Software Engineer position two months back. I checked the Glassdoor reviews; the majority state that the CEO is not a person you would like to work with, and several say it is just manual labor, no AI, and that everything they market is fake. I am glad I trusted those reviews.
I was attempting something like this, but the company paying me to do it lost patience around day 30, by which point I was only able to identify widgets visually in mockups using past training data. That was a nice step, but working out what to do with those widgets contextually got pretty rough.
I'm happy we are starting to move on from all the AI hype and BS. Hopefully some of that VC money will start shifting to something useful. Mitigating climate change, or educating children, or feeding children ... Nah. Just kidding. VCs just want to pretend they are Tony Stark.
Hmm, I met these folks in Lisbon late last year, at a web conference. They did tell me it's humans doing the building, and their play was to build MVPs quickly with AI APIs, which I thought was honest and useful. Of course, I'm not a VC :D
> The number of companies which include the .ai top-level domain from the British territory Anguilla has doubled in the last few years, the WSJ reports.

This sounds like some statistics manipulation. Why limit yourself to Anguilla?!
I'm always amazed by the funds that companies manage to acquire from VCs without a (developed) product. Having recently read *Bad Blood*, I find it horrifying how often similar situations arise.
Did they update their website? Because as it stands now, it's clear that they're a standard agency connecting developers to people who want work done, with some vague stuff about AI helping to match them.
They already tried this kind of thing in the late 18th century:

https://en.wikipedia.org/wiki/The_Turk
At GitStart, we use a global pool of devs, and we say that upfront.

We have still deployed a ton of models to improve quality and SLAs, but we embrace our human nature openly.

This is bad faith taken to the extreme.
Relevant xkcd: https://xkcd.com/2173/

> "Yeah, I trained a neural net to sort the unlabeled photos into categories." [...] Engineering tip: when you do a task by hand, you can technically say you trained a neural net to do it.
Good grief.

When will people wake up and realize that today's AI is just capable of "curve fitting"?

Yes, that is a bit of a simplification, but not far off.

Neural networks depend on backpropagation. They are really just another type of optimizer for maximum likelihood, using gradient descent. They work better on high-dimensional, nonlinear data than the methods that came before.

But if the function you are attempting to model is non-differentiable, neural networks won't help you.

They certainly aren't capable of performing magic tricks like writing an app for you.
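Concretely, that "curve fitting" looks something like this toy sketch: a one-hidden-layer network fit to noisy sin(x) by gradient descent on a squared-error loss, with the backpropagation written out by hand (illustrative code only, nothing from the article):

    # Fit f(x) = W2 @ tanh(W1 x + b1) + b2 to noisy sin(x).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

    hidden, lr = 32, 0.05
    W1 = rng.standard_normal((1, hidden)) * 0.5
    b1 = np.zeros(hidden)
    W2 = rng.standard_normal((hidden, 1)) * 0.5
    b2 = np.zeros(1)

    for step in range(5000):
        h = np.tanh(x @ W1 + b1)             # forward pass
        pred = h @ W2 + b2
        err = pred - y                       # d(0.5*MSE)/d(pred)
        # backpropagation: chain rule, layer by layer
        gW2 = h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1 - h ** 2)     # tanh'(z) = 1 - tanh(z)^2
        gW1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
            p -= lr * g                      # gradient descent step

    print("final MSE:", float(((pred - y) ** 2).mean()))

Every update above exists only because the loss is differentiable in the parameters -- which is exactly the limitation the non-differentiable case runs into.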
Anyone remember SpinVox?

https://kernelmag.dailydot.com/features/report/2573/spinvox-the-shocking-allegations-in-full/

My eyes popped open when I read who the author of this was! Utterly loathsome -- but apparently doing some journalism in 2012.
AI is the biggest fraud of the 21st century, especially Deep Learning. Deep Learning is a bubble that has no application in reality. And I mean NONE. Even at cutting-edge FAANG companies that claim to use modern AI techniques, Deep Learning is barely used, because it's simply not reliable enough for real datasets. Classical statistical techniques, along with human domain expertise, are what run the world -- not new-fangled, hyped-up stuff.