
Why is AI so useless for business?

384 points by mebassett, almost 5 years ago

69 comments

bo1024 · almost 5 years ago

AI gets such a bad rap. People only think of unsolved or sort-of-solved problems as AI, and don't give AI any credit for the problems it has solved, I guess because by definition those problems now seem easy.

Think how much Microsoft Office and its competitors have amplified business productivity over the last 20 years (yeah yeah, make your jokes too). Word and PowerPoint and Excel are full of AI, whether it's spellcheck or auto-fill, drawing algorithms like "fill with color", etc. So many things that were AI research papers of the 70s, 80s, or 90s. And those innovations continue today.

Logistics companies rely on huge amounts of optimization and problem-solving. Finding routes for drivers and deliveries, planning schedules, optimizing store layouts, etc. -- that's AI.

Employees use AI tools to improve their lives and productivity, whether it's a rideshare or maps app to get to work, speech-to-text, asking Siri for answers, translating web pages, etc. All of this comes out of research in AI or related fields.

How many office jobs *don't* require someone to use a search engine to find and access information related to a query? Information retrieval is one million percent AI.

Robotics and automated manufacturing have been huge for a long time -- robotics is closely connected to AI and related problems like control theory.

The best applications of AI have almost always been to support and enhance human decision-making, not replace it.
NalNezumi · almost 5 years ago

I'm going to go against the flow of most comments here and say that it's not always the business misunderstanding AI. Badly labeled data and unclear goals/expectations, sure, but the latter should be identifiable by a good ML/data scientist, if you have any insight into what you can actually deliver.

But most ML/data science people have no proper understanding of when plain traditional "coding" can solve the problem, rather than throwing fancy statistical models and buzzwords at it.

I'm not in business nor AI/ML, but in robotics. And as a person in robotics it's always the same experience working with AI/ML engineers: they first say they require a large amount of data, then make great promises (but never specific metrics, except maybe a percentage of success). Then they deliver a module that fails outside the *perfect* scope of deployment (works only in the lab at 1pm). This is of course never specified in the delivery. Also, crucially, it does not give a good indication of failure. The amount of ad-hoc work you need to add *after* the thing is delivered is just staggering.

On top of this reality, most ML/data science people's response to this entire process is to point at and blame the data, or say "well, you guys are expecting too much from this!", when they had ample time to outline the scope, limitations and requirements *before* they even started collecting the data.
ethanbond · almost 5 years ago

I've been working in the "real world business processes that companies are trying to AI-ify" realm for quite a while now. Pharma, cyber security, oil and gas production, etc.

This article doesn't mention a really, really straightforward factor for why AI hasn't invaded these domains despite billions of dollars being dumped into them.

An automated process only has to be wrong *once* to compel human operators to double or triple check every other result it gives. This immediately destroys the upside, as now you're 1) doing the process manually anyway and 2) fighting the automated system in order to do so.

99% isn't good enough for truly critical applications, especially when you don't know for sure that it's actually 99%; there's no way to detect which 1% might be wrong; there's no real path to 100%; and critically: there's no one to hold responsible for getting it wrong.
Macuyiko · almost 5 years ago

In my opinion, most of the issues around AI "failing" in traditional organizations are due to the following:

(1) Inflated expectations from higher/middle management which trickle down the organization. AI is seen as a high-profile case which has to lead to success (and a larger budget next year for my dept.)

(2) Data quality issues. The data itself has issues, but the key issue is lack of metadata and dispersed sources. Lack of historical labels (or them being stuck in Excel or on paper) is part of this as well. Big data without any labels is mostly useless, contrary to expectations.

(3) Most AI or ML projects are not about ML. In fact, they're mostly about automation or rethinking an internal or customer-facing process. In many cases, such projects could be solved much better without a predictive component at all, or by simply sourcing a 1-cent-per-call API. AI is somehow seen as necessary, however, without which our CX can never be improved. ("We need a chatbot" vs. "No, you just need to think about your process flow")

(4) Deployment issues and no clean way to measure ROI lead to projects being in development indefinitely, without anyone daring to stop them early. This is also related to orgs starting 30 projects in parallel (2m lead times with one to two data scientists for each), which all end up doing kind of the same preprocessing and all lead to kind of the same propensity model. No one dares to invest in long-term, deeply-impacting projects, as "we want to go for the low-hanging fruit first".
euix · almost 5 years ago

I have been in this space in the financial sector for two years. I think this article is mostly spot on. There is one other piece: typically, the places that can most benefit from innovation can get most of it just through automation and RPA, as it is now called. Basically, some guy filling a spreadsheet and copying it to someone else -- replace that with a bot.

But even that and other processes are difficult, because a lot of these corporate enterprises have a bazillion different systems that don't talk to one another. Forget data science or ML, you really just need a unified data view. Typically a use case comes in and an analyst manually pulls data from some system via the GUI (because that is all they interact with). A model is built based on that data set, and the project stops dead in its tracks from there on, because it's impossible to get an API to query for that data from its source system. That is a technology and business process project, and it will rapidly blow up into a clusterfuck.

The key competitive advantage of the so-called "technology" companies is really this: the ability to expose any part of your data storage and pipeline to any other part of the organization as an API. Every piece of software is built with that concept in mind.
normalnorm · almost 5 years ago

Because "Artificial Intelligence" is a label forever applied to the effort of replicating some human cognitive ability on machines. A well-known lament goes something like: "once it's possible, it's no longer AI".

Business is about exploiting what exists. This is why the buzzword is "innovation", not "invention". Incremental improvements, not qualitative jumps. So nothing will ever be really considered "Artificial Intelligence" once it is boring enough for business.

Scheduling algorithms are incredibly useful for business. There was a time when this was considered AI, but that was the time when they didn't work well enough to be useful.
laichzeit0 · almost 5 years ago

> why can't it read a PDF document and transform it into a machine-readable format?

> why can't I get a computer to translate my colleague's financial spreadsheet into the format my SAP software wants?

Because you probably expect it to be 100% or maybe 99.999% accurate, and we can't do that. Imagine "AI" translating someone's financial spreadsheet into a different format and dropping a zero somewhere. Oops... but your test-set accuracy is 99.8984%. Still not good enough. Just getting one thing wrong breaks everything. This is fundamentally different from clicking on image search and ignoring the false positives.
tragomaskhalos · almost 5 years ago

Our immediate goal should be to set our sights lower; forget ML, and instead improve and expand technologies like RPA (https://en.wikipedia.org/wiki/Robotic_process_automation), which is only "AI" in the narrowest sense.

Example: my wife is an admin in a school office, and a ludicrous amount of her and her colleagues' time is spent on replicating data entry between a multiplicity of incompatible systems. The Rolls-Royce / engineer's solution to this would be to provide APIs for all these disparate systems and have some orchestration propagating the data between them, except of course that's never going to be remotely practical; instead, dumbly spoofing the typing that the workers do into the existing UIs is a far more tractable approach. My (admittedly not first-hand) experience of these things is that they currently still require significant technical input in the form of programming and "training", but this fruit has got to be hanging a lot lower than any ML-based approach.
jerzyt · almost 5 years ago

A better question would be why AI works great for some businesses (e.g. Netflix, AirBnB, Uber, Waze, Amazon) yet fails miserably for others (JC Penney, Sears). In my view, the older companies are trying to strap AI on top of a traditional dataset which never collected any useful signals. The new companies designed their entire business concepts around data, and collected what's needed from the get-go. Sears may have 100 years' worth of useless data. AirBnB has about 13, but it's so much more informative. Amazon applies A/B testing all the time -- would anyone at Sears even know what it is?

A secondary issue with business data is that the vast majority of the features are categorical, for example: vendor ID, client ID, shipper ID, etc. These usually get one-hot encoded, and you end up with hundreds of features where there's no meaningful distance metric. Random Forest and XGBoost are about the only methods that produce somewhat rational models, but in reality they are good because they approximate reverse engineering of the business process.

And lastly, the hype far outweighs the possibilities, at least until the businesses are ready to re-engineer their processes, if it's not too late.
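The categorical-features point can be made concrete with a toy sketch (the vocabulary size is invented). One-hot encoding turns a single ID column into hundreds of binary columns, and the distance between any two distinct IDs is identical, so "nearness" carries no information:

```python
# Sketch: one-hot encoding a high-cardinality business ID (vendor, client,
# shipper...) explodes the feature space, and every pair of distinct IDs
# ends up equally far apart -- no meaningful distance metric.
def one_hot(value, vocabulary):
    return [1 if v == value else 0 for v in vocabulary]

vendors = [f"vendor_{i}" for i in range(500)]  # hypothetical ID vocabulary
a = one_hot("vendor_3", vendors)
b = one_hot("vendor_487", vendors)

print(len(a))                             # 500 columns from one field
print(sum(x != y for x, y in zip(a, b)))  # 2 -- same for ANY distinct pair
```

Tree ensembles cope because they only ever ask "is this column 0 or 1?", which is effectively rediscovering per-ID business rules.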
saalweachter · almost 5 years ago

I like to focus on the need for a spec.

The hardest part about programming is that you have to say what you want to happen clearly and precisely. You can't just say "I want a text editor"; you need to say all sorts of specific things about how the cursor moves through the text, how you decide what text is displayed on the screen when there is too much to show all at once, how line-wraps work, whether you acknowledge the existence of "fonts", and what happens when you click randomly on every pixel of the display.

The program usually shouldn't *be* the spec, but you can't write the program without actually specifying everything that can possibly happen over the course of that program executing.

One of the things that makes AI/ML so hard is that we don't want to write a spec, most of the time. If we could write a precise spec that a computer could understand, we've typically already written the program we want. There are some cases where we can, like games or math, but most of the time what we want to do is provide our AI/ML with a bunch of data and say "you figure out what I mean". "Label the pictures of dogs", "identify the high-risk loan applicants", and so forth.

Our AI/ML is actually solving two problems: first, it has to come up with a spec on its own, and then it has to create a solution to it.

And here is where things get rough: we generally don't know what spec our AI/ML came up with. Did we train a model to identify dogs, or to identify dog collars? Does this model find high-risk loan applicants, or people of certain ethnic backgrounds?

The problem with many real-world and business applications for AI/ML is that the spec is really, really important.
syllogism · almost 5 years ago

> why can't it read a PDF document and transform it into a machine-readable format?

It can definitely do that, but you might not like the cost/benefit analysis, depending on how many such documents you want to process. The costs are coming down steadily, though, as the tech improves. If you need to do millions of such documents, yeah, a model will probably be worth it. But if you need to do a few hundred, you probably should just do them manually.

The thing is, reality has surprisingly high resolution. When you give out a task like this to a person, they will likely come back to you for clarification about how you want to deal with some of the examples. Your initial requirements will be underspecified, or incorrect, in some details. When you are dealing with a person, these minor adjustments are pretty inconsequential, and so you don't really notice it happening. The worker might also have enough context to guess what you want and not ask, and just tell you the summary when they deliver the work.

If you're training a model, you need to work through all these annoying details about what you want, just as you would when creating any other sort of program. This adds some overhead, and places a lower bound on how many examples you'll need to have annotated -- you'll always need enough annotated examples to actually specify your requirements, including various corner cases. You need enough contact with the data to realise which of your initial expectations about the task were wrong.

So there will always be a lower scaling limit, where the automation isn't worthwhile for some small volume of work. The threshold is getting lower, but there will always be a trade-off.
raghava · almost 5 years ago

In fact, the author could actually dig further and look at the potential losses an "AI-fied" solution could bring:

1. Unexplainable and biased algorithms that cannot demonstrate fairness, causing firms to be dragged to court for discrimination where AI was used for decision-making that impacted lives/careers (lending, credit, recruitment, medical procedure suggestion, financial modeling, etc., just to name a few).

2. Biased algorithms producing small tainted outputs that later snowball into a larger loss, built up through slow leaks over time. (A few AI-based cloud app/infra monitoring systems have ended up deciding the wrong scale-out factor/sizing -- based on past history but not considering the real situational context/need -- resulting in a net loss over a longer period.)

3. Some AI-fied solutions outright denying users the level of control that's really warranted ("fully automatic, no manual" mode). This mostly happens where the buyer never uses the product firsthand but buys based on brochure/PPT walkthroughs, and real users are disconnected from the decision-making ivory towers. The risk is these systems getting in the way: instead of aiding productivity, they end up being another JIRA, a hassle one could really do without.
mark_l_watson · almost 5 years ago

I have been an AI practitioner since the 1980s, sort of a fan! That said, I like this article on several levels, most particularly for calling out possible AI products for business.

I lived through the first AI winter. As effective as deep learning can be, problems like model drift, lack of explainability, and getting government regulators to sign off on financial, medical, etc. models are very real problems.

Two years ago I was at the US Go Open, and during a social break I was talking to a lawyer for the Justice Department, and he was telling me how concerned they were about the legal problems of black-box models.
tempodox · almost 5 years ago

You might just as well ask why that miraculous cure for baldness is so useless. You let some used-car salesman talk you into believing that it actually works, but it doesn't -- no matter how much you want it to.
TulliusCicero · almost 5 years ago

> We've taught computers to beat the most advanced players in the most complex games. We've taught them to drive cars and create photo-realistic videos and images of people.

No, we haven't. I mean, we've made progress in those areas, but there's still a long way to go.

The best AI in StarCraft, AlphaStar, still can't beat the strongest players without relying on simply out-clicking them.

Driverless cars are still in the testing and development phase; none of them are smart enough yet for widespread deployment.
dave_sullivan · almost 5 years ago

In my opinion, it's because business operations aren't that complicated and people don't know what AI is.

By "not that complicated", I mean a decent CRM system to track information about the organization is approaching peak operational efficiency for most businesses. Most inefficiency I see after that is people/political problems.

By "people don't know what AI is", I mean that business owners are unable to describe their business problem as a supervised learning problem. If you can formulate your business problem as a supervised learning problem, then you can probably solve it with AI (which, yes, is really just a marketing term for supervised ML).

But most business problems are really "order taking" or "production/delivery" or "moving things through a funnel" problems, and thus AI isn't the solution; CRM or CRUD apps are the solution.
leto_ii · almost 5 years ago

One thing I haven't seen explicitly mentioned is the (probably) intrinsic limitations of AI/ML in classifying/predicting human behavior. For a number of years I have worked on fraud-prevention tasks where the goal was to take in some information about a payment and decide whether it was fraudulent or not.

Even though what we were doing was primitive and could have benefited from a lot more ML, I suspect that even then the best you could have gotten would have been some sort of anomaly detection system that can catch a good share of the kinds of fraud you have seen in the past, but will never be very good at detecting an intrinsic change in fraudster behavior.

On top of this, especially when dealing with humans, you are often expected to be able to explain why a certain decision was made. Setting payments aside, think of predictive policing or sentencing decisions. In those cases ML is essentially guaranteed to build in all sorts of biases regarding somebody's race, gender, place of living, etc.
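A minimal sketch of the kind of anomaly detection described above makes the limitation visible: it only flags deviations from the past, so novel fraud that looks statistically "normal" sails through. All numbers are invented for illustration:

```python
# Flag a payment whose amount deviates strongly from historical amounts
# (a simple z-score rule -- a stand-in for fancier anomaly detectors).
from statistics import mean, stdev

def is_anomalous(history, amount, threshold=3.0):
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) / sigma > threshold

past_payments = [20, 25, 22, 30, 18, 24, 27, 21]
print(is_anomalous(past_payments, 5000))  # True: wildly out of range
print(is_anomalous(past_payments, 26))    # False: fraud at a "normal"
                                          # amount is invisible here
```

The second call is the whole problem: a fraudster who changes tactics but keeps amounts ordinary produces no anomaly on this feature, and the model offers no explanation either way.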
dade_ · almost 5 years ago

I forget the name of that silly robot in the picture -- Ginger or something. Designed to take food orders and possibly deliver them to tables, they get dragged out to bank branches and need to be supervised by employees the whole time. It is bad enough that they can only provide the most trivial of information (worse than web search); IT also struggled to keep the things connected to WiFi. Multi-billion-dollar corps "doing AI".
s1t5 · almost 5 years ago

The article is just vaguely complaining about undefined problems, which makes it very difficult to defend or argue against. Are we talking about ML? Optimization problems? Operational research in general? Automation? And which tasks in which businesses? There's tonnes of useful stuff in each of those categories, and vaguely saying that it's all useless doesn't get you anywhere.
rscho · almost 5 years ago

I'm at a hospital where someone at long last got authorized to try ML on the clinical database.

The ethics committee required that prior to using any data, you have to make a static copy in another database. Their argument is:

1. They don't want Excel files flying around (which will happen regardless), and

2. To perform any analysis, you "obviously" have to have "structured data", which "obviously" means that you have to extract a CSV from the base system (MongoDB) and put that into an RDBMS (REDCap).

Go figure...
andrewmutz · almost 5 years ago

I disagree with the headline of the article, but I agree with its conclusion.

AI is in the process of having a huge effect on business software; it's just not the type of business software most of us think of when we think of business software.

Many people think of horizontal business software like MS Word, Excel, QuickBooks, Salesforce, etc. Products like these will be hard to automate significantly with AI, since every company uses them slightly differently. The products are intentionally designed to have as wide a TAM as possible, and so they are general-purpose enough to do anything business related.

There is another very large group of business software that people don't as readily know about, and that is vertical-specific business software. These products are not designed to have a wide TAM, but instead to be tailored to specific industries, and they provide a ton of value as a result. These vertical products are a perfect fit for automation with AI. The author says "Each business process is a chance for automation", and in these products, those business processes (and all their inputs and outputs) are structured and represented in software.

I am building AI systems at a vertical software company right now and am a big believer in the future of AI in these products. If you have ML expertise and are interested in working on such systems, feel free to email me your resume.
rb808 · almost 5 years ago

Most AI guys I know are just interested in playing around in pilot projects and learning stuff while they train to get a job at Google. It'll only be a few years before managers figure out there will be very little delivered.
valine · almost 5 years ago

The problem goes much deeper than researchers not having enough PDFs. If you look at where machine learning is successful, it's usually processing spatially related data. The pixels in a photograph have a spatial component where points near each other are more related than points far apart. The same goes for audio, and even text: words in a paragraph that are close to each other are usually more related than words far apart.

A spreadsheet has very little or no spatial component for a neural network to learn, and the location of an important number in a PDF probably has little to do with the number's significance. Without a spatial component to do pattern recognition on, a lot of the recent advancements in machine learning, like transformers or convolution, get thrown out the window.

There are some machine learning problems that can be solved with more or better data, but I don't think PDF-to-JSON is one of them.
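The locality assumption behind convolution can be shown in a few lines. A toy 1-D convolution: each output depends only on a small neighborhood of the input, which is exactly the inductive bias that spreadsheet-like data lacks:

```python
# Toy 1-D convolution: slide a small kernel over the signal; each output
# value sees only `len(kernel)` adjacent inputs (the locality assumption).
def conv1d(signal, kernel):
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

edge_detector = [-1, 1]          # responds only to adjacent differences
signal = [0, 0, 0, 5, 5, 5]
print(conv1d(signal, edge_detector))  # [0, 0, 5, 0, 0] -- fires at the step
```

Shuffle a spreadsheet's columns and nothing about its meaning changes, so a kernel like this has no stable neighborhood to learn from.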
MattGaiser · almost 5 years ago

AI is terrible at dealing with unexpected events. The games AI is good at are relatively deterministic, i.e. all possible outcomes are known. Replicating art and images is the same way.

If you could script new combat units in a video game on the fly, or tweak the rules slightly, the human would slaughter the AI, where an equally skilled human opponent would not lose so easily or even necessarily lose at all. You can see this in games like Galactic Civilizations, where you can build your own units and unusual combos confuse the heck out of the computer opponent.

Same with cars. The entire approach is currently based around exposing the AI to every possible outcome. I remember a seminar on AI safety where the vehicle AI had a problem with plastic bags in the air and would swerve to avoid them. No human would have an issue with that.

I worked in innovation for a bank, looking at automating all these kinds of things, and even spent a few days doing the jobs (and this was eye-opening). I was a developer, so not a manager looking at a job spec, but someone who would have actually done the work. 90% of the job could be automated, but 10% was dealing with wacky exceptions, many of which they had never specifically seen before. We had someone whose job was taking PDFs and extracting tables of income and expenses. They were generally standardized PDFs, so that seems like something good to automate, right?

Well, no, as tons of the financial advisors had added custom rows, which the person doing the input had to interpret into another column. It was quite eye-opening: while the job was menial data entry, it was nowhere near as mundane as one might imagine, as the guy was still making a judgment call on whether to classify "farm income" under a person's investment-income category or whether to classify it as regular income for the purpose of investment advice.

I have a friend currently on a robotic process automation internship with another bank. Same issue. When the RPA dev people actually go and do these jobs, they realize that the work frequently deviates from the approved job spec, with the people in them making small but significant judgment calls.

It is not a lack of knowledge about what AI can do in either of those cases. It is not a lack of data, as both banks have armies of people doing it and millions of clients. It is that for AI to do the job, all manner of other things would need to be standardized and reformed -- and if that were done, why use AI to solve the problem in the first place, as a lot of it would simply be computational?
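The 90/10 split described above is essentially a lookup table plus an escalation path. A sketch (the categories and row labels are invented for illustration, not taken from any real bank's schema):

```python
# The easy 90% of the data-entry job is a rule table; the value is in the
# 10% of rows needing a judgment call, which get escalated, not guessed.
KNOWN_ROWS = {
    "salary": "regular_income",
    "dividends": "investment_income",
    "rental income": "investment_income",
}

def classify(row_label):
    # Unrecognized custom rows ("farm income", ...) go to a human
    # rather than being force-fitted into a category.
    return KNOWN_ROWS.get(row_label.lower(), "NEEDS_HUMAN_REVIEW")

print(classify("Salary"))       # regular_income
print(classify("farm income"))  # NEEDS_HUMAN_REVIEW
```

The catch, as the comment notes, is that once the escalation cases are enumerated well enough to automate, the problem has become ordinary rule-based computation and no longer needs "AI" at all.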
tarsinge · almost 5 years ago

The essay makes a very good point about the availability of documents/data, but from what I have seen working on ERP projects and business processes, I don't think attacking this problem top-down from how businesses work is a good idea: a lot of the documents produced are mostly useless artifacts of human interactions, and most businesses have highly inefficient processes. Would you train an AI on current HR recruiting practices, for example? At its worst, you have businesses with so much sales power (a purely human process) that their internal processes can be an absolute mess and things still go fine. What could an AI trained on this data output? Meeting recaps that are never read, from meetings with already questionable value in the first place? Random reports and strategies meant to praise egos?

A huge part of the business world is service over service over human interactions that are far removed from the core value production and are side effects of those very interactions. Sure, at their core businesses must produce something, but apart from industrial or software processes, the enterprise world is mostly a giant social game, with success linked to the execution of sales, marketing and sometimes lobbying -- so not much to AI-ify from that angle.

Edit: just to be clear on the tone, it's not meant to be bitter; in fact, after a hard time learning this world, it can even be fun.
afeller08 · almost 5 years ago

AI is useful for business, and it's used in business. Hiring people to do menial mental tasks that would be particularly easy to automate is cheap. Hiring programmers and AI developers to automate those tasks is expensive. I know people who lacked a CS degree and could barely program who transitioned to work as programmers by getting hired to do a menial task and writing mundane, old-fashioned code to automate away their job, because that was the only way they could find to transition from menial tasks to a highly skilled occupation.

You don't even need AI to automate a lot of these tasks. Good old-fashioned programming can automate anything truly menial better than AI can, but if you're going to solve a real problem through code, there are only two ways to do it: 1) write the code yourself, or 2) spend millions of dollars hiring other people to do it.

The same is true for AI. In contrast, you can very often hire people to solve the same tasks for minimum wage, or if it's a sufficiently digital task, even less than that, through a service like mechanical chimp.

AI isn't used to automate away menial tasks because the economics of it doesn't make sense. None of the problems raised by the article are difficult to overcome; it's just expensive to hire people who solve them well.

This has nothing to do with technology and everything to do with the current organization of society and its economy.
paulus_magnus2 · almost 5 years ago

A rather lazy / uninformed article by someone who grossly underestimates the complexity of what it would take to completely automate the business processes of a company.

The title question is equivalent to asking: a robot cannot build a car, so why is it so useless for manufacturing?

Robots can build cars, but we need to arrange them in an "alien dreadnought".

Businesses are not prepared to pay the price of full automation; what they expect is to deploy some open-source AI run by a fresh graduate and fire all the office clerks the next day.
poulsbohemian · almost 5 years ago

When I think about companies I've encountered over the past few years, it seems to me like the AI problem has been two-fold: 1) they didn't need AI, and 2) they would have been better off listening to the human experts they already employed.

That is to say: when you look at business case studies of the kinds of problems businesses perceive they are going to solve, it's things like supply chain -- "We figured out that when it's going to snow, we should have snow shovels in stock!" Well, of course, and there are a whole lot of humans in your company who already know this, but they aren't being heard.

A lot of the places where AI has worked out, like spell checkers and various in-app automations, are, as the article and people in this thread indicate, exactly the kinds of problems more companies should focus their energy on. For example, I think about the various gyrations I've watched people go through in order to format their data the way they needed for presentations. Not AI in the theoretical sense, but definitely time-consuming tasks that exist in every business and that would save gobs of time and money if they could be automated away. But so long as there isn't a clear profit motive, good luck getting your project green-lighted.
roenxialmost 5 years ago
The obvious answer to me is that the hardware is only available to a handful of players and the libraries aren&#x27;t mature yet. PyTorch has been around for about 4 years; that isn&#x27;t enough time for a lot of people to have gotten comfortable with it.<p>The people who do have access to the software and hardware have found a lot of uses for the tech - I assume Google Image Search basically is AI.
michaelbuckbeealmost 5 years ago
It&#x27;s sneaking in, just not announcing itself.<p>I used to work for a really well-known medical dictation&#x2F;transcription, documentation, and coding (in the medical billing sense) company.<p>They&#x27;re using ML models all over for speech to text, document analysis, etc.<p>It enables some very real efficiency gains but it&#x27;s not positioned the same as something like IBM&#x27;s Watson and it&#x27;s somewhat ridiculous AI claims.
pjc50almost 5 years ago
Reminds me of the more general paradox: <a href="https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Productivity_paradox" rel="nofollow">https:&#x2F;&#x2F;en.wikipedia.org&#x2F;wiki&#x2F;Productivity_paradox</a><p>As one of the other comments points out, technology can only change productivity when you change the <i>process</i>, sometimes radically. And that may mean restructuring the business.
sabujpalmost 5 years ago
Because humans aren&#x27;t very good at articulating what they actually do, the people with little idea of how a task is done end up making the automation choices. Something that works at large scale sounds like a good choice, but an AI that handles one task doesn&#x27;t cover everything around it; you usually end up bolting on lots of extra machinery just to make the system fit into something less complicated.<p>If an AI were good enough to be genuinely useful for anything, people could stop spelling out how to perform each supposedly simple task; that&#x27;s the key difference between AI and humans at work.<p>In fact, when I asked people what they really wanted from a &quot;simple&quot; task, the answer turned out to be surprisingly complex, which was disappointing. That is why I&#x27;ve come to believe AI is really not that useful for anything other than a little bit more automation.
StandardFuturealmost 5 years ago
&gt; No group of researchers can train a &quot;document-understanding&quot; model simply because they don&#x27;t have access to the relevant documents or appropriate training labels for them<p>This is because you could rename deep learning as &quot;over-parameterized statistics&quot;. And statistics is just about building some model of the data. That is the only thing &quot;training&quot; a model is for: discovering&#x2F;optimizing a statistical distribution (a distribution is a generalization of a function). This means the entirety of deep learning is simply building highly complicated statistical models of a bunch of data.<p>It is unlikely that this is equivalent to general intelligence found in biology.<p>We could probably solve the AI problem if the <i>entirety</i> of all of research was not directed at deep learning. And it would also likely be far more valuable to any organization or individual.<p>But, that is just the 2 cents of some random HN commenter. So, I will keep dreaming.
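The "over-parameterized statistics" framing above can be seen in miniature: training a model, however large, is the same operation as fitting a line by minimizing squared error. A toy sketch in plain Python (the learning rate and data are arbitrary choices):

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Gradient descent on mean squared error -- 'training' as curve fitting.

    A deep net does the same thing with millions of parameters instead of two:
    it optimizes a statistical model of the data, nothing more mystical.
    """
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]  # generated by y = 2x + 1
w, b = fit_line(xs, ys)
```

Whether this recovers w = 2, b = 1 is a statistics question, not an intelligence question, which is the comment's point.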
aSplash0fDerpalmost 5 years ago
With AI, the production of a result is not the same as an answer&#x2F;solution.<p>Instead of a know-it-all, AI comes across as a guess-it-all (narrowing all results by the criteria programmed) and, as other comments have stated, AI is good at producing 99% maybes.<p>Looking past the hype, AI has thus far just added to the cacophony of noise among so-called experts. (AI is not an expert)
nostrademonsalmost 5 years ago
AI has plenty of useful applications for business. Fraud detection, forecasting, enterprise resource planning, logistics - all of these were considered AI at one point.<p>The specific problem the author cites - digitizing data in PDFs - shouldn&#x27;t even require AI. It can be solved better by just inputting data in digital form to begin with (like with a web form), and passing it around in a standardized machine-readable data format. But the commercial real estate industry is pretty backwards, and can continue to be pretty backwards because its core competency is <i>ownership of the real estate</i>, and digitization labor is round-off error compared to the profits generated by it. It&#x27;ll take a major recession (and this current coronapocalypse may qualify) to create selection pressures to weed out inefficient firms, and until that happens there&#x27;s no incentive for them to upgrade their processes.
thrower123almost 5 years ago
Mostly because business isn&#x27;t really that hard, and we&#x27;re still struggling to even define the rules it operates by.<p>AI always looks cool, but isn&#x27;t very useful in practice. The past ten years feel like everything is keynote-driven development: get something nifty-looking that demos well and try to shoehorn it into a business case.
jqpabc123almost 5 years ago
Because at this stage of the game, &quot;artificial intelligence&quot; is still an oxymoron.<p>It&#x27;s really just a database developed through trial and error (aka &quot;training&quot;) that we &quot;hope&quot; contains enough differentiating data points to produce a reasonable, weighted &quot;best guess&quot;.
dcolkittalmost 5 years ago
I forget who first made this argument, but it basically was a response to critics of philosophy. Critics would challenge the field by asking if philosophy has made any real contributions to human knowledge. Has it actually discovered anything that&#x27;s both non-obvious and conclusively true?<p>And apologists would respond that philosophy has made huge, unambiguous contributions. Only, once this happens, those fields tend to be no longer considered &quot;philosophy&quot;. Astronomy, physics, economics, and logic were all sub-domains of philosophy originally. Once they were formalized, with rigorous, specialized methods, they moved into their own standalone fields. But it was philosophers who laid the foundation. Consequently when we think of &quot;philosophy&quot;, there&#x27;s a lot of selection bias, because it&#x27;s basically the subset of open unsolved problems that remain.<p>I think there&#x27;s a close analogy here with what we think of as generalized &quot;business problems&quot;. There are many specialized sub-fields like finance, logistics, marketing, industrial psychology, and accounting. All of those things used to be thought of as a generic part of business. But eventually domain-specific methods and technologies led to the point where specialized practitioners unambiguously out-performed generalist C-suite executives.<p>Think of techniques like Markowitz portfolio optimization, or five-factor personality testing, or applying Benford&#x27;s law to profile for accounting fraud. Those are all examples where something like AI&#x2F;ML solved what at the time was a generic business problem. But afterwards it was just considered a success of the respective sub-field that those techniques helped create.<p>The point I&#x27;m making is that formal rules-based processes (I won&#x27;t use AI&#x2F;ML here because it&#x27;s so ambiguous, especially in a historical context) have had a long history of success in business. 
We just don&#x27;t recognize it because we&#x27;re begging the question. What we think of as &quot;generalized business issues&quot; is mainly those open problems that haven&#x27;t yet succumbed to specialized formal techniques.
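The Benford's-law example above is one of the few that fits in a few lines. A sketch of the idea (the tolerance cutoff is an arbitrary choice for illustration, not audit practice):

```python
import math
from collections import Counter

# Benford's law: P(first digit = d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(amounts):
    """Observed first-digit frequencies for a list of positive integers."""
    counts = Counter(int(str(a)[0]) for a in amounts)
    total = len(amounts)
    return {d: counts.get(d, 0) / total for d in range(1, 10)}

def looks_fabricated(amounts, tolerance=0.05):
    """Flag a ledger whose first digits stray from Benford's law.

    A toy screen, not a fraud detector: real audits use formal tests
    (chi-squared, mean absolute deviation) plus domain judgment.
    """
    observed = first_digit_freqs(amounts)
    return max(abs(observed[d] - BENFORD[d]) for d in range(1, 10)) > tolerance

# Uniformly spaced "invoice amounts" have roughly uniform first digits -> flagged.
fabricated = [100 + 9 * i for i in range(100)]
# Exponentially growing quantities (powers of 2) track Benford closely -> pass.
organic = [2 ** n for n in range(1, 101)]
```

This is exactly the kind of rules-based process that quietly moved from "generic business problem" into the forensic-accounting sub-field.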
yamrzoualmost 5 years ago
Many real world business processes assume a certain knowledge of the world and the relationships between entities, and not just a limited set of data points about the task at hand. Such kind of knowledge is not yet incorporated into today&#x27;s ML systems.
sabujpalmost 5 years ago
The author treats AI as if it were a single _thing_.<p>But whether something counts as AI isn&#x27;t the right question. If an AI-based version of a product actually does the job better than the alternatives, then it&#x27;s a useful tool and it can be improved further. And if you call a feature &quot;AI&quot; because it couldn&#x27;t exist without it, you&#x27;re really just saying that someone is using it and finding it useful.<p>The label captures only part of what matters. If your idea is useful in some specific way, the interesting question is simply how you built it, not what you call it.<p>So the honest answer to &quot;Why is AI useless for business?&quot; is that it&#x27;s the wrong question.
gwernalmost 5 years ago
Being in a human-minimum seems to be part of it. AI and software <i>could</i> do far more than they do, but the problem is that everything around it assumes human-evolved systems, which destroys the potential for software. So if you look at just what AI can be wedged into the cracks, you&#x27;ll conclude it&#x27;s largely useless, but then if you can replace whole systems, you get much larger gains: <a href="https:&#x2F;&#x2F;www.overcomingbias.com&#x2F;2019&#x2F;12&#x2F;automation-as-colonization-wave.html" rel="nofollow">https:&#x2F;&#x2F;www.overcomingbias.com&#x2F;2019&#x2F;12&#x2F;automation-as-coloniz...</a>
indymikealmost 5 years ago
People try to apply AI to high-risk problems that smart people can&#x27;t solve. When AI is applied to lower-risk problems that are usually easy for people to solve, we seem to get great results (e.g. recommendation engines).
synthcalmost 5 years ago
Machine learning requires a large high-quality dataset, which a lot of companies simply don&#x27;t have. Building one takes a lot of time and money. The gains don&#x27;t outweigh the costs in many cases.<p>Another problem is that machine learning models are never 100% correct and not easily interpretable, so they cannot be used for some critical processes. Good luck explaining to a customer why his account was blocked due to a false positive made by the AI.<p>I think there is still a lot of potential for boring symbolic AI; in a lot of domains you can get results quickly, reliably, and if the AI is wrong it&#x27;s easy to debug.
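The "boring symbolic AI" point above is worth a sketch. A hypothetical rule system for blocking transactions (field names and thresholds are invented), where, unlike an opaque model score, every decision carries a reason a support agent can read back to the customer:

```python
def review_transaction(txn):
    """A transparent rule system: every decision carries a human-readable reason.

    Unlike an opaque ML score, a false positive here can be explained to the
    customer and the offending rule fixed directly.
    """
    rules = [
        (lambda t: t["amount"] > 10_000, "amount exceeds 10,000 limit"),
        (lambda t: t["country"] not in t["allowed_countries"],
         "country not on customer's allowed list"),
        (lambda t: t["attempts_last_hour"] > 5,
         "too many attempts in the last hour"),
    ]
    for check, reason in rules:
        if check(txn):
            return ("blocked", reason)
    return ("approved", "all rules passed")

decision = review_transaction({
    "amount": 15_000, "country": "FR",
    "allowed_countries": {"FR"}, "attempts_last_hour": 1,
})
```

Debugging a wrong outcome means reading three lambdas, not retraining a model.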
resirosalmost 5 years ago
Here is a prediction: This article will not age well. Asking why &quot;AI is so useless for business&quot; in 2020 is like asking why I couldn&#x27;t easily order clothes from the internet in 1994. The question is simply 5-10 years too early. The AI startup rush started maybe 2 years ago. PyTorch (the Netscape of ML) was only released 3 years ago! It&#x27;s simply too early to make any judgment.<p>Let&#x27;s wait 5 years and see. I predict that all the business processes he mentioned will be automated (maybe with Mechanical Turk oversight). In 10 years, most of the menial desk jobs will not exist.
sabujpalmost 5 years ago
The best thing about technology is that it keeps getting more sophisticated across the industry. It seems like there is some big disruptive force at work, but the real innovation is the technology being used to create and manage the most value. For a video game, for example, I can imagine a team using some AI (maybe for 3D assets) to generate the most valuable content, shipping those games, and then getting real value out of them.
Grimm1almost 5 years ago
Huh? Most major companies use a staggering amount of AI that makes them a butt load of money. -- Oh, the title was practically unrelated to the content of the article and was just to generate clicks, I see. Well, at least the article raises a good point about AI being used to solve menial tasks that let people focus on the larger creative aspects of their work as an assistive tool. That said, seeing ML push the boundaries of what &quot;menial&quot; (sliding goal post) tasks it solves is both massively cool and massively value generating.
plaidfujialmost 5 years ago
This article is really focused on the question “why is AI so bad at data extraction from PDFs when it can beat humans at Go?” and it does answer its own question toward the end. AI is very good at inverting simulations (chess, Go) because you can generate an infinite corpus of perfectly labeled data. It is bad at inverting document creation because there is no exhaustive MS Word simulator. Soon people will realize that applied AI is really an exercise in simulation design.
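The simulation point above deserves a concrete picture. For a game, the rules themselves are a perfect labeling function, so training data is unlimited and flawless; nothing comparable exists for documents. A toy sketch using tic-tac-toe:

```python
import random

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def label(board):
    """Perfect label straight from the game's rules: 'X', 'O', or '-' (no line)."""
    for a, b, c in WIN_LINES:
        if board[a] != "-" and board[a] == board[b] == board[c]:
            return board[a]
    return "-"

def make_dataset(n, seed=0):
    """A simulator is an endless source of flawlessly labeled training pairs --
    exactly what no 'MS Word simulator' exists to provide for documents."""
    rng = random.Random(seed)
    return [(board, label(board))
            for board in ([rng.choice("XO-") for _ in range(9)]
                          for _ in range(n))]
```

Crank `n` as high as you like and every label is still correct for free; that asymmetry, not raw cleverness, is why Go fell before PDF extraction.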
perculisalmost 5 years ago
The article fails to address the bigger issue: if enough data is provided and the information becomes clear, what then is the result?<p>Without a general A.I. (which we are a long way from), how can the process work actually be done, and what will it mean if it can be done? We’d like to think that we’d be more productive... we’d create new and better “things”... but if history is any guide, we’d simply focus on straightforward profit-generating bullshit...
metreoalmost 5 years ago
A lot of this has to do with a move away from solid statistical workflows within an already trend-prone field. Computer Science departments have only widened the cultural gap between their work and that of the Statisticians (you know, the ones you need around to explain what your model is doing). Hiring for ML&#x2F;AI sounds better than hiring a bunch of Statisticians, who cannot be expected to deliver product.
CaptainActuaryalmost 5 years ago
There is an implicit but, in my opinion, wrong assumption here that AI should be able to do tasks like extract data from PDFs or convert Excel spreadsheets into some format. Nothing about these tasks requires intelligence - a fixed process solves the problem. Asking AI to extract data from a PDF is akin to asking it to develop a process from vague inputs - a far cry from what even the most advanced AI systems can do today.
StonyRhetoricalmost 5 years ago
This is clickbait (1) to promote his startup, Proda.<p>ML is used in business workflows all the time - to date, I have built several solutions that are being used for 53 clients, internal and external.<p>Here is what makes B2B ML hard: People have to trust it.<p>This isn&#x27;t some movie-recommendation engine, which spams you with more bank heist movies after you watch one. B2C ML systems can get it wrong, and customers are generally forgiving, because it&#x27;s a low stakes game. B2B applications are generally higher-stakes, because they impact business workflows, and if someone has decided to automate it, it&#x27;s probably a high-volume, critical workflow. It has to be extremely accurate, and demonstrably better than the equivalent human system.<p>The problem has to be well-defined enough that an ML system can act with high-accuracy, but not well-defined enough that a rule-system could replace it. Don&#x27;t use ML if a rule-system will do a better job. (For those scenarios, you can still put an ML anomaly-detection system to make sure the rule-system is still valid, and to guard against data input changes.) As just mentioned, the problem also has to be important enough and high-volume enough to warrant an ML solution. The percentage of problems that fulfill these criteria is not very large.<p>Now to actual ML development and deployment - the model is the tip of the iceberg. The rest of the iceberg is data acquisition, feature selection, data&#x2F;feature versioning, automated training, CI&#x2F;CD, model performance monitoring, et cetera. If ML is being developed inside a software development organization, this isn&#x27;t a problem, most people will understand this. If it is being developed within an embedded BI team inside a business unit - they will generally not have support&#x2F;runway needed to build the full system. The ML model might make it to production, but it will probably run naked, be brittle, and hard to retrain. 
A dramatic failure with business impact is just a matter of time.<p>There are a lot of low-code, no-code ML solutions that have been developed, or are being developed, and some of the supporting infrastructure as well, but, at the risk of sounding parochial&#x2F;protectionist, you need a rock-solid, end-to-end, integrated, data management system that is fully understood by whomever needs to pick up the phone at 2AM. It&#x27;s the interfaces that are hard, and chaining together a bunch of third-party black-box systems just means more interfaces and behavior you don&#x27;t control. Choose and use these systems wisely.<p>So yeah, B2B ML is hard. But it&#x27;s generally not due to lack of data, and transfer learning is generally not necessary. Understanding business processes is important, I agree, but that&#x27;s comparatively easy. It&#x27;s what consultants have been doing for decades. The hard part is choosing a problem where ML can add value, and then executing on it with enough integrity that people will actually trust it.<p>(1) Ok, clickbait might be harsh. But it is self-promotion, and the article itself is a collection of generic banalities. I feel it falls on the wrong side of the line.
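The "anomaly-detection system to make sure the rule-system is still valid" idea mentioned above can be sketched very simply. A hypothetical guard that compares the live mean of one input feature against its design-time baseline (real monitors use proper drift tests such as Kolmogorov-Smirnov or PSI over full distributions):

```python
import math

def fit_guard(training_values):
    """Record mean/stddev of a feature as seen when the rule system was designed."""
    n = len(training_values)
    mean = sum(training_values) / n
    var = sum((v - mean) ** 2 for v in training_values) / n
    return mean, math.sqrt(var)

def input_drifted(guard, live_values, z_threshold=3.0):
    """Alert when the live feature mean drifts far from the baseline.

    A minimal sketch of the monitoring idea, not production practice:
    it watches one feature's mean, nothing else.
    """
    mean, std = guard
    live_mean = sum(live_values) / len(live_values)
    if std == 0:
        return live_mean != mean
    return abs(live_mean - mean) / std > z_threshold
```

A cheap guard like this is the difference between a rule system that fails loudly when its input assumptions change and one that fails at 2AM with business impact.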
throaway435912almost 5 years ago
That&#x27;s easy: the business people don&#x27;t understand &quot;AI&quot; and the &quot;AI&quot; people don&#x27;t understand the business.<p>Well, the business people actually do understand AI, but their understanding of it is that it is a marketing tool they can use to sell to customers and&#x2F;or investors. And in terms of doing that, AI works very well.
otabdeveloper4almost 5 years ago
Because the so-called &quot;AI&quot; is only good for solving classification problems. Classification problems are great for art, but useless for business.<p>Business needs to solve the prediction (i.e., regression) problem, which is a completely different kettle of fish.<p>P.S. Of course by &quot;AI&quot; I mean the 2020 definition of &quot;multilayered neural network&quot;.
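The classification/regression distinction above is easy to show with one algorithm and two framings. A toy k-nearest-neighbors sketch (the demand data is invented): regression returns a quantity a business can plan around, while classification only votes on a bucket.

```python
def knn_predict(train, x, k=3, mode="regression"):
    """One algorithm, two problem framings.

    'train' is a list of (feature, target) pairs -- invented toy data.
    Regression averages neighbor targets into a number; classification
    merely takes a majority vote over labels.
    """
    neighbors = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    targets = [t for _, t in neighbors]
    if mode == "regression":
        return sum(targets) / len(targets)       # e.g. units to stock next week
    return max(set(targets), key=targets.count)  # e.g. "high" vs "low" demand

sales = [(1, 10.0), (2, 12.0), (3, 14.0), (4, 30.0), (5, 32.0)]
demand = [(1, "low"), (2, "low"), (3, "low"), (4, "high"), (5, "high")]
```

Same neighbors, different output type; which framing a business needs is a modeling decision, not a technology one.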
sabujpalmost 5 years ago
The reason seems fairly obvious to me. It&#x27;s about how we do things and do them well - and that looks very different from one field to the next. A doctor, for example, doesn&#x27;t just pattern-match: he asks about the symptoms you are experiencing, works out a diagnosis, and handles everything around it, like insurance and medication coverage, that an AI can&#x27;t do for him. Being a software engineer doesn&#x27;t substitute for any of that.<p>You might think that if you can make an AI work out of the box, you also have all the knowledge necessary to deploy it, just as a computer can make a database out of a document. A programmer could acquire all the necessary domain knowledge, but most software engineers are like carpenters - no amount of math or programming will change that. Having the knowledge to build a car that can take a picture is not the same as having the knowledge to build a car that can run the calculations for its wheels.
noxford1almost 5 years ago
This reminds me of DataRobot laying off people &quot;because of covid&quot; right after finishing a 300 million dollar round. These AI companies have lied about their valuations for a while and it&#x27;s catching up to them.<p>Ironically enough, after DataRobot did a layoff they also completed a large acquisition and hired more executives.
xondonoalmost 5 years ago
AI has become especially hard to define now that we see companies using advanced ML techniques to solve issues that were perfectly solvable through linear regression modeling, just because that’s the path for that sweet investor money.
dancemethisalmost 5 years ago
Because the very concept of &quot;business&quot; is supposed to be boring and not very intelligent (which doesn&#x27;t mean it doesn&#x27;t require knowledge in its field, it&#x27;s just not... alive).
Dotnaughtalmost 5 years ago
As per the required SEC disclaimer, past performance is not indicative of future results. AI is great at spotting patterns that conform to past performance. Not so much when things change.
orionblastaralmost 5 years ago
When I did business intelligence programming for a law firm we used statistics and six sigma to figure things out. It is all about crunching numbers on spreadsheets or linear algebra.
wmnwmnalmost 5 years ago
Well, regular intelligence isn&#x27;t all that useful in business either. There are so many factors in business success; intelligence is just one, and usually not a very big one.
olloalmost 5 years ago
I am an AI researcher, and I would love to investigate useful systems for business, but I have no idea about the business processes that this article mentions.
baybal2almost 5 years ago
&gt; Why Is Artificial Intelligence So Useless for Business?<p>Because it doesn&#x27;t make money? A big enough revelation?
PaulHoulealmost 5 years ago
Just ask a business person to get you a training set and it will be a while before you hear from them.
nnqalmost 5 years ago
Unpopular opinion: AI will start being useful to business when it will start being used to re-organize and re-architect core business processes... <i>Not as part of existing business processes, the &quot;augmentation&quot; will never offer too much!</i><p>It will be when AI systems will decide <i>who to hire</i> and <i>who to fire</i> and <i>who to promote and demote</i>, or what other companies to acquire or to merge with - profit will be increased, and almost everyone will <i>hate it!</i><p>It will be when huge fusioned megacorps AI systems will gain monopolies and replace free markets with centralized planning systems that will actually outperform markets (&quot;socialist planning&quot; can&#x27;t work because it can&#x27;t work with <i>humans</i> ...bringing in &quot;other&quot; types of intelligences will change the game, and nobody will call it &quot;socialism&quot; bc it will <i>not even try to benefit the people</i> this time around - and there will be markets still, just likely HFT-style ones that will block direct human actors from playing in them) <i>...and most will hate it and likely wage war against the societies that will embrace it this way!</i><p>You&#x27;ll see AI stops being useless to business, don&#x27;t worry ...but it will come with many consequences and side effects, our society as it is <i>can&#x27;t</i> handle it!
blickentwapftalmost 5 years ago
New technologies are overestimated in the short term and underestimated in the long term.
erfghalmost 5 years ago
Because AI is not really AI as was meant in the 60&#x27;s. The capabilities of computers, algorithms and human researchers were vastly overestimated back then.<p>But nowadays we find that the AI buzzword sells really well so we decided to lower and lower the bar until almost any algorithm qualifies as AI (also, any machine qualifies as a robot).
sarthakjainalmost 5 years ago
Not trying to just put in a baseless plug, but most of what you say can be refuted if you try out our product. Go here: <a href="https:&#x2F;&#x2F;nanonets.com&#x2F;ocr-api&#x2F;" rel="nofollow">https:&#x2F;&#x2F;nanonets.com&#x2F;ocr-api&#x2F;</a>
6gvONxR4sf7oalmost 5 years ago
The entire premise up front is false and probably a primary culprit. Expecting ML to do things it can&#x27;t yet by extrapolating from what it can do today (after reading current capabilities through a filter of marketing hype):<p>&gt;Today&#x27;s work in artificial intelligence is amazing. We&#x27;ve taught computers to beat the most advanced players in the most complex games. We&#x27;ve taught them to drive cars and create photo-realistic videos and images of people. They can re-create works of fine-art and emulate the best writers.<p>Today&#x27;s work in ML <i>is</i> amazing.<p>&gt; We&#x27;ve taught computers to beat the most advanced players in the most complex games.<p>Not true. You can spend a zillion dollars on self play to get an AI superhuman at games simple enough that you can simulate at many many times real life speed, but we&#x27;re just now learning to do games like poker, which intuitively seems less intellectual than Go or Chess, but so does starcraft and that came after those other games. In ML, placing tasks in order of achievable to currently impossible can be really unintuitive for lay people.<p>&gt; We&#x27;ve taught them to drive cars and create photo-realistic videos and images of people.<p>No again. We&#x27;re getting there with cars, but it turns out that it&#x27;s really really hard. Harder than playing superhuman chess! But people who play chess better than computers can drive cars better than computers. Weird, right? Again, in ML, placing tasks in order of achievable to currently impossible can be really unintuitive for lay people.<p>We <i>can</i> make photorealistic pictures of people, but we&#x27;re sorta limited (it&#x27;s complicated) to faces at high resolutions and just really really really recently getting them without weird artifacts. But the face is the most complex part of the body, right? 
So the rest should be easy!<p>&gt; They can re-create works of fine-art and emulate the best writers.<p>This is soooo much of a nope, and you know what I&#x27;m going to say anyway.<p>This xkcd is always relevant, even if the bar has moved. Maybe it&#x27;s even harder because the bar is moving quickly. <a href="https:&#x2F;&#x2F;xkcd.com&#x2F;1425&#x2F;" rel="nofollow">https:&#x2F;&#x2F;xkcd.com&#x2F;1425&#x2F;</a><p>&gt; In CS, it can be hard to explain the difference between the easy and the virtually impossible.<p>In ML, we&#x27;re really good at some tasks, so it seems like we should be good at adjacent tasks, but that&#x27;s not how it works.
sadmann1almost 5 years ago
The hardest things to automate are always the things that are so easy for us they don&#x27;t even register consciously