
How to keep up with AI/ML as a full stack dev?

79 points, by waspight, 10 months ago
I am working with full stack solutions consisting of node.js and React among other things. Different app solutions both for mobile and web. I most often can’t see any use case for AI/ML in our products but I suspect that it is easier to see opportunities when you have some experience with the tools. Any ideas on how I can keep up my learning in these areas so that I stay relevant as a software engineer in the long run? I know that it is a general topic but I think that it is important to stay up to date on the subject.

24 comments

localghost3000, 10 months ago
> I most often can't see any use case for AI/ML

I'm admittedly a skeptic on all this, so take what I'm about to say with a grain of salt: you should trust that voice. We're in a hype cycle. It was VR before, and crypto before that. Big tech is trying _very_ hard to convince you that you need this. They need you to need this tech because they are lighting billions on fire right now trying to make it smart enough to do anything useful. Short of a truly miraculous breakthrough in the next 12 to 24 months (very unlikely, but there's always a chance), investors are going to get fed up and turn off the money fountain.

It's always a good idea to learn and grow your skillset. I'm just not sure this is an investment that will pay off.
kwindla, 10 months ago
"Generative" AI/ML is moving so fast in so many directions that keeping up is a challenge even if you're trying really hard to stay current!

I'm part of a team building developer tools for real-time AI use cases (voice and video). I feel like I have three overlapping perspectives and goals re: this new stuff:

1. To figure out what we should build, I need a good understanding of what's possible and useful right now.

2. I talk to our customers a lot. Helping _them_ understand what's possible and useful today (and what that might look like six months or a year from now) is part of my job.

3. I think this is a step-function change in what computers are good at, and that's really exciting and intellectually interesting.

My AI information diet right now is a few podcasts, Twitter, and email newsletters. A few links:

- Latent Space podcast and newsletter: https://www.latent.space/podcast
- Ben's Bites newsletter: https://news.bensbites.com/
- Ethan Mollick newsletter: https://www.oneusefulthing.org/
- Zvi Mowshowitz newsletter: https://thezvi.substack.com/
- Rohan Paul on Twitter: https://x.com/rohanpaul_ai
ynniv, 10 months ago
My take requires a lot of salt, but… this time it's different.

Try writing a single-page web app or a command-line Python app using the Claude 3.5 chat. Interact with it like you might in a pair programming session where you don't have the keyboard. When you've got something interesting, have it rewrite the code in another language. Complain about the bugs. Ask it what new features might make it better. Ask it to write tests. Ask it to write bash scripts to manage running it. Ask it how to deploy and monitor it. Run Llama 3.1 on your laptop with ollama. Run Phi-3-mini on your phone.

The problem is that everyone says they aren't going to get better, but _no one has any data to back that up_. If you listen carefully, it's almost always based on a lack of imagination. Data is what matters, and we have been inventing new benchmarking problems because the models are too good at the old ones. Ignore the hype, both for and against: none of that matters. Spend some time using them and decide for yourself. This time is different.
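For the local-model part: once a model is pulled, ollama serves an HTTP API on localhost, so trying it from code takes only a few lines. A rough Python sketch, where the endpoint and payload shape follow ollama's documented `/api/generate` route but the helper names are my own; the actual call of course needs ollama running:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # ollama's default local endpoint

def build_payload(prompt: str, model: str = "llama3.1") -> dict:
    # stream=False asks ollama for a single JSON object instead of a token stream
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "llama3.1") -> str:
    # Requires `ollama serve` running with the model pulled.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = build_payload("Explain CSS flexbox in one paragraph.")
```

For the pair-programming-style back-and-forth described above, ollama also exposes an `/api/chat` route that takes a message list instead of a single prompt.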
inerte, 10 months ago
Use ChatGPT or Claude for your day-to-day questions, technical or not. You'll quickly figure out in which areas using Google is still better. ChatGPT can probably do more than you think and handle more complex requests than you're probably assuming.

Regarding your projects, either just brute-force AI into an existing one, or start a new project. For the former, the purpose isn't (exactly) to make the product better, but for you to learn. For the latter, the OpenAI and Anthropic APIs are good enough to mess around with and build a lot of different things. Don't let analysis paralysis stop you; start messing around and finding out.
godelski, 10 months ago
As an ML researcher, my advice for you is: don't.

ML moves fast, but not as fast as you probably think. There's a difference between innovations in architectures and demonstrations of them in domains (both are useful, both are necessary research, but they are different).

Instead, keep up with what tools are relevant to you. If things are moving fast and aren't sticking, then in a way they aren't moving fast, are they? You're just chasing hype, and you'll never keep up.

On the production side, I also see a common mistake of relying on benchmarks too heavily. I understand why this happens, but the truth is more nuanced. Just because something works well on a benchmark does not mean it will work well (or better than alternatives) on your application. ResNet is still commonly used and still a great option for many applications. Not everything needs a 1B+ transformer. Consider your constraints: performance, compute, resource costs, inference time, and all that jazz. Right now, if you have familiarity (no need for expertise) with FFNs (feed-forward/linear), CNNs, ResNets, and Transformers, you're going to be fine. Though I'd encourage you to learn further about training procedures like GANs (commonly mistaken for an architecture), unsupervised pretraining (DINO), and tuning. It may be helpful to learn diffusion and LLMs at a high level, but it depends on your use cases. (And learn whatever you're interested in and find passion in! Don't let need stop you, but if you don't find interest in this stuff, don't worry either. You won't be left behind.)

If you aren't just integrating tools and need to tune models, then do spend time learning this and focusing on generalization. The major lessons learned here have not drastically changed in decades, and that is likely to stay the case. We do continue to learn and get better, but this doesn't happen in leaps and bounds. So it is okay if you periodically revisit instead of trying to keep up in real time. Because in real time, gamechangers are infrequent (of course everyone wants to advertise being a gamechanger, but we're not chasing every new programming language, right?). Let the test of time reduce the noise for you.

> I most often can't see any use case for AI/ML in our products

This is normal. You can ham-fist AI into anything, but that doesn't mean it is the best tool for the job. Ignore the hype and focus on the utility. There's a lot of noise, and I am extremely sympathetic to this.

Look to solve problems and then find the right tool for the problem; don't look for problems to justify a tool (fine for educational purposes).
conwy, 10 months ago
If you're looking to maximise employability / pay scale, maybe you can do some small side projects, just enough to showcase curiosity/open-mindedness.

Examples:

- Build a useful bash script using ChatGPT prompts and blog about it
- Build a text summariser component for your personal blog using Xenova / Transformers.js
- Build an email reply bot generator that uses a ChatGPT prompt with sentiment analysis (it doesn't have to actually send email; it could just make an API call to ChatGPT and print the message to the screen)

Just a few small examples and maybe a course or two (e.g. Prompt Engineering for Developers) should look great.

However, I question how many companies really care about it right now. Most interviews I've done lately didn't bring it up even once. But that said, maybe in a few months or a year it will become more essential for most engineers.
stevofolife, 10 months ago
Here's the plan. Run the following models:

- Speech-to-text
- Text-to-text
- Text-to-speech
- Text-to-image
- Image-to-text
- Text-to-video
- Video-to-text

Start by integrating third-party APIs, and later switch to open-source models.

Implement everything using your preferred backend language. After that, connect it to a frontend framework of your choice to create interactive interfaces.

Want to use your own data? Put it in a database, connect that to your backend, and run these models on your database.

Once you've done this, you'll have completed your full-stack development training.
al_borland, 10 months ago
It sounds like you are aware of a technology and in search of a problem. Don't force it. Most things don't need AI. Personally, I find it a turnoff when a company forces AI into a product that doesn't need it. It makes me write off the entire thing.

I am in a similar position to you: I have a job where the application of AI isn't readily apparent. My posture during all this is to use AI products as they become available where they are appropriate and help me, but ultimately I'm waiting for the market to mature so I can see if and how I should move forward once the bubble pops and directions are clearer. I have little interest in running on the hamster wheel that is bleeding-edge AI development, especially when I already have a job that doesn't need it.
ilaksh, 10 months ago
I'm going to answer more from an applied perspective than 'real' ML.

Hacker News is a good source for news.

As far as learning goes, you have to build something.

I suggest you just start with the example code from the OpenAI or Anthropic website for using the chat completion API. They have Node.js code.

r/LocalLLaMA on Reddit is interesting.

On YouTube, see Matt Wolfe, Matthew Berman, and David Shapiro. They're not really developer-focused, but they will mention developments.

You can also search YouTube for terms like "AI Engineer", "agentic", or "LangChain".

To get motivated, maybe play around with the replicate.com API. It has cut-and-paste examples and many interesting models.

More ideas: search for "Crew AI" on X/Twitter.
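For reference, the chat-completion shape those quickstarts revolve around is small enough to sketch, here in Python rather than Node. The request and response layout below follows OpenAI's Chat Completions API; the model name is just a placeholder, and the canned response lets the parsing run without an API key:

```python
def build_chat_request(system: str, user: str, model: str = "gpt-4o-mini") -> dict:
    # The body shape POST /v1/chat/completions expects: a model plus a message list.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

def extract_reply(response: dict) -> str:
    # Responses put the generated text under choices[0].message.content.
    return response["choices"][0]["message"]["content"]

req = build_chat_request("You are a concise assistant.", "Summarize pgvector in one line.")

# A canned response in the documented shape, so parsing is demonstrable offline:
canned = {"choices": [{"message": {"role": "assistant", "content": "ok"}}]}
reply = extract_reply(canned)
```

Anthropic's Messages API is close enough in spirit (a model name plus a list of role/content messages) that the mental model transfers directly.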
heavyset_go, 10 months ago
Ignoring the hype, there are applications of ML that are suited to some general problems.

Would your product benefit from recommender systems, natural language input/output, image detection, summarization, pattern matching, and/or analyzing large datasets? If so, then maybe ML can help you out.

I'm of the opinion that if you need ML, you'll eventually realize it, because the solutions you find to your problem will be served by applications of ML. That is to say, while doing research, due diligence, etc., you will inevitably stumble upon approaches that use ML successfully or unsuccessfully.
joshvm, 10 months ago
The comments here are very focused on LLMs, which makes sense: that's where the hype is. If you really don't mind ignoring the nuts and bolts, you can treat all the large language models as black boxes that are getting incrementally better over time. They're not difficult to interact with from a developer perspective: you send text or tokens and you get back a text response.

It's definitely worth trying them out as a user just to see what they're capable of (and incapable of). There are also some pretty interesting use cases for tasks that would be ridiculously complicated to develop from scratch where "it just works" (ignoring prompt poisoning). Think parsing and summarizing. If you're an app developer, look into edge models and what they can do.

Otherwise, dip your toes into other model types: image classification and object recognition are also still getting better. Mobile image processing is driven by ML models at this point. This is my research domain, and ResNet and UNet are still ubiquitous architectures.

If you want to be sceptical, ignore "AI" and read "ML" instead, and understand that these algorithms are just another tool you can reach for. They're not "intelligent".
f0e4c2f7, 10 months ago
I'm not sure why, but it seems like most of the high-quality AI content is on Twitter. On average it seems to be around ~4 months ahead of HN on AI dev approaches.

I would suggest following / reading people who talk about using Claude 3.5 Sonnet.

Lots of people are developing whole apps using 3.5 Sonnet, sometimes with Cursor or another editor integration. The models are getting quite good at writing code once you learn how to use them right and don't use the wrong LLMs (a problem I often see in places other than Twitter, unfortunately). They seem to get better almost weekly now, too. Just yesterday Anthropic released an update where you can store your entire codebase to call as part of the prompt at a 90% token discount. That should make an already very good model much better.

Gumroad's CEO has also made some good YouTube content describing a lot of these techniques, but they're livestreams, so there is a lot of dead air.

https://www.youtube.com/watch?v=1CC88QGQiEA

https://www.youtube.com/watch?v=mY6oV7tZUi0
kukkeliskuu, 10 months ago
I have been very skeptical of AI; the smell of hype is strong in this one.

That said, I have started some experiments, and as with all hyped technologies, there is some useful substance there as well.

My recommendation is to start with some very simple, real, repetitive need, and do it with the Assistants API.

I started by turning a semi-structured Word document of 150 entries into a structured database table. I would probably have done it more quickly by hand, but I would not have learned anything that way.

I think the sweet spot for generative AI right now is not in creative jobs (creating code, business communication, content, etc.) but in mundane, repetitive things, where using generative AI seems like overkill at first.
geuis, 10 months ago
I'm a full-stack engineer who recently transitioned to a full-time backend role.

I'd suggest learning about pgvector and text embedding models. It seems overwhelming at first, but in reality the basic concepts are pretty easy to grok.

pgvector is a Postgres extension, so you get to work with a good traditional database plus vector database capabilities.

Text embeddings are easy to work with. There are lots of models if you want to run them locally or ad hoc, or just use OpenAI's or GCP's API if you don't want to worry about it.

This combo is also compatible with multiple vendors, so it's a good onboarding experience to scale into.
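To demystify the embedding side a bit: embeddings are just vectors, and retrieval is nearest-neighbor search, typically by cosine similarity. A toy sketch in pure Python (in a real setup the vectors come from an embedding model and live in a pgvector column; the 3-d vectors here are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|); 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-d "embeddings"; real models produce hundreds to thousands of dimensions.
docs = {
    "postgres tuning": [0.9, 0.1, 0.0],
    "cooking pasta": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]

# Retrieval = pick the stored document whose vector is most similar to the query.
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
```

With pgvector, the equivalent query is roughly `SELECT id FROM docs ORDER BY embedding <=> $1 LIMIT 5;`, where `<=>` is cosine *distance* (so smallest is most similar).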
supratims, 10 months ago
In our org (a large US bank) there has been a huge rollout of GitHub Copilot, and adoption has been very successful. For me it became essential to learn how it can help me in day-to-day coding/testing/devops, etc. Right now I am churning out Python code to parse a CSV and create a report. I had never learned Python before.
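The kind of script Copilot produces for that task is usually just a few lines of stdlib Python. A minimal sketch, with invented column names, of the parse-a-CSV-and-report pattern:

```python
import csv
import io
from collections import defaultdict

# Inline sample data; in practice this would come from open("report.csv").
RAW = """team,tickets_closed
payments,12
payments,7
fraud,5
"""

def summarize(csv_text: str) -> dict:
    # Aggregate tickets_closed per team using a DictReader over the header row.
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["team"]] += int(row["tickets_closed"])
    return dict(totals)

report = summarize(RAW)
for team, n in sorted(report.items()):
    print(f"{team}: {n}")
```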
kredd, 10 months ago
I'm in a similar boat, and most of the time I just make sure to take the new models for a test drive, using them here and there and figuring out their capabilities and shortcomings. It makes it much easier to smell the vapourware when I hear the news.
adamnemecek, 10 months ago
Current ML will be replaced by something fundamentally different. We are in the pre-jQuery days of ML.
bityard, 10 months ago
Are you looking to _use_ AI/ML, or to take up an interest in developing or deploying AI/ML? Because those are very different questions.

Off-topic, but today I encountered my first AI-might-be-running-the-business moment. I had a helpdesk ticket open with IT for an issue with my laptop. It got assigned to a real person. After a few days of back-and-forth, the issue was resolved. I updated the ticket to the effect of, "Yup, I guess we can close this ticket, and I will open a new one if it crops up again. Thank you for your patience and for working with me on this." A few seconds later, I got an email saying that an AI agent had decided to close my ticket based on the wording of my update.

Which, you know, is fine, I guess. The business wants to close tickets because We Have Metrics, Dammit. But if the roles were reversed and I was the help desk agent, seeing the note of gratitude and clicking that Resolved button would very likely be the only little endorphin hit that kept me plugging away at tickets. Letting AI do ONLY the easy and fun parts of my job would be straight-up demoralizing to me.
rldjbpin, 10 months ago
To put it bluntly, there seems to be a ton of gatekeeping in this field. But if you want to mix full stack with AI/ML use cases, I think it's a good idea to just keep track of high-level news and of services that let you interface between the latest functionality and an app.

There is enough space for creating user experiences built on top of existing work instead of learning how the sausage is made. But I'd urge you not to stick only to text/LLMs.
darepublic, 10 months ago
Try reading some material on deep learning, and try out open-source AI libraries like Detectron2 on cloud GPU servers (e.g. Colab). Learn some Python, including environment setup.
dasven, 10 months ago
Play first and try things out; ideas will bubble up. Trust your own ingenuity.
djaouen, 10 months ago
I find that I have no need for AI or ML. I just don't use it.
pseudocomposer, 10 months ago
There are two sides to this: using AI/ML/LLMs to augment your own development ability, and using AI/ML/LLMs _within_ apps you build.

For the former, Copilot-type implementations are pretty intuitively useful. I find them most useful as autocomplete, but the chatbot functionality can also be a nice, slightly better alternative to "talk to a duck when you're stuck." That said, I'll focus on the latter side (using AI/ML in your actual work) from here.

Generalized AI/ML/LLMs are really just a "black box API" like any of the others already in our toolbelt, be they Postgres, Redis, SSE, CUDA, Rails, hell, even things like the filesystem and C atop assembly. We don't need to know all the inner workings of these things, just enough to use the abstraction. You probably take for granted when to use a lot of these things at this point, but the reason we use any of them is that they're good for the specific problem at hand. And LLMs are no different!

What's important to recognize is the _types of problems_ that LLMs are good for, and where to integrate them into your apps. A pretty obvious class of these is parsing plain text into structured data to be used in your app. This is pretty easy to prompt an LLM to do. OpenAI and WebLLM provide a pretty straightforward common set of APIs in their NPM libraries (and other language bindings are pretty similar). It's far from a "standard," but it's definitely worthwhile to familiarize yourself with how both of them work.

For an example, I've made use of both OpenAI and WebLLM in an "Event AI" offshoot of my social media app [1], parsing social media events from plaintext (like email list content, etc.); feel free to test it and view the (AGPL) source for reference as to how I'm using both of those APIs.

For projects where you actually have money to spend on your LLM boxes, you'll probably do this work on the BE rather than the FE as demoed there, but the concepts should transfer pretty straightforwardly.

If you're interested in really understanding the inner workings of LLMs, I don't want to discourage you from that! But it does seem like really getting into that will ultimately mean a career change from full-stack software engineering into data science, just because both have such a broad base of underlying skills. I'm happy to be wrong about this, though!

[1] Source: https://github.com/JonLatane/jonline/blob/main/frontends/tamagui/packages/app/features/event/event_ai_screen.tsx | Implementation: https://jonline.io/event_ai
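The text-to-structured-data pattern above can be sketched without any particular vendor: ask the model for JSON matching a small schema, then parse and validate defensively, since models sometimes wrap JSON in prose. Everything here (the prompt wording, the field names, and the canned model reply) is illustrative:

```python
import json

# A schema-in-the-prompt approach; doubled braces survive .format().
PROMPT_TEMPLATE = (
    "Extract the event from the text below. "
    'Reply with only JSON: {{"title": str, "date": "YYYY-MM-DD"}}.\n\n{text}'
)

def parse_event(model_reply: str):
    # Models sometimes wrap JSON in prose; grab the outermost braces first.
    start, end = model_reply.find("{"), model_reply.rfind("}")
    if start == -1 or end == -1:
        return None
    try:
        data = json.loads(model_reply[start : end + 1])
    except json.JSONDecodeError:
        return None
    # Validate before letting it anywhere near your app's data model.
    if not isinstance(data.get("title"), str) or not isinstance(data.get("date"), str):
        return None
    return data

prompt = PROMPT_TEMPLATE.format(text="Board game night this Friday, Aug 30 2024!")

# A canned reply of the kind an LLM might produce:
reply = 'Sure! Here you go: {"title": "Board game night", "date": "2024-08-30"}'
event = parse_event(reply)
```

The `None` fallback is the important part: treat the model as an unreliable parser and decide up front what your app does when extraction fails.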
hansvm, 10 months ago
> suspect it is easier to see opportunities when you have some experience working with the tools

Yes, absolutely. The most effective way I know to develop that sort of intuition (not just in AI/ML, but in most subjects) is to try _and fail_ many times. You need to learn the boundaries of what works, what doesn't, and why. Pick a framework (or, when learning, you'd ideally start with one and develop the rest of your intuition by building those parts yourself), pick a project, and try to make it work. Focus on getting the ML bits solid rather than completing products if you want to get that experience faster (unless you also have no "product" experience and might benefit from seeing a few things through end-to-end).

> stay relevant in the long run

Outside of the mild uncertainty in AI replacing/changing the act of programming itself (and, for that, I haven't seen a lot of great options other than learning how to leverage those tools for yourself (keep in mind, most tasks will be slower if you do, so you'll have a learning curve before you're as productive as before; you can't replace everything with current-gen AI), and we might be screwed anyway), I wouldn't worry about that in the slightest unless you explicitly want to go into AI/ML for some reason. Even in AI-heavy companies, only something like 10% of developers even tangentially touch AI stuff (outside of smallish startups, where small employee counts admit more variance). Those other 90% of jobs are the same as ever.

> keep up my learning in these areas

In addition to the general concept of trying things and failing, which is extremely important (also a good way to learn math, programming, and linguistics), I'd advise against actively pursuing the latest trends until you have a good enough mentor or good enough intuition to have a feel for which ones are important. There are too many things happening, there's a lot of money on the line, and there are a lot of people selling rusty pickaxes for this gold rush (many intentionally, many because they don't know any better). It'll take way too much time, and you won't have a good enough signal-to-noise ratio for it to be worth it.

As one concrete recommendation, start following Yannic Kilcher on YouTube. He covers most of the more important latest models, papers, and ideas, and most of his opinions in the space are decent. I don't think he produces more than an hour per day of content (and at relatively slow speaking rates, so you might get away with 2x playback speed if you want to go a bit faster). Or find any good list of "foundational" papers to internalize (something like 5-20). Posting those is fairly common on HN; find somebody who looks like they've been studying the space for a while. Avoid advice from big-name AI celebrities. Find a mentor. The details don't matter too much, but as much as possible you'd like to find somebody moderately trustworthy whose expert knowledge you can borrow to separate the wheat from the chaff, and you'll get better results if their incentive structure is to produce good information rather than a lot of information.

Once you have some sort of background in what's possible, how it works, performance characteristics, and so on, it's pretty easy to look at a new idea, new service, or new business and tell if it's definitely viable, maybe viable, or full of crap. Your choice of libraries, frameworks, network topologies, etc. then becomes fairly easy.

>> other people saying to build something simple with LLMs and brag about it

Maybe. Playing with a thing is a great way to build intuition. That's not too dissimilar from what I recommended above. When it comes to what you're telling the world about yourself, though, you want to make sure to build the right impression. If you have some evidence that you can lightly productize LLMs, that's in demand right this second. If you publish the code to do so, that also serves as an artifact proving you can code with some degree of competency. If you heavily advertise LLMs on your resume, however, that's also a signal that you don't have "real" ML experience. Ideally it'll be weighed against the other signals, but you're painting a picture of yourself, and you want that picture to show the things you want shown.

> can't see any use case for AI/ML

As a rule of thumb (not universal, but assuming you don't build up a lot of intuition first), AI/ML is a great solution when:

(1) You're doing a lot of _something_ with complicated rules

(2) You have a lot of data pertaining to that _something_

(3) There exists some reason why you're tolerant of errors

I won't expand that into all the possible things it might mean, but I'll highlight a few to hopefully help start building a bit of intuition right away:

(a) Modern ML stuff is often written in dynamic languages and uses big models. That gives people weird impressions of what it's capable of. At $WORK we do millions of inferences per second. At home, I used ML inside a mouse driver to solve something libinput struggled with and locked up handling. If you have a lot of data (mouse drivers generate bajillions of events) and there's some reasonable failure strategy (the mouse driver problem is just filtering out phantom events; if you reject a few real events per millisecond, then your mouse just moves 0.1% slower or something, which you can adjust in your settings if you care), you can absolutely replace hysteresis and all that nonsense with a basic ML model perfectly representing your system. I've done tons of things beyond that, and the space of opportunities dwarfs anything I've written. Low-latency ML is impactful.

(b) Even complicated, error-prone computer-vision tasks can have some mechanism by which they're tolerant of errors. Suppose you're trying to trap an entire family of wild hogs at once (otherwise they'll tend to go into hiding, produce a litter of problems, and never enter your trap again, since they lost half their family in the process). You'd like a cheap way to monitor the trap over a period of time and determine which hogs are part of the family. Suppose you don't close the trap when you should have. What happens? You try again another day; no harm, no foul. Suppose you did close it when you shouldn't have? You're no worse off than without the automation, and if it's even 50-80% accurate (in practice you can do much, much better), it saves you countless man-hours getting rid of the hogs, potentially over a couple of tries.

(c) Look at something like plant identification apps. They're usually right, they crowd-source photos to go alongside predictions, they highlight poisonous lookalikes, the prediction gives lists and confidences for each option, and the result is something easy for a person to investigate via more reliable sources (genus, species, descriptions of physical characteristics, ...). I'm sure there exists _somebody_ who will ignore all the warnings, never look anything up, and eat poison hemlock thinking it's a particularly un-tasty carrot, but that person probably would have been screwed by a plant identification book or a particularly helpful friend showing them what wild carrots look like, and IMO the app is much easier to use and more reliable for everyone else, given that there are mechanisms in place to handle the inevitable failures.