In their conversation, they bring up mobile as a recent platform shift that caused a lot of disruption. But I think it's interesting to remember how long that took. The iPhone was released in 2007, but the real winners of mobile were launched years later -- Uber (2010), Snapchat (2011), and TikTok (2016) -- and those winners took several more years to even start to gain true traction in the market.<p>I don't think a lot of people back in 2007 could have predicted that the biggest thing to come from mobile would be an app that let teens remix music videos and share them with their friends.<p>This is why I think it is a little pointless to try and create mental models for what products and features to build to capitalize on AI (though it can be fun). It's so early that we're not capable of understanding what's possible yet. If anything, we're probably at the viral "fart app" stage that mobile was in for its first few years.
As with all Hype Cycles: stick the buzzword into your company name (see "dot com" for examples), work it into every sentence of every conversation that you have, and benefit from the torrential flow of investment chasing said buzzword.<p>Ignore all else and get the company name in front of the cheque books as quickly as humanly possible. Product-market fit, MVP, bootstrapping and stealth are naughty words that have no place in Hype Cycles.<p>No further strategy required. However, for the advanced entrepreneur: be aware that all cycles have a bust phase - and this time it is <i>not</i> different.<p>(unless you write blog posts and "content" - in which case copy the article you wrote about <i>Product Strategy in the Age of NFTs</i> a couple of years ago and swap the crypto for AI - you will get lots of clicks and nobody will notice)
To me, these brainstorming sessions theorizing the future of AI tools always miss a key thing, which is that human beings are still human beings. They don't follow logical rules of adoption and they often rebel against the things you force them to do.<p>For example, they talk about AI-generated copies of your voice becoming the way people communicate with each other. But who wants to listen to a computer copy of someone else's voice? No one. Maybe it will replace the pizza shop guy answering the phone, but it certainly isn't going to replace real conversations between friends and family members.<p>I saw another app that uses a small number of family photos to generate the surrounding scene where the photos were taken. Again, it's just a gimmick – family photos have value because they are memories of real events, not because of the intrinsic nature of the photographic paper.<p>If I were a betting man, I would bet on a <i>major</i> backlash to this sort of "automate everything" approach and a serious counter-culture arising in the next decade or two.
Here is the problem. Every investor is going to ask what your moat is. What differentiates your Whisper -> Llama -> Midjourney Pipeline.ai from the next one? And the answer is, if you’re just making API calls, nothing. Sorry. There’s nothing stopping Jian Yang from creating newpipeline.ai in a weekend.
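To make "a weekend" concrete: a rough sketch of such a pipeline, using OpenAI-hosted models (whisper-1, gpt-4, dall-e-3) as stand-ins for the Llama and Midjourney stages purely because they share one Python client - the whole "product" is three off-the-shelf API calls:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def audio_to_artwork(audio_path: str) -> str:
        # speech-to-text
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=open(audio_path, "rb")
        ).text
        # LLM turns the transcript into an image prompt
        prompt = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"Write a vivid image prompt for: {transcript}"}],
        ).choices[0].message.content
        # image generation; returns a URL to the result
        return client.images.generate(
            model="dall-e-3", prompt=prompt, n=1
        ).data[0].url

Swap any stage for a competitor's endpoint and the only thing left that is "yours" is the prompt strings.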
Off the top of my head, here are a couple of things which could set you apart.<p>Customers. Having customers is an advantage over the next guy who doesn’t, because now you can start customizing your product for unique needs rather than having a generic CRUD app.<p>Custom models. A custom model means some kid can’t just replicate your app easily.<p>Unique data. Data which is infeasible for another company to acquire or replicate.<p>Special people. People who will give your startup an edge in creating all of the above.
One realization I had is that a tech advantage might in fact become a disadvantage. Consider companies that have invested heavily in building a technological edge. Google Translate, for instance, faces challenges because a simple prompt can overshadow its billion-dollar product (a minimal sketch of that below). Similarly, Grammarly’s competitive edge may now rely more on its momentum and user interface than on its underlying tech. As ChatGPT introduces new capabilities, countless products see their technological edge vanish. To illustrate, the introduction of the image input feature means that, with a single prompt, it could serve as a top-tier school homework assistant, a photo-based calorie counter, and a plant identifier all at once.<p>This dynamic calls into question the viability of ML research as a core business strategy. Take Midjourney, for example. They’ve made significant strides and achieved dominance with their advanced text-to-image generation technology. But if a product like DALL-E 3, or its successors, could render their entire offering redundant in a few short years, then it’s a tricky path for a company to take.<p>To me, this suggests that the actual "new strategy in the age of AI" is that tech companies need to transition from relying on their tech edge as their competitive advantage to relying on more stable moats - for example, the network effects rooted in two-sided marketplaces. It also hints that tech giants like Google, who above all relied on their tech advantage, could face existential challenges in the coming decade. A sort of win-or-die situation. Companies like Amazon, meanwhile, might be on more stable ground for now.
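The sketch I mentioned: a single chat-completion call is already a serviceable translator (assuming an OpenAI API key in the environment; the model choice and prompt wording are arbitrary):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    def translate(text: str, target_language: str) -> str:
        # one prompt stands in for an entire dedicated translation product
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user",
                       "content": f"Translate into {target_language}. "
                                  f"Output only the translation:\n{text}"}],
        )
        return response.choices[0].message.content

    print(translate("Wie viel kostet das Zimmer pro Nacht?", "English"))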
I've been saying this a lot over the past year as people obsess over 'moats' and whether a future model will make an idea obsolete, or whether it's even worth getting into something because Big Co is already working on it, or will be working on it:<p>You learn by doing.<p>There's so much value in actually making something. People forget how much is in the details, or how much something like good design can differentiate.<p>You can sit on the sidelines forever thinking that your idea isn't 'different' enough, but the ones actually making stuff, listening to users, gaining the end-to-end experience, will actually have a larger 'luck surface area'.<p>Even if your idea gets taken, or someone comes along and does it better, or cheaper – there's value in _trying_.<p>Specifically regarding AI: the models existed for quite a while, for free, for anyone to use in OpenAI's Playground. But suddenly they hooked it up to a chat UI and it blew up completely. You never know what the key thing is going to be. But if you sit around forever, you're guaranteeing failure.
This article praises AI for improving products, but what about the jobs it might take over, like customer service roles?<p>And can AI truly understand human emotions to handle sensitive customer issues or create artwork that resonates with people on a deeper level? There might be more to consider than just the cool tech.
I am headed in the other direction. Making something AI or a computer can't do.<p>Times are changing and I can't handle another hype cycle.<p>My CTO is already running around the halls talking to people about "have you done anything with ChatGPT in our projects...you should".
ctrl-F "eval" = no hits in article.<p>The reason there is so much debate on Gen AI is due to the emergence of unpredicted abilities.<p>Yes GenAI is insanely impressive - this is not a luddite argument. I have personally spent months on it, and continue to do so. ITS AWESOME.<p>However it really isnt going to do a tenth of the things people expect it to. Those emergent properties seem like actual reasoning, planning, or analysis.<p>Get to production data though, then the emperor has no clothes. Those "emergent" skills end up showing you how much correlation in text is good enough - provided the person reviewing the text is already an expert.
> While AI can streamline tasks in SaaS categories like sales and customer service, offering relief from repetitive work, the impact on project management is more nuanced.<p>Every single AI hype article includes some version of this sentence! "While AI can be helpful for the repetitive boring work that other people do, the impact on MY work is more nuanced."<p>It's basically the face-eating leopard meme. "AI wouldn't automate MY job" says president of AI job automation company.
My personal take as a PM is to find narrow opportunities for the tech where it makes the most sense and poses the least risk. One project we are looking at is to tune a model against our website with URL links, to make a more natural search function given the utter labyrinth of a website we currently have. Not insane, given that the two search giants are applying the same idea. We can reduce hallucination by validating the URLs it produces before they hit the user (a rough sketch of that below). Also build up a list of questions and not just search queries.<p>Other avenues are more human accelerators than replacements. I have been around long enough to know that if a tool presents a risk to someone's job, the tool often gets thrown down the stairs "accidentally". GE bought in hard to Google Glass back in the day and tried having it walk through procedures for complex repair processes. A great idea, if literally anyone in the field had asked for it.<p>I'm with many that the hype train hit hard for "AI" and blockchain, but LLMs for me do have real value and real application for some excellent use cases. I also find it an excellent sounding board for my own ideas, though the models tend to not want to disappoint you.
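The URL-validation step sketched roughly (the domain prefix and status-code policy here are placeholders; a HEAD request is enough to drop links the model invented):

    import requests

    ALLOWED_PREFIX = "https://www.example.com/"  # placeholder for our own site

    def validate_urls(candidate_urls: list[str]) -> list[str]:
        # keep only on-site links that actually resolve, so hallucinated
        # URLs never reach the user
        valid = []
        for url in candidate_urls:
            if not url.startswith(ALLOWED_PREFIX):
                continue
            try:
                resp = requests.head(url, allow_redirects=True, timeout=5)
                if resp.status_code < 400:
                    valid.append(url)
            except requests.RequestException:
                pass  # unreachable -> treat as hallucinated
        return valid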
Honestly, the way the models are improving, I don’t see a ton of “product” work needed anymore. Any UI is worse than a good AI assistant that I can chat/talk to. Why do I need your forms and data rendering if the AI assistant is smart enough to figure it out on its own?<p>I am a lot more pessimistic about the startup scene in this area.<p>Look at how ChatGPT can teach languages[1]; good luck building an AI-powered language learning app…<p>It gets worse for startups because Google and OpenAI have a ton more context about me. For example, in the language learning conversation Google can refer to my spoken samples from other places to improve the experience. And yet, no PM at Google needs to think of this; they only need to hook up the data and throw more compute at their models.<p>[1] <a href="https://twitter.com/dmvaldman/status/1707881743892746381" rel="nofollow noreferrer">https://twitter.com/dmvaldman/status/1707881743892746381</a>
It’s interesting: I feel excited in a way I haven’t in a long time, because wherever I look there is a possible side project where I can explore AI and LLMs. On the other hand, I would feel very scared to try to transform any of them into a business, due to some of the issues discussed in the article.
They mention that “mobile-first companies killed companies that weren’t mobile-first” - can anyone point to a good example? My memory is that “mobile first” was a big hype cycle. A few years later everyone fired their native mobile teams because they realised most businesses don’t actually need a mobile app.