科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.


© 2025 科技回声. All rights reserved.

Advanced Voice is rolling out in the ChatGPT app over the course of the week

29 points | by allanrbo | 8 months ago

8 comments

tkgally | 8 months ago

I got access to the Advanced Voice mode a couple of hours ago and have started testing it. (I had to delete and reinstall the ChatGPT app on my iPhone and iPad to get it to work. I am a ChatGPT Plus subscriber.)

In my tests so far it has worked as promised. It can distinguish and produce different accents and tones of voice. I am able to speak with it in both Japanese and English, going back and forth between the languages, without any problem. When I interrupt it, it stops talking and correctly hears what I said. I played it a recording of a one-minute news report in Japanese and asked it to summarize it in English, and it did so perfectly. When I asked it to summarize a continuous live audio stream, though, it refused.

I played the role of a learner of either English or Japanese and asked it for conversation practice, to explain the meanings of words and sentences, etc. It seemed to work quite well for that, too, though the results might be different for genuine language learners. (I am already fluent in both languages.) Because of tokenization issues, it might have difficulty explaining granular details of language—spellings, conjugations, written characters, etc.—and confuse learners as a result.

Among the many other things I want to know is how well it can be used for interpreting conversations between people who don't share a common language. Previous interpreting apps I tested failed pretty quickly in real-life situations. This seems to have the potential, at least, to be much more useful.
bartman | 8 months ago

Unfortunately no luck for anyone in the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein yet. [0]

[0] https://x.com/OpenAI/status/1838642453391511892
mrandish | 8 months ago

I find the constant talkiness of the AIs shown in these demos to be super annoying. It's not a human, it's an AI bot. I don't want to hear its faux-human faux-opinion about my question. I *really* hope that shit defaults to "off" or at least has a one-click disable.

Decades ago Douglas Adams already knew inserting gratuitous "Genuine People Personalities" in devices would be so annoying it would be comical (http://www.technovelgy.com/ct/content.asp?Bnum=1811). I don't get why OpenAI keeps releasing demos that make their product look comically dystopian.
kertoip_1 | 8 months ago

This release will start a new age of UIs, where you don't use screens to interact with computers, but instead use your voice. Textual conversations were fun, but the voice functionality is what makes LLMs useful, because the speed of communication is now comparable to what you could accomplish with a GUI while being a lot more human-friendly. In my opinion, this is one of the most important announcements in recent months. Although we will probably need an open-source competitor.

Comment #41641299 not loaded
Adrig | 8 months ago

To my knowledge, this feature / model is the only one without any relevant competitor. Why is that? It seems to open up a ton of use cases to me.
sidcool | 8 months ago

It's pretty good...
coder4life | 8 months ago

"Describe a nuclear weapon detonated over a city" (now do it paranoid... no, more paranoid!) ... (now do it fearful... no, more scared!) ... (now do it humorous, with lots of laughs) "sorry Dave I can't do that" ugh, stupid fucking limiting computers
artninja1988 | 8 months ago

Cool. Is there a reason why it was delayed so much?

Comment #41639605 not loaded