So here's some more balanced feedback, if you will (playing with a language I know quite well already):

On the positive side: the bot itself has excellent voice quality (it still sounds bot-like, but people expect that) and makes very few pronunciation mistakes (I only noticed one). Its voice recognition is, I would say ... not great, but good enough (it recognizes well enough if I say simple, standard things, but it's easily confused if I use less common -- but also perfectly correct -- formulations). At least, voice recognition as such is not its weakest spot.

You can ask it questions about fairly detailed topics and get reasonably correct answers most of the time (as with GPT, which I'm guessing is its backend). It can sometimes give you very nuanced feedback on your diction and usage (it will notice when you're using a formulation that is correct but not standard, and give you a detailed explanation as to why).

However, here's what I don't like -- and what ultimately made me decide not to invest further time in the app for now:

(1) You can easily feel lost (in "Teacher" mode) because all it can do is answer questions -- it isn't guiding you anywhere, and you have to already know what questions to ask.

(2) Even so, I thought to myself, "Well, I can see how a good Q+A bot could be useful -- let's see what we can do with it." And here's where I was most disappointed.

I told it I wanted to practice my pronunciation. First I asked it to come up with some sample sentences, and then I thought of one of my own -- one of those sentences you would say to someone at the family dinner table to try to wow them with your language skills.

In both drills, I played a little trick on the poor bot: I *deliberately* mispronounced the sentences, the way a stereotypical Ugly American would -- basically mangling every single syllable in some obvious way, hoping to get a long list of corrections from the bot.

And it totally failed at this task. It kept saying "You said that perfectly! Well done!" or some variant, when I knew for a fact that my pronunciation was an epic fail, and that I would have greatly embarrassed myself at that family holiday dinner.

(3) But things got even weirder when I tried to tell the bot, "Wait, you're wrong, I actually made lots of mistakes." More than half the time it would get horribly confused, sometimes telling me again that I had said the sentence perfectly, sometimes "correcting" its own sentence.

So it's quite scatterbrained (just like GPT-4, when you try to make it backtrack and acknowledge some kind of mistake). It also misunderstands questions about grammar if I ask it to explain something it just said, or something I said in an earlier sentence.

Bottom line: it's a very mixed bag. Partly quite impressive, but far too often extremely frustrating to work with. Once you get past the "wow!" factor of working with a bot that seems well-spoken and (quantitatively) seems to know a lot about grammar and usage, the frustration takes over -- frustration at its scatterbrainedness and, more fundamentally, at never knowing whether I can really trust its answers.

On the whole it just grinds my gears ... like pretty much every other chat bot I've tried to get useful information out of.

That, and again -- it isn't guiding you anywhere.
If it can't reliably keep track of what it just said (or what I just said), it's definitely not going to be able to assess my progress, or tell me how to improve (beyond the random piece of generic advice it doles out when it doesn't have anything better to say).

Of course it won't. It's just a transformer bot, after all. And the technology is categorically incapable of doing that.