Symbolic AI could never become great because it lacked the connection to the intricacies of the real world that you can only get from data. I think a symbiosis of symbolic techniques with LLMs, or multimodal autoregressive foundation models generally, will lead to our first legit "AGI-ish" agents. The LLM plays the role of a little gremlin inside the machine that provides the magic sauce: that tiny bit of general intelligence or common sense needed to connect arbitrary interfaces.
This is quite impressive. Especially the auto-correcting and error reporting. I always thought of Mathematica as cool but not worth the price. This LLM integration into the notebooks really increases its value and is making me consider giving it a try.
Nice, now we just need to integrate an AR headset and we can have a proper Tony Stark-style dictated 'research laboratory'.<p>edit: I spoke too soon; it actually has a module to publish an AR model object of the design you created, so this could potentially be usable directly with AR glasses.
Relies on GPT-4. Maybe there could be a multistage way to get similar automated API usage with smaller open-source models?<p>I wonder if it would be possible to train a 7B or 13B model to generate code in just one specific programming language, training it with example problem-input/program-output pairs. Then train another small model to translate natural language in a specific domain into an input for the coder model. And maybe a third to translate the coder model's output into a different, real programming language.<p>The point of this being that you can use smaller GPU instances and dedicate all of the limited capacity of each model to a narrower domain that may be more tractable for it.
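To make the idea concrete, here is a minimal sketch of that three-stage pipeline. The stub functions are purely hypothetical stand-ins for the small fine-tuned models described above (NL-to-spec, spec-to-narrow-DSL, DSL-to-real-language); the point is the staging, not the stubs.

```python
# Hypothetical three-stage pipeline: each stub below stands in for a
# small (7B/13B-class) fine-tuned model with a deliberately narrow job.

def nl_to_spec(prompt: str) -> dict:
    """Stage 1 (stub): domain model mapping natural language to a spec."""
    # A real system would call a small fine-tuned model here.
    return {"task": "sum", "args": [int(t) for t in prompt.split() if t.isdigit()]}

def spec_to_dsl(spec: dict) -> str:
    """Stage 2 (stub): coder model restricted to one tiny DSL."""
    return f"{spec['task']}({', '.join(map(str, spec['args']))})"

def dsl_to_python(dsl: str) -> str:
    """Stage 3 (stub): translator from the DSL into a real language."""
    name, rest = dsl.split("(", 1)
    args = rest.rstrip(")")
    return {"sum": f"sum([{args}])"}[name]

def pipeline(prompt: str) -> str:
    return dsl_to_python(spec_to_dsl(nl_to_spec(prompt)))

print(pipeline("add 2 and 3"))  # sum([2, 3])
```

Because each stage has a tiny, well-defined input/output contract, each model only ever sees its own narrow domain, which is exactly the tractability argument above.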
Wow, I just learned about this ARObject thing. You can long-press a QR code on an iPhone and see a 3D object in AR. Or, without a QR code: on an iPhone, try following this link [0] and then tapping the body of the page.<p>[0] <a href="https://www.wolframcloud.com/obj/yanz/Base/Temp/AR/201a538b-067d-4965-b012-4a367204b0c9/model.usdz" rel="nofollow noreferrer">https://www.wolframcloud.com/obj/yanz/Base/Temp/AR/201a538b-...</a>
All I actually want is an LLM that can adhere to a set of rules (a language spec, library/framework code) and then help me make stuff, rather than hallucinating arbitrary version numbers and leaving me in an infinite loop of trying every iteration of the code myself in an IDE.
I used to subscribe to Wolfram Cloud/Desktop. I liked it, but never really loved the language itself.<p>I tried a Chat Notebook on Wolfram Cloud this morning and asked it to write a script to fetch data from DBpedia and present it. It generated Wolfram Language code, which was very cool.
It's a bit unclear from this post: is this a locally running LLM or one in the cloud, and if the latter, on whose servers? Does it use an existing service like OpenAI, or a completely new model specific to Wolfram?