Never eval() AI-generated text without a sandbox.
This should be one of the most obvious principles of AGI safety, especially for a company working towards building superintelligence.

Yet, inexplicably, the OpenAI tutorial linked above includes this unsandboxed eval() call, with no explanation for the egregious lack of safety precautions: `tool_query_string = eval(tool_calls[0].function.arguments)['query']`

Though the content of `function.arguments` is usually benign, it is unvalidated, arbitrary text. There are multiple documented instances of it containing text that causes parsing errors:

https://community.openai.com/t/malformed-function-calling-arguments/272803

https://community.openai.com/t/malformed-json-in-gpt4-1106-function-arguments/685884

https://community.openai.com/t/ai-assistant-malformed-function-arguments-if-parameters-property-in-function-schema/772865

Anyone running the cookbook code that contains eval() could have their machine compromised by a model that gets hijacked via prompt injection or whose behavior otherwise goes astray.

The fact that OpenAI recommends this code to developers speaks to a serious lack of care about AGI safety.
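
A safer pattern, sketched below under the assumption that `tool_call` is one element of the `tool_calls` list from the cookbook snippet, is to parse `function.arguments` as JSON and treat malformed output as an error instead of executing it:

```python
import json

def extract_query(tool_call) -> str | None:
    """Parse a tool call's arguments as JSON rather than eval()-ing them.

    The model writes `function.arguments` as a JSON string, so it must be
    treated as untrusted input, never as code.
    """
    try:
        arguments = json.loads(tool_call.function.arguments)
    except json.JSONDecodeError:
        # Models occasionally emit malformed JSON (see the threads above);
        # fail closed instead of evaluating arbitrary text.
        return None
    query = arguments.get("query") if isinstance(arguments, dict) else None
    return query if isinstance(query, str) else None
```

json.loads() can only produce data, never run it, so even a hijacked model can at worst hand back a string that you then validate before use.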