I remember seeing Boston Dynamics videos and thinking this was probably 10+ years away. However, there has been an increasing number of papers [0] showing how robots can be grounded with LLMs. As artificial intelligence (AI) continues to develop, it is becoming increasingly possible to build robots capable of autonomous action. This raises a number of safety concerns, as robots could be used to harm people or property.<p>One of the main concerns is that robots could be used to carry out malicious acts. For example, a robot could be programmed to attack people or steal property. Robots could also be hacked and turned to a malicious actor's own purposes, or could be poorly designed or manufactured and malfunction in unexpected ways. In this demo specifically, the robot can be prompted to do things its designers did not foresee.<p>Here are some specific questions that need to be addressed:<p>How can we prevent robots from being used to carry out malicious acts?
How can we prevent robots from causing accidents?
How can we ensure that robots are designed and used in a safe and responsible manner?<p>[0]<a href="https://ai.googleblog.com/2022/08/towards-helpful-robots-grounding.html" rel="nofollow noreferrer">https://ai.googleblog.com/2022/08/towards-helpful-robots-gro...</a>
[1] <a href="https://twitter.com/MetaAI/status/1670484372309573632" rel="nofollow noreferrer">https://twitter.com/MetaAI/status/1670484372309573632</a>