I disagree with their risk ranking matrix. The controversial cell is "Prompt Injections" / "3rd Party LLMs". It says: "Medium risk. While the risk exists, the responsibility of fixing this is on the LLM provider."

No. The responsibility for using a vulnerable third-party component is always yours, unless a clause in the contract says otherwise (and even then it might not apply, or could be found illegal and void). Case in point: the ChatGPT payment-info leak that prompted the Italian investigation was caused entirely by a bug in a third-party component, redis-py, that OpenAI used. (A sketch of that failure mode is at the end of this comment.)

Also, the concept of "owning" the LLM is used a lot but never defined in sufficient detail. In particular, there is no clear distinction between LLMs that are both trained and run in-house and LLMs trained by a third party but with inference running in-house.
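For the curious, the redis-py bug class was roughly this shape: an asyncio request cancelled after sending its command but before reading the reply left the pooled connection holding a stale response, which the next caller then received. Below is a toy sketch of that failure mode (my own illustration, not redis-py's actual code):

    import asyncio

    # Toy stand-in for a pooled connection that, like a real Redis
    # connection, returns replies strictly in the order commands were sent.
    class FakeConnection:
        def __init__(self):
            self._replies = asyncio.Queue()

        async def send(self, command):
            # The "server" answers immediately; the reply sits buffered
            # on the connection until someone reads it.
            await self._replies.put(f"reply to {command}")

        async def read(self):
            return await self._replies.get()

    async def request(conn, command):
        await conn.send(command)
        # Cancellation landing here -- after send, before read -- leaves
        # the reply buffered on the shared connection.
        await asyncio.sleep(0.01)
        return await conn.read()

    async def main():
        conn = FakeConnection()  # shared between users via a pool

        # User A's request is cancelled mid-flight (e.g. a client timeout).
        task_a = asyncio.create_task(request(conn, "GET A:billing-info"))
        await asyncio.sleep(0)  # let A send its command
        task_a.cancel()
        try:
            await task_a
        except asyncio.CancelledError:
            pass

        # The connection goes back into use still holding A's reply,
        # so user B's next request reads A's data.
        print("B received:", await request(conn, "GET B:profile"))

    asyncio.run(main())

The point stands regardless of the exact mechanics: the vulnerable code was a dependency, but the users whose data leaked were OpenAI's, and so was the accountability.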