Interesting approach to cast the net so wide in a single release.

I can imagine it makes a lot of sense for a company to just dump a bunch of documents into S3 and then expect an LLM to answer questions over that corpus. In some sense, you don't even really care what is happening in the background, i.e. whether it's RAG, fine-tuning, LoRA, etc.

I can also imagine a debugging scenario on AWS where you might want an AI assistant with access to your CloudWatch, ECS, EC2, etc., so you can ask questions like "X service is down, what interesting logs/metrics are worth looking at more closely?" Instead of the truly terrible AI "smart" alerting solutions, you can play a game of 20 questions with a GPT-3.5-level LLM.

These services are the tip of the iceberg compared to what will come in the next couple of years. I bet Azure will have similar offerings very soon; maybe Amazon is moving here to beat them to the punch?
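The "dump documents into S3 and ask questions" flow is essentially retrieval-augmented generation: fetch the documents most relevant to the question, then hand them to the LLM as context. A toy sketch of the retrieval half, with an in-memory dict standing in for the S3 corpus and the LLM call left out entirely (all names here are illustrative, not AWS APIs):

```python
def retrieve(corpus: dict[str, str], question: str, k: int = 2) -> list[str]:
    """Return the k document keys whose text shares the most words with the question.

    A real service would use embeddings and a vector index; keyword overlap
    is just the simplest possible stand-in for that scoring step.
    """
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(corpus[doc].lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical corpus standing in for objects pulled from an S3 bucket.
corpus = {
    "runbook.md": "restart the payments service when latency exceeds the alarm threshold",
    "faq.md": "billing questions go to the finance team alias",
}

print(retrieve(corpus, "how do I restart the payments service", k=1))
# The retrieved text would then be pasted into the LLM prompt as context.
```

The point being: from the customer's side, whether the managed service does this, fine-tunes, or applies LoRA adapters is invisible — you just see answers grounded in your documents.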