AI orchestration seems to belong closer to the application domain than to an abstract reasoning pipeline. The LLM already “knows” something about reasoning, but it knows nothing at all about your private domain models, what they mean, or how to integrate your data efficiently. I suspect this is why LangChain tends to be abandoned when a project moves from proof of concept (where developers are mostly interested in learning how to use GenAI) to a production application, where effective integration with internal services is the bigger challenge.
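To make the contrast concrete, here is a minimal sketch of domain-first orchestration: the "chain" is just a method on a domain service, so data access, prompt shaping, and the model call live next to the domain logic they depend on, with no framework abstraction in between. All names here (`OrderRepository`, `LlmClient`, `OrderSupportService`) are hypothetical, and the LLM client is a stub standing in for whatever SDK you actually use.

```python
from dataclasses import dataclass


@dataclass
class Order:
    id: str
    status: str
    items: list


class OrderRepository:
    """Stands in for an internal service a generic framework knows nothing about."""

    def find(self, order_id: str) -> Order:
        # In production this would call your real data store or API.
        return Order(id=order_id, status="shipped", items=["widget", "gizmo"])


class LlmClient:
    """Stub for any chat-completion client; swap in a real SDK call here."""

    def complete(self, prompt: str) -> str:
        return f"[model answer based on a {len(prompt)}-character prompt]"


class OrderSupportService:
    def __init__(self, repo: OrderRepository, llm: LlmClient):
        self.repo = repo
        self.llm = llm

    def answer(self, order_id: str, question: str) -> str:
        # Domain knowledge -- what an Order means and which fields matter --
        # is encoded right here, not in a reusable pipeline abstraction.
        order = self.repo.find(order_id)
        prompt = (
            f"Order {order.id} is {order.status} and contains "
            f"{', '.join(order.items)}.\n"
            f"Customer question: {question}"
        )
        return self.llm.complete(prompt)


service = OrderSupportService(OrderRepository(), LlmClient())
print(service.answer("A-17", "Where is my package?"))
```

The point of the sketch is that when requirements change (a new field on `Order`, a different internal service, a second retrieval step), you edit ordinary application code rather than reconfiguring a framework's chain abstraction.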