Very interesting. I just worked to implement a baby version of this kind of system at work. Similar to this project, our basic use case was allowing researchers to quickly/easily execute their arbitrary R&D code on cloud resources. It's difficult to know in advance what they might be doing, and we wanted to avoid a situation where they have to push a docker container or submit a file every time they change something. So we made it possible for them to "just" ship a single class/function without leaving their local interactive environment.<p>I see from looking at the source that Runhouse is using the same approach of cloudpickling the function. That works, but one struggle we're having is that it's quite brittle. It's all gravy assuming everyone is operating in perfectly fresh environments that mirror the cluster, but this is rarely the case. Even subtle differences between the local and server execution environments can produce segfaults on the server, which are very hard to debug. The code here looks a lot more mature, so I'm assuming it's more robust than what we have. But I would be curious whether the developers have run into similar challenges.
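To make the brittleness concrete, here's a stdlib-only sketch of the underlying mechanism: shipping a function's raw code object by value, which is roughly what cloudpickle generalizes for functions that can't be imported by name on the other side. The `job` function is made up for illustration.

```python
import builtins
import marshal
import types

def job(x):
    import math  # imports are resolved on the receiving side, at call time
    return math.sqrt(x) * 3

# "Client" side: serialize the raw code object (cloudpickle does a
# richer version of this, also capturing closure variables and globals).
payload = marshal.dumps(job.__code__)

# "Server" side: rebuild a callable from the bytecode. marshal's format
# is tied to the CPython version, so even a minor interpreter mismatch
# between client and server can break at load or run time -- one flavor
# of the environment-drift brittleness described above.
code = marshal.loads(payload)
remote_job = types.FunctionType(code, {"__builtins__": builtins})
print(remote_job(16.0))  # -> 12.0
```

Because the bytecode is compiled by the client's interpreter and executed by the server's, anything that differs between the two environments (interpreter version, library versions reached by those deferred imports) only surfaces at execution time.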
> Just as PyTorch lets you send a model .to("cuda"), Runhouse enables hardware heterogeneity by letting you send your code (or dataset, environment, pipeline, etc) .to(“cloud_instance”, “on_prem”, “data_store”...), all from inside a Python notebook or script. There’s no need to manually move the code and data around, package into docker containers, or translate into a pipeline DAG.<p>From an SRE perspective, this sounds like a nightmare. Controlled releases are <i>really</i> important for reliability. I definitely don't want my devs doing manual rollouts from a notebook.
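For readers unfamiliar with the pattern the quote describes, here's a toy sketch of `.to(target)`-style dispatch. This is not the Runhouse API; the class and targets are made up, and the "remote" execution is a local stand-in.

```python
class Remoteable:
    """Toy wrapper: .to(target) returns a callable bound to a named target."""

    def __init__(self, fn):
        self.fn = fn

    def to(self, target):
        def bound(*args, **kwargs):
            # A real system would serialize self.fn, ship it to `target`,
            # execute it there, and return the result. Here we just tag
            # the result so the dispatch is visible.
            return target, self.fn(*args, **kwargs)
        return bound

def train_step(x):
    return x * 2

remote = Remoteable(train_step).to("cloud_instance")
print(remote(21))  # -> ('cloud_instance', 42)
```

The appeal is that the call site stays ordinary Python; the SRE concern above is precisely that this makes an unreviewed deployment look like an ordinary function call.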
Since people are suggesting alternatives, I'd like to shoutout skypilot: <a href="https://github.com/skypilot-org/skypilot">https://github.com/skypilot-org/skypilot</a><p>EDIT: looks like this actually uses it under the hood: <a href="https://github.com/run-house/runhouse/blob/main/requirements.txt#L8">https://github.com/run-house/runhouse/blob/main/requirements...</a>
This is a cool approach. I really like the notion of small, powerful components that compose well together. ML infra is sorely missing this piece. I wish you the best of luck!
> Please make sure the function does not rely on any local variables, including imports (which should be moved inside the function body)<p>This seems like a major limitation and pretty antithetical to the PyTorch approach.
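To see why that restriction exists, here's a sketch in which `ship` is a made-up stand-in for sending only the function body to a server: a module-level import lives in the client's globals and doesn't travel with the code object, while an import inside the body is re-resolved wherever the function runs.

```python
import builtins
import marshal
import statistics  # module-level import: lives in the *client's* globals
import types

def bad(xs):
    # Refers to the global name `statistics`; if only the function body
    # is shipped, the server has no such global.
    return statistics.mean(xs)

def good(xs):
    import statistics  # self-contained: resolved on the server at call time
    return statistics.mean(xs)

def ship(fn):
    # Crude stand-in for sending just the function body to a server:
    # the rebuilt function gets fresh, empty globals.
    code = marshal.loads(marshal.dumps(fn.__code__))
    return types.FunctionType(code, {"__builtins__": builtins})

try:
    ship(bad)([1, 2, 3])
except NameError as e:
    print("bad fails remotely:", e)  # `statistics` is not defined there

print(ship(good)([1, 2, 3]))  # -> 2
```

PyTorch's `.to("cuda")` doesn't impose an equivalent constraint on how your module is written, which is the asymmetry the comment is pointing at.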
Have you tried Hidet? <a href="https://pypi.org/project/hidet/" rel="nofollow noreferrer">https://pypi.org/project/hidet/</a>