We're a boutique machine learning consultancy, and we use our own machine learning platform, https://iko.ai, in our meetings. The platform offers real-time collaborative notebooks, long-running notebook scheduling, automatic experiment tracking, model deployment, image building, and deployment monitoring. A lot of orchestration and moving parts. For meetings, we use the real-time collaborative notebooks to keep a notebook containing not only the agenda but also our remarks, action items, and code snippets to reproduce bugs or prototype features, which we can execute in the meeting document itself.

There have been countless times when we were discussing a bug while working in the same notebook, had a code snippet that reproduced it, proposed a fix right there in that same notebook, and tested that it worked. I love that recursion of using the tool to improve the tool itself.

So that's the meta, unintended use: exercising the capability to improve the tool that provides it.

The principal reason we added the feature was to pair-program on the same notebook to refine, optimize, or troubleshoot ML code for our client work.

Soon, we'll use it in our hiring process, since the questions for some of our roles revolve around a machine learning problem. This way we'll see the applicant's experiments, models, iterations, etc., and they'll be able to deploy their models, monitor them, and use them seamlessly. It'll reduce friction for people with a more academic background, which is one of the reasons we built the platform in the first place. We'll therefore be able to go back and forth with the candidate and see how they work through the problem.