I'd been looking forward to watching the video and learning a little more about LASP itself; I followed your work closely on Twitter a while back. Unfortunately, I got a little lost about halfway through. I'm not exactly sure what point you were trying to put across, and I still don't really know what LASP is. It seems that it's somehow about disseminating state in distributed systems? However, the digs at Mesos and distributed Erlang, and the use case of launching a cluster within a short period of time, suggest that disseminating state isn't the core of the talk.<p>I've done some work on gossip systems in the past; <a href="http://gossiperl.com" rel="nofollow">http://gossiperl.com</a> is the result of my research. Gossiperl was based on work I did at Technicolor Virdata (shut down nearly a couple of years ago). We built a distributed device management / IoT / data ingestion platform consisting of over 70 VMs (EC2, OpenStack, SoftLayer). That was before Docker became popular; virtually everyone was thinking in terms of instances back then. These machines held the different components of the platform: ZooKeeper, Kafka, Cassandra, some web servers, some Hadoop with Samza jobs, load balancers, Spark and such.<p>Our problem was the following: each of these components has certain dependencies. For example, to launch Samza jobs one needs Yarn (the Hadoop one) and Kafka; to have Kafka, one needs ZooKeeper. If we were to launch these sequentially, it would take a significant amount of time, considering that each node was bootstrapped from zero every single time (a base image with some common packages installed) using Chef, installing deb / rpm packages from the repos. What we put in production was a gossip layer written in Ruby, 300 lines or so. Each node would announce just a minimal set of information: which role it belongs to, its id within that role, and its address. Each component knew how many instances of each dependency it required within the overlay before it could bootstrap itself. For example, in EC2 we would request all these different machines at once. Kafka would be bootstrapping at the same time as ZooKeeper, with Hadoop bootstrapping alongside. Each machine, once bootstrapped, would advertise itself in the overlay, and the overlay would trigger a Chef run with a hand-crafted run list for the specific role it belonged to. So each node would effectively receive a notification about every new member and decide whether or not to take action (there's a rough sketch of this logic below). Once 5 ZKs were up, Kafka nodes would configure themselves for ZooKeeper and launch; eventually the Kafka cluster was up. A similar process would happen on all the other systems, eventually leading to a complete cluster of over 70 VMs running (from memory) about 30 different systems, all fully operational. Databases, dashboards, MQTT brokers, TLS, whatnot. We used to launch this thing at least once a day. The system would usually become operational in under half an hour, unless EC2 was slacking off. Our gossip layer was trivial. In this sort of platform there are always certain nodes that should be reachable from the outside: web servers, load balancers, MQTT brokers. Each of those would become a seed; any other node would contact one of those public nodes and start participating.<p>From a capabilities perspective, the closest thing to that kind of infrastructure today is HashiCorp Consul. Our gossip layer from Virdata is essentially what Consul's service catalog is, and our Chef triggers are what watches are in Consul.
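<p>To make that per-node decision logic concrete, here is a minimal Ruby sketch of its shape. This is not our actual code; the role names, the counts and the run_chef.sh wrapper are made up for illustration:

    require 'set'

    # How many instances of each dependency a role needs to see in the
    # overlay before it may configure and launch itself. Names and
    # counts here are illustrative, not Virdata's real run lists.
    DEPENDENCIES = {
      'kafka' => { 'zookeeper' => 5 },
      'samza' => { 'yarn' => 3, 'kafka' => 3 }
    }.freeze

    class Bootstrapper
      def initialize(my_role)
        @my_role   = my_role
        @members   = Hash.new { |h, k| h[k] = Set.new } # role => ids seen
        @converged = false
      end

      # The gossip layer calls this for every member announcement.
      # Each announcement carries just the minimum: role, id, address.
      def on_member(role:, id:, address:)
        @members[role] << id
        maybe_converge
      end

      private

      def maybe_converge
        return if @converged

        needed = DEPENDENCIES.fetch(@my_role, {})
        return unless needed.all? { |role, count| @members[role].size >= count }

        @converged = true
        # Trigger the Chef run with the hand-crafted run list for this
        # role. run_chef.sh stands in for a wrapper around chef-client.
        system('/opt/bootstrap/run_chef.sh', @my_role)
      end
    end

    # A Kafka node ignores announcements until it has seen 5 ZooKeepers:
    node = Bootstrapper.new('kafka')
    node.on_member(role: 'zookeeper', id: 'zk-1', address: '10.0.0.11')

The whole trick is that convergence is local: nobody orchestrates the order, each node just counts announcements until its own preconditions are met.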
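<p>Going by memory of Consul's watch config (so treat the exact keys as an assumption; the service name and handler script are made up), the Chef-trigger side maps onto something like this in the agent config:

    {
      "watches": [
        {
          "type": "service",
          "service": "zookeeper",
          "passingonly": true,
          "handler": "/opt/bootstrap/maybe-configure-kafka.sh"
        }
      ]
    }

The handler receives the current list of healthy instances on stdin, so it can do the same "count reached, configure and launch" dance as above.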
With these two things, anybody can put up a distributed platform like the one you describe in your talk and the one we built at Virdata. There are obviously dirty details; for example, one needs a clear separation between the installation, configuration and launch of the systems within the deployment. Packages can be installed concurrently on different machines; applying the configuration triggers the start (or restart), and the system becomes operational.<p>Or do I completely miss the point of the talk? I'd like to hear more about your experiences with Mesos. You're not the first person claiming that it doesn't really scale as far as the maintainers suggest.<p>By the way, HyParView: good to know. I'd missed it in my own research. Maybe it's time to dust off gossiperl.<p>* edit: wording