From <a href="https://www.ottoproject.io" rel="nofollow">https://www.ottoproject.io</a><p><pre><code> It was an ambitious project, and we feel that it has not
lived up to expectations. Rather than having a public project
that does not meet HashiCorp standards, we have decided to
close source the project so we can rethink its design and
implementation. The source code is still available for
download, but Otto will no longer be actively maintained or
supported.
</code></pre>
It looks like this is the successor to Otto, just with a somewhat different architecture.<p>My worry about trying or adopting it for any project is whether it will suffer the same fate if it isn't as successful as HashiCorp hopes. I don't want to learn a new tool that gives a small increase in efficiency unless I have a reasonable return on that effort and some confidence that the tool will help me in other areas too.<p>Especially if the code behind the thing has the risk of going closed-source... it's kind of the Google problem (<a href="http://killedbygoogle.com" rel="nofollow">http://killedbygoogle.com</a>), and I really hope HashiCorp can avoid that (which they have so far, as they continue to support well-used even if not-flashy-anymore tools like Vagrant).
Hello HN! I'm the founder of HashiCorp.<p>Waypoint is our 2nd-day HashiConf announcement and I'm excited to share and talk about it! Compared to Boundary, Waypoint is definitely weirder; it's trying to do things differently. I'll be around here to answer any questions.<p>I think the most common question will be: what is this, and why? I cover that in detail on the intro page, so I recommend checking that out: <a href="https://www.waypointproject.io/docs/intro" rel="nofollow">https://www.waypointproject.io/docs/intro</a><p>Here are some major things Waypoint is trying to do:<p>* Make it easier to just deploy. If you're bringing a Ruby app to Kubernetes, for example, Waypoint uses buildpacks and an opinionated approach to deploy that app for you to Kubernetes. You don't need to write Dockerfiles or any YAML. You can `waypoint up` and go. If you have existing workflows already, you can use a plugin that just shells out to `kubectl`. The important thing here is that you get a common workflow ("waypoint up"), and your future apps should be much easier to deploy.<p>* Provide common debugging tools for deployments. Waypoint gives you `waypoint exec`, `waypoint logs`, etc., and these work on any platform Waypoint supports. So while K8S, for example, may provide similar functionality, you can now get identical behavior across K8S and serverless, or your VMs, etc.<p>* Build a common workflow that we can plug other tools into. This is similar to Terraform circa 2015. There wasn't a consistent way then to think about "infrastructure management" outside of a single target platform's tools. With Waypoint, we're trying to do something similar but for the application deployment lifecycle.<p>As always, a disclaimer that Waypoint is a 0.1 and there is a lot we want to do! We have an exciting roadmap[1] that includes promotion, config syncing with KV stores, Vault, and others, service brokering for databases, etc.<p>And also, lots of jokes about Otto in the comments here. 
:) I think the similarities between Waypoint and Otto end at "they both deploy" (and Otto with HEAVY quotes around "deploy"). They're totally different tools; one didn't inspire the other, though we did make some major changes to avoid Waypoint hitting the same pitfalls as Otto.<p>Super excited to share Waypoint today!<p>[1]: <a href="https://www.waypointproject.io/docs/roadmap" rel="nofollow">https://www.waypointproject.io/docs/roadmap</a>
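For readers curious what the `waypoint up` workflow looks like in practice: it is driven by a single `waypoint.hcl` file per project. A minimal sketch for the Ruby-on-Kubernetes example above (project name, registry, and image are placeholders, and stanza details may differ from the shipped 0.1):

```hcl
# waypoint.hcl — hypothetical minimal config for a Ruby app on Kubernetes
project = "my-ruby-app"

app "web" {
  build {
    # Cloud Native Buildpacks detect the Ruby app; no Dockerfile needed
    use "pack" {}

    registry {
      use "docker" {
        image = "registry.example.com/my-ruby-app"
        tag   = "latest"
      }
    }
  }

  deploy {
    # Deploys to whatever cluster the local kubeconfig points at
    use "kubernetes" {}
  }

  release {
    use "kubernetes" {}
  }
}
```

With a file like this in the repository root, `waypoint up` runs the build, deploy, and release steps in sequence.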
I'm late in the thread and this will probably get buried, but:<p>I think this looks amazing. Lots of comments here say it's yet another useless abstraction.<p>Sure, maybe it doesn't do anything you can't already do yourself, but the experience looks like one I would enjoy and could see myself using.<p>Obviously I can't pass judgement that quickly, so I'll reserve further comments until I can spend some time this weekend experimenting, but it's an experiment I look forward to.
I always love HashiCorp products!<p>But...<p>This feels like Otto v2 to me, and it seems like it hasn't actually solved the underlying problem with Otto: that it was simpler to just use and learn the underlying tools instead of a very specific DSL that transformed into them. If I have to look up how to use Waypoint to create Dockerfiles/Kubernetes manifests anyway, why not just learn how to use Dockerfiles and Kubernetes manifests?<p>I'm pretty excited by Boundary, but I don't really see the point of this.
Given the prevalence of Kubernetes, I am not sure more abstractions are needed. Kubernetes, if anything, is too complete in how it manages its deployments, and all this does is add another level of abstraction on top of that, while encouraging people not to think about the operational part of the loop. For example, no health or readiness checks are defined as part of the deployment examples in this link. If you are still running multi-deployment with Ansible/Puppet/CloudFormation, etc., I see value, but I think most are probably better off focusing on going K8s.
As a not-that-old-but-still-grizzled desktop/embedded developer, I suggest this headline be changed to:<p>"Waypoint: Build, deploy, and release _web_ applications across any _cloud_ platform"
This is great. I'm glad a good company is backing an open source project like this. I've built abstractions like this at two companies, and I hope one day I can stop writing my own. Some comments:<p>I see "promotion" on the roadmap, so this is probably in the works. But the current "Workspaces" each have their own build. I really want to be able to promote <i>the same artifact</i> from one environment (staging) to another (production). I'd also like to be able to choose the artifact build/version to deploy, not just what happens to be in my local repo.<p>The concept of multiple environments also brings up the need to vary things by environment. App config (env vars) are obvious. But also settings like number of replicas or auto scaling min/max (auto scaling is also required).<p>The biggest thing most of the tools in the space lack, like all the ones that try to copy the docker-compose.yml syntax, is standardized reusable app settings. Imagine I have two common types of app, "API services" and "background workers". They have common settings, like maybe they all default to using /healthz for health checks and auto-scaling at 70% CPU. Then within each group they vary, "API services" use internal load balancers and "background workers" don't.<p>I don't want every individual "API service" app to have to say "my health check endpoint is /healthz" and "I run on port 8080 and need a load balancer". Those are the defaults, and if the app uses the defaults it shouldn't need to be configured. But at the same time I want the app to be able to override the defaults. Within a well standardized environment 90% of app infra settings (not env vars) are the same, or can use the same template (like the image is docker.example.com/svc/{app_name}:{build_id}). I want to be able to reuse the settings.
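The layered settings the comment asks for can be sketched quite simply: type-level defaults, per-app overrides, and a templated image reference. A minimal illustration in Python (all names, defaults, and the template scheme here are made up, not anything Waypoint provides):

```python
# Sketch of layered app settings: type-level defaults with per-app overrides.
# Every setting name and default below is illustrative.

TYPE_DEFAULTS = {
    "api-service": {
        "health_check": "/healthz",
        "port": 8080,
        "load_balancer": "internal",
        "autoscale_cpu_pct": 70,
    },
    "background-worker": {
        "health_check": "/healthz",
        "load_balancer": None,
        "autoscale_cpu_pct": 70,
    },
}

def resolve_settings(app_name, app_type, build_id, overrides=None):
    """Merge type defaults with per-app overrides and expand templates."""
    settings = dict(TYPE_DEFAULTS[app_type])
    settings.update(overrides or {})
    # Templated image reference: docker.example.com/svc/{app_name}:{build_id}
    settings["image"] = f"docker.example.com/svc/{app_name}:{build_id}"
    return settings

# An app that uses all the defaults needs no configuration at all...
billing = resolve_settings("billing", "api-service", "42")
# ...while another overrides just the one setting that differs.
search = resolve_settings("search", "api-service", "42",
                          overrides={"health_check": "/status"})
```

The point is that the 90%-identical apps carry zero configuration, while the exceptions stay local to the app that needs them.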
So that's what they've been doing with all the Heroku people they've been hiring.<p><a href="http://hashiroku.com/" rel="nofollow">http://hashiroku.com/</a>
Another tool that seems quite heavily focused around making the simplest possible cases look snappy and simple. The simple cases aren't the problem. And when it comes to the complex cases, I have <i>much</i> more faith in the extensibility and power of Nix, which is designed as an actual real programming language to give users the power to solve problems the designers had never envisioned for themselves, than HCL, which seems to be arbitrarily restricted and half-thought-out at the best of times.
I'm not buying the premise that there is a need for yet-another-abstraction over a high level tool like Kubernetes. You need to learn your underlying deployment platform anyway, introducing yet another tool feels like a distraction.
This is pretty interesting.<p>I spent the last year building an in-house, Kubernetes-based PaaS that has a lot of similar functionality. What we don't do, however, is support multiple execution environments -- we're tightly coupled to Kubernetes.<p>The fact that it's OSS makes it a compelling starting point for other organizations who are beginning their own in-house PaaS efforts (or restarting them). If I were starting over, I'd definitely dig into this before deciding to roll my own.<p>I know I'm certainly going to need to make some time to learn more about Waypoint. It might be too much pain to migrate to, as our in-house system is pretty mature -- but it'll at the very least serve as a source of inspiration. Nice work!<p>Do you know how you plan to monetize this in the long run? I assume it's via a managed offering, where the end user isn't responsible for hosting the software themselves and instead pays a monthly rate (or pays per use). Knowing what that trajectory looks like would help me feel more comfortable using it, as it'd clarify how it will / will not change.
This seems handy for teams that aren't using Kubernetes. If you're already on Kubernetes, there are better tools out there, like Argo, Skaffold, Harness, etc., with a tighter focus. Curious to know why `waypoint test` is missing; that could be interesting...
This is clearly the successor to Otto. I think the key point here is this:<p>> "This workflow is consistent across any platform, including Kubernetes, Nomad, EC2, Google Cloud Run, and over a dozen more at launch. Waypoint can be extended with plugins to target any build/deploy/release logic."<p>It makes a lot of sense. You provision your infrastructure with Terraform and deploy with Waypoint. This is basically Terraform, but for deployments.<p>I think that's pretty cool. I'm wondering how clever it is, though. If I have something running locally and deploy it with Waypoint, does it figure out all the configuration automatically?<p>One big challenge I have always had with fresh deployments into new environments is making them work out of the box. I don't remember the last time I deployed something without having to change a configuration or mess with the command line of my host to make it work.
If I understand how it works, there's something I find fundamentally wrong about it, which is that an application should not know where or how it's deployed.
It has to know how it's built, of course, but it should be deployment-agnostic.<p>Anyhow, I tend to like all HashiCorp projects and I'm grateful for their work.
I will drill deeper into the documentation during the weekend, but if Mitchell is still around I have a few questions about how this might work for some apps I'm building.<p>1. Is there ECS support for Fargate, specifically, and how does waypoint exec (for a shell) work if Fargate is supported? We hacked a bastion container together using SSM remote activation, and it feels so brittle.<p>2. If I want to deploy multiple versions of my app into the same VPC is that possible? My use case is having 1 giant VPC/RDS that multiple app/feature versions will take a slice off and use. Rather than setting up a VPC + all services per staging/feature environment.<p>3. Is there support for dynamic URLs using native cloud tools? For example, I'd love to use ALB auto-provisioned certs and carve off dynamic URLs on a subdomain.<p>Thanks :)
Oh man, I got excited and thought you guys were finally going to start developing a CI product. As a heavy Jenkins user, I like how extensible it is, but it could be simpler and work better and more easily with tools like Nomad.<p>I hope you guys hop into the CI server scene. A man can wish.
Man, people are still deploying Docker applications on a single server with docker-compose or Docker Swarm for its simplicity. Some of my personal projects are still deployed like this, as it is really simple (it's Docker with a very thin YAML), works exactly like on my machine, and is, against my expectations, really stable.<p>Is there some plan to support docker-compose or something like it? The "magic" of why it works so well for simple setups is the service discovery. Since version 1.10, DNS resolution by container name inside user-created Docker networks has been possible, which would be enough to replicate most of the functionality of docker-compose. The ability to start multiple containers for an application is also required. Is there some plan to support a lot more Docker configuration options? Or is the Docker mode just a small neat feature that won't receive much development, since Kubernetes and Nomad will be the main deployment targets for most users?
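To illustrate the service discovery the comment describes: on a user-defined (or compose-created) network, Docker's embedded DNS lets containers resolve each other by service name, which is most of what makes compose feel "magic". A minimal sketch (service and image names are placeholders):

```yaml
# docker-compose.yml sketch: service names double as DNS names on the
# network compose creates, so `api` reaches the database simply as `db`.
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
  api:
    image: my-api:latest
    environment:
      # reachable by service name thanks to the network's embedded DNS
      DATABASE_URL: "postgres://db:5432/app"
  db:
    image: postgres:13
```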
I really like the focus on dev ergonomics, but as someone who works on a crufty older application, the problem I have is not deploying, it's managing a build system that needs to work around existing application code. If I could just refactor the application to use buildpacks, I wouldn't have a problem in the first place!
The documentation says that Waypoint does not want to replace PaaS, but rather to integrate with both PaaS and lower-level infrastructure like Kubernetes.<p>However, the documentation also says that Waypoint injects a “smart entrypoint” into the containers it builds, and that entrypoint seems very intrusive. Among other things, it calls home to the Waypoint server and registers with a “URL service” which will then route traffic back to it. That is basically a PaaS service mesh.<p>So my understanding is that Waypoint, while claiming to be complementary to PaaS, is actually a trojan horse to inject its own PaaS into my infrastructure. I understand why that is valuable to HashiCorp, and I may even be interested in evaluating a HashiCorp PaaS. But I am not a fan of the way this is being snuck in. Injecting a service mesh into my stack is a major change; please be upfront about it so I can compare it to other PaaS offerings.
What is that config format? It looks like neither YAML nor JSON. (If your config files are not in a standard format, this project is likely not going to go anywhere, IMHO.)<p>edit: Apparently non-standard [0].<p>[0] <a href="https://github.com/hashicorp/hcl" rel="nofollow">https://github.com/hashicorp/hcl</a>
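For anyone who hasn't run into HCL before: it's HashiCorp's configuration language, used across Terraform, Nomad, etc. It consists of labeled blocks containing `key = value` attributes. A generic illustration (not taken from Waypoint's docs; the names are invented):

```hcl
# Generic HCL shape: named blocks with key/value attributes
service "web" {
  replicas = 3
  ports    = [80, 443]

  env {
    LOG_LEVEL = "info"
  }
}
```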
I’m glad to see new tooling targeting developers and the local development loop, which has been neglected because historically it was hard to monetize. As a consultant involved in cloud migrations and CI/CD, I do see the value of a common language for build/test/deploy; it’s a mess in enterprises nowadays. I’m curious to learn HashiCorp’s opinion about CNAB (which tries to solve roughly the same problem). As Waypoint is complementary to CI tools, do you also see it as complementary to the GitOps tooling that is gaining traction in the Kubernetes world?
Hopefully someone at Hashicorp can double check my read: I feel this could be compared to Skaffold or Tilt, but with a heavier emphasis on a plugin architecture. Is that right?
> You can use exec to open up a shell in your app<p>OK, maybe we don't exactly speak the same language, but shouldn't that be "a shell in the server running your app"?
I'm totally buying into the vision. I was discussing this with a co-worker a couple of months ago: there hasn't been much innovation w.r.t. deploying apps within the last ~4 years (for on-premise at least). Still the same tools with bad UX, scripts, CLIs. I hope Waypoint can improve the status quo; it sounds like the missing glue.
Wow, this looks great! I'm a big fan of Azure DevOps for CI/CD, but I love the idea of something as powerful as Waypoint that is open source and that I can run anywhere.<p>I wonder if it might be a good tool for deploying infrastructure too; I've been missing something <i>simpler</i> than Ansible.<p>I'll definitely be dabbling with it.
What I see here is that commands "logs" and "exec" are being commoditized and consumers (developers) are expecting these features out of their platform, whether it's kubernetes or anything else.
I think codified infrastructure has too much cognitive load/learning curve. It always gets messy, hard to maintain and understand, especially for new developers.<p>Building a multi cloud UI tool is the future.
In my experience with HashiCorp products, things don't work like the page says (Vault), while others are sometimes half-finished (Terraform will let you create, but not destroy, resources). Edit: To clarify, I ran into some cases where the creation code for a resource existed, but not the destroy code.<p>I might wait on this and read the GitHub issues for a few months before trying it.