科技回声 (Tech Echo)

A tech news platform built with Next.js, providing global tech news and discussion.

© 2025 科技回声. All rights reserved.

Show HN: Go Micro – A Go microservices development framework

68 points by chuhnk over 5 years ago

7 comments

kubanczyk, over 5 years ago
    go mod graph | wc -l
    787

Oh my. And the most fascinating part of it:

    - google.golang.org/grpc@v1.17.0
    - google.golang.org/grpc@v1.19.0
    - google.golang.org/grpc@v1.19.1
    - google.golang.org/grpc@v1.20.1
    - google.golang.org/grpc@v1.22.0
    - google.golang.org/grpc@v1.24.0
GiorgioG, over 5 years ago
Not knocking the OP or their effort, but I think the number of microservice frameworks exceeds the number of organizations that actually have the problems microservice architecture was designed to solve.

We're doing it for the project I'm working on at work, and in my opinion it's a colossal waste of engineering time and effort. We're a big company, but we're not FAANG. Our user base will likely never even reach 100k users total. But hey, we're doing this in the name of the industry's current 'best practice.'

I can't wait for the microservice & scrum trends to die off.
Someoneelse77, over 5 years ago
There are breaking changes in roughly every second minor release, functionality gets removed, and dependent repositories get deleted/renamed/moved by the author. PRs are discussed at the wrong level, and the author is very opinionated.

Do not use this framework unless you want to end up in an inconsistent mess!
holografix, over 5 years ago
Service discovery, load balancing: aren't these things that should be done by the underlying platform?

In other words: if I'm using this with K8s, doesn't K8s do that for me? What major benefits do I still get from using Go Micro?
vemv, over 5 years ago
It'd help a lot if the project showed its rationale, the alternatives, the choices it made, and the corresponding tradeoffs.

Otherwise, one is essentially invited to blindly adopt someone else's design, which is particularly reckless in distributed systems.
tedunangst, over 5 years ago
Having built a microservice of sorts just yesterday with nothing more than net/rpc, it wasn't that bad.
jrockway, over 5 years ago
I fear this does too much.

All applications should care about is an API to do what they want, so all you need to decide on is a messaging protocol, which is probably going to be gRPC. (Why gRPC? I picked it out of a hat. JSON is very brittle when service definitions change, so you want an IDL. Feel free to pick one and then never care about it again; it doesn't really matter.) Then if you want publish/subscribe, you write a publish/subscribe service and make API calls to it: SendMessage / WaitForMessage / etc.

Service discovery and load balancing are already solved problems. Use Envoy sidecars, Istio, Linkerd, etc. for load balancing, tracing, TLS injection, all that stuff. Use your "job runner"'s service discovery for service discovery (think: k8s services, but feel free not to use k8s; it's just an example).

The tools you really need for success with microservices:

1) A way to quickly run the subset of services you need.

For unit tests, I prefer "fake" implementations of services. Often your app doesn't need the full API surface of an upstream service. If you have a StoreKey / RetrieveKey service, an implementation like "map[key]value" is good enough for tests. Make it super simple so you test your app, not the upstream app, which already has tests. (Do feel free to write some integration tests as a sanity check for CI, but keep the code/save/test loop fast and focused!)

For "try it out in the browser", I'm pretty unhappy with the available tools. You want something like docker-compose without requiring docker containers to be built. I ended up writing my own thing to do this at my last job. Each service's directory has a YAML file describing how to run the service and what ports it needs. Then it can start up a service, with Envoy as a go-between for them. That way you get http/2, TLS (important for web apps because some HTML features are only available from localhost or if served over https, and your phone is never going to be retrieving your app's content from localhost), tracing, metrics, a single stream of logs, etc. I got it optimized to the point where you can just type "my-thing ." and have your web app working almost like production in under a second. It was great. I wish I had open-sourced it.

2) Observability. You need to know what's going on with every request. What's failing, what's slow, what's a surprising dependency?

2a) Monitoring. With a fleet of applications, it's unlikely that you'll be seeking out failures. Rather, they just happen and you don't know how often or why. So every application needs to export metrics, and these metrics need to feed alerts so that you can be informed that something is wrong. (An alert tells you something is abnormal; the dashboard with all the metrics will let you think of some likely causes to investigate.) Just use Prometheus and Grafana. They're pretty great.

2b) Distributed tracing. You don't have an application you can set a breakpoint in to pick apart a failing request. So you need to ephemerally collect and store this information so that when something does break, you have all the information you would have manually obtained ready for you, and you can dive in and start investigating. Just use Jaeger. It's pretty great. (Jaeger will also give you a service dependency graph based on traces. Great for checking every once in a while to catch things like "why is the staging server talking to the production database?". We don't know why, but at least we know that it's happening before someone deletes production.)

2c) Distributed logging. You will inevitably produce a lot of interesting logs that will be like gold when you're debugging a problem you've been alerted to. These all need to be in one place, and they need to be tagged so that you can look at one request all at once. The approach I've taken is to use Elasticsearch / Fluentd / Kibana for this, with the applications emitting structured logs (bunyan for node.js, zap for Go; but there are many frameworks like this). I then instructed my frontend proxy (Envoy) to generate a unique UUID and propagate it in the HTTP headers to the backend applications, and wrote a wrapper around my logging framework to extract it from the request context and log it with every log message. (You can also use the opentracing machinery for this; I personally logged both the request ID and the trace ID. That way I could easily go from looking at Jaeger to looking at logs, but traces that weren't sampled would still have a grouping key.)

The deeper logs integrate into your infrastructure, the better. As an example, something I did was to include a JWT signed by the frontend SSO server with every request. Then my logging machinery could just log the (internal) username. So when someone came to my desk and said "I'm trying to foo, but I get 'upstream connect error or disconnect/reset before headers'", I could just look for logs by their username. Much easier than trying to figure out what service that was, or what URL they were visiting.

Anyway, sorry for the long post. My TL;DR is that you must invest in good tooling no matter what architecture you use. You will be completely unsuccessful if you attempt microservices without the right infrastructure. But all this is great for monoliths too. Less debugging, more relaxing!