
TechEcho

A tech news platform built with Next.js, providing global tech news and discussions.


One Million Database Connections

119 points by StratusBen, over 2 years ago

7 comments

jzelinskie, over 2 years ago

Awesome to hear more about MySQL/Vitess connection pooling.

Folks typically only consider memory usage for database connections, but we've also had to consider the p99 latency for establishing a connection. For SpiceDB [0], one place we've struggled with our MySQL backend (originally contributed by GitHub, who are big Vitess users) is preemptively establishing connections in the pool so that it's always full. PGX [1] has been fantastic for Postgres and CockroachDB, but I haven't found something with enough control for MySQL.

PS: Lots of love to all my friends at PlanetScale! SpiceDB is also a big user of vtprotobuf [2], a great contribution to the Go gRPC ecosystem.

[0]: https://github.com/authzed/spicedb
[1]: https://github.com/jackc/pgx
[2]: https://github.com/planetscale/vtprotobuf
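A minimal sketch (not from the thread) of the pre-warming idea described above: a background filler keeps the pool full so checkout never pays dial latency. The `connect()` function is a hypothetical stand-in for a real dial + TLS + auth handshake.

```python
import queue
import threading
import time

def connect():
    # hypothetical stand-in for an expensive dial/handshake
    time.sleep(0.01)
    return object()

class WarmPool:
    """Keep `size` ready connections so checkout is O(1) on the request path."""

    def __init__(self, size):
        self.size = size
        self.ready = queue.Queue()
        for _ in range(size):
            self.ready.put(connect())  # pay dial cost up front
        threading.Thread(target=self._refill, daemon=True).start()

    def _refill(self):
        # replace checked-out connections in the background,
        # so the pool stays full instead of dialing on demand
        while True:
            if self.ready.qsize() < self.size:
                self.ready.put(connect())
            else:
                time.sleep(0.001)

    def checkout(self):
        # no dial here: the p99 of checkout is a queue pop
        return self.ready.get()
```

Real pools expose this as knobs like a minimum idle-connection count; the point of the sketch is only that the dial happens off the request path.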
gsanderson, over 2 years ago

Impressive! But I guess the trade-off of having all that power is the potentially terrifying cost. As you detail in the post, AWS Lambda comes with a default throttle (1,000 concurrent executions) which can be adjusted. Is any throttle/limit like that supported, or on the roadmap? It's just that I've been thinking I may *want* a service to fail beyond a certain point, as that amount of load would indicate an attack, not genuine usage.
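The "fail beyond a certain point" behavior this comment asks for can be approximated client-side. A hedged sketch (my construction, not anything the post offers) using a non-blocking semaphore, so requests past the cap are rejected immediately instead of queueing:

```python
import threading

class ConcurrencyLimiter:
    """Reject work once in-flight requests exceed a cap, mirroring the
    spirit of Lambda's account-level concurrency throttle."""

    def __init__(self, limit):
        self.slots = threading.BoundedSemaphore(limit)

    def try_acquire(self):
        # non-blocking: returns False instead of waiting when saturated
        return self.slots.acquire(blocking=False)

    def release(self):
        self.slots.release()
```

A caller would wrap each request in `try_acquire()`/`release()` and return an error (e.g. HTTP 429) when `try_acquire()` is `False`.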
sulam, over 2 years ago

On the off chance someone associated with this is reading: I'm curious about the networking stack here. Specifically TCP. Is it being used? The reason I ask is because one limit I've run into in the past with large-scale workloads like this is exhausting the ephemeral port supply to allow connections from new clients.

Did you run into this? If not, I'm curious why not. And if so, how did you manage it?
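For context on the limit this comment raises, here is back-of-envelope math (illustrative numbers, not from the post): each outbound TCP connection consumes one ephemeral source port, and the 4-tuple (src IP, src port, dst IP, dst port) must be unique, so a single client IP talking to a single server endpoint caps out at the ephemeral port range.

```python
# Linux default ip_local_port_range (illustrative; tunable via sysctl)
ephemeral_low, ephemeral_high = 32768, 60999
ports_per_src_ip = ephemeral_high - ephemeral_low + 1  # 28232

# one source IP -> one destination ip:port caps out here
max_conns_single_dst = ports_per_src_ip

# common mitigations multiply the 4-tuple space (hypothetical counts):
src_ips = 4        # extra client IPs / NICs
dst_endpoints = 8  # multiple server IPs or listening ports
max_conns_mitigated = ports_per_src_ip * src_ips * dst_endpoints
```

This is why very high connection counts usually require fanning out over many destination endpoints, many source IPs, or both.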
unilynx, over 2 years ago

> making MySQL live outside its means (i.e. overcommitting memory) opens the door to dangerous crashes and potential data corruption, so this is not recommended.

Data corruption? How?

I'm no MySQL fan, but is this FUD, or is it referring to a real issue?
themenomen, over 2 years ago

Any additional insights or information on how latencies relate to the number of connections?
prithvi24, over 2 years ago

Can y'all sign a BAA for HIPAA? Saw SOC 2 - just

Hosted Vitess sounds amazing - love this - zero-downtime migrations with Percona on RDS still suck and waste a lot of time.
twawaaay, over 2 years ago

If you have a lot of connections doing similar things, just batch requests to get the data in bulk.

Scaling your database up should only be attempted once you can no longer improve the efficiency of your application. It is always better to put effort into improving efficiency first than to scale up.

For example, one trick that allowed me to improve the throughput of an application using MongoDB as a backend by a factor of 50 was capturing queries from multiple requests happening at the same time and sending them as one request (statement) to the database; when the result comes back, you fan it out to the respective business logic that needs it. The application was written with Reactor, which makes this much easier than normal thread-based request processing.

For example, if you have 500 people logging in at the same time and fetching their user details, batch those requests, say every 100 ms and up to 100 users at a time, and fetch 100 records with a single query.

You will notice that executing a simple fetch-by-id query even for hundreds of ids only costs a couple of times more than fetching a single record.

The application in question was able to fetch 2-3 GB of small documents per second during normal traffic (not an idealised performance test) with just a couple dozen connections.
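The capture-batch-fan-out pattern described above can be sketched in a few lines. This is my own illustration (the commenter used Reactor on the JVM, not this code): concurrent point lookups are buffered for up to `max_wait` seconds or `max_batch` keys, issued as one bulk query, and the rows are fanned back out to the callers. `bulk_fetch` is a hypothetical stand-in for a single `WHERE id IN (...)` round trip.

```python
import asyncio

async def bulk_fetch(ids):
    # hypothetical stand-in for one "SELECT ... WHERE id IN (...)" query
    return {i: f"user-{i}" for i in ids}

class Batcher:
    """Coalesce concurrent point lookups into one bulk query,
    then fan the rows back out to the waiting callers."""

    def __init__(self, max_batch=100, max_wait=0.1):
        self.max_batch = max_batch
        self.max_wait = max_wait   # seconds to wait for more requests
        self.pending = {}          # id -> [futures waiting on that id]
        self.flush_task = None

    async def get(self, user_id):
        fut = asyncio.get_running_loop().create_future()
        self.pending.setdefault(user_id, []).append(fut)
        if len(self.pending) >= self.max_batch:
            await self._flush()    # batch is full: query now
        elif self.flush_task is None:
            self.flush_task = asyncio.create_task(self._flush_later())
        return await fut

    async def _flush_later(self):
        await asyncio.sleep(self.max_wait)
        self.flush_task = None
        await self._flush()

    async def _flush(self):
        timer, self.flush_task = self.flush_task, None
        if timer is not None:
            timer.cancel()         # timer no longer needed
        batch, self.pending = self.pending, {}
        if not batch:
            return
        rows = await bulk_fetch(list(batch))
        for uid, futs in batch.items():
            for f in futs:
                f.set_result(rows.get(uid))
```

The key property, matching the comment's observation, is that N concurrent `get()` calls cost one database round trip instead of N.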