Is 20M of rows still a valid soft limit of MySQL table in 2023?

130 points by jerryjerryjerry, almost 2 years ago

19 comments

marginalia_nu almost 2 years ago
FWIW, Marginalia Search has had nearly a billion rows in a table. It runs on a single PC with 128 GB RAM.
It's survived several HN death hugs.
Though I would say there are things you can do with a 20M table that you can't with a 1B table. If the query doesn't have an index, it will take hours. Even SELECT COUNT(*) takes like 10 min.
evanelias almost 2 years ago
In my direct experience, with small rows and a carefully optimized workload and schema, 1 to 2 *billion* rows was totally fine in a single MySQL 5.1 table (InnoDB) on SSDs *twelve years ago*. Granted, that was on bare metal. But hardware has also improved a fair bit since then...
totalhack almost 2 years ago
Haven't had trouble into the low hundreds of millions of rows on a single RDS server with 16 GB of RAM. YMMV.
grumple almost 2 years ago
You can safely go to a billion rows. Use indexes. Don't do silly things like count(*) or select *; select the small data sets you need using your indexes. If you need a full table count for some reason, use a primary key that auto-increments and select the max, or use information_schema for an estimate. Nobody will sweat the slight inaccuracy on a table that size.
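For illustration, a minimal sketch of the two count-estimation tricks mentioned above, assuming a hypothetical table named events with an auto-increment primary key id:

    -- Rough row count via the auto-increment primary key
    -- (works when rows are rarely deleted).
    SELECT MAX(id) AS approx_row_count FROM events;

    -- Optimizer's estimate from the data dictionary; fast but approximate.
    SELECT TABLE_ROWS
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME = 'events';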
makmanalp almost 2 years ago
Just off the top of my head I can think of a few cases where we easily have 100-1000x that on a single table in a shared node on kube, with an EBS volume (point being, bog-standard, if a tad large, commodity hardware). If you properly index things and your workload isn't too crazy in a way that needs access to all of this at once (e.g. point lookups with some recency bias), classic RDBMSs have this use case covered - and it's achievable by anyone!
pollux1997 almost 2 years ago
Thanks for reposting the blog! Some crazy guys have already told me that they have MySQL tables >1.4B rows (~180 GiB). I underestimated MySQL's power lol
pbreit almost 2 years ago
Isn't 20M rows super tiny?
whalesalad almost 2 years ago
Very cool repo from the author, full of all kinds of tests and scripts: https://github.com/gongyisheng/playground/blob/main/mysql/row_test/select_test.py
Might need to fork/run this on psql to compare performance.
andrewmcwatters almost 2 years ago
I love real-meat articles like this where software engineers actually explain why particular limits exist, or why technical things are the way they are. Good read. Thanks for sharing.
forgetfreeman almost 2 years ago
When was this ever a thing?
ericbarrett almost 2 years ago
Love the article, but one criticism: I've found gp3 to be better than gp2 in all aspects, especially consistency, and it's cheaper. For a long time you could only use RDS with gp2, but I know of no reason to choose it in 2023.
davgoldin almost 2 years ago
We've got MySQL 8 running on high-end-ish consumer hardware, with 150 tables ranging from thousands to billions of rows. Largest table is a bit over 5 billion rows and 520 GiB. Parallel network backup takes about 3 hours.
tbrownaw almost 2 years ago
...yes, performance will drop when your set of active data exceeds available memory and your system needs to reference slower storage. I assume this would apply to anything.
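A rough way to check whether a table's working set can stay in memory is to compare the table's size against the InnoDB buffer pool. A sketch (the table name events is hypothetical):

    -- Configured InnoDB buffer pool size, in bytes.
    SELECT @@innodb_buffer_pool_size AS buffer_pool_bytes;

    -- Approximate on-disk size of one table: data pages plus index pages.
    SELECT DATA_LENGTH + INDEX_LENGTH AS table_bytes
    FROM information_schema.TABLES
    WHERE TABLE_SCHEMA = DATABASE()
      AND TABLE_NAME = 'events';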
bagels almost 2 years ago
You should be able to handle a billion for the right (simple) table and hardware. It really depends on the data and queries.
jve almost 2 years ago
I see no problem with the benchmark, apart from the fact that you'd be avoiding table scans in prod for tables of those sizes - that kind of benchmark is kind of pointless. Well, it makes a point, but it's not something you would want to do anyway.
Once data is cached, indexed lookups are fast: 0.5ms.
encoderer almost 2 years ago
We get sub-ms response time on a table with over 250 million rows. We have a covering index for all high-volume queries, running on RDS (Aurora) with 16 GB RAM.
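For reference, a covering index contains every column a query touches, so the lookup never has to read the base rows. A minimal sketch (table, columns, and query are hypothetical):

    -- Index covers both the WHERE columns and the selected column.
    CREATE INDEX idx_events_user_created_status
      ON events (user_id, created_at, status);

    -- This query can be answered entirely from the index;
    -- EXPLAIN shows "Using index" in the Extra column.
    SELECT status
    FROM events
    WHERE user_id = 42
      AND created_at >= '2023-01-01';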
jerryjerryjerry almost 2 years ago
In general, NewSQL (distributed databases) may still be a good solution to address the scalability limits encountered by a single-server/node database like MySQL/PG. Currently, there are quite a few options in this domain: Spanner, PG-compatible (CockroachDB, YugaByte), MySQL-compatible (TiDB, MariaDB Xpand), and other middleware products to handle sharding.
AtlasBarfed almost 2 years ago
B+ trees scale vertically very well, until they don't.
Sparkyte almost 2 years ago
Hardware has limits as well as software. You would need a completely different architecture to accommodate a 20M+ row database. Unless you go schemaless, the database engine's overhead is typically what determines how many rows it can handle, and because your client interacts with the MySQL engine, you're bound by the limits and performance of that engine. So yeeeee-no. Time to use a different database.