So you want to build your own data center

596 points by dban 4 months ago

55 comments

motoboi 4 months ago
In my experience, and based on writeups like this: Google hates having customers.

Someone decided they have to have a public cloud, so they did it, but they want to keep clients away with a 3-meter pole.

My AWS account manager is someone I am 100% certain would roll in the mud with me if necessary. Would sleep on the floor with us if we asked in a crisis.

Our Google Cloud representatives make me sad because I can see that they are even less loved and supported by Google than we are. It's sad seeing someone trying to convince their company to sell and actually do a good job providing service. It's like they are set up to fail.

The Microsoft guys are just bulletproof and excel at selling, providing a good service, and squeezing all your money out of your pockets while you are mortally convinced it's for your own good. They also have a very strange cloud… thing.

As for the Railway company going metal, well, I have some 15 years of experience with it. I'll never, NEVER, EVER return to it. It's just not worth it. But I guess you'll have to discover it by yourselves. This is the way.

You soon discover what in the freaking world Google is having so much trouble with. Just make sure you really really love and really really want to sell service to people, instead of building borgs and artificial brains, and you'll do 100x better.
toddmorey 4 months ago
Reminds me of the old Rackspace days! Boy, we had some war stories:

- Some EMC guys came to install a storage device for us to test... and tripped over each other and knocked out an entire rack of servers like a comedy skit. (They uh... didn't win the contract.)
- Some poor guy driving a truck had a heart attack and the crash took our DFW datacenter offline. (There were bollards to prevent this sort of scenario, but the cement hadn't been poured in them yet.)
- At one point we temporarily laser-beamed bandwidth across the street to another building.
- There was one day we knocked out windows and purchased box fans because servers were literally catching on fire.

Data center science has... well, improved since the earlier days. We worked with Facebook on the Open Compute Project, which had some very forward-looking infra concepts at the time.
ChuckMcM 4 months ago
From the post: "...but also despite multi-million dollar annual spend, we get about as much support from them as you would spending $100." -- Ouch! That is a pretty huge problem for Google.

I really enjoyed this post, mostly because we had similar adventures when setting up the infrastructure for Blekko. For Blekko, a company that had a lot of "east-west" network traffic (that is, traffic that goes between racks rather than to/from the Internet at large), having physically colocated services without competing with other servers for bandwidth was both essential and much more cost effective than paying for this special case at SoftLayer (IBM's captive cloud).

There are some really cool companies that will build an enclosure for your cold aisle; basically it ensures all the air coming out of the floor goes into the back of your servers and not anywhere else. It also keeps non-cold air from being entrained from the sides into your servers.

The calculations for HVAC 'CRAC' capacity in a data center are interesting too. In the first colo facility we had a 'ROFOR' (right of first refusal) on expanding into the floor area next to our cage, but when it came time to expand, the facility had no more cooling capacity left, so it was meaningless.

Once you've done this exercise, looking at the 0xide solution will make a lot more sense to you.
chatmasta 4 months ago
This is how you build a dominant company. Good for you ignoring the whiny conventional wisdom that keeps people stuck in the hyperscalers.

You're an infrastructure company. You gotta own the metal that you sell or you're just a middleman for the cloud, and always at risk of being undercut by a competitor on bare metal with $0 egress fees.

Colocation and peering for $0 egress is why Cloudflare has a free tier, and why new entrants could never compete with them by reselling cloud services.

In fact, for hyperscalers, bandwidth price gouging isn't just a profit center; it's a moat. It ensures you can't build the next AWS on AWS, and creates an entirely new (and strategically weaker) market segment of "PaaS" on top of "IaaS."
jdoss 4 months ago
This is a pretty decent write-up. One thing that comes to mind: why would you write your own internal tooling for managing a rack when NetBox exists? NetBox is fantastic and I wish I had it back in the mid-2000s when I was managing 50+ racks.

https://github.com/netbox-community/netbox
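For reference, here is a minimal sketch of what registering a freshly racked server through the NetBox REST API can look like using the pynetbox client. The URL, token, and all site/rack/device names are hypothetical placeholders, and field names vary slightly between NetBox versions (e.g. device_role vs role), so treat this as a shape of the workflow rather than a drop-in script.

```python
# Minimal sketch: recording a newly racked server in NetBox via pynetbox.
# All names, the URL, and the token below are hypothetical placeholders.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="YOUR_API_TOKEN")

# Look up records that were created ahead of time in the NetBox UI.
site = nb.dcim.sites.get(slug="sjc-colo-1")
rack = nb.dcim.racks.get(name="A01", site_id=site.id)
dtype = nb.dcim.device_types.get(model="generic-1u-server")
role = nb.dcim.device_roles.get(slug="compute")

# Create the device record pinned to a rack position.
device = nb.dcim.devices.create(
    name="compute-a01-u10",
    device_type=dtype.id,
    role=role.id,        # "device_role" on older NetBox versions
    site=site.id,
    rack=rack.id,
    position=10,
    face="front",
)
print(device.id, device.name)
```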
ch33zer 4 months ago
I used to work on machine repair automation at a big tech company. IMO repairs are one of the overlooked and harder things to deal with. When you run on AWS you don't really think about broken hardware; it mostly just repairs itself. When you do it yourself you don't have that luxury. You need spare parts, technicians to do repairs, a process for draining/undraining jobs off hosts, testing suites, hardware monitoring tools, and 1001 more things to get this right. At smaller scales you can cut corners on some of these things, but they will eventually bite you. And this is just machines! Networking gear has its own fun set of problems, and when it fails it can take down your whole rack. How much do you trust your colos not to lose power during peak load? I hope you run disaster recovery drills to prep for these situations!

Wishing all the best to this team, seems like fun!
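To make the drain/undrain point concrete, here is an illustrative sketch of that repair loop. Everything in it (the Host and Scheduler classes, the burn-in step) is a hypothetical stand-in for whatever scheduler, ticketing system, and test suite you actually run; it only shows the shape of the workflow.

```python
# Illustrative sketch of the drain -> repair -> burn-in -> undrain loop.
# All classes here are hypothetical stand-ins, not any real repair tooling.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    healthy: bool = True

class Scheduler:
    """Stand-in for a real cluster scheduler (Kubernetes, Nomad, Borg-like, ...)."""
    def cordon(self, host: Host) -> None:
        print(f"cordon {host.name}: no new jobs scheduled here")
    def drain(self, host: Host) -> None:
        print(f"drain {host.name}: migrating running jobs elsewhere")
    def uncordon(self, host: Host) -> None:
        print(f"uncordon {host.name}: back in the pool")

def burn_in(host: Host) -> bool:
    """Stand-in for memory/disk/NIC stress tests before returning to service."""
    return host.healthy

def handle_failed_host(host: Host, sched: Scheduler) -> None:
    sched.cordon(host)
    sched.drain(host)
    # ... a datacenter tech swaps the failed part here ...
    host.healthy = True
    if burn_in(host):
        sched.uncordon(host)
    else:
        print(f"{host.name} failed burn-in; escalate for deeper diagnosis")

handle_failed_host(Host("compute-17", healthy=False), Scheduler())
```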
jpleger 4 months ago
Makes me remember some of the days I had in my career. There were a couple really interesting datacenter things I learned by having to deploy tens of thousands of servers in the 2003-2010 timeframe.

Cable management and standardization was extremely important (like you couldn't get by with shitty practices). At one place where we were deploying hundreds of servers per week, we had a menu of what ops people could choose if the server was different than one of the major clusters. We essentially had 2 chassis options: big disk servers which were 2U, or 1U pizza boxes. You then could select 9/36/146GB SCSI drives. Everything was dual processor with the same processors, and we basically had the bottom of the rack with about 10x 2U boxes and then the rest filled with 20 or more 1U boxes.

If I remember correctly we had gotten such an awesome deal on the price for power, because we used facility racks in the cage or something, since I think they threw in the first 2x 30 amp (240V) circuits for free when you used their racks. IIRC we had a 10 year deal on that and there was no metering on them, so we just packed each rack as much as we could. We would put 2x 30s on one side and 2x 20s on the other side. I have to think that the DC was barely breaking even because of how much heat we put out and how much power we consumed. Maybe they were making up for it in connection / peering fees.

I can't remember the details, will have to check with one of my friends that worked there around that time.
maxclark 4 months ago
There are places where it makes sense to be in the cloud, and places where it doesn't. The two best examples I can give are high-bandwidth or heavy disk-intensive applications.

Take Netflix. While almost everything is in the cloud, the actual delivery of video is via their own hardware. Even at their size I doubt this business would be economically feasible if they were paying someone else for this.

Something I've seen often (some numbers changed because...):

20 PB egress at $0.02/GB = $400,000/month

20 PB is roughly 67 Gbps at the 95th percentile

It's not hard to find 100 Gbps flat rate for $5,000/month

Yes this is overly simplistic, and yes there's a *ton* more that goes into it than this. But the delta is significant.

For some companies $4,680,000/year doesn't move the needle; for others this could mean survival.
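A quick back-of-the-envelope check of those numbers, assuming decimal units (1 PB = 1e15 bytes) and a 30-day month. Real 95th-percentile billing depends on the traffic shape, so the average-rate figure here is a lower bound on the percentile number quoted above.

```python
# Back-of-the-envelope check of the comment's egress numbers.
PB = 1e15                        # bytes, decimal units
SECONDS_PER_MONTH = 30 * 24 * 3600

egress_bytes = 20 * PB
cloud_rate_per_gb = 0.02         # $/GB metered egress
cloud_monthly = egress_bytes / 1e9 * cloud_rate_per_gb
print(f"cloud egress: ${cloud_monthly:,.0f}/month")   # $400,000/month

avg_gbps = egress_bytes * 8 / SECONDS_PER_MONTH / 1e9
print(f"average rate: {avg_gbps:.0f} Gbps")           # ~62 Gbps average

flat_rate_monthly = 5_000        # 100 Gbps flat-rate port
delta_yearly = (cloud_monthly - flat_rate_monthly) * 12
print(f"yearly delta: ${delta_yearly:,.0f}")          # ~$4.7M/year
```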
sitkack 4 months ago
It would be nice to have a lot more detail. The WTF sections are the best part. Sounds like your gear needs a "this side towards enemy" sign and/or the right affordances so it only goes in one way.

Did you standardize on layout at the rack level? What poka-yoke processes did you put into place to prevent mistakes?

What does your metal->boot stack look like?

Having worked for two different cloud providers and built my own internal clouds with PXE-booted hosts, I too find this stuff fascinating.

Also, take utmost advantage of a new DC when you are booting it to try out all the failure scenarios you can think of, and the ones you can't, through randomized fault injection.
Bluecobra 4 months ago
Good write-up! Google really screws you when you are looking for 100G speeds; it's almost insulting. For example, redundant 100G dedicated interconnects are about $35K per month, and that doesn't include VLAN attachments, colo x-connect fees, transit, etc. Not only that, they max out at 50G for VLAN attachments.

To put this cost into perspective, you can buy two brand new 32-port 100G switches from Arista for the same amount of money. In North America, you can get 100G WAN circuits (managed wavelength) for less than $5K/month. If it's a local metro you can also get dark fiber for less and run whatever speed you want.
random_savv 4 months ago
I guess there's another in-between step between buying your own hardware (even when merely "leasing individual racks") and EC2 instances: dedicated bare-metal providers like Hetzner.

This lets one get closer to the metal (e.g. all your data is on your specific disk rather than an abstracted block storage such as EBS, not shared with other users, cheaper, etc.) without having to worry about the staff that installs the hardware or where/how it fits in a rack.

For us, this was a way to get 6x the performance for 1/6 of the cost. (Excluding, of course, our time, but we enjoyed it!)
winash83 4 months ago
We went down this path over the last year. Lots of our devs need local and dev/test environments, and AWS was costing us a bomb. With about 7 bare metals (colocation) we are running about 200+ VMs and could double that number with some capacity to spare. For management, we built a simple wrapper over libvirt. I am setting up another rack in the US and it will end up costing around $75K per year for a similar capacity.

Our prod is on AWS, but we plan to move everything else, and it's expected to save at least a quarter of a million dollars per year.
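For readers wondering what a "simple wrapper over libvirt" can look like, here is a minimal sketch using the libvirt Python bindings. It is not the tooling described in the comment; the connection URI, bridge name, and the stripped-down domain XML are illustrative assumptions only.

```python
# Minimal sketch of a thin VM wrapper over libvirt (pip install libvirt-python).
# Requires a host running libvirtd; the domain XML below is deliberately bare.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='MiB'>{mem_mib}</memory>
  <vcpu>{vcpus}</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='{disk_path}'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'><source bridge='br0'/></interface>
  </devices>
</domain>
"""

def list_vms(conn) -> list[str]:
    # Names of all defined domains, running or not.
    return [dom.name() for dom in conn.listAllDomains()]

def create_vm(conn, name: str, vcpus: int, mem_mib: int, disk_path: str):
    xml = DOMAIN_XML.format(name=name, vcpus=vcpus,
                            mem_mib=mem_mib, disk_path=disk_path)
    dom = conn.defineXML(xml)   # persist the definition
    dom.create()                # boot the VM
    return dom

if __name__ == "__main__":
    conn = libvirt.open("qemu:///system")
    print(list_vms(conn))
```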
dban 4 months ago
This is our first post about building out data centers. If you have any questions, we're happy to answer them here :)
blmt 4 months ago
I am really thankful for this article, as I finally get where my coworkers get "wrong" notions about three-phase power use in DCs:

> The calculations aren't as simple as summing watts though, especially with 3-phase feeds — Cloudflare has a great blogpost covering this topic.

What's written in the Cloudflare blogpost linked in the article holds true only if you can use a delta config (as done in the US to obtain 208V), as opposed to the wye config used in Europe. The latter does not give a substantial advantage: no sqrt(3) boost to power distribution efficiency, and you end up adding watts for three independent single-phase circuits (cf. https://en.m.wikipedia.org/wiki/Three-phase_electric_power).
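Some worked numbers for that point, assuming balanced loads and unity power factor. The sqrt(3) factor is not extra free power: sqrt(3)·V_LL·I equals the sum 3·V_phase·I, because V_LL = sqrt(3)·V_phase. The US "boost" comes from wiring loads phase-to-phase at 208V instead of phase-to-neutral at 120V, whereas a European 230/400V feed already gives you 230V phase-to-neutral.

```python
# Worked three-phase comparison, assuming balanced loads and unity power factor.
import math

I = 24  # amps per phase (e.g. a 30A breaker derated to 80% for continuous load)

# US 120/208V wye feed: the benefit is connecting loads across 208V
# (phase-to-phase) instead of 120V (phase-to-neutral).
print(math.sqrt(3) * 208 * I / 1000)   # ~8.65 kW, loads across 208V
print(3 * 120 * I / 1000)              # ~8.64 kW, same feed used as 3x 120V

# EU 230/400V wye feed: loads already sit at 230V phase-to-neutral, so you just
# sum the three single-phase circuits; sqrt(3)*400*I lands on the same number.
print(3 * 230 * I / 1000)              # ~16.56 kW
print(math.sqrt(3) * 400 * I / 1000)   # ~16.63 kW (400V is ~sqrt(3)*230V)
```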
linsomniac 4 months ago
Was really hoping this was actually about building your own data center. Our town doesn't have a data center; we need to go an hour south or an hour north. The building that a past failed data center was in (which doesn't bode well for a data center in town, eh?) is up for lease, and I'm tempted.

But I'd need to start off small: probably per-cabinet UPSes and transfer switches, smaller generators. I've built up cabinets and cages before, but never built up the exterior infrastructure.
Agingcoder 4 months ago
They're not building their own data center; they're doing what lots of companies have been doing for years (including where I work, and I specialize in HPC so this is all fairly standard), which is buying space and power in a DC and installing boxes in there. Yes, it's possible to get it wrong. It is however not the same as building a DC…
nyrikki 4 months ago
> This will likely involve installing some overhead infrastructure and trays that let you route fiber cables from the edge of your cage to each of your racks, and to route cables between racks

Perhaps I am reading this wrong, as you appear to be fiber-heavy and do have space on the ladder rack for copper, but if you are commingling the two, be careful. A possible future iteration would consider a smaller Panduit FiberRunner setup plus a wire rack.

Commingling copper and fiber, especially through the large spill-overs, works until it doesn't.

Depending on how adaptive you need to be with technology changes, you may run into this in a few years.

4x6 encourages a lot of people to put extra cable up in those runners, and sharing a spout with Cat6, CX-#, PDU serial, etc. will almost always end badly for some chunk of fiber. After those outages it also encourages people to 'upgrade in place'. When you are walking to your cage, look at older cages: notice the loops sticking out of the tops of the trays, and the switches that look like porcupines because someone caused an outage and old cables were left in place.

Congrats on your new cage.
renewiltord 4 months ago
More to learn from the failures than the blog, haha. It tells you what the risks are with a colocation facility. There really isn't any text on how to do this stuff. The last time I wanted to build out a rack, there weren't even any instructions on how to do cable management well. It's sort of learned by apprenticeship and practice.
ksec 4 months ago
I am just fascinated by the need for datacenters. The scale is beyond comprehension. Ten years ago, before the word "hyperscaler" was even invented or popularised, I would have thought the DC market would be on the decline or levelled off by now. One reason being the hyperscalers (AWS, Google, Microsoft, Meta, Apple, Tencent, Alibaba, down to smaller ones like Oracle and IBM) would all have their own DCs, taking on much of the compute for themselves and others, while the leftover space would be occupied by third parties. Another reason being that compute, memory and storage density continue to increase, which means for the same amount of floor space we are offering 5-20x the previous CPU / RAM / storage.

Turns out we are building like mad and we are still not building enough.
dylan604 4 months ago
My first colo box came courtesy of a friend of a friend that worked for one of the companies that did that (leaving out names to protect the innocent). It was a true frankenputer built out of whatever spare parts he had laying around. He let me come visit it, and it was an art project as much as a webserver. The mainboard was hung on the wall with some zip ties, the PSU was on the desk top, the hard drive was suspended as well. Eventually, the system was upgraded to newer hardware, put in an actual case, and then racked with an upgraded 100base-t connection. We were screaming in 1999.
pixelesque 4 months ago
The date and time durations given seem a bit confusing to me...

"we kicked off a Railway Metal project last year. Nine months later we were live with the first site in California"

seems inconsistent with:

"From kicking off the Railway Metal project in October last year, it took us five long months to get the first servers plugged in"

The article was posted today (Jan 2025). Was it maybe originally written last year, with the project going on for more than a year, meaning the Railway Metal project actually started in 2023?
scarab92 4 months ago
Interesting that they call out the extortionate egress fees from the majors as a motivation, but are nevertheless also charging customers $0.10 per GB themselves.
esher 4 months ago
I can relate.

We provide a small PaaS-like hosting service, kinda similar to Railway (but more niche). We have recently re-evaluated our choice of AWS as infra provider (since $$$), but will now stick with it [1].

We started with colocation 20 years ago. For a tiny provider it was quite a hassle (but also an experience). We just had too many single points of failure and we found ourselves dealing with physical servers way too often. We also struggled to phase out and replace hardware.

Without reading all the comments thoroughly: for me, being on infra that runs on green energy is important. I think it's also a trend with customers; there's even a service for this [2]. I don't see it mentioned here.

[1] https://blog.fortrabbit.com/infra-research-2024
[2] https://www.thegreenwebfoundation.org/
j-b 4 months ago
Love these kinds of posts. Tried railway for the first time a few days ago. It was a delightful experience. Great work!
hintymad 4 months ago
Per my experience with cloud, the most powerful infra abstraction that AWS offers is actually EC2. The simplicity of getting a cluster of machines up and running, with all the metadata readily available via APIs, is just liberating. And it just works: the network is easy to configure, the ASGs are flexible enough to customize, and the autoscaling offers strong primitives for advanced scaling.

Amazingly, few companies that run their own DCs could build anything comparable to EC2, even at a smaller scale. When I worked at those companies, I sorely missed EC2. I wonder if there are any robust enough open-source alternatives to EC2's control-plane software to manage bare metal and offer VMs on top of it. That would be awesome for companies that build their own DCs.
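As a small sketch of the workflow being praised, here is the launch-and-inspect loop with boto3. The AMI ID, subnet, and tag values are placeholders; the point is how little ceremony it takes to get machines plus their metadata.

```python
# Small sketch of the EC2 flow: launch two instances, then read their metadata
# back via the API. AMI, subnet, and tags are placeholder values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder AMI
    InstanceType="c6i.xlarge",
    MinCount=2,
    MaxCount=2,
    SubnetId="subnet-0123456789abcdef0",  # placeholder subnet
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "worker"}],
    }],
)
ids = [i["InstanceId"] for i in resp["Instances"]]

# The same metadata is available moments later through the API ...
desc = ec2.describe_instances(InstanceIds=ids)
for reservation in desc["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["PrivateIpAddress"], inst["State"]["Name"])

# ... and from inside each instance via IMDS at http://169.254.169.254/latest/meta-data/
```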
matt-p 4 months ago
If you're using 7280-SR3 switches, they're certainly a fine choice. However, have you considered the 7280-CR3(K) range? They're much better $/Gbps and have more relevant edge interfaces.

At this scale, why did you opt for a spine-and-leaf design with 25G switches and a dedicated 32x100G spine? Did you explore just collapsing it and using 1-2 32x100G switches per rack, then employing 100G-to-4x25G AOC breakout cables and direct 100G links for inter-switch connections and storage servers?

Have you also thought about creating a record on PeeringDB? https://www.peeringdb.com/net/400940

By the way, I'm not convinced I'd recommend a UniFi Pro for anything, even for out-of-band management.
coolkil 4 months ago
Awesome!! Hope to see more companies go this route. I had the pleasure of doing something similar for a company (a lot smaller scale though).

It was my first job out of university. I will never forget the awesome experience of walking into the datacenter and starting to plug in cables and stuff.
ThinkBeat 4 months ago
1. I get the impression they decided to put their datacenter in a non-datacenter location. If so, that is not a good idea.

2. Geographically distanced backups, if the primary fails. Without this you are already in trouble. What happens if the building burns down?

3. Hooking up with "local" ISPs: that seems OK, as long as an ISP failing is easily and automatically dealt with.

4. I am a bit confused about what happens at the edge. On the one hand it seems like you have one datacenter, with ISPs doing routing; in other places I get the impression you have compute close to the edge. Which is it?
sometalk 4 months ago
I remember talking to Jake a couple of years ago when they were looking for someone with a storage background. Cool dude, and cool set of people. Really chuffed to see them doing what they believe in.
cyberax 4 months ago
It looked interesting, until I got to the egress cost. Ouch. $100 per TB is way too much if you're using bandwidth-intensive apps.

Meta-comment: it's getting really hard to find hosting services that provide true unlimited bandwidth. I want to do video upload/download in our app, and I'm struggling to find providers of managed servers that are willing to give me a fixed price for 10/100Gb ports.
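Rough arithmetic behind the "ouch", assuming $0.10/GB ($100/TB) metered egress versus a flat-rate port and ignoring 95th-percentile billing details; the 50% utilization figure is an assumption for illustration.

```python
# Rough comparison: metered egress at $100/TB vs a flat-rate 10 Gbps port.
SECONDS_PER_MONTH = 30 * 24 * 3600

# A single 10 Gbps port pushed at 50% average utilization for a month:
tb_month = 10e9 * 0.5 * SECONDS_PER_MONTH / 8 / 1e12
print(f"{tb_month:,.0f} TB/month")               # ~1,620 TB

metered = tb_month * 100                          # $100 per TB
print(f"metered egress: ${metered:,.0f}/month")   # ~$162,000/month
# A dedicated server with an unmetered 10 Gbps port is typically quoted in the
# hundreds of dollars per month, which is the gap the comment is reacting to.
```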
solarkraft 4 months ago
Cool post, and cool to see Railway talked about more on here.

I've used their Postgres offering for a small project (crucially, it was accessible from the outside) and not only was setting it up a breeze, the cost was also minimal (I believe staying within the free tier). I haven't used the rest of the platform, but my interaction with them would suggest it would probably be pretty nice.
physhster 4 months ago
Having done data center builds for years, mostly on the network side but realistically with all the trades, this is a really cool article.
a1o 4 months ago
Excellent write-up! This is not the first blog post I've seen recently about moving toward owning infrastructure, but it is certainly well written, and I liked the use of Excel in it: a good use, although visually daunting!
yread 4 months ago
Useful article. I was almost planning to rent a rack somewhere, but it seems there's just too much work and too many things that can go wrong, and it's better to rent cheap dedicated servers and make it somebody else's problem.
__fst__ 4 months ago
Can anyone recommend some engineering reading for building and running DC infrastructure?
aetherspawn 4 months ago
What brand of servers was used?
robertclaus 4 months ago
I would be super interested to know how this stuff scales physically: how much hardware ended up in that cage (maybe in cloud-equivalent terms), and how much does it cost to run now that it's set up?
whalesalad 4 months ago
Cliffhanger! Was mostly excited about the networking/hypervisor setup. Curious to see the next post about the software-defined networking. Had not heard of FRR or SONiC previously.
teleforce 4 months ago
> despite multi-million dollar annual spend, we get about as much support from them as you would spending $100

Is it a good or a bad thing to have the same customer support across the board?
kolanos 4 months ago
As someone who lost his shirt building a data center in the early 2000s, Railway is absolutely going about this the right way with colocation.
throwaway2037 4 months ago
I promise my comment is not intended to troll. Why didn't you use Oxide pre-built racks? Just the power efficiency seems like a huge win.
nextworddev 4 months ago
First time checking out the Railway product. It seems like a "low code" and visual way to define and operate infrastructure?

Like, if Terraform had a nice UI?
ramon156 4 months ago
weird to think my final internship was running on one of these things. thanks for all the free minutes! it was a nice experience
lifeinthevoid 4 months ago
Man, I get an anxiety attack just thinking about making this stuff work. Kudos to all the people doing this.
praveen9920 4 months ago
Reliability stats aside, would have loved to see cost differences between on-prem and cloud.
Over2Chars 4 months ago
I guess we can always try to re-hire all those "Sys Admins" we thought we could live without.

LOL?
Melatonic 4 months ago
We're back to the cycle of Mainframe/Terminal -> Personal Computer.
superq 4 months ago
"So you want to build OUT your own data center" is a better title.
enahs-sf 4 months ago
Curious why California, when the kWh price is so high here vs Oregon or Washington.
Havoc 4 months ago
Surprised to see PXE. Didn't realise that was in common use in racks.
concerndc1tizen 4 months ago
@railway

What would you say are your biggest threats?

Power seems to be the big one, especially as AI and electric vehicle demand drives up kWh prices.

Networking seems to be another. I'm out of the loop, but it seems to me like the internet is still stuck at 2010 network capacity concepts like "10Gb". If networking had progressed as compute power has (e.g. NVMe disks can provide 25GB/s), wouldn't 100Gb be the default server interface, and the ISP uplink be measured in terabits?

How is the diversity in datacenter providers? In my area, several datacenters were acquired, and my instinct would be that the "move to cloud" has lost smaller providers a lot of customers, and that industry consolidation has given suppliers more power in both controlling the offering and the pricing. Is it a free market with plenty of competitive pricing, or is it edging towards enshittification?
exabrial 4 months ago
I'm surprised you guys are building new!

Tons of colocation is available nearly everywhere in the US, and in the KCMO area there are even a few dark datacenters available for sale!

Cool project nonetheless. Bit jealous actually :P
mirshko 4 months ago
y’all really need to open source that racking modeling tool, that would save sooooo many people so much time
technick 4 months ago
I've spent more time than I care to admit working in data centers and can tell you that your job req is asking for one person to perform 3 different roles, maybe 4. I guarantee you're going to find a "jack of all trades" and a master of none unless you break them out into these jobs:

Application Developer

DevOps Engineer

Site Reliability Engineer

Storage Engineer

Good luck, hope you pay them well.
jonatron 4 months ago
Why would you call colocation "building your own data center"? You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?