Don't build AI products the way everyone else is doing it

565 points by tortilla, over 1 year ago

52 comments

ravenstine, over 1 year ago
I appreciate the overall sentiment of the post, but I can't say I would choose anything like the implementation the author is suggesting.

My takeaway is to avoid relying too heavily on LLMs, both in terms of the scope of tasks given to them and in terms of depending on any specific LLM. I think this is correct for many reasons. Firstly, you probably don't want to compete directly with ChatGPT, even if you are using OpenAI under the hood, because ChatGPT will likely end up being the better tool for very abstract interaction in the long run. For instance, if you are building an app that uses OpenAI to book hotels and flights by chatting with a bot, chances are someday either ChatGPT or something by Microsoft or Google will do that and make your puny little business totally obsolete. Secondly, relying too heavily on SDKs like the OpenAI one is, in my opinion, a waste of time. You are better off with the flexibility of making direct calls to their REST API.

However, should you be adding compilers to your toolchain? IMO, any time you add a compiler, you are not only liable to add a bunch of unnecessary complexity but you're making yourself *dependent* upon some tool. What's particularly bad about the author's example is that it's arguably completely unnecessary for the task at hand. What's so bad about React or Svelte that you want to use a component cross-compiler? That's a cool compiler, but it sounds like a complete waste of time and another thing to learn for building web apps. I think every tool has its place, but just "add a compiler, bruh" is terrible advice for the target audience of this blog post.

IMO, the final message of the article should be to create the most efficient toolchain for what you want to achieve. Throwing tools at a task doesn't necessarily add value, nor does doing what everyone else is doing necessarily add value; and either can be counterproductive not just in LLM app integration but in software engineering in general.

Kudos to the author for sharing their insight, though.
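For illustration, calling the chat completions REST endpoint directly needs nothing more than an HTTP client. A minimal sketch, assuming an OPENAI_API_KEY environment variable is set and using a placeholder model name:

```python
import os
import requests

# Direct call to the OpenAI REST API, no SDK required.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # example model name
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Swapping providers then means changing a URL and a payload shape rather than replacing an SDK.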
danenania, over 1 year ago
This is a thought-provoking post and I agree with the "avoid using AI as long as possible" point. AI is best used for things that can *only* be accomplished with AI: if there's any way to build the feature or solve the problem without it, then yeah, do that instead. Since everyone now has more or less equal access to the best models available, the best products will necessarily be defined by everything they do that's *not* AI: workflows, UIs, UX, performance, and all that other old-fashioned stuff.

I'm not so sure about the "train your own model" advice. This sounds like a good way to set your product up for quick obsolescence. It might differentiate you for a short period of time, but within 6-12 months (if that), either OpenAI or one of its competitors with billions in funding is going to release a new model that blows yours out of the water, and your "differentiated model" is now a steaming pile of tech debt.

Trying to compete on models as a small startup seems like a huge distraction. It's like building your own database rather than just using Postgres or MySQL. Yes, you need a moat and a product that is difficult to copy in some way, but it should be something you can realistically be the best at given your resources.
BillFranklin, over 1 year ago
This is a nice post, and I think it will resonate with most new AI startups. My advice would be: don't build an "AI product" at all.

To my mind, an "x product" is rarely the framing that will lead to value being added for customers, e.g. a web3 product, an observability product, a machine vision product, an AI product.

Like all decent startup ideas, the crucial thing is obviously to start with a real user need rather than wanting to use an emerging technology and fitting it to a problem. Developing a UI for a technology whose expectations are inflated is not going to result in a user need being met. Instead, the best startups will naturally start by solving a real problem.

Not to hate on LLMs, since they are neat, but most people I know offline hate interacting with chatbots as products. This is regardless of quality; bots are rarely as good as speaking with a real human being. For instance, I recently moved house and had to interact with customer support bots for energy/water utilities and an ISP, and they were universally terrible. So starting with "GPT is cool" and building a customized chatbot is, to my mind, not going to solve a real user need or result in a sustainable business.
zmmmmm, over 1 year ago
Seems like a big tradeoff against speed to ship.

So when you've taken 6-12 months to ship, and everybody else has already iterated twice by directly using a hosted model and is building a real customer base, you are only at v0.1 with your first customers, who are telling you they actually wanted something else. Now you have to go and not just massage some prompts but recode your compiler and toolchain and everything else up and down the stack.

Perhaps if you already know your customers and requirements really, really well it can make a lot of sense, but I'd be very sceptical: given how easy it is to do, why are you not validating your concept early with a fully general / expensive / hosted model? Premature optimisation being the root of all evil, and so on.
danielmarkbruce, over 1 year ago
People are overthinking this from a competitive perspective. Create something that isn't easy to replicate - there are several ways to do that, but it's the only rule required from a competitive perspective.
jumploops, over 1 year ago
To preface, I largely agree with the end state presented here -- we use LLMs within a state-machine-esque control flow in our product. It's great.

With that said, I disagree with the sentiment of the author. If you're a developer who's only used the ChatGPT web UI, you should 100% play with and create "AI wrapper" tech. It's not until you find the limits of the best models that you start to see how and where LLMs can be used within a traditional software stack.

Even the author's company seems to have followed this path, first building an LLM-based prototype that "sort of" worked to convert Figma -> code, and then discovering all the gaps in the process.

Therefore, my advice is to try to build your "AI-based trading card grading system" (or whatever your heart desires) with e.g. GPT-4-Vision, and then figure out how to make the product actually work as a product (just like builder.io).
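A rough sketch of the "LLM inside a state machine" pattern mentioned above; this is a generic illustration, not the commenter's actual product, and call_llm is a hypothetical stand-in for whatever model client is used:

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (hosted API, local model, etc.)."""
    raise NotImplementedError

def classify(ticket: str) -> str:
    # Plain code handles the easy cases; the model only sees ambiguous input.
    if "refund" in ticket.lower():
        return "refund"
    return call_llm(f"Classify this ticket as 'refund' or 'other': {ticket}").strip()

def handle(ticket: str) -> str:
    state = "classify"
    while True:
        if state == "classify":
            state = "refund_flow" if classify(ticket) == "refund" else "draft_reply"
        elif state == "refund_flow":
            return "routed to refunds queue"  # pure code, no model needed
        elif state == "draft_reply":
            return call_llm(f"Draft a short reply to: {ticket}")
```

The control flow stays ordinary, testable code; the LLM is confined to the states that genuinely need open-ended language handling.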
andix, over 1 year ago
I think AI will soon be built into a lot of different software. This is when it will really get awesome and scary.

One simple example is e-mail clients. Somebody asks for a decision or clarification. The AI could extract those questions and just offer some radio buttons, like:

    Accept suggested appointment times:
    [Friday 10:00] [Monday 11:30] [suggest other]

    George wants to know if you are able to present the draft:
    [yes] [no]

I think Zendesk (ticketing software for customer support) already has some AI available. A lot of support requests are probably already answered (mostly) automatically.

Human resources could use AI to screen job applications, let an AI research additional information about the applicant on the internet, and then create standardized database entries (which may be very flawed).

I think those kinds of applications are the interesting ones. Not another ChatGPT extension/plugin.
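A minimal sketch of that extraction step, assuming an LLM that reliably returns JSON when asked; both the call_llm helper and the prompt format are hypothetical:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model call returning raw text."""
    raise NotImplementedError

def extract_decisions(email_body: str) -> list:
    # Ask the model for every question that can be answered with a short choice,
    # so the mail client can render buttons instead of a free-text reply box.
    prompt = (
        'Extract every question in this email that can be answered with a short choice. '
        'Return a JSON list of {"question": str, "options": [str]} objects.\n\n'
        + email_body
    )
    return json.loads(call_llm(prompt))
```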
infixed, over 1 year ago
I think the prose in the preamble is a bit over-flowery and heavy-handed (e.g. LLMs really aren't that expensive, I very much doubt the WSJ claim that Copilot is losing money per user, LLMs aren't always "painfully slow", etc.)

Having said that, the actual recommendations the article offers are pretty reasonable:

- Do as much as you can with code

- For the parts you can't do with code, use specialized AI to solve it

Which is pretty reasonable? But also not particularly novel.

I was hoping the article would go into more depth on how to make an AI product that is actually useful and good. As far as I can tell, there have been a lot of attempts (e.g. the recent Humane launch), but not a whole lot of successes yet.
tqi, over 1 year ago
This post seems pretty focused on the How of building AI products, but personally I think that whether an "AI product" succeeds or fails mostly won't come down to differentiation / cost / speed / model customization, but rather to whether it is genuinely useful.

Unfortunately, most products I've seen so far feel like solutions in search of problems. I personally think the path companies should be taking right now is to identify the most tedious and repetitive parts of using their product and look for ways those can be reliably simplified with AI.
hubraumhugo, over 1 year ago
As in every hype cycle: when all you have is a hammer, everything looks like a nail. A little while ago the hammer was blockchain, now it's AI.
sevensor, over 1 year ago
> One awesome, massive free resource for generating data is simply the Internet.

Isn't that building AI products _exactly_ the way everyone else is doing it? There are things in the world the internet doesn't know much about, like how to interpret sensor data. There are lots of transducers in the world, and the internet knows jack about most of them.
jillesvangurp, over 1 year ago
The article mentions the need to differentiate, which is valid. A related concept here is negative differentiation. You can differentiate yourself negatively by not implementing certain things or by doing them poorly. You always differentiate (positively or negatively) relative to your competitors. If they do a better job than you, you might have a problem.

Adding AI and then doing a poor job isn't necessarily creating a lot of value. So, if you follow the author's advice, you might end up spending a lot of money on creating your own models. And they might not even be that good, and differentiate you negatively.

A lot of companies want to add AI not just because it looks cool but because they see their competitors doing the same and don't want to differentiate negatively.
dmezzetti, over 1 year ago
There is so much available in the open model world. Take a look at the Hugging Face Hub - there are thousands of models that can be used as-is or as a starting point.

And those models don't have to be LLMs. It's still a valid approach to use a smaller BERT model as a text classifier.
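For example, pulling a small pre-trained classifier off the Hub takes a few lines with the transformers library; the sketch below uses a stock sentiment model purely as an illustration, and any fine-tuned BERT variant could be swapped in:

```python
from transformers import pipeline

# Small BERT-family classifier from the Hugging Face Hub; no LLM involved.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("The new release fixed every bug I cared about."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```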
0xDEAFBEAD, over 1 year ago
> ...when we started talking to some large and privacy-focused companies as potential early beta customers, one of the most common pieces of feedback was that they were not able to use OpenAI or any products using OpenAI.

It's interesting to me that there are apparently companies that *won't* let OpenAI see their data, but *will* let a random startup see it. What's going on with that? Does OpenAI have a lax privacy policy or something?
own2pwn, over 1 year ago
GitHub actually denied that they are losing money on Copilot: https://twitter.com/natfriedman/status/1712140497127342404
adrwz, over 1 year ago
Feels like a little too much engineering for an MVP.
blackoil, over 1 year ago
I would give the contrary advice.

* Never build your own model unless you have proven your model and have the expertise to build it. Generic models will take you a long way before cost/quality becomes an issue. Just getting all the data to train an LLM will be a pain. Thousands of the smartest people are spending billions to improve upon it; don't compete with them. And if, downstream, you believe open source or your own model is better, use it then.

* Privacy is overrated. Enterprises are happy to use Google Docs, Office 365 Exchange and cloud, and ChatGPT itself. Unless you are in a domain where you know it will be a concern, trust Azure/OpenAI or Google.

* Let it be an AI startup. It should solve some problem, but if VCs and customers want to hear "AI" and "generative", that's what you are. Don't try to bring sanity to the hand feeding you.
mmoustafa, over 1 year ago
Great tips. I tried to do this with SVG icons in https://unstock.ai before a lot of people started creating text-to-vector solutions. You also have to keep evolving!
j45, over 1 year ago
Shortcuts in an early product can definitely affect flexibility, as new things keep arriving to try out and handcuff things further.

I love speed and frequency of shipping, but thinking about things just a bit (though not too much) doesn't always hurt.

Sometimes simple means using a standard, to keep the innovation points for the insights you implement. Otherwise innovation points can be burnt on infrastructure and maintaining it, instead of on building the insight that arrives.

Finding a sweet spot between too little and too much tooling is akin to someone starting with vanilla JavaScript to learn the value of libraries, and then frameworks, in that order, rather than just jumping into frameworks.
cryptoz, over 1 year ago
> When passing an entire design specification into an LLM and receiving a new representation token by token, generating a response would take several minutes, making it impractical.

Woe is me, it takes minutes to go from user-designed mockup to real, high-quality code? Unacceptable, I tell you!

But seriously, if there are speed improvements you can make that are multiple orders of magnitude, then I do get it; those improvements are game-changing. But I also think we're racing ahead too quickly with expectations here, where minutes is unacceptable now when it used to take a human days. I mean, minutes is still pretty good! IMO.
roguas, over 1 year ago
A lot of disagreement here.

Pluggable LLMs are not something most people look for in a product; it's the rare case of enterprises that are afraid of everything at the moment. Also, fine-tuning should be considered for cost, not really for range (I have not seen examples of fine-tuning that couldn't be achieved via prompting; if someone has any, it would be great to hear them).

Self-driving cars are actually going for end-to-end systems now. There is still infrastructure/architecture, and perhaps even different models that communicate with each other, but regardless, the system behaves as an end-to-end solution that learns internal hierarchies of all the signals/tokens.

Large, super-smart LLMs are where the game is currently at; they will get smaller, faster, and more efficient, and we are already on that curve. If you're building your own LLMs, I think you will fall behind.

Outer leaves of your org fueled by AI: while this model is good for already established companies, it's also super important to think "AI first". AI, much like any previous technology, can be attached to legacy solutions to make them better, or can be shaped into a completely new, previously unimaginable solution. I wouldn't give vanilla advice that discourages the latter.
adriancooney, over 1 year ago
With the pace of AI, that (large) investment in a custom toolchain could be obsolete in a year. It feels like ChatGPT is going to gobble up all AI applications. Data will be the only differentiator.
jongjong, over 1 year ago
I built a no-code, serverless platform and intend to use AI to compose the HTML components together. ChatGPT seems to be good at this based on initial tests. It was able to build a TODO app with authentication, which syncs with the back end, on the first try, using only HTML tags. My platform allows 'logic' to be fully specified declaratively in the HTML, so it helps to reduce complexity and the margin for error. The goal is to reduce app building down to its absolute bare essentials, then let the AI work with that.
JSavageOne, over 1 year ago
> "One way we explored approaching this was using puppeteer to automate opening websites in a web browser, taking a screenshot of the site, and traversing the HTML to find the img tags.

> We then used the location of the images as the output data and the screenshot of the webpage as the input data. And now we have exactly what we need -- a source image and coordinates of where all the sub-images are to train this AI model."

I don't quite understand this part. How does this lead to a model that can generate code from a UI?
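The rough shape of the quoted data-collection step, as far as it can be reconstructed from the quote: the screenshot becomes the model input and the image bounding boxes become the training target. The sketch below uses Playwright's Python API as a stand-in for puppeteer and is only an illustration, not builder.io's actual code:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={"width": 1280, "height": 800})
    page.goto("https://example.com")
    page.screenshot(path="input.png", full_page=True)  # model input
    boxes = page.eval_on_selector_all(
        "img",
        """els => els.map(e => {
            const r = e.getBoundingClientRect();
            return {x: r.x, y: r.y, width: r.width, height: r.height};
        })""",
    )
    print(boxes)  # model target: where the sub-images sit on the page
    browser.close()
```

By itself this only trains a model to locate images in a screenshot, which is presumably one specialized step in a larger design-to-code toolchain rather than the whole code generator.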
liuliu, over 1 year ago
Very similar sentiment to when the App Store was built. Everyone tried to avoid hosting their business on someone else's platform; hence FB tried to do H5 within their app (so it would be open-standard, a.k.a. web, based), people launched their own mobile phones, etc.

At the end of the day, having an app in the App Store is OK as long as you can accumulate something the platform company cannot access (a social network, a driver network, etc.). OpenAI's thing is too early, but similar thinking might apply there too.
EMM_386, over 1 year ago
> When passing an entire design specification into an LLM and receiving a new representation token by token, generating a response would take several minutes, making it impractical.

Meanwhile, we used to sit around the office waiting on compilers, after which we could see if recent changes actually worked.

Now?

"5 minutes of a spinning cursor for my design specification to result in usable software?! Ridiculous!"
yagami_takayuki, over 1 year ago
I feel like chat with a pdf is the easiest thing to integrate into various niches -- fitness, nutrition, so many different options
YetAnotherNick, over 1 year ago
While the differentiation aspect is real, on pricing I did some calculations for self-hosting, and even with small models you are likely to lose money unless you have very high RPS with users who can tolerate some random delay. It's very hard to get even a 7B model to be cheaper than the ChatGPT API. And that was before the price reduction.
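A back-of-envelope version of that comparison, where every number is an illustrative assumption rather than a measurement:

```python
# All figures below are assumptions for illustration only.
gpu_cost_per_hour = 1.50        # assumed rental price for one inference GPU, USD
tokens_per_second = 60          # assumed sustained throughput for a 7B model at low batch size
api_price_per_1m_tokens = 2.00  # assumed hosted-API price, USD per 1M tokens

tokens_per_hour = tokens_per_second * 3600
self_hosted_per_1m = gpu_cost_per_hour / tokens_per_hour * 1_000_000
print(f"self-hosted: ${self_hosted_per_1m:.2f}/1M tokens vs hosted API: ${api_price_per_1m:.2f}/1M tokens")
```

With these assumed numbers the self-hosted cost comes out around $7 per million tokens, before accounting for idle GPU time; only high, steady request volume (large batches, consistent RPS) closes the gap.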
wslh, over 1 year ago
I would say something more radical: the same AI product that you have in mind is being built by many companies at the same time, so wait for clarity in the space. Use suspense in your favor; sometimes not doing everything is the best option. This can be applied to every hyped field, but AI is especially mesmerizing all of us.
startages, over 1 year ago
That's a great post. I like the idea and was trying to do something similar myself, but it just takes so much time to write a toolset that can be easily replaced with some LLM API in 30 minutes. Still, many of the points in this post are valid and have their own use cases.
yieldcrv, over 1 year ago
> They use a simple technique, with a pre-trained model, which anyone can copy in a very short period of time.

This article acts like the risk was something the creators cared about.

All they wanted was some paid subscribers for a couple of months, or some salaries paid by VCs for the next 18 months.

In which case, mission accomplished for everyone.
m3kw9, over 1 year ago
The latency is still too high to build LLM products other than chatbots, where people expect a delay. The rate limits are also a non-starter. And most app ideas involving LLMs differ only in how well the UI is done. That's the differentiator right now in AI apps.
orliesaurus, over 1 year ago
I totally agree with this article. It's actually not that complicated to build your own toolchain; you can use one of the many open models, and if you're building it for profit, make sure you read the ToS.

Build a moat y'all - or be prepared to potentially shut down!
sanitycheck, over 1 year ago
The tech is moving incredibly fast. I think at the moment putting minimal effort into some sort of OAI API wrapper is precisely the right thing to do for most companies whose AI business case is 90% "don't be seen to get left behind".
atleastoptimal, over 1 year ago
This makes sense because the Figma -> code conversion is very programmatic. For anything more semantic or more vague in approach, a heavier dependence on LLMs might be needed until the infrastructure matures.
scaraffe, over 1 year ago
Any idea what kind of 'custom-trained' model builder.io uses? Is it some kind of RNN? They claim to have a 100k context window.
it, over 1 year ago
To view this page without the annoying animations, I recommend printing it to PDF or paper. Safari reader mode doesn't work on it.
mediumsmart, over 1 year ago
A good start would be to build AI products for everyone else. And since this thread has defined the number one 'mere puny user' problem to solve (see, that wasn't hard, was it?), the marching orders are done too. Now get to it, me hearties, and build something actually useful - money will of course be a collateral side effect, so no need to worry about that. You can write history here instead of comment-blogging around the interwebs. Godspeed.
digitcatphd, over 1 year ago
IMO the counter argument is to initially rely on commercial models and then make it an objective to swap them out.
nothrowaways, over 1 year ago
Google search dearly needs this advice.
gdiamos, over 1 year ago
I think it's sad that LLMs have become so hostile to builders. It doesn't have to be this way.
aftoprokrustes, over 1 year ago
> That car driving itself is not one big AI brain.

> Instead of a whole toolchain of specialized models, all connected with normal code -- such as models for computer vision to find and identify objects, predictive decision-making, anticipating the actions of others, or natural language processing for understanding voice commands -- all of these specialized models are combined with tons of just normal code and logic that creates the end result: a car that can drive itself.

Or, as I like to say it: what we now call "AI" actually refers to the "dumb" part (which does not mean easy or simple!) of the system. When we speak of an intelligent human driver, we do not mean that they are able to differentiate between a stop sign and a pigeon, or understand when their partner asks them to "please stop by the bakery on the way home" -- we mean that they know what decision to take based on this data in order to have the best trip possible. That is, we refer to the part done with "tons of normal code", as the article puts it.

Needless to say, I am not impressed by the predictions of "AI singularity" and whatever other nonsense AI evangelists try to make us believe.
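A toy sketch of what "specialized models combined with normal code" looks like in practice; every function here is a hypothetical stand-in for a separate narrow model, and the point is that the decision-making lives in ordinary logic:

```python
def detect_objects(frame):
    return []    # stand-in for a computer-vision model

def predict_trajectories(objects):
    return []    # stand-in for a prediction model

def parse_voice_command(audio):
    return None  # stand-in for a speech/NLP model

def drive_step(frame, audio=None):
    objects = detect_objects(frame)
    paths = predict_trajectories(objects)
    command = parse_voice_command(audio) if audio else None
    # The "dumb" part the comment points at: plain control logic over model outputs.
    if any(p.get("collides") for p in paths):
        return "brake"
    if command == "pull over":
        return "pull_over"
    return "continue"
```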
ge96, over 1 year ago
Man, I saw this product recently that was like "Use AI for SEO, everything else sucks", at $3K MRR. I feel like people can just make things up: hype, a landing page, people buy it, get burned, the company disappears.
_pdp_, over 1 year ago
Recently, I embarked on a project to create a song as a tribute to my colleagues' exceptional work in a specific domain. My tools? OpenAI for lyric generation, tailored to my specifications, and Suno for vocal and track synthesis. The resulting song was a blend of AI-driven creativity and my vision. However, as I prepared to share this creation on Slack, I pondered the nature of authorship in the AI era. Was I truly the 'creator' when automated processes played a significant role?

This led to a broader realization: the song wouldn't exist without my initial concept and the nuanced curation involved in its completion. It's not merely that AI executed 90% of the work; it's that my 10% contribution leveraged these advanced tools to achieve a 90% outcome, a testament to the power of technology in amplifying human creativity.

In a world where websites, businesses, and SaaS tools can be launched in mere minutes, it's becoming increasingly clear that ideas and the ability to effectively harness technology will be paramount. This shift raises fascinating questions about the future of creativity and the evolving role of the human in the creative process.

My key message is this: "So what if your business heavily relies on OpenAI models?" The unique prompts you craft hold intrinsic value. They don't diminish the time, expertise, and knowledge you invest in shaping the results. Take designing a 3D chair using an AI system, for instance: achieving optimal results hinges on your ability to precisely describe what you need, a skill that itself depends on your understanding and knowledge of design. In this context, delving into the classics and broadening your educational horizons is more crucial than ever. It equips you with the nuanced articulation needed to harness AI's potential fully.

P.S. An AI model assisted me in crafting this comment, but the experiences and insights I've shared are my own, as are the majority of the words in this text. The advantage I gain from AI is the better articulation of my ideas. This tool is akin to a dictionary, a grammar-checking tool, or a system that translates my native tongue into English.
jmtulloss, over 1 year ago
Counterpoint: do whatever you want.
FailMore, over 1 year ago
Thank you, I thought that was great.
fullofdev, over 1 year ago
I think in the end it comes down to "is it helpful for the customer or not".
nothrowaways, over 1 year ago
Please tell it to Google search.
bob1029, over 1 year ago
> The solution: create your own toolchain

No thanks. I have an actual job & customer needs to tend to. I am about 80% of the way through integrating with the OAI assistant API.

The real secret is to already have a viable business that AI can subsequently *improve*. Making AI *the business* is a joke of a model to me. You'd have an easier time pitching JavaScript frameworks in our shop.

Our current application of AI is a 1:1 mapping between an OAI assistant thread and the comment chain for a given GitHub issue. In this context of use, latency is absolutely not a problem. We can spend 10 minutes looking for an answer and it would still feel entirely natural from the perspective of our employees and customers.
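A minimal sketch of that thread-per-issue mapping, assuming the openai Python SDK's Assistants (beta) endpoints; the mapping store and handler are hypothetical, and this is a generic illustration rather than the integration described above:

```python
from openai import OpenAI

client = OpenAI()      # assumes OPENAI_API_KEY in the environment
issue_to_thread = {}   # hypothetical persistent store: GitHub issue number -> thread id

def on_issue_comment(issue_number: int, comment_body: str, assistant_id: str) -> str:
    """Mirror a GitHub issue comment into the assistant thread mapped 1:1 to that issue."""
    if issue_number not in issue_to_thread:
        issue_to_thread[issue_number] = client.beta.threads.create().id
    thread_id = issue_to_thread[issue_number]
    client.beta.threads.messages.create(thread_id=thread_id, role="user", content=comment_body)
    run = client.beta.threads.runs.create(thread_id=thread_id, assistant_id=assistant_id)
    return run.id  # poll the run and post its reply back to the issue; latency is no concern here
```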
wg0, over 1 year ago
I've survived "What is your organization's Kubernetes strategy?"

And then came along "You need to integrate blockchain into your business processes."

And now it's the "Make your products smart with AI" season.
happytiger, over 1 year ago
The issue here isn't AI, it's not shovels and gold rushes, and it's not about building how others are building.

It's fundamental value.

It's about who is creating value that cannot be destroyed. Who owns the house is determined by who builds the foundation first, and that means those who control the ecosystems.

All others will play, survive, rent, and buy inside of those ecosystems.

If you're not building fundamental value, you are an intermediary. Intermediaries can be huge companies, but they are ultimately companies built on others. If you don't own the API *and* the customer, you're a renter. And renters can get evicted.

Those opportunities may still be worth chasing, but we shouldn't get confused or overcomplicate what's going on, or we risk investing in and building straw houses when brick was available.

Nothing wrong with that. Respect to success. But let's keep fundamental value in mind, as it's the most important thing for first-generation technology companies.
nittanymount, over 1 year ago
Good points! :+1: