
Ask HN: How did ChatGPT4 blow your mind lately?

44 points by endorphine · about 2 years ago
When was the last time that ChatGPT4 blew your mind with its response?

22 comments

lukevp · about 2 years ago
I see a lot of replies here talking about asking GPT questions in a domain an individual is an expert in, and being disappointed in the results. I think this is missing the point a bit. The impressive part of GPT isn't that it's an expert in anything, but that it's better than an average human on many topics. For example, someone brought up GPT not comprehending homomorphic encryption fully. Well, I have an MS in CS and have been professionally developing software for 15 years, and I've even READ articles on homomorphic encryption multiple times, but I can't articulate what it is very well. I just remember it's inefficient and infeasible at this time.

The thing that is mind-blowing to me is that I can talk with GPT about basically any topic and it can help get me in the right direction, I can talk with it to get clarity on anything that's confusing, and it's like a way to collaborate on thought without a human. That's mind-blowing to me!
noud · about 2 years ago
ChatGPT4 blew my mind when I asked it some specific details about my PhD thesis. It was a relatively simple question, but you needed some domain knowledge to give the correct answer. ChatGPT not only gave a wrong answer, it made up a story that was beyond repair. It is concerning (and mind-blowing) how confidently ChatGPT can provide a completely wrong answer.
micahscopes · about 2 years ago
I recently used it (in combination with Copilot) to help me implement weighted, blended, order-independent transparency for a network visualization project I'm working on... in just one long evening.

Only to realize in the process that all I really needed was to modify the shader program's depth checking behavior. Even just disabling it altogether looks fine and makes everything way faster. Never in my wildest imagination a year ago would I have dreamed of implementing WBOIT in one night and then just leaving it in a branch for a one-line change.

In the process I realized a little trick on my own, using what I learned: modifying the depth values in the fragment shader could allow selective alpha-blending/occlusion. Basically the idea was to put stuff I wanted to blend order-independently all at an equal z position and then set the depth function to "less than or equal" instead of "less than". I used the occluding geometry's depth buffer as a source to modify the translucent stuff's depth buffer in the fragment shader (with a slight offset to get it behind the occluding stuff).

There's absolutely no way I'd have had the guts to try all that in one night without the robot tools.
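The equal-z trick described above can be illustrated with a toy depth test in Python. The function and constant names here are made up for illustration (a real renderer would do this in the GPU pipeline via something like glDepthFunc); the point is only that switching from "less than" to "less than or equal" lets every fragment at the same z survive instead of only the first:

```python
# Toy software depth test: fragments meant to alpha-blend are all placed
# at the same z value; LEQUAL lets all of them pass, LESS only the first.

def depth_test_pass(fragment_z, buffer_z, func):
    """Return True if a fragment at fragment_z survives against buffer_z."""
    if func == "LESS":
        return fragment_z < buffer_z
    if func == "LEQUAL":
        return fragment_z <= buffer_z
    raise ValueError(f"unknown depth func: {func}")

def count_blended(fragments_z, func, clear_z=1.0):
    """Count how many fragments at the same pixel pass the depth test."""
    buffer_z = clear_z
    passed = 0
    for z in fragments_z:
        if depth_test_pass(z, buffer_z, func):
            passed += 1
            buffer_z = z  # depth write
    return passed

# Three translucent fragments placed at an identical z of 0.5:
same_z = [0.5, 0.5, 0.5]
print(count_blended(same_z, "LESS"))    # only the first fragment passes
print(count_blended(same_z, "LEQUAL"))  # all three pass and can blend
```

With LESS, the first fragment writes z = 0.5 and the rest fail the strict comparison; with LEQUAL, all three pass and can be blended, which is exactly why the commenter parks blendable geometry at one z value.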
themoonisachees · about 2 years ago
I've been automating some of our team's most menial tasks, and since we're in a corporate environment we don't have control over what is and isn't installed on our servers, so using bash is the sane default choice.

None of the scripts it produced are incredible or anything (the tasks being more menial than logic-based), but considering the intricacies of bash and how often I faceplant into them even as an experienced sysadmin, I feel as though the ability to just say "write a script that compares a list of agents against every manager and outputs the ones not found anywhere" and have it output a basically working script on the first try is impressive.
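The task quoted above boils down to a set difference. The commenter's tool was bash, but the same logic can be sketched in a few lines of Python (the file layout and names below are assumptions for the sake of the example):

```python
# Compare a list of agents against every manager's assignments and
# output the agents not found anywhere.

def unmanaged_agents(agents, managers_to_agents):
    """Return agents that appear under no manager, preserving input order."""
    seen = set()
    for assigned in managers_to_agents.values():
        seen.update(assigned)
    return [a for a in agents if a not in seen]

agents = ["web-01", "web-02", "db-01", "cache-01"]
managers_to_agents = {
    "mgr-a": ["web-01", "db-01"],
    "mgr-b": ["web-02"],
}
print(unmanaged_agents(agents, managers_to_agents))  # ['cache-01']
```

In bash the equivalent is usually a `comm -23` over two sorted lists, which is exactly the kind of flag-ordering intricacy the commenter is happy to delegate.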
mike_hearn · about 2 years ago
As a side project I've been writing a tool that drives GPT4 for coding purposes. It finds FIXME comments and equips the LLM with a set of high-level shell commands it can use to explore the codebase and edit the files. This works well, and I find it superior to the (often semi-broken) chat UI that OpenAI provides. There's no need for copy-paste, and it can be given instructions like

    // FIXME: Implement this class paying attention to the unit tests

at which point it will go look up the unit test class, read it, understand what the code needs to do, and proceed to edit the necessary algorithms into the code. The tool is a mix of response parsing/execution and prompt building, so it can do build/test/fix cycles. At the end you get a git branch with the work it's done.

By this point I'm developing an intuition for what GPT4 can and can't do, but while pushing against what it *can* do I still often find myself impressed. Maybe not "mind blown" anymore, because it's amazing how quickly you get used to this stuff, but still. Not only does it do an excellent job of figuring out what to do when the instructions are clear enough, it also has commands for adding library dependencies and will use relevant open-source libs to make its job easier.

The hard part about using this sort of tool is that you can quickly become bottlenecked on figuring out what you actually want. It's easy to run ahead of yourself and end up with a mess. This very rapid "painting with code" feel is a bit new; probably the slowness of typing normally lets your subconscious think ahead while programming. Also, I'm still teaching it how to explore the codebase efficiently, so things that require a lot of context aren't quite there yet. I've got a whole plan for how to make that work better, though.

I expect at some point soon some big company or some $100M "seed round" startup will come along and do this better, but I didn't want to wait.
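The front half of the workflow described above — scan a codebase for FIXME comments and turn each into a prompt — can be sketched in a few lines. The commenter's tool, its shell commands, and its prompts are not public, so everything here (file glob, prompt wording, the stubbed-out model call) is an assumption for illustration:

```python
# Minimal sketch of a FIXME-driven coding loop: find FIXME comments,
# build a prompt per hit, and hand it to an LLM (call stubbed out here).
import re
from pathlib import Path

FIXME_RE = re.compile(r"//\s*FIXME:\s*(.+)")

def find_fixmes(root):
    """Yield (path, line_number, instruction) for every FIXME comment."""
    for path in Path(root).rglob("*.java"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            match = FIXME_RE.search(line)
            if match:
                yield path, lineno, match.group(1).strip()

def build_prompt(path, lineno, instruction):
    # The real tool would also include shell-command output, unit tests,
    # and prior edits; here it is a bare prompt string.
    return (f"In {path} at line {lineno}, complete this task:\n"
            f"{instruction}\n")

# A driver would then loop: send the prompt, parse the model's edit
# commands, apply them, run the build/tests, and feed failures back in,
# finally committing the result to a fresh git branch.
```

The interesting engineering is in the loop the final comment gestures at: parsing the model's commands and feeding build/test failures back, which is what turns one-shot completion into the build/test/fix cycle the commenter describes.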
ian0 · about 2 years ago
I asked it to draft a legal document based on some bullet points and it created it, in a different language, in keeping with the legal norms of that country. I sent it to a local lawyer for review and they came back with only very minor things.

I did the reverse with some regulatory documents too, asking it to summarise them in the vernacular and then asking specific questions to better wrap my head around the docs. I wouldn't trust it fully for this, of course; it's more like a supplementary learning aid.

I keep hitting limits with context size, though. E.g. when coding, it becomes cumbersome once your project hits a certain complexity. You have to carefully construct prompts to get useful outputs. Being able to add the entire source code into a prompt to get more contextual responses would really increase its utility.
pcdoodle · about 2 years ago
That I'm finally breaking my "just google it" habits for technical stuff.

I had it write non-blocking microcontroller code for a project (about 75% of the output was put into production); it helped with variable names and general structure. The first output was bad because it blocked the button read while "breathing" an LED, but then I told it: "non-blocking code please".
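The non-blocking fix the commenter asked for comes down to one rule: never sleep inside the loop; compute everything from timestamps so the button is polled on every pass. On a microcontroller this is usually done with millis(); below is the same idea in plain Python with the hardware reads stubbed out (all names are illustrative):

```python
# Non-blocking "breathing" LED: brightness is a pure function of time,
# so the loop can poll the button on every iteration without delay().
import time

def breathing_level(now, period=2.0):
    """LED brightness 0..1 following a triangle wave, computed from time."""
    phase = (now % period) / period
    return 2 * phase if phase < 0.5 else 2 * (1 - phase)

def run(read_button, set_led, duration=0.1, clock=time.monotonic):
    """Poll the button and animate the LED concurrently; count presses."""
    start = clock()
    presses = 0
    last_state = False
    while clock() - start < duration:
        # Both happen every pass; neither blocks the other.
        state = read_button()
        if state and not last_state:
            presses += 1        # rising edge caught without any sleep()
        last_state = state
        set_led(breathing_level(clock() - start))
    return presses
```

The blocking version the commenter complained about would instead ramp the LED with a sleep per brightness step, during which a button press is simply missed.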
smoldesu · about 2 years ago
Honestly, it hasn't. Last night I was trying to use it/Bing to write a simple API request based on a JSON schema, and it failed miserably. It was nothing super complicated either; it was just a poor fit. Both GPT3 and GPT4 repeatedly generated nonworking code, which surprised (and frustrated) me.
jumploops · about 2 years ago
I think people are most surprised when using it outside of their own domain.

As an example: I recently showed my mother GPT-4. I asked her what she wanted to see, and having little knowledge of the tool's capabilities, she asked me to surprise her. Apparently I had done this before (with GPT-3), as the poem I generated about Petoskey stones was met with little fanfare: "I've seen it generate poems... what else can it do?"

At that point I showed her a Python file and asked GPT to translate it to TypeScript. Needless to say she's not a poet or a programmer, but the fact that it could do something like that, entirely outside of her domain, was mind-blowing.

For me, I'm surprised how little I use Google...
teleforce · about 2 years ago
There are a few types of RF receiver architecture, and I asked ChatGPT4 to provide Mermaid code for a direct-conversion receiver architecture, since it does not support image generation. The provided code contained some errors when copied and pasted into Mermaid. I then fed the errors back to ChatGPT4; it apologized and duly provided correct code from which Mermaid could draw the receiver diagram. After that, the diagram was fed into Kroki and draw.io for further processing, resulting in a direct-conversion diagram with editable boxes and text, ready to be used or modified in draw.io.
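The kind of Mermaid source being asked for can also be generated mechanically. Below, a simplified direct-conversion (zero-IF) receiver chain is written as a list of stages and emitted as a Mermaid flowchart; the block names follow the standard architecture (a full zero-IF front end would split into I and Q paths after the LNA), and the Python is just string assembly:

```python
# Emit a Mermaid flowchart for a simplified direct-conversion receiver.
STAGES = [
    ("ant", "Antenna"),
    ("lna", "Low-Noise Amplifier"),
    ("mix", "Mixer (LO at carrier frequency)"),
    ("lpf", "Low-Pass Filter"),
    ("adc", "ADC"),
    ("dsp", "Baseband DSP"),
]

def to_mermaid(stages):
    lines = ["flowchart LR"]
    for node_id, label in stages:
        lines.append(f'    {node_id}["{label}"]')
    for (a, _), (b, _) in zip(stages, stages[1:]):
        lines.append(f"    {a} --> {b}")
    return "\n".join(lines)

print(to_mermaid(STAGES))
```

The emitted text renders directly in Mermaid (or via Kroki), and each node remains an editable box when imported into draw.io, matching the workflow described above.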
akasakahakada · about 2 years ago
I had a novel idea about some math, and GPT4 gave me legitimate advice and implications.
heresjohnny · about 2 years ago
It blew my mind when I gave it a TypeScript function with multiple nested awaits, all tangled up into a chain of multiple `map` calls. A comment I wrote above it was “TODO refactor this monstrosity.” It was clever enough to split it into a main function and multiple utils. And it added JSDocs. The result passed my existing tests immediately.
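The commenter's code was TypeScript, but the shape of that refactor translates directly to Python's asyncio: a tangled chain of nested awaits and maps split into a main function plus small named helpers. The data and fetch function below are stand-ins:

```python
# Before/after sketch of untangling a chain of nested awaits and maps.
import asyncio

async def fetch_user(uid):
    return {"id": uid, "score": uid * 10}   # stand-in for an API call

# Before: everything inline — hard to read, hard to test.
async def report_tangled(uids):
    return [u["score"]
            for u in await asyncio.gather(*map(fetch_user, uids))
            if u["score"] > 10]

# After: each step is a named, separately testable unit.
async def fetch_users(uids):
    """Fetch all users concurrently."""
    return await asyncio.gather(*(fetch_user(uid) for uid in uids))

def high_scores(users, threshold=10):
    """Keep only scores above the threshold."""
    return [u["score"] for u in users if u["score"] > threshold]

async def report(uids):
    return high_scores(await fetch_users(uids))

print(asyncio.run(report([1, 2, 3])))  # [20, 30]
```

The payoff is the same one the commenter saw: `high_scores` is a pure function that existing tests can hit directly, without mocking any async machinery.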
goertzen · about 2 years ago
It's extremely underutilized.

Everyone could be producing way more than they currently are.

Everyone could be generating one app every 3 hours.

Everyone could be writing entire novels.

Everyone could be writing meaningful Wikipedia articles.

DIY guides should be 100x larger.
subjectsigma · about 2 years ago
This week, between ChatGPT and Phind, I asked an AI four questions. Two were about SMT-LIB, one was about homomorphic encryption, and one was about software-defined networking.

Both ChatGPT and Phind insisted on giving me example SMT problems and then incorrect solutions to them. The HE answer was so vague as to be useless, and I don't actually know enough about SDN to verify the last one, but it *seemed* useful.

Overall it didn't give me a lot of confidence in the two systems.
taf2 · about 2 years ago
I was at the beach and used it to write a streaming C++ web server integrating whisper.cpp. I should open-source it eventually, but it was totally amazing: I used to write C++ all the time, but it's been about 15 years, so I figured I'd use C++20, and sure enough, even with a few drinks and a lot of sun, the code came together really nicely.
ibitto · about 2 years ago
I was testing the Pixie observability tool for an internal PoC. It has only Go and Python client libraries. I asked Bing AI how to call the API from Java. It made up a fake Java client library and gave instructions for adding the Maven dependency. When prompted further, it gave some sample code as well.
dsrtslnd23 · about 2 years ago
It really helps me overcome inertia and unblocks my mind to get started, even if it doesn't always get things right.
xory · about 2 years ago
When 4 perfectly followed a complex prompt that I had spent hours trying (unsuccessfully) to get 3.5-turbo to follow even somewhat acceptably. Both models continue to impress me with their ability to tell me how to tune my prompts to get better results more consistently.
theRealMe · about 2 years ago
I don't know about "blow my mind", but I was surprised that it helped me figure out "the song at sporting events that goes 'boom boom, ch'."

Maybe that's an easy thing to know, but I expected not to get a good answer.
aprdm · about 2 years ago
Overall it's been great: generating Elasticsearch queries and converting Pydantic models to TypeScript.
demarq · about 2 years ago
Its ability to debug is UNPARALLELED!
1270018080 · about 2 years ago
What really blows my mind is that people are using a language model to do anything important, with confidence.