The example here is a bit worrying for the peer review process. I am not looking forward to my "peers" reviewing my paper by putting it through LLMs and blindly copy-pasting the output. I can already imagine emailing the Area Chair and saying "While reviewer 2 is detailed, the questions show a severe lack of basic understanding. We believe the contents are AI generated."

Then again, perhaps LLMs could simply be incorporated into the peer-review process, where after submitting your paper, you'd have to answer the AI's basic questions. As a reviewer, I could imagine a structured AI report on a paper being helpful in guiding discussion: "The paper compares to recent approaches X, Y, and Z, and the work is indeed novel."
Caution: language models do not know what is salient to a human, and they have a strong bias toward information they have seen frequently. Research papers contain a larger amount of new information, and it's that new information which is most valuable to us but least salient according to the models.
I've used ChatGPT to make short factual videos for YouTube, and honestly its handling of supposed 'facts' is a bit worrying.

I would not suggest anyone use ChatGPT outputs as a source of actual knowledge at this point.
This is dangerous because people who have no knowledge of the science will blindly trust whatever it summarizes, and there is no easy way to verify. If you ask it to summarize a book on a subject you understand, you can at least sense some BS or open the book and verify a few points. Here you would be at GPT-3's mercy.
Great work!
Glad to see innovations based on my work <a href="https://github.com/wong2/chatgpt-google-extension">https://github.com/wong2/chatgpt-google-extension</a>
That's why I open sourced the code!
I think my favorite part of this prompt is that it starts with, "Please..."

With this new class of products based on crafting prompts that best exploit a GPT's algorithm and training data, are we going to start seeing pull requests that tweak individual parts or words of the prompt? I'm also curious what the test suite for projects like this would look like: checking that the responses for specific inputs contain specific facts or phrases.
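For what it's worth, here's a minimal sketch of what such a prompt regression test might look like. Everything here is hypothetical (the `ask_model` helper and the test cases are stand-ins, not from this project); the point is that you assert on key facts rather than exact wording, since the output is nondeterministic.

```python
# Hypothetical prompt regression test. ask_model is a stand-in for
# whatever function sends the crafted prompt to the model and
# returns the response text.
import pytest

from extension_prompt import ask_model  # hypothetical helper

CASES = [
    # (abstract snippet, phrases the summary should mention)
    ("We introduce a transformer-based model for ...", ["transformer"]),
    ("Our method outperforms ResNet-50 on ImageNet ...", ["ResNet-50", "ImageNet"]),
]

@pytest.mark.parametrize("abstract,expected_phrases", CASES)
def test_summary_mentions_key_facts(abstract, expected_phrases):
    summary = ask_model(f"Please summarize: {abstract}")
    for phrase in expected_phrases:
        # Check for key facts, not exact wording, since model output
        # varies from run to run.
        assert phrase.lower() in summary.lower()
```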
To make it work in Brave you need to turn off language-based fingerprinting protection [1]. I wonder how that's related.

Edit: btw, congratulations on the release. This is the kind of stuff I think should be explored more using LLMs. Great choice making it a Chrome extension; it's a great UI for this kind of thing.

[1] <a href="https://github.com/hunkimForks/chatgpt-arxiv-extension#how-to-make-it-work-in-brave">https://github.com/hunkimForks/chatgpt-arxiv-extension#how-t...</a>
This isn't viable because of bias and blatant lies in LLM outputs.

<a href="https://huggingface.co/ml6team/keyphrase-extraction-kbir-inspec" rel="nofollow">https://huggingface.co/ml6team/keyphrase-extraction-kbir-ins...</a> is a decent tool for exploring the constant stream of publications. The last mile is still left to the human.
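For anyone curious, a rough sketch of running that model through the generic `transformers` token-classification pipeline follows. The aggregation setting is my assumption of a reasonable setup, not necessarily the model card's exact recipe.

```python
# Rough sketch: extract keyphrases with the linked model via the
# generic token-classification pipeline. aggregation_strategy is an
# assumption; the model card ships its own pipeline subclass.
from transformers import pipeline

extractor = pipeline(
    "token-classification",
    model="ml6team/keyphrase-extraction-kbir-inspec",
    aggregation_strategy="simple",  # merge B-/I- tags into whole phrases
)

abstract = (
    "Keyphrase extraction is a technique in text analysis where you "
    "extract the important keyphrases from a document."
)

keyphrases = {result["word"].strip() for result in extractor(abstract)}
print(keyphrases)
```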
I'm wondering about two things related to developing customer-facing programs that use paid APIs in the background:

Are you using your own API key and paying for the usage? How can you justify operating a program that produces high costs but no income?
Isn't the API key publicly exposed on the client side, and thus a possible subject of theft and abuse?
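As far as I know, the usual answer is to never ship the key to the client at all and instead route requests through a thin server-side proxy that holds it. A minimal sketch (the endpoint name, model, and size cap here are made up for illustration):

```python
# Minimal sketch of a server-side proxy that keeps the paid API key
# off the client. Endpoint name, model, and size cap are assumptions.
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
API_KEY = os.environ["OPENAI_API_KEY"]  # stays on the server only

@app.post("/summarize")
def summarize():
    text = request.get_json().get("text", "")[:4000]  # crude size cap
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-003",
            "prompt": f"Summarize: {text}",
            "max_tokens": 256,
        },
        timeout=30,
    )
    return jsonify(resp.json())
```

The client only ever talks to the proxy, so the key never leaves the server; you'd still want per-user rate limiting there to cap the abuse and the bill.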