Show HN: Natural language Twitter search using Codex

5 points | by andyk | over 2 years ago
We built a structured search engine for Twitter called Bird SQL, available at https://www.perplexity.ai/sql. Our search interface uses OpenAI Codex to translate natural language to SQL. Our backend then verifies the SQL, executes it, and displays the results in the web app. This makes large structured datasets like a scrape of Twitter easy for anyone to explore.

As background, while working on text-to-SQL as a general problem, we came to believe one of its most powerful applications is as a search tool because:

- SQL is hard to write by hand and prone to errors
- It allows you to iterate quickly when you're exploring a new dataset
- A lot of contextual information that you'd normally have to internalize (e.g. your data's schema) can be automatically generated and offloaded to the language model

Using large language models (LLMs) like Codex to write the SQL for you means you don't have to worry about the nitty-gritty language details, but you still benefit from the power of a language like SQL. Also, after seeing the results of the query, you can inspect (and if necessary, change) the SQL. The lack of this sort of explainability of the query result is one of the more notorious challenges of returning the output of an LLM directly to the user. Additionally, using LLMs in this way makes these kinds of queries over structured data accessible to people who know little or no SQL.

While Bird SQL shares significant infrastructure with our more general LLM-powered search engine over unstructured data (Ask Perplexity - https://perplexity.ai [1]), the two approaches and their respective challenges are quite different. For example, the types of models are different (GPT-3.5 vs. Codex), the model prompts obviously have different structures, and verifying model output when it's text differs from verifying it when it's code. We are currently exploring ways to combine the two approaches, such as using the results of retrieving information from a structured source (as in Bird SQL) as one of the inputs for the LLM to interpret or summarize (as in Ask Perplexity).

We would love to hear your questions, suggestions, and feedback!

[1] https://news.ycombinator.com/item?id=33897884
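
For readers who want a feel for the pipeline the post describes (translate a natural-language question into SQL, verify the SQL, then execute it and return rows), here is a minimal Python sketch. It is not the authors' implementation: the OpenAI client call, the `gpt-4o-mini` model name (a stand-in for Codex, which has since been retired), and the toy `tweets` schema are all assumptions made for illustration.

```python
import sqlite3
from openai import OpenAI  # assumes the `openai` Python package (v1+) is installed

# Toy schema standing in for the real Twitter scrape (assumption for illustration).
SCHEMA = """
CREATE TABLE tweets (
    id INTEGER PRIMARY KEY,
    author TEXT,
    text TEXT,
    likes INTEGER,
    created_at TEXT
);
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def question_to_sql(question: str) -> str:
    """Ask the model to write a single SQLite SELECT answering the question."""
    prompt = (
        "Given this SQLite schema:\n" + SCHEMA +
        "\nWrite one SELECT statement (no commentary) answering:\n" + question
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model name; the original post used Codex
        messages=[{"role": "user", "content": prompt}],
    )
    sql = resp.choices[0].message.content.strip()
    if sql.startswith("```"):
        # drop a markdown code fence if the model wrapped its answer in one
        sql = "\n".join(line for line in sql.splitlines()
                        if not line.startswith("```"))
    return sql.strip()


def verify_sql(conn: sqlite3.Connection, sql: str) -> None:
    """Reject anything that is not a SELECT, then let SQLite validate the
    statement via EXPLAIN without actually running it against the data."""
    if not sql.lstrip().lower().startswith("select"):
        raise ValueError("only SELECT statements are allowed")
    conn.execute("EXPLAIN " + sql)  # raises sqlite3.OperationalError if invalid


def answer(conn: sqlite3.Connection, question: str):
    """Full loop: generate SQL, verify it, execute it, return SQL plus rows."""
    sql = question_to_sql(question)
    verify_sql(conn, sql)
    return sql, conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    sql, rows = answer(conn, "Which 10 authors have the most total likes?")
    print(sql)
    print(rows)
```

Returning the generated SQL alongside the rows mirrors the explainability point above: the user can inspect, and if necessary edit, the query that actually produced the results.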

No comments yet.
