
Ask HN: How is the community using LLMs for data cleaning/enriching/structuring?

7 points by jarulraj over 1 year ago
Would love to learn how the community is using LLMs for data wrangling [1] and to exchange prompts. For example, we iterated several times on the following "data structuring" prompt for our GitHub Stargazers app [2, 3]:

--- Prompt to GPT-3.5

You are given a block of disorganized text extracted from the GitHub user profile of a user using an automated web scraper. The goal is to get structured results from this data. Extract the following fields from the text: name, country, city, email, occupation, programming_languages, topics_of_interest, social_media. If some field is not found, just output fieldname: N/A. Always return all the 8 field names. DO NOT add any additional text to your output. The topics_of_interest field must list a broad range of technical topics that are mentioned in any portion of the text. This field is the most important, so add as much information as you can. Do not add non-technical interests. The programming_languages field can contain one or more programming languages out of only the following 4 programming languages - Python, C++, JavaScript, Java. Do not include any other language outside these 4 languages in the output. If the user is not interested in any of these 4 programming languages, output N/A. If the country is not available, use the city field to fill the country. For example, if the city is New York, fill the country as United States. If there are social media links, including personal websites, add them to the social media section. Do NOT add social media links that are not present.

Here is an example (use it only for the output format, not for the content):

    name: Pramod Chundhuri
    country: United States
    city: Atlanta
    email: pramodc@gatech.edu
    occupation: PhD student at Georgia Tech
    programming_languages: Python, C++
    topics_of_interest: PyTorch, Carla, Deep Reinforcement Learning, Query Optimization
    social_media: https://pchunduri6.github.io

---

[1] https://en.wikipedia.org/wiki/Data_wrangling
[2] https://github.com/pchunduri6/stargazers-reloaded
[3] https://medium.com/evadb-blog/stargazers-reloaded-llm-powered-analyses-of-your-github-community-aef9288eb8a5
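Because the prompt pins down a strict "field: value" line format with a fixed set of eight fields, the model's reply can be parsed deterministically afterwards. A minimal sketch of such a parser (my own illustration; the field list mirrors the prompt, but the parsing code is not part of the Stargazers app):

```python
# The eight fields the prompt instructs GPT-3.5 to always emit.
EXPECTED_FIELDS = [
    "name", "country", "city", "email", "occupation",
    "programming_languages", "topics_of_interest", "social_media",
]

def parse_profile(reply: str) -> dict:
    """Parse 'field: value' lines from the model's reply into a dict.

    Any field the model omitted (or left empty) defaults to 'N/A',
    matching the fallback the prompt asks for.
    """
    result = {field: "N/A" for field in EXPECTED_FIELDS}
    for line in reply.splitlines():
        if ":" not in line:
            continue  # skip any stray text the model added despite instructions
        key, _, value = line.partition(":")  # split on the FIRST colon only,
        key = key.strip().lower()            # so URLs in values stay intact
        if key in result:
            result[key] = value.strip() or "N/A"
    return result
```

Defaulting every field up front means a partially malformed reply degrades to `N/A` values instead of raising, which makes it easier to spot and re-run bad extractions in bulk.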

4 comments

nbrad over 1 year ago
In general, just providing a schema and asking for the response in JSON with few-shot examples is extremely (99%+) reliable in my experience.

I've found GPT-3.5 more than adequate at inferring schemas and filling them for conventional use cases like chat-based forms (as an alternative to Google Forms/TypeForm); my code and prompts are available at https://github.com/nsbradford/talkformai. I've also used this to extract structured data from code for LLM coding agents (e.g. "return the names of every function in this file").

In my opinion, more and more APIs are likely to become unstructured and be reduced to LLM agents chatting with each other; I wrote a brief blog post about this here: https://nickbradford.substack.com/p/llm-agents-behind-every-api-call
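The schema-plus-JSON pattern described above pairs naturally with a local validation step that rejects malformed replies before they reach downstream code (and triggers a retry of the API call instead). A hypothetical sketch, assuming a flat schema given as a set of required keys:

```python
import json
from typing import Optional

def validate_reply(reply: str, required_keys: set) -> Optional[dict]:
    """Return the parsed JSON object if it matches the schema, else None.

    None signals the caller to retry the LLM call rather than
    propagate a malformed or incomplete response.
    """
    try:
        data = json.loads(reply)
    except json.JSONDecodeError:
        return None  # reply was not valid JSON at all
    if not isinstance(data, dict) or set(data) != required_keys:
        return None  # missing or unexpected keys: treat as a failed call
    return data
```

The strict key-set equality check is deliberate: extra keys are as suspicious as missing ones, since they usually mean the model drifted from the few-shot format.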
AlwaysNewb23 over 1 year ago
I've tried doing things like this and found that it's often not totally reliable. I've had a hard time getting consistent output and will randomly get variations I did not expect. I've found it's useful if you're cleaning up data as a manual task, but not for automating a process.
PaulHoule over 1 year ago
How successful was this effort? How did you know how successful it was?
tmaly over 1 year ago
I have had some good results processing survey data.

Having the LLM generalize responses, look for patterns, and rank by frequency has worked well.
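One way to read this workflow: have the model map each free-text survey answer to a short canonical label, then do the frequency ranking locally, where no model is needed at all. A sketch of the ranking half (the hard-coded labels below stand in for an LLM normalization pass; they are illustrative, not real survey data):

```python
from collections import Counter

def rank_responses(labels):
    """Rank canonical labels (e.g. LLM-generalized survey answers) by frequency."""
    return Counter(labels).most_common()

# Stand-in for an LLM call that normalized raw free-text answers
# like "too expensive" / "costs too much" down to "pricing".
normalized = ["pricing", "pricing", "support", "pricing", "docs", "support"]
```

Calling `rank_responses(normalized)` yields `(label, count)` pairs sorted most-frequent first, which is the "rank by frequency" step; only the normalization requires a model.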