
Show HN: Telescope – Find the right people in seconds

8 points by Layvier 10 months ago
Hi HN! I'm Olivier, the CTO of TryTelescope.ai. After more than 2 years and with more than 1,000 companies already using it, we're very happy to finally launch Telescope officially!

Telescope is an app for easily finding the right companies and the right people within those companies. Here's how it works:

- First, describe the type of companies and people you're looking for through a chat interface. This natural language interface captures advanced search criteria, which we use to show you some initial results.
- Fine-tune the model by accepting or rejecting lead recommendations in a "Tinder-like" swiping interface. Based on this feedback, we refine the recommendations to precisely narrow down your ideal customer profile.
- After this fine-tuning phase, you can generate lists of leads in one click, then enrich them with contact data and download them as CSV or export them to a CRM.

This has been a complex problem to solve, mainly because of the large number of data points and possible filtering criteria. We're not totally there yet, but we're supporting more and more criteria every week. Data preprocessing is particularly important, and for that we've built a batch inference engine running on GCP and fully integrated with Databricks DLTs. I'm particularly proud of it: we essentially run LLM inferences as part of our streamed pipelines, scaling to millions of records easily and extremely cheaply, since we mainly use preemptible GCP L4 instances. Happy to discuss it; I was actually thinking of writing an article about it soon.

For the application, we use LangChain and LangGraph for the chat and online LLM inferences. The chat is still not 100% reliable, as we're packing a bit too much logic into it, but we're working on that.

For the lead recommendations, we use hybrid and semantic search inside a kind of genetic algorithm that does part of the "fine-tuning" logic. We then use an LLM to evaluate potential recommendations with an adapted RAG technique that can fetch relevant data points from external APIs dynamically.

Let me know if you want to hear more about the tech; any feedback or suggestions are greatly appreciated :)
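A minimal sketch of what "LLM inference inside a streaming DLT pipeline" can look like, in the spirit of the preprocessing engine described above. This is illustrative only: the table names, the prompt, and the inference endpoint are hypothetical assumptions, and the actual batch engine on preemptible GCP L4 instances is not shown.

    # Hypothetical sketch: LLM enrichment as one step of a Databricks DLT streaming pipeline.
    import dlt
    import pandas as pd
    import requests
    from pyspark.sql.functions import col, pandas_udf

    # Placeholder for a self-hosted inference service (e.g. models running on preemptible L4s).
    INFERENCE_URL = "http://llm-batch-service:8000/generate"

    @pandas_udf("string")
    def profile_company(descriptions: pd.Series) -> pd.Series:
        """Send a batch of company descriptions to the LLM service and return its summaries."""
        prompts = [f"Extract the ICP-relevant attributes of this company:\n{d}" for d in descriptions]
        resp = requests.post(INFERENCE_URL, json={"prompts": prompts}, timeout=300)
        resp.raise_for_status()
        return pd.Series(resp.json()["completions"])

    @dlt.table(name="companies_enriched", comment="Companies annotated by an LLM during ingestion")
    def companies_enriched():
        # Stream from an upstream table so the enrichment scales with ingestion volume.
        return dlt.read_stream("companies_raw").withColumn("llm_profile", profile_company(col("description")))

The streaming source plus a batched pandas UDF is what lets the LLM step ride along with the rest of the pipeline rather than running as a separate offline job.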

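And a toy sketch (not Telescope's actual code) of the swipe-feedback loop: candidate leads are scored by a weighted blend of semantic and keyword similarity, and a small genetic algorithm evolves the blend weights so that accepted leads rank above rejected ones. All names, the two-weight search space, and the fitness function are illustrative assumptions.

    # Illustrative sketch: evolve hybrid-search scoring weights from accept/reject swipe feedback.
    import numpy as np

    rng = np.random.default_rng(0)

    def hybrid_score(weights, semantic_sim, keyword_sim):
        """Blend semantic and keyword similarity with per-signal weights."""
        return weights[0] * semantic_sim + weights[1] * keyword_sim

    def fitness(weights, semantic_sim, keyword_sim, accepted):
        """How well a weighting ranks accepted leads (1) above rejected ones (0)."""
        scores = hybrid_score(weights, semantic_sim, keyword_sim)
        return np.corrcoef(scores, accepted)[0, 1]

    def evolve(semantic_sim, keyword_sim, accepted, pop_size=32, generations=20):
        population = rng.random((pop_size, 2))
        for _ in range(generations):
            scores = np.array([fitness(w, semantic_sim, keyword_sim, accepted) for w in population])
            # Selection: keep the better half of the weightings.
            parents = population[np.argsort(scores)[-pop_size // 2:]]
            # Mutation: jitter copies of the parents to form the next generation.
            children = parents + rng.normal(0, 0.05, parents.shape)
            population = np.clip(np.vstack([parents, children]), 0, None)
        best = population[np.argmax([fitness(w, semantic_sim, keyword_sim, accepted) for w in population])]
        return best / best.sum()  # normalized semantic-vs-keyword blend

    # Toy usage: 100 candidate leads with precomputed similarities and swipe labels.
    semantic_sim = rng.random(100)
    keyword_sim = rng.random(100)
    accepted = (semantic_sim > 0.6).astype(float)       # pretend users accept semantically close leads
    print(evolve(semantic_sim, keyword_sim, accepted))  # weights should favor the semantic signal

In practice the search space would be far larger than two weights (many filter criteria and data points per lead), which is where the problem gets hard.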
no comments