
Show HN: Basilica – word2vec for anything

153 points, by hiphipjorge, over 6 years ago

15 comments

mlucy, over 6 years ago

Hey all,

I did a lot of the ML work for this. Let me know if you have any questions.

The title might be a little ambitious since we only have two embeddings right now, but it really is our goal to have embeddings for *everything*. You can see some of our upcoming embeddings at https://www.basilica.ai/available-embeddings/.

We basically want to do for these other datatypes what word2vec did for NLP. We want to turn getting good results with images, audio, etc. from a hard research problem into something you can do on your laptop with scikit.
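The workflow mlucy describes (fetch a vector from a hosted embedding service, then do the learning locally with simple tools) can be sketched in a few lines. The `embed` function below is a toy bag-of-words stand-in, since the real Basilica client API is not shown in this thread, and the nearest-label classifier stands in for a scikit-learn model:

```python
# Sketch of the "embedding + simple local classifier" workflow.
# `embed` is a hypothetical stand-in for a hosted embedding call.
import math
from collections import Counter

VOCAB = ["refund", "charge", "billing", "crash", "error", "bug"]

def embed(text):
    """Toy embedding: counts of a few vocabulary words."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in VOCAB]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def nearest_label(vec, labeled):
    """Classify by cosine similarity to labeled example vectors."""
    return max(labeled, key=lambda pair: cosine(vec, pair[0]))[1]

train = [
    (embed("please refund the billing charge"), "billing"),
    (embed("the app crash shows an error bug"), "technical"),
]
print(nearest_label(embed("wrong charge need a refund"), train))   # billing
print(nearest_label(embed("error after the latest crash"), train)) # technical
```

With a real embedding in place of the toy one, the local part stays exactly this small, which is the "on your laptop with scikit" pitch.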
e_ameisen, over 6 years ago

Interesting idea, but it seems to fall squarely into the category of something you would often want to build in-house. I always imagined the right level of abstraction was closer to spaCy's: a framework that lets you easily embed all the things.

If you are interested in how to build and use embeddings for search and classification yourself, I wrote a completely open-source tutorial here: https://blog.insightdatascience.com/the-unreasonable-effectiveness-of-deep-learning-representations-4ce83fc663cf
projectramo, over 6 years ago

What is the use case for this? (And this is a general point for AI cloud APIs.)

Specifically, I am trying to think of an example where the user cares about a vector representation of something but doesn't care about how that vector representation was obtained.

I can think of why it would be useful: the ML examples given, or perhaps a compression application.

However, in each of these cases, it would seem that the user has the skill to spin up their own, and a lot of motivation to do so and to understand it.
ASpring, over 6 years ago

How do you plan to counter the harmful societal biases that embeddings embody?

See Bolukbasi (https://arxiv.org/pdf/1607.06520.pdf) and Caliskan (http://science.sciencemag.org/content/356/6334/183).

While these examples are solely language-based, it is easy to imagine the transfer to other domains.
gugagore, over 6 years ago

Aren't these embeddings task-specific? For example, a word2vec embedding is found by letting the embedder participate in a task of predicting a word given the words around it, on a particular corpus of text.

Embeddings of sentences are trained on translation tasks. An embedding that works both for images and sentences is found by training on a picture-captioning task.

The point I'm asking about is that there may be many ways to embed a "data type", depending on what you might want to use the embedding for. Someone brought up board game states. You could imagine embedding images of board games directly. That embedding would only contain information about the game state if it was trained for the appropriate task.
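The context-prediction idea gugagore refers to can be illustrated without any training at all: represent each word by counts of the words that appear around it, so words used in similar contexts end up with similar vectors. This is a distributional toy, not word2vec itself, and the corpus is made up:

```python
# Distributional sketch: words represented by their context-word counts.
import math
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "stocks fell on the news .").split()

def context_vectors(tokens, window=2):
    """Count, for each word, the words within `window` positions of it."""
    vocab = sorted(set(tokens))
    ctx = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                ctx[w][tokens[j]] += 1
    return {w: [float(ctx[w][v]) for v in vocab] for w in vocab}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / (norm or 1.0)

vecs = context_vectors(corpus)
# "cat" and "dog" share contexts ("the _ sat on"), so they are closer
# to each other than either is to "stocks".
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["stocks"]))  # True
```

The point of the comment survives the toy: change the corpus or the surrounding task and you get different vectors, which is why a single "generic" embedding per data type is a strong claim.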
piccolbo, over 6 years ago

You quote a target of 200 ms per embedding; I'm not sure if that's for one type of embedding in particular. I am using InferSent (a sentence embedding from FAIR: https://github.com/facebookresearch/InferSent) for filtering, and they quote a figure of 1000 sentences per second on a generic GPU. That's 200 times faster than your number, but it is a local API, so I am comparing apples to oranges. Yet it's hard to imagine you are spending 1 ms embedding and 199 ms on API overhead. I am sure I have missed a 0 here or there, but I don't see where, other than that theirs is a batch number (batch size 128) and maybe yours is a single-embedding number. Can you please clarify? Thanks.
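The arithmetic behind piccolbo's comparison, spelled out with the numbers quoted in the comment (these are quoted figures, not measurements):

```python
# Throughput comparison from the comment's quoted numbers.
basilica_ms_per_embedding = 200.0   # quoted target: 200 ms per embedding
infersent_sentences_per_s = 1000.0  # quoted: 1000 sentences/s on a generic GPU

infersent_ms_per_sentence = 1000.0 / infersent_sentences_per_s  # 1 ms each
speedup = basilica_ms_per_embedding / infersent_ms_per_sentence
print(speedup)  # 200.0 -- the "200 times faster" in the comment

# The commenter's own candidate reconciliation: if the 1000/s figure is
# amortized over batches of 128, one batch call takes ~128 ms, so a single
# un-batched embedding call would be the same order as the 200 ms target.
batch = 128
per_call_ms = infersent_ms_per_sentence * batch
print(per_call_ms)  # 128.0
```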
jdoliner, over 6 years ago

How much does this depend on the data type? I.e., do you need people to specify: this is an image, this is a resume, this is an English resume, etc.? Could you ever get to a point where you can just feed it general data, not knowing more than that it's 1s and 0s?
pkaye, over 6 years ago

Slightly different topic, but what are some approaches to categorizing webpages? I have thousands of web links I want to organize with tags. Is there a software technique to group them by related topics?
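One stdlib-only sketch of an answer to pkaye's question: represent each link by the words of its title and greedily group links whose word sets overlap. A real pipeline would use TF-IDF or embedding vectors plus a proper clustering algorithm; this just shows the shape of the approach, and the data is illustrative:

```python
# Greedy topic grouping by word overlap (Jaccard similarity) on titles.
def tokens(title):
    return set(title.lower().split())

def jaccard(a, b):
    return len(a & b) / len(a | b)

def group_links(titles, threshold=0.3):
    """Assign each title to the first group it resembles, else start a new one."""
    groups = []
    for title in titles:
        for group in groups:
            if jaccard(tokens(title), tokens(group[0])) >= threshold:
                group.append(title)
                break
        else:
            groups.append([title])
    return groups

links = [
    "intro to rust programming",
    "rust programming for beginners",
    "sourdough bread baking guide",
    "baking sourdough at home",
]
for g in group_links(links):
    print(g)  # two groups: the rust links, then the sourdough links
```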
Lerc, over 6 years ago

Is this actually 'for anything'? I see references to sentences and images. If I, for example, wanted to compare audio samples, how would it work?
kolleykibber, over 6 years ago

Hi Lucy. Looks great. Do you have any production use cases you can tell us about? Are you a YC company?
msla, over 6 years ago

So the actual code is closed-source?
captn3m0, over 6 years ago

Do you think board game states might be a good target later?
asdfghjl, over 6 years ago

How are you embedding images?
aaaaaaaaaab, over 6 years ago

> Job Candidate Clustering

> Basilica lets you easily cluster job candidates by the text of their resumes. A number of additional features for this category are on our roadmap, including a source code embedding that will let you cluster candidates by what kind of code they write.

Wonderful! We were in dire need of yet another black-box criterion on which employers can reject candidates.

"We're sorry to inform you that we have chosen not to go on with your application. You see, for this position we're looking for someone with a different *embedding*."
mathena, over 6 years ago

Am I really missing something here, or is this thing complete nonsense with no actual use cases whatsoever in practice?

There are a number of off-the-shelf models that will give you image/sentence embeddings easily. Anyone with a sufficient understanding of embeddings/word2vec would have no trouble training an embedding catered to the specific application, with much better quality.

For NLP applications, the corpus quality dictates the quality of the embedding if you use simple W2V. Word2Vec trained on the Google News corpus isn't going to be useful for a chatbot, for instance. Different models also give different qualities of embedding. As an example, if you use Google BERT (bi-directional LSTM), you get world-class performance in many NLP applications.

Embeddings are so model/application-specific that I don't see how a generic embedding could be useful in serious applications. Training a model these days is so easy to do; calling the TensorFlow API is probably easier than calling the Basilica API 99% of the time.

I'd be curious whether the embeddings are "aligned", in the sense that the embedding of the word "cat" is close to the embedding of a picture of a cat. I think that would be interesting and useful. I don't see how Basilica solves that problem by taking the top layers off ResNet, though.

I appreciate the developer API etc., but as an ML practitioner this feels like a troll.