科技回声 (Tech Echo) — a tech news platform built with Next.js, providing global tech news and discussion.

© 2025 科技回声 (Tech Echo). All rights reserved.

Modeling libraries don’t matter (2020)

31 points · by razcle · almost 4 years ago

3 comments

MontyCarloHall · almost 4 years ago
I generally agree with the point made in this article, although I'll point out that it's only been true for the last couple of years. Until TensorFlow completely revamped its syntax in v2.0, scrapping the previous graph-based syntax for PyTorch-like eager execution, writing code in TF was much more time-consuming than in PyTorch, since you had to define the entire computational graph before you could execute it as a single unit. This made iterative debugging extremely painful, since you couldn't interactively execute individual steps within the graph.

These days, thankfully, the choice of framework comes down mostly to (a) minor syntactic preferences and (b) specific functionality available in one framework but not another. For example, although I generally prefer PyTorch's syntax since it's closer to numpy's, TF supports far more probability distributions (and operations on those distributions) than PyTorch. When working on a model in PyTorch, if I discover that I need that additional functionality, it's easy enough to convert all my code to TF.
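The graph-vs-eager distinction the comment describes can be sketched in plain Python. This is a toy illustration of the two execution styles, not TensorFlow's actual API; the `Node`/`run` names are invented for the sketch:

```python
# Toy contrast between graph-style (deferred) and eager execution.
# Illustrative only -- these names are not TensorFlow's or PyTorch's API.

class Node:
    """A deferred computation: constructing it performs no arithmetic."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self, feed):
        # TF1-style: the whole graph executes as one unit via run(),
        # so intermediate values can't be inspected beforehand.
        vals = [n.run(feed) if isinstance(n, Node) else feed.get(n, n)
                for n in self.inputs]
        return self.op(*vals)

# Graph mode: define the full computation first, execute it later.
graph = Node(lambda a, b: a * b, Node(lambda a, b: a + b, "x", "y"), 10)
print(graph.run({"x": 2, "y": 3}))  # 50

# Eager mode (TF2 / PyTorch style): each step computes immediately,
# so you can stop and inspect any intermediate value while debugging.
x, y = 2, 3
s = x + y        # inspectable right now: 5
print(s * 10)    # 50
```

The second style is what makes iterative debugging painless: every intermediate result exists as an ordinary value the moment the line runs.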
citilife · almost 4 years ago
My team and I wrote an NLP application to detect sensitive data and detect / validate schemas, etc., as well as the other items provided by pandas-profiling.

https://github.com/capitalone/DataProfiler

That being said, we noted the same thing. It shouldn't matter what modeling library you use. It's the data pipelining where 99% of the work typically is. Modeling itself always needs the same basic input, a matrix of data, and outputs a matrix of data.

Some libraries are good at specific components. Others have improved speedups, etc. But it's all so new that it's effectively going to change month-to-month. So I always tell the team to build what you can as fast as you can, with the tools you have. We can always update it later, once the pipeline is in place.
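The "matrix in, matrix out" point above is what makes models swappable once the pipeline exists. A minimal sketch, with hypothetical names (`pipeline`, `MeanModel`, `MaxModel` are invented here, not from DataProfiler or any library):

```python
import numpy as np

# Hypothetical sketch: the pipeline produces a plain numeric matrix,
# so any model exposing a predict(X) interface plugs in unchanged.

def pipeline(raw_rows):
    """The 99%: clean raw records into a feature matrix."""
    # e.g. drop rows with missing values, then scale each column to [0, 1]
    X = np.array([r for r in raw_rows if None not in r], dtype=float)
    return (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-9)

class MeanModel:
    """Stand-in model: matrix in, vector out."""
    def predict(self, X):
        return X.mean(axis=1)

class MaxModel:
    """A different stand-in model with the same interface."""
    def predict(self, X):
        return X.max(axis=1)

raw = [[1.0, 10.0], [2.0, None], [3.0, 30.0]]
X = pipeline(raw)                          # shape (2, 2); the None row is dropped
for model in (MeanModel(), MaxModel()):    # models are interchangeable
    print(model.predict(X))
```

Swapping `MeanModel` for `MaxModel` (or a real PyTorch/TF model wrapped in the same interface) touches no pipeline code, which is the comment's argument for investing effort in the pipeline first.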
bjourne · almost 4 years ago
When I last investigated it a few months ago, TensorFlow's TPU support was much more mature than PyTorch's. To get PyTorch to work on TPUs I had to download some nightly builds and ask for help on GitHub, since the tutorials weren't up to date. It also ran much slower than TensorFlow on Google Colab's TPUs. There are also some features in TensorFlow 2.0 I could not find any counterparts for in PyTorch, such as recurrent dropout for LSTM layers.
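For readers unfamiliar with the feature mentioned: recurrent dropout samples one dropout mask per sequence and reuses it on the hidden state at every timestep, rather than resampling per step. A minimal numpy sketch of the idea on a plain RNN cell (an illustration only, not TensorFlow's implementation; all names here are invented):

```python
import numpy as np

def rnn_forward(x, W, U, b, recurrent_dropout=0.0, training=True, seed=0):
    """Run a simple tanh RNN over x: (timesteps, input_dim) -> (hidden_dim,)."""
    rng = np.random.default_rng(seed)
    hidden_dim = U.shape[0]
    h = np.zeros(hidden_dim)
    if training and recurrent_dropout > 0.0:
        keep = 1.0 - recurrent_dropout
        # One mask per sequence, scaled so expectations match (inverted dropout)
        mask = rng.binomial(1, keep, size=hidden_dim) / keep
    else:
        mask = np.ones(hidden_dim)
    for t in range(x.shape[0]):
        # The same mask drops the same recurrent units at every timestep
        h = np.tanh(x[t] @ W + (h * mask) @ U + b)
    return h

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))                # 5 timesteps, 3 features
W = rng.normal(size=(3, 4))
U = rng.normal(size=(4, 4))
b = np.zeros(4)
h = rnn_forward(x, W, U, b, recurrent_dropout=0.25)
print(h.shape)  # (4,)
```

In Keras this corresponds to the `recurrent_dropout` argument of `tf.keras.layers.LSTM`; as the comment notes, PyTorch's built-in LSTM has no direct equivalent, so you would implement something like the mask reuse above by hand.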