
Forecasting at Uber with RNNs

179 points by paladin314159 almost 8 years ago

6 comments

eggie5 almost 8 years ago
I wish the diagrams were bigger; they are hard to read and a bit blurry.

One interesting point that is often overlooked in ML is model deployment. They mention TensorFlow, which has a model-export feature you can use as long as your client can run the TensorFlow runtime. But they don't seem to be using that, because they said they just exported the weights and are using them in Go, which would seem to imply some kind of framework-agnostic export of the raw weight values. The nice part of the TF export feature is that it can recreate your architecture on the client. But they did mention Keras too, which lets you export your architecture in a more agnostic way, since it works on many platforms, such as Apple's new CoreML, which can run Keras models.
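A minimal sketch of what such an agnostic "raw weights" export might look like; the layer names and shapes here are hypothetical stand-ins for what `model.get_weights()` would return from a trained Keras model, and the JSON layout is just one illustrative choice, not anything the article specifies:

```python
import json
import numpy as np

# Hypothetical stand-in for weights pulled from a trained Keras model.
rng = np.random.default_rng(0)
layers = {
    "lstm/kernel": rng.normal(size=(8, 16)),
    "lstm/bias": rng.normal(size=(16,)),
    "dense/kernel": rng.normal(size=(16, 1)),
}

def export_weights(layers):
    """Serialize weights as shape + flat value lists, readable from any language (e.g. Go)."""
    return json.dumps({
        name: {"shape": list(w.shape), "values": w.ravel().tolist()}
        for name, w in layers.items()
    })

def import_weights(blob):
    """Rebuild arrays from the framework-agnostic JSON blob."""
    return {
        name: np.array(entry["values"]).reshape(entry["shape"])
        for name, entry in json.loads(blob).items()
    }

restored = import_weights(export_weights(layers))
assert all(np.allclose(layers[k], restored[k]) for k in layers)
```

The trade-off the comment points at: a format like this carries only values, so the consuming side must re-implement the architecture by hand, whereas the TF/Keras export formats carry the architecture too.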
siliconc0w almost 8 years ago
I wonder how much they could enlist others to solve this by creating something like an 'Uber Auction House' to basically buy and sell the right to reap Uber's cut for a ride. They could clean up on exchange fees while everyone solves this problem for them.
ozankabak almost 8 years ago
I don't understand whether they use windowing as a fixed computational step that is active at both training and scoring time, or whether they use sliding windows only to chop up the training data.

Also, I wonder if they checked how a feed-forward NN that operates on the contents of a sliding window (e.g. as in the first approach above) compares with their RNN results. I am curious about this, as it would give us a hint whether the RNN's internal state encodes something that is not a simple transformation of the window contents. If this turns out to be the case, I'd then be interested in figuring out what the internal state "means"; i.e. whether there is anything there that we humans can recognize.

[edited to increase clarity]
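For concreteness, the "chop up the training data" interpretation can be sketched as turning a series into (window, next value) pairs; the window length of 3 is an arbitrary illustrative choice, and both a feed-forward net and an RNN could train on these pairs, with only the RNN additionally carrying state:

```python
import numpy as np

def make_windows(series, window):
    """Slice a 1-D series into overlapping input windows and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X, y

series = np.arange(10.0)          # stand-in for a demand time series
X, y = make_windows(series, window=3)
print(X.shape, y.shape)           # (7, 3) (7,)
print(X[0], y[0])                 # [0. 1. 2.] 3.0
```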
marksomnian almost 8 years ago
Whenever I see a post or announcement by a major company that they're using "machine learning", I'm reminded of what CGP Grey said: it seems like nowadays machine learning is something you add to your product just so you can seem hip by saying it has machine learning, and not for a legitimate technical reason.

There are undoubtedly things that machine learning is right for, but to me it seems like it's become a buzzword more than anything else.
afro88 almost 8 years ago
Interesting stuff, but all they've managed to do so far is find models that fit historical data better. Would be interested to read a follow-up a year later to see how their models actually performed.
sjbp almost 8 years ago
I wonder how they are quantifying uncertainty around their predictions. Having a point estimate without some notion of a confidence interval seems much less useful. Is there a natural way to do this with LSTMs?

Also, some actual benchmarking would be great. Say, against Facebook's Prophet (which also deals with covariates and holiday effects).
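One common answer to the question (not something the article claims to use) is Monte Carlo dropout: keep dropout active at prediction time, run many stochastic forward passes, and read the spread of the outputs as an approximate predictive interval. A toy one-layer network stands in for an LSTM forecaster here; the weights are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(3, 16))   # placeholder "trained" weights
v = rng.normal(size=16)

def stochastic_forecast(x, p_drop=0.5):
    """One forward pass with dropout left on, so repeated calls differ."""
    h = np.tanh(x @ W)
    mask = rng.random(h.shape) > p_drop       # dropout stays active at test time
    return (h * mask / (1 - p_drop)) @ v      # inverted-dropout scaling

x = np.array([0.2, -0.1, 0.4])
samples = np.array([stochastic_forecast(x) for _ in range(1000)])
mean = samples.mean()
lo, hi = np.percentile(samples, [2.5, 97.5])  # rough 95% predictive interval
```

The same recipe applies to an LSTM by leaving its (recurrent) dropout enabled during inference; whether that interval is well calibrated is exactly the kind of thing the requested benchmarking would show.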