One-Shot Training of Neural Networks Using Hypercube-Based Topological Coverings

148 points, by ghosthamlet, over 6 years ago

6 comments

rococode, over 6 years ago
This seems like a really interesting approach and I think the numbers are promising. Given that it's an entirely different type of model-building than traditional methods, I think it's fine for it to just be up to par with a basic shallow model. If the constructive approach turns out to be comparable to current state-of-the-art models with sufficient refinement, it could be really valuable for low-compute applications like IoT devices, etc.

To be honest, I can't say I know enough about the math here to do anything more than vaguely follow their explanations, despite my ML/NLP background. I'm curious: other ML researchers out there, how much of this are you able to understand? My impression is that this math is pretty far beyond what ML folks typically know, although I'm on the lower end of the spectrum as far as math knowledge goes, so I may be totally wrong (and need to spend more time reading textbooks, haha). I wonder if the complexity may slow down progress if it does turn out that this kind of geometric construction can compete with iterative training. It sounds like this approach could potentially support more complex networks by working more on the geometric representation, so I certainly hope this paper serves its purpose of motivating people with the right skillsets to do further exploration.
princeofwands, over 6 years ago
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True)

    # 200 training samples (10 classes x 20), test on everything else.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=1729,
        test_size=X.shape[0] - (10 * 20))
    model = MLPClassifier(random_state=1729)
    model.fit(X_train, y_train)
    p = model.predict(X_test)
    print(accuracy_score(y_test, p))

    # 2000 training samples (10 classes x 200).
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=1729,
        test_size=X.shape[0] - (10 * 200))
    model = MLPClassifier(random_state=1729)
    model.fit(X_train, y_train)
    p = model.predict(X_test)
    print(accuracy_score(y_test, p))

This gets you accuracy scores of 0.645 and 0.838 respectively (versus 62% and 76% in the paper). Sure, the validation differs: I validate on all the remaining data, while they do 20 repeated 70%/30% splits on the 200 and 2000 samples, which needlessly lowers the number of training samples (a fairer comparison is 0.819 with 1400 samples). But the scores seem at least comparable. Cool method though, I can dig this and look beyond benchmarks (though Iris and Wine are really toy datasets by now).
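For reference, a minimal sketch of the evaluation protocol described above: 20 repeated 70/30 splits inside a small stratified subset, with the same sklearn baseline. The subset sizes and split count come from the comment, not from the paper's code, so treat this as an approximation of their setup.

    import numpy as np
    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import accuracy_score

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True)

    for subset_size in (200, 2000):
        # Stratified subset standing in for the paper's small sample.
        X_sub, _, y_sub, _ = train_test_split(
            X, y, stratify=y, train_size=subset_size, random_state=1729)
        scores = []
        for seed in range(20):
            # 70/30 split *within* the subset, so only 70% of it
            # is ever used for training.
            X_tr, X_te, y_tr, y_te = train_test_split(
                X_sub, y_sub, stratify=y_sub, test_size=0.3,
                random_state=seed)
            model = MLPClassifier(random_state=1729).fit(X_tr, y_tr)
            scores.append(accuracy_score(y_te, model.predict(X_te)))
        print(subset_size, np.mean(scores))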
tdj, over 6 years ago
Their hypercube-covering formalism can be seen as decision tree induction with a specific partitioning rule, terminating branching only at uniformly labeled leaves. But they are using the tree nodes as a kind of embedding to apply a softmax on. I like the connection between ReLUs and the geometric representation; it makes the method easier to think about in spatial terms.

Reading this I got several déjà vus to my grad school classes on classical ML. I like the direction, but it feels like it could be better if it admitted that it's a variant of decision tree embedding and built on the massive amount of research in that area, at least in terms of understanding.

I suspect doing a random forest version of this would actually help. Perhaps we will see this as a legit pre-training step.
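A rough sklearn sketch of that reading (not the paper's algorithm; the dataset and estimator choices here are placeholders): grow a tree until leaves are pure, treat leaf membership as a one-hot embedding, and fit a softmax on top.

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Default DecisionTreeClassifier grows until leaves are pure,
    # mirroring the terminate-at-uniform-leaves rule described above.
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

    # apply() maps each sample to its leaf index; one-hot encode that
    # to get the leaf-membership embedding.
    enc = OneHotEncoder(handle_unknown='ignore')
    Z_tr = enc.fit_transform(tree.apply(X_tr).reshape(-1, 1))
    Z_te = enc.transform(tree.apply(X_te).reshape(-1, 1))

    # Softmax over the leaf embedding (multinomial logistic regression).
    softmax = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
    print(accuracy_score(y_te, softmax.predict(Z_te)))

Swapping in a RandomForestClassifier and one-hot encoding its per-tree apply() output would give the random forest variant suggested above.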
tbenst, over 6 years ago
I wonder how this compares to one-shot training with an SVM or nearest neighbor. 76% on MNIST is frankly embarrassing.
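One way to check, reusing the 200-sample setup from the sklearn snippet above with default hyperparameters (the resulting accuracies are not claimed here):

    from sklearn.datasets import fetch_openml
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
    # 200 training samples, as in the MLP baseline above.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=1729,
        test_size=X.shape[0] - (10 * 20))

    for name, model in [('SVM', SVC()),
                        ('1-NN', KNeighborsClassifier(n_neighbors=1))]:
        model.fit(X_train, y_train)
        print(name, accuracy_score(y_test, model.predict(X_test)))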
calvinmorrison, over 6 years ago
Interesting. We spent the last few days digging into the old Time Warp OS:

https://lasr.cs.ucla.edu/reiher/Time_Warp.html
stealthcat, over 6 years ago
MNIST is a joke. I can train linear regression and achieve 90% accuracy.

CIFAR-10 is the new MNIST today.
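For what it's worth, the usual linear baseline for classification is multinomial logistic regression rather than linear regression. A minimal sketch (the split and pixel scaling are arbitrary choices, and the 90% figure is the comment's claim, not a verified result):

    from sklearn.datasets import fetch_openml
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X, y = fetch_openml('mnist_784', version=1, return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, test_size=10000, random_state=0)

    # Scale pixels to [0, 1]; a plain linear decision boundary per class.
    model = LogisticRegression(max_iter=1000).fit(X_train / 255.0, y_train)
    print(accuracy_score(y_test, model.predict(X_test / 255.0)))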