AutoML-Zero: Evolving machine learning algorithms from scratch

260 points by lainon · about 5 years ago

10 comments

manually · about 5 years ago
Next:

- Autosuggest database tables to use
- Automatically reserve parallel computing resources
- Autodetect data health issues and auto fix them
- Autodetect concept drift and auto fix it
- Auto engineer features and interactions
- Autodetect leakage and fix it
- Autodetect unfairness and auto fix it
- Autocreate more weakly-labelled training data
- Autocreate descriptive statistics and model eval stats
- Autocreate monitoring
- Autocreate regulations reports
- Autocreate a data infra pipeline
- Autocreate a prediction serving endpoint
- Auto setup a meeting with relevant stakeholders on Google Calendar
- Auto deploy on Google Cloud
- Automatically buy carbon offset
- Auto fire your in-house data scientists
TaylorAlexander · about 5 years ago
Shouldn't this link directly to the Readme?

https://github.com/google-research/google-research/blob/master/automl_zero/README.md
lokimedes · about 5 years ago
Reminds me of https://www.nutonian.com/products/eureqa/ which I used quite productively to model multivariate distributions from data back in the 2000s. Funny how everything stays the same, but with a new set of players on the bandwagon.
joe_the_user · about 5 years ago
"AutoML-Zero aims to automatically discover computer programs that can solve machine learning tasks, starting from empty or random programs and using only basic math operations."

If this system is not using human bias, how is it choosing what a good program is? Surely human-labeled data involves humans adding their bias to the data?

It seems like AlphaGo Zero was able to do end-to-end ML only because it could use a very clear and "objective" standard: whether a program wins or loses at the game of Go.

Would this approach only handle similarly unambiguous problems?

Edit: also, AlphaGo Zero was one of the most compute-intensive ML systems ever created (at least at the time of its creation). How much computing would this require for more fully general learning? Will there be a limit to such an approach?
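A minimal sketch may make the fitness question concrete: in the setup the quoted sentence describes, a program is just a sequence of basic math instructions over registers, and "good" is nothing more than measured accuracy on the task's labeled examples. Everything below (the instruction encoding, the mutation scheme, the toy task) is an invented illustration, not the paper's actual code:

```python
import random

# Register machine with only basic math ops, in the spirit of the quoted
# description. All names here are invented for illustration.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def random_instruction(n_regs):
    """One instruction: apply an op to two registers, store in a third."""
    return (random.choice(list(OPS)),
            random.randrange(n_regs),   # destination register
            random.randrange(n_regs),   # first operand
            random.randrange(n_regs))   # second operand

def run(program, x, n_regs):
    regs = [0.0] * n_regs
    regs[0] = x                         # input goes in register 0
    for op, dst, a, b in program:
        regs[dst] = OPS[op](regs[a], regs[b])
    return regs[-1]                     # prediction read from last register

def fitness(program, examples, n_regs):
    # "Good" is nothing but accuracy on the task's labeled examples;
    # no human ever labels a *program* as good or bad.
    hits = sum((run(program, x, n_regs) > 0) == (y > 0) for x, y in examples)
    return hits / len(examples)

def evolve(examples, n_regs=4, length=8, pop=50, gens=100):
    population = [[random_instruction(n_regs) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: fitness(p, examples, n_regs),
                        reverse=True)
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:        # one mutated instruction per child
            child = list(parent)
            child[random.randrange(length)] = random_instruction(n_regs)
            children.append(child)
        population = survivors + children
    return max(population, key=lambda p: fitness(p, examples, n_regs))

# Toy task: predict the sign of x.
examples = [(x, x) for x in (random.uniform(-1, 1) for _ in range(100))]
best = evolve(examples)
print(fitness(best, examples, n_regs=4))
```

In this reading, the "objective standard" is simply the task's own loss; human bias enters through the choice of tasks and of the instruction set, not through anyone labeling programs as good.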
mark_l_watson · about 5 years ago
This reminds me of John Koza’s Genetic Programming, a technique for evolving small programs. There is an old Common Lisp library to play with it.
tmpmov · about 5 years ago
For those interested: AutoML-Zero cites "Evolving neural networks through augmenting topologies" (2002), among other "learning to learn" papers, and it is worth a read if you have the time and inclination.

For those with more background and time: would anyone mind bridging the 18-year gap succinctly? A quick look at the paper reveals solution-space constraints (presumably for speed), discovering better optimizers, and, specific to the AutoML-Zero paper, symbolic discovery.
ypcx · about 5 years ago
Now, can we evolve an ML algorithm that would in turn produce a better AutoML? Ladies and gentlemen, the Singularity Toolkit v[quickly changing digits here].
jxcole · about 5 years ago
Interesting, but how does it perform on standard benchmarks like ImageNet and MNIST?
manthideaal · about 5 years ago
If AutoML-Zero is going to be more than a grid-search-like method, then I think it should try to learn a probabilistic distribution over (method, problem, efficiency) and use it to discover features for problems, using an autoencoder in which the loss function is a metric over the (method, efficiency) space. That means using transfer learning from related problems, where the similarity of problems is based on the (method, efficiency) difference.

Problem P1 is locally similar to P2 if (method, efficiency, P1), measured in computation time, is similar to (method, efficiency, P2) for methods in a local space of methods. The system should learn to classify both problems and methods; that's similar to learning words and context words in NLP, or to matrix factorization in recommendation systems. To sample the (problem, method, efficiency) space one needs huge resources.

Added: To compare a pair of (method, problem), some standardization should be used. For problems related to solving linear systems, the condition number of the coefficient matrix should be used as a feature for standardization; in SAT, for example, a heuristic based on the number of clauses and variables should be used for estimating the complexity and normalizing problems. So the preprocessing step should use the best known heuristic for solving the problem and estimating its complexity, as both a feature and a means of normalization. Heuristics plus deep learning for TSP are approaching SOTA (but Concorde is still better).

Perhaps some encoding of how the heuristic was obtained could also be used as a feature of the problem (heuristic from minimum spanning tree, branch and bound, dynamic programming, recurrence, memoization, hill climbing, ...) as an enumerative type.

So some problems for preprocessing are: 1) What is a good heuristic for solving this problem? 2) What is a good heuristic for bounding or estimating its complexity? 3) How can you use those heuristics to standardize or normalize its complexity? 4) How big should the problem be so that the asymptotic complexity dominates the noise of small problems? 5) How do you encode the different types of heuristics? 6) How do you weigh sequential versus parallel methods for solving the problem?

Finally, I wonder whether, once a problem is autoencoded, some kind of curvature could be defined; that curvature should be related to the average complexity of a local space of problems, and transitions, as in graph problems, should also be featured. The idea is using gems of features to let the system combine them or discover new, better features. Curvature could be used for clustering problems, that is, for classifying types of problems. For example, all preprocessed problems for solving a linear system should be normalized to have similar efficiency when using a family F of learning methods; otherwise a feature is introduced for further normalization. Some problems could also require estimating the number of local extrema and the extent of the flat (zero-curvature) zones.
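One concrete reading of the "locally similar" criterion above, as a toy sketch: represent each problem by a vector of measured (method, efficiency) values, standardized as described, and call two problems similar when those vectors are close. All numbers and problem names below are invented for illustration; a real system would measure computation time per (method, problem) pair at scale:

```python
import math

# Hypothetical measured efficiencies (e.g. standardized computation time)
# of three methods on three problems; the numbers are invented.
profiles = {
    "P1": [1.2, 0.4, 3.0],
    "P2": [1.1, 0.5, 2.8],   # nearly the same profile as P1
    "P3": [0.2, 2.5, 0.3],   # a different method wins here
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# P1 and P2 are "locally similar": transfer between them looks plausible.
print(cosine(profiles["P1"], profiles["P2"]))  # ~0.999
print(cosine(profiles["P1"], profiles["P3"]))  # ~0.26
```

An autoencoder over such profiles, with a metric on the (method, efficiency) space as its loss, would then be the learned version of this hand-rolled similarity.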
nobodywillobsrv · about 5 years ago
Basic Tech Bros still don't get it. This is cool, but the real problem is finding/defining the problem. And you don't get a million guesses.

Here is a simple test: get me data to predict the future. Can an algo like this learn to read APIs, build scripts, sign up and pay fees, collect data (laying down a lineage for causal prediction), set up accounts, figure out how account actions work, and then take actions profitably without going bust?

If it can even do the first part of this, I'm in. But I doubt it. This is still just at the level of "Cool! Your dog can play mini golf."