I've only glanced at the paper, but from that glance it seems that it does not "solve and explain" these problems in anything like the sense that this would mean for a human student doing the problems.

Take the first example in Figure 4: "Find the derivative of the function using the definition of a derivative. f(x) = (x**2-1) / (2*x-3)". The "solution" produced just uses a symbolic math package's 'diff' function to find the derivative. I assume the actual intent of the question is for the student to use the definition of a derivative, f'(x) = limit of (f(x+e)-f(x))/e as e goes to zero, and find the derivative of this function by directly evaluating this limit (see the sketch below).

The "answers" to other questions similarly miss the point. For example, convergence of a series is determined by simply asking a symbolic math package whether it converges, not by any actual reasoning, as would be expected of a student. And the question asking for the Type I error probability of a statistical test is "solved" with a simulation program, whereas I expect a human student is expected to get the exact answer by analytical calculation.
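To make the distinction concrete, here is a minimal sketch (my own illustration, not code from the paper) contrasting the shortcut of calling a symbolic package's `diff` with actually applying the definition of the derivative. It assumes SymPy as the symbolic math package; the names `shortcut` and `from_definition` are just illustrative.

    from sympy import symbols, limit, diff, simplify

    x, e = symbols('x e')
    f = (x**2 - 1) / (2*x - 3)

    # What the generated "solution" effectively does: one library call.
    shortcut = diff(f, x)

    # What the question presumably asks the student to do: form the
    # difference quotient and evaluate its limit as e goes to zero.
    difference_quotient = (f.subs(x, x + e) - f) / e
    from_definition = limit(difference_quotient, e, 0)

    # Both routes agree on the answer; the difference is in the reasoning shown.
    print(simplify(shortcut - from_definition))  # 0

Both calls return the same expression, but only the second reflects the work the problem is actually asking for.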
New method exploits few-shot learning and program synthesis to automatically solve university math problems and produce explanations with 10x the accuracy of previous methods.
If a program can now solve these problems better than most people can, even after studying the relevant subjects, does that mean people should spend less time learning how to solve these problems?