To understand Feyerabend, you have to understand the project of 20th-century philosophy of science, which was to figure out the secret sauce, the one true scientific method, so that it could then be more rigorously applied and policed in various sciences and well-maybe-sort-of-sciences, and serve to separate science from pseudoscience such as psychoanalysis. In that context, Feyerabend was perceived as a total loon for proposing that there is no such thing as a universal, one-size-fits-all scientific method:<p>> Our sophistication increases with every choice we make, and so do our standards. Standards compete just as theories compete and we choose the standards most appropriate to the historical situation in which the choice occurs. [...] It forces our mind to make imaginative choices and thus makes it grow.<p>He often gets lumped together with continental thinkers and postmodernists like Foucault, with whom he has little in common.<p>Against Method is a very short and simple book, and I suspect that if you got a physicist, a chemist, a linguist, an engineer, a mathematician, an economist and so on to read it, they'd all be extremely underwhelmed and would just say "yeah, sounds about right, what's all the fuss about and why is this even considered interesting or provocative?"<p>I also don't understand the other comments that say it's full of sophistry. There are a couple of "discussion" chapters at the end that you may or may not like, but the bulk of the book is a thorough analysis of famous theories and experiments in physics, such as Galileo's, which he handles with much more attention to detail than the idealized versions you get from Popper and the like. He has a completely fascinating account of why the church didn't like Galileo, which had as much to do with his orneriness as with his science.
I think I remember Imre Lakatos (another 20th-century philosopher of science) saying that one application of philosophy of science is to determine which 'research programmes' should (continue to) be funded.<p>Later in life, after becoming a software engineer, it occurred to me that this point of view has some resemblance to managers trying to determine whether a software engineer or a team of engineers is doing good work. If you apply a method too rigorously, you'll end up rewarding the wrong people.<p>It's been ages since I read these philosophers, but in my mind Feyerabend's position sort of boiled down to 'at the forefront of any specialization, only the experts are able to judge which investigations are worth pursuing further'. With the corollary that experts sometimes disagree among themselves.<p>In the field of software engineering I've encountered several cases where new engineers are onboarded and promptly decide that the codebase is unmaintainable and should be rewritten from scratch. I usually don't give up on legacy code so easily, but there was one project where I did genuinely hold the opinion that rewriting it would have been more efficient than refactoring. It occurred to me, though, that when a software engineer says a particular codebase is crap, there usually is no good way for outsiders to tell whether that's true.<p>Incidentally, Feyerabend's Against Method originated out of a challenge by Lakatos to co-publish a book in which they would debate various ideas. That's a useful thing to keep in mind when reading Against Method. Later someone did publish a book titled For and Against Method [1], in which writings of both Lakatos and Feyerabend are juxtaposed.<p>[1] <a href="https://press.uchicago.edu/ucp/books/book/chicago/F/bo3629717.html" rel="nofollow">https://press.uchicago.edu/ucp/books/book/chicago/F/bo362971...</a>
I think ultimately the biggest problem science has is the type of thinking that fuels things like Occam's razor. It's the same reason why things like Euclid's 5th postulate (the parallel postulate) puzzled so many mathematicians for so many years. It wasn't till the 19th century that non-Euclidean geometries were finally considered, but it seems pretty obvious in hindsight given how rare flat 2D planes actually are in nature.<p>The common metaphor given for Occam's razor is a field with some random dots plotted. Those points are "evidence" and drawing a shape around them is a "theory" or hypothesis. The simplest shape that encapsulates those dots is said to be the preferred theory when compared to something like a rabbit or some other arbitrary shape.<p>But there is an inherent assumption there about what the plane looks like. It's entirely possible that the geometry on which those points of evidence lie actually lends itself to a situation where drawing a rabbit around all those points actually IS the "simplest" assumption.<p>There are known knowns, known unknowns, and unknown unknowns. But there's a fourth category: ideology. The unknown knowns.<p>In my view, it's this 4th category that ultimately dooms science. Science is ultimately cultural and there's no way around that. Our institutional science is always analyzing outwardly: gathering more and more data. But just as important is analyzing inwardly, being self-critical about our invisible assumptions.<p>We can never fully absolve ourselves of unknown knowns, but I do believe in a "more perfect" mission, one in which we accept we're imperfect but keep working towards a closer vision. To work towards this, we need to analyze not only the dots, but also the geometries on which we place them.
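The "simplicity depends on the assumed geometry" point can be made concrete with a toy sketch (my own illustration, not from the thread or the book): data generated by a sinusoid look hopelessly complicated in a polynomial basis, yet trivially simple in a sinusoidal one. Which model is "simplest" depends entirely on which family of shapes you assumed up front.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(3 * x) + rng.normal(0, 0.05, x.size)  # the "dots of evidence"

def rms(resid):
    return float(np.sqrt(np.mean(resid ** 2)))

# Theory A: polynomials -- the "simple shapes" of the flat-plane geometry.
# Even with many parameters, the fit stays well above the noise level.
poly_err = {}
for degree in (1, 3, 9):
    p = np.polynomial.Polynomial.fit(x, y, degree)
    poly_err[degree] = rms(y - p(x))

# Theory B: a single sinusoid -- the "rabbit". One integer parameter
# (the frequency), chosen by brute-force search.
errs = {f: rms(y - np.sin(f * x)) for f in range(1, 10)}
best_f = min(errs, key=errs.get)
sine_err = errs[best_f]

print(poly_err)          # residuals in the polynomial basis
print(best_f, sine_err)  # best_f recovers 3; residual near the noise floor
```

In the polynomial "geometry" the data demand ever more parameters; in the sinusoidal one a single parameter suffices, so the one-parameter "rabbit" wins the razor.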
This resonates well, and has parallels to business orgs and startup culture: <a href="https://www.slideshare.net/reed2001/culture-1798664" rel="nofollow">https://www.slideshare.net/reed2001/culture-1798664</a> (slide #48)<p>Procedure is created as the percentage of high-functioning workers decreases relative to the amount of work output necessary for the system to survive.<p>These procedures are for sure stifling. I recently read “From Atomos to Atom” and one of the things that stood out to me was the approach most philosophers who made substantial progress on the atom took: assume everything is false. To make progress, they started from a position of assuming everything humans knew about the domain was incorrect; they then systematically proved _to themselves_ why each step in human thinking was correct.<p>I’m starting to wonder if there are two distinct concepts that we’ve conflated in the term “Science.”<p>One science refers to humanity’s collective constructs, the things we catalogue and teach and reconcile the world with en masse. This is a deeply social philosophy that is based on trust, not personal rigor. The scale of our collective constructs is too massive for any one person to tear apart and prove to themselves, so we substitute trust for rigor - we trust that someone else has been rigorous in generating these constructs.<p>The other science refers to the practice of generating those constructs. This is a deeply personal philosophy in which there is no trust. It dictates our personal relationship with and understanding of the world we inhabit. Here, science is something we personally practice in developing that understanding. We trust nothing, validate everything, and build up our own understanding of the domain.<p>With both, the defining trait is that reality is the final arbiter of truth.<p>To tie it back to business org and startup culture, not everyone in science is working towards paradigm shifts.
The processes we have in place for indexing, compiling, and reconciling the constructs of science are likely sufficient. But they’re likely insufficient for generating a paradigm shift.<p>That being said, I find it unlikely that you’d have to tell someone working towards a paradigm shift that they should shun procedure. Many seem to share the trait of insatiable curiosity, where they’re going to build a construct against reality regardless of protocol.
An interesting book for pointing out that actual scientific progress sometimes doesn't follow strict methodology (see e.g. Mendel's approximations). Alas, like much writing from the '60s and '70s, it overstates its case through heavy sophistry that may go over well with polemicists and (pseudo-science) hacks, but not in real life.
Related reading which I found very interesting is this in-progress essay/book:<p><a href="https://metarationality.com/" rel="nofollow">https://metarationality.com/</a><p>It’s mostly a kind of applied epistemology. It also asks the question “how do we move past postmodernism?”. It accepts that tools like the scientific method have obvious limits and are not foolproof recipes for knowledge. This is what Feyerabend argues, and I pretty much agree with him.<p>However, it rejects the postmodern idea (post-Feyerabend) that this makes rationality useless or wrong. The idea that all truth is subjective or that truth is not a useful concept. Instead he argues for embedding the tools of rationality into a larger framework that he calls “meta-rationality”.<p>I think there is not really anything “new” in this analysis — he is in some ways just describing how applied rationality already works in practice. I have nonetheless personally found the ideas very clarifying.
I keep thinking how modern philosophers of science, who talked about how science is done, would have a field day outlining recent social changes in sciences, with various shifting incentives, pressures from various media, strings attached to sources of financing, and competing interests.
Does anyone have a link to the original 1975 version, preferably a digital one? In the foreword to the new edition it says that the publisher now provides it online, but it seems to have vanished since then.
I've recently liked the metaphor of science as a four-stroke engine.<p>Unstructured observations -> hypotheses -> structured observations (experiments) -> confirmed hypotheses.<p>I think that unstructured observation of new phenomena doesn't get enough credit in general, though some fields seem to be all phenomenology and little theorizing. But in most, it's hard to write grants for unstructured observation of a phenomenon, and you have to pretend to be doing some specific experiment to get the experience necessary to put forward real hypotheses.
You can't bring up Feyerabend without mentioning Imre Lakatos. The "methodology of scientific research programmes" is another fun read that takes the matter of demarcation a bit more seriously than AM.<p>For example, Lakatos isn't satisfied with "anything goes" because it fails to consider the political and social consequences of being unable to distinguish science from pseudoscience. For Lakatos, demarcation is necessary to maintain a "standard of objective honesty" and to avoid falling into "intellectual decay".<p>Overall, Lakatos is much less provocative than Feyerabend, but is equally invested in picking apart the historical nuances of scientific progress brought into question by Popper and Kuhn.
> The author argues that science should become an anarchic enterprise, not a nomic (customary) one;[1] in the context of the work, the term "anarchy" refers to epistemological anarchy, which does not remain within one single prescriptive scientific method on the grounds that any such method would restrict scientific progress.<p>So sometimes we can have randomized controlled trials to really understand the effectiveness of drugs, but we can't RCT climate change or the big bang, so we have to use simulations and models. That doesn't seem like "anything goes", more like a response to that one guy in my econ class who always ranted that if it wasn't backed by a RCT, all theories are BS.
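To make the contrast concrete, here's a toy sketch (all numbers entirely made up for illustration) of why randomization does the epistemic work in an RCT, which is exactly the step unavailable when the "experiment" is the climate or the big bang:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical trial: 1000 patients, a true treatment effect of +2.0
# on some outcome scale, plus individual variation.
n = 1000
true_effect = 2.0
baseline = rng.normal(50.0, 2.0, n)  # pre-existing differences between patients

# Randomization is the whole trick: it makes the two groups comparable
# in expectation, so a simple difference in means estimates the effect.
treated = rng.random(n) < 0.5
outcome = baseline + true_effect * treated + rng.normal(0.0, 1.0, n)

estimate = outcome[treated].mean() - outcome[~treated].mean()
print(round(estimate, 2))  # close to the true effect of 2.0
```

With only one Earth and one universe, you can't randomize anything into a control group, so climatology and cosmology have to lean on models and simulations instead; that's method-pluralism, not "all theories are BS."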
Somehow (I still wonder) I got to spend an afternoon as a fly-on-the-wall (what <i>could</i> I say?) at a seminar which included Feyerabend and Feigl. Feyerabend was a very down-to-earth fella, with a unique history, and well-armed to defend his thesis. (I know of no Feynman/Feyerabend discussion, but there'd be much common ground.)<p>Polanyi -was- a scientist, and his recognition of the influence of tacit knowledge ... 'an understanding that defies articulation' is equally essential. We all know things we cannot tell.
This is probably wrong, interesting and useful.<p>Useful to train for critical thinking.<p>Nothing should be considered sacred, it's OK to be wrong when exploring new ideas.
This summary reminds me of Jerome Ravetz’s advocacy of “Post-Normal Science” to fix the current problems in scientific discourse. <a href="https://youtu.be/qVLpbtkqERY" rel="nofollow">https://youtu.be/qVLpbtkqERY</a>
How does his argument work with the current belief that objectivity and the scientific method is white supremacist oppression? Does his argument support the "alternate ways of knowing" that are being pushed as "indigenous science"?
This is just a bad take which was artificially pushed for pseudo-political reasons.<p>That we have no hard and fast rules for what counts as good science does not mean that anything goes. This is like saying that because it's impossible to write perfectly safe C++ programs, we should just use raw pointers. Imprecise methods that work with a certain probability still have value.<p>Leaps of logic like that are popular even more generally, and they leave me speechless.
I struggle with Feyerabend a bit. First, a big part of the book is case studies, and the problem with case studies is that they are not representative of the underlying population. It's very easy to cherry-pick examples that advance your thesis, whereas what we really care about is population averages.<p>Second, let's assume there is no method for arriving at truth. Then how do we verify that some discovery is actually a discovery? Presumably, if there were a way to do this, we could incorporate it into our method, thus undermining Feyerabend's thesis. It turns out his answer is to say that evaluation should consist entirely in the contribution of the discovery to happiness and flourishing. This could be useful at a general level, but seems useless when it comes to judging between theories and research paths within science.