To understand the fundamental problem with this paper and the "CD" measure of "disruptiveness", one need only look at Figure 1a, where three Nobel Prize-winning papers and three patents are ranked on the "CD5" scale. Apparently there are discoveries more disruptive than Watson and Crick's discovery of the double-stranded structure of DNA (CD5 = 0.62 on a scale of -1.0 to 1.0), while Baltimore's discovery of reverse transcription scores -0.55. One might argue the ranks of these two discoveries should be reversed, since DNA had already been shown to be the genetic material, but no one imagined that RNA could be converted to DNA. Likewise, the Wigler patent for transformation into eukaryotic cells (CD5 = 0.70) has certainly had less effect on the world economy than Monsanto's patent on glyphosate-resistant plants (CD5 = -0.85).<p>It's easy to make up a measure and then develop a story that explains why that measure is useful. Was Newton's theory of gravity disruptive, or consolidating? I'm thinking consolidating, since it allowed a lot of other things to make sense. Likewise, perhaps quantum mechanics was disruptive -- it sounds disruptive -- but it also consolidated a lot of confusing observations.<p>(And let's not even mention the problem that when there are 1,000 times as many papers between 1940 and 2010, we might not expect 1,000 times as many disruptions.)<p>This paper has a measure that sounds useful, and is certainly good for headlines, but it is very unclear that it provides any insight into the progress of science.
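For anyone who wants to poke at the measure itself, here is a minimal sketch of the CD index as Funk and Owen-Smith define it, if I'm reading it right: later papers that cite only the focal paper count +1, papers that cite both the focal paper and its references count -1, and papers that cite only the references count 0 but still sit in the denominator. The function and variable names below are mine, not the authors':

```python
def cd_index(focal_refs, citing_papers):
    """CD index for one focal paper (a sketch, not the authors' pipeline).

    focal_refs: set of ids the focal paper cites (its predecessors).
    citing_papers: iterable of (cites_focal, refs) pairs, one per later
        paper in the chosen window (5 years for CD5).
    """
    n = total = 0
    for cites_focal, refs in citing_papers:
        f = 1 if cites_focal else 0
        b = 1 if focal_refs & set(refs) else 0
        if f == 0 and b == 0:
            continue  # outside the focal paper's citation neighborhood
        n += 1
        total += -2 * f * b + f  # +1: focal only, -1: both, 0: refs only
    return total / n if n else 0.0


# Hypothetical example: the focal paper cites {"A", "B"}.
print(cd_index({"A", "B"}, [
    (True, []),      # cites focal only       -> +1
    (True, ["A"]),   # cites focal and a ref  -> -1
    (False, ["B"]),  # cites a ref only       ->  0
]))  # (1 - 1 + 0) / 3 = 0.0
```

Note how sensitive the score is to citation habits: a single reviewer-demanded "cite the classics" reference in a follow-up paper flips a +1 into a -1.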
When I was in grad school I did measurements of the magnetic anisotropy of amorphous compounds. I found an interesting link between the structure and the magnetic properties, which is surprising: since the material is amorphous, its properties should be isotropic, yet these samples had a preferred magnetization axis. During discussions of these results with my advisor I was told in very clear terms that he would not let me publish results that contradicted a paper of his from the 1980s.<p>I didn't push back, because I had no power and I just wanted to get the fuck out of there and start making money after being scammed out of 5 years of my life, into which I poured a lot of effort for shit pay. I lost respect for my advisor and I was stunned by how unscientific his response was when it came to correcting his 1980s results. My measurements were more accurate and were strongly supported by structural measurements he never did and couldn't have done back then. But he was more interested in protecting the "legacy" of his paper with 13 citations.<p>So petty and small. I dropped those efforts and got out, omitting those results completely; I never even mentioned them in my thesis.<p>Anyway, my point is that I'm not surprised that scientific results are lame, vanilla, uninspired. When the committees that decide on what gets funded are all 65+ dinosaurs stuck in their ways, very few disruptive ideas will be funded, because you're disrupting the very research that got the committee members clout and recognition.<p>As they say, change comes one funeral at a time.
That timeline correlates well with the timeline of the corporatization of academic research, which got started in the late 1970s, boomed in the 1980s, and has risen steadily ever since. The emphasis in applied science is now on patents and return on investment, aka safe bets. This is certainly the direction that much of NIH funding, as well as DOE funding, has taken. Part of this is that entrenched industries (natural gas and coal power, say) don't want federal funding going to disruptive competition (monocrystalline silicon PV development, say). This is reflected in how federal funds are distributed as well as in what research programs are supported at the university level. For example, count the number of renewable energy departments in the US academic system: minuscule funding -> no programs.<p>It can't all be blamed on corporatization, however. The control of funding agencies at the federal level by vested interests seems to be a major problem, i.e. innovative lines of research that threaten the status quo could threaten the cabals that control fund disbursement, so such efforts don't get funded. The stagnation of Alzheimer's research and the focus on the amyloid hypothesis seems to fall into this category. This kind of bureaucratic ossification is not new; the classic example is how Trofim Lysenko in the Soviet Union controlled the direction of agricultural research for about three decades, much to the detriment of the understanding of plant genetics and crop development research in the USSR over that period.<p>So it can be attributed to the rise of monopolistic corporatization on university campuses on one hand, and the growth of entrenched bureaucracies at the federal funding level on the other. Neither group wants the rug pulled out from under them by 'disruptive innovation'. There are many historical antecedents for this kind of situation, incidentally.
I wonder if this has to do with major vs. minor breakthroughs.<p>Take physics. As I read it, modern fundamental physics is mostly some combination of general relativity and quantum mechanics, which are both early 20th-century inventions.<p>Their genius wasn't necessarily in some particular in-depth analysis; rather, they were genius new concepts that created a whole new blank canvas to fill in.<p>What did the field look like just before them? I always forget the details, but was it Lord Kelvin who said physics just needs to figure out the ultraviolet catastrophe and it's done? I wonder if it looked similarly uninnovative. The canvas they had was full, and they needed a new canvas for true innovation.<p>Now the thing is, I think true disruption, like GR and quantum physics, is not limited by the number of postdocs, but by truly genius insights. Einstein wasn't even working as an academic when he came up with special relativity. So the number of papers written isn't a proxy for the pace of revolution.<p>Not to mention, since the number of published papers has become a <i>target</i>, it has become useless as a <i>measure</i> of progress.
A big part is that science has become very “sanitized”. Work on the wrong issues and you won’t get funding or will even get ridiculed out of a career.<p>It’s not even just going against headline consensus issues that will get you in trouble. Any findings that overturn what a large group of scientists have spent their careers working on won’t be well received.
I blame the grant funding process. What gets published is what gets funded when written up as a grant proposal. What gets written up as a grant proposal is what is called for by grant funding agencies for their own research initiatives. If you want disruptive science, fund it then. That's all anyone in science wants to do, but the bills need to be paid somehow, so you play the game the funding agencies want you to play.
I think the expectation that "ground-breaking" discoveries in science should follow some linear or predictable timetable is not reasonable. Is the time period between "ground-breaking" discoveries supposed to be every 5 years? Every 10 years?<p>The volume of papers has increased significantly, and publish-or-perish kinda stinks, but it's not just an issue of funding: some (many?) researchers publish and exaggerate the importance and difficulty of their research to receive an "I am smart" badge on social media. Although, despite this increase in the volume of crap-tier papers, the article seems to think it's not correlated:<p>"Declines in disruptiveness are also not attributable to changing publication, citation or authorship practices..."
Alan Kay wrote an essay on how large scientific breakthroughs were made [1], further discussed in [2]. The funders and the systems of funding of science are to blame for the decline; for example, they won't fund problem-finding anymore.<p>[1] <a href="http://www.vpri.org/pdf/Kay_How.pdf" rel="nofollow">http://www.vpri.org/pdf/Kay_How.pdf</a><p>[2] <a href="https://m.youtube.com/watch?v=j9ZGFaIHegE">https://m.youtube.com/watch?v=j9ZGFaIHegE</a>
I get that in this context "patents" should be seen as the announcement of a new innovation - a surrogate for the innovation itself, if you will - but setting that aside for a moment, I'm struggling to imagine how patents could ever do anything but slow down disruption.<p>Take any revolutionary discovery. Compare making it freely available to anyone with limiting its availability to those holding the patents. EDIT: And, perhaps more importantly, compare giving people the freedom to innovate on top versus stopping anyone who does. Shouldn't the former always be more disruptive of the status quo? I can imagine this is an oversimplification though, so I'm open to hearing counterintuitive examples where the opposite is true.<p>And to be clear, either way I'm not saying this is good or bad in and of itself - in some cases it might slow down essential improvements, in others it might be good to have speed limits for change so that things don't break.
Ok, so the average has declined - but has the raw # of disruptive studies increased? The definition of 'disruptive' here is based on the ratio of citations of [study] vs citations of [study's references]. If the number of papers increases, do we want a proportional increase in studies that "send a field in a new direction"? Or do we want incremental studies to be improving rigor, comprehensiveness, and clarity?<p>IMO, it'd be much better to look at the avg. CD of the top N% of CD scores over time - that'd give me a much better idea of whether disruptiveness is changing, since disruptiveness is a "best paper" phenomenon anyway.
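Something like this, say (a hypothetical helper, assuming you had per-year lists of CD scores; `top_tail_avg` and `cd_by_year` are made-up names):

```python
import numpy as np

def top_tail_avg(cd_by_year, pct=1.0):
    """Average CD of the top pct% of scores in each year."""
    out = {}
    for year, scores in cd_by_year.items():
        scores = np.sort(np.asarray(scores))      # ascending
        k = max(1, int(len(scores) * pct / 100))  # size of the top tail
        out[year] = scores[-k:].mean()
    return out

# Toy data: the top tail stays disruptive even as the average falls.
print(top_tail_avg({1950: [0.9, 0.4, 0.1], 2010: [0.9, 0.0, -0.6]}, pct=34))
```

If the top-tail curve were flat while the mean fell, that would support the "it's just dilution by incremental papers" reading rather than an actual decline in disruptive work.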
I'm left a bit conflicted about what conclusions to draw from this effort.<p>How much of this is due to how much easier it is to publish something these days? Every statistics package spits out publication-ready charts and tables, LaTeX/Word has automated much of the typesetter's art, drafts can be communicated instantly via email, there are instantly searchable and linked databases of papers, etc, etc.<p>Think about what it was like in the 1950s, when many scientists were hiring typists to convert their longhand manuscripts into something an editor could read, attaching hand-drawn figures, then heading down to the post office to mail the draft in. That draft then had to be copied and mailed to reviewers, who then mailed their opinions back, every step taking at least a few days of communication overhead. It was a different world, and it gradually changed with the widespread adoption of different technologies.<p>Also, how does this interact with another pair of common suggestions for improving science: publishing negative results and doing more replications? Is this a sign that those two good practices are already occurring? Is it possible that the higher rate of disruption was due to shoddier work in the past?
From my training as a computer scientist, I am used to the idea that some problems are easy and some are plainly intractable. Perhaps that's also true in science and technology, and we are starting to exhaust all the easy directions.<p>Still, many less techy problems remain; climate change and wealth inequality come to mind, at least. Perhaps we'll see the biggest innovations in solving those in the future.
- Grant funding<p>- Tenure requirements<p>- Job duties (both requirements and limitations)<p>- Elimination of corporate skunk works like PARC<p>- Credentialism<p>- A "Pain point" focus would tend to be more incremental than truly disruptive<p>On the good side there may be some decrease of disruptive inventions and discoveries because we are also focusing on eliminating the externalities of current and previous inventions and discoveries. It's a lot easier to make wildly new things if you aren't worried about their side effects.
If you're really smart, enough to make disruptive scientific discoveries, why bother doing that when you could learn programming and work at a FAANG for $400k, or work in the finance sector and make possibly more? If you stay in science, you'll earn peanuts and not even have any career stability since the academic track is so bad these days.
When I worked at IBM, we were given generous cash incentives for filing patents. I am named as an inventor on about 40 patents. To be honest, only one of them was a genuine invention in my opinion.
Doesn't it just stand to reason that, over time, there's less to discover, and that what remains mostly isn't low-hanging fruit (and unlike actual fruit, it's not regrowing)?
I think the same processes that turn nimble and innovative startups into large and heavy enterprises apply to countries and institutions as well. As academia and research institutions develop, they become more complex, valuing process over result and consensus over conflict. Increasingly, disruption of consensus is perceived as disruption of the institution itself and risk-aversion and incrementalism are rewarded.<p>I hate to be a doomsayer, but this seems to be a general trajectory that most of the western world is on.
There’s an intuition that recent products and inventions are more incremental and that historical innovations are more revolutionary - in the sense of lacking precedent.<p>But this is an effect of the simplification of history; when you read the detailed technical histories you can see how remarkably incremental the old innovations were.<p>We can easily see the precedents of recent innovations, but not those of older ones.<p>This is pointed out in the book I am reading on the development of the turbojet, which looks far more unprecedented than it really was.
> The intuition is that if a paper or patent is disruptive, the subsequent work that cites it is less likely to also cite its predecessors<p>This intuition seems to overlook the citation culture in the patent space. There, the incentive is for applicants to be over-inclusive with citations. This is compounded by the increasing ease with which references can be found.
So, as fastaguy88 already described in another comment, this paper is a fairly arbitrary quantification of “disruptive”. <i>(Also, can we just stop using that word? Such a thought-terminating cliché at this point.)</i><p>But let’s accept that we are seeing fewer fundamental breakthroughs in science. Is that a failure of the system…or, depressingly, what if we really are hitting the limits of what we can model and describe?<p>In a roughly 200-year timespan, we figured out at least the rough outlines of…wow, where even to start…electromagnetism, general and special relativity, quantum mechanics as it relates to field theories and chemistry, genetics, and computability. We really picked a lot of the low-hanging fruit, and what if there just isn’t any fruit left to easily get at? There are certainly things like dark energy that we don’t have particularly satisfying theories for at all, but the basics? We seem to have good models.
It's just the usual evolution vs. revolution periods...
In any field, 99% of the time advancements are slow and incremental; then a new idea sparks a revolution and you get a lot of advancement in a short period of time. This is just the nature of scientific progress.<p>In the early 20th century, new ideas in physics and mathematics sparked a revolution, and we have been slowly unpacking the benefits of it ever since.
For 40 years we had evolution in the AI research field; now we have a revolution. I don't know when it will die down, but after a while we will just start building on the ideas we discovered, and a long period will pass before we make another jump.
Same with psychology, economics, biology...
And yet I've been trying to find $$ for one of the most disruptive agriculture technologies, one that could propel sustainable ag to unheard-of acreage. Nobody wants to take a chance.<p><i>shrug</i>
The science knowledge base grows monotonically, but not necessarily at a steady rate. Actually, far from it.<p>There was a time in the 19th century when people thought physics discoveries were complete. A few decades later, all hell broke loose.<p>The problem we have right now is that we are stymied on the complexity front (materials, chemistry, biology, medicine, etc.).<p>Our mental / mathematical models that we use to understand the world, and that even enable us to ask original scientific questions, are losing their edge when things get complex. These tools are 19th-century stuff (including all the AI junk) and we have milked them for all they are worth.<p>I would hypothesise that a new period of discovery will correlate with some breakthrough in the mathematics of complexity. We have had a few false dawns, but not the real thing.
‘Disruptive’ science has declined — and no one knows why<p><a href="https://www.nature.com/articles/d41586-022-04577-5" rel="nofollow">https://www.nature.com/articles/d41586-022-04577-5</a>
May be relevant: <a href="https://www.city-journal.org/nih-blocks-access-to-genetics-database" rel="nofollow">https://www.city-journal.org/nih-blocks-access-to-genetics-d...</a>
The first half of the 20th century had a lot of "low-hanging fruit" in physics and chemistry that basically came out of the Enlightenment - things like electromagnetism and quantum mechanics. I wouldn't guess that's always the steady state for research.<p>The war may have had something to do with it too; I know a lot of advances (like radio / microwave engineering) have roots in the war efforts. But I suspect it's more of just a dry spell, because we pushed forward in the 18th-20th centuries and ran out of ideas.
Nice to see numbers put to this. I remember thinking through a bunch of tech/science that I was familiar with, and realizing that most (not all) of the fundamentals in every case were known by the early '60s, and we've been working out the (very important) details ever since. Nuclear, electronics, chemistry, biology, etc.<p>Plenty of counter-examples, I'm sure, but once you know the structure of DNA and its fundamental operations, the remaining discoveries will have a smaller impact.
Maybe it's wrong. In the chart, at least, they are missing the invention of Bitcoin in 2009 (technology). That was the last "disruptive" breakthrough in that area.
While there is no doubt that the 'CD index' has been declining over time, it is debatable whether it truly indicates a drop in 'disruption'.<p>Take AI, for example. Research papers that were published in 2019-20 became obsolete within a year or less, and consequently get cited less and less as more recent, derivative papers introduce better models. Does that make the older papers any less 'disruptive'? No!
On the paper front, I have a number of published scientific papers with citations. My 3 most important papers—that will go down as turning points in the history of computer science—have a combined 0 citations, and 2 of them not only were not in a journal, but they were even flagged and removed from arxiv.<p>On the patent front, well patents are for parasites.
Patents went from real innovation to legalese BS and market protection over the past decades.<p>Also, I suspect most innovation is being done outside of papers, especially when you have "reviewer 2" ready to nitpick every minor detail, so you make the paper more bland and maybe skip one juicy part or another.
I've heard another interesting argument, at least for disruptive science/research from industry.<p>Lower corporate taxation incentivizes the hoarding of cash. Previously, corporations were incentivized to allocate more funding towards their industrial research (IBM, Bell Labs, etc.).<p>Would love some counterarguments here!
Related ongoing thread:<p><i>How to slow down scientific progress, according to Leo Szilard</i> - <a href="https://news.ycombinator.com/item?id=34264436" rel="nofollow">https://news.ycombinator.com/item?id=34264436</a> - Jan 2023 (20 comments)
"People talk glibly about science. What is science? People coming out of a university with a master's degree or a PhD, you take them into the field and they literally don't believe anything unless it's a peer-reviewed paper. It's the only thing they accept and you say to them 'But let's observe. Let's think. Let's discuss.' They don't do it. It's just, 'Is it in a peer-reviewed paper or not?' That's their view of science. I think it's pathetic.<p>Gone into universities as bright young people. They come out of them brain dead, not even knowing what science means. They think it means peer-reviewed papers etc. No, that's academia. And if a paper is peer-reviewed it means everybody thought the same therefore they approved it.<p>An unintended consequence is that when new knowledge emerges, new scientific insights, they can never ever be peer-reviewed. So we're blocking all new advances in science that are big advances. If you look at the breakthroughs in science almost always they don't come from the center of that profession, they come from the fringe. The finest candle makers in the world couldn't even think of electric lights. They don't come from within they often come from outside the bricks. We're going to kill ourselves because of stupidity."<p>— Allan Savory, Zimbabwean ecologist, research biologist, livestock farmer, and president and co-founder of the Savory Institute. He originated the Holistic management of agriculture to fight desertification.<p><a href="https://www.youtube.com/watch?v=qkXZ3_ZmKzw">https://www.youtube.com/watch?v=qkXZ3_ZmKzw</a>
Every time I read one of these editorials on the supposed lack of research left to do, I just look at the huge list of significant problems that still exist for life on Earth and wonder: “what are they thinking?”
Another pertinent paper from 2021:
<a href="https://arxiv.org/abs/2106.11184" rel="nofollow">https://arxiv.org/abs/2106.11184</a>
The Structure of Scientific Revolutions, by Thomas Kuhn.<p>A major breakthrough disrupts the paradigm, followed by normal science with diminishing returns until the next paradigm.
Hardly surprising. Fraud aside, in science you're not overturning past experimental data; you are adding to it, forming an ever more complete and accurate view of reality. Disruption really just means stumbling upon something unexpected. Once you know there is something there, it opens up a whole world of new avenues for study in that space, but it doesn't really make it any easier to find something unexpected again.<p>Consider, for example, deep-sea archaeology. When you find a shipwreck, there's going to be a flurry of activity as people study the shipwreck, and the more they answer the questions they started with, the more new questions they are going to want to ask, so even long after the initial discovery there's going to be a continuous string of studies refining knowledge about that wreck. And this is true of every wreck. But finding one shipwreck doesn't generally make it easier to find the next; assuming the same resources, new wrecks are going to be discovered at a pretty steady rate. So the number of discovered wrecks grows linearly, but with each being revisited regularly, the cumulative amount of research done on all wrecks grows quadratically.<p>Similarly, discovering a new chemical opens up many possibilities for studying that chemical; likewise for a new phylum of bacteria or a new type of star. You can only see something for the first time once, but you can take closer looks forever. We should expect the ratio of new things to look like a graph of 1/x.<p>Perhaps it's weird that the absolute number of disruptive papers is holding steady. While there are competing forces at work - on the one hand we have more scientists with better instruments, but on the other hand the low-hanging fruit is gone and they must go ever further to find something new - there's no fundamental reason these two forces should perfectly counterbalance. I might posit that our society has a certain appetite for discovery and we adjust our allocation of resources to science and innovation to maintain this continuous stream of disruptions, but I at least feel that scientific breakthroughs tend to excite people about science and make them want to double down on it, not pull back. Perhaps more realistically, we just move the goalposts as to what we consider disruptive, such that the top X discoveries of the year are always "groundbreaking." Regardless of the why, this is definitely the more interesting part of the story.
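The 1/x shape falls out of even the crudest toy model (entirely made-up assumptions: one new wreck found per year, one follow-up study per known wreck per year):

```python
# Toy model: discoveries arrive at a constant rate, but every past
# discovery keeps generating follow-up studies, so the share of papers
# that are discoveries decays like 1/x.
for year in (1, 10, 50):
    known_wrecks = year - 1      # wrecks found in earlier years
    papers = 1 + known_wrecks    # this year's discovery + one follow-up each
    print(f"year {year}: {1 / papers:.1%} of this year's papers are discoveries")
# year 1: 100.0%, year 10: 10.0%, year 50: 2.0%
```

Which is, roughly, the shape of the decline the paper reports, without any change in how "disruptive" the scientists themselves are.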
There are just too many gatekeepers and too many people who are used to doing things a certain way. The world is just more populous now. We also need to make sure that people can speak their minds freely and write freely without worrying about getting cancelled.
>and no one knows why<p>Really?<p>Too much talent going into trivial tech ventures.
The shift towards rentier capitalism in the US, especially in terms of cheap money/QE being used for share buybacks instead of R&D.
The colossal cost of higher education in the US, with grad stipends barely budging until literally this month.
Fewer people willing to tolerate the (mostly) toxic culture of academic advisors overseeing PhDs and post-grads. Google Scholar skewing papers toward popularity/citation prevalence. The Internet as the biggest distraction in the history of humanity.
etc. etc.