What makes software engineering for climate models different? (2010)

12 points by dodders over 10 years ago

3 comments

greenyoda over 10 years ago
Make sure to read the thoughtful criticism in the comments below the article, which adds a lot of perspective. The commenters seem to know more about software engineering than the author, and they argue convincingly that climate model code is no different from other code: it needs to be modular and well written so that it can be understood and maintained, it needs to be tested to ensure that it's working correctly, and so on.

The author seems to have a rather cavalier attitude toward the correctness of code, which one of the commenters (George Crews) picked up on:

"Then there is the statement: 'The software has huge societal importance, but the impact of software errors is very limited.' I don't see how it can be both ways. How can something be of great importance whether or not it is correct? IMHO, the most serious consequence of a climate software being defective would be to then use it to make a defective political decision costing trillions of dollars to society."
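To make the testing point above concrete, here is a minimal sketch in Python. It assumes a toy zero-dimensional energy-balance function rather than any real climate model code; the function, constant, and expected values are invented for illustration only.

```python
# Minimal sketch, assuming a toy energy-balance routine; nothing here comes
# from an actual climate model. It only illustrates the kind of ordinary unit
# testing the comment above argues such code needs.
import unittest

STEFAN_BOLTZMANN = 5.670374419e-8  # W m^-2 K^-4


def equilibrium_temperature(solar_flux, albedo):
    """Blackbody temperature at which absorbed solar flux balances emitted radiation."""
    if not 0.0 <= albedo <= 1.0:
        raise ValueError("albedo must lie in [0, 1]")
    absorbed = solar_flux * (1.0 - albedo) / 4.0  # average over the sphere
    return (absorbed / STEFAN_BOLTZMANN) ** 0.25


class TestEnergyBalance(unittest.TestCase):
    def test_earthlike_inputs_give_classic_result(self):
        # S of about 1361 W/m^2 and albedo of about 0.3 should give roughly 255 K.
        self.assertAlmostEqual(equilibrium_temperature(1361.0, 0.3), 255.0, delta=1.0)

    def test_unphysical_albedo_is_rejected(self):
        with self.assertRaises(ValueError):
            equilibrium_temperature(1361.0, 1.5)


if __name__ == "__main__":
    unittest.main()
```

Nothing about this is specific to climate science; that is the commenters' point, since checks against known analytic results and rejection of unphysical inputs are standard practice for any numerical code.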
sfrechtling over 10 years ago
The context is also highly political. The results can be misconstrued and used to fit a storyline. The author states that the software has a huge societal impact, which is true, but I feel that climate change models matter more to politicians than to the majority of citizens.
EpicEng over 10 years ago
"developers are domain experts – they do not delegate programming tasks to programmers, which means they avoid the misunderstandings of the requirements common in many software projects"

This also likely means that these systems are poorly designed, poorly documented, poorly implemented, and poorly understood.

This is only my assumption based upon my reading of this article, but it does come with some experience. I have worked in biotech my entire career, often alongside domain experts who write code. This code is typically, to be kind, not very good. It is rife with bugs, makes too many assumptions, is difficult to understand (and, as a consequence, its limitations and assumptions are not well understood by its users), full of copied code, etc. I have rewritten a few such systems.

Also, comments like the following are scary, and I can only hope that they are not indicative of the general attitude in the field (though, based upon my own experience, they may very well be):

"The software has huge societal importance, but the impact of software errors is very limited."

Yikes. The entire statement seems like a non sequitur to me, but that attitude leads down a dangerous road. So we have potentially buggy systems which output data used in studies that have "huge societal impact"? How can the author claim that software errors don't have an appreciable impact on the result if the system was not developed using standard, accepted engineering practices? How do the users know how to correctly interpret the data?

As an analogy, I recently rewrote an imaging and image processing system used by my company. This system was designed and implemented by academics, and it exhibits all of the problems we in the software industry typically associate with such code.

While rewriting it from scratch, I had no documentation to rely upon. I found many implicit and explicit assumptions that the users were not aware of. Most importantly, the system was originally designed for enumeration of certain types of cells, not for any sort of quantitative interpretation.

However, down the road, the users in the lab realized that the raw output of the image analysis process could contain useful information. So they started mining it. They began comparing samples using various measurements taken during analysis. They began making even more assumptions about what that data meant, and they were often wrong.

On the surface, it seemed as though their work made sense, but only if one did not understand how those numbers were gathered and under what circumstances their interpretation was valid. Some of the statements made by the author bear a striking resemblance to the opinions of the original authors of the system I had to rewrite.

These people were smart, very smart, but not engineers. They didn't have the discipline, training, or experience required to write a system that would stand up to scrutiny. It was a research vehicle, and it did what it was originally intended to do, but as time passed, warts appeared.

I find it very hard to believe that climate model programming has even one characteristic that would cause an engineer to think a different engineering model was required or even warranted. To me this sounds like people in the research/academic camp making statements about an aspect of engineering that they do not understand.
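As one hedged illustration of the point about implicit assumptions: the sketch below uses entirely invented names (AnalysisResult, require_use) and is not the commenter's actual system. It shows one way a result object could declare the purposes it has been validated for, so that later quantitative reuse fails loudly instead of silently.

```python
# Hypothetical sketch of making implicit assumptions explicit in code.
# The class and field names are invented for illustration; they are not from
# the image analysis system described in the comment above.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AnalysisResult:
    cell_count: int
    raw_measurements: dict  # e.g. per-cell intensity statistics
    validated_uses: frozenset = field(
        default_factory=lambda: frozenset({"enumeration"})
    )

    def require_use(self, purpose: str) -> None:
        """Fail loudly when a result is reused outside its validated purposes."""
        if purpose not in self.validated_uses:
            raise ValueError(
                f"result validated only for {sorted(self.validated_uses)}; "
                f"'{purpose}' needs separate validation"
            )


result = AnalysisResult(cell_count=142, raw_measurements={"mean_intensity": 0.37})
result.require_use("enumeration")  # the purpose the system was designed for

try:
    result.require_use("quantitative_comparison")  # the later, unvalidated reuse
except ValueError as err:
    print(err)
```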