
Ask HN: Given two repos of code, what criteria would you use to grade quality?

13 points by devrob, over 2 years ago

I'm curious: given two repos containing code that accomplishes the same goal and produces the same correct results per the requirements, what quantitative and qualitative criteria would you use to grade each repo and the skills of the engineer who produced it?

For example, some folks say "readability" and "understandability", but how do you define those, and how do you think they would translate into a quantitative metric (if possible)?

18 comments

eternityforest, over 2 years ago
Time from zero to being able to make a change and compile it would probably be the most objective metric I'd be looking at.

I'd also be looking at the density of interesting ideas (as per the "an engineer's job is to solve a problem with a minimum of new ideas" metric). If I see a lot of novelty, I would start questioning whether the codebase was being treated as a sandbox to try out random stuff.

The number of obsolete technologies used would be a big one. If there are 50 things about to lose support, there's a maintenance nightmare waiting to happen.

I'm assuming the black-box behavior of both is truly identical in all cases, but in the real world it probably wouldn't be, and edge-case handling, unnecessary disk writes, and the like would be a high priority for me.
ZugZug2, over 2 years ago
Shorter code, but there's a lower bound at which things get cryptic and ridiculous. You'd want to distinguish between code golf and regular code.

A common metric is cyclomatic complexity, for which lower is usually better (but you'll find exceptions as soon as you enforce upper bounds in a large project). There are also metrics for modularity and cohesion that might be useful, though I think it depends on the size of the codebase you're comparing.

I would say a smaller executable would tend to be better too, but for short programs you might not see a ton of difference. Variable names and comments that match the intent and semantics... good luck automating that without replacing humans as programmers. Though for well-defined requirements, maybe a language model could correlate them somewhat.
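An illustrative sketch (not part of the comment) of what a cyclomatic-complexity count can look like in Python, using only the standard-library ast module; the list of counted constructs is a simplification of what real tools measure:

    import ast

    # Constructs counted as decision points; a rough McCabe-style list,
    # real tools count a few more cases (e.g. each boolean operand).
    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.IfExp, ast.BoolOp, ast.comprehension)

    def complexity_per_function(source: str) -> dict:
        """Approximate cyclomatic complexity for each function in `source`."""
        scores = {}
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                # Start at 1, add one per branching construct in the body.
                scores[node.name] = 1 + sum(
                    isinstance(child, BRANCH_NODES) for child in ast.walk(node))
        return scores

    sample = '''
    def grade(x):
        if x > 90:
            return "A"
        elif x > 75:
            return "B"
        return "C"
    '''
    print(complexity_per_function(sample))  # {'grade': 3}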
solomatov, over 2 years ago
Also, if you have time, implement the same small feature in both codebases. You will quickly understand which code is easier to work with.
silisili, over 2 years ago
Not in any particular order:

1 - Safety. Does it handle errors, etc.?

2 - Readability. Did the author make it easy for me to read? Includes comments.

3 - LOC. The shorter the better. Too many people overengineer things. YAGNI.

4 - Dependencies. Are they reasonable? Too many is a code smell, to me.

5 - Performance, if applicable.
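As one way point 4 could be made mechanical, a hedged Python sketch that counts direct dependencies from two common manifest files; the file names and the budget of 20 in the usage note are assumptions for illustration, not something from the comment:

    import json
    from pathlib import Path

    def direct_dependency_count(repo: Path) -> int:
        """Count direct dependencies declared in a couple of common manifests."""
        total = 0
        pkg = repo / "package.json"
        if pkg.exists():
            data = json.loads(pkg.read_text())
            total += len(data.get("dependencies", {}))
            total += len(data.get("devDependencies", {}))
        reqs = repo / "requirements.txt"
        if reqs.exists():
            total += sum(1 for line in reqs.read_text().splitlines()
                         if line.strip() and not line.lstrip().startswith("#"))
        return total

    # Usage: compare the two repos against an arbitrary budget.
    # for name in ("repo_a", "repo_b"):
    #     n = direct_dependency_count(Path(name))
    #     print(name, n, "dependencies", "(smell)" if n > 20 else "")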
tacostakohashi, over 2 years ago
There are lots of metrics and static analysis tools for this, like cyclomatic complexity, Coverity, SonarQube, FindBugs, etc.

They all make sense to a point... but they can also be gamed and misused once they start being tracked.
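A small hypothetical Python illustration of the gaming problem (not from the comment): both functions behave identically, but the second spreads the same branching across helpers so that each function reports a lower per-function complexity, even though nothing about the logic improved:

    def shipping_cost(weight, express):
        # One function, all the branching reported here.
        if weight > 20:
            base = 15
        else:
            base = 5
        if express:
            base *= 2
        return base

    def _base_rate(weight):
        return 15 if weight > 20 else 5

    def _apply_express(base, express):
        return base * 2 if express else base

    def shipping_cost_gamed(weight, express):
        # Same behaviour, but each helper now scores trivially low.
        return _apply_express(_base_rate(weight), express)

    assert all(
        shipping_cost(w, e) == shipping_cost_gamed(w, e)
        for w in (1, 25) for e in (True, False)
    )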
kypro, over 2 years ago
Given the complexity involved in grading code quality, I'm not convinced you should be using quantitative measures.

Writing good code is often about making good trade-offs depending on the requirements. For example, using a third-party library might be quicker and make your code more concise, but there are security, extensibility, and maintenance concerns that typically need to be balanced when doing so. How you balance those will largely depend on your requirements; for security-critical code you might not want to use third-party libraries at all.

There are also times when I've written awful code on purpose. If I give a developer a task in which the code they're writing will be thrown away, then I'd probably want them to prioritise writing that code quickly rather than worrying too much about how clean and maintainable it is.

I think what you're asking is kind of like asking, "how do I grade writing quality?" Well, what kind of text are you grading? Is it a kids' book? A shopping list? A textbook?

Just a thought anyway; I think I might be overcomplicating things, to be honest.
simne, over 2 years ago
Business value right now, and how much it could be improved in two working days if the first is zero.

Imagine you are a (co-)owner of a business planning sales on Black Friday. You are not interested in coverage, traceability, etc.; you need to process client requests, and the larger the share of successful ones the better. Either way, 10% is better than 0%, even if the 0% code has 50% test coverage and the 10% code has none.
dakiol, over 2 years ago
1. Directory structure. A thousand files within one folder? Bad (no matter how good the code is). Folder X (very low-level stuff like operating on bits and bytes) at the same tree level as folder Y (critical business logic in terms of high-level domain objects)? Bad.

2. Readme file. Buildme file. Can I understand what the repo is about just by reading the readme? Good. Can I run the repo locally just by following the buildme? Good.

3. Data examples alongside the code.
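A rough sketch of how points 1 and 2 could be checked automatically; the file names (README, BUILDME, Makefile) and the per-directory limit are illustrative assumptions:

    from pathlib import Path

    def layout_report(repo: Path, max_files_per_dir: int = 100) -> dict:
        """Flag a missing README/build file and overly crowded directories."""
        has_readme = any((repo / n).exists()
                         for n in ("README.md", "README.rst", "README"))
        has_buildme = any((repo / n).exists()
                          for n in ("BUILDME", "BUILD.md", "Makefile"))
        crowded = []
        for d in [repo, *(p for p in repo.rglob("*") if p.is_dir())]:
            files = sum(1 for p in d.iterdir() if p.is_file())
            if files > max_files_per_dir:
                crowded.append((str(d), files))
        return {"readme": has_readme, "buildme": has_buildme,
                "crowded_dirs": crowded}

    # print(layout_report(Path("repo_a")))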
throwaway0asd, over 2 years ago
I am a big fan of forced simplicity. In practice that means (dramatically) less code, less abstraction, a single paradigm, fewer choices. People tend to find the code extremely readable and easy to follow, yet somehow exotically foreign, and thus grade it poorly. To me that sounds like selection bias. So I have stopped worrying about how people might grade my code and instead attempt to prove everything with numbers, such as execution speed or time to add new features.
bjourne, over 2 years ago
I would download both repos and get them running. If one repo takes me longer than the other to build and run due to a non-existent or poor README, out-of-date or esoteric dependencies, badly written build scripts, etc., then I would prefer the other one. Bonus points to the repo that is truly cross-platform, because getting the same codebase to build on Windows, OSX, and Linux can be a significant challenge.
hayst4ck, over 2 years ago
Here is a good start: http://misko.hevery.com/code-reviewers-guide/

The first thing I would do is look at usage of global state. Are there data objects used from the global scope? Are dependencies imported and then used directly from the global scope rather than being passed to a constructor? Are there any mutable singletons?

The second thing I would do is look at constructors. Are constructors calling a lot of functions (an implication of global state usage)? Are they doing much besides taking arguments and storing them as object state? Do objects require an initialization method to be called?

The third thing I would do is look for "Law of Demeter" violations (https://en.wikipedia.org/wiki/Law_of_Demeter):

    Explicit violation:
    doesSomething.callSomethingElse().thatCallsSomethingElse().thatCallsEvenAnotherThing()

    Not any less of a violation, but looks better:
    a = doesSomething.callSomethingElse()
    b = a.thatCallsSomethingElse()
    c = b.thatCallsEvenAnotherThing()

Next I would make percentiles of lines of code per scope. How many lines of code does the average function have? How many lines of code does the average class have? I would also look at unscoped functions (which imply global state usage) and at the outliers for lines per scope (what does the class with the most lines of code look like? what does the longest function look like?).

I would probably keep raw counts of the number of functions, classes, imports, ifs, and loops.

Obviously, anywhere you see lines of code it might make more sense to look at the number of ifs and loops, since that is probably a more accurate measure of complexity.

I would definitely (and actually probably first) look at the database tables and how they are represented in the code.

More qualitatively, I would look at what kind of logging and time-series data are exported.

I would look at how exceptions are handled in the main loop.

I would look at the separation of business logic from server logic.

I would look for strong layers (business logic probably shouldn't be intermingled with presentation logic).

I would probably grep a sample of TODOs and a sample of comments.

I would look at test coverage and test implementation (unit and integration).

I would look at test run time.

I would look at build time.

I would try to look at a dependency graph.

I might look at git blame to see how many people edit how many different files.

I would look for bazel/build logic.

I would look at the build script itself to see how assets are generated and stored.
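As a compact, hypothetical Python illustration of the first two checks (global state and constructors that do real work), not taken from the linked guide:

    import sqlite3

    # Harder to review: the service reaches into mutable module-level state
    # and does real work inside its constructor.
    _shared_conn = None

    class ReportService:
        def __init__(self):
            global _shared_conn
            if _shared_conn is None:
                _shared_conn = sqlite3.connect(":memory:")  # hidden side effect
            self.conn = _shared_conn

    # Easier to review and test: the dependency is passed to the constructor,
    # which only stores it; a test can hand in a fake connection instead.
    class ReportServiceInjected:
        def __init__(self, conn):
            self.conn = conn

        def table_count(self):
            row = self.conn.execute(
                "SELECT count(*) FROM sqlite_master").fetchone()
            return row[0]

    if __name__ == "__main__":
        service = ReportServiceInjected(sqlite3.connect(":memory:"))
        print(service.table_count())  # 0 for a fresh in-memory database

The second version is the kind of code that grades well under these checks: the constructor only stores what it is given, and nothing depends on module-level state.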
karmakaze, over 2 years ago
Code size per function point. And defect rate, or time to add new functionality.
fawazali, over 2 years ago
> clean code (human readable)
> time and space efficiency
> faster execution of the objective
> fewer pings, memory space, or callbacks
solomatov, over 2 years ago
Total size, plus how many units are affected by fixes for found issues. The smaller the size, the better. The less coupled the code, the better.
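One way to approximate "how many units are affected by fixes" mechanically is to walk the history for commits whose message mentions a fix and count the files they touch. The sketch below assumes such a commit-message convention exists, which may not hold for a given repo:

    import subprocess
    from collections import Counter

    def files_touched_by_fixes(repo_path: str) -> Counter:
        """Count how often each file appears in commits mentioning 'fix'."""
        out = subprocess.run(
            ["git", "-C", repo_path, "log", "-i", "--grep=fix",
             "--name-only", "--pretty=format:"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(line for line in out.splitlines() if line.strip())

    # counts = files_touched_by_fixes("repo_a")
    # print(sum(counts.values()) / max(len(counts), 1))  # avg touches per file
    # Fixes that repeatedly fan out across many files hint at coupling.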
jstx1, over 2 years ago
Qualitative - how easy it is to make changes.
dossy, over 2 years ago
Most algorithms are judged on time (runtime performance) and space (memory requirements). That's a good place to start.
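A minimal Python sketch for putting numbers on both axes with only the standard library; `candidate` is a placeholder for whatever the two repos implement:

    import timeit
    import tracemalloc

    def candidate(n=10_000):
        # Placeholder workload standing in for the code under comparison.
        return sorted(range(n, 0, -1))

    # Time: the median of repeated runs is less noisy than a single measurement.
    runs = timeit.repeat(candidate, number=10, repeat=5)
    print("time per 10 calls:", sorted(runs)[len(runs) // 2], "s")

    # Space: peak allocation while the function runs.
    tracemalloc.start()
    candidate()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print("peak memory:", peak, "bytes")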
giantg2, over 2 years ago
1. Does it work as intended?
2. Is it secure?
acranox, over 2 years ago
Comments and commit messages.