
Functional languages should be so much better at mutation than they are

140 points | by injuly | 10 months ago | 21 comments

Leftium 10 months ago

> ...functional programming is mostly about avoiding mutation at all costs

Slightly different perspective from Grokking Simplicity [1]: functional programming is not about avoiding mutation because "mutation is bad." In fact, mutation is usually the desired result, but care must be taken because mutation depends on when and how many times it's called.

So good FP isn't about avoiding impure functions; instead it's about giving *extra* care to them. After all, the purpose of all software is to cause some type of mutation/effect (flip pixels on a screen, save bits to storage, send email, etc). Impure functions like these depend on the time they are called, so they are the most difficult to get right.

So Grokking Simplicity would probably say this:

1. Avoid premature optimization. The overhead from FP is usually not significant, given the speed of today's computers. Also, performance gains unlocked by FP may counter any performance losses.

2. If optimization via mutation is required, push it as far outside and as late as possible, keeping the "core" functionally pure and immutable.

This is similar to Functional Core, Imperative Shell [2], and perhaps similar to options 1 or 2 from the article.

[1]: https://www.manning.com/books/grokking-simplicity

[2]: https://hw.leftium.com/#/item/18043058
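A minimal OCaml sketch of the Functional Core, Imperative Shell shape described above (the toy domain and all names are invented for illustration): the core is a pure function from old state to new state, and the single mutation sits at the program's edge.

```ocaml
(* Pure core: no mutation, no IO. Given the old state and an input,
   compute the new state and what should be printed. *)
type state = { total : int; count : int }

let step (s : state) (x : int) : state * string =
  let s' = { total = s.total + x; count = s.count + 1 } in
  (s', Printf.sprintf "running mean: %.2f"
         (float_of_int s'.total /. float_of_int s'.count))

(* Imperative shell: the only place where effects happen. *)
let () =
  let state = ref { total = 0; count = 0 } in
  try
    while true do
      let x = int_of_string (input_line stdin) in
      let s', msg = step !state x in
      state := s';                   (* the one mutation, at the edge *)
      print_endline msg
    done
  with End_of_file -> ()
```

The `step` function can be tested in isolation with no setup at all, which is the practical payoff of keeping the impure part this thin.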
jmull 10 months ago

> A lot of people think that functional programming is mostly about avoiding mutation at all costs.

People should try to stop thinking of mutation as something to be avoided, and start thinking of it as something to be managed.

Mutating state is good. That's usually the whole point.

What's bad is when you create "accidental state" that goes unmanaged or that requires an infeasible effort to maintain. What you want is a source of truth for any given bit of mutable state, plus a mechanism to update anything that depends on that state when it is changed.

The second part is where functional programming shines -- you have a simple way to just recompute all the derived things. And since it's presumably the same way the derived things were computed in the first place, you don't have to worry about its logic getting out-of-sync.
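A toy OCaml sketch of that discipline (names invented): one source of truth, and every update funneled through a function that re-runs the same pure derivation that produced the value in the first place, so the derived data can never drift out of sync.

```ocaml
(* Pure derivation: the only way the derived value is ever produced. *)
let derive items = List.fold_left ( + ) 0 items

(* Single source of truth for the mutable state. *)
let items = ref [1; 2; 3]

(* All updates go through one function, which re-derives everything. *)
let update f =
  items := f !items;
  Printf.printf "sum is now %d\n" (derive !items)

let () =
  update (fun xs -> 4 :: xs);                   (* sum is now 10 *)
  update (List.filter (fun x -> x mod 2 = 0))   (* sum is now 6  *)
```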
pornel 10 months ago

> Rust's shared XOR mutable references […] makes linearity nearly useless or inevitably creates a parallel, incomplete universe of functions that also work on linear values.

Yup. Rust can't abstract over mutability. For owned values, it defaults to exclusive ownership, and needs explicit Rc<T> and clone() to share them. For references, in practice it requires making separate `foo()` and `foo_mut()` functions for each type of loan.

Even though this sounds horribly clunky, it works okay in practice. The ability to temporarily borrow exclusively owned objects as either shared or exclusive-mutable adds enough flexibility.

Rust is known for being difficult, but I think a lot of that is due to the lack of GC. Rust can't make any reference live longer, and programmers have to manually get the scopes of loans right, or manually use Rc<RefCell> as a mini DIY GC. Perhaps shared XOR mutable with a GC could be the best of both?
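As a sketch of what the "GC half" of that trade looks like on its own: an OCaml `ref` is roughly what Rust spells `Rc<RefCell<T>>`, a shared, mutable, garbage-collected cell, with no loan scopes to manage by hand, but also with no shared-XOR-mutable check (toy code, names invented):

```ocaml
(* A GC'd shared mutable cell: no clone(), no lifetimes, no loans to
   scope by hand. The price is that nothing stops aliased mutation. *)
let counter = ref 0

(* The closure captures the cell; the GC keeps it alive as long as
   anything can still reach it. *)
let make_incr () = fun () -> incr counter

let () =
  let bump = make_incr () in
  bump (); bump ();
  counter := !counter + 40;
  Printf.printf "%d\n" !counter   (* prints 42 *)
```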
RandomThoughts3 10 months ago

The article utterly falls apart in its first paragraph. It acknowledges that the whole ML family, including OCaml, has perfect support for mutation; it rightly assumes most OCaml programmers would choose not to use it most of the time; but it then assumes, incorrectly, that this is because the language somehow makes mutation awkward. It doesn't. It's just that mutation is very rarely optimal. Even the example given fails:

> For example, let's say you're iterating over some structure and collecting your results in a sequence. The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use.

Well, no, this is straight confusion between what's expressed by the program and what's compiled. The idiomatic code in OCaml will end up generating machine code that is as performant as using a mutable array.

The fact that most programming languages don't give enough semantic information for their compiler to do a good job doesn't mean it necessarily has to be so. Functional programmers just trust that their compiler will properly optimize their code.

It gets fairly obvious when you realise that most OCaml developers switch to using arrays when they want to benefit from unboxed floats.

The whole article is secretly about Haskell and fails to come to the obvious conclusion: Haskell's choice of segregating mutation into special types and using monads was an interesting and fruitful research topic, but ultimately proved to be a terrible choice when it comes to language design (my opinion, obviously, not some absolute truth, but I think the many fairly convoluted tricks Haskellers pull to somehow reintroduce mutation support it). The solution is simple: stop using Haskell.
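For concreteness, here are the two styles the quoted passage contrasts, written in OCaml. This is a sketch, not a benchmark; `collect_list` and `collect_array` are invented names, and the array version assumes OCaml >= 5.2 for the stdlib Dynarray module:

```ocaml
(* Idiomatic OCaml: build a list with a fold and one final reverse. *)
let collect_list f xs =
  List.rev (List.fold_left (fun acc x -> f x :: acc) [] xs)

(* The imperative alternative: collect into a mutable dynamic array,
   the structure the article says imperative programmers would reach for. *)
let collect_array f xs =
  let d = Dynarray.create () in
  List.iter (fun x -> Dynarray.add_last d (f x)) xs;
  Dynarray.to_array d

let () =
  assert (collect_list succ [1; 2; 3] = [2; 3; 4]);
  assert (collect_array succ [1; 2; 3] = [| 2; 3; 4 |])
```

Whether the compiled code for the two really is comparable is the commenter's claim, not something this sketch demonstrates.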
tome 10 months ago

I'm not convinced by the dismissal of option 2. I agree ST is clunky, but not for the reasons given. It's clunky because it's impossible to mix with other effects. What if I want ST *and* exceptions, for example, and I want the presence of both to be tracked in the type signature? ST can't do that. But my effect system, Bluefin, can. In fact it can mix not only state references and exceptions, but arbitrary other effects such as streams and IO.

* https://hackage.haskell.org/package/bluefin-0.0.2.0/docs/Bluefin-State.html

* https://hackage.haskell.org/package/bluefin-0.0.6.0/docs/Bluefin-Exception.html
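For contrast, here is the combination being discussed, mutable state plus an exception, in plain OCaml, where the two effects mix freely but nothing about either appears in the function's inferred type (`int -> int list -> int`). That invisible mixing is exactly the tracking Bluefin adds back. The exception and all names are invented for illustration:

```ocaml
exception Too_big of int

(* Uses both local mutable state and an exception, yet its type
   signature says nothing about either effect. *)
let sum_below limit xs =
  let total = ref 0 in
  List.iter
    (fun x ->
      total := !total + x;
      if !total > limit then raise (Too_big !total))
    xs;
  !total

let () =
  match sum_below 10 [1; 2; 3] with
  | n -> Printf.printf "sum: %d\n" n              (* prints "sum: 6" *)
  | exception Too_big n -> Printf.printf "overflowed at %d\n" n
```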
nmadden 10 months ago

I don't know how Swift and Koka handle things, but I've written a lot of Tcl that uses the same CoW reference-counting trick. (Tcl is an under-appreciated FP language: everything is a string, and strings are immutable, so it has had efficient purely declarative data structures for decades.)

The downside in Tcl is that if you refactor some code, you can suddenly add a new reference and drop into accidentally-quadratic territory, because now everything is being copied. This leads to some cute/ugly hacks that are only explainable in terms of interpreter implementation details, purely to reduce a refcount in the right spot.
dave4420 10 months ago

A variant of option 4 is to keep track of references you know cannot possibly be shared, and update those by mutation. Compared to reference counting, it misses some opportunities for mutation, but avoids the false sharing.

I think Roc is doing this.
anfelor 10 months ago

Disclosure: I work on Koka's FBIP optimization (Option 4).

> The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use. But if you asked an OCaml programmer, they would almost certainly use a linked list instead.

I agree with this sentiment. However, OCaml does have mutable arrays that are both efficient and convenient to use. Why would a programmer prefer a list over them? In my opinion, the main benefit of lists in this context is that they allow pattern matching and inductive reasoning. To make functional programming languages more suited for array programming, we would thus need something like View Patterns for arrays.

A related issue is that mutation can actually be slower than fresh allocation in OCaml. The reason for this is that the garbage collector is optimized for immutable data structures: it has both a very fast minor heap that makes allocation cheap, and expensive tracking for references that do not go from younger to older elements. See: https://dev.realworldocaml.org/garbage-collector.html#scrollNav-4-5

> Unfortunately, this makes it impossible to use any standard functions like map on linear values and either makes linearity nearly useless or inevitably creates a parallel, incomplete universe of functions that also work on linear values.

You can implement polymorphism over linearity: this is done in Frank Pfenning's SNAX language and planned for the uniqueness types in a branch of OCaml.

> This might sound a little dangerous since accidentally holding on to a reference could turn a linear time algorithm quadratic

No, the in-place reuse optimization does not affect the asymptotic time complexity. But it can indeed change the performance drastically if a value is no longer shared, since copies are needed then.

> A tracing garbage collector just doesn't give you this sort of information.

It is possible to add one-bit reference counts to a garbage collector; see https://gitlab.haskell.org/ghc/ghc/-/issues/23943

> for now even these struggle to keep up with tracing garbage collectors even when factoring in automatic reuse analysis.

I investigated the linked benchmarks for a while. The gap between Koka and Haskell is smaller than described in that initial comment, but a tuned GHC is indeed a bit faster than Koka on that benchmark.
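A small OCaml sketch of the pattern-matching point (function names invented): the list version follows the shape of the data, while the array version falls back to index bookkeeping. View patterns for arrays would aim to give the second the ergonomics of the first.

```ocaml
(* Lists decompose structurally: the shape of the data drives the code. *)
let rec pairwise_sum = function
  | a :: b :: rest -> (a + b) :: pairwise_sum rest
  | _ -> []

(* Arrays have no such patterns; the same function becomes index
   arithmetic. *)
let pairwise_sum_arr arr =
  Array.init (Array.length arr / 2)
    (fun i -> arr.(2 * i) + arr.(2 * i + 1))

let () =
  assert (pairwise_sum [1; 2; 3; 4] = [3; 7]);
  assert (pairwise_sum_arr [| 1; 2; 3; 4 |] = [| 3; 7 |])
```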
kccqzy 10 months ago

The author didn't write a good objection to option 2. Both the ST monad (real mutation) and the variety of State monads (simulated mutation) work fine in practice. What's even better is the STM monad, the software transactional memory monad, which is not only about mutation but also solves synchronization between threads in a way that's intuitive and easy to use. But let's stick to the ST monad. Has the author looked at how hash maps and hash sets are implemented in Haskell? It's arrays and mutation!

> And if you only need mutation locally inside a function, using ST makes your code fundamentally more imperative in a way that really forces you to change your programming style. This isn't great either and doesn't exactly help with readability, so the mental overhead is rarely worth it.

What?? You are explicitly opting into writing mutating code. Of course that's going to change your programming style. It is *expected* to be different. Readability is increased because it clearly delineates a different coding style. And it's not even that different from other monadic code, even when you compare it to non-mutating monadic code.

Terrible article.
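What ST guarantees with types, impure functional languages get by convention. A minimal OCaml sketch of the same local-mutation pattern (invented example): the array and index counter are private to the function, so callers see a pure `int list -> int array`, which is the property Haskell's `runST` enforces at the type level.

```ocaml
(* Observably pure: the mutable array and the counter never escape. *)
let to_array_rev xs =
  let n = List.length xs in
  let a = Array.make n 0 in
  let i = ref n in
  List.iter (fun x -> decr i; a.(!i) <- x) xs;
  a

let () = assert (to_array_rev [1; 2; 3] = [| 3; 2; 1 |])
```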
throwaway81523 10 months ago

Ben Lippmeier's Disciplined Disciple Compiler (DDC), for his language later called Discus, was interesting. It was/is an experimental language that managed mutation through an effect typing system. In the intro to his thesis he talks about it some.

Discus language: http://discus-lang.org/

Thesis: https://benl.ouroborus.net/papers/2010-impure/lippmeier-impure-world.pdf

The thesis is the more interesting of those two links IMHO. The intro is chapter 1, which starts at page 17 of the PDF. It has one of the better critiques of Haskell that I've seen, and explains why uncontrolled mutation is not the answer. Reference types à la ML aren't the answer either, in his view.
jonathanyc 10 months ago

I'm not even a big OCaml fan (you can use Algolia on my comment history…), but this article is just factually wrong.

> For example, let's say you're iterating over some structure and collecting your results in a sequence. The most efficient data structure to use here would be a mutable dynamic array and in an imperative language that's what pretty much everyone would use.

> But if you asked an OCaml programmer, they would almost certainly use a linked list instead.

What? One of OCaml's most notable features as a functional programming language is how it was designed to support mutation (see "the value restriction", e.g. https://stackoverflow.com/questions/22507448/the-value-restriction). In my own OCaml programs I used mutation whenever appropriate (my only complaint would be that I wish there were a little more syntactic sugar around e.g. hash table access).

I wanted to like this post, but it seems like low-effort clickbait.
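The kind of everyday mutation the comment means, sketched with OCaml's stdlib Hashtbl (invented example; the lookup-then-replace dance is also the sort of thing the syntactic-sugar complaint is about):

```ocaml
(* Counting word frequencies with a mutable hash table: ordinary,
   idiomatic OCaml, no monads required. *)
let word_counts words =
  let tbl = Hashtbl.create 16 in
  List.iter
    (fun w ->
      let n = Option.value ~default:0 (Hashtbl.find_opt tbl w) in
      Hashtbl.replace tbl w (n + 1))
    words;
  tbl

let () =
  let tbl = word_counts ["a"; "b"; "a"] in
  assert (Hashtbl.find tbl "a" = 2)
```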
rocqua 10 months ago

How come the CoW method requires runtime reference counting? A lot of the same benefit (though not all of it) should be available based on static analysis, right?

Especially if the approach isn't really copy-on-write, but copy only when someone might want to use the old value: default to mutating in place whenever you can prove that is safe.

For most locals, that should be rather doable, and it would be a pretty big gain. For function parameters it probably gets hairy, though.
Mathnerd314 10 months ago

There is a way to do FBIP without reference counting: use immutable store semantics (always copy). At that point you are doing a lot of copying, but Haskell etc. are actually pretty good at managing this sort of profligate duplication. And of course it is possible to use RC-at-compile-time and other analyses to optimize away the copying; the difference is that *runtime* RC is not required, as it is in Koka. There is even a static analysis algorithm from 1985 (https://dl.acm.org/doi/abs/10.1145/318593.318660), never implemented in Haskell because they went with the ST monad. There is a theorem in the paper that for "natural" translations of imperative programs into FP, their algorithm will optimize the FP back to the imperative program or better.
PaulHoule 10 months ago

I learned programming in the 1980s from examples from the 1970s, and in the 1990s I would see (and write JNI wrappers for) FORTRAN codes built around algorithms, such as sorts and FFTs, that could work on data in place with minimal duplication of data structures, even when it wasn't obvious that they could do so.
ChadNauseam 10 months ago

I enjoyed this article. As someone who has written too much Haskell and OCaml, and now writes mostly Rust, I am biased, but I think this problem is mostly solved by Rust. (The author mentions Rust in option 3, but I think underappreciates it.)

The author mentions linear types. This is a bit of a pet peeve of mine because, while very useful, linear types are not the concept that many people think they are, and they are not implemented in Rust (and neither are affine types). What Rust implements is referred to as a "uniqueness type".

The difference has to do with how they deal with what linear-types people call "exponentials". A linear type, as the article mentions, is a type whose values must be "consumed" exactly once (where consuming means passing a value into a function, returning it, or sometimes destructuring it). Of course, in this mad world of ours we sometimes need to consume a value more than once, and indeed a language with only linear types would not be Turing-complete. This escape hatch is called an "exponential", I guess because exponential is kind of like the opposite of linear. A value with an exponential type can be used as often as you want. It is essentially most types in most programming languages.

If a function expects a value with a linear type, can you pass a value with an exponential type to it? The answer is that you can. Try this in Linear Haskell if you don't believe me. A function taking a value with a linear type just says "I consume this value exactly once, but you can pass me whatever you want". The restriction is that values with linear types can only be passed to functions that expect linear types. A value with a linear type must be consumed exactly once, so you certainly can't pass it to a function that expects a value with an exponential type, because that function might use the value twice. In other words, a linear type is a restriction on the callee.

Those familiar with Rust will notice that this is not how Rust works. If a function takes T, and you have &T, you just cannot call that function. (Ignore Clone for now.) However, in the world of linear types, this would be allowed. This makes linear types not useful for the article's purposes, although they are still very useful for other things.

What Rust wants is a constraint not provided by linear types. Where linear types are a restriction on the callee, Rust wants to be able to restrict the caller. It wants to restrict the caller in such a way that it can know there are no other references to a variable that's been passed into a function. People call this a "uniqueness type" because you can say "I want this type to be 'unique'" (where 'unique' means not aliased). Honestly, this name doesn't make a lot of sense, but it makes a little more sense when you think of it in terms of references. If a reference is unique, then no other reference points to the same object (which is the requirement Rust imposes on mutable references). So while a linear type allows you to pass a non-linear value to a function that expects a linear one, Rust doesn't allow you to pass a non-unique variable to a function that expects a unique one.

And adding this requirement to mutation resolves 90% of the issues that make mutability annoying. Mutability becomes challenging to understand when:

1. You have multiple references pointing to the same data in memory.
2. You change the data using one of these references.
3. As a result, the data appears to have changed when accessed through any of the other references.

This simply cannot happen when mutability requires values to not be aliased.
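The three-step hazard above, made concrete in OCaml, which permits it; this is precisely what Rust's unique (unaliased) mutable references rule out:

```ocaml
(* 1. Two references point to the same mutable data. *)
let a = ref [1; 2; 3]
let b = a                 (* an alias, not a copy *)

let () =
  (* 2. The data is changed through one of the references. *)
  b := [];
  (* 3. The change is visible through the other reference,
        even though 'a' was never mentioned in the update. *)
  assert (!a = [])
```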
classified 10 months ago

A comment from the article page:

*I blame Haskell and the sudden fixation on absolute purity that manifested just as the VC-startup set decided it was the next secret sauce, to the point of literally redefining what "functional programming" meant in the first place.*

*I think that fixation has forced a lot of focus on solving for "how do we make Haskell do real work" instead of "how do we make programming in general more predictable and functional", and so the latter fight got lost to "well we bolted lambdas into Java somehow, so it's functional now".*

Bullseye.
woctordho 10 months ago
The lazy clone in HVM may be a viable approach to mutation
freeduck 10 months ago

What category are Clojure's data structures (https://clojure.org/reference/data_structures) in?
narski 10 months ago
I recently ran into this issue when trying to memoize a simple numerical sequence in Hoon (yes, *that* Hoon. I know, I know...).

Let's use the Fibonacci sequence as an example. Let's write it the classic, elegant way: f(n) = f(n-1) + f(n-2). Gorgeous. It's the sum of the two previous. With the caveat that f(n=0|1) = n. In Python:

```python
# fib for basic b's
def fib(n):
    # Base case
    if n == 0 or n == 1:
        return n
    return fib(n-1) + fib(n-2)
```

Right off the bat, performance is exponential, roughly O(2^n). Every call to f(n-1) will *also* need to compute f(n-2) anyway! It's a mess. But since Python passes arrays and dictionaries as pointers (*cough*, sorry! I meant to say *references*) it's super easy to memoize:

```python
# optimize-pilled memoize-chad version
def fib(n, saved={}):
    if n in saved:
        return saved[n]
    if n == 0 or n == 1:
        saved[n] = n
    else:
        saved[n] = fib(n-1) + fib(n-2)
    return saved[n]
```

Okay, now our version is nearly as fast as the iterative approach.

This is the normal pattern in most languages: memoizing otherwise "pure" functions is easy because you can reference a shared object using references, right? Even with multithreading, we're fine, since we have shared memory.

Okay, but in Hoon, there are no pointers! Well, there kinda are. The operating system lets you update the "subject" of your Urbit (the context in which your programs run), and you can do this via the filesystem (Clay) or daemons (Gall agents, which have their own state, kind of).

But to do this within a simple function, not relying on fancy OS features? It's totally possible, but a huge pain in the Aslan.

First, here's our bog-standard fib in Hoon:

```hoon
|=  n=@ud
?:  (lte n 1)  n
%+  add
  $(n (dec n))
$(n (sub n 2))
```

Now, I memoize on the way down, by calculating just f(n-1) and memoizing those values, to acquire f(n-2):

```hoon
:-  %say
|=  [* [n=@ud ~] [cache=(map @ud @ud) ~]]
:-  %noun
^-  [sum=@ud cache=(map @ud @ud)]
=/  has-n  (~(get by cache) n)
?~  has-n
  ?:  (lte n 1)
    [n (~(put by cache) n n)]
  =/  minus-1  $(n (dec n))
  =/  minus-2
    =/  search  (~(get by cache.minus-1) (sub n 2))
    ?~  search  0
    (need search)
  :-  (add sum.minus-1 minus-2)
  (~(put by cache.minus-1) n (add sum.minus-1 minus-2))
[(need has-n) cache]
```

and that works in the Dojo:

```
> =fib-8 +fib 8
> sum.fib-8
21
```

but it sure is easier in Python! And I'm not picking on Hoon here; it's just pure functional programming that makes you think this way. As a hacker that's fun, but in practice it's kinda inconvenient.

I even wonder how much faster I actually made things. Let's see:

```
> =old now
> =res +fib 18
> sum.res
2.584
> (sub now old)
1.688.849.860.263.936
:: now with the non-memoized code...
> =before now
> +fib 18
2.584
> (sub now before)
1.125.899.906.842.624
```

Ha! My super improved memoized code is actually slower! That's because computing the copies of the map costs more than just recursing a bunch. This math should change if I try to compute a bigger fib number...

Wait. Never mind. My memoized version is faster. I tested it with the Unix time command. It's just that the Urbit Dojo has a weird way of handling time that doesn't match my intuition. Oh well, I guess I can learn how that works.
But my point is, thinking is hard, and in Python or JS or C I only have to think in terms of values and pointers. And yes, that comes with subtle bugs where you think you have a value but you really have a pointer! But most of the time it's pretty easy.

Btw, sorry for rambling on with this trivial nonsense: I'm a devops guy, so this is probably super boring and basic for all you master HN SWEs. But it's just a tiny example of the constant frustrations I've had trying to do things that would be super simple if I could just grab a reference and modify something in memory, which, for better or worse, is how every imperative language implicitly does things.
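For comparison, a functional language with first-class mutable references makes the memoization as easy as the Python version. A minimal OCaml sketch (assuming the same base cases as the comment's code):

```ocaml
(* Memoized fib: the hash table plays the role of Python's shared dict.
   The mutation is local to the definition and invisible to callers. *)
let fib =
  let saved = Hashtbl.create 64 in
  let rec go n =
    if n <= 1 then n
    else
      match Hashtbl.find_opt saved n with
      | Some v -> v
      | None ->
          let v = go (n - 1) + go (n - 2) in
          Hashtbl.add saved n v;
          v
  in
  go

let () = assert (fib 18 = 2584)   (* matches the Dojo's 2.584 *)
```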
scrubs 10 months ago
This post is great! And so are the comments. Another reason hn rocks.
cryptica 10 months ago

IMO, functional programming was a zero-interest-rate phenomenon. Some mathematicians suffering from professional deformation believed that programming should adhere to the purity of mathematical conventions... Meanwhile, there was no proof whatsoever to support the hypothesis that constraining programming to such conventions would be beneficial in a practical sense.

FP proponents spotted a small number of problems which arose in certain specific OOP implementations and stubbornly decided that OOP itself was to blame.

Some FP proponents have pointed out that passing around instance references to different parts of the code can lead to 'spooky action at a distance.' For example, if references to the same child instance are held by two different parent modules and they both invoke state-mutating methods on the child instance, it becomes difficult to know which of the two parent modules was responsible for the mutation of state within the child instance...

The mistake of FP proponents is that they failed to recognize the real culprit behind the flaws they identified. In the case of 'spooky action at a distance', the culprit is pass-by-reference.

If you keep that in mind and consider what OOP pioneers such as Alan Kay have said about OOP ("It's about messaging"), it becomes an irrefutable fact that this flaw has nothing to do with OOP but is merely a flaw in specific implementations of OOP... flawed implementations which neglected the messaging aspect of OOP.

To summarize it simply: with OOP, you're not supposed to pass instances around to each other, especially not references. What you're supposed to pass around are messages. State should be fully encapsulated. Messages are not state; instances are state. Instances shouldn't be moved around across multiple modules; messages should.

If you approach OOP with these principles in mind, and make the effort to architect your systems in such a way, you will see that it solves all the problems that FP claims to solve and doesn't introduce any of its own problems... which are numerous and would require an entire article to enumerate.