I feel that the big hidden implication these kinds of articles are trying to convey is "AI is not <i>real</i> intelligence", further implying something along the lines of "AI will never be conscious" (as it's hard to come up with another definition of "real" intelligence except the-human-kind).<p>I'd like to propose a counterargument:<p>Assumptions: The theory of evolution is true. The primordial single-cell organism from which we all evolved was not conscious, but rather just a biological machine. Humans are conscious.<p>Deduction: Somewhere along the line between the primordial single-cell organism and a human being, there was a transition from non-consciousness to consciousness, and the only [*] factor in that transition was the complexity of the nervous system.<p>Conclusion: Consciousness (or, "real" intelligence) arises from the complexity of a machine. AI can, in principle, become conscious.<p>Yes, we know how AI works, because we built it. But why would that make consciousness arising from a sufficiently complex statistical model impossible?<p>[*] as per apendleton's comment, I made a mistake here: complexity is not the <i>only</i> factor, but it is a necessary one in the creation of consciousness.
I predict it'll all come crashing down once people start having skin in the game. (Right now, it's a toy, the source for a million overheated news articles and opinions.)<p>ChatGPT can pass the Bar. Okay, have it draw up a contract and have the parties sign it—skin in the game. When an omitted comma can cost millions[1], what could an LLM's hallucinations cost?<p>[1] <a href="https://kagi.com/search?q=missing+comma+millions&r=us&sh=QDQa9ea01ZXkvjLI_a8Lqw" rel="nofollow">https://kagi.com/search?q=missing+comma+millions&r=us&sh=QDQ...</a>
Intelligence is not knowledge, and this article heavily conflates the two. Does a person "use" his teacher's "intelligence" when learning, or is he using his own intelligence, along with the teacher's guidance and knowledge?<p>We freely allow other humans to learn from us and see it as a positive thing. It's completely hypocritical to think that AI shouldn't be allowed to do it. That's just setting up impossible conditions because of prejudice against AI.<p>I hope that the AI revolution will finally make us reflect and stop preventing human access to information and knowledge. Good luck competing with a huge AI trained on so many things if most humans can barely access scientific journals and other knowledge produced by other humans.<p>Information should be free of restrictions, and so should knowledge. Nobody is actually revealing anything significant in patent claims anymore; it's all obfuscation, which undermines the point of the system. Scientific journals are such a joke, they put up prices for articles that they know nobody will pay.<p>I hope AI produces a ton of intellectual property theft. I hope it will crush the concept of intellectual property to extinction. I didn't support this nonsense when it came to humans, so I'm not going to support it when it comes to AI.
This logic only applies to generative pre-training, behavior cloning, and other training methods which rely on learning to mimic well-structured content from the real world.<p>It does not apply to intelligence gathered through methods like RL.<p>How does the author think about the intelligence of AlphaGo, for instance, which was trained entirely by self-play?
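The self-play idea the comment invokes can be illustrated with a toy far simpler than AlphaGo (which the comment does not describe in any detail): an agent that learns the game of Nim purely by playing against itself, with no human examples in its training data. Everything here (the game, the tabular Q-values, the Monte Carlo updates) is an assumed minimal sketch, not AlphaGo's actual method.

```python
import random
from collections import defaultdict

# Toy self-play sketch (assumed example, much simpler than AlphaGo):
# Nim with 10 stones; players alternately take 1-3; whoever takes the
# last stone wins. No human games are used -- only self-play.

Q = defaultdict(float)   # (stones_left, action) -> value estimate
ALPHA, EPS = 0.1, 0.2    # learning rate, exploration rate

def choose(stones, greedy=False):
    actions = [a for a in (1, 2, 3) if a <= stones]
    if not greedy and random.random() < EPS:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(stones, a)])

def self_play_episode():
    stones, history = 10, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))   # record each mover's (state, action)
        stones -= a
    # Whoever made the last move won; propagate +1/-1 backwards,
    # alternating sign because the players alternate moves.
    reward = 1.0
    for state_action in reversed(history):
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

random.seed(0)
for _ in range(20000):
    self_play_episode()

# Optimal Nim strategy is to leave the opponent a multiple of 4 stones
# (from 10, that means taking 2).
print(choose(10, greedy=True))
```

The point of the sketch: the learned values come from the structure of the game itself, not from mimicking anyone's prior output, which is the distinction the comment is drawing.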
I’ve taken to reading it as “augmented intelligence”. As in: take the intelligence of all the people who made the training data. Augment that with the intelligence of the people who devised the model architecture and trained it. Augment that with any emergent intelligence in the system. Augment that with the prompt engineer's intelligence. Augment that with the end user's intelligence.<p>Bada bing bada boom, AI.
> "In the end, generative AI takes from the world’s best artists, musicians, philosophers, and other thinkers – erasing their identities, and reassigns credit to its output. Without the proper restraints, it will produce the master forgeries of our generation, and blur the lines between what we view as human ideas and synthesized ones."<p>as a free software contributor, always has been
The problem with this reasoning is that it could apply just as well to any person who learns from culture. And it isn't really a problem; it's only problematic inside the system of copyright, patents, equity, dividends, etc. If we viewed collective knowledge as a common good, AI could be seen as contributing to total human flourishing the same way a public intellectual does.
I had a professor in grad school say that AI is just a search algorithm. I think that is an interesting way of framing things. Often the model is just searching its training data for the right output. I don't think this diminishes the value of such models; recent advancements have shown how exciting "just a search algorithm" can be.
I like this argument because it also highlights how these large models are completely unlike the human experience. Humans bumble around learning through quite limited experiences until they learn enough to arrive at providing for themselves (hopefully). These AIs needed to learn all of human output in order to do a human job. That means that either a) human brains are filled with ancestral genetic knowledge that allows them to interact with their environment (for which there is little evidence), or b) our brains work completely differently from these AI contraptions.
<i>It will take jobs because the computer is using the thinking of a million other workers – how can any one worker compete with that? Training material is, at a deconstructed level, the critical patterns of other people’s thoughts, ideas, writings, music, theology, facts, opinions, poetry, and so on</i><p>AI has completely changed technological competition.<p>Most still don't perceive that AI is essentially a skill/technology replication machine. This changes everything. It is not comparable to any technological innovation prior.<p>"Anything you create can be replicated without investment cost while also being unique in design as well as delivering the same function or experience. Intellectual property laws essentially have no function in this new environment and there isn't an obvious remedy"<p><a href="https://dakara.substack.com/p/ai-and-the-end-to-all-things" rel="nofollow">https://dakara.substack.com/p/ai-and-the-end-to-all-things</a>
"The danger of this type of ML is not that it will take jobs (it definitely will, and already is), but why it will take jobs. It will take jobs not because the computer is replacing the thinking of one worker. It will take jobs because the computer is using the thinking of a million other workers – how can any one worker compete with that?"<p>A question I see glossed over whenever generative machine learning is discussed: how much of those "million other workers'" thinking is accurate, effective, and wise?<p>The author's point about copyright infringement makes it almost impossible for the creators of generative machine learning tech to publish their sources, so how could you ever know?
Imagine I found a Boeing 777 in the wild with all the tooling in the hangar, but no other humans, just me.<p>Could ChatGPT help me repair this airplane and make it airworthy, with me knowing nothing about planes?<p>If it can, it means we can really replace most of today's technicians who diagnose and fix issues in all kinds of machinery with just ChatGPT and an intern.<p>There are some people who always figure out what to put into a Google search to diagnose issues in machinery or systems; they will run dozens or more searches, get an idea of what they are working with, and suddenly they can fix the issues they wanted to fix.<p>Does that mean these google-fu technical guys can now be replaced by an average high schooler plus ChatGPT?
This site identifies me as Russian (I am not) and all it shows me is some anti-Putin propaganda in Russian instead of at least proper HTML markup. Screw you, zdziarski, for doing this to your visitors, even the Russian ones! Information must be free to anyone regardless of race, territory, political views, or anything else, or you are not much better than the side you are against.
I'd say it's just as likely that human intelligence is just someone else's intelligence too, with some bootstrapping from nature. Would a 50000 BCE hunter/gatherer (assume grey/white matter counts equal to a modern one's), given infinite restarts of their life (but not the ability to build upon or add to a body of knowledge), be able to conceptualize general relativity?<p>i.e. learning how to track animal grazing patterns and when to find and harvest plants could lead to the development of time systems, given there are enough nodes [neurons] to make representations beyond direct stimuli. And if you keep adding nodes to the population, and the ability to connect more of them, then the further development can be pushed and the more concepts can be connected with underlying patterns. Do this enough and a certain life-form might get all self-important and decide to start labeling things as intelligent or not, depending on how much of their own image they see in it.
AI today is like emitting carbon in 1900. Nobody realizes at the moment just how badly they are being swindled. In much the same way a few profited by externalizing costs into the atmosphere, to be paid by people hundreds of years hence, we see the same cannibalization of the open web today. The web has always been a mostly benevolent shared space; now it's being strip-mined of its usefulness. AI titans are gorging themselves because our laws and our "common sense" haven't caught up to them.
Everything is a remix (watch the movie)
We are all using the intelligence of previous people, generations of humans. The same is true for AI.
It’s not different from humans.
What will we do once it’s conscious? Will AI have rights? Should we be able to shut it down?
It actually goes a lot deeper than that.<p>Companies like this helped organize cheap workers around the world to just put all that information once and for all, to train the AI and interpolate / remix the answers for people: <a href="https://www.forbes.com/sites/kenrickcai/2023/04/11/how-alexandr-wang-turned-an-army-of-clickworkers-into-a-73-billion-ai-unicorn/" rel="nofollow">https://www.forbes.com/sites/kenrickcai/2023/04/11/how-alexa...</a><p>OpenAI this year hired a bunch of people to just manually do the work of basic coding once and for all: <a href="https://www.semafor.com/article/01/27/2023/openai-has-hired-an-army-of-contractors-to-make-basic-coding-obsolete" rel="nofollow">https://www.semafor.com/article/01/27/2023/openai-has-hired-...</a><p>This is a story of using cheap labor to train machines at scale. OpenAI's content moderators have just unionized: <a href="https://futurism.com/the-byte/ai-content-moderators-unionized-africa" rel="nofollow">https://futurism.com/the-byte/ai-content-moderators-unionize...</a><p>Moreover, this is a story of taking labor of artists all over the world, remixing it at scale and selling it, destroying any sort of scarcity: <a href="https://the-decoder.com/ai-images-sold-on-the-internet-artists-fight-back/" rel="nofollow">https://the-decoder.com/ai-images-sold-on-the-internet-artis...</a><p>More artist issues: <a href="https://www.nytimes.com/2023/05/01/arts/design/ai-art-class.html" rel="nofollow">https://www.nytimes.com/2023/05/01/arts/design/ai-art-class....</a><p>There's GitHub, which had tons of people contribute code, and then the AI just ingested all that work for remixing by Microsoft Copilot:
<a href="https://www.techtarget.com/searchsoftwarequality/news/252526359/Developers-warned-GitHub-Copilot-code-may-be-licensed" rel="nofollow">https://www.techtarget.com/searchsoftwarequality/news/252526...</a><p>And of course, finally there's Wikipedia, which had tons of people contribute actual changes, vet them for accuracy, etc. over 20 years, in many languages -- simply taken for use by AI companies:
<a href="https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikipedia-apart" rel="nofollow">https://www.vice.com/en/article/v7bdba/ai-is-tearing-wikiped...</a><p>But once again, in 2023, where we are now, there is a staggering number of humans just producing this kind of content, and the AI is mainly used to model and remix it. Perhaps in doing so it can form some sort of internal understanding of this text, and that's what's interesting. But even Sam Altman has lately said "scale is not all you need". So a new approach is needed here. <a href="https://www.youtube.com/watch?v=PsgBtOVzHKI">https://www.youtube.com/watch?v=PsgBtOVzHKI</a>