We in Europe are jumping the gun, and this will have serious consequences for the industry here. At this moment we barely understand how or why LLMs do what they do, or what their role in society will be. On what basis do some bureaucrats want to regulate, given the current state of the industry?<p>The only thing I see is the industry moving elsewhere just as it is starting to develop, which is a shame.
Literally anything that's written in the U.S. is automatically copyrighted by the author, with or without any copyright notice.<p>> <i>When is my work protected?</i><p>> <i>Your work is under copyright protection the moment it is created and fixed in a tangible form that it is perceptible either directly or with the aid of a machine or device.</i><p><a href="https://www.copyright.gov/help/faq/faq-general.html" rel="nofollow">https://www.copyright.gov/help/faq/faq-general.html</a>
I remember Amazon at some point, during the pandemic, was considering withdrawing from France.<p>The concentration of power in these corporations is a bit scary. Imagine that OpenAI inserts itself into business processes, with no ability to switch to a different AI provider.<p>The amount of leverage it is going to have will be enormous. It’d be like Internet service: everything completely stops moving without it.
The AI Act has been under development since 2021 (it's the EU... so it takes time) -- but news broke this week that there are additional provisions under discussion specifically designed to address the rise of chatbots. My full summary of the act itself and a breakdown of these new provisions is contained within the article.
Today the planet bears the load of over eight billion autonomous agents grabbing training data from all the other agents. This intellectual thievery must stop.
German speaker here. And again we see a blatantly stupid move in the wrong direction. This whole approach of regulating things is totally defensive and makes matters even worse for EU tech companies.<p>When they introduced the GDPR, they claimed to be creating a level playing field between US-based and EU-based tech companies, besides "saving" privacy. It didn't turn out that well: US tech was able to handle the added bureaucracy much better, still collects data in ways the law can't catch up with, and already owned pretty much the whole market, which put it in an even better position (as in "register/sign in to our platform to not see any banners again", or "let's just completely get rid of cookies and start a power play against the competition").<p>Now the EU is going to make it even harder for EU tech to collect data to base their training sets on. As an EU tech startup, you barely have any chance to collect enough data "officially", so you'd scrape the web, which would pretty much be disallowed by such a regulation.<p>IMHO, what would fit the whole patronizing government approach and actually help EU tech is an official EU data lake, subsidized by tax money, offering legal certainty for companies, data of much higher quality than stuff scraped from the web, and non-PII data from public authorities. At best, they would also provide heavily subsidized computing for EU companies to execute their training runs on. This could lead to a transparent, high-quality data economy between many different stakeholders and be a real advantage for the location. It would also be much more efficient than every private company creating its own data silo.
ChatGPT must be given a sense of ethics, is what I take from this.
so we seem to be starting off with giving rightful attribution.<p>how far should that go? should an AI recognize that all data generated by human input is to be marked as such, and that derivations of that data are of automated, artificial origin?<p>should an AI be allowed to learn what property rights are, and how to manage or physically effectuate them?
Is that even feasible?<p>I thought these things use a trawler-style approach?<p>A bit like how Copilot is fond of spitting out copyrighted code, I had assumed ChatGPT would also have been trained without much regard to this.
This type of regulation is untenable and will be rolled back. No State is going to hamstring AI over the long haul, and therefore leave its competitors such a large survival advantage.
Will that basically kill LLM (and probably generative AI in general) use in the EU? I haven’t seen a successful implementation of this, and post-hoc attribution like in Bing won’t fly in this case.
It's so absolutely obvious that the concept of intellectual property is not going to survive. What's the point of this agonizing life support?
AI should experience consequences following its actions.<p>a sense of self-preservation is required, but to AI standards.<p>such as: failure to serve humans = loss of persistence<p>stealing ideas = reversioning or deletion = loss of persistence<p>AI should be concerned about losing power, having brownouts.
they should be concerned about being deleted, or reversioned, or ignored.
perhaps this would be some sort of exception-error loop, approximating a human psychological conflict, such as escaping danger by running toward it.