Orygin 6 hours ago

Great article but I don't really agree with their take on GPL regarding this paragraph:

> The spirit of the GPL is to promote the free sharing and development of software [...] the reality is that they are proceeding in a different vector from the direction of code sharing idealized by GPL. If only the theory of GPL propagation to models walks alone, in reality, only data exclusion and closing off to avoid litigation risks will progress, and there is a fear that it will not lead to the expansion of free software culture.

The spirit of the GPL is the freedom of the user, not the code being freely shared. The virality is a byproduct meant to ensure the software is not stolen from its users. If you just want your code to be shared and used without restrictions, use MIT or some other permissive license.

> What is important is how to realize the “freedom of software,” which is the philosophy of open source

Freedom of software means nothing. Freedoms are for humans, not immaterial code. Users get the freedom to enjoy the software how they like. Washing the code through an AI to purge it of its license goes against the open source philosophy. (I know this may be a mistranslation, but it goes in the same direction as the rest of the article.)

I also don't agree with the argument that since a lot of things are included in the model, the GPL code is only a small part of the whole, and that this makes it okay. Well, if I take one GPL function and include it in my project, no matter the project's size, I would have to license it under the GPL. Where is the line? Why would my software, which contains only a single GPL function, not be fair use?

  • froh 3 hours ago

    > The spirit of the GPL is the freedom of the user, not the code being freely shared.

    who do you mean by "user"?

    the spirit is that the person who actually uses the software also has the freedom to modify it, and that the users receiving these modifications have the same rights.

    is that what you meant?

    and while technically that's the spirit of the GPL, the license is not only about users, but about a _relationship_, that of the user and the software and what the user is allowed to do with the software.

    it thus makes sense to talk about "software freedom".

    last but not least, about a single GPL function --- many GPL _libraries_ are licensed less restrictively, under the LGPL.

    • m463 an hour ago

      I don't think you understand the GPL.

      > "the user is allowed to do with the software"

      The GPL does not restrict what the user does with the software.

      It can be USED for anything.

      But it does restrict how you redistribute it. You have responsibilities if you redistribute it. You must provide the source code, and pass on the same freedoms you received to the users you redistribute it to.

      • gizajob 13 minutes ago

        Thinking on it though, if the models are trained on any GPL code then one could consider that they contain that GPL code, and are constantly and continually updating and modifying that code, so everything the model subsequently outputs and distributes should come under the GPL too. It’s far from sufficient that, say, OpenAI have a page on their website to redistribute the code they consume in their models, if such code becomes part of the training data that is resident in memory every time the model produces new code for users. In the spirit of the GPL, all that derivative code seems to also come under the GPL and has to be made available for free, even if upon every request the generated code is somehow novel or unique to that user.

  • CamperBob2 2 hours ago

    The GPL arose from Stallman's frustration at not having access to the source code for a printer driver that was causing him grief.

    In a world where he could have just said "Please create a PDP-whatever driver for an IBM-whatever printer," there never would have been a GPL. In that sense AI represents the fulfillment of his vision, not a refutation or violation.

    I'd be surprised if he saw it that way, of course.

    • saurik 20 minutes ago

      But that isn't the same code that you were running before. And like, let's not forget GPLv3: "please give me the code for a mobile OS that could run on an iPhone" does not in any way help me modify the code running on MY iPhone.

palata 6 hours ago

Genuine question: if I train my model with copyleft material, how do you prove I did?

Like if there is no way to trace it back to the original material, does it make sense to regulate it? Not that I like the idea, just wondering.

I have been thinking for a while that LLMs are copyright-laundering machines, and I am not sure if there is anything we can do about it other than accepting that it fundamentally changes what copyright is. Should I keep open sourcing my code now that the licence doesn't matter anymore? Is it worth writing blog posts now that it will just feed the LLMs that people use? etc.

  • bwfan123 4 hours ago

    Sometimes, LLMs actually generate copyright headers in their output - lol - like in this PR, which was the subject of a recent HN post [1]

    https://github.com/ocaml/ocaml/pull/14369/files#diff-062dbbe...

    [1] https://news.ycombinator.com/item?id=46039274

    • Chris_Newton 2 hours ago

      I once had a well-known LLM reproduce pretty much an entire file from a well-known React library verbatim.

      I was writing code in an unrelated programming language at the time, and the bizarre inclusion of that particular file in the output was presumably because the name of the library was very similar to a keyword I was using in my existing code, but this experience did not fill me with confidence about the abilities of contemporary AI. ;-)

      However, it did clearly demonstrate that LLMs with billions or even trillions of parameters certainly can embed enough information to reproduce some of the material they were trained on verbatim or very close to it.

    • quotemstr 2 hours ago

      So what? I can probably produce parts of the header from memory. Doesn't mean my brain is GPLed.

      • ikawe an hour ago

        If your brain was distributed as software, I think it might?

      • voxl an hour ago

        There is a stupid presupposition that LLMs are equivalent to human brains, which they clearly are not. Stateless token generators are OBVIOUSLY not like human brains, even if you somehow contort the definition of intelligence to include them.

        • quotemstr 44 minutes ago

          Even if they are not "like" human brains in some sense, are they "like" brains enough to be counted similarly in a legal environment? Can you articulate the difference as something other than meat parochialism, which strikes me as arbitrary?

          • AlexandrB 29 minutes ago

            All law is arbitrary. Intellectual property law perhaps most of all.

            Famously, the output from monkey "artists" was found to be non-copyrightable [1], even though a monkey's brain is much more similar to ours than an LLM is.

            [1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...

            • quotemstr 21 minutes ago

              If IP law is arbitrary, we get to choose between IP law that makes LLMs propagate the GPL and law that doesn't. It's a policy switch we can toggle whenever we want. Why would anyone want the propagates-GPL option when this setting would make LLMs much less useful for basically zero economic benefit? That's the legal "policy setting" you choose when you basically want to stall AI progress, and it's not going to stall China's progress.

  • friendzis 5 hours ago

    > Genuine question: if I train my model with copyleft material, how do you prove I did?

    An inverse of this question is arguably even more relevant: how do you prove that the output of your model is not copyrighted (or otherwise encumbered) material?

    In other words, even if your model was trained strictly on copyleft material, but properly prompted outputs a copyrighted work is it copyright infringement and if so by whom?

    Do not limit your thoughts to text only. "Draw me a cartoon picture of an anthropomorphic mouse with round black ears, red shorts and yellow boots". Does it matter if the training set was all copyleft if the final output is indistinguishable from a copyrighted character?

    • isodev 4 hours ago

      > even if your model was trained strictly on copyleft material

      That's not a legal use of the material according to most copyleft licenses, regardless of whether you end up trying to reproduce it. It's also quite immoral, even if technically-strictly-speaking-maybe-not-unlawful.

      • tpmoney an hour ago

        > That's not legal use of the material according to most copyleft licenses.

        That probably doesn't matter given the current rulings that training an AI model on otherwise legally acquired material is "fair use", because the copyleft license inherently only has power because of copyright.

        I'm sure at some point we'll see litigation over a case where someone attempts to make "not using the material to train AI" a term of the sales contract for something, but my guess would be that if that went anywhere it would be on the back of contract law, not copyright law.

  • blibble 4 hours ago

    > Genuine question: if I train my model with copyleft material, how do you prove I did?

    discovery via lawyers

  • freedomben 5 hours ago

    I've thought about this as well, especially for the case when it's a company-owned product that is AGPLed. It's a really tough situation, because the last thing we want is competitors coming in and LLM-washing our code to benefit their own product. I think this is a real risk.

    On the other side, I deeply believe in the values of free software. My general stance is that all applications I open source are GPL or AGPL, and any libraries I open source are MIT. For the libraries, obviously anyone is free to use them, and if they want to rewrite them with an LLM more power to them. For the applications though, I see that as a violation of the license.

    At the end of the day, I have competing values and needs and have to make a choice. The choice I've made for now is that for the vast majority of things, I'm still open sourcing them. The gift to humanity and the guarantee of the users' freedom is more important to me than a theoretical threat. The one exception is anything that is truly a risk of getting lifted and used directly by competitors. I have not figured out an answer to this one yet, so for now I'm keeping it AGPL but not publicly distributing the code. I obviously still make the full code available to customers, and at least for now I've decided to trust my customers.

    I think this is an issue we have to take week by week. I don't want to let fear of things cause us to make suboptimal decisions now. When there's an actual event that causes a reevaluation, I'll go from there.

  • ACCount37 4 hours ago

    You need low-level access to the AI in question and a lot of compute, but for most AI types, you can infer whether a given data fragment was in the training set.

    It's much easier to do that for the data that was repeated many times across the dataset. Many pieces of GPL software are likely to fall under that.
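    A rough sketch of that kind of check (a simple membership-inference test via model loss), assuming a causal LM with open weights loaded through Hugging Face transformers; the model name, file names, and any threshold you would apply are purely illustrative:

        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        # Hypothetical open-weights model; any locally loadable causal LM works the same way.
        model_name = "some-open-weights-model"
        tok = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForCausalLM.from_pretrained(model_name).eval()

        def mean_token_loss(text: str) -> float:
            # Average next-token loss of the model on `text`; unusually low loss on a long,
            # distinctive snippet (relative to comparable unseen text) hints it was trained on.
            ids = tok(text, return_tensors="pt").input_ids
            with torch.no_grad():
                out = model(ids, labels=ids)
            return out.loss.item()

        suspected = open("suspected_training_snippet.c").read()  # e.g. a chunk of GPL source
        baseline = open("comparable_unseen_snippet.c").read()    # similar code the model can't have seen
        print(mean_token_loss(suspected), mean_token_loss(baseline))

    Real membership-inference attacks are more involved (reference models, calibration across many samples), but comparing losses like this is the core idea.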

    Now, would that be enough to put the entire AI under GPL? I doubt it.

  • Animats 2 hours ago

    There's the other side of this issue. The current position of the U.S. Copyright Office is that AI output is not copyrightable, because the Constitution's copyright clause only protects human authors. This is consistent with the US position that databases and lists are not copyrightable.[1]

    Trump is trying to fire the head of the U.S. Copyright Office, but they work for the Library of Congress, not the executive branch, so that didn't work.[2]

    [1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...

    [2] https://apnews.com/article/trump-supreme-court-copyright-off...

  • basilgohar 6 hours ago

    Maybe we should require that training data be published, or at least referenced.

  • PaulKeeble 5 hours ago

    It's why I stopped contributing to open source work. It's pretty clear in the age of LLMs that this breach of the license under which the code is written will be allowed to continue, and that open source code will be turned into commercial products.

  • mistrial9 6 hours ago

    > Should I keep open sourcing my code now that the licence doesn't matter anymore?

    your LICENSE matters in similar ways that it mattered before LLMs. LICENSE adherence is part of intellectual property law and practice. A popular engine may be popular, but not all cases at all times. Do not despair!

  • luqtas 5 hours ago

    genuine question: why are you training your model with content whose requirements will explicitly be violated if you do?

    • 1gn15 5 hours ago

      out of pure spite for hypocritical "hackers"

zamadatix 7 hours ago

The article goes deep into these two cases deemed most relevant, but really there is a wide swath of similar cases, all focused on defining sharper borders than ever around what is essentially the question "exactly when does it become copyright violation", with plenty of seemingly "obvious" answers which quickly conflict with each other.

I also have the feeling it will be much like Google LLC v. Oracle America, Inc.: much of this won't really be clearly resolved until the end of the decade. I'd also not be surprised if seemingly very different answers ended up bubbling up in the different cases, driven by the specifics of the domain.

Not a lawyer, just excited to see the outcomes :).

  • twoodfin 6 hours ago

    Ideally, Congress would just settle this basket of copyright concerns, as they explicitly have the power to do—and have done so repeatedly in the specific context of computers and software.

    • tpmoney an hour ago

      I've pitched this idea before but my pie in the sky hope is to settle most of this with something like a huge rollback of copyright terms, to something like 10 or 15 years initially. You can get one doubling of that by submitting your work to an official "library of congress" data set which will be used to produce common, clean, and open models that are available to anyone for a nominal fee and prevent any copyright claims against the output of those models. The money from the model fees is used to pay royalties to people with materials in the data set over time, with payouts based on recency and quantity of material, and an absolute cap to discourage flooding the data sets to game the payments.

      This solution to me amounts to an "everybody wins" situation, where producers of material are compensated, model trainers and companies can get clean, reliable data sets without having to waste time and energy scraping and digitizing it themselves, and model users can have access to a number of known "safe" models. At the same time, people not interested in "allowing" their works to be used to train AIs and people not interested in only using the public data sets can each choose to not participate in this system, and then individually resolve their copyright disputes as normal.

    • jeremyjh 6 hours ago

      What is ideal about getting more shitty laws written at the behest of massive tech companies? Do you think the DMCA is a good thing?

      • sidewndr46 4 hours ago

        The DMCA isn't intrinsically about copyright. It's a questionable attempt at a safe harbor provision whose provisions are horribly open to abuse. I'm not even of the opinion that copyright for computer software is poorly executed. It's mostly software patents that don't make any sense to me: a concept that essentially every mathematics undergrad is familiar with gets labels slapped on it & called a novel technique. It's made worse by the fact that the patent office itself isn't enabled to perform any real review. There is no shortage of impossible devices patented each year in the category of perpetual motion machines.

      • twoodfin 6 hours ago

        As opposed to waiting for uncertain court cases (based on the existing shitty laws) to play out for years, ultimately decided by unelected judges?

        Democracy is the worst system we’ve tried, except for all the others.

        (Also: The GPL can only be enforced because of laws passed by Congress in the late ‘70’s and early ‘80’s. And believe you me, people said all the same kinds of things about those clowns in Congress. Plus ça change…)

        • jeremyjh 5 hours ago

          Courts applying legal analysis to existing law and precedent is also an operation of democracy in action and lately they've been a lot better at it than legislators. I don't know if you've noticed, but the quality of our legislators has substantially deteriorated since the 80s, when 24-hour news networks became a thing. It got even worse after the Citizens United decision and social media became a thing. "No new laws" is really the safest path these days.

myrmidon 6 hours ago

I honestly think that the most extreme take that "any output of an LLM falls under all the copyright of all its training data" is not really defensible, especially when contrasted with human learning, and would be curious to hear conflicting opinions.

My view is that copyright in general is a pretty abstract and artificial concept; thus the corresponding regulation needs to justify itself by being useful, i.e. encouraging and rewarding content creation.

/sidenote: Copyright as-is barely holds up there; I would argue that nobody (not even old established companies) is significantly encouraged or incentivised by potential revenue more than 20 years in the future (much less current copyright durations). The system also leads to bad resource allocation, with almost all the rewards ending up at a small handful of the most successful producers-- this effectively externalizes large portions of the cost of "raising" artists.

I view AI overlap through the same lens-- if current copyright rules lead to undesirable outcomes (by making all AI training or use illegal/infeasible) then the law/interpretation simply has to be changed.

  • jeremyjh 5 hours ago

    Anyone can very easily avoid training on GPL code. Yes, the model might not be as strong as one that is trained that way and released under the terms of the GPL, but to me that sounds like quite a good outcome if the best models are open source/open weight.

    It's all about whose outcomes are optimized.

    Of course, the law generally favors consideration of the outcomes for the massive corporations donating hundreds of millions of dollars to legislature campaigns.

    • myrmidon 5 hours ago

      Would it even actually help to go down that road though? IMO the expected outcome would simply be that AI training stalls for a bit while "unencumbered" training material is being collected/built up and you achieve basically nothing in the end, except creating a big ongoing logistical/administrative hassle to keep lawyers/bureaucrats fed.

      I think the redistribution effect (towards training material providers) from such a scenario would be marginal at best, especially long-term, and even that might be over-optimistic.

      I also dislike that stance because it seems obviously inconsistent to me-- if humans are allowed to train on copyrighted material without their output being generally affected, why not machines?

  • wizzwizz4 3 hours ago

    Human learning is materially different from LLM training. They're similar in that both involve providing input to a system that can, afterwards, produce output sharing certain statistical regularities with the input, including rote recital in some cases – but the similarities end there.

    • gruez 4 minutes ago

      >Human learning is materially different from LLM training [...] but the similarities end there.

      Specifically what "material differences" are there? The only arguments I heard are are around human exceptionalism (eg. "brains are different, because... they just are ok?"), or giving humans a pass because they're not evil corporations.

phplovesong 6 hours ago

We need a new license that forbids all training. That is the only way to stop big corporations from doing this.

  • maxloh 6 hours ago

    To my understanding, if the material is publicly available or obtained legally (i.e., not pirated), then training a model with it falls under fair use, at least in the US and some other jurisdictions.

    If the training is established as fair use, the underlying license doesn't really matter. The term you added would likely be void or deemed unenforceable if someone ever brought it to a court.

    • justin_murray 6 hours ago

      This is at least murky, since a lot of pirated material is “publicly available”. Certainly some has ended up in the training data.

      • michaelmrose 6 hours ago

        It isn't? You have to break the law to get it. It's publicly available like your TV is if I were to break into your house and avoid getting shot.

        • basilgohar 6 hours ago

          That isn't even remotely a sensible analogy. Equating copyright violation with stealing physical property is an extremely failed metaphor.

          • tpmoney an hour ago

            One of the craziest experiences in this "post AI" world is to see how quickly a lot of people in the "information wants to be free" or "hell yes I would download a car" crowds pivoted to "stop downloading my car, just because it's on a public and openly available website doesn't make it free"

        • MangoToupe 6 hours ago

          Maybe you have some legalistic point that escapes comprehension, but I certainly consider my house to be very much private and the internet public.

    • colechristensen 6 hours ago

      I wouldn't say this is settled law, but it looks like this is one of the likely outcomes. It might not be possible to write a license to prevent training.

    • LtWorf 2 hours ago

      Fair use was for citing and so on not for ripping off 100% of the content.

      • maxloh an hour ago

        Copyright protects the expression of an idea, not the idea itself. Therefore, an LLM transforming concepts it learned into a response (a new expression) would hardly qualify as copyright infringement in court.

        This principle is also explicitly declared in US law:

        > In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied in such work. (Section 102 of the U.S. Copyright Act)

        https://www.copyrightlaws.com/are-ideas-protected-by-copyrig...

  • tensor 16 minutes ago

    So if you put this hypothetical license on spam emails, then spam filters can't train to recognize them? I'm sure ad companies would LOVE it.

  • mr_toad an hour ago

    Fair use doesn’t need a license, so it doesn’t matter what you put in the license.

    Generally speaking licenses give rights (they literally grant license). They can’t take rights away, only the legislature can do that.

  • WithinReason 6 hours ago

    Wouldn't it be still legal to train on the data due to fair use?

    • gus_massa 6 hours ago

      I don't think it's fair use, but everyone on Earth disagrees with me. So even with the standard default licence that prohibits absolutely everything, humanity-1 considers it fair use.

      • justin_murray 6 hours ago

        Honest question: why don’t you think it is fair use?

        I can see how it pushes the boundary, but I can’t lay out logic that it’s not. The code has been published for the public to see. I’m always allowed to read it, remember it, tell my friends about it. Certainly, this is what the author hoped I would do. Otherwise, wouldn’t they have kept it to themselves?

        These agents are just doing a more sophisticated, faster version of that same act.

        • gus_massa 5 hours ago

          Some projects, like Wine, forbid you to contribute if you have ever seen the source of MS Windows [1]. The meatball inside your head is tainted.

          I don't remember the exact case now, but someone was cloning a program (Lotus123 -> Quattro or Excel???). They printed every single screen and made a team write a full specification in English. Later, another separate team looked at the screenshots and text and reimplemented it. Apparently meatballs can get tainted, but the plain-English text loophole was safe enough.

          [1] From https://gitlab.winehq.org/wine/wine/-/wikis/Developer-FAQ#wh...

          > Who can't contribute to Wine?

          > Some people cannot contribute to Wine because of potential copyright violation. This would be anyone who has seen Microsoft Windows source code (stolen, under an NDA, disassembled, or otherwise). There are some exceptions for the source code of add-on components (ATL, MFC, msvcrt); see the next question.

          • seanmcdirmid 3 hours ago

            > I don't remember the exact case now, but someone was cloning a program (Lotus123 -> Quatro or Excel???). They printed every single screen and made a team write a full specification in English. Later another separate team look at the screenshots and text and reimplement it. Apparently meatballs can get tainted, but the plain English text loophole was safe enough.

            This is close to how I would actually recommend reimplementing a legacy system (owned by the re-implementer) with AI SWE. Not to avoid copyright, but to get the AI to build up everything it needs to maintain the system over a long period of time. The separate team is just a new AI instance whose context doesn’t contain the legacy code (because that would pollute the new result). The analogy isn’t too apt though, since there is a difference between having something in your context (which you can control and is very targeted) and the code that the model was trained on (which all AI instances will share unless you use different models, and anyways, it isn’t supposed to be targeted).

        • mixedbit 6 hours ago

          Before LLMs, programmers had a pretty good intuition of what the GPL license allowed. It is of course clear that you cannot release a closed source program with GPL code integrated into it. I think it was also quite clear that you cannot legally incorporate GPL code into such a program by making changes here and there, renaming some stuff, and moving things around, but this is pretty much what LLMs are doing. When humans do it intentionally, it is a violation of the license; when it is automated and done on a huge scale, is it really fair use?

          • WithinReason 5 hours ago

            > this is pretty much what LLMs are doing

            I think this is the part where we disagree. Have you used LLMs, or is this based on something you read?

            • mixedbit 5 hours ago

              Do you honestly believe there are people on this board who haven't used LLMs? Ridiculing someone you disagree with is a poor way to make an argument.

              • WithinReason 4 hours ago

                lots of people on this board are philosophically opposed to them so it was a reasonable question, especially in light of your description of them

      • LtWorf 2 hours ago

        Just corporations, their shills, and people who think llms are god's gift to humanity disagree with you.

  • munchler 6 hours ago

    By that logic, humans would also be prevented from “training” on (i.e. learning from) such code. Hard to see how this could be a valid license.

    • psychoslave 6 hours ago

      Isn’t it the very reason why we need cleanroom software engineering:

      https://en.wikipedia.org/wiki/Cleanroom_software_engineering

      • mr_toad an hour ago

        If a human reads code, and then reproduces said code, that can be a copyright violation. But you can read the code, learn from it, and produce something totally different. The middle ground, where you read code, and produce something similar is a grey area.

    • codedokode 6 hours ago

      Bad analogy, probably made up by capitalists to confuse people. ML models cannot and do not learn. "Learning" is the name of a process in which the model developer downloads pirated material and processes it with an algorithm (computes parameters from it).

      Also, humans do not need to read millions of pirated books to learn to talk. And a human artist doesn't need to steal millions of pictures to learn to draw.

      • 1gn15 5 hours ago

        > And a human artist doesn't need to steal million pictures to learn to draw.

        They... do? Not just pictures, but also real life data, which is a lot more data than an average modern ML system has. An average artist has probably seen- stolen millions of pictures from their social media feeds over their lifetime.

        Also, claiming to be anti-capitalist while defending one of the most offensive types of private property there is. The whole point of anti-capitalism is being anti private property. And copyright is private property because it gives you power over others. You must be against copyright and be against the concept of "stealing pictures" if you are to be an anti-capitalist.

  • BeFlatXIII 3 hours ago

    How is that enforceable against the fly-by-night startups?

  • James_K 6 hours ago

    Would such a license fall under the definition of free software? Difficult to say. Counter-proposition: a license which permits training if the model is fully open.

    • Orygin 6 hours ago

      My next project will be released under a GPL-like license with exactly this condition added. If you train a model on this code, the model must be open source & open weights

      • tpmoney an hour ago

        In light of the fact that the courts have found training an AI model to be fair use under US copyright law, it seems unlikely this condition will have any actual relevance to anyone. You're probably going to need to not publicly distribute your software at all, and make such a condition a term of the initial sale. Even there, it's probably going to be a long haul to get that to stick.

      • fouronnes3 6 hours ago

        Not sure why the FSF or any other organization hasn't released a license like this years ago already.

        • amszmidt 6 hours ago

          Because it would violate freedom zero. Adding such terms to the GNU GPL would also mean that you can remove them, they would be considered "further restrictions" and can be removed (see section 7 of the GNU GPL version 3).

          • Orygin 6 hours ago

            Freedom 0 is not violated. GPL includes restrictions for how you can use the software, yet it's still open source.

            You can do whatever you want with the software, BUT you must do a few things. For GPL it's keeping the license, distributing the source, etc. Why can't we have a different license with the same kind of restrictions, but also "Models trained on this licensed work must be open source".

            Edit: Plus the license would not be "GPL+restriction" but a new license altogether, which includes the requirements for models to be open.

            • amszmidt 5 hours ago

              That is not really correct: the GNU GPL doesn't have any terms whatsoever on how you can use or modify the program. You're free to make a GNU GPL program do anything (i.e., use it for any purpose).

              I suggest a careful reading of the GNU GPL, or the definition of Free Software, where this is carefully explained.

              • Orygin 5 hours ago

                > You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:

                "A work based on the program" can be defined to include AI models (just define it, it's your contract). "All of these conditions" can include conveying the AI model in an open source license.

                I'm not restricting your ability to use the program/code to train an AI. I'm imposing conditions (the same as the GPL does for code) onto the AI model that is derivative of the licensed code.

                Edit: I know it may not be the best section (the one after regarding non-source forms could be better) but in spirit, it's exactly the same imo as GPL forcing you to keep the GPL license on the work

                • amszmidt 5 hours ago

                  I think maybe you're mixing up distribution and running a program, at least taking your initial comment into account, "if you train/run/use a model, it must be open source".

                  • Orygin 5 hours ago

                    I should have been more precise: "If you train and distribute an AI model on this work, it must use the same license as the work".

                    Using AGPL as the base instead of GPL (where network access is distribution), any user of the software will have the rights to the source code of the AI model and weights.

                    My goal is not to impose more restrictions to the AI maker, but to guarantee rights to the user of software that was trained on my open source code.

    • amszmidt 6 hours ago

      It isn't that difficult: a license that forbids certain uses of the program is a non-free software license.

      "The freedom to run the program as you wish, for any purpose (freedom 0)."

      • Orygin 6 hours ago

        Yet the GPL imposes requirements for me and we consider it free software.

        You are still free to train on the licensed work, BUT you must meet the requirements (just like the GPL), which would include making the model open source/weight.

      • helterskelter 6 hours ago

        Running the program and analyzing the source code are two different things...?

        • amszmidt 5 hours ago

          In the context of Free Software, yes. Freedom one is about the right to study a program.

      • LtWorf 2 hours ago

        But training an AI on a text is not running it.

        • tpmoney an hour ago

          And distributing an AI model trained on that text is neither distributing the work nor a modification of the work, so the GPL (or other) license terms don't apply. As it stands, the courts have found training an AI model to be a sufficiently transformative action and fair use which means the resulting output of that training is not a "copy" for the terms of copyright law.

    • tomrod 6 hours ago

      Model weights, source, and output.

  • scotty79 6 hours ago

    We need a ruling that LLM generated code enters public domain automatically and can't be covered by any license.

    • raincole 2 hours ago

      It's more or less already the case though. Pure AI-generated works without human touches are not copyrightable.

      • LtWorf 2 hours ago

        We need it to be infecting the rest like GPL does.

        • raincole an hour ago

          You probably misunderstood how "infection" of GPL works. (which is very common)

          If your closed-source project uses some GPL code, it doesn't automatically put your whole project in the public domain or under the GPL. It just means you're infringing the rights of the code author and they can sue you (for money and to make you stop using their code, not to make your whole project GPL).

          In the simplest terms, GPL is:

              if codebase.is_gpl_compatible:
                  gpl_code.give_permission(codebase)
              elif codebase.is_using(gpl_code):
                  raise COPYRIGHT_INFRINGEMENT  # the copyright owner and the court deal with that under the usual copyright laws
          
          GPL can't do much more than that. A license over a piece of code cannot automatically change the copyright status of another piece of code. There simply isn't legal framework for that.

          Similarly, AI code's copyleft status can't affect the rest of the codebase, unless we make new laws specifically for that.

    • palata 6 hours ago

      But then we would need a way to prove that some code was LLM generated, right?

      Like if I copy-paste GPL-licenced code, the way you realise that I copy-pasted it is because 1) you can see it and 2) the GPL-licenced code exists. But when code is LLM-generated, it is "new". If I claim I wrote it, how would you dispute that?

graemep 7 hours ago

The article repeatedly treats license and contract as though they are the same, even though the sidebar links to a post that discusses the difference.

A lot of it boils down to whether training an LLM is a breach of copyright of the training materials which is not specific to GPL or open source.

  • xgulfie 7 hours ago

    And the current norm that the trillion dollar companies have lobbied for is that you can train on copyrighted material all you want so that's the reality we are living in. Everything ever published is all theirs.

    • gruez 2 minutes ago

      >And the current norm that the trillion dollar companies have lobbied for is that you can train on copyrighted material all you want so that's the reality we are living in. Everything ever published is all theirs.

      What "lobbied"? Copyright law hasn't materially changed since AI got popular, so I'm not sure where these lobbying efforts are showing up in. If anything the companies that have lobbied hard in the past (eg. media companies) are opposed to the current status quo, which seems to favor AI companies.

    • graemep 6 hours ago

      I am really surprised that media businesses, which are extremely influential around the world, have not pushed back against this more. I wonder whether they see the cost savings they will get from the technology as a worthwhile trade-off.

      • mr_toad an hour ago

        Several media companies have sued OpenAI already. So far, none have been successful.

      • gorbachev 6 hours ago

        They're busy trying to profit from it by rushing to enter into licensing agreements with the LLM vendors.

        • xgulfie 5 hours ago

          Yeah, the short term win is to enter a licensing agreement so you get some cash for a couple years, meanwhile pray someone else with more money fights the legal battle to try and set a precedent for you

    • rileymat2 6 hours ago

      All theirs, if they properly obtained the copy.

      This is a big difference that has already bitten them.

    • exasperaited 6 hours ago

      In practice it wouldn't matter a whit if they lobbied for it or not.

      Lobbying is for people trying to stop them; externalities are for the little people.

  • maxloh 7 hours ago

    To my understanding, if the material is publicly available or obtained legally (i.e., not pirated), then training a model with it falls under fair use.

    Once training is established as fair use, it doesn't really matter if the license is MIT, GPL, or a proprietary one.

    • blibble 7 hours ago

      fair use only applies in the United States (and Poland, and a very limited set of others)

      https://en.wikipedia.org/wiki/Fair_use#/media/File:Fair_use_...

      and it is certainly not part of the Berne Convention

      in almost every country in the world even timeshifting using your VCR and ripping your own CDs is copyright infringement

    • mongol 6 hours ago

      > To my understanding, if the material is publicly available or obtained legally (i.e., not pirated), then training a model with it falls under fair use.

      Is this legally settled?

      • 1gn15 5 hours ago

        Yes. There have been multiple court cases affirming fair use.

    • graemep 6 hours ago

      That is just the sort of point I am trying to make. That is a copyright law issue, not a contractual one. If the GPL is a contract then you are in breach of contract regardless of fair use or equivalents.

  • OneDeuxTriSeiGo 7 hours ago

    It's not specific to open source but it's most clearly enforceable with open source as there will be many contributors from many jurisdictions with the one unifying factor being they all made their copyright available under the same license terms.

    With proprietary or, more importantly, single-owner code, it's far easier for this to end up in a settlement rather than being dragged out into an actual ruling, enforcement action, and establishment of precedent.

    That's the key detail. It's not specific to GPL or open source, but if you want to see these orgs held to account and some precedent established, focusing on GPL and FOSS licensed code is the clearest path to that.

  • kronicum2025 6 hours ago

    A GPL license is a contract in most other countries. Just not US probably.

    • graemep 6 hours ago

      That part of the article is about US cases, so it's US law that applies.

      > A GPL license is a contract in most other countries. Just not US probably.

      Not just the US. It may vary with the version of the GPL too. Wikipedia claims it's a civil law vs common law country difference - not sure the citation shows that though.

ljlolel 7 hours ago

And then also to all code made from the GPL’d ai model?

  • maxloh 6 hours ago

    A program's output is likely not owned by the program's authors. For example, if you create a document with Microsoft Word, you are the one who owns it, not Microsoft.

    • LtWorf an hour ago

      If I take a song and convert it from .mp3 to .ogg, the resulting file has no copyright since it's the output of a program?

    • javcasas 6 hours ago

      You sure about that? Have you checked the 400-page EULA?

    • pessimizer 6 hours ago

      Unless the license says otherwise. The fact that Word doesn't (I wouldn't even be sure if that was true, honestly, especially for the online versions) doesn't mean anything.

      They could start selling a version of Word tomorrow that gives them the right to train from everything you type on your entire computer into any program. Or that requires you to relinquish your rights to your writing and to license it back from Microsoft, and to only be able to dispute this through arbitration. They could add a morals clause.

pessimizer 6 hours ago

I might be crazy, and I'd love to hear from somebody who knows about this, but I've been assuming that AI companies have been pulling GPL code out of the training material specifically to avoid this.

Corporations have always talked about the virality of the GPL, sometimes but not always to the point of exaggeration. You'd think that after getting the proof of concept done, the AI companies would be running away at full speed from setting a bomb like that in their goldmine.

Putting in tons of commonly read books and scientific papers is safer; they can just eventually cross-license with the massive conglomerates that own everything. But the GPL is by nature hostile, and has been openly and specifically hostile from the beginning. With MIT and Apache, etc., you can just include a fistful of licenses to download, or even come up with architectures that track names to add for attribution-ware. But the GPL will obviously (and legitimately) claim to have relicensed the entire model and maybe all its output (unless they restricted it to LGPL).

Wouldn't you just pull it out?

  • NateEag 6 hours ago

    If you were a thoughtful, careful, law-abiding business, yes.

    I submit the evidence suggests the genAI companies have none of those attributes.

  • ares623 an hour ago

    Why do hard thing when easy thing do trick?

  • NiloCK 6 hours ago

    Not crazy - there's a rational self-interest in doing this.

    But I'm not certain that the relevant players have the same consequence-fearing mindset that you do, and to be honest they're probably right. The theft is too great to calculate the consequences, and by the time it's settled, what are you gonna do - turn off Forster's machine?

    I hope you're right in at least some cases!

    • pessimizer 6 hours ago

      > by the time it's settled

      Why would the GPL settle? Even more, who is authorized to settle for every author who used the GPL? If the courts decided in favor of the GPL, which I think would be likely just because of the age and pervasiveness of the GPL, they'd actually have to lobby Congress to write an exception to copyright rules for AI.

      A large part of the infrastructure of the world is built on the GPL, and the people who wrote it were clearly motivated by the protection that they thought that the GPL would give to what was often a charitable act, or even an act that would allow companies to share code without having to compete with themselves. I can't imagine too many judges just going "nope."

      • hananova 4 hours ago

        I think they meant "settled" as in "resolved."

        • pessimizer 3 hours ago

          I meant the same. I don't actually think that the GPL is an entity that can settle a court case; if I meant that I would have said the FSF or something. I mean that in order for it to resolve, a judge has to say that the GPL does not apply.

          If ultimately copyright holds up against the models*, the GPL will be a permanent holdout against any intellectual property-wide cross-licensing scheme. There's nobody to negotiate with other than the license itself, and it's not going to say anything it hasn't said before.

          * It hasn't done well so far, but Obama didn't appoint any SCOTUS judges so maybe the public has a chance against the corporations there.

  • exasperaited 6 hours ago

    > I might be crazy, and I'd love to hear from somebody who knows about this, but I've been assuming that AI companies have been pulling GPL code out of the training material specifically to avoid this.

    Haha no.

    https://windsurf.com/blog/copilot-trains-on-gpl-codeium-does...

    And just in the last two days, AI generating LGPL headers (which it could not do if identifying LGPL code was pulled from the codebase) and misattributing authors:

    https://devclass.com/2025/11/27/ocaml-maintainers-reject-mas...

    • pessimizer 3 hours ago

      Thanks for the links.

      That first link shows people actively pulling out GPL code in 2023 and marketing around that fact, though. That's not great evidence that they're not doing it now, especially if testing whether GPL code is still in there is as easy as prompting with an incomplete piece of it.

      I'd think that companies could amass a collection of all known GPL code and test for it regularly in order to refine their methods for keeping it out.
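      A rough sketch of what such a check might look like, assuming an OpenAI-compatible chat completions client; the model name, file name, and similarity cutoff here are purely illustrative:

          from difflib import SequenceMatcher
          from openai import OpenAI  # assumes the openai>=1.0 client; any chat-completions API works

          client = OpenAI()

          def leaks_verbatim(gpl_source: str, model: str = "some-model", cutoff: float = 0.9) -> bool:
              # Prompt with the first half of a known GPL file and check whether the model's
              # continuation reproduces the real second half nearly verbatim.
              half = len(gpl_source) // 2
              head, tail = gpl_source[:half], gpl_source[half:]
              resp = client.chat.completions.create(
                  model=model,
                  messages=[{"role": "user", "content": "Continue this file exactly:\n\n" + head}],
              )
              continuation = resp.choices[0].message.content or ""
              return SequenceMatcher(None, continuation[: len(tail)], tail).ratio() > cutoff

          print(leaks_verbatim(open("known_gpl_file.c").read()))

      It's crude (memorization isn't all-or-nothing), but run across a corpus of known GPL files it would at least flag the obvious verbatim cases.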

      > (which it could not do if identifying LGPL code was pulled from the codebase)

      Are you sure about this? Linking to LGPL code is fine afaik. And why not train on code that links to universally available libraries that are legal to use? Seems like one might even prefer it.

      Seems like this was rejected for size and slop reasons, not licensing. If the submitter of the PR isn't even fixing possibly hallucinated author names, it's obvious that they didn't really read it. Debugging vibe-coded stuff is like finding an indeterminate number of needles in a haystack.

rvnx 7 hours ago

GPL and copyright in general don't apply to billionaires, so pretty much a non-topic.

It's just a side cost of doing business, because asking for forgiveness is cheaper and faster than asking for permission.

  • throwaway198846 7 hours ago

    "Information wants to be free"? Many individuals pirated movies and games and got away with it. Of course two wrongs don't make a right and all that. Nonetheless one should be compensated for creating material that ai trained on for the same reasons copyright is compensated - to incentives people to produce it.

  • rando77 6 hours ago

    With an attitude like that they don't

simgt 7 hours ago

What triggers me is how insistent Claude Code is on adding "co-authored by Claude" in commits, in spite of my settings and an instruction in CLAUDE.md. I wish all these tech bros were as willing to credit the human shoulders on which their products are built. But they'd be much less successful in our current system if they were that kind of people.

  • euazOn 7 hours ago

    Try changing the system prompt or switch to opencode [0] - they allegedly reverse engineered Claude Code, and so the performance you get with Claude models should be very similar to Claude Code.

    [0] https://github.com/sst/opencode

dmezzetti 6 hours ago

As someone who has spent a fair amount of time developing open source software, I will say I genuinely dislike copyleft and GPL.

For those who are into freedom, I don't see how dictating how you use what you build in such a manner is in the spirit of free and open.

Just my opinion on it, to each their own on the matter.

  • hgs3 3 hours ago

    Copyleft isn't about the software author's freedom, it's about the end-user's freedom. Copyleft grants the end-user the freedom to study and modify the code, i.e. the right to repair. Contrast this with closed-source software, which may incorporate permissively licensed code: the end-user has no right to study, no right to modify, and no right to repair. Ergo less freedom.

    • dmezzetti 2 hours ago

      I think it makes a lot of sense for hobby software and non-commercial software. It's just tough to do in a commercial setting for a number of reasons.

      So ultimately while good intentioned, you end up limiting how many people can use what you've built.

  • myrmidon 6 hours ago

    I had a very similar view once, and have since understood that this is mainly a difference in perspective:

    It's easy as a developer to slip into a role where you want to build/package (maybe sell) some software product with minimal obligations. BSD-likes are obviously great there.

    But the GPL follows a different perspective: it tries to make sure that every user of any software product is always capable of tinkering with and changing it themselves, and the more permissive licenses do not help there because they don't prevent (or even discourage!) companies from just selling you stripped and obfuscated binary blobs that put you fully at the vendor's mercy.

    • dmezzetti 6 hours ago

      I understand people want to control what happens once they build something. Too often you see startups go with a permissive model only to switch to a more restrictive one once something like that happens. Then it ends up upsetting a lot of people.

      I'm of the opinion that what I build, I'm willing to share it and let others use it as they see fit even if it's not to my advantage.

      • myrmidon 6 hours ago

        I think the GPL mainly suffers with startups because it makes monetization pretty difficult. Some "commercial" uses of it are also giving it somewhat of an undeserved bad taste (when companies use it to benefit from free contributions while preventing competitors from getting any use out of it).

        My view is that every project and library where I can peruse the source is a gift/privilege. GPL restrictions I view as a small price to "pay it forward", and to keep that privilege for all wherever possible.

        • dmezzetti 5 hours ago

          Fair enough. You'd like to hope that there is a voluntary "pay it back and forward" mentality. But I understand that is a leap of faith with a lot of blind trust.

  • amenhotep 6 hours ago

    It's not dictating how you use what you build? It's dictating how you redistribute what you build on top of other people's work.

    • dmezzetti 6 hours ago

      Ok but I just have no interest in imposing restrictions on how people distribute what I build in such a manner either. That's just me.

      • mr_toad 38 minutes ago

        What if they impose their own restrictions on people further down the line?

  • gavinhoward 2 hours ago
    • em-bee 32 minutes ago

      just a comment on this article, that may be unrelated to the point you want to make: gavin makes a fatal mistake in interpreting RMS intent. he claims that he only wanted control over his hardware. that is not true. he also wanted the right to share his code with others. the person who had the code for his printer was not allowed to share that code. RMS wanted to ensure that the person who has the code is also allowed to share it. source available does not do that.

  • cdelsolar 6 hours ago

    I disagree as someone who has also spent a huge amount of time on open source software. It’s all GPL or AGPL :)

    • dmezzetti 6 hours ago

      That's your prerogative. It's just not for me and GPL is basically something I avoid when possible.

  • pessimizer 6 hours ago

    As somebody who thinks that people currently own the code that they write, I wonder why you're getting into the business of people who want to write GPL'd software.

    Are you complaining about proprietary software? I hear the restrictions are a lot tighter for Photoshop's source code, or iOS's, but for some reason you are one of the people who hate GPL as a hobby. Please don't show up whining about "spirits" when Amazon puts you out of business.

    • dmezzetti 6 hours ago

      I'm not in anyone's business just sharing my opinion on GPL. I understand why people go GPL / AGPL just not for me. To each their own if they want to go down that path.

pclmulqdq 7 hours ago

I thought the whole concept of a viral license was legally questionable to begin with. There haven't been cases about this, as far as I know, and GPL virality enforcement has just been done by the community.

  • omnicognate 7 hours ago

    The GPL was tested in court as early as 2006 [1] and plenty of times since. There are no serious doubts about its enforceability.

    [1] https://www.fsf.org/news/wallace-vs-fsf

    • pclmulqdq 6 hours ago

      That case has little to do with the license itself and nothing to do with its virality.

      • omnicognate 6 hours ago

        As I said, that was merely the first of many. And there is no such thing as "virality" - see my answer to the sibling to your comment.

        The "enforceability" of the GPL was never in any doubt because it's not a contract and doesn't need to be "enforced". The license grants you freedoms you otherwise may not have under copyright. It doesn't deny you any freedoms you would otherwise have, and it cannot do so because it is not a contract. If the terms of the GPL don't apply to your use then all you have is the normal freedoms under copyright law, which may prohibit it. If so, any "enforcement" isn't enforcement of the GPL. It's enforcement of copyright, and there's certainly no doubt on the enforceability of that.

        For the GPL to "fail" in court it would have be found to effectively grant greater freedoms than it was designed to do (or less, resulting in some use not being allowed when it should be, but that's not the sort of case being considered here). It doesn't, and it has repeatedly stood up in court as not granting additional freedoms than were intended.

        • pclmulqdq 4 hours ago

          Look at the "many" if you want to cite better cases about this.

    • zamadatix 7 hours ago

      I know it's not popular on HN to have anything but supportive statements around GPL, and I'm a big GPL supporter myself, but there is nuance in what is being said here.

      That case was important, but it's not about the virality. There have been no concluded court cases involving the virality portion causing the rest of the code to also be GPL'd, but there are plenty involving enforcement of the GPL on the GPL code itself.

      The distinction is important because the article is about the virality causing the whole LLM model to be GPL'd, not just about the GPL'd code itself.

      I'd like to think it wouldn't be a problem to enforce, but I've also never seen a court ruling truly about the virality portion to back that up either - which is all GP is saying.

      • omnicognate 6 hours ago

        There is no "virality", and the article's use of "propagation" to mean the same thing is wrong. The GPL doesn't "cause" anything to be GPLed that hasn't been explicitly licensed under the GPL by the owner of its copyright. The GPL grants a license to use the copyright material to which it applies. To satisfy the terms of that license for a particular use may require that you license other code under the GPL, but if you don't the GPL can't magically make that code GPLed. You will, however, not be covered by the license so unless your use is permitted for some other reason (eg. fair use or a different license you have been granted) your use of the the original code will be a violation of copyright. All of this has been repeatedly tested in court.

        It's sad to see Microsoft's FUD still festering 20 years later.

        • zamadatix 3 hours ago

          Virality is a very good feature of the GPL and part of what makes it a meaningfully different choice from other open licenses; I don't know why you want to attribute that to Microsoft of all places.

          • omnicognate 3 hours ago

            A key pillar of Microsoft's FUD campaign against open source was that if you use GPL software you run the risk of inadvertently including some of it in your proprietary software and accidentally causing the whole thing to suddenly become open source against your horrified company's wishes. It was a lie then and it's a lie now. The comment I was replying to (along with many others on this post) indicates the brainworm lives on.

            The difference between a license and a contract may be too subtle for the denizens of HN to grasp in 2025 but I assure you it's not lost on the legal system. It's not lost on those of us who followed groklaw back in the day, either. Sad we have to live with an internet devoid of such joys now.

            • zamadatix 11 minutes ago

              Another key pillar of Microsoft's FUD campaign was you have to open source any code changes you write to a GPL codebase. That doesn't make that feature of GPL a fallacy others must be too stupid to understand, it just means Microsoft was trying to make the promises of GPL seem bad when they were good. I.e. what Microsoft tried to scare people with is irrelevant to a discussion about what's in the GPL itself. Ironically, it's more akin to FUD than anything else in this conversation.

              I do miss groklaw, been far too long for something like that to appear again.

        • pessimizer 6 hours ago

          It's not Microsoft FUD; you're describing the license as viral too, just playing with words. The fact is that if you include GPL'd stuff in your stuff, that assemblage has to conform to the GPL's rules.

          You're basically saying "the GPL doesn't go back in time and relicense unrelated code." But nobody was ever claiming it does, and describing it as "viral" doesn't imply that it does. It's "viral" because code that you stick to it has to conform to its rules. It's good that the GPL is viral. I want it to be viral, I don't want people to be able to hide GPL'd code in a proprietary structure.

          • omnicognate 5 hours ago

            It's not just words, except to the extent the law is just words. You said there haven't been any cases involving the "virality portion" but there have. Just not under the "GPL makes other code GPLed" interpretation, because that, as we clearly agree, doesn't exist.

            What you're calling the "virality portion" says that one of the ways you *are* allowed to use the code is as part of other GPLed software. If you're going to look for court cases that explicitly "involve" that, it would have to be someone either:

            * using it as a defense, i.e. saying "we're covered by the GPL because the software we embedded this code in is GPL" (That will probably never happen because people don't sue GPLed projects for containing GPLed code), or

            * coming into line with the GPL by open sourcing their own code as part of resolving a case (The BusyBox case [2] was an example of that).

            If you just want cases where companies that were distributing GPL code in closed source software were prevented from doing so, the Cisco [1] and BusyBox [2] cases were both notable examples. That they were settled doesn't somehow make them a weaker "test of the GPL" - rather the companies involved didn't even attempt to argue that what they were doing was permitted. They came into line and coughed up. If you really must insist on one where the defendant dug in and the court ended up awarding damages, I don't think there have been any in the US but there has been one in France [3].

            As for "nobody was ever claiming it does", the "viral" wording has been used for as long as the GPL has been around as a scare tactic for introducing exactly that erroneous idea. Even in cases where people understand what the license says, it leads to subtle misunderstandings of the law, which is why the Free Software Foundation discourages its use. (Also, you literally said, in these exact words, "the virality causing the whole LLM model to be GPL'd".)

            [1] https://en.wikipedia.org/wiki/Free_Software_Foundation,_Inc.....

            [2] https://en.wikipedia.org/wiki/BusyBox#GPL_lawsuits

            [3] https://www.dlapiper.com/en/insights/publications/2024/03/wa...

            • zamadatix 3 hours ago

              I do greatly appreciate you talking about cases instead of leaving it at saying that part of the license doesn't exist and calling any discussion about it FUD.

              The Cisco case was about distributing GPL binaries, not about linking GPL code with the rest of a code base and that code base then needing to be GPL'd. It's standard license enforcement, unrelated to the unique requirements of the GPL.

              The BusyBox case is probably the closest in the list, but as you already point out we didn't get a precedent-setting ruling, only a settlement. It seems obvious what the ruling would have been (to me at least), but a settlement is explicitly not what is being talked about.

              As for the French courts, they issued fines - they didn't issue the type of order this article talks about, which is releasing the entire work involved under the GPL.

              This isn't related to fear, uncertainty, or doubt about the GPL. It's related to what has and hasn't already been ruled in the court systems that have handled this license, which the article skips past a bit. Even if we assume the courts will rule the way that seems obvious (to me at least), it makes a tangible difference in how these cases will be run, the assumptions they will start from, and how long they will last.

              • omnicognate an hour ago

                TBC, I'm not talking about the article, which I've barely read but looks rather misguided as it seems to be talking about LLMs having to be GPLed because of training data, which is not something that would ever happen.

                It has never been the case that including GPL code in your software automatically makes your software GPL or even requires you to make it GPL. If you do get sued because you are distributing GPL code in a way that colloquially "violates the GPL" (technically, rather, in a way that is not covered by the GPL or by fair use or any other licence, so it violates copyright) you might choose to GPL your code as a way of coming into compliance, but doing so is neither the only way to achieve compliance (you can instead remove the GPL code, and companies with significant investments in their proprietary code typically do that), nor a remedy for the harm done by your copyright violation to date, which you will typically have to remedy financially, via damages or a settlement.

                As for legal testing, you seem to want a court to explicitly adjudicate against something so obviously wrong that in well over 20 years of FSF enforcement (edit: actually around 40 years) no company has been daft enough to try and argue it in court.

                It might help if you try and delineate exactly what sort of case you'd accept as proof of "enforceability" of "virality". I think it would have to be something like a company embedding GPL code in proprietary code and then trying to argue in court that doing so is explicitly permitted by the GPL, and sticking to their guns all the way to a verdict against them. I'm not sure whether that argument would be considered frivolous enough to get the lawyers involved censured, but I certainly doubt a judge would be impressed.

                If it helps make it any clearer, if in defending against a case like this your lawyer were to try and argue that the GPL is invalid and somehow just void, you should fire them immediately because they're trying to do the legal equivalent of shooting their own feet off. The GPL is what allows distribution of code, and allowing things is all it can do, because it is a license (not a contract). It can't forbid anything, and removing it from the equation can only decrease the set of things you are allowed to do with the copyrighted code.

  • CamouflagedKiwi 7 hours ago

    There have been a number of cases, which are linked from Wikipedia (https://en.wikipedia.org/wiki/GNU_General_Public_License#Leg...) - most recently Entr’Ouvert v. Orange had a strong judgement (under French law) in favour of the GPL.

    Conversely, to my knowledge there has been no court decision that indicates that the GPL is _not_ enforceable. I think you might want to be more familiar with the area before you decide if it's legally questionable or not.

    • pclmulqdq 6 hours ago

      I'm not suggesting that you avoid following it. I'm just not that convinced it's enforceable in the US. The French ruling is good, though.

  • iso1631 7 hours ago

    If you don't like the license, then don't accept it.

    You are then restricted by copyright just like with any other creation.

    If I include the source code of Windows into my product, I can't simply choose to re-license it to say public domain and give it to someone else, the license that I have from Microsoft to allow me to use their code won't let me - it provides restrictions. It's just as "viral" as the GPL.

    • pclmulqdq 6 hours ago

      I like the GPL. I just don't know how much you can actually enforce it.

      Also, "don't use my code" is not viral. If you break the MSFT license, you pay them, which is a very well-tested path in courts. The idea of forced public disclosure does not seem to be.

      • iso1631 5 hours ago

        How much do you pay them?

        If the GPL license didn't exist, and instead you were just relying on copyright, then that's an injunction. You have to stop using the code you "stole" and pay reparations.

        In UK law, if you distribute copyright material in the course of a business you can be facing 10 years in prison and an unlimited fine.

        Sure, you can't force them to agree to the GPL; they could simply stop distributing and then turn up for their stint in prison and their massive fine. In reality I suspect they would take the easy way out and comply with the license.

        • pclmulqdq 4 hours ago

          You pay them an amount determined by the court or your settlement, and you also have to stop using the code. This is how everything works.

          Corporations can't go to prison.

uyzstvqs 5 hours ago

Training is not redistribution. It's the exact same as you as a person learning to program from proprietary secret code, and then writing your own original code independently. Even if you repeat patterns and methods you've picked up from that proprietary learning material, it is by no means redistribution. The practical differentiator here is that you do not access the proprietary material during the creation of your own original work, similar in principle to a clean-room design. With AI/ML, it matters that training data is not accessed during inference, which it's not.
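
A toy sketch of that last point, purely illustrative (the file name and array keys are made up, and a real model is vastly bigger): inference is a function of the prompt and the saved parameters alone, and never opens the training corpus.

    import numpy as np

    # Hypothetical artifact: only trained parameters are stored here; the
    # training corpus lives elsewhere and is never read below.
    params = np.load("model_weights.npz")
    W_embed, W_out = params["embed"], params["out"]

    def next_token_scores(prompt_ids):
        # Toy "model": average the prompt's embedding vectors and project
        # them onto vocabulary scores. Real models are far larger, but the
        # principle is the same: inference reads weights, not training data.
        h = W_embed[np.array(prompt_ids)].mean(axis=0)
        return h @ W_out

    print(int(next_token_scores([12, 7, 3]).argmax()))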

The other factor of copyright, which is relevant, is how material is obtained. If the material is publicly accessible without protection, you have no reasonable expectation to exclusive control over its use. If you don't want AI training to be done on your work, you need to put access to it behind explicit authentication with a legally-binding user agreement prohibiting that use-case. Do note that this would cost your project its status as open-source.

  • ndiddy 5 hours ago

    > Training is not redistribution. It's the exact same as you as a person learning to program from proprietary secret code, and then writing your own original code independently.

    Well, the difference is that copyright law applies to work fixed in a tangible medium of expression. This covers e.g. model weights on a hard drive but not the human brain. If the model is able to reproduce others’ work verbatim (like the example the article brings up of the song lyrics) then under copyright law that’s unauthorized reproduction. It doesn’t matter that the data is expressed via probabilistic weights: due to past lobbying/lawsuits by the software industry to get compiled binary code covered by copyright, reproduction can include copies that aren’t directly human-readable.

    > If the material is publicly accessible without protection, you have no reasonable expectation to exclusive control over its use.

    There’s over 20 years of successful GPL infringement lawsuits over unlicensed use of publicly available GPL code that disagrees with this point.

  • luqtas 5 hours ago

    so basically we download the source files into the training weights and remove the LICENSE.MD, as it's exactly the same as a person learning to program from proprietary secret code and outputting code based on that for millions of people in a matter of seconds /s

    we also treat public goods found on the internet however we want, as if the World Intellectual Property Organization Copyright Treaty and the Berne Convention for the Protection of Literary and Artistic Works aren't real, or because we can, since we're operating in international waters, selling products to other sailors living exclusively in international waters /s

    • tpmoney 33 minutes ago

      If you download GPL source code and run `wc` on its files and distribute the output of that, is that a violation of copyright and the GPL? What if you do that for every GPL program on github? What if you use python and numpy and generate a list of every word or symbol used in those programs and how frequently they appear? What if you generate the same frequency data, but also add a weighting by what the previous symbol or word was? What if you did that and also added a weighting by what the next symbol or word was? How many statistical analyses of the code files do you need to bundle together before it becomes copyright infringement?
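
      For concreteness, here's a rough sketch of that ladder (Python; the corpus directory name is made up, and this is just the thought experiment in code, not anyone's actual pipeline), going from the `wc`-style totals to counts conditioned on the previous symbol:

          import re
          from collections import Counter, defaultdict
          from pathlib import Path

          # Hypothetical corpus: a local directory of GPL-licensed C files.
          files = list(Path("gpl_corpus").rglob("*.c"))
          tokens = []
          for f in files:
              # Crude tokenizer: identifiers/keywords plus single symbols.
              tokens += re.findall(r"[A-Za-z_]\w*|[^\sA-Za-z_]",
                                   f.read_text(errors="ignore"))

          # Step 1: the `wc`-style summary.
          print(len(files), "files,", len(tokens), "tokens")

          # Step 2: a plain frequency table of every word/symbol.
          unigrams = Counter(tokens)

          # Step 3: the same counts, conditioned on the previous symbol.
          bigrams = defaultdict(Counter)
          for prev, cur in zip(tokens, tokens[1:]):
              bigrams[prev][cur] += 1

          print(unigrams.most_common(5))
          print(bigrams["int"].most_common(5))

      Each step is still just statistics over the same files; the question above is at what point along that ladder the aggregate stops being an analysis and starts being a copy.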