If it’s flagged as “assisted by <LLM>”, then it’s easy to identify where that code came from. If a commercial LLM is trained on proprietary code, that’s on the AI company, not on the developer who used the LLM to write code, unless they can somehow prove that the developer had access to said proprietary code and personally exploited it.
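To illustrate what I mean by flagging: it could be as simple as a commit trailer (the “Assisted-by” tag here is hypothetical, not an established convention):

    # Record LLM assistance in the commit metadata so provenance can be
    # found later, e.g. with `git log --grep="Assisted-by"`.
    # The --trailer option needs Git 2.32 or newer.
    git commit -m "Add retry logic to the sync job" \
        --trailer "Assisted-by: <LLM name and version>"

Anyone auditing the history can then tell exactly which changes involved an LLM.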
If AI companies are claiming “fair use,” and it holds up in court, then there’s no way in hell open-source developers should be held accountable when closed-source snippets magically appear in AI-assisted code.
Granted, I am not a lawyer, and this is not legal advice. I think it’s better to avoid using AI-written code in general: at most, use it to generate boilerplate, or maybe as an extra layer in security audits (not as a replacement for what’s already being done).
But if an LLM regurgitates closed-source code from its training data, I just can’t see how that would be the developer’s fault…
Pretty convenient.
This is how copyleft code gets laundered into closed source programs.
All part of the plan.
How would they launder it? Just declare it their own property because a few lines of code look similar, when there’s no established connection between the developers and anyone with access to the closed-source code?
That makes no sense. Please tell me that wouldn’t hold up in court.
I believe what they’re referring to is training models on open-source code, which is then used to generate closed-source code.
The break in connection you mention means it isn’t legally infringement, but now code derived from open source ends up closed source.
Because the situation is legally untested, it’s unclear how it would unfold; it would likely hinge on how the request to the LLM was formed.
We have similar precedent with reverse engineering (clean-room reimplementation), but a non-sentient tool doing the work complicates things.
That makes sense. I see the problem with that, and I don’t have a good solution for it. It is a divergence from the topic, though, as we were discussing open-source programmers using LLMs that are potentially trained on closed-source code.
Training LLMs on open-source code is worth its own discussion, but I don’t see how it fits in this thread: the post isn’t about closed-source programmers using LLMs.
Besides, closed-source code developers could’ve been stealing open-source code all along. They don’t really need AI to do that.
Still, training LLMs on open-source code is a questionable practice for that reason, particularly when it comes to training commercial models on GPL code. But it’s probably hard to prove what code was used in their datasets, since the datasets themselves aren’t public.
I don’t really see it as a divergence from the topic, since it’s the other side of a developer not being responsible for the code the LLM produces, like you were saying.
In any case, it’s not like conversations can’t drift to adjacent topics.
Besides, closed-source code developers could’ve been stealing open-source code all along. They don’t really need AI to do that.
Yes, but that’s the point of laundering something. Before, if you put FOSS code in your commercial product, a human could be deposed in the lawsuit and make the copying public, and then there would be consequences. Now you can do it openly and point at the LLM.
People don’t launder money so they can spend it; they launder money so they can spend it openly.
Regardless, it wasn’t even my comment; I just understood what they were saying, and I’ve already replied way out of proportion to how invested I am in the topic.
Conversations can drift to adjacent topics, yeah, but it’s not a “gotcha” when someone suddenly changes the topic to the inverse of what was being said and then acts like they’re arguing against you because your point about the original topic doesn’t hold up under the new one.
If you change the topic, you need to at least give the other person a chance to respond to it, not just assume their original argument applies.
Alright. I didn’t see any gotchas or argument, and didn’t make the comment.
That being said, reading the context I assume you’re referring to, it hardly reads as anything more than a discussion of the implications of the idea you shared.
Disagreeing because applying the argument consistently results in an undesirable outcome isn’t objectionable.
Disagreeing because applying the argument consistently results in an undesirable outcome isn’t objectionable.
I’m not objecting to disagreement; I’m objecting to the attempt to apply my argument to a different situation it wasn’t meant for, and then carrying on as if that’s even remotely what I was saying.
That’s not “applying the argument consistently”; it’s removing context, overgeneralizing the argument, and attacking a strawman built from a twisted version of it.
Open-source developers using AI trained on closed-source code and closed-source developers using AI trained on open-source code are two different issues. My point was only intended to apply to the former, because that’s what we were talking about. Trying to apply what I said to the latter is a distortion of my argument, and not the argument I was making.
And to try to conflate the two is to be allergic to nuance, which is honestly just typical and unsurprising, but if that’s the case then I’m done wasting my time on this conversation.