My father got arrested and almost jailed for burning MP3 and audio discs for money. These people can scrape other people’s work for profit, and then get applauded by investors.
Piracy for me but not for thee.
I wonder if the next step will be allowing a human to copyright the output of generative AI models, only for these companies to make sure, through their terms of use, that they hold 50+% of the copyright and the profits if their slop is sold.
“A copyright strategy that promotes the freedom to learn” - computers have more rights than people at this point.
Fuck Sam Altman
I’d rather not, thanks.
Though I would enjoy watching him fellate a cactus.
I second that
Thirded
That’s what they call ‘stealing’ or ‘piracy’ when the serfs do it.
“Massive copyright infringement”, and it would mean massive fines or jail time for us.
Wow! I wish I could learn from copyrighted materials as freely as OpenAI!
I’ve been training my own mental model with TPB and a VPN for years. Thanks Facebook for showing me that it’s the legal way to do it!
Are you a legal person backed by mega corpo and venture capital?
Then stfu, Toby
This perfectly illustrates America’s two-tiered justice system: one for the wealthy and one for the little people. If I torrent copyrighted material, I risk fines or jail time. If a big corporation like Meta does it, then it’s allegedly “fair use”. To be clear, what OpenAI is requesting isn’t remotely close to the original intended purpose of fair use. The worst part is that small/independent creators will be (if they aren’t already) the most adversely impacted by such selective application of copyright law.
This is exactly the intended purpose of fair use. Look up the copyright clause.
Small creators are the biggest beneficiaries of this. They would have to pay the extortionate licensing fees.
It’s called stealing OpenAI. Call it what it is.
Let’s Eat, Grandma!
Let’s Eat Grandma!
The importance of the comma.
It’s not stealing when billionaires do it.
Just for humor’s sake I plugged in the proposal itself into chatgpt to have it give a summary on how it helps or hurts the average american – https://chatgpt.com/share/67d32e59-830c-800c-b9d6-c4abe50b37d4
The way I read that, even ChatGPT says it needs better safeguards.
Overall Verdict
This proposal prioritizes AI industry growth and national security over strong worker and IP protections. If implemented well, it could boost the economy, create jobs, and enhance innovation, but it needs stronger safeguards for workers and content creators to prevent exploitation. The copyright section is the most concerning—it seems to favor big AI firms over independent creators. The export control strategy could be effective in protecting national security but might hinder global AI collaboration.
That’s still more positive than summaries from Cohere, Qwen, Deepseek, FuseAI and Arcee 32B (the latter two being combinations of different models, it’s complicated) in my quick test.
…And I’d recommend them all, TBH. Use anything but ChatGPT for the same reasons you’d use Lemmy over Reddit.
Well yeah, I generally don’t use AI for much of anything, but in this case I used it specifically because it’s an opinion on something written by OpenAI, which makes its disapproval, coming from OpenAI’s own algorithm, more amusing.
Plus it’s funnier for them to have to debunk… is it better for them to argue “well, our AI sucks, don’t take its word for anything”, or admit the obvious: “you asked it for a view from the average person, not our profit margins; of course from that perspective our plan is bad”?
When Aaron Swartz was in trouble for downloading copyrighted articles, everyone was on the side of “copyright laws are dumb and need to be changed”.
Now it’s more popular to hate on AI, so people want to see strict adherence to copyright law.
It makes it seem like people lack real convictions on the issue and are just being led around by memes.
Copyright law is terribly implemented and needs to change. This isn’t new and doesn’t become less true because your favorite memes want you to dunk on AI.
Copyright laws are dumb and need to be changed.
And I also think that the laws that currently exist should apply to openAI. I don’t see any contradiction there.
The current system where regular people can get screwed over for torrenting movies but techno-oligarch wannabes are free to ignore the law is the absolute worst of both worlds.
To be clear, this isn’t a discussion about removing copyright laws. This is a discussion about specifically big data collecting tech companies being immune to the laws which still apply to everyone else.
I never suggested or implied that copyright laws need to be removed.
It does appear that OpenAI’s position is “copyright laws are dumb and need to be changed”, which, during the Aaron Swartz story, was the position of the community.
Now, since the entity involved is an AI company, we’re seeing people who’re on the side of using copyright laws to punish infringers because they don’t like AI.
Either copyright laws need to be changed or they don’t. Someone’s position on the topic shouldn’t change based on who is being negatively affected by said laws.
People are being cynical about the laws applying equally to Big Corps vs regular people.
If we make a special case for abolishing copyright if it means you’re training an AI model, does that mean that now everyone can download copyrighted material if they do some form of locally hosted training?
The answer will probably end up being: one rule for the corporations and another for individuals.
This is just them going for regulatory capture. Again. The “tiered” country system, the controls on model weights, centralizing regulation in Washington, focus on datacenter build out (instead of on device inference), and more, it’s all just a big middle finger to open, locally runnable weights without saying it.
And they’re trying to justify it with anti-China sentiment more than “safety” fearmongering this time, even though this would let China run circles around the US (in time, though not without OpenAI making a healthy profit first).
They want to own your access, not let you have it.
QwQ 32B did a decent job writing that out:
spoiler
OpenAI’s proposal contains elements that could inadvertently or intentionally hinder open-source/open-weights AI and smaller competitors, while also raising concerns about regulatory capture. Here’s a breakdown of key points:
1. Regulatory Strategy (Preemption of State Laws):
- Potential Issue: The proposal advocates federal preemption of state AI regulations to streamline compliance. While this could reduce fragmentation, it centralizes regulatory power, favoring larger companies with resources to engage in federal partnerships. Smaller players might struggle to meet federal standards or secure liability protections, creating an uneven playing field.
- Risk of Regulatory Capture: The “voluntary partnership” framework could become a de facto requirement for accessing government contracts or protections, disadvantaging competitors not in the loop. This risks entrenching OpenAI and similar firms as preferred partners, stifling innovation from leaner, open-source alternatives.
2. Export Controls (Tiered System):
- Open-Source Concerns: While targeting Chinese models, the proposal emphasizes promoting “American AI systems” globally. This could pressure countries to adopt closed-source U.S. models over open-source alternatives (e.g., DeepSeek’s R1, despite its flaws). The focus on “democratic AI” might conflate national allegiance with openness, sidelining projects that prioritize technical transparency over geopolitical alignment.
- Hardware Dependencies: Requirements for “hardware-enabled mechanisms” and restrictions on non-U.S. chips (e.g., Huawei) could lock AI development into proprietary ecosystems, disadvantaging open-source projects reliant on diverse or cost-effective hardware.
3. Copyright Strategy:
- Double-Edged Sword: OpenAI’s defense of fair use for training data aligns with its own needs but could backfire. If other countries adopt stricter copyright regimes (e.g., EU-style opt-outs), smaller players without OpenAI’s scale might struggle to access training data. Meanwhile, OpenAI’s push to “shape international policy” risks leveraging U.S. influence to globalize its preferred norms, potentially stifling innovations in jurisdictions with more restrictive rules.
- IP Protectionism: The emphasis on safeguarding U.S. IP could lead to data silos, making it harder for open-source projects to train on global datasets unless they comply with U.S.-centric frameworks.
4. Infrastructure Investments:
- Bias Toward Scale: Proposals like AI Economic Zones and tax incentives prioritize large-scale, centralized infrastructure (e.g., hyperscale data centers). This disadvantages smaller players or open-source initiatives requiring decentralized, modular setups. The “National Transmission Highway Act” could further entrench big players with resources to navigate streamlined permitting processes.
- Workforce Training: While beneficial overall, tying training to partnerships with AI companies might channel resources toward OpenAI’s ecosystem, sidelining open-source communities or academic projects.
5. Government Adoption:
- Contractual Barriers: Faster procurement for “frontier AI labs” and requirements for SOC 2/ISO certifications or facility clearances could exclude smaller firms lacking the resources or bureaucratic capacity to comply. The push for custom classified models might also favor established firms with existing security clearances.
- On-Device Inference Risks: The focus on centralized infrastructure and export controls on chips (e.g., banning Huawei) could limit access to hardware optimized for on-device processing, disadvantaging open-source projects relying on efficient, edge-based solutions.
6. Open Weights/Open Source Specific Risks:
- Tiered Export Controls: While aimed at China, the framework might inadvertently penalize open-source projects that depend on global collaboration or diverse hardware. For instance, restrictions on “Tier III” nations could block cross-border contributions to open-source models.
- Proprietary Ecosystem Push: The emphasis on “American rails” and closed partnerships could marginalize open-source efforts that resist geopolitical framing, even if technically superior.
Conclusion: Regulatory Capture Concerns
OpenAI’s proposals, while framed as pro-innovation, risk entrenching its own dominance and disadvantaging smaller, open-source competitors through:
- Centralized Regulation: Federal preemption and partnerships may favor large firms with resources to engage.
- Infrastructure Bias: Investments prioritize scale over flexibility, sidelining decentralized or cost-effective alternatives.
- Export Controls: Could lock AI into U.S.-centric ecosystems, limiting open-source interoperability.
- IP and Copyright Tactics: While defending fair use, they may push restrictive norms abroad, hindering smaller players.
Verdict: While OpenAI positions itself as advocating for “freedom,” the proposals contain structural biases that could stifle open-source/open-weights innovation and enable regulatory capture. The focus on national competition with China overshadows neutral, inclusive frameworks, raising questions about whether the plan prioritizes U.S. corporate leadership over democratizing AI.
And it was generated on my desktop. That I own, in my house, with the PC completely disconnected from the internet atm, with some settings and features OpenAI would never let me have.
Sure. Deregulate Copyright Law as a whole. I dare you Orange Meatloaf Matryoshka. I double dare you.
You’re an absolute buffoon if you think any Republican would remove the rules for everybody instead of just themselves.
Naive take… They will keep fucking us, and Sam Altman will get to use the copyrighted works of lesser people, but not Disney’s.