Artists v. AI

Natasha Nel
2 replies
Gutenberg’s printing press v. scribes. Cars v. horses. Electricity v. gas. Artists v. AI.

Today I'm thinking about how much media attention writer and comedian Sarah Silverman has generated by joining the growing network of artists striking, speaking out, and filing copyright infringement lawsuits against companies like OpenAI and Meta, whose LLMs were trained on the artists' content without consent or compensation.

Historically, this sobering “Trough of Disillusionment” phase of Gartner’s hype cycle has been followed by the "Slope of Enlightenment" and "Plateau of Productivity" stages — both characterized by industry maturity, meaningfully useful applications, and the development of a symbiotic relationship between human and machine.

What do you think? How does the AI element of the writers' and actors' strike make you feel?

Replies

Anatolii [ @Bootstraptor ]
Proven.ly Social Proof & FOMO Builder
Discussing Gartner's hype cycle, we might be in the "Trough of Disillusionment" phase right now, especially considering the controversies and debates around AI. What's up ahead is hazy to predict, but one would hope we are heading toward the "Slope of Enlightenment." This is when we start closing the loopholes, streamline the regulations, and make sure those whose creative work has been used to train AI get their fair share of acknowledgment and compensation.

Finally, we'd reach the "Plateau of Productivity," where AI becomes a seamless, genuinely helpful part of our lives. A step in that direction could be designing AI systems capable of crafting original work instead of just echoing the patterns they have learned from existing human content.

This brings an interesting point to the forefront: AI doesn't function like prior technologies. It doesn't really produce content the way humans do — it mimics patterns picked up from its training data. That poses some intricate questions about who gets credit for what the AI creates, questions we didn't have to grapple with when dealing with older technologies.
Michael Silber
Honestly, it feels kinda gross. Studios and other folks who control the distribution are asking creatives to just "trust them" while they protect their own IP by stealing artists' future ability to make a living.

I don't quite understand why, once an LLM hoovers up material, that material can't be removed and the model retrained so the data is no longer available. Maybe someone with a bit more insight can shed light on that here. While people who self-host an LLM can always add content they don't hold the copyright to, it seems pretty reasonable to hold the OpenAIs, Googles, and Metas of the world accountable for removing it from their mass-market products.