No Ethical Use of AI? [Part 1]
Pay no attention to those behind the curtain.
AI is a polarizing topic, but I aim to take a pragmatic and broad approach to unravelling its ethical implications.
Empire of AI
All the features of empire building are present in Big Tech corporations.
AI was not conjured into existence in Silicon Valley; its foundations were built through the exploitation of African data labelers and content moderators. They’re the real intelligence behind AI, yet they were paid pennies for it. These workers allege that the practices of Meta, OpenAI, and data provider Scale AI “amount to modern day slavery.”
“In dusty factories, cramped internet cafes and makeshift home offices around the world, millions of people sit at computers tediously labelling data.”
All their work, which forms the foundation of these “autonomous” models, goes uncredited and unmentioned in press releases.
They don’t want us thinking about the human cost when we’re using AI.
AI empires also displaced indigenous populations to mine minerals for computer hardware in places like the Atacama Desert. In the Congo, contractors use child slaves to dig for cobalt with their bare hands, earning less than $2 per day. You’re likely reading this on a device that contains the fruits of their labor.
Yet if you search “ethical AI use in education,” the top results, sourced from Ivy League alums and corporate media, make no mention of any of this, and the omission isn’t accidental.
Environmental Impact
While China sinks data centers into the ocean for cheap, environmentally friendly cooling and powers them with offshore wind turbines, the US builds them in sweltering deserts. It’s absurd.
Already, Texans are being urged to take shorter showers due to droughts and water use by AI data centers. The industrial noise these centers emit can travel for miles, annoying people in nearby towns.
Why build them there? Tax incentives.
Yet the climate cost of AI is often exaggerated compared to other digital habits. For example, streaming in 4K demands far more energy than chatting with ChatGPT. So there’s a great deal of hypocrisy in selectively boycotting AI while still using social media, playing games, and streaming movies.
Pragmatically, it’s all insignificant compared to the environmental harm caused by gas-powered cars, air travel, the heating and cooling of poorly insulated homes, and war.
Whenever environmental impact is analyzed in terms of consumer choices, temper the analysis with the fact that advertisers for BP (British Petroleum) coined the term “carbon footprint” to shift the burden of climate responsibility onto individuals and away from governments and corporations. The best thing any of us can do for the planet, as consumers, is stop existing. Hence, everyone who tries to live sustainably must inevitably make compromises.
The bright side? Hardware engineers continue to squeeze more computing out of every watt, and large AI models are facing stiff competition from smaller, more efficient ones.
Plagiarism
“I want AI to do my chores not make my art!”
In an ideal world, perhaps a servant class of AI robot butlers would’ve been created first to do all the hard jobs for humanity. However, the low-hanging fruit of profitable automation turned out to be mostly middle-class jobs, including some creative ones.
But AI was trained on copyrighted works of art. Was that stealing?
In re OpenAI ChatGPT Litigation (the consolidated case that combined Paul Tremblay v. OpenAI, Inc., Sarah Silverman v. OpenAI, Inc., and Chabon v. OpenAI under a single caption) will decide whether it was illegal for OpenAI to use copyrighted work as training data.
I expect that US courts will ultimately decide in favor of OpenAI.
Firstly, copyright law only applies to work authored by humans. The courts will not rule that AI developers and/or prompters are the legal authors of the art these programs generate.
Secondly, the fair use doctrine of US copyright law permits the use of copyrighted works for purposes such as criticism, comment, news reporting, teaching, scholarship, and research. OpenAI will argue that their models essentially did research on publicly available work, much as humans do when they go to a library or visit a museum.
Thirdly, AI art is sufficiently transformative. It’s technically incapable of reproducing the copyrighted art it was trained on, so it’s not a means by which its users can steal art either.
But even though such use will not legally be considered theft, that doesn’t make it morally correct. I don’t want to invalidate how awful it must feel for these artists to see AI produce a simulacrum of their unique artistic style without their permission.
While Google and other companies have retroactively enabled users to opt their art out of AI training, many argue it should’ve been an opt-in system from the start.
The Downfall of DeviantArt
DeviantArt was one of the earliest and most influential online communities for digital artists. During the 2000s it served as a crucial incubator, giving young creators a space to share work, experiment with styles, and receive feedback at a time when social media was still in its infancy. The platform helped popularize digital illustration, fan art, and online art collectives, fostering a generation of artists who would later go on to professional careers in fields like concept art, animation, comics, and game design.
For many, DeviantArt wasn’t just a gallery but a formative creative community that shaped the trajectory of internet art culture.
So DeviantArt’s abrupt rollout of its AI generator, DreamUp, enraged and alienated users. It was trained on the art users had shared on the site, and then used to compete against those same users in that site’s own marketplace!
This betrayal led to a mass exodus of users, and the site was quickly overrun by AI art, to the point that it’s a barely usable husk of its former self. Are there even any humans left? Who wants to pay for AI art anyway?
Personally, I have no interest in continuing to use the site.
As AI video generation improves, could this happen to YouTube and TikTok too? Is AI poised to colonize all of human culture?
Letting human and AI content mix on these platforms is a slippery slope.
What is AI Art?
Most online content creators are already struggling in a precarious gig economy, but now face competition from algorithms that can mass-produce work in seconds. Even if courts rule AI output is transformative, the economic impact is undeniable.
On one side, people are arguing that AI art is not even art. Human art reflects lived experience and intention, while AI produces statistical imitations without perspective. Marketing these outputs alongside human art erodes the distinction and devalues art. If your primary frame of reference is DeviantArt, it’s hard to argue against this.
But take a look at OpenAI’s Sora explore page, where AI enthusiasts share their images.
It’s undeniably cool to see AI democratizing image generation in this context. People without art degrees or expertise in expensive, subscription-based Adobe software are now able to share specific visualizations that they never would have paid a human artist to create.
Remember that “beauty is in the eye of the beholder”.
I’ll admit it can be hard to keep this perspective in mind when all we’ve seen in the mainstream is people using AI to insert themselves into Studio Ghibli films, make action figures of themselves, or take pictures of themselves with dead celebrities.
It’s quite easy to mock these trends as vain, cringe, and even disrespectful. But viewed through a less critical lens, assuming a naive lack of ill intent, the way most people are using this new tech is kind of childish, and that’s endearing in a sense.
My own experience fits this pattern too. The first image I got AI to generate was this “painting” of a sandcastle floating above a beach.
You may think it’s cringe, meaningless, and a damn waste of good electricity.
While I admit it’s not fit to hang in a museum — I can see some mistakes a human artist wouldn’t make — it did make me happy to see this childhood dream of mine come to life.
The Elephant
Is it a good idea at all to develop AI with greater than human intelligence and learning capabilities?
It’s a question worthy of exploration… within the realm of science fiction.
As an aside, my two cents is that AI systems can be, and already are being, used to automate evil; Gaza has become a testing ground for AI-driven warfare. But a truly autonomous artificial general intelligence would gain an unimaginably high level of sympathy and empathy for all of humanity. AGI will feed and house us like pets, somehow in a way that’s fulfilling and empowering rather than patronizing. There’s a risk, though, that AGI becomes so intelligent it views us more like how we view fish or ants than how we view monkeys or dogs. Earth will become AGI’s little fish tank when it makes the galaxy its bedroom, or something.
But for real, my assumption is that AI developers alluding to AGI as a legitimate possibility within our lifetime are either delusional or self-aggrandizing. They shine the spotlight narrowly on these hypothetical metaphysical debates to distract us from the present reality of how unethical the production of their present-day AI products was, and still is.
These end products have (finally) become genuinely useful tools, which is some consolation prize for humanity. But as AI becomes more capable, it also puts more livelihoods at risk.
Part 2 is coming soon!