SOG Blog Post 1: Challenging AI Hype and Tech Industry Power
Published on:
The speakers argue that AI is a constructed bubble that harms both society and the environment.
Case Study:
Challenging AI Hype and Tech Industry Power
Summary
The purpose of the panel is to expose the AI bubble built by tech giants seeking to profit and to expand their power and control over society.
Who?
Emily M. Bender is a co-author of “The AI Con” and a professor of linguistics at the University of Washington.
Alex Hanna is a co-author of “The AI Con”. She is the director of research at the Distributed AI Research Institute (DAIR) and a lecturer in the School of Information at the University of California. She is an outspoken critic of the tech industry and pushes for the use of community-based technology.
Karen Hao is the author of the New York Times bestseller “Empire of AI”. She is an award-winning journalist who writes for The Atlantic and covers the impacts of AI on our society.
Tamara Kneese hosts the panel and poses the questions.
What?
Bender and Hanna argue that many technologies are being sold as AI when they are not true artificial intelligence. “AI” is being used as a marketing term, and it is a con. The con works in two ways: these systems are marketed as brilliant machines, and they overpromise, as with LLMs pitched as replacements for essential social workers.
Hao argues that companies like OpenAI should be understood as a new imperial, hierarchical power pursuing what she calls “scale at all costs” development, which is costly, harmful, and exploitative. There is also bias built into how these systems define intelligence, because that definition can be so narrow.
Where?
The current administration in the White House is letting many of these tech giants do whatever they want with little to no regulation, allowing the companies to amass monopolies. The training facilities consume enormous amounts of water, money, and labor, and that labor is often outsourced in ways that take advantage of workers.
Why?
The public needs to know now because of how integrated AI has already become in our society. There is no sign of these tech giants' grip loosening, especially under the current administration of the United States. CEOs like Sam Altman and Elon Musk are very vocal about extreme ideologies, such as transhumanism, that threaten democracy and sustainability.
How?
Drawing on sociological critiques, Bender and Hanna argue that these systems are better understood as automation than as intelligence. They also draw connections to past human history and empires. Hao says that all empires fall, because they are built on extraction and exploitation, and people do not stand for it. She also says we need to talk about what is and is not acceptable AI use, and suggests that AI should be task-specific rather than a boundless, risky powerhouse.
So What?
AI usage has been linked to lower cognitive function and reduced brain use. If things become dystopian with AI, imagine how diminished people's thinking will be. The environment and human health are also harmed by air pollution from fossil fuels and by resource consumption, such as drinking water drawn from communities that already need it.
New Question
How can educators come together to create a shared doctrine of AI that promotes learning and adopts the task-specific AI model, rather than one that kills thinking?
I chose this question because one of my professors mentioned how nice it would be to have one AI policy used across my school, St. Olaf. Right now the policy differs for each class, which lessens the communal debate we should be having about beneficial AI use and weakens resistance to the rhetoric of AI doing everything for you.
Reflection
I really liked listening to this panel because of how relevant it is. Watching tech CEOs talk about AI on the news is disgusting, because they know full well how much damage it is causing and will cause. I can see the public's view starting to shift toward a more skeptical lens on AI. There needs to be more push from our local governments to slow or regulate AI.
