ALG Blog Post 2: How Generative AI Works and How It Fails


Ways in which AI generates content, and ways generative AI fails and causes problems

Case Study: How Generative AI Works and How It Fails

Summary

The purpose of the case study is to explain how AI generates content such as images and text using diffusion and transformer models, as well as the problems and limitations of those models. Another purpose is to highlight the societal problems caused by generative AI, such as misinformation, deepfakes, and labor exploitation.
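To make the "how it works" half concrete: transformer-based text models generate one token at a time. Below is a minimal sketch of that autoregressive loop, written for this post rather than taken from the case study; it assumes the Hugging Face transformers library and the small open GPT-2 model, and real chatbots add far larger models, instruction tuning, and safety filtering on top of this basic loop.

    # A rough sketch of autoregressive text generation with a transformer.
    # Assumes the Hugging Face "transformers" library and the open GPT-2 model.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    prompt = "Generative AI creates text by"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids

    # Generate one token at a time: score every possible next token,
    # turn the scores into probabilities, sample one, append it, repeat.
    for _ in range(30):
        with torch.no_grad():
            logits = model(input_ids).logits[:, -1, :]
        probs = torch.softmax(logits, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)
        input_ids = torch.cat([input_ids, next_token], dim=1)

    print(tokenizer.decode(input_ids[0]))

Diffusion models for images follow a different recipe (repeatedly denoising random noise into a picture), but the key point is the same: the output is stitched together from patterns learned from training data.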

Discussion Topic: The use of creative work for training

Is this practice ethical? No. The most popular and widely used AI models are trained on stolen and pirated data to generate content that is often passed off as original. These companies do not credit the creators whose work they trained on, while profiting from that same AI. Creative jobs are at risk because generative AI can replicate and mass-produce creative products.

How can those who want to change the system go about doing so? They can lobby politicians and lawmakers in large groups. They could also data-poison their own content to discourage AI companies from training on it.

Can the market solve the problem, such as through licensing agreements between publishers and AI companies? AI companies could scrap existing models and build new datasets made up only of data they have permission to use or pay to license. However, this is not realistic, because redoing the training would be very costly; it would likely take government orders to disband current AI models.

What about copyright law, either interpreting existing law or updating it? Lawmakers could impose transparency requirements so that companies must disclose their training data sources. AI companies should also be required to pay the people whose content they used to train the AI.

What other policy interventions might be helpful? Create international laws governing AI, because it trains on data drawn from around the world.

New Question

Deepfakes are becoming easier to create and harder to detect. Should society focus more on developing detection tools or on passing stronger legislation?

I chose this question because I have been seeing stories of kids in school making deepfakes of teachers doing or saying bad things. There are also many deepfakes of politicians saying things they never said. In the comment sections of these posts, handfuls of people believe the deepfakes are real. I think it is important to discuss the harm these deepfakes can cause to society. Because they are becoming so easy to make, I think legislation should be passed to deter bad actors.

Reflection

I already knew most of what the case study was referring to, although I had not thought as much about the legality of AI before. I remember watching a video about how Facebook exploits outsourced labor to weed out toxic content. The video talks about the mental decline these workers suffer from seeing vile content. There are stories of these workers committing suicide, which can be attributed in part to the lack of benefits like mental health counseling. I think the exploitation done by AI companies (while not as extreme) can be connected to these past exploits. There needs to be more on the table for these workers, with better benefits and better pay.