Ethics of AI Technologies in “Sensitive” Content Creation and Evaluation. School Shooting Cases

Keywords

Artificial Intelligence; AI Content Creation; AI Text Analysis; Ethical Frameworks; Media Ethics; School Shootings; Columbine; “Sensitive” Topics; Psychological Trauma; Harmful Narratives; Content Moderation

How to Cite

Osipov, D. (2024). Ethics of AI Technologies in “Sensitive” Content Creation and Evaluation. School Shooting Cases. Galactica Media: Journal of Media Studies, 6(3), 44-65. https://doi.org/10.46539/gmd.v6i3.530

Abstract

This article examines the ethical issues raised by AI-generated content, focusing on ‘sensitive’ topics such as school shootings. As AI technologies advance, there is a growing risk that such content may inadvertently reinforce harmful narratives, glorify acts of violence, or cause psychological harm to victims and their communities. The study addresses these concerns by evaluating existing ethical frameworks and identifying their limitations in handling such complex situations. A central goal of the research is to develop a refined set of ethical principles specifically tailored to the risks associated with AI-generated content about school shootings. The paper reports experiments in which AI models such as ChatGPT, Claude, GigaChat, and YandexGPT were used to generate and analyze content about school shootings. These experiments highlight significant challenges in ensuring that AI-generated texts do not reinforce harmful themes or cause suffering. For example, while some models, such as GigaChat, declined to generate content on sensitive themes, others, such as ChatGPT, produced elaborate texts that risked retraumatizing readers or glorifying offenders. The findings show that, although current frameworks address basic principles such as transparency, accountability, and fairness, they often lack precise guidance for these difficult cases. To close this gap, the proposed ethical framework incorporates specific content-creation criteria, stakeholder participation, responsible dissemination techniques, and ongoing research, prioritizing the protection of vulnerable people and the prevention of psychological harm.




This work is licensed under a Creative Commons Attribution 4.0 International License.