The Security Risks of Generative Artificial Intelligence

  • Meet Ashokkumar Joshi
Keywords: Generative Artificial Intelligence, Security Risks, Threats, Vulnerabilities, Misuse, Fake Content, Impersonation Attacks

Abstract

Generative Artificial Intelligence (AI) systems, capable of producing human-like outputs such as text, images, and videos, have seen remarkable advancements in recent years. While these systems offer various benefits in creative tasks, entertainment, and automation, they also pose significant security risks. This paper examines the security implications of generative AI technologies, focusing on the potential threats and vulnerabilities they introduce across different domains. We discuss the misuse of generative AI for malicious purposes, including the creation of sophisticated fake content, impersonation attacks, and the spread of disinformation and propaganda. Furthermore, we analyze the challenges in detecting and mitigating these threats, given the rapid evolution and complexity of generative AI models. We also explore the ethical considerations surrounding the development and deployment of generative AI, emphasizing the importance of responsible AI governance and regulation in addressing security concerns. By highlighting these risks, this paper aims to raise awareness among researchers, policymakers, and practitioners so that proactive strategies can be developed for managing the security challenges posed by generative AI technologies.

 

References

Smith, A., & Venkatadri, G. (2019). Deepfakes and the new disinformation war: The coming age of Post-Truth Geopolitics. National Security Journal, 4(1), 57-74.

Crootof, R. (2017). Bots, Babes, and the Californication of commerce: The dispute over regulating deepfakes. Yale JL & Tech., 19, 211.

Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.

Mitchell, A., Gottfried, J., Kiley, J., & Matsa, K. E. (2014, October 21). Political polarization and media habits. Pew Research Center. www.journalism.org/2014/10/21/political-polarization-media-habits/

Domonoske, C. (2016, November 23). Students have a dismaying inability to tell fake news from real, study finds. NPR. www.npr.org/sections/thetwo-way/2016/11/23/503129818/study-finds-students-have-dismaying-inability-to-tell-fake-news-from-real

Narayanan, A., & Rubin, V. (2018). Faking news: Fraudulent news and the fight for truth. Cambridge University Press.

Floridi, L. (2019). Soft ethics and the governance of the digital. Philosophy & Technology, 32(1), 1-8.

OpenAI. (2020). Generative Pre-trained Transformer (GPT). Retrieved from

European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.

Kemp, S. (2021). Digital 2021: Global Overview Report. Retrieved from https://datareportal.com/reports/digital-2021-global-overview-report

Published
2024-02-27
How to Cite
[1]
Joshi, M.A. 2024. The Security Risks of Generative Artificial Intelligence. International Journal on Integrated Education. 7, 1 (Feb. 2024), 91-95.
Section
Articles