Generative AI, particularly Large Language Models (LLMs), has transformed the Information Age by combining computational power with human creativity and problem-solving, revolutionising how we ideate and innovate. From generating text and ideas to providing rapid access to information, LLMs are unlocking new possibilities. However, these opportunities have also raised critical ethical concerns. Paramount challenges in this domain include the rise of deepfakes, the persistence of algorithmic bias, and copyright infringement. Deepfakes, created by text-to-media models, undermine authenticity, spread misinformation, and erode public trust. Bias in training data can produce discrimination and reinforce stereotypes, creating inequalities in AI-driven decision-making. Copyright disputes arise when LLMs are trained on unlicensed materials, raising questions of legality and ownership. This paper consolidates concepts from research and ongoing debates to show how these issues can shape public perceptions of AI and its place in society, examining fairness, accountability, and ownership of content. The key takeaway is that technological solutions alone are insufficient: progress in this field requires strong governance, transparent systems, and collaborative partnerships among technologists, policymakers, and communities. The paper concludes with recommendations for making AI more accountable and responsible to society.
Keywords: AI, LLM, Ethics, Deepfake, Bias, Copyright