Abstract
The broad adoption of artificial intelligence (AI) technologies across industries has raised several important ethical debates. Concerns that AI systems may exhibit bias, compromise individual privacy, and raise questions of accountability in the use of advanced technologies are among the most critical points. While most AI frameworks address responsible use, informed consent and data privacy remain grey areas largely untouched by researchers, owing to the dynamic nature of AI development itself. This paper aims to develop a holistic ethical AI framework that balances ethics, privacy, consent, and responsible use. The researchers conducted a systematic literature review (SLR) following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, spanning the years 2020 to 2024, with IEEE Xplore, Web of Science, PubMed, and ACM as the primary databases. A total of 29 papers met the final inclusion criteria. The review identified the main features of AI frameworks as privacy, transparency, fairness, trust, security, and accountability; other features were responsible use, good governance, autonomy, data protection, and explainability. The tools used to evaluate existing ethical frameworks in the observed studies were bias detection algorithms, deep learning techniques, and natural language processing, although most studies did not employ any evaluation tools at all. Across the observed studies, healthcare was the most prominent application area, accounting for more than half of the papers, ahead of finance, education, transport, information technology, and governance. The main challenges surrounding the privacy and security of AI frameworks were a lack of transparency and compliance, raising concerns about data misuse and user privacy; security vulnerabilities and breaches of user information further highlight the need for stricter governance and user control.
This paper provides a comprehensive analysis of the research findings and identifies gaps for future research, including the features of ethical AI frameworks, their application areas, the tools used to evaluate their ethical soundness, and the challenges surrounding their privacy and security.