Machine learning models are widely employed in organizations for consequential decision-making. However, bias in an organization’s model can cause serious harm, ranging from discriminatory hiring decisions to damage to the organization’s reputation. Organizations often lack the expertise or time to build models that meet society’s fairness standards, leading them to deploy flawed systems. Here, we present a framework to help combat this issue, allowing organizations to prevent bias in their models. Our framework starts with a general guidebook that clarifies relevant terminology and provides best practices for developers to measure and mitigate bias. We then incorporate an automated algorithm based on statistical parity and disparate impact to de-bias raw data. We apply SMOTE-Tomek to resample imbalanced datasets and a Reject Option Classification algorithm to reduce biased predictions. We infer that adding more data on minority and unprivileged classes will help create more equitable and representative datasets, leading to fairer AI systems for all. We propose a novel decentralized database, built with web scraping and homomorphic encryption, as a reliable source of real-world data. Data from extensive testing confirmed our hypotheses.
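The two group-fairness metrics the de-biasing algorithm is based on can be sketched as follows. This is a minimal illustration of the standard definitions, not the paper's actual implementation; the binary group encoding (1 = privileged, 0 = unprivileged) and the toy predictions are assumptions for the example.

```python
def statistical_parity_difference(y_pred, group):
    """P(Y_hat=1 | unprivileged) - P(Y_hat=1 | privileged); 0 means parity."""
    priv = [y for y, g in zip(y_pred, group) if g == 1]
    unpriv = [y for y, g in zip(y_pred, group) if g == 0]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) - rate(priv)

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates, unprivileged over privileged.

    A common rule of thumb (the four-fifths rule) flags values below 0.8.
    """
    priv = [y for y, g in zip(y_pred, group) if g == 1]
    unpriv = [y for y, g in zip(y_pred, group) if g == 0]
    rate = lambda ys: sum(ys) / len(ys)
    return rate(unpriv) / rate(priv)

# Toy example: unprivileged group (0) receives the favorable outcome
# at rate 0.25, the privileged group (1) at rate 0.75.
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))  # -0.5
print(disparate_impact(y_pred, group))               # 0.333...
```

On this toy data both metrics signal bias: the parity difference is far from 0 and the disparate-impact ratio falls well below the 0.8 threshold.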
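The Reject Option Classification step can be illustrated with the following sketch of the general idea (after Kamiran et al.): within a low-confidence band around the decision boundary, predictions are overridden in favor of the unprivileged group. The band width `theta`, the score values, and the group encoding are all illustrative assumptions, not the paper's tuned settings.

```python
def reject_option_classify(scores, group, theta=0.6):
    """Post-process classifier scores to reduce biased predictions.

    scores: estimated P(Y=1) per instance.
    group:  1 = privileged, 0 = unprivileged.
    theta:  in (0.5, 1]; [1-theta, theta] is the low-confidence
            "critical region" where predictions are overridden.
    """
    preds = []
    for s, g in zip(scores, group):
        if (1 - theta) <= s <= theta:
            # Uncertain prediction: grant the favorable outcome (1)
            # to the unprivileged group, withhold it from the privileged.
            preds.append(1 if g == 0 else 0)
        else:
            # Confident prediction: keep the ordinary 0.5 threshold.
            preds.append(1 if s > 0.5 else 0)
    return preds

scores = [0.9, 0.55, 0.45, 0.1]
group  = [1,   1,    0,    0]
print(reject_option_classify(scores, group))  # [1, 0, 1, 0]
```

Only the two borderline instances (0.55 and 0.45) are affected: the privileged one is demoted and the unprivileged one promoted, while confident predictions pass through unchanged.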