Artificial Intelligence (AI) is changing the world around us dramatically. As AI becomes more sophisticated and able to perform more complex human tasks, there are growing concerns about what AI will become in the decades ahead and what that means, not just for business, but for humanity as a whole and for the future of individuals and society. Furthermore, since AI is fuelled by data, it faces ethical challenges related to data governance, including consent, ownership, and privacy. Ethics has thus become one of the major concerns surrounding the use of AI, with issues such as inconclusive evidence, inscrutable evidence, misguided evidence, unfair outcomes, transformative effects, and traceability. As a result, recent years have seen increasingly active debates about the ethical principles and values that should guide AI's development and deployment. Our work focuses on identifying the right set of fundamental ethical principles to inform the design, regulation, and use of AI.
We aim to offer a framework for identifying, implementing, and validating ethical principles of AI, so as to make AI ethics operational in real-world applications.