Artificial intelligence (AI) and machine learning (ML) have changed our lives for the better in fields ranging from medical care, public health, environmental conservation, and energy consumption to accelerating scientific insights, improving agriculture, and helping governments address societal inequities. As our computational capabilities increase further and more data becomes available, the impact of these algorithms on our lives will only deepen.

Artificial intelligence (AI) – artistic concept. Image credit: geralt via Pixabay (Pixabay licence)
While AI has already opened up a vast range of applications, it is necessary to make it transparent: to ensure that the underlying AI/ML algorithms do not discriminate or treat people inequitably, and that the key assumptions made during the automated decision-making process can be tracked and explained. Dr David Leslie and Morgan Briggs discuss this issue in their research paper titled “Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool)”.
Importance of this Research
It is necessary to make AI/ML models transparent in order to make them fair. We should be well equipped to explain an AI algorithm in a way that is easy to understand, even for non-technical users, to ensure that the data processing is robust, reliable, and safe.
Four Principles of AI Explainability
The four principles below give us a broader understanding of how to explain AI-assisted decisions to individuals.
- Be transparent
- Be accountable
- Consider context
- Reflect on impacts
Explanation-Aware Design:
Explanations of AI decisions can be classified into six primary types:

Image credit: arXiv:2104.03906 [cs.CY]
Six tasks for the explanation-aware design and use of AI systems
- Task 1: Select priority explanations by considering the domain, use case and impact on the individual
- Task 2: Collect and pre-process your data in an explanation-aware manner
- Task 3: Build your system to ensure you are able to extract relevant information for a range of explanation types
- Task 4: Translate the rationale of your system’s results into usable and easily understandable reasons
- Task 5: Prepare implementers to deploy your AI system
- Task 6: Consider how to build and present your explanation
Using AI/ML in Recruitment:
Let us consider a recruitment use case for building an ML application. The algorithm would make recruitment decisions based on several factors, such as the data below.

Image credit: arXiv:2104.03906 [cs.CY]
Designing the ML algorithm would start with accumulating the above data for all applicants. People who were successful in existing roles could be identified and used as training data, along with those who did not perform well in their roles and those who were rejected by HR during the selection process.
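As a minimal sketch of what such a labelled training set might look like (the column names, values, and the use of pandas here are purely illustrative assumptions, not taken from the workbook):

```python
import pandas as pd

# Hypothetical applicant records; feature names, values, and labels are illustrative only.
training_data = pd.DataFrame({
    "years_experience": [2, 7, 4, 10, 1, 6],
    "degree_level":     [1, 2, 1, 3, 0, 2],    # e.g. 0 = none, 1 = bachelor's, 2 = master's, 3 = PhD
    "interview_score":  [55, 82, 64, 90, 40, 75],
    "successful":       [0, 1, 0, 1, 0, 1],    # 1 = hired and performed well, 0 = did not
})

X = training_data.drop(columns="successful")   # predictor variables
y = training_data["successful"]                # outcome the model learns to predict
```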
In this shortlisting/rejection application, we can choose the learning model based on our requirements for the specific application. The most common learning techniques are regression, classification, and deep learning techniques such as neural networks.
- Linear regression is used to model the relationship between one or more predictor variables and one outcome variable; for example, how an individual's salary varies with years of experience (a code sketch of this example follows the list below).
- Classification: this supervised learning technique classifies data into categories, such as a binary yes or no (a sketch for the recruitment case also follows the list). Two use cases are given below.
- For an email classifier that filters spam, it could be beneficial to classify words as spammy or non-spammy. This classification could then be used to estimate the likelihood that a given email is spam.
- In recruitment, it could relate to the qualifications of the applicant. For some positions, people with a specific qualification might perform better in the role than others. Once trained, the supervised learning model could classify resumes based on the qualifications identified as predictive of high performance.
- Neural network: this supervised learning technique assigns labels and approximates the associated weights to predict results close to the actual data.
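A rough, self-contained sketch of the regression example above, fitting invented salary-versus-experience figures with scikit-learn (the library choice and all numbers are assumptions for illustration only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data points: years of experience vs. annual salary.
years = np.array([[1], [3], [5], [7], [10]])
salary = np.array([35_000, 45_000, 58_000, 66_000, 82_000])

model = LinearRegression().fit(years, salary)
print(model.predict(np.array([[6]])))   # rough salary estimate for 6 years of experience
```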
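Similarly, a hedged sketch of a classifier for the shortlisting case; logistic regression is just one possible choice among many, and all features and labels are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per applicant: [years_experience, degree_level, interview_score].
X = np.array([[2, 1, 55], [7, 2, 82], [4, 1, 64], [10, 3, 90], [1, 0, 40], [6, 2, 75]])
y = np.array([0, 1, 0, 1, 0, 1])        # 1 = successful hire, 0 = not (labels are invented)

classifier = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated probability of success for a new, equally hypothetical applicant.
new_applicant = np.array([[5, 2, 70]])
print(classifier.predict_proba(new_applicant)[0, 1])
```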

Image credit: arXiv:2104.03906 [cs.CY]
After training, the algorithm could start presenting its output (a shortlist) to the hiring manager. The shortlist should be monitored to ensure that unfair biases do not creep into the system; a minimal sketch of one such check is shown below.
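One simple way such monitoring could be sketched (among many more rigorous fairness checks) is to compare shortlisting rates across groups using the common "four-fifths" rule of thumb; the group labels and numbers below are purely illustrative assumptions:

```python
import pandas as pd

# Hypothetical shortlisting outcomes broken down by a protected characteristic.
outcomes = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "shortlisted": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = outcomes.groupby("group")["shortlisted"].mean()
ratio = rates.min() / rates.max()               # disparate-impact ratio

print(rates)
if ratio < 0.8:                                 # common "four-fifths" rule of thumb
    print(f"Warning: selection-rate ratio is {ratio:.2f}; investigate for unfair bias.")
```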
Conclusion
The research paper summarises end-to-end guidance on how to apply the principles and best practices of AI explainability. In the words of the researchers:
Over the last two years, The Alan Turing Institute and the Information Commissioner’s Office (ICO) have been working together to discover ways to tackle the difficult issues surrounding explainable AI. The ultimate product of this joint endeavour, Explaining decisions made with AI, published in May 2020, is the most comprehensive practical guidance on AI explanation produced anywhere to date. We have put together this workbook to help support the uptake of that guidance. The goal of the workbook is to summarise some of the main themes from Explaining decisions made with AI and then to provide the materials for a workshop exercise that has been built around a use case created to help you gain a flavour of how to put the guidance into practice. In the first three sections, we run through the basics of Explaining decisions made with AI. We provide a precis of the four principles of AI explainability, the typology of AI explanations, and the tasks involved in the explanation-aware design, development, and use of AI/ML systems. We then provide some reflection questions, which are intended to be a launching pad for group discussion, and a starting point for the case-study-based exercise that we have included as Appendix B. In Appendix A, we go into more detailed suggestions about how to organise the workshop.
Source: Dr David Leslie and Morgan Briggs’s Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool)