When people apply for jobs, many organizations use artificial-intelligence-based tools to scan résumés and predict whether applicants have the skills a particular position requires. Colleges and universities use AI to grade essays automatically, process transcripts, and review extracurricular activities in order to identify promising students. In response to concerns about the unfairness and bias of such tools, researchers from the University of Minnesota and Purdue University have developed a new set of recommendations for auditing AI-based tools.
The researchers developed AI audit recommendations by first examining the ideas of fairness and bias from three main perspectives.
How people determine whether a decision was fair and impartial.
How social, legal, ethical, and moral standards represent fairness and bias.
How individual technical fields, such as computer science, statistics, and psychology, define fairness and bias internally.
The audit framework consists of twelve components grouped into three categories:
Components related to how the AI model is created, how it processes data, and the predictions it produces.
Components related to how AI is used, who is affected by its decisions and why.
Components related to crosscutting issues: the cultural context in which AI is used, respect for the people it affects, and the scientific credibility of the research AI vendors use to support their claims.
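The three-category structure above lends itself to a simple checklist representation. The sketch below is purely illustrative: the category keys, the individual item wordings, and the `audit_coverage` helper are assumptions for demonstration, not the researchers' actual twelve components.

```python
# Hypothetical sketch of a three-category, twelve-component audit checklist.
# Item names are illustrative placeholders, not the researchers' terminology.
AUDIT_CHECKLIST = {
    # Creation, data processing, and predictions of the AI
    "model": [
        "training data provenance documented",
        "preprocessing steps recorded",
        "model creation process described",
        "prediction error rates reported per group",
    ],
    # How the AI is used, who is affected, and why
    "deployment": [
        "intended use stated",
        "affected populations identified",
        "decision stakes assessed",
        "justification for automated decisions given",
    ],
    # Cultural context, respect for persons, scientific credibility
    "crosscutting": [
        "cultural context of use considered",
        "transparency provided to affected people",
        "respect-for-persons safeguards in place",
        "vendor claims backed by credible evidence",
    ],
}

def audit_coverage(completed):
    """Return the fraction of checklist items marked complete, per category."""
    return {
        category: sum(item in completed for item in items) / len(items)
        for category, items in AUDIT_CHECKLIST.items()
    }
```

An auditor could track progress by passing the set of completed items, e.g. `audit_coverage({"intended use stated"})` reports 25% coverage for the "deployment" category and 0% for the others.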
The researchers recommend that their framework be applied first by internal auditors during the development of predictive AI technologies, and subsequently by independent external auditors.