(Investigator(s): Pablo Rivas, Ph.D., Patricia Shaw, Esq., and Kenneth Goodman, Ph.D.)
Overview
Computer algorithms and analytics play an increasingly influential role in government, business, and society. They underpin information services and autonomous intelligence systems, including (but not limited to) artificial intelligence applications that involve machine learning, deep learning, and reinforcement learning. These smart technologies have a direct and significant impact on human lives across a wide socioeconomic and cultural spectrum. Algorithms enable the exploitation of rich and varied data sources from different spheres, including culture, to support human decision-making and actions that can serve the diverse interests of the societies in which they operate. Alongside these benefits, however, significant ethical issues remain.

This standard provides a framework that helps professionals who participate in the development or maintenance of algorithmic systems, and those responsible for their deployment or use, to identify and mitigate unintended, unjustified, or inappropriate biases in the outcomes of those systems. An algorithmic system in this context refers to the combination of algorithms, data, and the output deployment process that together determine the outcomes that affect end users. Unjustified bias refers to differential treatment of individuals based on criteria for which no operational justification is provided. Inappropriate bias refers to bias that is legally or morally unacceptable within the social context where the system is used, e.g., algorithmic systems that produce outcomes with a negative differential impact correlated with protected attributes.

This standard describes specific methodologies to identify and address issues of unintended, unjustified, and inappropriate bias in the creation and use of algorithmic systems. Figure 7 depicts the structure of the evaluation of algorithmic systems during their design or deployment.

[Figure 7: Overview of standard sections associated with bias consideration stages. Sections: Taxonomy; Legal; Psychology; Culture; Stakeholder Identification; Risk/Impact Assessment; Representative Data Assurance; System Evaluation; Project Management; Technical Delivery; Ethical Review; Informative Annexes; Documentation; Requirements Setting. Legend indicates the primary activity type of each section.]
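As a concrete, hedged illustration of a negative differential impact correlated with a protected attribute, the sketch below computes per-group favorable-outcome rates and their ratio (a disparate impact ratio). The function name, the toy data, and the four-fifths (0.8) threshold, a rule of thumb borrowed from U.S. employment guidance rather than from this standard, are all illustrative assumptions.

```python
# Minimal sketch: quantifying differential impact of an algorithmic system's
# outcomes with respect to a protected attribute. Names and thresholds are
# illustrative assumptions, not requirements of the standard.

from collections import defaultdict

def disparate_impact_ratio(outcomes, groups):
    """Return (ratio of lowest to highest per-group favorable-outcome rate,
    the per-group rates).

    outcomes: iterable of 0/1 system decisions (1 = favorable outcome)
    groups:   iterable of group labels for the protected attribute
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Toy data: favorable-outcome rates of 0.6 for group "a" vs 0.3 for group "b"
# give a ratio of 0.5, below the four-fifths (0.8) rule of thumb.
outcomes = [1, 1, 1, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
groups = ["a"] * 5 + ["b"] * 10
ratio, rates = disparate_impact_ratio(outcomes, groups)
print(rates, ratio)
if ratio < 0.8:
    print("Disparity detected: investigate whether it is operationally justified.")
```

A ratio near 1.0 indicates similar favorable-outcome rates across groups. A low ratio does not by itself prove inappropriate bias, but it flags a disparity for which, per the definitions above, an operational justification should be sought.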
Scope
This standard describes specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation, deployment, and use of their algorithms. Here, "negative bias" refers to the usage of overly subjective or uninformed data sets or information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender, and sexuality), or to instances of bias against groups not explicitly protected by legislation but whose treatment nonetheless diminishes stakeholder or user wellbeing and for which there are good reasons to consider it inappropriate. Possible elements include (but are not limited to): benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated, to guard against unintended consequences arising from out-of-bound application of algorithms (illustrated in the sketch below); and suggestions for user expectation management to mitigate bias due to incorrect interpretation of system outputs.
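As one way to make the application-boundary guideline concrete, the sketch below derives a simple per-feature operating envelope from validation data and flags deployment-time inputs that fall outside it. The class name ApplicationBoundary, the feature names, and the range-based envelope are hypothetical; the standard does not prescribe a specific mechanism.

```python
# Minimal sketch of an application-boundary guard, assuming the validated
# operating envelope can be summarized as per-feature numeric ranges observed
# on the validation data. All names here are illustrative assumptions.

class ApplicationBoundary:
    def __init__(self, validation_rows):
        # validation_rows: list of dicts mapping feature name -> numeric value
        self.bounds = {}
        for row in validation_rows:
            for feature, value in row.items():
                lo, hi = self.bounds.get(feature, (value, value))
                self.bounds[feature] = (min(lo, value), max(hi, value))

    def violations(self, row):
        """Return the features of `row` that fall outside the validated range."""
        return [f for f, v in row.items()
                if f in self.bounds
                and not (self.bounds[f][0] <= v <= self.bounds[f][1])]

# Envelope learned from two validation rows: age in [25, 60], income in
# [30_000, 120_000].
validation = [{"age": 25, "income": 30_000}, {"age": 60, "income": 120_000}]
boundary = ApplicationBoundary(validation)

incoming = {"age": 17, "income": 50_000}   # age below the validated range
out_of_bounds = boundary.violations(incoming)
if out_of_bounds:
    print(f"Out-of-bound application: {out_of_bounds}; "
          "outputs for this input were not validated for bias.")
```

In practice the boundary of validity may involve richer criteria (population, context of use, time period) than simple numeric ranges; the range check is merely the smallest runnable stand-in for the communicated application boundary.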