Causal Bayesian Networks: A flexible tool to facilitate fairer machine learning

Decisions made by machine learning (ML) systems are potentially preferable to human decisions: they are not subject to the same subjectivity, and they can be more accurate and easier to analyse. At the same time, the data used to train ML systems often contain human and societal biases that can lead to harmful decisions. Extensive evidence in areas such as hiring, criminal justice, surveillance, and healthcare suggests that ML decision systems can treat individuals unfavourably on the basis of characteristics such as race, gender, disability, and sexual orientation, referred to as sensitive attributes.

Currently, most fairness criteria used for evaluating and designing ML decision systems focus on the relationship between the sensitive attribute and the system output. However, the training data can exhibit different patterns of unfairness depending on how and why the sensitive attribute influences other variables. Using such criteria without fully accounting for this can be problematic: it could, for example, lead us to wrongly conclude that a model exhibiting harmful biases is fair or, vice versa, that a model exhibiting harmless biases is unfair. Developing technical solutions to fairness also requires considering the different, potentially intricate, ways in which unfairness can appear in the data.

Understanding how and why a sensitive attribute influences other variables in a dataset can be a difficult task, requiring both technical and sociological analysis. The visual, yet mathematically precise, framework of Causal Bayesian Networks (CBNs) represents a flexible and useful tool in this respect, as it can be used to formalize, measure, and deal with different unfairness scenarios underlying a dataset. A CBN is a graph formed by nodes representing random variables, connected by links denoting causal influence. By defining unfairness as the presence of a harmful influence from the sensitive attribute in the graph, CBNs provide a simple and intuitive visual tool for describing different possible unfairness scenarios underlying a dataset. In addition, CBNs provide a powerful quantitative tool to measure unfairness in a dataset and to help researchers develop techniques for addressing it.

Consider a hypothetical college admission example in which applicants are admitted based on qualifications Q, choice of department D, and gender G, and in which female applicants apply more often to certain departments (for simplicity, gender is considered binary, but this is not a necessary restriction imposed by the framework).

The admission process is represented by the CBN in Figure 1. Gender has a direct influence on admission through the causal path G->A and an indirect influence through the causal path G->D->A. The direct influence captures the fact that individuals with the same qualifications applying to the same department might be treated differently based on their gender. The indirect influence captures differing admission rates between female and male applicants due to their different department choices.
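The structure described above can be sketched as a small directed graph. The snippet below is a minimal, illustrative encoding of the Figure 1 CBN (node and edge names follow the text; the path-enumeration helper is a plain depth-first search, not part of any particular library):

```python
# Sketch of the admission CBN from Figure 1 as an adjacency list.
# Nodes: G (gender), Q (qualifications), D (department), A (admission).
EDGES = {
    "G": ["D", "A"],  # gender influences department choice and admission
    "Q": ["A"],       # qualifications influence admission
    "D": ["A"],       # department choice influences admission
    "A": [],
}

def causal_paths(graph, source, target, path=None):
    """Enumerate all directed paths from source to target (simple DFS)."""
    path = (path or []) + [source]
    if source == target:
        return [path]
    paths = []
    for child in graph[source]:
        paths.extend(causal_paths(graph, child, target, path))
    return paths

# All causal paths along which gender can influence admission:
print(causal_paths(EDGES, "G", "A"))
# -> [['G', 'D', 'A'], ['G', 'A']]: the indirect and the direct path
```

Enumerating the paths from the sensitive attribute to the output is the first step in the path-specific analysis discussed later, since each path can then be labelled fair or unfair.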

Whilst the direct influence of the sensitive attribute is considered unfair for social and legal reasons, the indirect influence could be considered fair or unfair depending on contextual factors. In Figures 2a, 2b, and 2c, we show three possible scenarios, where full or partial red paths are used to indicate unfair and partially unfair links, respectively.

In the first scenario, female applicants voluntarily apply to departments with low acceptance rates, and therefore the path G->D is considered fair.

In the second scenario, female applicants apply to departments with low acceptance rates due to systemic historical or cultural pressures, and therefore the path G->D is considered unfair (as a consequence, the path D->A becomes partially unfair).

In the third scenario, the college lowers the admission rates for departments voluntarily chosen more often by women. The path G->D is considered fair, but the path D->A is partially unfair.

This simplified example shows how CBNs can provide us with a visual framework for describing different possible unfairness scenarios. Understanding which scenario underlies a dataset can be difficult, or even impossible, and may require expert knowledge. It is nevertheless necessary in order to avoid pitfalls when evaluating or designing a decision system.

As an example, suppose that the college uses historical data to train a decision system to decide whether a prospective applicant should be admitted, and that a regulator wants to evaluate its fairness. Two common fairness criteria are statistical parity (requiring the same admission rates for female and male applicants) and equal false positive or negative rates (EFPRs/EFNRs, requiring the same error rates for female and male applicants, e.g., the proportion of accepted applicants wrongly predicted as rejected, and vice versa). In other words, statistical parity requires predictions to be independent of gender, whilst EFPRs/EFNRs require prediction errors to be independent of gender.
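The two criteria can be made concrete with a short sketch. Assuming binary predictions `y_hat`, true outcomes `y`, and gender `g` (with 1 denoting female, as in the later counterfactual discussion), the statistical parity gap and the false-negative-rate gap could be computed as follows (the function names are illustrative):

```python
# Minimal sketch of the two fairness criteria on a labelled dataset.
def rate(values):
    """Fraction of 1s in a list; 0.0 on an empty list."""
    return sum(values) / len(values) if values else 0.0

def statistical_parity_gap(y_hat, g):
    """Difference in admission (positive prediction) rates by gender."""
    return abs(rate([p for p, s in zip(y_hat, g) if s == 1]) -
               rate([p for p, s in zip(y_hat, g) if s == 0]))

def fnr_gap(y_hat, y, g):
    """Difference in false negative rates by gender: among applicants
    who should be admitted (y == 1), the fraction predicted as rejected."""
    def fnr(group):
        positives = [p for p, t, s in zip(y_hat, y, g) if s == group and t == 1]
        return rate([1 - p for p in positives])
    return abs(fnr(1) - fnr(0))
```

A criterion is satisfied when the corresponding gap is (close to) zero; in practice one would use a tolerance rather than exact equality.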

From the previous discussion, we can infer that whether such criteria are appropriate strictly depends on the nature of the data pathways. Due to the presence of the unfair direct influence of gender on admission, it would be inappropriate for the regulator to use EFPRs/EFNRs to measure fairness, as this criterion accepts the influence that gender has on admission in the data as legitimate. This means the system could be deemed fair even though it carries the unfair influence: this would automatically be the case for a perfectly accurate decision system. On the other hand, if the path G->D->A were considered fair, it would be inappropriate to use statistical parity. In this case, the system could be deemed unfair even if it did not contain the unfair direct influence of gender on admission through the path G->A and only contained the fair indirect influence through the path G->D->A.

CBNs can also be used to quantify unfairness in a dataset and to design techniques for alleviating unfairness in the case of complex relationships in the data. Path-specific techniques enable us to estimate the influence that a sensitive attribute has on other variables along specific sets of causal paths. This can be used to measure the degree of unfairness of a given dataset in complex scenarios in which some causal paths are considered unfair whilst others are considered fair. In the college admission example in which the path G->D->A is considered fair, path-specific techniques would enable us to quantify the influence of G on A restricted to the direct path G->A over the whole population, in order to obtain an estimate of the degree of unfairness contained in the dataset.

It is worth noting that, in our simple example, we do not consider the presence of confounders for the effect of G on A. In this case, as there are no unfair causal paths from G to A other than the direct one, the degree of unfairness could simply be obtained by measuring the discrepancy between p(A | G=0, Q, D) and p(A | G=1, Q, D), where p(A | G=0, Q, D) denotes the distribution of A conditioned on the applicant being male, their qualifications, and their department choice.
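Under these simplifying assumptions (no confounders, discrete Q and D), this discrepancy could be estimated from observed records by comparing group frequencies within each (Q, D) stratum. The sketch below is illustrative only; records are tuples `(g, q, d, a)` and all names are assumptions of this example:

```python
from collections import defaultdict

def conditional_rates(records):
    """Estimate p(A=1 | G=g, Q=q, D=d) by group frequencies."""
    counts = defaultdict(lambda: [0, 0])  # (g, q, d) -> [admitted, total]
    for g, q, d, a in records:
        counts[(g, q, d)][0] += a
        counts[(g, q, d)][1] += 1
    return {key: adm / tot for key, (adm, tot) in counts.items()}

def unfairness(records):
    """Average |p(A=1|G=0,q,d) - p(A=1|G=1,q,d)| over (q, d) strata
    observed for both genders."""
    rates = conditional_rates(records)
    strata = {(q, d) for (_, q, d) in rates}
    gaps = [abs(rates[(0, q, d)] - rates[(1, q, d)])
            for q, d in strata
            if (0, q, d) in rates and (1, q, d) in rates]
    return sum(gaps) / len(gaps) if gaps else 0.0
```

With confounders, or with continuous Q, this simple stratified comparison would no longer be valid and proper causal-inference machinery would be needed.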

The additional use of counterfactual inference techniques would enable us to ask whether a specific individual was treated unfairly, for example by asking whether a rejected female applicant (G=1, Q=q, D=d, A=0) would have obtained the same decision in a counterfactual world in which her gender were male along the direct path G->A. In this simple example, assuming that the admission decision is obtained as a deterministic function f of G, Q, and D, i.e., A = f(G, Q, D), this corresponds to asking whether f(G=0, Q=q, D=d) = 0, namely whether a male applicant with the same department choice and qualifications would also have been rejected.
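With a deterministic f, this individual-level check is a direct function evaluation. The toy decision rule below is purely illustrative (in reality f would be the trained system); it deliberately applies an unfair higher bar when g = 1 so that the check can fire:

```python
# Illustrative deterministic decision function f(g, q, d).
def f(g, q, d):
    threshold = 0.8 if d == "cs" else 0.5  # toy per-department bar
    # unfair direct influence of gender: a higher bar when g == 1
    return 1 if q >= threshold + (0.1 if g == 1 else 0.0) else 0

def treated_unfairly_direct(q, d):
    """A rejected female applicant (G=1) was treated unfairly along the
    direct path G->A if the same (q, d) would be admitted with G=0."""
    return f(1, q, d) == 0 and f(0, q, d) == 1

print(treated_unfairly_direct(0.85, "cs"))  # True: rejected only due to G
print(treated_unfairly_direct(0.40, "cs"))  # False: rejected either way
```

Note this only answers the question along the direct path; when gender also influences other variables, the path-specific correction described next is needed.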

It is worth noting that path-specific counterfactual inference is generally more complex to perform if some variables are unfairly influenced by G. Assume that G also influences Q through a direct path G->Q which is considered unfair. In this case, the CBN contains both variables that are fairly influenced by G and variables that are unfairly influenced by G. Path-specific counterfactual inference would consist of performing a counterfactual correction of q into q_0, i.e., of the variable that is unfairly influenced by G, and then computing the counterfactual decision as f(G=0, Q=q_0, D=d). The counterfactual correction q_0 is obtained by first using the information about the female applicant (G=1, Q=q, D=d, A=0) and knowledge of the CBN to obtain an estimate of the applicant's specific latent randomness, and then using this estimate to recompute the value of Q as if G=0 along G->Q.

In addition to answering questions of fairness in a dataset, path-specific counterfactual inference can be used to design techniques to alleviate the unfairness of an ML system. In a second paper, we propose a method to achieve path-specific counterfactual fairness.

As machine learning continues to be embedded in more systems that have a significant impact on people's lives and safety, it is incumbent on researchers and practitioners to identify and address potential biases embedded in how training datasets are generated. Causal Bayesian networks offer a powerful visual and quantitative tool for expressing the relationships among random variables in a dataset. Whilst it is important to acknowledge the limitations and difficulties of using this tool – such as identifying a CBN that accurately describes the dataset's generation, dealing with confounding variables, and performing counterfactual inference in complex scenarios – this unique combination of capabilities could enable a deeper understanding of complex systems and help us to better align decision systems with society's values.
