
Machine Learning – applications, use-cases, and current presence within banking and finance Part 3

The need of the hour, if artificial intelligence is to enhance enterprise and social outcomes, is robust governance and risk mitigation. Specifically, AI governance.

Machine learning (ML) and artificial intelligence (AI) technologies have major potential within the financial services space, but they also come with inherent risks that ought to be tackled with the correct governance strategies, according to Wharton AI for Business.

The group, christened Artificial Intelligence/Machine Learning Risk and Security (AIRS), collaborates with Wharton, which acts as its academic partner. Based in NYC, AIRS was formed in 2019 and is made up of approximately 50 academics and industry subject-matter experts.

Their research does a deep dive into the avenues and hurdles of AI implementation by financial enterprises, and into how those enterprises could detect, categorize, and mitigate possible risks by developing relevant governance frameworks. AIRS did not, however, make any particular recommendations, stating that its paper is intended for the purpose of discussion. It considers it vitally important that every entity evaluate its own AI deployments and risk profile and tolerance, and develop a governance framework relevant to its unique scenario.

Specialists and professionals across industry and academia are bullish on the prospective advantages of artificial intelligence, provided its governance and risks are handled in a responsible manner. Standardizing the AI risk categories discussed in the whitepaper, together with an artificial intelligence governance framework, would go the distance in facilitating accountable adoption of AI in the domain.

Prospective Benefits conferred by Artificial Intelligence

We have already had a look at a few of the use cases for machine learning and artificial intelligence within the banking and finance space. In this section of the blog by AICorespot, we will further explore the potential and tangible advantages that these technologies furnish to enterprises. Adoption and proliferation are growing at an exponential rate, and awareness of these technologies is the need of the hour.

Financial enterprises are picking up AI and driving adoption more frequently as technical barriers have dissolved and its advantages and possible risks have become more obvious. The Financial Stability Board, a global body that monitors and makes recommendations with regard to the international financial system, brought focus to four spheres where artificial intelligence would influence banking.

1] Client-facing uses that could furnish more access to credit and other financial services by harnessing ML algorithms to:

  • price insurance policies,
  • evaluate credit quality, and
  • propel financial inclusion.

It begins with increasing awareness and education levels among users. Everybody should be aware of when algorithms are making decisions for them.

2] AI deployed in fortification of back-office operations, which includes producing sophisticated models.

3] Trading and investment techniques and strategies.

4] The fourth is about AI advancements with regards to the identification and containment of risks.

Identification and containment of risks

For machine learning and artificial intelligence to enhance “societal and business outcomes,” their risks must be “handled with accountability,” the authors write in their paper. AIRS research is concentrated on self-governance of artificial intelligence risks within the financial services space, and not artificial intelligence regulation as such, in the opinion of Kartik Hosanagar, Professor of Operations at Wharton.

In looking into the prospective hazards and risks of artificial intelligence, the paper furnished “a standardized practical categorization” of risks in connection with:

  • Data
  • Machine learning and AI-based attacks
  • Testing
  • Trust
  • Compliance

Solid frameworks ought to concentrate on definitions, inventory, policies and standards, and controls, the authors specified. Those strategies must additionally tackle the prospect of artificial intelligence raising privacy issues and producing possibly discriminatory or unfair outcomes if it is not implemented with the requisite care.

In developing their AI governance mechanisms, finance enterprises ought to start by identifying the settings where artificial intelligence cannot substitute for humans. AI suffers from a drawback that humans don’t: it lacks the judgment and context for several of the scenarios in which it is deployed. In a majority of scenarios, it is not feasible or realistic to train the AI framework on all potential situations and data. Obstacles like the lack of judgment and context, lacking compliance, and cumulative learning restrictions would inform strategies for risk mitigation.

Weak data quality, lacking compliance, and the prospect of AI-based attacks are other hazards financial enterprises must consider when making their decisions. The research dived into how those compromises could happen. In scenarios where data privacy is compromised, a malicious actor could infer sensitive data from the dataset used to train AI frameworks. There are two noteworthy ways to compromise data privacy: “membership inference” and “model inversion” attacks. In an attack of the former variety, a malicious actor could determine whether a specific record, or a grouping of records, exists within a specific training dataset, and thereby decide whether it is part of the dataset harnessed to train the AI framework.
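The intuition behind membership inference can be illustrated with a toy sketch. The data, the “model,” and the confidence threshold below are all hypothetical; real attacks query the confidence scores of genuinely trained models, but the principle is the same: a model that is unusually confident on a record has likely seen that record during training.

```python
def train_model(records):
    """A deliberately overfit 'model': it memorises its training set."""
    memorised = set(records)

    def confidence(record):
        # Very high confidence on records the model has seen; lower otherwise.
        return 0.99 if record in memorised else 0.55

    return confidence


def infer_membership(model, record, threshold=0.9):
    """Attacker's test: unusually high confidence suggests the record
    was part of the training data."""
    return model(record) >= threshold


# Hypothetical (name, credit score) training records.
training_set = [("alice", 720), ("bob", 640)]
model = train_model(training_set)

print(infer_membership(model, ("alice", 720)))  # record was in the training set
print(infer_membership(model, ("carol", 700)))  # record was not
```

The sketch exaggerates the confidence gap for clarity; in practice the gap is statistical rather than absolute, which is why overfitting makes models more vulnerable to this attack.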

In an attack of the latter variety, model inversion, a malicious actor could extract the training information leveraged to train the model in a direct fashion. Examples of other attacks include “data poisoning,” which could be leveraged to increase the error rate of AI/machine learning frameworks and distort learning procedures and results.
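Data poisoning can likewise be sketched with a minimal example. The credit-score data and the simple threshold classifier below are invented for illustration: by injecting a handful of mislabelled records, the attacker shifts the learned decision boundary and degrades accuracy on clean data.

```python
def train_threshold(data):
    """Learn a decision threshold halfway between the two class means."""
    approve = [x for x, label in data if label == 1]
    deny = [x for x, label in data if label == 0]
    return (sum(approve) / len(approve) + sum(deny) / len(deny)) / 2


def error_rate(threshold, data):
    """Fraction of records the threshold rule misclassifies."""
    wrong = sum(1 for x, label in data if (x >= threshold) != (label == 1))
    return wrong / len(data)


# Hypothetical clean training data: (credit score, approve=1 / deny=0).
clean = [(700, 1), (720, 1), (740, 1), (500, 0), (520, 0), (540, 0)]

# The attacker poisons the set: high scores mislabelled as "deny".
poisoned = clean + [(900, 0), (950, 0), (980, 0)]

t_clean = train_threshold(clean)
t_poisoned = train_threshold(poisoned)

print(error_rate(t_clean, clean))     # 0.0 on the clean data
print(error_rate(t_poisoned, clean))  # strictly higher: the boundary shifted
```

The poisoned points drag the “deny” class mean upward, pushing the threshold above some legitimate approvals, which is exactly the error-rate distortion the paper describes.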
