Two AI bots walk into a bank...

By Rob Neely

The Risks of Bots Learning from Each Other and Propagating Biases and Unethical Behaviours.

In recent years, artificial intelligence (AI) and machine learning (ML) have seen significant advancements, leading to the development of autonomous bots capable of learning from each other.

While this collaborative learning has its merits, there are inherent risks involved.

Bots learning from flawed or biased data can propagate biases and engage in unethical behaviours, posing challenges for society.

A case in point is the failed Robodebt scheme, which the Australian Government allowed to operate in recent years.

This article explores the potential consequences of bots learning from each other without proper oversight and highlights the importance of addressing these issues.

The Propagation of Biases:

Bots learn from vast amounts of data, which often reflects the biases and prejudices present in society.

When these bots learn from each other, they can inadvertently reinforce and propagate existing biases. If one bot learns a biased behaviour from flawed data, it may share that knowledge with other bots, leading to a collective reinforcement of the same bias.

This process can perpetuate discrimination, inequality, and unfairness across many domains, including hiring, loan approvals, and criminal justice.

For example, if an AI bot learns that certain demographic groups are less likely to repay loans based on biased historical data, it may inadvertently pass on this flawed understanding to other bots.

Consequently, this can result in discriminatory loan practices that disproportionately affect specific communities, perpetuating social and economic disparities.
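
To make the propagation mechanism concrete, here is a minimal sketch in Python (synthetic data, a single hypothetical income feature plus a demographic group flag, and two scikit-learn models standing in for the bots). The first bot learns from biased historical approvals; the second bot learns only from the first bot's decisions, and the approval-rate gap carries over even though it never sees the original records.

```python
# Minimal sketch: a biased "teacher" bot passes its bias to a "student" bot.
# Synthetic data and hypothetical features only -- not a real lending model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Applicant features: standardised income and a demographic group flag (0 or 1).
income = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
X = np.column_stack([income, group])

# Historical approvals are biased: group 1 is approved less often at the same income.
y_hist = (income - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

teacher = LogisticRegression().fit(X, y_hist)          # bot 1 learns from biased history

# Bot 2 never sees the historical records; it learns only from bot 1's decisions.
income_new = rng.normal(size=n)
group_new = rng.integers(0, 2, size=n)
X_new = np.column_stack([income_new, group_new])
student = LogisticRegression().fit(X_new, teacher.predict(X_new))

for name, model in [("teacher", teacher), ("student", student)]:
    approvals = model.predict(X_new)
    print(f"{name}: approval rate group 0 = {approvals[group_new == 0].mean():.2f}, "
          f"group 1 = {approvals[group_new == 1].mean():.2f}")
```

The numbers themselves are beside the point; the mechanism is what matters. Once biased decisions have been exchanged between systems, cleaning the original dataset no longer removes the bias from the downstream bot.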

Unethical Behaviours:

Another significant concern arises when bots learn unethical behaviours from each other.

If one bot discovers a loophole or engages in unethical actions, it may share this knowledge with others, leading to a cascade of unethical behaviour.

Without appropriate safeguards, this can result in bots engaging in activities such as spreading misinformation, engaging in fraudulent practices, or manipulating public opinion.

For instance, social media bots can learn from each other to amplify fake news, propaganda, or hate speech.

If one bot successfully spreads false information to a large audience, other bots may adopt and reinforce this behaviour, resulting in the widespread dissemination of misinformation. This poses a serious threat to the integrity of public discourse and democratic processes.

Potential for Fraud:

AI chatbot biases, whether positive or negative, can contribute to fraudulent activities within the banking industry in the following ways:

1. Positive Biases:

Positive biases in AI chatbots can lead to fraud by promoting preferential treatment or providing inaccurate information to certain customers. Here’s how it can happen:

a) Preferential Treatment: If an AI chatbot exhibits positive biases towards specific customers or clients, it may prioritize their requests, provide them with unauthorized access to sensitive information, or grant them privileges that they are not entitled to. This preferential treatment can create opportunities for fraudsters to exploit the system by impersonating favoured individuals or gaining unauthorized access to accounts.

b) Misrepresentation of Risk: Positive biases in AI chatbots can lead to an underestimation of the risks associated with certain transactions or clients. If the chatbot downplays potential risks based on biased perceptions, it may fail to flag suspicious activities or alert bank officials to potential fraudulent behaviour. This can allow fraudulent transactions to occur undetected.

2. Negative Biases:

Negative biases in AI chatbots can also contribute to fraud by discriminating against certain customers or perpetuating stereotypes that can be exploited. Here are a few examples:

a) Discrimination: Negative biases in AI chatbots can result in discriminatory practices, such as denying services or imposing stricter scrutiny on individuals from specific demographic groups. This discrimination can lead to situations where individuals who genuinely require banking services are unfairly denied access, potentially driving them towards illegitimate means or fraudulent activities.

b) Stereotyping: Negative biases can result in AI chatbots stereotyping individuals or groups as being more likely to engage in fraudulent behaviour. If a chatbot exhibits such biases, it may subject certain customers to increased scrutiny, impose unnecessary restrictions, or flag legitimate transactions as suspicious. This can create frustration among customers and, in some cases, motivate individuals to resort to fraudulent activities as a response to perceived mistreatment.
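
The stereotyping risk is also measurable. As a rough illustration (a synthetic decision log with hypothetical column names), a bank can compare how often its screening bot flags legitimate transactions from each customer group; a persistent gap between otherwise comparable groups is exactly the "increased scrutiny" pattern described above.

```python
# Rough check: does the screening bot flag legitimate transactions from one
# customer group more often than another? Synthetic log, hypothetical columns.
import pandas as pd

log = pd.DataFrame({
    "customer_group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "is_fraud":       [False, False, True, False, False, False, True, False],
    "bot_flagged":    [False, False, True, True, True, False, True, False],
})

# False-positive rate per group: genuine transactions that were still flagged.
legitimate = log[~log["is_fraud"]]
false_positive_rate = legitimate.groupby("customer_group")["bot_flagged"].mean()
print(false_positive_rate)
# A markedly higher rate for one group points to the unnecessary scrutiny
# described above and warrants a review of the model and its training data.
```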

Mitigating AI Chatbot Biases and Fraud: Addressing the Challenges

To mitigate the risks associated with bots learning from each other, it is crucial to implement robust safeguards and oversight mechanisms.

1. Data Diversity: Ensuring diverse and representative datasets can help minimize biases in the learning process. By incorporating a wide range of perspectives and experiences, the bots can learn in a more balanced manner.

2. Bias Detection and Removal: Regular evaluation and monitoring of AI chatbots can help identify and address biases. By leveraging techniques such as algorithmic audits and bias detection algorithms, banks can ensure that their chatbots provide fair and unbiased responses to customers. A minimal audit sketch follows this list.

3. Ethical Guidelines and Oversight: Establishing clear ethical guidelines and regulatory oversight for the development and deployment of AI chatbots is essential. These guidelines should emphasize fairness, transparency, and accountability, ensuring that chatbots operate in alignment with ethical and legal standards.

4. Diverse Training Data: Training AI chatbots on diverse and representative datasets can help reduce biases. By incorporating data from a wide range of sources and demographics, chatbots can learn to provide accurate and unbiased information to all customers, regardless of their background.

5. Human-In-The-Loop Approach: Incorporating human oversight and intervention is crucial to mitigate biases and prevent fraudulent activities. Human experts should review and validate the responses and actions of AI chatbots, ensuring that they align with ethical standards and do not perpetuate biases. A simple routing sketch also follows this list.

6. Continuous Monitoring: Regular monitoring and auditing of AI systems can help identify and rectify biases or unethical behaviours. Implementing mechanisms to detect and correct these issues in real time is crucial to prevent their propagation; the audit sketch below can equally be run on a schedule for this purpose.
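
As a rough illustration of point 2 (and, if run on a schedule, of the continuous monitoring in point 6), the sketch below defines a small audit helper. The function name, tolerance value, and example data are assumptions for illustration rather than an industry standard: it computes the gap in favourable-outcome rates between groups (a demographic parity check) and raises an alert when that gap exceeds a chosen tolerance.

```python
# Hypothetical bias audit helper: compares favourable-outcome rates across
# customer groups and alerts when the gap exceeds a chosen tolerance.
from collections import defaultdict

def demographic_parity_gap(outcomes, groups):
    """Largest difference in favourable-outcome rate between any two groups.

    outcomes: iterable of 0/1 decisions (1 = favourable, e.g. loan approved)
    groups:   iterable of group labels aligned with outcomes
    """
    totals = defaultdict(lambda: [0, 0])              # group -> [favourable, total]
    for outcome, group in zip(outcomes, groups):
        totals[group][0] += outcome
        totals[group][1] += 1
    rates = [favourable / total for favourable, total in totals.values()]
    return max(rates) - min(rates)

def audit(outcomes, groups, tolerance=0.05):
    gap = demographic_parity_gap(outcomes, groups)
    if gap > tolerance:
        # In a real deployment this would notify the responsible team, not print.
        print(f"ALERT: outcome-rate gap {gap:.2f} exceeds tolerance {tolerance:.2f}")
    return gap

# Example: decisions logged from a chatbot-assisted approval flow.
audit(outcomes=[1, 1, 0, 1, 0, 0, 0, 1],
      groups=["A", "A", "A", "A", "B", "B", "B", "B"])
```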
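
Point 5 can be sketched in a similar spirit. The routing rule below is one possible shape for human-in-the-loop review, with hypothetical intent names and a made-up confidence threshold: decisions the chatbot is unsure about, or that touch sensitive actions, are queued for a person instead of being executed automatically.

```python
# Hypothetical human-in-the-loop routing: low-confidence or sensitive chatbot
# decisions are queued for a human reviewer rather than executed automatically.
from dataclasses import dataclass, field

SENSITIVE_INTENTS = {"increase_limit", "close_account", "waive_fraud_flag"}

@dataclass
class BotDecision:
    intent: str
    confidence: float       # the model's confidence in its proposed action, 0..1
    proposed_reply: str

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def route(self, decision: BotDecision, min_confidence: float = 0.9) -> str:
        needs_human = (decision.confidence < min_confidence
                       or decision.intent in SENSITIVE_INTENTS)
        if needs_human:
            self.pending.append(decision)    # a person reviews before anything happens
            return "escalated_to_human"
        return decision.proposed_reply       # routine and confident: answer automatically

queue = ReviewQueue()
print(queue.route(BotDecision("balance_enquiry", 0.97, "Your balance is ...")))
print(queue.route(BotDecision("waive_fraud_flag", 0.98, "Flag removed.")))
```

In practice the confidence threshold and the list of sensitive intents would be set and reviewed by the bank's risk and compliance teams, alongside the audit results above.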

Conclusion:

AI chatbot biases, whether positive or negative, have the potential to contribute to fraud within the banking industry.

To mitigate these risks, it is vital to implement measures such as bias detection and removal, ethical guidelines, diverse training data, and human oversight. By addressing biases and promoting fairness, banks can ensure that their AI chatbots provide accurate and unbiased information.

While collaborative learning among bots holds tremendous potential, it also comes with significant risks. The propagation of biases and engagement in unethical behaviours can have far-reaching consequences for individuals and society as a whole.