What is Responsible AI?
Responsible AI refers to developing and deploying AI solutions in a way that prioritizes ethics, society, and the environment.
Responsible AI adheres to four pillars:
- Accountability and trust
- Non-bias, fairness and ethics
- Interpretability, transparency and explainability
- Privacy and data protection
Irresponsible AI is artificial intelligence that has negative consequences for society or the environment, and is therefore distrusted by its stakeholders. Continue reading to learn more about both types of AI.
It is important to note that AI used for good is not by default responsible or ethical. For instance, an AI-based model that uses machine learning to save rhinos from poachers isn’t automatically ethical or responsible. Regardless of its goal, an AI-based model should be applied responsibly, i.e. free of factors that might result in biased, opaque, unfair, and socially or environmentally detrimental decisions.
What are examples of irresponsible or unethical AI?
Common issues associated with the irresponsible use of AI include systems that are misused by stakeholders, that treat groups of people harmfully or unfairly, or that damage society or the environment.
Irresponsible or unethical AI can be grouped into the following categories:
Unfair or biased AI
These are usually systems that exhibit bias in their decision-making, often leading to discriminatory or unfair outcomes for minority groups. The bias may stem from skewed training data or from flawed algorithms that disproportionately impact certain groups.
Examples of unfair or biased AI include:
- AI systems used for filtering out the best applicants to an academic institution: These can create a self-fulfilling prophecy, whereby groups historically associated with poorer academic performance are placed at an unfair disadvantage that further reinforces the status quo (a minimal code sketch after this list illustrates the mechanism).
- AI systems used for predicting sentence length for convicts: These might produce predictions that are unfair to groups that are statistically over-represented in imprisonment data, even though correlation is not causation.
- AI systems used at the tax department to identify fraudulent behavior based on features that can be considered biased.
- Facial recognition AI, where white male subjects are recognized significantly better than subjects with other skin tones and genders. The documentary Coded Bias discusses facial recognition algorithms that don't see dark-skinned faces accurately, a phenomenon that MIT Media Lab researcher Joy Buolamwini documented in her research.
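The self-fulfilling prophecy mentioned in the first example can be illustrated with a minimal, hypothetical Python sketch: a model trained on historically biased admission decisions simply learns to reproduce that bias. All data and numbers below are synthetic assumptions, not results from a real system.

```python
# A minimal sketch of how a model trained on biased historical decisions
# reproduces that bias. Everything here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Two applicant groups with identical true ability.
group = rng.integers(0, 2, size=n)        # 0 = majority, 1 = minority
ability = rng.normal(0, 1, size=n)

# Historical decisions: same ability, but the minority group was
# admitted less often (the historical bias we then train on).
admitted = (ability + rng.normal(0, 0.5, n) - 0.8 * group) > 0

X = np.column_stack([ability, group])
model = LogisticRegression().fit(X, admitted)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted admission rate = {pred[group == g].mean():.2f}")
# Despite identical ability distributions, the model admits the minority
# group at a noticeably lower rate, perpetuating the historical pattern.
```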
AI with negative environmental impact
This category generally includes any AI model that is extremely heavy to train (which is becoming increasingly common). Such models harm the environment through their substantial energy consumption.
As discussed in a previous blog about AI's carbon footprint, researchers who assessed the energy cost and carbon footprint of four NLP models found that, at worst, the process of training an algorithm can emit more than 626,000 pounds (roughly 284 metric tons) of carbon dioxide equivalent. That’s a lot, considering that in 2016 the average person in the Netherlands emitted about 10 tons of CO2 equivalent per year.
Unaccountable AI
Generally, these are AI systems that lack transparency, making it difficult for humans to understand how they make decisions or to hold them accountable for their actions.
Examples of unaccountable AI include:
- Autonomous weapon systems that work without human intervention, making it difficult to determine who is responsible for any harm or damage that they cause.
- Chatbots that may provide misleading or even incorrect information to users without any accountability for the information shared by the system.
- Financial trading algorithms that can cause market instability or engage in unethical practices, with no mechanism in place to identify or correct such behavior.
Unethical autonomous AI
Autonomous systems operate independently and make decisions without human intervention, but this independence sometimes puts them at risk of acting unethically.
Examples of unethical autonomous AI include:
- Self-driving cars that use AI algorithms to decide when to accelerate, brake, and steer based on data from sensors and cameras: These may face moral dilemmas, such as whether to protect the passengers in the car or the pedestrians in a risky situation.
- AI systems that automatically filter out job applicants: These can be difficult to audit, and the quality of their decisions may decline if their data is not carefully audited and kept up to date by human input.
- Drones programmed to fly autonomously and perform tasks such as mapping, surveying, and inspecting infrastructure: These have also been programmed unethically for tasks such as surveillance or hunting.
Opaque AI
This term refers to AI systems that are so complex that they are hard to understand. The term “black box AI” specifically refers to opaque machine learning models, where it is similarly difficult to understand how the model arrives at its decisions. A short sketch after the examples below contrasts an interpretable model with an opaque one.
Examples of opaque AI include:
- Credit scoring algorithms: These may use complex calculations to determine how creditworthy a person is, but they do not always provide a clear explanation of the methodology and the factors considered.
- Deep neural networks trained to perform complex tasks: These are often difficult to interpret because they involve thousands of parameters.
- Complex decision trees: These can also be an example of opaque AI, as they become difficult to interpret once they comprise many branches or decision nodes.
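To make the contrast concrete, here is a minimal sketch (using scikit-learn on a synthetic dataset, purely as an illustrative assumption) that compares a shallow decision tree, whose full decision logic can simply be printed, with a neural network whose thousands of weights offer no comparably direct explanation.

```python
# Contrast an interpretable model with an opaque one on the same synthetic task.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1_000, n_features=5, random_state=0)

# A shallow decision tree: its entire decision logic can be printed and read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"feature_{i}" for i in range(5)]))

# A neural network with thousands of weights: there is no equally direct way
# to read off why it made a particular decision.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X, y)
n_params = sum(w.size for w in mlp.coefs_) + sum(b.size for b in mlp.intercepts_)
print("MLP parameters:", n_params)
```

Interpretability tooling such as ELI5 (mentioned in the tools section below) can partially open up such black-box models, but the gap in transparency between the two kinds of models remains.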
Legally non-compliant AI
This refers to AI systems that do not comply with applicable laws or regulations, potentially resulting in legal and financial consequences.
Examples of legally non-compliant AI include:
- Healthcare AI that violates patient privacy by collecting or processing patient data without proper consent or in violation of patient privacy laws.
- Some facial recognition systems that collect and store personal data without obtaining proper consent, violating laws such as GDPR.
- Deepfakes, AI-generated images or videos of real people that are usually created for malicious purposes, such as political manipulation and spreading false news. These may have harmful social effects and undermine public trust.
What are the tools to implement Responsible AI?
Every AI implementation, no matter how small, should be responsible. Responsible AI is not an optional tool, but a way of working that should become the default for any current or future AI project. Recognizing irresponsible AI within a project is the first step to mitigating its risks.
Examples of tools and frameworks that can be used to establish responsible AI in projects include:
- Ethics and value exploration workshops: It is important to start by establishing ethical guidelines and principles that align with the project goals.
- Risk assessment and analysis canvases: This assessment should be done before the development of the AI system to address potential issues such as bias, privacy, data quality, transparency, and so on.
- Stakeholder analysis and involvement workshops: It is possible to gather diverse perspectives on possible risks by involving multiple stakeholders (project managers, developers, data scientists, etc.) from the beginning of the project.
- Activity / process / task / context analysis
- Bias analysis and mitigation tooling, such as FairLearn (a minimal usage sketch follows this list)
- Interpretability tooling, such as ELI5
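As an example of the bias analysis tooling mentioned above, the sketch below uses Fairlearn’s MetricFrame to break a model’s accuracy and selection rate down per group of a sensitive feature. The labels, predictions, and gender values are hypothetical placeholders; in practice they would come from your own model and dataset.

```python
# A minimal bias analysis sketch with Fairlearn's MetricFrame.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical labels, predictions, and sensitive feature values.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
gender = np.array(["F", "F", "M", "F", "M", "M", "M", "F"])

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)

# Per-group metrics and the largest gap between groups.
print(mf.by_group)
print("largest gap per metric:\n", mf.difference())
```

Large per-group differences are a signal to investigate the training data and, where needed, apply mitigation techniques before the system goes further in the project.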
These tools and techniques can be used to overcome challenges hindering the responsible use of AI, or to maximize responsible AI’s impact, by:
- Identifying, assessing, analyzing, and mitigating the risks of irresponsible AI, and/or
- Identifying, assessing, analyzing, and maximizing the positive impact of AI on society while also monitoring its environmental impact.
It is important to keep an eye out for irresponsible AI throughout any project, even after it has been put into production. If you observe risks to the responsible application of AI during a project, guided frameworks can assist with identifying and resolving those risks.
Why is responsible AI important?
Before AI became popular for all types of use cases, decisions were made based on previous experiences and business rules. Nowadays, we can automate decision-making by training a machine based on lots and lots of data.
Irresponsible decisions are not a new phenomenon: decisions can always be biased, whether they are based on (human) experience or on data. But while human bias is not new, computer bias is. In addition, the scale of the problem becomes much bigger when a machine makes the same biased decision over and over, impacting large numbers of people.
Responsible AI is becoming increasingly important because:
- We’re increasingly applying AI to support decision-making in all aspects of life, from grocery recommendations to personalized healthcare.
- Many users of AI systems don’t fully understand the inner workings of those AI systems, which puts them at risk of being treated unfairly.
- The risk of introducing bias or other unwanted subjectivity into a model is not always obvious. For example, historical job application data may contain more applications from men than from women, introducing unintended bias against female applicants.
- Using AI to improve decision-making in risky situations can hurt groups of people who are not prioritized. For example, if AI is used to guide firefighters to the riskiest buildings or areas first, this targeted treatment may neglect groups that were not initially considered.
- Current technology makes it possible to scale these decision-support tools almost infinitely, greatly amplifying their impact.
- Auditing decision-making processes that are aided by AI is often very hard, since the way that models have been trained/set up is not always transparent and/or explainable.
Therefore, it is essential for people involved in creating or using AI to be aware of the concepts of Responsible AI. The application domain of AI will only grow further; now is the time to embed these responsibility concepts in every AI application.
How to make an AI-run model more environmentally friendly?
To reduce energy consumption during machine learning model training, developers can opt for less resource-intensive models. This can be done by choosing simpler or smaller models, or by using more efficient algorithms.
Additionally, training can be done less frequently by reusing pre-trained models through transfer learning, or by optimizing hyperparameters to reduce the number of iterations required. For example:
- A decision tree algorithm can sometimes solve a problem that would be far more costly to tackle with a random forest or neural network (see the comparison sketch after this list).
- Linear regression models are another example of simpler models that can be used as a first choice instead of more complex and expensive ones.
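To decide whether a simpler model is good enough, it helps to compare it against a heavier alternative on the same task before committing to the expensive option. The sketch below is a minimal illustration using scikit-learn and a synthetic dataset; the specific models and data are assumptions for demonstration only.

```python
# Compare a cheap model with a heavier one on the same task, measuring both
# accuracy and training time.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("random forest", RandomForestClassifier(n_estimators=300, random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={model.score(X_test, y_test):.3f}, fit time={elapsed:.2f}s")

# If the simple model's accuracy is close enough for the use case, the much
# cheaper model is also the more environmentally friendly choice.
```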
If a heavy model is still required for your application, there are a few techniques that can reduce energy consumption and be more responsible towards the environment:
- Pruning, which involves removing unnecessary parameters from a neural network, either individually or in entire groups, reducing the model’s size and therefore the computational resources it requires.
- Model distillation, which involves compressing complex models into simpler ones, requiring less energy to be trained.
- Knowledge distillation, which involves transferring the knowledge from a large model to a smaller one while largely maintaining its quality. Smaller models can run on less powerful hardware because they are cheaper to evaluate.
- Quantization reduces the precision of numerical data used in machine learning models (e.g. from 32-bit to 16-bit floating-point numbers). This technique can significantly reduce energy consumption, often with little loss of model accuracy. A short sketch of pruning and quantization follows this list.
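The sketch below illustrates two of these techniques in PyTorch: magnitude-based pruning with torch.nn.utils.prune and post-training dynamic quantization. The tiny model is a placeholder assumption; actual energy savings depend heavily on the model architecture and the deployment hardware.

```python
# A minimal sketch of pruning and dynamic quantization on a toy model.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Pruning: zero out the 30% smallest-magnitude weights of each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Dynamic quantization: store Linear weights as 8-bit integers for inference.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 128)
print(quantized(x).shape)  # the smaller model still produces the same output shape
```

Dynamic quantization mainly reduces the cost of inference on CPUs; for savings during training itself, lower-precision (e.g. 16-bit) training is a more common route.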
What is the European regulation on artificial intelligence?
In February 2020, the European Commission published a white paper on artificial intelligence, proposing a European regulatory framework for trustworthy AI. Against this background, the Commission unveiled its proposal for a new Artificial Intelligence Act in April 2021.
The Commission proposes a set of rules tailored to a risk-based approach with four levels of risk:
- Unacceptable risk AI: harmful usage of AI
- High-risk AI: systems that have a large impact on people’s safety or rights
- Limited risk AI
- Minimal risk AI
To read more about this, visit our blog that extensively explains the EU AI Act of 2021.
Responsible AI at Xomnia
Xomnia’s mission is to empower people, society and businesses to responsibly seize the enormous opportunities offered by data and AI.
As the number of opportunities provided by AI grows every day, so do the cases where AI is misused for various reasons and in various circumstances. Therefore, the responsible part of our mission is more relevant now than ever.
To read more about Xomnia’s commitment to ethical and responsible AI, click here.