AI and Ethics: Balancing Innovation with Responsibility
The concept of ‘Intelligent Machinery’ was first introduced by Alan Turing in 1948. Computer programs with the ability to play chess followed in the 1950s. Fast forward roughly 70 years to the present, where we interact with computer intelligence several hundred times a day. From smart home setups that gradually brighten our homes to social media algorithms that define what we’re shown in our feeds, AI programs have become ingrained in the lives of the general population. The release of ChatGPT in 2022 catapulted the concept of AI to the forefront of the public’s awareness. ChatGPT is an example of a chatbot that makes use of generative AI (AI that can create content, e.g. text, images and videos, in response to user prompts). Now, not only can our devices help us carry out tasks, they can also create new material for us. Thus, interest in and usage of AI continue to rise rapidly in daily life.
As AI technology has become ingrained in our lives, it has become increasingly crucial to ensure these technologies are governed and don’t lead to unintended negative consequences. As our reliance on and trust in these systems grow, so do the risks that accompany them. This is where the concept of AI ethics comes into play. According to www.gov.uk, AI ethics is defined as ‘A set of values, principles and techniques that employ widely accepted standards to guide moral conduct in the development and use of AI systems.’
Some examples of the dangerous impacts AI can have include cases where these systems have demonstrated inherent biases. Facial recognition software has faced criticism for being racially discriminatory, with identification accuracy varying drastically between Caucasian faces and those of other ethnic backgrounds. In addition to providing phone security, this software is used for surveillance and screening, where such errors could result in racial harassment and exacerbate inequality. In the health sector, AI models have been shown to miss the detection of diseases in women. Beyond these issues of bias, AI also produces a great deal of CO2 emissions. So, whilst AI is used a great deal in the fight against climate change, the technology itself is also adding to the global carbon footprint.
What causes ethical issues?
The ethical issues around AI can broadly be grouped into two categories – carbon emissions and bias/fairness. We’ll now dive into the reasons these issues occur.
Carbon emissions
Human activity that produces excessive emissions of carbon dioxide into the atmosphere is one of the leading causes of climate change. Whilst activities such as travelling by plane often come to mind, all activities produce carbon emissions to a greater or lesser extent. The creation and maintenance of AI products is no exception. There are two types of emissions from AI:
- Lifecycle – carbon emitted while manufacturing the materials required for AI. A large variety of materials are required to create the computers, servers and the buildings that house them: from the silicon used to make computer chips, to copper for wiring, through to the concrete and bricks of data centres.
- Operational – carbon emitted from the energy required to operate AI. Whilst the lifecycle emissions described above cover the physical infrastructure needed to create AI products, operational emissions come from the electricity that powers these products. This includes creating and training the underlying models and algorithms, running the technology day to day, and running the associated software that monitors its outputs.
As discussed, a large amount of carbon is produced through the AI technology itself. However, another huge contributor comes from the data required to run the technology. AI models learn from the data they are trained on. As these models are becoming more sophisticated, they require larger amounts of data, which needs to be stored somewhere.
Data centres are large buildings containing thousands of servers that store the data. The servers produce vast amounts of heat; therefore, these buildings need cooling to prevent the servers from overheating, which requires both water and electricity (in addition to the electricity that powers the servers themselves). The amount of data generated and stored has increased exponentially over the last decade and continues to do so, thus increasing carbon emissions.
Bias/Fairness
As mentioned above, AI is trained on data. Therefore, this data dictates the outputs of the AI. If there are any biases that exist in the data, the AI will inherit these. Some examples of biases include:
- Unrepresentative samples – certain characteristics or attributes are missing from the data or present only in small numbers. It is this type of bias that led to the racial discrimination in facial recognition software. The software was trained on photographs of faces to learn when it is being presented with a face and how to tell faces apart. However, it transpired that the photos used were predominantly of Caucasian males, so when presented with faces of other ethnic backgrounds and genders, the software was not able to distinguish between them accurately (a simple check for this kind of imbalance is sketched after this list).
- Reinforcements of stereotypes – data mimics commonly held stereotypes. For example, if a dataset on hospital staff consisted of nurses that were majority female and doctors that were majority male, AI trained on this data would make gender and job role assumptions that mimic this.
- Inaccurate measurement – where the method used to collect the data is flawed, the resulting data will also be flawed. For example, if photographic data was collected using a camera with a faulty flash, the photos may be over- or under-exposed. When an AI trained on this data came across correctly exposed photos, it might misclassify them.
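To make the first of these concrete, here is a minimal sketch (using pandas, with entirely hypothetical data and column names) of how one might check whether groups are under-represented in a training set:

```python
import pandas as pd

# Hypothetical metadata for a face-recognition training set
faces = pd.DataFrame({
    "ethnicity": ["White", "White", "White", "White", "Black", "Asian"],
    "gender":    ["M", "M", "F", "M", "M", "F"],
})

# Share of each ethnicity/gender combination in the data;
# very small (or absent) shares flag under-represented groups
shares = faces.value_counts(["ethnicity", "gender"], normalize=True)
print(shares)
```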
In addition to the data used, biases can creep into AI during the development of the models, because AI models are created by humans who have innate biases of their own. Data is rarely fed to AI in its raw form; it needs to be ‘cleaned’ and manipulated first. For example, a person’s age is often calculated from their date of birth, and in some systems a missing date appears in the data as ‘1900-01-01’. A person handling the data can recognise that this is not a real date of birth and act accordingly, for example by ensuring the age for those people is labelled ‘Missing’.
Each developer is influenced by their background and perspectives when preparing data for AI models. Depending on a developer’s culture and upbringing, for instance, they will have been exposed to certain foods. If foods the developer is unfamiliar with are missing from a dataset, they may not realise the data is unrepresentative. An AI that classifies food groups will then be equally unfamiliar with those foods and misclassify them.
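As a hedged sketch of the cleaning step just described, assuming the data sits in a pandas DataFrame (all names and records here are hypothetical):

```python
import pandas as pd

# Hypothetical records where '1900-01-01' is a system placeholder, not a real date of birth
people = pd.DataFrame({
    "name": ["Ada", "Ben", "Cleo"],
    "date_of_birth": ["1985-06-12", "1900-01-01", "1992-11-03"],
})

people["date_of_birth"] = pd.to_datetime(people["date_of_birth"])

# Treat the placeholder as missing rather than letting a model learn from a bogus age of ~125
placeholder = pd.Timestamp("1900-01-01")
people.loc[people["date_of_birth"] == placeholder, "date_of_birth"] = pd.NaT

# Compute age only where a real date of birth exists; otherwise it stays missing (<NA>)
age_days = (pd.Timestamp.today() - people["date_of_birth"]).dt.days
people["age"] = (age_days // 365).astype("Int64")
print(people)
```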
How to avoid ethical issues
Whilst AI can lead to some very serious negative consequences, there are several strategies that can be implemented to avoid these.
Transparency
AI products are used and consumed by the public. It is, therefore, crucial that information on the development, outputs and usage of these products is openly available, to reduce the risk of ethical violations. Seeking diverse perspectives during development and using bias measurement tools can also help prevent bias in models and their underlying data.
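As an illustration of what a bias measurement can look like, here is a minimal hand-rolled sketch of one common fairness check, demographic parity, which compares the rate of positive model decisions across groups (the data and group labels are hypothetical):

```python
import numpy as np

# Hypothetical model outputs (1 = positive decision) and the group each case belongs to
predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups      = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "A", "B"])

# Rate of positive decisions per group; a large gap between groups suggests possible bias
rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
for g, rate in rates.items():
    print(f"Group {g}: positive rate = {rate:.2f}")

print(f"Demographic parity gap: {abs(rates['A'] - rates['B']):.2f}")
```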
There are also measurement tools to check how eco-friendly AI models are. This helps identify issues and inform business strategies to reduce emissions. For example, understanding how much CO2 is being generated by a company’s AI model can help them allocate an appropriate carbon footprint budget to keep this in check.
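One well-known open-source option is the CodeCarbon Python library, which estimates the CO2 emissions of a block of code. A minimal sketch of a typical usage pattern (the training function is a hypothetical stand-in):

```python
from codecarbon import EmissionsTracker

def train_model():
    # Hypothetical stand-in for a real training run
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker()  # estimates energy use from hardware power draw and grid carbon intensity
tracker.start()
train_model()
emissions_kg = tracker.stop()  # estimated kg of CO2-equivalent emitted by the tracked code
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```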
Ensuring continuous monitoring of AI products is also important. Data shifts over time, so a model's fairness and carbon footprint can shift too. Businesses need to be aware of any issues that arise as data evolves so they can mitigate them in time.
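As a minimal sketch of what such monitoring might look like, here is one simple approach: a two-sample Kolmogorov-Smirnov test that flags when live data has drifted away from the training data (the data, feature and threshold are illustrative assumptions):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # feature as seen at training time
live_feature     = rng.normal(loc=0.4, scale=1.0, size=5_000)  # same feature observed in production

# The Kolmogorov-Smirnov statistic measures the distance between the two distributions
statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}): re-check the model's fairness and footprint")
else:
    print("No significant drift detected")
```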
Model development and infrastructure
There are several strategies that AI developers leverage to reduce the environmental impact of AI products. Here are some:
- Using data centres geographically close to where the model is being run reduces the amount of energy required for data transfer.
- Using ‘eco-friendly’ centres, where environmentally friendly techniques are used to store the data, e.g. wind farms generating the electricity.
- To run an AI model, developers need computing clusters (a group of interconnected computers that work together as a single system to perform tasks). Using clusters of the right size to execute their model is key to avoid having extra idle power.
- Developing simpler models that avoid unnecessary complexity. This may mean choosing pretrained models rather than building one from scratch, but even if a bespoke model is necessary, there are techniques that can still improve efficiency.
- Terminating training for underperforming models early to conserve energy, and debugging issues on small-scale examples rather than the entire dataset where possible (a sketch of early stopping follows this list).
- Getting rid of unused data. Businesses can save energy from storage and reduce their carbon footprint by regularly checking what data they actually use and deleting any unnecessary data.
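Here is the early-stopping idea from the list above as a minimal runnable sketch; the training and evaluation functions are hypothetical stand-ins for a real model:

```python
import random

def train_one_epoch():
    pass  # hypothetical stand-in for one pass over the training data

def validation_loss() -> float:
    return random.uniform(0.5, 1.0)  # hypothetical stand-in for evaluating the model

best_loss, bad_epochs, patience = float("inf"), 0, 3

for epoch in range(100):
    train_one_epoch()
    loss = validation_loss()

    if loss < best_loss:
        best_loss, bad_epochs = loss, 0
    else:
        bad_epochs += 1

    # Stop early once the model has gone `patience` epochs without improving,
    # saving the energy the remaining epochs would have consumed
    if bad_epochs >= patience:
        print(f"Stopping at epoch {epoch}: no improvement for {patience} epochs")
        break
```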
Conclusion
This blog post has outlined some of the ethical considerations of AI, how they come about and how they can be mitigated by those who create these technologies. Unfortunately, AI bias and carbon emissions cannot be completely avoided; however, we can minimise them by employing ethical frameworks when creating these dynamic products. Striving to be as transparent as possible throughout the lifecycle of an AI tool can help identify potential biases and allows us to monitor its carbon footprint. Being vigilant about how the underlying models are developed and ensuring they run as efficiently as possible can reduce carbon emissions. It is important that those creating AI are mindful of its development and of the data used to train it. AI can be extremely powerful, and with great power comes great responsibility!
Ready to build a future of responsible AI? Contact Merkle today!