By Livi Adu

What ethics do I need to consider when using AI?

Beginner guide to AI ethics.

Artificial intelligence (AI) is rapidly transforming the cultural and heritage sector, with museums at the forefront of this shift. AI-powered technologies are being used to improve visitor experiences, enhance collections research, and automate museum operations. However, the use of AI in museums also raises a number of ethical concerns. This blog post will explore the key ethical considerations that museums must keep in mind when using AI: transparency, accountability, data protection, copyright, social impact, and bias. We will also provide a few recommendations for museums to consider when beginning to bring AI into the workplace, so that AI can be used in a responsible and ethical manner.


Cartoon-style image of three business people looking at a white grid with yellow lines; the central person has three arms. The background is blue with white question marks.
Having clear decision-making processes is important. This is an AI-generated image: notice how one person has three arms? AI doesn't always get things right... Dall-E 2 AI-generated image

Transparency and accountability

The cultural and heritage sector needs transparency and accountability in its use of AI. AI algorithms can be complex and difficult to understand, which makes it challenging to hold them accountable for their decisions. Organizations must have a clear understanding of the potential impact of AI and must take responsibility for its use. There must be clear lines of accountability, and organizations must be transparent about their decision-making processes. This will help ensure that the use of AI in the cultural and heritage sector is ethical and responsible. Here are some specific steps that organizations can take to improve transparency and accountability in the use of AI:

  1. Document and disclose AI systems. This includes information about the purpose of the system, the data it uses, and how it makes decisions (a simple example record is sketched after this list). Check out my AI usage policy here for inspiration.

  2. Provide mechanisms for feedback and oversight. This could include allowing users to challenge decisions made by AI systems or establishing review boards to oversee the use of AI.

  3. Be transparent about the use of AI and personal data. Organizations should inform individuals about how their personal data is being used and obtain their explicit consent.
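
If you are not sure what documenting an AI system might look like in practice, here is a minimal sketch in Python of the kind of record you could keep for each tool. The field names and example values are illustrative assumptions, not a formal standard.

```python
# A minimal, hypothetical record for documenting an AI system in use.
# Field names and values are illustrative; adapt them to your own governance process.

ai_system_record = {
    "name": "Visitor enquiry chatbot",            # what the system is
    "purpose": "Answer common visitor questions about opening hours and access",
    "provider": "Example vendor / in-house",      # who built or supplies it
    "data_used": ["FAQ pages", "opening hours"],  # no personal data in this example
    "personal_data": False,                       # flag systems that process personal data
    "decision_making": "Suggests answers; a staff member reviews before sending",
    "owner": "Visitor services manager",          # who is accountable for the system
    "review_date": "2024-06-01",                  # when the record was last checked
}

# Printing the record is a simple way to share it internally or publish it.
for field, value in ai_system_record.items():
    print(f"{field}: {value}")
```

Keeping records like this alongside an AI usage policy gives visitors and staff a clear picture of what each system does, what data it touches, and who is responsible for it.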


An illustration in a blue and white colour palette of a person's head and shoulders. The person is wearing a light blue mask and a dark blue top and is surrounded by icons representing data security, against a mid-blue background with vertical numbers cascading.
Data protection and privacy are essential, not only legally but also to maintain trust with your customers. Dall-E 2 AI-generated image.

Data Protection and Copyright

Since the introduction of GDPR regulations, museums have had to navigate carefully when using AI technologies that involve processing personal data. Striking a balance between providing personalized experiences for visitors and respecting their privacy rights can be a delicate task. AI could automate parts of this work, but its use in the culture and heritage industry raises many data protection and copyright concerns, including:

  • Privacy. Some AI systems rely on personal data to function, such as facial recognition technology, which raises concerns about the potential misuse of personal information. What about using ChatGPT to answer an email, or to sort through a spreadsheet containing all your clients' names and contact information? You may be sharing that data with the wider world: it can become part of the model's training data and could resurface for other users of ChatGPT. It is essential to be transparent about the use of personal data and to obtain explicit consent from individuals.

  • General Data Protection Regulation (GDPR). The GDPR requires organizations to be transparent about their use of personal data and to obtain explicit consent from individuals. This regulation helps protect people's privacy and ensures that their personal information is not being misused.

  • Copyright. AI-generated content raises questions about ownership and copyright. If AI creates works that are similar to existing works, who owns the intellectual property? Additionally, using AI to create works based on cultural heritage raises questions about cultural ownership and appropriation. It is important to consider the ownership and rights associated with any data or content used in AI-generated work. It is also essential to be transparent about the use of AI in creating works and to give credit to all parties involved in the process.

Here are some tips for mitigating these risks:

  1. Implement strong security measures. Organizations should take steps to protect personal data from unauthorized access, use, or disclosure, including before that data is shared with external AI tools (a simple redaction sketch follows this list).

  2. Consider the ownership and rights associated with AI-generated content. Organizations should be aware of the copyright implications of using AI to create works based on existing works or cultural heritage items.

  3. Give credit to all parties involved in the creation of AI-generated content. This includes both the humans who developed the AI system and the individuals who provided the data or content used to train the system.
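
As a practical illustration of the kind of security measure mentioned above, here is a minimal Python sketch that strips obvious personal data, such as email addresses and phone numbers, out of text before it is pasted into an external AI tool. The patterns and example text are assumptions for illustration only; simple pattern matching like this will not catch every kind of personal data and is no substitute for a proper data protection review.

```python
import re

# A minimal sketch of redacting obvious personal data before sharing text
# with an external AI tool. Illustrative only: these simple patterns will
# miss many cases (including people's names).

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace email addresses and phone-like numbers with placeholders."""
    text = EMAIL_PATTERN.sub("[EMAIL REDACTED]", text)
    text = PHONE_PATTERN.sub("[PHONE REDACTED]", text)
    return text

enquiry = "Please contact Jane Doe at jane.doe@example.org or 0117 496 0000."
print(redact(enquiry))
# Output: Please contact Jane Doe at [EMAIL REDACTED] or [PHONE REDACTED].
# Note that the name "Jane Doe" is not caught by these simple patterns.
```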


An office space illuminated with red fluorescent light. All of the desks are staffed by white robots in suits and headphones.
Do you think all our jobs will be replaced by robots? What will that mean for our futures? Dall-E 2 AI-generated image

Social impact

The use of AI in the culture and heritage industry has the potential to have a significant social impact, both positive and negative. On the positive side, AI can make cultural heritage more accessible to a wider audience, preserve cultural heritage, and promote cultural understanding. For example, AI can be used to develop new ways to interact with and understand cultural heritage, such as through virtual tours, interactive exhibits, and personalized recommendations. AI can also be used to automate the cataloguing of objects and the digitisation of historical texts, and to develop new methods for conservation and restoration. It can be used to create tools and resources to help people from different cultures learn about and appreciate each other's heritage.

The use of AI in the culture and heritage industry raises several social impact concerns, including:

  • Exacerbating existing inequalities. AI systems can be biased, and if they are used to make decisions about cultural heritage, they could perpetuate or even amplify existing inequalities. For example, an AI system that is used to recommend museum exhibits to visitors could be biased towards recommending exhibits that are relevant to certain cultural groups or socioeconomic backgrounds.

  • Cultural homogenization. AI systems can be used to create and distribute cultural content on a large scale. However, if they are not used carefully, they could lead to cultural homogenization, where a small number of dominant cultures are promoted over others. For example, an AI system that is used to recommend movies to viewers could be biased towards recommending movies from certain countries or cultures.

  • Job displacement. AI has the potential to automate tasks previously performed by humans, which could lead to job displacement in the cultural and heritage sector.

  • The impact of AI on social cohesion. AI systems can be used to manipulate people's opinions and beliefs. This could harm social cohesion, particularly if AI systems are used to spread misinformation or propaganda.

  • The impact of AI on human rights. AI systems could be used to violate human rights, such as the right to freedom of expression or the right to a fair trial. It is important to ensure that AI systems are designed and used in a way that respects human rights.

We must become aware of the potential social impacts of AI in the culture and heritage industry and take steps to mitigate any negative impacts. This could include:

  1. Developing ethical guidelines for using AI in the cultural and heritage sector. These guidelines should be developed in consultation with a wide range of stakeholders, including representatives from underrepresented groups.

  2. Investing in training and education programs to help people adapt to new roles and technologies. This is essential to ensure that everyone has the opportunity to benefit from the use of AI in the cultural and heritage sector.

  3. Designing AI systems that are inclusive and representative of the diversity of human cultures. This includes using diverse data sets to train AI systems and incorporating ethical considerations into the design and development process.


Matrix-style background: black with green symbols cascading vertically. In the centre of the image is a person's head and shoulders looking to the left. They have short black hair and headphones, and their face is glitching with light blue and pink.
AI hallucination could cause some real problems in terms of spreading misinformation and perpetuating stereotypes. Dall-E 2 AI-generated image

Bias

Bias is a major ethical concern in the use of AI in the culture and heritage industry. AI algorithms are only as unbiased as the data they are trained on, and if this data is biased, then the algorithm will produce biased results. This could lead to perpetuating stereotypes or marginalizing certain groups. Here are some of the ways that bias can manifest in AI systems in the cultural and heritage sector:

  • Curating or recommending content that is biased. For example, an AI system that is trained on a dataset of mostly Western art may be more likely to recommend Western art to users, even if they have expressed interest in other cultures.

  • Promoting a particular view of history. AI systems can be used to interpret and present cultural heritage in a variety of ways. However, if they are not used carefully, they could promote a particular view of history that is not accurate or inclusive. For example, an AI system that generates captions for museum exhibits could be biased towards focusing on certain aspects of cultural heritage over others.

  • Presenting inaccurate or misleading information to users.

  • Making biased decisions about funding or support. For example, an AI system that is used to assess grant applications may be more likely to favour applications from institutions that are already well-established or that are located in certain geographic regions.

  • Reinforcing existing biases in the way that cultural heritage is interpreted and presented. For example, an AI system that is used to generate captions for museum exhibits may be more likely to use stereotypical language or to focus on certain aspects of cultural heritage over others.

  • Undermining the credibility of cultural and heritage institutions.

  • Making up answers and producing false results, which is known as AI hallucination.

It is recommended that you verify sources and check information gathered through AI when using it for research or copywriting. When developing AI systems in your own organisation, it is essential to take steps to mitigate bias. Some of the ways this can be done are:

  1. Training AI systems on diverse and representative data. This means ensuring that the data used to train the system is representative of the population that it will be used to serve.

  2. Auditing and testing AI systems regularly for bias. This can be done by having human experts review the system's outputs or by using other methods to identify and address potential biases (a simple audit sketch follows this list).

  3. Being transparent about the limitations of AI. AI systems are not perfect, and it is important to be aware of their potential biases and errors.
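
To give a flavour of what a simple bias audit could look like, here is a hypothetical Python sketch that compares how often a recommender surfaces items from different parts of a collection with how those items are represented in the collection itself. The categories and counts are invented for illustration; large gaps between the two shares would be a prompt for further investigation, not proof of bias on their own.

```python
from collections import Counter

# Hypothetical audit: compare the share of recommendations per category
# with each category's share of the collection. Categories and counts
# are invented purely for illustration.

collection = {"European art": 400, "African art": 300, "Asian art": 300}
recommendations = (
    ["European art"] * 70 + ["African art"] * 15 + ["Asian art"] * 15
)

collection_total = sum(collection.values())
rec_counts = Counter(recommendations)
rec_total = len(recommendations)

for category, held in collection.items():
    collection_share = held / collection_total
    rec_share = rec_counts[category] / rec_total
    gap = rec_share - collection_share
    print(f"{category}: collection {collection_share:.0%}, "
          f"recommended {rec_share:.0%}, gap {gap:+.0%}")
```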


Conclusion:

AI has the potential to revolutionise the museum experience, but it is vital to use this technology responsibly and ethically. It is important to educate people about AI and its potential harms in order to create a culture of risk awareness. By being mindful of the potential risks and taking steps to mitigate them, museums can ensure that AI is used to benefit all visitors and communities. AI cannot replace human judgment, however: AI systems can lack human intuition and context sensitivity when making judgments or interpretations, a limitation that can affect areas such as curatorial decisions, where subjective judgment plays a significant role. AI systems can be powerful tools, but they should be used in conjunction with human oversight and expertise.


Here are some recommendations for museums to consider before implementing AI systems into their organisations:

  1. Be transparent about your use of AI and how you collect and use personal data. Obtain explicit consent from visitors before collecting and using their personal data. Implement strong security measures to protect personal data from unauthorized access, use, or disclosure.

  2. Be mindful of the potential for bias in AI systems and take steps to mitigate bias. This could be done by establishing an ethical review board to oversee the use of AI in the museum and engaging with stakeholders. This includes consulting with communities that could be affected by the use of AI, and seeking feedback on ethical issues.

  3. Be respectful of copyright and cultural ownership.

By carefully considering the ethical implications of AI, museums can help to ensure that this powerful technology is used in a way that benefits society and advances the museum's mission. There are plenty more ethical considerations that haven't been discussed in this blog post, such as:

  • The impact of AI on the workforce. AI could lead to job displacement in the museum sector, so it is crucial to think about how to support workers who may be affected by this change.

  • The impact of AI on accessibility. AI can be used to make museums more accessible to people with disabilities, but it is essential to ensure that AI systems are designed and implemented in a way that is inclusive and equitable.

  • The impact of AI on public trust. Museums are trusted institutions, and they need to maintain this trust when using AI. This means being transparent about how they are using AI and being accountable for the way that they use it.
