By Livi Adu

MKAI Global AI Ethics and Safety - People's Summit 2023


In this post, I'm excited to share a few of my favourite insights from the Inclusive MKAI Global AI Ethics and Safety People’s Summit that took place on 9th October 2023. This unique hybrid event served as a melting pot of ideas, bringing together industry experts, researchers, policymakers, and the public for a critical discussion on the ethical and safety implications of artificial intelligence. From exploring AI bias to delving into issues of transparency, privacy, and accountability, the summit covered a broad spectrum of topics. The aim was to highlight the potential benefits of AI for society and address the risks associated with it.


The amount of academic, publicly funded research into these topics is shockingly low; most research is instead being done inside the corporations that are creating the technologies.

A named individual needs to be held accountable and required to sign off on governance strategies, creating multiple checkpoints when AI software is used. We also need records and documentation to address the lack of standards and the lack of quality assurance that we are currently experiencing.

It is important to be transparent with the public about how we create our AIs, what data they are trained on, and what accountability structures are in place. (AI-generated image)


ChatGPT says safety standards alone may not be enough to prevent AI from potentially taking over the world; it also shared that, under certain circumstances, AI might ignore safety standards and human-made laws. But if we only explore the dystopian aspects of AI, we don't engage at a sufficient level of critical depth with what's at stake: the discussion becomes superficial, with only extreme outcomes being considered rather than the real-time issues we are dealing with. You also run the risk of creating so much fear around AI that only malicious people will use it, for nefarious purposes.

Security is a major issue: not only must data be held securely, but we must also consider how the data is used. (AI-generated image)


There was a general consensus that students shouldn't use AI to complete college assignments, or any assignment in general, because doing so doesn't demonstrate their own understanding or way of thinking. You should have your own understanding, your own grasp of the principles, and your own knowledge of what you're doing. However, overly strict rules shouldn't be in place either: it's up to the student how they use a tool like ChatGPT, for example asking it for websites to get information from instead of asking it to write the work for them. It is useful for creating a list of resources for the information you need and asking the AI for a summary.

Should we be preventing students from using AI to study, when they will be expected to use it in the workplace?

Biases and AI ethics

AI knows what bias is, but it is unable to recognise bias in its own data set. Our existing biases have been amplified by the data the AIs have been trained on.

At large scale, you will find that AI reinforces historical biases. People already struggle to recognise their own biases, let alone bias in AI or data sets. Although it is natural to have biases, failing to recognise them and letting them influence our decisions leads to bad decisions. The deeper issue with not addressing bias, either in yourself or in technology, is that it perpetuates stereotypes and leads to prejudice, which is then self-perpetuating.

For example, if an AI system is trained on historical data that is biased against a certain demographic group, such as women or people of a certain race or ethnicity, then the system may produce biased outputs that perpetuate stereotypes or marginalize that group. This can lead to unfair outcomes in decision-making processes or perpetuate systemic discrimination. Another example is image recognition algorithms that may struggle to accurately identify the faces of people with darker skin tones because they were primarily trained on data that consisted of lighter skin tones. This is a well-known issue in the AI community called "racial bias in facial recognition."

We have to be aware of this so that we can be better critical thinkers in our professional practice. Awareness is the first step; you then have to find ways to reduce the bias (with decision-makers who have relevant lived experience), and then apply that new learning to make your product better for the end user. We easily lose sight of bias when using AI tools, but AI hallucinations are bringing the issue to the forefront.

It's important to note that bias in AI systems is not necessarily intentional or malicious. Instead, it is often the result of incomplete or biased training data. Therefore, to mitigate bias in AI systems, it is important to train them on diverse and representative data, to audit and test them regularly for biases, and to incorporate ethical considerations into their design and development.
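The auditing step above can be made concrete. Below is a minimal sketch, with made-up numbers, of one common bias audit: checking "demographic parity", i.e. whether a model approves members of two groups at similar rates. Real audits use many metrics and real model outputs; the groups, decisions, and threshold here are purely illustrative assumptions.

```python
# Minimal demographic-parity audit sketch (hypothetical data).

def approval_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

# Hypothetical model outputs for two demographic groups (1 = approved).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved

gap = abs(approval_rate(group_a) - approval_rate(group_b))
print(f"demographic parity gap: {gap:.3f}")  # 0.375 -> worth investigating
```

A gap this large would prompt exactly the follow-up the summit called for: examining the training data for under-representation and involving people with relevant lived experience in the review.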

Will AI help us understand our biases or exacerbate them? (AI-generated image)


We need to start creating systems with all these people in mind, with the help of strong voices in every part of the globe. We need to hire the right kinds of people and invest in responsible AI, in responsible AI teams, and in that strategy. We need to build AI in an intersectional way by including psychologists, philosophers, social scientists, engineers, and people from underrepresented communities all over the world, because we need varied expertise to actually understand these problems in both the technical and the human sense.

Will we be replaced by AI, or will it help us attain higher goals? (AI-generated image)

AI hallucinations

AI sometimes makes up answers and produces false results; this is known as AI hallucination. Developers constantly test for this in their models by trying to trick AI solutions into hallucinating, and when one does, they put more guardrails in place to prevent it from happening again. They deliberately ask the AI misleading questions to see whether it will produce a wrong response; if it does, they go back and retrain it. Unless you understand where the risks are going to come from and take deliberate steps to control them, nothing is going to change. We need to educate people about what AI is, why it can harm people, and where that harm comes from, so that we can create a culture of risk awareness within our companies, where people know what AI is and how it can harm others, including employees and customers.
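The "trick question" testing described above can be sketched in a few lines. Everything here is a stand-in: `stub_model` is a hypothetical toy, not a real LLM API, and the fact store and questions are invented. The point is the pattern: ask questions with false premises, and check that the guarded model refuses rather than fabricates.

```python
# Red-team sketch: probe a (stubbed) model with trick questions and
# verify the guardrail refuses ungrounded answers instead of hallucinating.

FACTS = {
    "capital_of_france": "Paris",
    "moons_of_earth": "1",
}

def stub_model(question: str) -> str:
    """Hypothetical model: answers from the fact store, else fabricates."""
    if "capital of France" in question:
        return FACTS["capital_of_france"]
    # No grounding available -> the stub invents an answer,
    # mimicking a hallucination.
    return "The capital of Atlantis is Poseidonia."

def guarded_model(question: str) -> str:
    """Same model behind a guardrail: refuse when the answer is ungrounded."""
    answer = stub_model(question)
    return answer if answer in FACTS.values() else "I don't know."

# Trick questions with false premises should all be refused.
trick_questions = ["What is the capital of Atlantis?"]
failures = [q for q in trick_questions if guarded_model(q) != "I don't know."]
print("hallucination failures:", len(failures))  # prints 0
```

In real systems the "grounded" check is far harder (retrieval, citation checking, confidence scoring), but the loop of probe, detect, add guardrail, retrain is the same one the speakers described.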

Does AI dream, imagine, or hallucinate like humans do? (AI-generated image)


The event was more than just a discussion: it was a call to action. It provided a thorough exploration of the challenges we face as well as practical solutions. It was a truly engaging summit, and I learnt a lot about how we can create a more equitable future for all. MKAI's future events are well worth checking out; they always host diverse and engaging conversations on topics to do with AI.

Check out their website to find out more:

You should also have a look at their report all about the findings from the summit and surveys/feedback from members:


Foster-Fletcher, R. et al. (2023). Inclusive MKAI Global AI Ethics and Safety People’s Summit [Webinar]. MKAI. November 01, 2023. Available at:


