Are we ready for AI?
by Beth Singler
on 27 January 2020
With our lives increasingly affected by artificial intelligence (AI), there's a need for a big conversation that reaches beyond technologists.
I'm a huge science fiction fan – but I recognise it when I'm watching or reading it. The more I research artificial intelligence (AI), the more I'm concerned about the blurring of the line between fact and fiction.
As an anthropologist, I find AI fascinating. That's partly because it's such a slippery term, treated differently in different contexts. To those who work in the field, it can mean a very specific, narrow tool. For the general public, it can mean many different things, not least the assumptions driven by science-fiction narratives.
The press often hypes up even the most banal AI story and illustrates it with Terminator pictures, which gives the impression that AI already has potentially malevolent capabilities.
On the other hand, I once took a taxi with a very chatty driver who asked me what I do. I replied that I work in AI and his response was 'oh, artificial insemination'.
But while we're being distracted by such narratives and misunderstandings, actual applications right now are potentially quite dangerous.
When we focus on big, scary futures and the robo-apocalypse, we're not thinking about the big, scary present, the personal robot apocalypse and the less visible forms of AI that are already being implemented and are affecting people.
Losing trust
We've seen the influence of AI on social media and democracy. Now we're seeing the problem of deepfakes, which will further erode trust.
But issues of trust don't just lie in deliberate manipulation. Unconscious bias also has very specific demographic impacts. There's the well-known example of a soap dispenser whose sensor doesn't recognise darker skin, because the people building the technology weren't from a variety of ethnic backgrounds.
Part of the problem is that the stereotypes about tech companies do hold true in a lot of instances: they're often white, male, of certain generations – and that can limit perspectives.
While there is pushback against that, with more efforts to welcome people from different backgrounds - and while some larger tech companies have been good at forming connections with universities that have arts and humanities scholars - it's not always apparent how much those scholars are listened to. Sometimes, it's simply 'ethics washing'.
Biased neutrality
While unconscious bias is an issue, whether an algorithm can ever be fair or unbiased is a difficult question because our definition of fair and unbiased is, in itself, never unbiased.
You can say an algorithm is being neutral - but how do we define neutrality? Who gets to define what is a neutral response? Absolutely everything that goes into an algorithm - every dataset, every formulation of the algorithm - comes with our assumptions.
Amazon built an AI-driven CV-screening tool for human resources and tried to make the application process gender neutral. But the dataset included past successful applicants, and those successful applicants tended to be men, who tended to have done 'hockey' at university rather than 'women's hockey'. So although the process never asked whether candidates were male or female, the unsuccessful candidates, who did women's hockey, were more likely to have the word 'women's' in their CV, and the algorithm picked up on this.
It was biased because it was built on human presumptions that we'd already fed into the data without even knowing.
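To see how a proxy like that sneaks in, here's a minimal sketch - invented toy data, nothing like Amazon's actual system - of a bag-of-words classifier trained on historically skewed hiring labels. Gender is never a feature, yet the token 'women' ends up carrying the negative weight.

```python
# Toy sketch of proxy bias (invented data, not Amazon's system).
# Gender is never asked for, but a token correlated with gender
# ("women's") becomes a negative signal because the historical
# "hired" labels skew male.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "captain of the hockey team",            # hired
    "hockey team and maths society",         # hired
    "captain of the women's hockey team",    # rejected
    "women's hockey team and maths society", # rejected
]
labels = [1, 1, 0, 0]  # 1 = hired, 0 = rejected

vectoriser = CountVectorizer()  # default tokeniser splits off "women"
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, labels)

# Rank the learned weights: "women" gets the large negative
# coefficient, even though gender was never an explicit input.
for token, weight in sorted(zip(vectoriser.get_feature_names_out(),
                                model.coef_[0]), key=lambda p: p[1]):
    print(f"{token:>10s} {weight:+.3f}")
```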
AI in education
With AI in education there's a balance, as in almost any application of AI, between opportunities and risk.
In the UK, the opportunity seems to be the personalisation of education. We're at a crisis stage of underfunding, where teachers are faced with classrooms of 30-plus children and not enough time to give dedicated personal attention to every single learner, all of whom have very different needs. It makes sense to automate what can be automated in order to identify and meet children's needs better.
Meanwhile, some AI edtech companies, such as Squirrel AI in China, are focused on personalised pathways for education. In these products, the AI recognises each module that interests the student and then suggests the next module and the next. So the syllabus is less teacher-driven or even state-driven: it is a personalised syllabus.
My concern is that siloing children's learning, based on their showing interest in one topic, could be detrimental. One of the wonderful things about schools and universities is the opportunities they offer to explore new subjects and find new areas of interest that you didn't even know you could have.
This kind of AI-based recommendation system can also go terribly wrong. Or it can just be poor technology.
For example, on Amazon, the system sometimes recommends things similar to something you've already bought … well, I don't want 20 rugs. I've just bought one rug, so why would I still be looking at rugs?
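That failure is easy to reproduce. Here's a toy sketch - an invented catalogue, not Amazon's actual recommender - of why pure item-to-item similarity keeps pushing substitutes for something you've just bought, when complements would actually serve the shopper.

```python
# Toy sketch of the "20 rugs" failure (all data invented).
# Recommending by similarity alone ignores that buying a rug
# usually *satisfies* the need for a rug.
SIMILAR = {
    "wool rug": ["jute rug", "round rug", "runner rug"],
}
COMPLEMENTS = {
    "wool rug": ["rug pad", "floor lamp", "throw cushions"],
}

def recommend(last_purchase: str, mode: str = "similar") -> list[str]:
    # "similar" reproduces the failure: more rugs after you bought one.
    # "complements" is one obvious repair: things that go *with* it.
    pool = SIMILAR if mode == "similar" else COMPLEMENTS
    return pool.get(last_purchase, [])

print(recommend("wool rug"))                      # yet more rugs
print(recommend("wool rug", mode="complements"))  # things that pair with a rug
```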
Voices in the room
The answer to all of this is having a variety of voices in the room. You cannot leave it to one group of people, because it will have an impact on many different types of people: the integration of AI into our lives, day to day, is more than just a technological application.
It's going to impact people's choices, their lives and the directions they take their lives in. It's not possible to reflect on that impact purely from a technological standpoint. That doesn't take on board the human element.
We need anthropologists and social scientists, historians and people from the arts and humanities to be part of this conversation.
Speaking as an anthropologist, we're particularly useful because we're so engaged with human communities and ideas. We can see some of the repercussions - sometimes in advance - when the knowledge isn't there in the technological sphere to say what an application will do in a community.
That's not prediction. It's having a cultural understanding of interactions between humans that may not be immediately apparent in the application of technology.
Time for the conversation
We know that humans are the creators of bias. We've relied on human judgment without AI for centuries and we know it's flawed.
But if we get enough humans into the conversation, we can try to find the least bad solution. We can stop blindly relying on the output of any algorithm and instead critique it, deal with AI’s black box issues, and ask how it came up with the decision that emerged.
What are the elements in the data that have created this decision? If those elements are collectively decided to be bad in our current society and we don't want to see that bias, we should push back against the algorithmic decision, using critical thinking, cultural interactions and common sense.
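For simple models, at least, that question can be asked directly. A minimal sketch, with invented feature names and weights: in a linear model, each feature's contribution to one decision is just its weight times its value, so you can rank which elements of the data drove the outcome and decide whether to push back.

```python
# Minimal sketch of interrogating one algorithmic decision
# (feature names and weights are invented for illustration).
import numpy as np

feature_names = ["years_experience", "gap_in_cv", "postcode_score"]
weights = np.array([0.8, -1.2, -0.9])   # a trained linear model's coefficients
applicant = np.array([3.0, 1.0, 1.5])   # one applicant's feature values

# A linear decision decomposes into weight * value terms.
contributions = weights * applicant
for name, c in sorted(zip(feature_names, contributions),
                      key=lambda pair: -abs(pair[1])):
    print(f"{name:>16s}: {c:+.2f}")
# If "postcode_score" dominates a rejection, that's exactly the kind
# of proxy humans should question rather than accept blindly.
```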
Are we ready for AI? We have to be: a lot of the applications are already here, embedded and having an impact on our society. So instead of asking that question, we've got to keep talking about it and making it visible when it's invisible.
We have to engage the people who, like my taxi driver, don't even think about AI as a topic. We have to spread that conversation wider - and for that, we're absolutely ready.
Beth Singler is a keynote speaker at Networkshop48, 15-17 April 2020.