Sharing Our Views on Ethics: DataKind UK Releases Survey Results from Community

By DataKind UK

 

At the beginning of 2019, we surveyed members of the DataKind UK community on their attitudes towards the responsible use of data science and AI. We uncovered a range of differing views and levels of optimism among DataKinders.

 

Asking people to list the words they associated with ‘Ethics in Data Science and AI’ elicited responses ranging from misuse, intrusion, bias, prejudice, ambiguity, failure, misunderstood, buzzwords, and frustrating, through to fairness, accountability, transparency, wellbeing, respect, honesty, urgent, and essential.

 

So what has us feeling positive about the opportunities AI can uncover, and what has us running scared?

 

What are people worried about?

 

Respondents were divided on whether the overall present and future impact of data and AI on humanity is positive or negative. Just under half of respondents felt positive about it (45%), 11% felt negative, and 43% weren’t sure either way. ¯\_(ツ)_/¯

 

Two thirds of respondents (67%) were somewhat or very concerned about AI being applied in ways that will have a negative impact on human rights and freedoms. Only 18% weren’t worried about potential harmful consequences in this area.

 

Somewhat predictably, our survey respondents weren’t too worried about the singularity or robots taking over (17% were somewhat or very concerned, while 63% weren’t worried).

 

Interestingly, unemployment due to AI is also not high on people’s list of concerns, with just 25% worried about automation’s effect on society, and 75% either unworried or unsure of its impact.

 

Who is responsible?

 

When it comes to who is ultimately responsible for making sure AI doesn’t have a negative impact on society, no clear ‘leader’ emerged. The most popular response (26%) was that it should be the joint responsibility of different entities, for example governments and regulators along with corporations. The next most common response (20%) was that the buck should stop with government. Only 7% felt that data scientists and developers were ultimately responsible for the effect of their work on society.

 

Over half (56%) of people surveyed said they would potentially write code for something they believed could be harmful, depending on the product and the circumstances. Forty-two percent said that they wouldn’t write it if they had misgivings, and only 3% said they would still write the code, despite suspecting it would be deployed in a harmful way.

 

This is not only surprising, it raises the question: can we trust that data scientists and developers have enough information to make a decision like this? Even if we have no hard evidence that a product or service will be used harmfully, we might suspect that it could be. However, this seems to square with our findings above: if the vast majority of data scientists and developers think others are ultimately responsible for the effects of their work, it makes sense that they don’t feel they should refuse a project based on its potential future application.

 

On our best behaviour

 

The vast majority of respondents (84%) haven’t signed up to a code of ethics, with 38% saying that’s because they weren’t aware such a thing existed.

 

If you’re looking for a code of conduct to guide your work, some of the codes people mentioned in the survey were:

 

However, perhaps guidelines alone don’t go far enough. As one respondent said, “Signing up to a code doesn’t change how ethically I behave, it only declares it to others.”

 

Raising the alarm

 

Half (52%) of those who filled in the survey said they’re aware of a whistleblowing policy at their work. A quarter (24%) said their organisation doesn’t have a whistleblowing policy, and the remaining quarter weren’t sure whether one exists.

 

Just under half of all our respondents (49%) said there was no code of conduct or principles for data science at their organisation. Even more worryingly, of those who said they were aware of a whistleblowing policy at work, 41% said their organisation lacked a data science code of conduct. Presumably, in those organisations, deciding whether something unethical had taken place would come down to personal judgement alone, rather than the help of a framework or guide.

 

Most of our respondents said that when they have concerns, they’ll raise the issue with a manager or mentor (84%) and/or raise it with their team (82%). A smaller proportion (66%) said they’d discuss it with a colleague privately. Happily, the least likely response was to do nothing at all. Discussing the issue with an external contact or organisation, such as a union, journalist, or online forum, was also pretty rare.

 

Nearly half of respondents (44%) thought there would be no consequence for them, either positive or negative, as a result of raising an ethical concern with their manager. It’s encouraging that only 8% thought there would be a negative impact on their job security as a result of flagging a concern, yet just 12% thought raising a concern would lead to a positive outcome for them.

 

What’s next for AI?

 

Overall, only 7% of those who answered our survey feel that AI is moving too slowly, preventing society from capturing its full benefits. Just under half (46%) think that AI is moving ahead at the right pace, while a further 46% think AI is moving too quickly, without enough protection against unintended consequences.

 

This split becomes more interesting when you compare people’s working environments. Respondents in public or social sector organisations appear more optimistic: 67% thought AI was moving at the right pace and 11% thought it was moving too slowly. Among privately employed respondents, 57% thought AI was moving too fast, 6% thought it was moving too slowly, and a more tentative 38% thought it was moving at the right pace. Is the rate of AI adoption really so different between sectors? Or is there something else at play?

 

How can we reduce the risks?

 

People suggested some great ideas for how DataKind UK can continue to support and enable data for good.

 

In order both to champion the positive impact of data science and AI and to reduce its potential for harm, most people feel that DataKind UK should communicate more often, more clearly, and more widely about the topic. This is something we can do to support our data science community as well as the social change organisations we engage with on projects.

 

A few more ideas that we’re going to be looking into in the coming months are:

  • Looking into how we engage and recruit volunteers to ensure we have a diverse community
  • Leveraging the power of our network to raise awareness and mould best practices for data science projects
  • Continuing to take on projects and hold events to showcase what can be achieved with data science
  • Increasing our engagement with charities (we’re planning workshops, a charity analyst mentoring pilot, and we’ll continue to run our ever-popular Office Hours and Social Data Society)
  • Educating our data science community about the role of ethics in AI through our book club (contact us for more info and to join up)

 

Have your say!

 

We’ve published the survey responses on our GitHub, so you can explore them for yourself. Access them here, and let us know what you uncover!
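If you’d like a starting point, here is a minimal sketch for exploring the responses with Python and pandas. It assumes the data is published as a CSV file; the file name and column name below are hypothetical stand-ins, so check the repository for the actual paths and question wording.

    # Minimal sketch: explore the DataKind UK ethics survey responses.
    # NOTE: the file name and column name are assumptions for illustration;
    # see the GitHub repository for the real ones.
    import pandas as pd

    responses = pd.read_csv("datakind-uk-ethics-survey-2019.csv")

    # List the survey questions (one column per question)
    print(responses.columns.tolist())

    # Tally answers to one question as a share of all respondents
    pace = responses["pace_of_ai"].value_counts(normalize=True)
    print(pace.round(2))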

 

Note on the data: This survey was circulated to DataKind UK’s network between December 2018 and January 2019, and fully completed by 99 respondents.
