Making the Ethical Practical: Ideas for Applying a Digital Health Ethical Framework for Charities

AMRC charity members develop questions addressing ethical principles to put to tech companies. Image courtesy of DataKind UK.

 

By Christine Henry, Head of Ethics Committee, DataKind UK

 

How can a charity identify key ethical principles against which to assess the ethical impacts of digital and data projects, and how can it communicate those principles to the tech partners it develops such projects with? DataKind UK was tasked with these questions by the Association of Medical Research Charities (AMRC), working with their members to create a framework for navigating the many ethical principles and codes in existence and helping them discover which are relevant to them. Key principles were identified from a review of over 60 ethical principles and codes, from interviews, and through consultation with AMRC and their members, leading to nine core principles that charities should consider when developing their own ethical code.

 

The Nine Principles of the AMRC Digital Health Ethics Framework

 

  • Beneficence: Do work that is to the benefit, not the detriment, of people. The benefits of the work should outweigh the potential risks.
  • Nonmaleficence: Avoid harm. This is closely related to beneficence.
  • Autonomy: Enable people to make choices. This requires people to have sufficient knowledge and understanding to decide.
  • Justice: Be fair: the benefits and risks should be distributed fairly and not worsen existing inequities.
  • Explicability: Provide explanations for the outputs of algorithmic models, for the workings and assumptions of the models themselves, and for how data are used.
  • Open Research: Commit to making research freely open and accessible for reuse.
  • Community-Mindedness: Be willing to collaborate, e.g. by using common platforms. Be aware of the ecosystem in which you work, especially for patients with multiple conditions.
  • Proportionality: Ensure potential benefits and risks are in balance, including for society as a whole.
  • Sustainability (financial and operational): Minimise the harm of developing or deploying digital technology products and services which create service-user dependencies and cannot be sustained.

 

Whilst this blog focuses on ethics for charities working in the health sector, our approach is relevant for charities across other domains too.

 

Once a charity has chosen its ethical principles, an important question – applicable to any organisation working with AI and new technologies – is how to apply them. This includes identifying concerns early on.

 

For example, the health and medical research charities we spoke with tended to be confident about ethical issues in their own domain, and very aware of working with their users and championing user-led design and planning. However, there was concern about maintaining those values when partnering with third-party tech companies whose objectives may not align with their own. To help charities navigate these partnerships, we created guidance on questions to ask of tech partners to ensure they are on board with the charity's ethics. The questions were developed at a roundtable with AMRC members, who worked from a case study, and were supplemented by additional research.

 

Applying the Principles: A Case Study

 

Our case study was loosely based on one used by the Academy of Medical Sciences in their public opinion gathering, which involved photographing patients' feet in their own homes and running an algorithm to detect swelling and assess circulation. The case study involved a totally fictional tech partner, "DeepFoot".

 

Case Study

You are the Executive Director of a medical research charity in the field of heart disease.

You wish to partner with a third-party small tech company, DeepFoot, to develop a tool for taking photos of a patient’s feet to monitor blood flow, in the period after a heart attack, to see if the patient develops heart failure (as shown by ankle swelling).

The photos would come from cameras fitted in patients’ bedrooms at home – with their consent – to take photos when they stand up from bed. The photos would be sent to the patient’s phone app via Bluetooth and to a medical professional for assessment.

Part 2: As a new phase of the project, you and your partner want to build a machine learning algorithm for the app, which can analyse the images itself for patient risk (and then alert the patient about a need to seek medical attention and flag those images differently for the clinical professionals).

Part 3: As well as the foot monitoring, DeepFoot proposes expanding your app so that it also includes activity tracking (including location data) to assess patient activity, and potentially incorporate this into the machine learning model.
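To make Part 2 more concrete before we turn to the participants' questions, here is a minimal, purely illustrative Python sketch of the decision logic such an app might wrap around the model's output. DeepFoot is fictional, so everything here (the thresholds, names, and alert routing) is a hypothetical stand-in rather than a real design.

```python
from dataclasses import dataclass

# Hypothetical risk thresholds. In a real project these would be set and
# validated with clinicians, not hard-coded by developers.
ALERT_PATIENT_ABOVE = 0.8    # prompt the patient to seek medical attention
FLAG_CLINICIAN_ABOVE = 0.5   # highlight the image for clinical review

@dataclass
class Assessment:
    risk_score: float          # model output in [0, 1]; the model itself is out of scope here
    alert_patient: bool
    flag_for_clinician: bool

def assess_image(risk_score: float) -> Assessment:
    """Route a model risk score to patient alerts and clinician flags.

    The score would come from the hypothetical swelling-detection model of
    Part 2; this sketch shows only the decision logic that the ethical
    questions below probe (impact of wrong results, explainability).
    """
    return Assessment(
        risk_score=risk_score,
        alert_patient=risk_score >= ALERT_PATIENT_ABOVE,
        flag_for_clinician=risk_score >= FLAG_CLINICIAN_ABOVE,
    )

# A borderline score is flagged for a clinician but does not alarm the
# patient: exactly the kind of trade-off a charity should be able to
# interrogate with its tech partner.
print(assess_image(0.6))
```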

 

We asked participants to develop questions they could raise with DeepFoot covering the ethical principles above, and prompted the group to consider different population segments (e.g. users, beneficiaries, the public, different gender/race/socioeconomic groups). Several questions are shared below with the key ethical principles in brackets. There is overlap between principles, which is a good thing: it suggests there may be common solutions to different issues!

 

Suggested Questions for DeepFoot

  • How will you monitor and test that the product is providing benefit (vs standard of care) on an ongoing basis? (Beneficence)
  • How do you plan to incorporate user/patient input and feedback throughout the project? (Beneficence, Autonomy)
  • What would be the impact of the product/service failing by providing wrong results? (Nonmaleficence)
  • Is there a plan in place to identify errors and biases in data/model outputs (for example, is it equally effective for people of different races; see the sketch after this list)? (Justice)
  • Are there any other companies or charities who would benefit from using the platforms or tech? (Community-Mindedness, Open Research)
  • Could this use of data interfere with the rights of individuals (for example, showing faces of those in the bedroom breaching a right to sexual privacy)? If yes, is there a less intrusive way of achieving the objective (e.g. automatic face blurring, user-controlled photos)? (Proportionality, Nonmaleficence)
  • What’s your business model for this project being sustainable financially, including if it develops into common usage? (Sustainability)
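As an illustration of how the bias question above might be put into practice, here is a minimal Python sketch of a per-subgroup accuracy check. The data are entirely made up; real due diligence would use proper held-out evaluation sets, multiple metrics, and clinical input.

```python
import numpy as np

# Purely illustrative: y_true / y_pred would come from an evaluation set
# for the (hypothetical) DeepFoot model; 'group' is a protected attribute
# recorded with appropriate consent.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])

def accuracy_by_group(y_true, y_pred, group):
    """Report accuracy separately for each subgroup.

    Large gaps between groups are a prompt for investigation (more data,
    recalibration, or narrowing the claimed use), not proof of fairness
    on their own.
    """
    return {
        g: float((y_pred[group == g] == y_true[group == g]).mean())
        for g in np.unique(group)
    }

print(accuracy_by_group(y_true, y_pred, group))  # {'A': 0.75, 'B': 0.75}
```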

 

The full list of questions for each ethical principle can be viewed here.

 

The important thing here is not that the tech partner can answer all of the ethics questions before a project starts, but that they are aware of the requirements and the charity's values. You may not know exactly how an algorithm's output will be explained while you are still working with human experts to train the model, but you should be able to show an awareness that explainability is important, a plan to minimise unnecessary complexity, and a named person who is responsible for it.
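For instance, a partner could demonstrate that awareness with a simple, model-agnostic check long before settling on a full explanation strategy. Below is a minimal sketch using scikit-learn's permutation importance on synthetic data; the feature names are hypothetical, not outputs of any real DeepFoot model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic stand-in data: two image-derived features (e.g. ankle width
# change, colour index) plus a pure-noise feature. Entirely illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance is one simple, model-agnostic way to show which
# inputs drive predictions: evidence of *awareness* of explainability,
# not a complete explanation strategy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["ankle_width_change", "colour_index", "noise"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```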

 

As a side point, we note that, contrary to the idea that ethics considerations in AI always come at the expense of performance, in the charity and health space there is reasonably strong alignment between performance and ethical values. Questions around benefits, risks, and sustainability tie very closely to the robustness and provable value of a product or piece of research. Similarly, a product's consequences for users and the wider community, and the aim of explainability, are closely tied to acceptability and to whether a piece of research – no matter how innovative – will end up being used.

 

Therefore, one interim finding is that considering ethics, and putting processes in place to meet the charity's ethical framework, may bring real value for tech company partners as well. It will be interesting to monitor the alignment – and the possible tensions – between ethics and performance in real digital health projects.

 

What Happens Next?

 

The AMRC Ethical Framework is a first version; it will be tested and reassessed against real cases and in the context of new technologies and regulations. Ethical discussions are only one part of partnering between charities and tech companies, and in the midst of negotiations around revenue, IP, and delivery dates, these issues can be swept aside or ignored. The aim of the ethical framework and its ready-made question selection is not to solve all ethical problems. As many have pointed out, a project isn't made ethical by a "checkbox" at the start, but by ongoing reassessment and discussion. By having principles and questions available and public, charities have a basis for starting the discussion, both inside and outside their own organisations.

 

DataKind UK believes that other organisations, especially in the charity sector, can learn from AMRC's work and perhaps use it as a template for developing their own ethics principles.

 

In addition to this work, we have an exciting year planned for engaging our community of data scientists and the social sector around responsible data use. We have launched an ethics committee made up of data scientists who work in ethics and/or have a passion for better integrating ethics into how we operate, and we have recently launched our ethics book club. The next #DKbookclub meeting is on May 22, and we have other responsible data charity projects in store! We intend to collaborate at every opportunity, so if you're interested, do get in touch!

 

If you’d like to learn more about building frameworks and practical processes for ethical data use, or any of our work on ethics or data science more broadly, we’d love to hear from you.
