RCR 2019 – Ethics of artificial intelligence (AI) in radiology: be aware of risks and motives.


  • Jo Whelan
  • Oncology Conference Roundups

Takeaway

  • It is vital to anticipate how rapidly evolving systems could go wrong or be abused.
  • Be wary of grandiose claims from entities with a strong profit motive.

Why this matters

  • The best way to ensure AI tools are used ethically is to make physicians aware of the moral risks they run when using them.

Expectations for AI in radiology are still in the ‘hype’ phase and will be tempered as real-world experience grows, according to Dr Adrian Brady (Mercy University Hospital, Cork). Dr Brady is a lead author of a new international joint position statement on the ethics of AI in radiology, published in October 2019. Its overarching principles are:

  • Do no harm.
  • Respect human rights and freedoms (including dignity and privacy).
  • Be transparent, and ensure humans remain responsible and accountable.

Data sets are all-important, because AI developers need access to them to train their algorithms. This access raises several concerns:

  • Who owns the data? This varies between the EU and the US.
  • De-identified data can easily be re-identified. Social media and smartphone use are major threats to patient anonymity.
  • Data sets often represent limited populations, are not always generalisable, and may contain biases.

Other ethical issues include:

  • Automation bias: the tendency to favour machine-generated decisions can lead to blind agreement, with contrary human observations being ignored.
  • Algorithms do what they have been designed for: safety must be built in to ensure they optimise outcomes for patients (rather than, say, meet performance targets or maximise organisational profits).
  • Algorithms must be transparent, interpretable, and explainable – there should be no ‘black box’.
  • Liability: it is not yet clear who is responsible when a bad outcome follows the use of AI.

There should be continuous post-implementation monitoring for unintended consequences, with corrective action where necessary.
