This week, Ian Loynes, SPECTRUM’s Chief Executive, spoke at a debate held at Winchester Science Centre. Below, you can read his presentation which focused on some of the ethical issues that arise from this subject. We would love to hear your views either by commenting on the blog itself or by tweeting us at @SPECTRUMCIL
Can Robots be Care Givers?
SPECTRUM’s general stance:
Whilst we welcome any technology that enables user choice and control and Independent Living, we are concerned that any move to robotic provision of care is likely to be driven by a cost saving agenda, rather than for quality of care or user preference.
Disabled People are concerned that the ethical and human rights aspects of this debate are not receiving enough consideration, and that care recipients (users) in this brave new world are likely to be largely excluded from the discussion.
I’d like to focus on the following issues, from the perspective of Disabled People:
- Robots don’t go sick, don’t need holiday cover and don’t cause HR problems – or could they?
- Intimate personal care can be embarrassing and undignified – Robots to support toilet needs for example
- Robots could reduce certain risks – safeguarding, abuse, theft, language barriers
- Real opportunities to empower and enable the individual to compensate for some impairment barriers (communication aids, memory aids, visual aids, for me – exoskeletons!)
- It will happen (and already has – iPads, labour-saving aids, communication aids)
- May well be imposed on people – often the most vulnerable, with the least voice
- The care giver will often be the only human the user sees – social isolation is already the most common ‘unmet need’, and robots could make this worse. This would not be good for people who are already socially isolated, or for people whose conditions actively benefit from interaction with others, e.g. Alzheimer’s.
- What is the Motive?: Likely to be seen as a “cheaper option” by local authorities looking to save money?
In our experience (i.e. telecare debate) ethical and human rights issues often receive scant attention.
Who is responsible if the robot or software goes wrong or breaks or causes damage/death? – these events will happen.
Who is in control? The care recipient, the local authority or the manufacturer?
The main challenge in creating robotic care givers is the problem of programming a machine with a reliable set of ethics.
A robot will have to make complicated decisions regarding its users on a daily basis (particularly for nursing care). Since its function will involve giving advice that will determine the health/welfare of human beings, it will need to have an ethical system that will allow it to properly carry out functions while treating users with respect.
For example, if a robot is programmed to remind its users to take their medicine, it needs to know what to do if the user refuses. On one hand, refusing the medicine will harm the user. On the other hand, the user may be refusing for a number of legitimate reasons that the robot may not be aware of. For instance, if the user feels ill after taking the medicine, then insisting on administering the medicine may turn out to be harmful.
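The medication-refusal dilemma above can be sketched as decision logic. This is purely a hypothetical illustration – the names, rules and thresholds below are my own assumptions, not any real care system – but it shows one plausible design choice: the robot never overrides the user, and escalates ambiguity or risk to a human.

```python
# Hypothetical sketch of a medication-reminder policy that defers to humans.
# All names and rules here are illustrative assumptions, not a real care API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Refusal:
    reason: Optional[str]        # what the user said, if anything
    reported_side_effects: bool  # e.g. "the pills make me feel ill"


def handle_refusal(refusal: Refusal, prior_refusals: int) -> str:
    """Decide what a care robot might do when medication is refused.

    Key design choice: the robot never forces the issue. Anything it
    cannot safely judge is passed to a human carer or clinician.
    """
    if refusal.reported_side_effects:
        # Insisting could itself cause harm: escalate immediately.
        return "alert_clinician"
    if refusal.reason is None:
        # Not enough information to judge: ask, don't assume.
        return "ask_why"
    if prior_refusals >= 2:
        # Repeated refusals suggest a problem the robot can't resolve.
        return "notify_human_carer"
    # Respect the user's autonomy; record the refusal and try later.
    return "log_and_retry_later"


print(handle_refusal(Refusal(reason=None, reported_side_effects=False), 0))
```

Even this toy version makes the ethical point: most branches end with a handover to a person, because the robot lacks the context – the look of a person, the tone of a reply – that a human carer reads instinctively.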
These scenarios are everyday situations that humans navigate with ease. The human brain can assess a situation not only based on data that it directly receives through its senses, but it can also logically process other signs, such as the look of a person or the intonation of a response. If there is not enough data to make a decision, a human can figure out which questions to ask in order to receive more information.
A key point for me is that many of the remote technologies in use or development today rely on being able to track people’s movements and behaviours and, in practice, that can – and does – very easily lead to serious breaches of people’s right to a private life, as well as putting dignity and autonomy at serious risk. We are alarmed at how little attention providers seem to pay to these issues – in fact I’d go as far as to say that in telecare they were mostly oblivious to the risks.
The Health Select Committee also raises concerns about this issue. They emphasised that while technology can facilitate things like robotics and telecare, this has to be balanced against people’s right to privacy. They recommended that privacy and confidentiality policies and protocols should be developed, implemented and audited when new technologies are introduced.
They said that a balance between the use of technology and the continuation of human contact is an essential element in any such judgement, and that evaluation needs to take account of the qualitative benefits for users and carers over time.
I don’t want to rain on robotics’ parade – I really don’t. Anything that can play a positive role should be welcomed.
But we do have to be realistic about what it can achieve and, more importantly still, we need to recognise that there are no quick fixes to the challenge of building a social care system capable of addressing the needs of an ageing population.
Rather, we need a serious debate about the value we place on social care and the willingness – on the part of both government and the public – to invest in social care as a positive public policy resource.
And we must do this in an environment where we know that the reality of social care at the moment is that, if anything, overall provision is in decline and local authorities are deserting families who require their support.
So, you’ve read what SPECTRUM thinks, now it’s your turn. Let us know what you think. Comment below or tweet us at @SPECTRUMCIL