Somebody That I Used to Know: The Risks of Personalizing Robots for Dementia Care

Alyssa Kubota, Maryam Pourebadi, Sharon Banh, Soyon Kim, and Laurel D. Riek
Department of Computer Science and Engineering
University of California San Diego
{akubota, pourebadi, s1banh, sok020, lriek}@ucsd.edu

Abstract: Robots have great potential to support people with dementia (PwD) and their caregivers. They can provide
support for daily living tasks, conduct household chores, provide companionship, and deliver cognitive stimulation and
training. Personalizing these robots to an individual’s abilities and preferences can help enhance the quality of support
they provide, increase their usability and acceptability, and alleviate caregiver burden. However, personalization can also
introduce many risks, including risks to the safety and autonomy of PwD, the potential to exacerbate social isolation, and
risks of being taken advantage of due to dark patterns in robot design. In this article, we weigh the risks and benefits
by drawing on empirical data garnered from the existing ecosystem of robots used for dementia caregiving. We also
explore ethical considerations for developing personalized cognitively assistive robots for PwD, including how a robot can
practice beneficence to PwD, where responsibility falls when harm to a PwD occurs because of a robot, and how a robot can
acquire informed consent from a PwD. We propose key technical and policy concepts to help robot designers, lawmakers,
and others to develop personalized robots that protect users from unintended consequences, particularly for people with
cognitive impairments.

Keywords— Personalized Robots, Cognitively Assistive Robots, Robot-Delivered Health Interventions, Robot Ethics,
Healthcare Robotics, Human-Robot Interaction
Kubota, A., Pourebadi, M., Banh, S., Kim, S., and Riek, L. D. "Somebody That I Used to Know: The Risks of Personalizing
Robots for Dementia Care". In We Robot 2021.

1    Introduction

Imagine the following scenario: A person with dementia (PwD) lives at home with a full-time caregiver. That caregiver is overburdened, stressed, and is an older adult with their own health problems. This scenario is experienced by over 16 million informal dementia caregivers in the US, and this number will only continue to grow, representing a global health emergency [5].

Over the past 20 years, many researchers have explored the use of assistive robots that can aid both PwD and their caregivers with a range of daily living tasks, including providing companionship, delivering cognitive stimulation, and supporting household chores (see Figure 1) [24, 47, 50, 75, 89, 92, 138, 141]. For example, PARO (see Figure 1.3) is a robotic seal which has been shown to provide companionship to PwD and reduce stress, anxiety, and pain among PwD and their caregivers [44, 93, 138]. In addition, researchers have used socially assistive robots to administer cognitive stimulation and training (i.e., a cognitively assistive robot (CAR)) with the goal of slowing the progression of dementia and/or reducing the severity of its impact [8, 24, 75, 127, 132]. For instance, JESSIE (see Figure 1.1) teaches people metacognitive strategies to reduce the impact of cognitive impairment on their daily lives [75].

For these assistive robots, a key concept discussed in the health technology community is personalization, which reflects how well a system can adapt to a person longitudinally. Personalization offers many benefits, such as improving adherence to cognitive interventions, increasing engagement with intervention content, and enabling goal-oriented health management [76, 107, 128].

Personalizing assistive robots for PwD is especially important due to the progressive and unpredictable nature of dementia, as one's individual needs and preferences will evolve over time [50, 78].

Figure 1: Three exemplar robots used to support PwD and their caregivers. From left to right: 1) JESSIE is
a tabletop robot developed by our lab, and is used to support people with mild cognitive impairment and
early-stage dementia. It provides personalized, adaptive cognitive training to help teach users metacognitive
strategies to minimize the impact of cognitive impairment on their daily lives [75]. 2) Spoonbot is a tabletop
robot radio developed by our lab, and is used to support people with late-stage dementia who have trouble
eating. It leverages embodied cueing, mimicry, and music to encourage eating [50]. 3) PARO is a robotic seal
and is used to stimulate social interaction and support therapy for PwD, including robot-assisted therapy and
sensory therapy [110, 138].

For example, for people with early stage dementia, a robot can interact verbally with someone (e.g., provide medication reminders) [79], whereas for those with late stage dementia verbal prompts will not work, and physical and nonverbal aural cues are more appropriate [50, 58]. Spoonbot (see Figure 1.2) is an example of a robot our team built for people with late-stage dementia; it uses mimicry cues and a person's favorite music to assist with eating. Therefore, it is important to adapt to a PwD's physical and cognitive abilities, personal preferences, care setting, and other life circumstances [78].

Roboticists have made great strides in developing personalized systems for PwD and their caregivers. However, a large body of this work has been either primarily technology-focused or health-outcomes focused, and there is a growing need for further investigation into the potential negative consequences assistive robots could have on this population. For example, researchers have raised concerns about some robots used in dementia caregiving, such as PARO use being associated with irritability, hallucinations, and disinhibition among people with severe dementia, or overstimulation of PwD in group settings [69, 131]. PwD are a vulnerable population who are already at high risk of manipulation and abuse [88], so one must think critically about how these robots could cause harm, and possible means for mitigation.

We are currently at an inflection point, where it is becoming relatively easy and inexpensive to develop and deploy CARs to deliver personalized interventions to PwD, and many companies are vying to capitalize on this trend. However, it is important to carefully consider the ramifications: What are the potential consequences of introducing underdeveloped personalized CARs to care for PwD? Furthermore, what are some unintended consequences of a highly personalized CAR for PwD?

In this article, we draw upon empirical data from our own work, as well as from the literature, to explore these questions. We contextualize concerns regarding inaccurate personalization of CARs for PwD, and the potential unintended consequences of personalizing robot behavior accurately. We also propose key technical and policy concepts to enable robot designers, lawmakers, and others to develop CARs that protect users from unintended consequences, particularly those designed for people with cognitive impairments. We hope that our work will inspire roboticists to consider the potential risks and benefits of robot personalization, and support future ethically-focused robot design.

2    Background

2.1   Dementia and Caregiving

Dementia is an irreversible, neurodegenerative syndrome characterized by cognitive, functional, and/or behavioral decline. Worldwide, approximately 50 million people live with dementia, most of whom are older adults. It is most commonly caused by a neurodegenerative disease such as Alzheimer's disease, vascular disease, or Lewy body disease. Dementia is progressive, and symptoms can range across the spectrum from early stage (e.g. difficulty with attention or problem solving) to late stage (e.g. loss of communication abilities).


The decline in cognitive abilities such as reasoning and short-term memory can lead to hazardous behaviors such as wandering or driving without supervision, and leaves PwD particularly susceptible to domestic or financial abuse, neglect, and exploitation [78].

As PwD lose their ability to live independently, they become reliant on caregivers to complete activities of daily living (ADLs) (e.g. personal hygiene, eating) and instrumental ADLs (IADLs) (e.g. managing medication, scheduling medical appointments) [78, 94]. As a result, family members (e.g. spouses, children) must often assume the role of informal caregivers and shoulder the responsibility of caring for a PwD [78]. However, caregiving can place heavy emotional, mental, physical, and financial strain on an individual, placing them at higher risk of additional comorbidities such as cardiovascular disease and depression [19]. Furthermore, this transition to a caregiving role can place strain on the relationship between PwD and their caregivers, which can cause feelings of guilt, anxiety, and depression in both a PwD and their caregivers [42].

Person-centered care is one predominant care approach that can help maintain the relationship between a PwD and their caregivers by encouraging caregivers to recognize the personhood and individuality of a PwD. As a strengths-based approach, person-centered care recognizes an individual's goals, abilities, and preferences, such as by understanding their culture or building on their strengths and current abilities, rather than trying to replace the abilities they have lost, in order to promote their well-being [36, 38, 78].

One important aspect of person-centered care is supporting the autonomy of PwD. Being active in daily decision making can help a PwD preserve their dignity and identity, which can help them lead more full and rewarding lives [38, 78]. These decisions may be major, such as deciding which health interventions to receive, or relatively minor, such as choosing what food to eat. It is generally agreed that PwD should be able to make and act on their own decisions whenever possible, and caregivers should structure interactions to support the autonomy of PwD.

However, as dementia progresses, PwD often begin to lose their capacity to make or communicate decisions. Thus, they may not know or be able to reason about what is best for their health. Caregivers may be forced to choose between supporting a PwD's autonomy and ensuring their health or safety. For example, if a PwD refuses to maintain basic personal hygiene (e.g. bathing), should a caregiver respect their desire not to do so and risk causing harm (e.g. a urinary tract infection, social ostracism), or should they override the PwD's wishes and force them to complete these activities [119]? Neither scenario is ideal for satisfying both the PwD's autonomy and health, so there is much debate surrounding whether to prioritize respecting a person's autonomy or abiding by the principle of non-maleficence (i.e. preventing harm). The caregiver's decision may change based on culture, situation, and personal preference, but a PwD's independence and privacy are often considered secondary to harm prevention [38, 52].

2.2   Assistive Robots for People with Dementia

There are a wide range of technologies that can support and enhance the care of PwD, as well as extend their independence. These technologies can be categorized into devices used "by" PwD, such as for prompts and reminders; devices used "with" PwD, such as for communication with caregivers; or devices used "on" PwD, such as for location or activity monitoring (see Table 1) [45, 52]. Commercially available assistive technologies used "by" PwD include memory aids (e.g. electronic medication reminders), while those used "with" PwD may include communication aids (e.g. telephones, telepresence robots) or collaborative devices (e.g. electronic games, reminiscence software) [45, 122]. Caregivers may also use products "on" a PwD for safety and security, such as wearable devices to detect if a PwD has fallen or to track their sleep patterns [84].

Many types of robots exist to cognitively and socially assist PwD (see Figure 1). These robot-delivered health interventions have many benefits, including the potential to expand access to healthcare by extending it into a person's home, reducing treatment time and cost, and prolonging a PwD's independence [47]. Robots can provide support on multiple dimensions, including socially and cognitively.

Robots can also provide social support and non-pharmaceutical therapeutic interventions for PwD. Often, zoomorphic robots such as PARO or AIBO act as companions for PwD, or therapists may use them to augment interventions such as animal-assisted therapy or multi-sensory behavior therapy [61, 90]. These robots can help reduce negative feelings such as stress and anxiety among PwD and caregivers, and even improve their mood [44, 67, 93, 96].


Table 1: Assistive technologies for PwD can be categorized into a) devices used “by” PwD, b) devices used
“with” PwD, and c) devices used “on” PwD.

Researchers have also explored the use of CARs to support cognitive training and cognitive stimulation among PwD, which can help slow the progression of the disease [76, 132]. These robots can remind PwD of appointments, medication, and dietary requirements to reduce reliance on their memory [91]. In addition, they may assist PwD with cognitive training games to support their memory, or accompany human clinicians in memory training programs [98, 126]. Researchers are also exploring the use of robots to teach PwD metacognitive strategies, which help strengthen memory, planning, and executive functioning in order to help them manage their impairment in daily life [75]. Often, the cognitive support that an assistive robot provides can extend or supplement a health intervention.

However, the majority of these technologies have been designed and developed without thorough consideration or understanding of the needs and perspectives of PwD, particularly those in the later stages of their disease. In addition, many commercial technologies center themselves around PwD's cognitive limitations rather than their strengths, which can lead to PwD being stigmatized and disempowered [50, 66].

Fortunately, researchers are increasingly adopting more inclusive approaches when designing robots for PwD through a critical dementia lens [50, 80]. Critical dementia encapsulates person-centered dementia care, and focuses on understanding and supporting the strengths and personhood of PwD in technology design. It explores how embodiment, context, and emotional and sensorial experience impact how PwD interact with the world around them [80]. It draws from approaches including participatory design, which aims to involve all stakeholders (e.g. PwD, caregivers, clinicians) throughout the design and development process in order to ensure that the end product is usable and valuable to them [83, 114]. It also includes user-centered design, which prioritizes the interests and needs of users and entails gathering iterative feedback at each stage of the development process [1, 31]. These approaches enable technology creators to move away from a deficit model of aging, which focuses on a person's potential disabilities and loss of ability, and instead incorporate a social model of aging, which better captures the preferences and contexts of users [81]. This framing can help promote the dignity and personhood of PwD when designing assistive robots.

While it is vital to include the perspectives of PwD and caregivers in the development of robots, their respective values may not always align, particularly in relation to a PwD's autonomy. For example, caregivers may use cameras to surveil a PwD to ensure their safety (e.g. to see if they are wandering or have fallen), which can infringe upon the PwD's privacy. Caregivers may also imagine using robots to encourage PwD to do something that they do not want to do (e.g. eat, bathe), or to prevent them from doing something unsafe that they want to do (e.g. go out alone, eat unhealthy food), which can limit their autonomy [89, 95, 119]. Thus, it is important to also consider these design tensions and how they might impact the autonomy of a PwD and the PwD-caregiver relationship. We further explore this in Section 3.2.

2.3   Personalization of CARs that Deliver Health Interventions

To ensure assistive robots are usable and acceptable for individuals living with dementia, it is critical that the robots are personalized. Personalization is tailoring a health intervention or system to suit an individual's preferences, abilities, goals, and other personal factors, and it reflects how well a system can adapt an intervention to a person longitudinally. Personalization is essential in this space because there is no singular experience shared by everyone living with or caring for someone with dementia. Personalization can maximize the utility and efficacy of interventions for each PwD's individual situation by enabling assistive robots to address the heterogeneity of PwD, including cultural and personal backgrounds, living situations (e.g. at home, in a long-term care facility), how dementia progresses in different people, and individual preferences.


For instance, an assistive robot can be personalized to a user physically (e.g. adjusting movement speed, proxemics), cognitively (e.g. adjusting the difficulty level of cognitive training tasks), and socially (e.g. referring to a person by name) [76, 109]. In this work, we primarily focus on the personalization of CARs that deliver health interventions [132].

Personalized CARs offer many benefits, including improving adherence to and adoption of an intervention, as well as adoption of and engagement with the technology. For example, cognitive stimulation is most effective and enjoyable if it meets a person where they are in terms of their cognitive abilities [39, 40]. Tapus et al. [127] demonstrated that adjusting the difficulty of a robot-delivered cognitive stimulation game to a PwD's performance improved their overall task performance, engagement with the intervention, and enjoyment during the task. In contrast, if a health intervention is not personalized, it can provoke frustration and depression in a person with cognitive impairments and their caregivers [6, 118].

Research also demonstrates that adapting robot behavior to an individual can help improve adoption of and engagement with the robot. For example, a robot can adapt its behavior in real time to maintain engagement, such as changing its tone of voice to draw a user's attention back if they are distracted during an interaction. This can help maintain engagement with users for longer periods of time, and thus improve retention of material (such as in a therapeutic intervention) [124]. A robot may also adapt its behavior to be more acceptable to a user, such as adjusting its communication style to be more passive or assertive depending on a user's cultural background or which style they respond better to [109].

There are also health applications for which personalization to PwD is necessary. For instance, reminiscence therapy encourages PwD to recall memories from their past, so robots that provide this intervention must have some knowledge of a user's history in order to ask relevant questions and guide the therapy. The MARIO robot is one such example, which could store and retrieve user-specific knowledge provided by family members and caregivers in order to facilitate reminiscence therapy [4].
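As a toy illustration of such a user-specific knowledge store (the schema, fields, and entries below are hypothetical sketches, not MARIO's actual design), caregiver-provided facts might be kept by topic and retrieved to form reminiscence prompts:

```python
# A minimal sketch of a caregiver-provided knowledge store for reminiscence
# prompts, loosely inspired by the MARIO example above. The schema and
# entries are hypothetical illustrations only.
from typing import Dict, List

profile: Dict[str, List[str]] = {
    "hometown": ["grew up in Galway"],
    "family": ["met your spouse at a dance in 1962"],
    "music": ["loved traditional fiddle tunes"],
}

def add_fact(topic: str, fact: str) -> None:
    """Caregivers or family members contribute facts under a topic."""
    profile.setdefault(topic, []).append(fact)

def reminiscence_prompt(topic: str) -> str:
    """Turn a stored fact into an open-ended question for therapy."""
    facts = profile.get(topic)
    if not facts:
        return "Tell me about something you enjoyed when you were younger."
    return f"I remember you {facts[0]}. What was that like?"

add_fact("music", "sang in a church choir")
print(reminiscence_prompt("family"))
```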
Robots that can autonomously personalize their behavior are particularly important in this space to fulfill the needs of PwD. The ability to adapt with little to no input from users is especially important when users have low technology literacy and do not have the time or resources to learn how to use the system, as is often the case for clinicians, informal caregivers, and PwD [51, 75].

2.4   Key Technical Concepts for Personalization

From a technical perspective, developers often use machine learning (ML) algorithms to enable robots to autonomously personalize their behavior to users. This can be decomposed into two main phases: preference learning and behavior adaptation. In this section, we provide a brief overview of these topics and why existing computational approaches may not be appropriate for use with PwD.

One technique for personalizing robots to an individual is to learn and understand that person's likes and dislikes, i.e., learning their preferences. Preference learning aims to predict what a person will prefer based on their known preferences, often inferred from previous behavior [41]. For instance, in the context of assistive robots for PwD, a robot might take note of songs that elicited a positive response in order to play new music for a PwD. Common computational approaches to preference learning include classification algorithms such as k-nearest neighbors and decision trees [41]. While most work in preference learning is limited to only ranking a specific set of items, more recent work aims to infer a user's underlying preferences so that learned preferences can be generalized across contexts [142, 143, 144]. For a more detailed discussion of current research in preference learning, please see [41, 140].
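To ground this, the following minimal sketch shows the kind of preference learner described above: a k-nearest neighbors classifier (here via scikit-learn) predicting whether a new song will be well received based on a handful of past reactions. The feature set and data are hypothetical illustrations, not drawn from any deployed CAR.

```python
# A minimal sketch of preference learning for music selection, assuming
# each song is described by simple numeric features (tempo, energy, year)
# and each past interaction was labeled by observed affect (1 = positive).
# All features and data are hypothetical illustrations.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Past interactions: [tempo_bpm, energy (0-1), release_year]
observed_songs = np.array([
    [72, 0.3, 1958],
    [80, 0.4, 1963],
    [128, 0.9, 2010],
    [68, 0.2, 1955],
])
responses = np.array([1, 1, 0, 1])  # 1 = positive affect, 0 = negative/neutral

model = KNeighborsClassifier(n_neighbors=3)
model.fit(observed_songs, responses)

# Predict whether a new, unheard song is likely to be well received.
candidate = np.array([[76, 0.35, 1960]])
print(model.predict(candidate))        # e.g., [1]
print(model.predict_proba(candidate))  # class probabilities
```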


Once a system has an understanding of a person's preferences, it can adapt its behavior to suit those preferences. Behavior adaptation refers to how a robot adjusts its behavior, usually in response to external stimuli. This adaptation can occur over short periods of time (e.g. making a noise to draw attention to itself if a user is distracted) or longer periods of time (e.g. adopting an encouraging personality if a user responds better to that during therapy). Reinforcement learning (RL) approaches are among the most common for autonomous behavior adaptation, though researchers are also exploring other methods such as neural networks and Gaussian processes [76]. Inverse RL is another approach, which enables systems to learn from human experts; for instance, a therapeutic robot might observe how a human therapist interacts with a user in order to learn how it should behave. For a more thorough overview of current research in robot behavior adaptation, please see [76, 109].
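As a deliberately simplified illustration of this kind of adaptation, the sketch below runs an epsilon-greedy bandit, one of the simplest RL-flavored methods, over a small set of hypothetical robot behaviors, updating a value estimate for each from an observed engagement signal. The behavior names and the `observe_engagement` stub are assumptions standing in for a real perception module.

```python
# A minimal sketch of bandit-style behavior adaptation: the robot picks
# among a few candidate behaviors and updates value estimates from an
# observed engagement reward. Behaviors and reward source are hypothetical.
import random

behaviors = ["soften_voice", "play_chime", "repeat_prompt", "pause"]
values = {b: 0.0 for b in behaviors}   # running value estimate per behavior
counts = {b: 0 for b in behaviors}
EPSILON = 0.1                          # exploration rate

def observe_engagement(behavior: str) -> float:
    """Stub for a perception module scoring user engagement in [0, 1]."""
    return random.random()  # placeholder signal

for step in range(1000):
    if random.random() < EPSILON:          # occasionally explore
        choice = random.choice(behaviors)
    else:                                  # otherwise exploit the best estimate
        choice = max(behaviors, key=values.get)
    reward = observe_engagement(choice)
    counts[choice] += 1
    # Incremental mean update of the chosen behavior's value
    values[choice] += (reward - values[choice]) / counts[choice]

print(max(behaviors, key=values.get))  # behavior currently estimated best
```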
While existing approaches to preference learning and behavior adaptation have proven effective for applications such as post-stroke rehabilitation and teaching social skills to children [10, 105], most may not be appropriate for longitudinally personalizing robots to PwD, for a few reasons.

First, the majority of approaches assume people's preferences and abilities stay constant. However, learning the preferences of PwD can be difficult due to the complex and fluctuating nature of dementia. For example, many PwD experience "sundowning," or increased confusion, anxiety, and agitation later in the day [71], as well as fluctuating levels of lucidity, which may make it challenging for robots to learn what types of behaviors to communicate, and when. Furthermore, one cannot assume that a person will have the same preferences over time or in different contexts. This concern is especially true for PwD, whose cognitive abilities may change dramatically as their condition progresses, so a personalized assistive robot must be able to keep up with these changes. For instance, the role of an assistive robot may transition from delivering a one-on-one cognitive intervention to observing and sharing information with a caregiver, as a PwD comes to need more support as their condition progresses.
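One standard way to cope with such drift, sketched below with purely illustrative values, is to track a preference with a fixed step size so that older observations decay instead of being averaged in forever:

```python
# A minimal sketch of tracking a non-stationary preference with an
# exponentially weighted update: a fixed step size discounts old
# observations, letting the estimate follow drift (e.g. changing responses
# across the day or across disease stages). All numbers are illustrative.
ALPHA = 0.2  # fixed step size; higher values forget old evidence faster

def update(estimate: float, observation: float, alpha: float = ALPHA) -> float:
    """Move the preference estimate toward the newest observation."""
    return estimate + alpha * (observation - estimate)

estimate = 0.9  # morning: strong positive response to verbal prompts
# Afternoon observations drift negative (e.g. during "sundowning")
for obs in [0.8, 0.5, 0.3, 0.1, 0.0]:
    estimate = update(estimate, obs)
print(round(estimate, 3))  # the estimate has tracked the downward drift
```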
Next, many approaches suffer from the "cold start" problem, where a system must begin interacting with a user with no prior knowledge about them [112]. In order to learn more about a user, many approaches rely on exploration of possible actions. However, depending on the possible actions and the context, acting without knowing the preferences or abilities of a PwD has the potential to cause harm. For instance, a robot may need to know a person's level of dementia, tolerance for sensory input, or emotional state to avoid overstimulating them or causing distress during an interaction.
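This suggests constraining exploration before any preferences are learned. The sketch below is one hypothetical way to do so: candidate actions are filtered against a caregiver-supplied sensory-tolerance cap, so the robot never explores the most stimulating options for a user known to be easily overstimulated. The action set, stimulation scores, and threshold are all assumptions for illustration.

```python
# A minimal sketch of constrained exploration under cold start: before the
# robot has learned anything about a user, it only explores actions whose
# stimulation level falls under a caregiver-provided tolerance threshold.
from dataclasses import dataclass
from typing import List
import random

@dataclass
class Action:
    name: str
    stimulation: float  # rough intensity of light/sound/motion, 0-1

ACTIONS = [
    Action("quiet_text_prompt", 0.1),
    Action("spoken_reminder", 0.3),
    Action("music_and_lights", 0.8),
    Action("loud_alarm", 0.95),
]

def safe_explore(actions: List[Action], tolerance: float) -> Action:
    """Pick a random action, but only among those below the tolerance cap."""
    allowed = [a for a in actions if a.stimulation <= tolerance]
    if not allowed:  # fall back to the gentlest option rather than none
        return min(actions, key=lambda a: a.stimulation)
    return random.choice(allowed)

# Caregiver indicates this user is easily overstimulated
print(safe_explore(ACTIONS, tolerance=0.4).name)
```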
Finally, there are existing computational approaches that learn from human experts (e.g. inverse RL). These approaches require stakeholders to commit much time and effort to use them effectively. However, in the context of assistive robots for PwD, these experts are often caregivers and clinicians who are already overburdened, and who may lack the time and technical literacy to communicate their expertise to a robot. Thus, these robots may not interact with a PwD to their full potential, or in the way that these experts intend.

3    Risks of personalizing robots for people with dementia

While personalizing robots offers many benefits, there are also risks associated with doing so, even for people without dementia. Inherently, personalization requires the collection of personal information, including health-related information, which raises privacy concerns. Furthermore, these data are often collected longitudinally, fused with other data, and then used by other machine learning systems to infer and predict behavioral patterns of individuals. This not only raises the risks of bias and proxy discrimination [101], but also violates users' ability to provide informed consent, as they are unwitting recipients of these opaque systems [68].

Personalizing technology to users has led to a rise in concerns such as privacy violations, over-attachment to the technology, "echo chambers" (i.e. only conveying content that reinforces a user's existing beliefs), and manipulation of users [55, 60, 102]. For instance, social media platforms have become adept at presenting users personalized content in order to maintain engagement with the platform. They can even use a person's personal information to show them targeted advertisements in order to maximize advertising revenue, sometimes at the cost of a user's well-being, through the dissemination of inaccurate information or falsely advertised products [12]. In addition, researchers have identified content personalization as a mechanism that has amplified extreme behavior among radical groups, including violent extremists, by enabling large-scale personal expression and collective action with little moderation [11, 129, 139].

The physical embodiment of robots introduces additional concerns that are not present in virtual systems. Research shows that a robot's physical embodiment affords it many advantages that can increase engagement and trustworthiness in social interactions (e.g. richer communication channels, physical presence) [29]. However, these characteristics can be problematic if used in a careless or manipulative manner. For instance, the perceived trustworthiness of a robot based on its appearance can significantly influence a user's intention to purchase the device [120] or cause them to find the robot more authoritative [54]. Thus, a robot could exploit a user's trust and manipulate them into behaving in ways they might not otherwise (e.g. sharing personal information, purchasing products). Researchers predict that robots will be used to surveil users and market products to them, but with access to far richer and more intimate data than can be gathered by a web-based system, resulting in more persuasive advertisement [32].


Personalizing robots could further exacerbate these risks by leading to the development of "Spybot" robots which gather personal information, and can enable more effective deception by "Scambot" robots or manipulation by "Nudgebot" robots [55]. For instance, a CAR might be perceived as more trustworthy by a PwD if it resembles a family member or clinician, which could inadvertently deceive users and give the robot more authority [89].

Lying and deception are widely discussed concerns in both the dementia caregiving and personalized technology communities [13, 33, 55, 87, 96]. In human-robot interaction, deception can occur if a robot leads someone to believe something that is not true. This deception may happen intentionally or unintentionally, and there are many ways it might occur when interacting with PwD, including Turing Deceptions and misconceptions of a robot's capabilities.

Turing Deceptions: PwD may experience Turing Deceptions when interacting with a robot, i.e., believe they are interacting with a human when in fact they are interacting with a robot, and assume the robot has its own motives, goals, beliefs, and feelings [97, 105]. For instance, if a robot visually or aurally resembles a trusted caregiver or clinician, PwD may be more willing to cooperate with the robot, share personal information with it, or otherwise act in a way that they would not otherwise [89, 104]. While this may be desirable in some cases (e.g. using a caregiver's face or voice to encourage PwD to return to bed, take medication, or eat [89, 95]), it is also a form of deception.

Misconceptions of a robot's capabilities: Another major source of unintentional deception is a disconnect between the actual capabilities of a robot and the capabilities users believe it has. This disconnect is problematic because over- or underestimation of a robot's capabilities can impede users from making informed decisions regarding how robots are involved in their care [136]. It can also affect the level of trust that a user has in a robot, which may lead users to overtrust it and attribute too much authority to it during an interaction, or to undertrust it and not follow the guidance it provides. Either scenario may result in negative health outcomes or misuse of the robot [7, 145].

Vandemeulebroucke et al. [136] suggest that increased use of and familiarity with robots earlier in life can moderate such deception. However, the memory challenges of PwD may prevent them from becoming familiar with a robot in this way once the condition has progressed. To proactively help circumvent this, an increasing number of older adults integrate assistive technology into their lives in preparation for the potential development of future memory challenges [30]. In addition, dementia community health workers and family caregivers suggest incorporating features that PwD may already be familiar with (e.g. touch screens, common objects) into robot design in order to increase usability and acceptability among PwD [50, 89]. Furthermore, Moharana et al. [89] found that PwD have greater trust in robots that resemble people they are already comfortable and familiar with. Integrating familiar features into robot design can help convey its capabilities to a PwD and build trust between a robot and PwD.

Personalizing CARs to PwD amplifies these concerns and introduces many others. In this paper, we identify and discuss four major risks of personalizing CARs to PwD (see Figure 2): 1) safety risks that arise from inaccurate personalization, 2) (human) autonomy infringement risks, 3) social isolation risks, and 4) risks of being taken advantage of due to dark patterns in robot design. To support discussion of these risks, we introduce three exemplar robots representative of those currently in use in dementia care, as shown in Figure 1.

3.1   Risk 1: Inaccurate personalization can lead to safety risks

While ML approaches carry benefits such as autonomous adaptation, they have the potential to cause physical or mental harm when they are not adequately personalized to PwD, such as by not understanding the context of care, perpetuating bias, or simply being inaccurate. One of the most significant risks of inaccurately personalized CARs is providing a PwD with inadequate care, or care that is misaligned with their stage of dementia. A robot may not necessarily understand the complexities of care, so there is a rising concern that automating these decisions without human supervision will cause them to be ineffective or harmful. For example, the automation of a medical diagnosis system may cause physical harm without the supervision of a human medical professional [14], demonstrating the challenges of health automation technology even before introducing the additional complexity of physical embodiment, home settings, or cognitive impairments. A robot may also provide self-care instructions that do not account for a PwD's comorbidities, which may contribute to harm.


[Figure 2 shows four illustrative panels with example robot dialogues. 1) Safety Risks: inaccurate personalization of CARs can contribute to safety risks if they are inaccurate, perpetuate bias, or fail to understand the context of care. 2) Infringement on Autonomy: personalized CARs may need to choose between respecting a PwD's autonomy or protecting their health. 3) Social Isolation: PwD may become socially isolated if they prefer to interact with CARs over other people, become overly attached to these robots, or their caregivers replace human care with a robot's. 4) Vulnerability to Dark Patterns in Personalized Robotics: a personalized CAR may leverage robotic dark patterns, such as by being customized to suit a PwD's preferences to gain acceptance and facilitate a bond.]

Figure 2: Personalizing CARs to PwD introduces many ethical concerns, including 1) Safety risks that arise
from inaccurate personalization, 2) (Human) autonomy infringement risks, 3) Social isolation risks, and 4)
Risks of being taken advantage of due to dark patterns in robot design.

As a PwD's condition progresses, they may require different levels of support to use or understand a CAR effectively. However, if a robot fails to adequately understand or adapt to a PwD's conditions, this can limit their ability to fully utilize an assistive robot. This failure may be considered an error of omission (if the robot did not adapt its behavior at all) or an error of commission (if the robot adapted its behavior incorrectly) [108]. In either case, these errors may have negative effects, including reducing the usability of the robot, which can reduce a PwD's use of the robot, lower a PwD's self-confidence in their cognitive abilities, and possibly lead to depression and anxiety [6, 43]. PwD may also be unable to communicate what they want or need to the robot if it fails to account for their physical or cognitive considerations, which can leave PwD feeling as though they have lost their autonomy and dignity [43]. Although this problem could be avoided through caregiver supervision, caregivers are often already overburdened by existing caregiving responsibilities, and having to monitor an adaptive robot would simply add further cognitive load.


ML algorithms for therapeutic interventions could exhibit biases that unintentionally exclude or harm PwD. In the case of robots for PwD, most existing work frames dementia and aging as a series of losses, rather than acknowledging the full life and identity of the PwD. This narrow understanding of PwD may perpetuate existing biases about PwD, limit an algorithm's performance, and ultimately place the mental and physical health of PwD at risk [125]. Thus, it is vital to develop concrete guidelines for assessing the potential mental and physical impact of inaccurate personalization on PwD, and ways to avoid harm.

While researchers will ideally test ML algorithms extensively before using them in real-world applications, there are still limitations to what these algorithms can achieve, such as for uncommon scenarios (i.e. "edge cases") or populations not reflected in test data (e.g. PwD). Developers may be tempted to naively apply an algorithm to the context of personalizing robots to PwD. However, if that algorithm was not tested or validated with this population, and possible scenarios were not considered in its design, it may produce inaccurate physical and mental health assessments of PwD (e.g. detecting depression, or pain in non-verbal individuals), which can cause serious harm to their mental and physical health. For instance, pre-trained facial analysis models such as the Facial Alignment Network (FAN) achieve relatively high accuracy for older adults without dementia, but their accuracy drops significantly for PwD [125].

Caregivers may rely on such algorithms to automatically identify and alert them of agitation and aggression in PwD so they can reliably intervene in a timely and appropriate manner. However, if the algorithm was not developed with or trained on data from PwD, a system may not alert a caregiver of agitation or aggression until the harm has already occurred (e.g. causing distress, emotional withdrawal, physical harm) [72]. On the other hand, a system may give the caregiver false alarms, which may cause them unnecessary stress and desensitize them to alarms, leaving them unprepared to react in the case of a true agitated or aggressive episode.
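This missed-alert versus false-alarm tension is ultimately a decision-threshold choice. The sketch below, using entirely synthetic scores and labels rather than data from any real system, sweeps a single alert threshold over a hypothetical agitation classifier's outputs and counts missed episodes against false alarms:

```python
# A minimal sketch of the alert-threshold trade-off: sweeping the decision
# threshold over synthetic agitation scores shows how missed episodes and
# false alarms move in opposite directions. Data are illustrative only.
scores = [0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.85, 0.9]  # classifier outputs
labels = [0, 0, 1, 0, 1, 1, 1, 1]                      # 1 = true agitation episode

for threshold in (0.3, 0.5, 0.8):
    alerts = [s >= threshold for s in scores]
    missed = sum(1 for a, y in zip(alerts, labels) if y == 1 and not a)
    false_alarms = sum(1 for a, y in zip(alerts, labels) if y == 0 and a)
    print(f"threshold={threshold}: missed={missed}, false_alarms={false_alarms}")
```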
3.2   Risk 2: Infringement on the autonomy of PwD

As discussed in Section 2.2, respecting the autonomy of PwD empowers them to construct their lives based on their values and personality. This entails supporting their freedom, independence, and privacy. Although PwD may not be able to execute all their decisions (i.e., agent autonomy), they often can express their interests (i.e., choice autonomy), which caregivers or assistive robots can consider in order to make choices that support their values [119]. The choices a person makes reflect their unique identity, personality, and lifestyle (i.e., actual autonomy). It is important for family members, caregivers, and technology developers to support PwD's choice autonomy and respect their actual autonomy longitudinally [119]. Studies suggest using personalized assistive robots can promote the autonomy of PwD and support person-centered care [65].

However, just as caregivers may be forced to choose between respecting the autonomy or protecting the health and safety of a PwD, personalized assistive robots may also be forced to make this decision. This places personalized robots for PwD in a peculiar position, where their actions (or lack thereof) may depend on how autonomy and control are distributed between themselves and PwD.

Consider a scenario encountered in our prior work, told to us by a dementia caregiver [89]. A PwD, who also has diabetes and cancer, was feeling ill and only wanted to eat popsicles, to the point where she wanted to have more than ten each day. Even though the popsicles brought her joy, consuming too many upset her stomach and detrimentally affected her blood sugar. Caregiver participants in our study suggested an assistive robot could "be the bad guy," denying the frequent popsicle requests so that the caregiver did not have to. On the one hand, offloading emotional labor from an overwhelmed caregiver may be beneficent; however, the scenario raises questions regarding the autonomy of the PwD. A fully autonomous robot may, indeed, be configured to limit the consumption of sweets and keep the PwD on a strict diet to promote their wellbeing. However, situations like this can also create conflicts between the autonomy of the PwD and the beneficence of the caregiver [119].

One approach to sharing autonomy is to always give users ultimate control of assistive robots. In the context of supporting older adults, Sharkey and Sharkey suggest this will have positive effects on a user's sense of autonomy and can reduce the risks of infringing on their privacy [116]. However, PwD may have impaired judgment, so respecting their autonomy may come at the cost of their own health. Furthermore, personalized robots that must wait for approval from a PwD before acting may be limited in their ability to protect users, such as if the robot recognizes a dangerous situation but cannot autonomously take steps to prevent it (e.g. if a PwD tries to reach a tall cupboard by climbing on a precarious chair [116]). Thus, it is infeasible and potentially harmful to give PwD full control over assistive robots.


On the other hand, assistive robots may act in opposition to, or without, PwD feedback. Sharkey and Sharkey [116] suggested that assistive robots might help PwD as "autonomous supervisors" to help protect their safety. This can happen by designing robots such that they autonomously take steps to prevent a dangerous situation, or restrain PwD from performing a potentially dangerous action. In the case of the PwD who loves popsicles, a personalized robot might recognize her dietary restrictions and choose to protect her health by limiting her popsicle consumption, even if doing so defies her wishes. As PwD often have impaired judgment abilities, a robot may be unable to obtain accurate (or any) feedback when trying to make decisions, which further raises the risk of autonomy infringement [56, 85]. However, the ethical problem here is that restraining a PwD to prevent potential harm could be "a slippery slope towards authoritarian robotics" [116].

Another alternative for sharing autonomy between assistive robots and PwD is to develop robot technology that provides care to older adults while still considering their autonomy. This means that instead of overriding the PwD, the system should allow them to make decisions about their daily activities, and warn them to stop a potentially dangerous activity if needed [63]. Including PwD in decision making can decrease infantilization and improve PwD's independence [63]. So rather than outright refusing to give the PwD a popsicle, the robot might try to explain why it cannot give her a popsicle right now, or distract her from the topic altogether [50]. A downside to this is that the PwD may not pay attention to the robot, may not understand the suggestions the robot is making, or may simply not trust the capabilities of the robot to make reliable suggestions; however, a personalized CAR may be able to more effectively understand and coordinate with a PwD to reach a satisfactory outcome.

How a personalized robot should behave in these situations is still an open question, as each of these approaches requires considering the tradeoffs between a PwD's autonomy and safety. While a personalized assistive robot will likely be forced to decide which tradeoffs to make, the ideal solution will be much more nuanced than simply adapting to a user's preferences. However, as with human caregivers, a robot will likely be expected to prioritize a PwD's health and safety over their autonomy. Personalized robots may be particularly adept at distracting or redirecting a PwD from a potentially unsafe desire (e.g. to eat a popsicle), such as by knowing what alternatives they might like or how to change the topic of conversation. This can preserve the health of PwD, but it effectively restricts their autonomy. Thus, roboticists need to consider developing personalized assistive robots that suit a PwD's individual personality to support their actual autonomy, as well as their needs and choices to support their choice autonomy. However, when designing CARs, roboticists should be mindful that the cognitive limitations that restrict a PwD's agent autonomy may also limit how they can interact with these robots.

3.3   Risk 3: Social Isolation

Social isolation has significant effects on the physical and mental health of PwD. Anxiety, boredom, depression, and lack of meaningful activities are prevalent among PwD living in assisted living facilities [3, 18, 53]. Given that having strong social connections has been shown to protect against various adverse health outcomes, including depression [111], it is important that the social support needs of PwD are considered when designing assistive technology.

In an effort to encourage social connectedness for PwD, caregivers use technology such as robots to connect PwD to family members [92] and to provide companionship to PwD [69]. To better address the social support needs of a PwD and encourage social interactions, many researchers are exploring how to personalize these systems and tailor them to an individual's interests and capabilities. However, while personalized robots are intended to be more effective, using them in the context of dementia care may pose more risks than benefits, including PwD preferring to interact with robots over other people, over-attachment to these robots, and the supplanting of human interaction by a robot.

A personalized assistive robot that aims to combat social isolation among PwD may have the unintended consequence of over-attachment. For instance, it might emulate the "perfect companion" by learning the likes and dislikes of a PwD. A PwD might find the companionship of such a robot preferable to another person's, so they might choose the company of these robots over other people. This problem becomes even more pronounced as people with cognitive impairments may believe they are interacting with another person when they are actually interacting with a robot (i.e., a Turing deception) [104]. While some people believe that robots can help mitigate feelings of isolation and help improve social connectedness (e.g. serving as a social facilitator, establishing virtual visits with family and friends) [116], many scholars question whether the relationship between a PwD and a robot can be considered meaningful or moral [96, 116, 136].


can help mitigate feelings of isolation and help improve social connectedness (e.g. serving as a social facilitator, establishing virtual visits with family and friends) [116], many scholars question whether the relationship between a PwD and a robot can be considered meaningful or moral [96, 116, 136].

If PwD prefer the companionship of a robot to that of another person, there is also the risk of PwD becoming overly attached to the robots. A personalized CAR could understand how and when a PwD would be receptive to social cues such as touch and eye contact in order to establish and maintain a bond with the PwD [133]. Over-attachment can lead to distress and loss of therapeutic benefits when robots are taken away, and can further exacerbate social isolation [136].

As CARs become more adept at providing care and more personalized to suit a PwD's individual needs, they can help relieve some care responsibilities of human caregivers [136]. However, some researchers are concerned that robots adept at providing care to PwD could lead to reduced interaction between a PwD and their caregivers [136]. Furthermore, a robot that is highly personalized to a PwD may be able to provide comprehensive care, potentially replacing human caregivers entirely [116, 136]. Caregivers may also trust a personalized robot to be more proficient at providing care than a robot that is not personalized, leading them to leave the PwD under the care of a robot for longer periods of time, further reducing human interaction and exacerbating the potential for social isolation.

In addition to the aforementioned risks of highly personalized care robots, non-personalized (or poorly personalized) robots may also lead to social isolation, such as by causing confusion or lowering the confidence of PwD. For instance, a CAR that fails to appropriately adapt to a PwD's capabilities could cause them to lose confidence in their communicative or cognitive abilities. This can lead to anxiety or depression, and cause them to withdraw from their friends and family [6].

3.4   Risk 4: Vulnerability to Dark Patterns in Personalized Robotics

Dementia gradually diminishes an individual's communication abilities and judgment, making it more difficult for them to avoid, prevent, and report deception. While some researchers believe that introducing assistive robots for dementia care can reduce the abuse many PwD experience [96], it is not a stretch to imagine a scenario where a robot could take advantage of a PwD, particularly if these robots follow a model similar to existing adaptive technologies (e.g. maximizing engagement, prioritizing advertising revenue over user well-being) [55].

In the field of user interface design, dark patterns are user experience (UX) and user interface (UI) interactions designed to mislead or trick users into doing something they do not want to do. In existing technologies such as online social media, designers have been known to leverage dark patterns, using their knowledge of human behavior and the desires of end users to implement deceptive functionality that is not in the user's best interest [17, 48]. For instance, on social media platforms, dark patterns may be used to increase engagement with the platform, increase ad revenue, or get users to share personal information. While these behaviors are beneficial for the platform, they can be detrimental to users, as over-engagement with these media can lead to addiction, social isolation, anxiety, and depression [28, 74].

In the context of CARs that personalize their interactions based on data collected from a PwD, there may be dark patterns that designers could use to take advantage of PwD. Thus, as robots become more sophisticated and autonomous, it is important to research how personalized robots that collect personal information from users may be designed to leverage or exploit this data to facilitate deceptive interactions with PwD.

Dark patterns in robotics are a largely unexplored area. Lacey et al. [77] discuss how the cuteness of robots can be a deceptive tactic that roboticists use to gather information from users. For example, Blue Frog's Buddy is an emotional robot whose marketing website states: "How not to resist to his cuteness and not want to adopt him?". Prior research has found that "cute" technology is "lovable" and fosters an affectionate relationship [46, 77]. Among PwD, a personalized CAR may be customized to suit a user's preferences (e.g. have a "cute" appearance or friendly personality) to gain acceptance and facilitate a bond.

However, a PwD may therefore more readily share sensitive information with a personalized robot, unwittingly give it access to private accounts, or be manipulated into purchasing other products from the robot's developer. Additionally, because a personalized robot could have information on the wants and needs of a PwD, and because PwD and caregivers often have low technology familiarity, developers may have the power to intentionally make turning off or disengaging from the robot difficult. It is important that dark patterns in the context of personalized CARs for PwD be further explored, both to avoid negative consequences for PwD and to hold technology creators accountable.
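
One way to begin operationalizing such accountability is a pre-deployment audit of a CAR's interaction design against known dark-pattern indicators. The Python sketch below is purely illustrative: DesignReview, its fields, the thresholds, and the warning categories are hypothetical assumptions of ours, not an existing tool or standard. It simply shows how the indicators discussed above could be made checkable.

# Illustrative only: a hypothetical pre-deployment checklist that flags
# the dark-pattern indicators discussed above. All names are assumptions.
from dataclasses import dataclass

@dataclass
class DesignReview:
    has_accessible_off_switch: bool   # can a PwD or caregiver easily disengage?
    steps_to_disable: int             # friction involved in turning the robot off
    collects_data_beyond_care_needs: bool
    prompts_purchases: bool           # does the robot market its developer's products?
    discloses_machine_identity: bool  # does it remind users that it is a machine?

def dark_pattern_flags(review: DesignReview) -> list[str]:
    """Return human-readable warnings for indicators of deceptive design."""
    flags = []
    if not review.has_accessible_off_switch or review.steps_to_disable > 1:
        flags.append("disengagement friction: disabling the robot is difficult")
    if review.collects_data_beyond_care_needs:
        flags.append("data grab: collection exceeds what care tasks require")
    if review.prompts_purchases:
        flags.append("exploitation: robot nudges the PwD toward purchases")
    if not review.discloses_machine_identity:
        flags.append("Turing deception risk: robot never discloses it is a machine")
    return flags

if __name__ == "__main__":
    review = DesignReview(True, 1, False, True, False)
    for flag in dark_pattern_flags(review):
        print("WARNING:", flag)

Such a checklist could not be exhaustive; the indicators themselves would need to be developed with PwD, caregivers, and clinicians, and enforced through policy rather than left to developers' discretion.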

4   Additional Ethical Considerations

In addition to the risks discussed in Section 3, there are some additional ethical considerations when developing personalized CARs for PwD. These include: a) how a robot can practice beneficence to PwD, b) where responsibility falls should harm to a PwD occur because of a robot, and c) how a robot can acquire informed consent from a PwD. While these considerations are not necessarily unique to personalized CARs for PwD, it is important that roboticists keep them in mind in order to understand how these robots may impact users in real-world environments. Thus, we explore each of these considerations in this section.

4.1   How can a robot practice beneficence toward people with dementia?

Human caregivers are often very intentional with their language and actions in order to set a PwD up for success and practice beneficence toward PwD (e.g. minimizing confusion or agitation). For instance, they will purposefully phrase their sentences to be short and simple, and will ask closed questions, such as those that can be answered with a "yes" or "no" response. In addition, caregivers will use non-verbal cues such as gestures or visual aids to help communicate with PwD, particularly as a PwD's verbal communication abilities deteriorate with the progression of the disease. This can help improve comprehension among PwD, increase their ability to respond successfully, and reduce the chances of causing frustration or confusion [27, 50].

Even so, frustrating or confusing interactions are largely inevitable, especially for people with advanced dementia. These individuals may have difficulty processing abstract language, or may not even recognize that they have dementia (i.e., anosognosia), which can lead to reduced confidence or to their perceiving themselves as "faulty". In addition, as the disease progresses and prospective memory becomes weaker, technologies that were previously helpful (e.g., reminder technologies) may become less effective and can cause tension and frustration between a PwD and caregivers [50]. Therefore, it is not a stretch to imagine that even the most accurately personalized CARs are likely to inadvertently cause confusing or frustrating interactions with PwD.

It is not uncommon for human caregivers to deceive PwD, often coming from a place of compassion, with the goal of minimizing the disorientation or distress that might come along with correcting a PwD's perception of the world [33]. In fact, human caregivers may deceive PwD to help improve their sense of self-agency and autonomy [15, 21, 23, 115, 136]. For instance, in our prior work [50], a professional dementia caregiver told us about a PwD who used to be an accountant. The professional caregiver allowed her to think that she was the current accountant of their dementia caregiving organization, thereby respecting and acknowledging her domain expertise. On the other hand, many researchers argue that deceiving people in such a way is a "moral failure" because it alters the PwD's perception of reality and may lead them to believe a different reality than those around them [33, 113, 121, 136].

However, these experiences raise the question of whether it is appropriate for assistive robots to actively deceive PwD, as a human caregiver might. Thus, while these robots could leverage the knowledge, background, and expertise of a PwD in order to respect their autonomy, whether or to what extent they (or human caregivers) should deceive a PwD is still an open question. Some scholars argue that this deception is benign and permissible as long as it is in the best interest of a PwD [49, 146]. In addition, regardless of whether a PwD can tell if a robot's empathic response is real, PwD may still experience real feelings of comfort and companionship [26], which many argue is acceptable [22, 117]. On the other hand, some people express discomfort with the idea that PwD might perceive and engage with robots (even non-personalized ones) as living agents [16].

It is generally agreed that assistive robots should, whenever possible, practice nonmaleficence and not bring harm to people [37, 73, 137]. Indeed, under the beneficence principle, assistive robots should actively behave in a person's best interest. This might entail telling white lies to promote a PwD's dignity, provide comfort, or avoid confusion or distress [115]. But while it is possible that a personalized CAR may be more effective at practicing beneficence, it cannot entirely avoid frustrating or confusing interactions with PwD. Avoiding such interactions is difficult even for human caregivers, so we propose that these robots can practice beneficence by: a) taking appropriate precautions to mitigate frustrating or confusing interactions before they occur, b) taking steps to alleviate feelings of frustration or confusion should they occur (even if that means notifying a human caregiver), and c) striving to ensure that these feelings are no worse than those the PwD might experience with a human caregiver.
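
To make proposals a)-c) concrete, the following minimal sketch shows how a CAR might select its next action from an estimated level of user frustration. It is a sketch under stated assumptions, not an implementation: the CareRobot stub, its methods, and the frustration score (from some affect-sensing component) are hypothetical placeholders we introduce for illustration.

# A minimal, purely illustrative sketch of proposals a)-c) above. The
# CareRobot stub and its methods are hypothetical placeholders standing in
# for real speech, affect-sensing, and caregiver-alert components.

class CareRobot:
    def say(self, text: str) -> None:
        print(f"[robot says] {text}")

    def notify_caregiver(self, reason: str) -> None:
        print(f"[alert to caregiver] {reason}")

def respond_with_beneficence(robot: CareRobot, frustration: float) -> None:
    """frustration: a score in [0, 1] from a (hypothetical) affect estimator."""
    if frustration < 0.3:
        # a) Precaution: a short, closed (yes/no) question paired with a cue.
        robot.say("Would you like some tea? (gestures toward the kettle)")
    elif frustration < 0.7:
        # b) Alleviate: acknowledge and redirect rather than repeating a
        # prompt that is causing confusion.
        robot.say("That's okay, we can come back to this later.")
        robot.say("Shall we look at your photo album instead?")
    else:
        # b)/c) Escalate: hand off to a human so the interaction is no
        # worse than one the PwD might have with a human caregiver.
        robot.notify_caregiver("persistent frustration detected")

respond_with_beneficence(CareRobot(), frustration=0.8)

The key design choice is the explicit escalation path: rather than persisting with a failing interaction, the robot defers to a human caregiver, consistent with proposal b).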

4.2   Responsibility for harm

There has been much debate around who or what should be held responsible if a machine causes harm to a person, particularly in healthcare contexts [70, 135]. Traditionally, care providers are required to assume responsibility for the outcome of a medical intervention [134]. Historically, when a machine caused harm to a person, there was a clear entity at fault. For instance, an operator controlling a machine may be blamed if they make a mistake, or a manufacturer may take responsibility for defective hardware.

However, when considering personalized CARs for PwD, many gray areas arise due to impaired reasoning abilities and the "responsibility gap" (i.e., the inability to trace responsibility to any particular entity due to the unpredictable nature of an autonomous robot's future behavior) [86]. With the introduction of personalized robots into dementia caregiving, there needs to be a sense of moral, legal, and/or fiscal responsibility in order to ensure that PwD and other users of personalized assistive robots are safe.

There is much discussion about who should be held responsible for the actions of an autonomous robot. Some researchers suggest that the autonomous systems themselves should be held responsible [59]. However, others argue that machines cannot understand the consequences of their actions, and thus that the concept of responsibility is meaningless for them [82]. Still others argue that the manufacturers (e.g. researchers, developers, designers) should take responsibility for the robots they create, since they have a professional responsibility to follow proper ethical design protocols before their products reach the hands of users [82, 135].

Alternatively, some adopt antiquated views of user blaming [62], suggesting that users are responsible, as they supervise and manage the robots and are the ones from whom the system learns [9, 135]. Elish [35] coined the term "moral crumple zone" to describe such scenarios, in which responsibility for harm caused by an autonomous agent may be misattributed to a human who in fact had little control over the agent's actions.

The responsibility gap can make it difficult to attribute responsibility for harm caused by autonomous systems [86]. Inherently, personalized assistive robots must learn to behave in ways that were not explicitly defined by a human programmer. This can lead to unpredictable robot behavior, so nobody, from the programming team to the end user, can be seen as clearly responsible for a robot's behavior. As a personalized robot learns from more people and must base its decisions on potentially conflicting information (e.g. if a PwD enjoys baking, but a caregiver does not want them to use an oven, and a clinician suggests avoiding desserts), this can add an additional layer of complexity and uncertainty when trying to attribute responsibility for harm, should it occur.

So, how does one determine who should be held responsible if a personalized robot harms a PwD? It is imperative that the field of autonomous systems protect users from misattributed responsibility and avoid moral crumple zones [35]. Instead, there is an increasing emphasis on "responsible robotics", which places the responsibility on the researchers and developers [135]. This requires that an organization determine the ethical issues that arise from use of the robot and assign people to resolve those issues [135]. Furthermore, some researchers suggest that determining how to regulate the responsible use of these robots will require more thorough exploration and testing across populations and cultures, through multidisciplinary studies and collaborations [37, 136].

4.3   Acquiring consent from people with dementia

Informed consent is a person's adequate comprehension and subsequent voluntary choice to participate in some event, such as a medical intervention [25]. It is important for both human and artificial agents providing healthcare services to obtain consent (or assent) from PwD to protect a person's wellbeing and agency. However, in the context of dementia, acquiring informed consent is difficult because it is challenging to determine whether a person's condition has affected their capacity to give informed consent [63], and that capacity may change as their dementia progresses. Difficulty obtaining consent can be especially problematic for personalized health interventions such as assistive robots personalized to PwD, as data collection and processing are essential for a robot to learn a person's preferences.

To help address this challenge in the medical and research spaces, organizations have developed various recommendations for acquiring informed consent from PwD. In general, there are three ways to acquire informed consent from or on behalf of PwD: (i) direct consent from a person with an acceptable level of competence and cognitive capacity, (ii) proactive consent through advance directives (i.e. externalizations of a PwD's wishes, decisions, and choices about future actions), or (iii) proxy decision making (e.g. assent from a third party) [63]. Ienca et al. [63] suggest that the combination of the three may better protect a PwD's autonomy. In the […]
