This paper is a work in progress and is in draft form. The authors are working on developing it further. For now, you can cite it as a We Robot 2021 paper presentation. Feel free to reach out to the authors for an updated version. 8/24/2021

Social Robots and Children’s Fundamental Rights: A Dynamic
Four-Component Framework for Research, Development, and Deployment

VICKY CHARISI∗† , European Commission, Joint Research Centre, Spain
SELMA ŠABANOVIĆ∗ , Indiana University Bloomington, USA
URS GASSER, Berkman Klein Center, Harvard University, USA
RANDY GOMEZ, Honda Research Institute, Japan
This paper introduces a dynamic four-component framework for the development of trustworthy robots for children that are aligned with the United Nations Convention on the Rights of the Child. The development of this framework is based on a pilot project initiated by UNICEF on Artificial Intelligence and Child’s Rights and draws (i) on insights from a set of empirical cross-cultural participatory studies with n = 76 children from Japan, Uganda and Greece (6-16 years old) and a survey with n = 171 parents in the US and Japan; and (ii) on the existing literature on child development and child-robot interaction interpreted through the lens of the policy guidelines on Artificial Intelligence and Child’s Rights proposed by UNICEF. Our analysis identified four robot-specific technical components for consideration regarding social robots and children’s rights, namely (i) connectivity; (ii) autonomy; (iii) embodiment; and (iv) social embeddedness. We illustrate these components with indicative examples based on the general principles of children’s fundamental rights. The proposed framework aims to support transparency regarding the future implications of robots for children and to help researchers, developers, and all relevant stakeholders, including children, contribute to the development of an ecosystem of robots that promote children’s rights.

CCS Concepts: • Human-centered computing → HCI theory, concepts and models; Empirical studies in HCI.

Additional Key Words and Phrases: Embodied artificial intelligence, Social robots, Child’s rights, Participatory user studies

ACM Reference Format:
Vicky Charisi, Selma Šabanović, Urs Gasser, and Randy Gomez. 2021. Social Robots and Children’s Fundamental Rights: A Dynamic Four-
Component Framework for Research, Development, and Deployment. In Proceedings of WeRobot2021, September 23–25 (WeRobot2021).
ACM, New York, NY, USA, 30 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn

∗ Both authors contributed equally to this research.
† The views expressed are purely those of the author and may not in any circumstances be regarded as stating an official position of the European Commission.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not
made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components
of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to
redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
© 2021 Association for Computing Machinery.
Manuscript submitted to ACM


1 INTRODUCTION
Our lives are increasingly algorithmically mediated by Artificial Intelligence (AI) systems, and social robots are one
of their emerging areas of application. For the purposes of this paper, we use the notion of “social robots” to refer to
embodied, AI-enabled systems with embedded sensors and actuators, capable of perceiving and processing human
social signals, such as human intention communication, and complex social structures, and responding to them in
a socially meaningful way. We acknowledge that the term “social” typically refers to biological entities, and to qualities such as theory of mind and shared intentionality [87] that enable the development of human social cognition. However, the use of the term “social” as an attribute of robotic artefacts is a well-established convention
in the field of human-robot interaction [28]. This convention has been mainly based on two facts that are supported
by the relevant literature. First, the embodiment of robotic artefacts and their physically interactive presence within
the human environment can successfully stimulate specific fundamental building blocks of human social cognition,
such as shared attention and empathy [73]. Second, and partially as a result of the first, humans tend to attribute anthropomorphic characteristics, such as intentionality, to robotic artefacts even when they are aware that these artefacts are machines designed by humans [103]. Robots are therefore often perceived and treated as social by their human interaction partners.
 As prototypes and commercial products, social robots are increasingly reaching vulnerable populations including
children, who are in a critical period of their development with rapid changes in their cognitive, meta-cognitive and
socio-emotional abilities [58]. Additionally, children’s predisposition for attribution of anthropomorphic features to
inanimate objects may affect their perceptions and acceptance of robots in ways different from adults [94]. Children
develop through social interactions, and evidence shows they can perceive robots as part of their social groups [77],
which means that robots can affect their development.
Due to their effectiveness in engaging children’s social cognition, robots can successfully scaffold children’s learning and entertainment, and the demand for social robots that interact with children is likely to increase in the coming years. However, the profound impact this technology can have on children contributes to growing concerns and pressing considerations that first need to be addressed. The societally appropriate and beneficial adoption of social
robots for children will only be possible through responsible design, development and deployment, which eventually
would lead to an ecosystem of trustworthy robots that support and prioritize children’s fundamental rights.
While in the field of autonomous systems and robotics there is already work on methodologies for embedding ethics and human values by design (e.g. [21, 26, 100]), one of the first global initiatives that aims to systematize the design of AI-based systems that respect and promote children’s fundamental rights is UNICEF’s work on Policy Guidelines for AI and Child’s Rights [31]. This initiative reflects on the existing Convention on the Rights of the Child to identify elements that are relevant to AI-based technologies and to indicate actions that should be considered in order for AI systems to be aligned with the Rights of the Child. In this direction, UNICEF initiated a piloting phase for the application of the proposed guidelines in policies or AI-based products especially designed for children. In this wider context, we were invited to pilot a set of guidelines for the social robot Haru [41], which is currently being developed by the Honda Research Institute and which we describe in more detail in Section 3.2. The project involved a set of empirical studies as well as activities for the inclusion of the community of child-robot interaction (e.g. [25]).


2 GOAL AND GENERAL APPROACH
This paper aims to introduce a framework for research, design and deployment of social robots that are aligned with
and promote children’s fundamental rights. The development of the framework was based on scientific activities we
carried out in the context of piloting UNICEF’s requirements for AI and child’s rights [31], with a particular focus on
children’s narratives and desiderata regarding robotic systems. Additionally, we considered the existing relevant work
in the field of child-robot interaction interpreted through the lens of children’s fundamental rights.
We conducted a series of small-scale empirical participatory research studies with children aged 6-16 from Japan (Tokyo), Uganda, and Greece with the active participation of their educators. It is beyond the scope of the paper to present
detailed methodologies and results of each pilot study. Instead, we present a summary of the main points of the
studies in order to contextualize the conception and development of our framework. For those studies, we followed an
ethnography-based, iterative approach to identify the underlying principles for robot design by considering children’s
individual and cultural characteristics. We also tried to address issues of possible biases by considering the contributions
of a multi-disciplinary team with broad geographical distribution.
 Regarding the organization of this paper, based on our stance to prioritize children’s voices, we first present the
summary of the empirical participatory pilot studies. This is followed by a review of the relevant existing work in the
field of child-robot interaction which we interpret in the context of the general principles of children’s fundamental
rights. Based on the input from our empirical studies with children, the relevant literature, and UNICEF’s guidelines,
we introduce our four-component framework for the facilitation of robot design and development with the consideration
of children’s rights. We conclude with a discussion of our approach and with proposals for possible future research
directions regarding robots and children by considering children’s fundamental rights.

3 EMPIRICAL PILOT STUDIES
The empirical studies performed in our project included children, and at times their parents and educators, through
participatory activities that gave them opportunities to start and develop a conversation and reflect on various issues in relation to their understanding of social robots. For our empirical participatory studies we focused on
three requirements as proposed by UNICEF, namely

 • Prioritize fairness and non-discrimination for children
 • Protect children’s data and privacy
 • Provide transparency, explainability and accountability for children

Our main methodological approach for exploring the requirement of fairness, inclusion and non-discrimination was based on the paradigm of participatory research with children, including their educators; to better understand the protection of children’s privacy in the context of robotic technology, we performed participatory research with children and a survey with parents; lastly, for the evaluation of our system regarding the requirement of robot explainability, we conducted an experimental, controlled proof-of-concept empirical study. Throughout the
process of those pilot studies, we created a highly collaborative structure with regular interactions among all the
involved researchers, developers and educators who participated in all the iterations of the research and the technical
developments of the robot. An underlying common element for all our studies was the use of the Haru robot (for a
description of the robot, see Section 3.2). In this section, we elaborate on the overall methodology of the empirical pilot
studies, and we present a summary of the context and the goal of each pilot study, the methods, and some indicative
preliminary results of each study. A detailed description of the first pilot study appears as a published paper in [24],
while the rest of the findings will be published in the near future.

3.1 Methodology
We adopted an ethnography-inspired methodological approach which included a small targeted sample of children in
school contexts with the participation of their educators, as well as a survey performed with parents. In order to promote inclusion and non-discrimination in our exploration of children’s perceptions of robotic artefacts, we
purposefully included participants from four countries: Uganda, Greece, Japan, and the United States. The fact that we
included children from a rural area in Bududa, Uganda came with particular methodological and ethical challenges,
since, to our knowledge, this is the first study conducted in a rural African area on the specific topic. Apart from the
ethical approval of the study from the local office of Bududa District, we tried to address these challenges via direct interaction with the children and their educators before, during and after the conduct of our participatory
study as indicated by Spiel et al. [83]. To reduce methodological bias, we provided detailed and structured protocols for
the deployment of the activities which were co-designed together with the local educators in order to align with the
local cultures and resources of the participating schools. For all the studies with children we obtained informed consent from the parents or legal representatives as well as assent from the participating children.
More specifically, we conducted four small-scale pilot studies with n = 24 children in Uganda, n = 33 children in Japan, and n = 16 children in Greece, and a survey with n = 71 parents in the United States and n = 100 parents in
Japan. Of those, two studies were based on participatory design techniques [19] with the participation of children’s
educators in the form of Participatory Action Research [11], one was a proof-of-concept controlled experimental pilot
study, and the last one was based on data collected via a survey. For the participatory studies, we used the technique of story-telling, which was familiar to all our participating children, required minimal resources and infrastructure, and was within the educational culture of all our participating schools. According to the principles
of the Participatory Action Research paradigm, we co-designed the story-telling activities together with the educators
and we used stories from children’s everyday situations, imaginary situations and robot-related situations. The activities
were designed to be conducted in 4 sessions per school.
 For the data collection we used the resources available in each school. Our final data-set included video-data and
children’s inputs in written or digital form. We conducted thematic content analysis and identified themes that were specific to each study and themes that appeared globally across all studies. In this paper we only elaborate on the global themes that appear across studies, as discussed in Section 3.7.

3.2 The Haru Robot
In all our studies, Honda’s Haru robot was used as a common example of a social robot for participants. Haru is an
experimental tabletop robot for investigating embodied mediation through the robot’s expressiveness-centered design [?]. Haru’s design philosophy is focused on empathic interaction through its expressive behavior to explore the potential
of supporting long-term, sustainable human-robot interaction. The robot has a total of 5 degrees of freedom (i.e., base
rotation, neck forward-and-backward movement, rotation and tilting of outer eyes, and push-and-pull of the inner
eye module). Haru’s eyes include a TFT screen for visual media display. Moreover, it uses two sets of addressable LED matrices, around the border of the eyes to convey attention and on the spherical body to display its mouth. The robot can
communicate using text-to-speech (TTS) and non-verbal vocalization through its internal speaker. A software tool
to build animation routines through the combination of movements and audio-visual modalities is provided to allow
researchers to design nuanced robotic expressions [? ]. Lastly, the Haru robot comes with an AI perception suite for
understanding the world (i.e., humans and the environment) and a programmable decision-making module for automating the interaction process.
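As a purely illustrative sketch of how such an animation routine might be composed, the fragment below combines the modalities described above (motion, eye display, LED mouth, TTS) on a single timeline. The class and method names are hypothetical and do not correspond to Haru's actual software tools.

```python
# Hypothetical sketch only: composing an expressive routine for a Haru-like
# robot by combining motion, eye display, LED and speech events on a timeline.
# None of these names come from Honda's software; they illustrate the idea.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Keyframe:
    time_s: float   # when the action fires, relative to routine start
    channel: str    # e.g. "neck", "base_rotation", "eye_screen", "led_mouth", "tts"
    value: object   # angle in degrees, image name, LED pattern, or utterance text


@dataclass
class ExpressiveRoutine:
    name: str
    keyframes: List[Keyframe] = field(default_factory=list)

    def add(self, time_s: float, channel: str, value: object) -> "ExpressiveRoutine":
        self.keyframes.append(Keyframe(time_s, channel, value))
        return self

    def timeline(self) -> List[Keyframe]:
        # Sort by time so motion, display and speech stay synchronized at playback.
        return sorted(self.keyframes, key=lambda k: k.time_s)


# Example: a short "greeting" animation combining motion, eyes and speech.
greeting = (
    ExpressiveRoutine("greeting")
    .add(0.0, "neck", +10)                   # lean slightly forward
    .add(0.2, "eye_screen", "happy_eyes.png")
    .add(0.3, "led_mouth", "smile_pattern")
    .add(0.4, "tts", "Hello! Shall we tell a story together?")
    .add(1.5, "neck", 0)                     # return to the neutral pose
)

for kf in greeting.timeline():
    print(f"{kf.time_s:>4}s  {kf.channel:<12} {kf.value}")
```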

Fig. 1. The Haru robot, which was selected by UNICEF as one of the applications to pilot the Policy Guidelines for AI and Child’s Rights.

 One of the target use populations for the Haru robot is children, in diverse contexts including homes and classrooms,
interacting with the robot individually and in groups. It is therefore our aim to use the studies presented here to guide
the further development of the robot for interaction with children.

3.3 Empirical pilot study 1: Children’s perceptions of fairness in robotic scenarios in Japan and Uganda
In order to embed elements of fairness into our robotic system, we considered inclusion as a fundamental element
for our pilot study. Previous research indicates that children in different societies may perceive fairness in different
ways [76]. With our study, we aimed to explore children’s perceptions of the notion of fairness in robotic scenarios in
Japan and in Uganda. We specifically focused on including under-represented populations in terms of ethnic and cultural background, socio-economic level, and cases that might be unusual to include in child-robot interaction studies.
 Methods This study was conducted in two phases. First, we conducted a Participatory Action Research study with story-telling activities for n = 4 sessions at a school in Japan with n = 20 children (15-16 years old) and n = 4 sessions at a school in Uganda with n = 24 children (6-16 years old) (see fig. 2). We used storytelling to facilitate children’s
narratives on fairness and to identify areas of alignment and disconnect. The protocols for the two schools were the same, but the educators had the freedom to adapt elements of the protocol to the specific cultural context. This was
followed by the second phase of the participatory design study in Japan with n = 10 of the children that participated
in the first phase. Based on the story-telling results of the first phase, we empirically evaluated a tablet-based system
which could be used by children to design story-telling activities and robot behaviours for the physical Haru Robot (fig.
2). Following this study, we developed a prototype which included a child-friendly screen-based platform to facilitate
children’s designs of the Haru robot as a story-teller. The system consists of a tablet-based interface of a virtual robot
which functions as a low-cost mediator for the physical robot design (fig. 3). n = 10 children in Japan participated in a
single session and were asked to modify an existing story we had already prepared and implemented on the physical robot. With this session, we aimed to gain a first understanding of the feasibility of our prototype in order to use it as a low-cost method for inclusivity in robot designs for children.
 Summary of initial results We transcribed children’s narratives and we performed a thematic data analysis. Initial
results indicate that in the first phase of story-telling activities both groups, in Tokyo and in Bududa, referred to similar
aspects of fairness or unfairness, namely psychological, physical and systemic; however, children in Tokyo focused more

 Fig. 2. Participatory Action Research studies in Bududa, Uganda and Tokyo, Japan

 Fig. 3. Participatory design study with the Haru Robot in Tokyo, Japan

on psychological and mental aspects (e.g. being ignored by the robot) while children in Uganda emphasised physical
and material aspects of fairness (e.g. not getting access to a space, being physically constrained). Both groups increased
their emphasis on mental aspects in robot-related scenarios. In addition, we observed that children’s narratives in both
groups referred to a combination of robot-features such as its connectivity and its embodiment. All children expressed
their interest to further explore fairness and unfairness as experienced by children with different cultural backgrounds
and the need for inter-group contact. This study provided a set of children’s robot-related desiderata as a basis for the
development of design principles and technical requirements for robot companions that prioritize fairness for children.
In the second phase of this pilot study, we observed that children were able to navigate our system and use it in an effective way. The activities were designed to be collaborative, and the setup we designed supported children’s small
group collaboration during the design phase. Our plan was to distribute the platform to the school in Uganda as well,
but COVID-19 restrictions did not allow us to conduct the study at the school in Uganda. This has been included in our future steps. The system is currently being designed to support connectivity principles so that it can function as a tool for connecting the two groups of children (in Tokyo and Uganda), as indicated by the participant children in the first phase of the pilot study. Lastly, the thematic data analysis revealed that children’s
narratives about fairness include aspects of the robot’s connectivity, embodiment and autonomy as well as scenarios
that indicated different forms of the robot’s integration into children’s everyday routines. We considered these themes
as the seed for further research.

3.4 Empirical pilot study 2: Children’s perceptions of robots in relation to inclusion, privacy and fairness
 in Greece
Following our participatory study in Japan and Uganda on the topic of fairness in robotic scenarios, we designed and
conducted an additional participatory study to explore further notions that are related to the requirements proposed
by UNICEF in the context of robot scenarios, namely inclusion, privacy and non-discrimination. Since the first pilot
was conducted with children in Japan and Uganda, in order for us to achieve increased geographical and cultural
distribution, we invited children from Greece to take part in our second pilot study. The main goal of this pilot study
was for us to understand whether the methods and the initial indications we had from the first study could be used in a different context for a wider set of notions related to the child’s rights in the context of robotics.
 Methods We recruited n = 16 children aged 15 years old in Greece and we used the same methods and protocols as
our previous study in Japan and Uganda. This study was conducted in four sessions which were based on our existing
protocols as designed for the study on fairness. However, in the current study we aimed to introduce additional notions related to the child’s rights. The additional topics would emerge from an introductory session. The protocols
were adapted through a co-design session with the local educators. In all sessions, we used story-telling activities to
introduce children to the current developments in social robotics (session 1), and to facilitate children’s narratives on
non-discrimination (session 2), inclusion (session 3) and privacy (session 4) in robot-related scenarios.

Fig. 4. Content thematic analysis: Percentage of the themes (n = 25) that emerged during the discussion of the importance of children’s rights in the context of robotics

Fig. 5. Content thematic analysis: Percentage of theme occurrences per session. Session 1: Introduction; Session 2: Non-discrimination; Session 3: Inclusion; Session 4: Privacy.

 Summary of initial results We transcribed the narratives of the participant children and performed a thematic content analysis for all the sessions. For the introductory session, we analysed children’s references to various elements
of the rights of the child in relation to robotic technology. A preliminary analysis shows that children’s narratives
include aspects of non-discrimination, privacy, inclusion, education, accountability, well-being and safety (fig. 4). Based
on those initial results we selected the focus of the rest of the sessions (fairness and non-discrimination, inclusion and
privacy). Because of the different foci of the sessions, we developed an annotation scheme with two parts. The first part
included themes specific for the topics of inclusion, privacy and fairness while the second part included robot-features
namely its level of connectivity and autonomy, its embodiment and the ways that a robot can be embedded in children’s
social lives. Figure 5 shows the percentage of appearance of each of those themes in children’s narratives per session.
We observe that the integration of the robot into the social fabric of children’s everyday life seems to be the topic that emerged most across all sessions. To refer to this theme, we introduce the term “social embeddedness”, which we explain in more
detail in sections 3.7 and 5. A detailed analysis of this pilot study will be published as a separate paper.
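For readers interested in how the per-session percentages reported in Figure 5 can be derived from annotated data, the sketch below tallies theme occurrences per session. The annotation format (session, theme) pairs and the example records are hypothetical illustrations only, not our actual annotation scheme or data.

```python
# Illustrative only: tallying theme occurrences per session and computing
# percentages, as in Fig. 5. The records below are made-up examples.
from collections import Counter, defaultdict

# Each annotated transcript segment is recorded as (session, theme).
annotations = [
    (1, "privacy"), (1, "inclusion"), (1, "social embeddedness"),
    (2, "non-discrimination"), (2, "social embeddedness"),
    (4, "privacy"), (4, "connectivity"), (4, "social embeddedness"),
]

counts_per_session = defaultdict(Counter)
for session, theme in annotations:
    counts_per_session[session][theme] += 1

for session in sorted(counts_per_session):
    total = sum(counts_per_session[session].values())
    for theme, count in counts_per_session[session].most_common():
        print(f"Session {session}: {theme} {100 * count / total:.0f}%")
```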

3.5 Empirical pilot study 3: Parents’ perceptions of privacy in robotics context in Japan and the US
Privacy is one of the fundamental rights of the child, as defined by the United Nations. In prior literature on the
acceptability of social robots for in-home storytelling activities, privacy (i.e. the potential for robots to collect and
disseminate sensitive information) was a concern parents had in relation to the potential use of robots by children
[? ]. Along with understanding children’s perspectives on social human-robot interaction, we also wanted to better
understand how parents, who are likely to have a central role in giving children access to robots, thought about robot
design and privacy. With this in mind, we performed online surveys with parents in the US and Japan, asking specifically
about their perceptions of the robot Haru, its capabilities, and how they might want to use Haru in ways that conserve
their child’s, and family’s, privacy. These surveys will be followed up with more in-depth interviews in the two countries.
 Methods We sent surveys to participants who self-identified as parents on MTurk (for US participants) and Crowd-
Works (for Japanese participants). We had 71 respondents on MTurk (M = 39, F = 32) and 100 respondents on CrowdWorks (M = 47, F = 49, NA = 4). Prior to answering the surveys, participants viewed a brief introductory video of the Haru
robot. The survey asked them about their understanding of Haru’s sensing capabilities, the way it collects, stores, and
disseminates data about people it interacts with, and where participants would want to use the robot.

 Summary of initial results In the US, the most common places parents wanted to use Haru were the living room (n=41), office (n=30), and kitchen (n=23), and the least common were the bathroom (n=6) and the child’s nursery (n=2). This is
despite the fact that parents often use various types of digital monitoring equipment (e.g. cameras, motion sensors) in
nurseries. Parents generally thought Haru would store data internally (n=42), and much less in the cloud (n=10). They
mostly thought users and owners of the robot would have access to the data (n=31). The most common reason they
pointed out for picking the specific area of the house they wanted to use the robot in was because there was "little
to no risk of privacy invasion" (n=16), while "invasion of privacy" (n=24) was the most common reason for choosing
not to put the robot in an area. Parents in Japan similarly expected data to mostly be stored within the robot (n=49),
and thought that data would be accessed by the family (n=59) or the company (n=55). In Japan, the robot’s desired
placement was also the living room (n=85), followed by the guest room (n=42) and kid’s room (n=35), showing less
reticence to have robots interacting with kids in more private spaces of the home.
 These results suggest that parents, particularly in the US, are concerned about privacy in relation to social robot use
with children in the home. Furthermore, some of their expectations of how data is stored and used pose challenges
to robot design and existing technological capabilities, particularly the idea that data be stored and processed mainly
within the robot itself, which can be limiting to the robot’s capabilities. Overall, following parents’ preferences suggests
robots should be designed to operate in multi-user settings, such as living rooms, which will require them to be able to
distinguish multiple family members from each other and act accordingly. Privacy-sensitive designs will also require less use of cloud services to store and process data, which many current robot designs employ.

3.6 Empirical pilot study 4: Children’s perspectives of robot explainability in a child-robot interaction
 proof-of-concept study
Transparency of the decision-making of an AI system is important for the development of trust, especially in the case of child users. For this reason, explainable AI has been widely acknowledged as a crucial feature for the practical
deployment of AI models [7]. An explainable AI system is one that produces details or reasons to make its functioning
clear or easy for humans to understand. However, in the case of child users, we need to understand what types of
explanations are suitable for them, depending on the context and on the child’s developmental level, and when it is the
optimum moment to provide the explanation. Towards this end, we built upon our previous studies on child-robot
collaborative problem solving [22], in which the Haru robot interacted with children to support their problem-solving
skills with the use of the Tower of Hanoi task. We designed three types of task-related explanations for the Haru robot
which were given (or not) to the child for the support of the problem-solving process. This research will contribute to
the development of child-friendly explainable robots for collaborative problem-solving tasks.
 Methods We built upon our previous studies [22] to design a proof-of-concept empirical study with n = 13 children
in Japan in a child-robot collaborative problem-solving setting using the Tower of Hanoi task (fig. 6). We
designed three types of task-related explanations for the Haru robot which were given (or not) to the child for the
support of the problem-solving process as indicated in table 1. We used mechanical explanations to link cause and effect
for the task solution, contrastive explanations to contrast two solutions (i.e. why p rather than q) [64] and counterfactual
explanations to indicate the result of an alternative choice [93]. In a post-intervention session, we asked the participant
children about their opinion on the explanations of the robot. We collected two types of data: (i) behavioural data of
task performance which consisted of logged data during the task performance as well as the child’s initiated interaction
with the robot and (ii) post-intervention interview data about children’s perceptions of the explainable robot. In this
paper we focus on the second type of data namely the children’s narratives during the post-intervention interviews.

 Fig. 6. Proof-of-concept study on explainability in robots for collaborative problem-solving activities in Tokyo, Japan

 Table 1. Types of explanatory interventions

 Child's previous performance    Pedagogical goal                            Explanation of the algorithm
 Optimal and Fast                No intervention                             No explanation
 Optimal and Slow                Improve the learner's confidence            Mechanical explanation
 Suboptimal and Fast             Introduce the learner into planning         Contrastive explanation
 Suboptimal and Slow             Explain the reason a movement is wrong      Counterfactual explanation
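Read operationally, Table 1 amounts to a simple decision rule over the child's previous attempt. The fragment below is only a schematic rendering of that rule, assuming a binary optimal/suboptimal and fast/slow coding of the previous attempt; it is not the code used in the study.

```python
# Schematic rendering of the intervention rule in Table 1 (not the study's
# actual implementation): the robot selects an explanation type from the
# child's previous performance on the Tower of Hanoi task.
from typing import Tuple


def choose_explanation(optimal: bool, fast: bool) -> Tuple[str, str]:
    """Return (pedagogical goal, explanation type) for the next intervention."""
    if optimal and fast:
        return ("no intervention", "no explanation")
    if optimal and not fast:
        return ("improve the learner's confidence", "mechanical explanation")
    if not optimal and fast:
        return ("introduce the learner into planning", "contrastive explanation")
    return ("explain the reason a movement is wrong", "counterfactual explanation")


if __name__ == "__main__":
    # Example: the previous solution was suboptimal but quick.
    goal, explanation = choose_explanation(optimal=False, fast=True)
    print(goal, "->", explanation)
```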

 Summary of initial results We performed a thematic content analysis to extract children’s perceptions of a robot
that provides explanations during a collaborative problem-solving task. While most of the children found the robot’s
mechanical explanations useful, mainly because this was connected to their better task performance, there were children who considered explanations a robot feature that makes the robot’s decision-making more transparent, which they connected with a better understanding of the robot’s intentions. One of the children indicated that “understanding
the robot’s intentions regarding the solution of the Tower of Hanoi task was useful for our collaboration”. This suggests
that the robot explanation was not only perceived as task-related help behaviour from the robot but also affected the child’s perception of the potential for collaboration with the robot. In addition to the verbal interaction with the robot
and the role of the explanations, one child mentioned that “With Haru, there was more movement, a little more laughter. I
like it better when there is movement”. The embodiment of the specific robot allows the movement of the neck and the
eyes, which seem to affect children’s perceptions of their interaction with the robot. When asked whether the robot could be integrated into activities other than the problem-solving task they experienced with the physical robot, one of
the children mentioned “I would like to do the morning greeting exercise with Haru. When I come home, my siblings
are at school and my parents are at work, so I’m all alone. I want Haru to do my homework with me”. Although this
child interacted with the robot in a very controlled setting within a specific context, there was an emerging expectation
of the integration of a robot into other daily activities such as daily routines at the school or leisure and homework
activities at home. This finding is in accordance with findings in the existing work in the field of child-robot interaction, which raises concerns about the design decisions regarding the transparency of a robotic system and the approaches
for robot explainability towards child users.

3.7 Discussion of the empirical studies
In this section we presented a summary of the pilot participatory studies we conducted during the piloting of UNICEF’s guidelines on AI and Child’s Rights. The detailed presentation of the methods and the results of each pilot
study was beyond the scope of this paper. The summative presentation of the studies aimed to provide the basic
information and the context upon which the conceptualization and the development of our proposed framework
occurred. The results of the first pilot have been published in [24].
 These pilot studies had a fourfold role. First, they helped us understand children’s perspectives on social robots
in relation to a set of child’s fundamental rights and certain features of the system that support those rights. More
specifically, we explored the ways in which we can support children’s inclusion and non-discrimination in the design
of robotic systems, we investigated the parents’ perspectives on privacy issues that emerge with the use of robots in
children’s everyday activities and we tested the impact of certain types of robot explanations on children’s perceptions
and acceptance of the Haru robot as a learning companion for problem-solving activities.
 Second, the design and the conduct of those pilot studies contributed to our further understanding of methodological approaches for performing cross-cultural studies, by including children from typically under-represented
geographical regions and cultural backgrounds, such as our participants from Bududa in Uganda. While researchers in
the field of child-robot interaction have already indicated methodological practices for cross-cultural studies [9], to our
knowledge, this is one of the first attempts in which children from a rural area of Africa are included. For this reason, an ethnography-inspired approach was considered a suitable methodological paradigm, and the inclusion of and constant interaction with the local educators in the form of Participatory Action Research seemed important for the design and conduct of the pilot study in Africa.
 Third, we prioritized and facilitated children’s participation in the current dialogue for social robots and children’s
fundamental rights, and we provided the opportunity for children who are not typically included in child-robot interaction studies to approach the topic and formulate their own opinions, which were then considered as input for robot design.
 Lastly, we considered the results of our studies as an input for the development of technical requirements and a
theoretical framework for social robots that would align with children’s fundamental rights and minimize possible
emerging risks for child users. Considering children’s narratives, we observed certain robot features that appear regularly across studies and cultures, such as the degree of the robot’s connectivity, whether the robot functions autonomously, the role of its embodiment (e.g. sensors and appearance of the various modules), and the ways that the robot’s behaviour is designed to be integrated into the children’s social context. Table 2 indicates the four components
with the corresponding themes that appeared in the narratives of the children and a quotation example per component.
In section ??, we introduce the proposed framework and indicate connections to the Rights of the Child in the context of robotic systems, as we discuss in Section 4.1.
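As a purely illustrative sketch (not a specification from our framework, which is introduced later in the paper), the four components could be recorded as a simple structured checklist during design reviews, with each component annotated by the categories in Table 2 and the rights considerations it touches. The field names and example values below are assumptions made for illustration.

```python
# Illustrative only: a minimal structure for recording how a robot design
# addresses the four components identified across the pilot studies.
# Field names and example values are assumptions, not a specification.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ComponentAssessment:
    component: str                 # "connectivity", "autonomy", "embodiment", "social embeddedness"
    categories: List[str]          # categories from Table 2, e.g. "degree of connectivity"
    rights_considerations: Dict[str, str] = field(default_factory=dict)


# Example entry for a hypothetical storytelling deployment of Haru.
connectivity = ComponentAssessment(
    component="connectivity",
    categories=["degree of connectivity"],
    rights_considerations={
        "privacy": "process audio on-device where possible; document any cloud use",
        "non-discrimination": "do not assume reliable internet at all deployment sites",
    },
)

for right, note in connectivity.rights_considerations.items():
    print(f"{connectivity.component} / {right}: {note}")
```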
 Limitations and future work While these pilot studies provide a number of first indications regarding children’s
perceptions on topics that relate to their fundamental rights in the context of social robots, they were small-scale
studies which do not allow the generalization of the results. Our future work includes the involvement of a larger
number of children from additional geographical regions with a special emphasis on the inclusion of children from
under-represented areas and cultural backgrounds. However, conducting child-robot interaction studies in remote

 Table 2. Themes, categories and indicative examples based on the thematic content analysis across pilot studies

 Theme                  Categories                              Examples
 Connectivity           Degree of connectivity                  "The robot knows the location of my home and checks the route for me" (Japan, 15)
 Autonomy               Perception, decision-making, control    "The robot could understand and help me. It could be my friend" (Greece, 15)
 Embodiment             Head, body, limbs, motion, speech       "The robot can carry disabled children to the school classroom" (Uganda, 12)
 Social embeddedness    Single user, small group, large group   "The robot can see what you can do with your friends" (Japan, 15)

geographical areas comes with certain challenges because of the various limitations regarding the local infrastructure
and internet connection. One of the solutions to address this issue is the development of low-cost hardware, upon which
we are currently working. As a starting point, we plan to use a hybrid device which combines a low-cost screen-based
representation of the Haru robot on a device with a limited number of sensors. From a methodological perspective, the fact that our knowledge of child-robot interaction research in rural areas of Africa is still limited required an increased number of iterations. Based on the method of story-telling and the protocols we developed for this study, we aim to
iterate with additional emerging aspects in relation to child’s rights and robotics. Our ambition is to support children
from under-represented and disadvantaged areas to have an active role in the development of AI and robotics that will
probably affect them in the future.

4 CONSIDERATIONS ON CHILDREN’S FUNDAMENTAL RIGHTS IN THE CONTEXT OF CHILD-ROBOT INTERACTION
4.1 Social robots for children
In this section, we relate insights from prior studies on social robot interaction with children to the United Nations
principles of children’s rights [91]. The consideration of ethically-aligned robots is not a new area for research and
policy. However, since citizens and legal entities will increasingly be subject to actions and decisions taken by or with
the assistance of AI systems, governmental institutions are pressing for the development of regulatory frameworks
for AI. The recently-published White Paper on AI of the European Commission [32] explains that it is important to
assess whether the current EU legislation can be enforced adequately to address the risks that AI systems create or
whether adjustments are needed to specific legal instruments. Towards this end, further initiatives such as the European Commission’s draft Artificial Intelligence Act have already started to consider possible directions for a regulatory
framework for AI. With a focus on children, UNICEF has recently indicated that although AI is a force for innovation, it
also poses risks for children and their rights. However, the majority of AI policies, strategies and guidelines make only
a cursory mention of children. To help fill this gap, UNICEF is currently exploring approaches to uphold children’s
rights in the context of AI and to create opportunities for children’s participation.
 Although UNICEF’s initiative, to our knowledge, is one of the first and most systematic efforts to consider children in AI policies, it offers guidelines that cover a wide range of AI-based technologies. However, embodied
AI and robots bring unique opportunities but also robot-specific risks for children; while this calls into question how
existing protection regimes might be applied, it is still unclear how rules for children’s protection and interaction with
robots should look. We therefore provide an initial discussion of how the United Nations Convention on the Rights of the Child (UNCRC) [90] could apply to interactions with social robots.
 The UNCRC covers all aspects of a child’s life and sets out the civil, political, economic, social and cultural rights that all children everywhere are entitled to. This Convention was adopted by the United Nations General Assembly in 1989 and
is the most widely ratified human rights treaty in the world. The convention consists of 54 articles, four of which play a
fundamental role in realising all the rights in the Convention for all children and are known as "General Principles".
These are the following:

 • Non-discrimination (article 2): All children should enjoy their rights and should never be subjected to any
   discrimination;
 • Best interests of the child (article 3): In all actions concerning children, whether undertaken by public or private
   social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the
   child shall be a primary consideration;
 • Right to life, survival and development (article 6): States Parties shall ensure to the maximum extent possible the
   survival and development of the child;
 • Right to be heard (article 12): States Parties shall assure to the child who is capable of forming his or her own
   views the right to express those views freely in all matters affecting the child, the views of the child being given
   due weight in accordance with the age and maturity of the child.

 Along with the UNCRC, we consider closely the UN General Comment 25 (UNGC25) [89] and UNICEF AI and child’s
rights documents [31], which were published more recently and in which we find a consensus in relation to the types
of children’s rights relevant to digital technologies and AI. Starting from the specific principles pointed out in these
documents, we discuss the risks and the opportunities presented by the use of social robots in relation to what we know
from the child-robot interaction (cHRI) literature.

4.2 Non-discrimination
The UNCRC states that children’s rights should be respected without discrimination based on their race, sex, language,
ethnic origin, and other attributes. The UN’s Comment 25 further defines non-discrimination as “equal and effective
access to digital technology” and also points to potential challenges to this principle due to algorithmic bias, exposure
to hateful and hurtful digital communications, and the potential for access to content that propagates negative racial,
gender, etc. stereotypes through digital technology.
 One way to address discrimination and embedded bias in robotic systems is to ensure diversity in the groups that are
involved in their design and development. This is hardly a radical notion. Yet, in the field of child-robot interaction, the
majority of empirical studies include participant children with certain common characteristics, such as their national
and/or cultural origins, while the field has only recently started to exhibit sensitivity to the inclusion of children from underrepresented populations, more diverse regions of the world, or more varied socio-economic and cultural backgrounds (e.g. [24]).
 Inclusion of children with diverse backgrounds and from various contexts in the design and evaluation of robots can help address algorithmic bias by providing more diverse data sets; a lack of diversity in data has recently been shown to create significant issues in digital systems that rely on it [18], including mis-identification of users who belong to groups less represented in the data. Robots already have certain perception-related biases in relation to children – in
general it can be more difficult for robots to perceive children’s speech and behaviours, and this can be even more challenging with specific groups of children (e.g. depending on the availability of the child’s language of interaction in the robot) [56].
 More inclusive design and evaluation can serve to ensure that children with diverse capabilities are able to participate
in interactions with robots by providing relevant interaction modalities. The design of robots that can perceive and produce sign language [1], and interact with and teach language to deaf children [75], as well as robots that can interact with and provide social scaffolding for neurodiverse children [27, 50], is a step in this direction. Robots can also provide
new ways for children with diverse capabilities to interact with social others and their environment – telepresence
robots can provide opportunities for greater social inclusion to children in remote communities [66], or can provide
mobility to those who might not be able to attend school in person due to illness (e.g. [68, 96]). In this way, robots can
be used to increase access and inclusion of more diverse groups of children.
 Social robot bias can also affect children through their physical presence, which may represent certain social
stereotypes in the robot’s appearance and behaviour. Researchers have shown that gender stereotyping is salient in
robots [33], even among children [65], and can have an effect on children’s own perception of gender stereotypes more generally [82]. Other forms of stereotyping, such as that based on race [10] or ethnicity [34], also occur with robots.
Incorporating social stereotypes in robot design can serve to support a robot’s function in the short term (e.g. a female
robot might be seen as more empathetic than a male-presenting robot), but such design shortcuts could also have
negative longer term effects on children’s understanding of people’s diverse capabilities and their own social gender
identity.
 When thinking of access, it is also important to consider the robot as a component in a broader socio-technical
infrastructure, which requires other technologies (e.g. wireless connection) and people (e.g. experts with both technical
and local knowledge) to use robots effectively in the child’s location. The inclusion of a broader range of locations and
child and adult experts in the design and development of robotic technologies is therefore imperative if robots are to
become a robust and widely available resource around the world.

4.3 Best interests of the child
In the design and deployment of robots, we need to prioritize how a robot’s embodiment, social presence, and other
aspects of robotics can best serve children’s needs, and also consider areas in which robots might inadvertently cause children to come to harm, or where their interactions with robots might prioritize others’ gain.
When considering the best interests of the child, the UN Comment 25 identifies the need for the child to “receive
appropriate information, and be protected from harm including through content blocking as needed". The UNGC25 discusses “protection from exploitation” of children using digital media, which includes concerns about digital media control by corporations; access to and use of data created through engagement with digital technologies; and the prohibition of neuro- and affective marketing to children. With this in mind, UNICEF suggests we need to establish “proportional and appropriate
limits to participation focused on harm reduction,” and ensure that there is “no limitation or punishment to children
based on views expressed.”
 We can consider how cHRI may lead to the collection of data on children’s behaviours, ideas, etc. by corporations through robots, and its use for their profit. Children may engage with robots in various types of contexts, including for
education, entertainment, and social interaction. During such interactions, it is possible to collect data on children’s
attitudes, beliefs, responses to different stimuli, and also to influence children’s behaviour and ideas. Children might be
particularly vulnerable to such influence as they might engage with robots more as a peer or social companion rather
than a technology, and divulge data that adults may not [15, 16]. To counter this possibility, it is important to think
of ways to limit the collection of data from children during everyday interactions, and regulate its use to prioritize
children’s benefit over that of other stakeholders.
 We should also consider the effects of errors in robot functioning when robots interact with children. The negative impacts of mis-identification or bias in interactions can be particularly acute in interactions with social robots, which engage children in affective and socially salient interactions. A robot that is not able to understand a child with a certain accent will not be able to provide equitable service to this child. It may also cause psychological harm, as the child might interpret the robot’s inability to interact with them as their own insufficiency or fault, rather than a technical glitch [88].
 It is also possible that a robot might give incorrect information or guidance to a child due to its own limited knowledge
capabilities. This calls us to engage with the potential limits of research on trust in cHRI. While a focus of much of
HRI design is on getting people to trust robots, it is particularly important to consider how we might allow children to
understand the limits and in some cases even limit their trust of the technology. In order to allow robots to be used with
a child’s best interests in mind, we should further consider developing a curriculum for teaching digital literacy with robots, so that children can better understand their interactions with these new technologies and place them appropriately in their lives.

4.4 Right to life, survival and development
The UN’s General Comment 25 explains that it is important to ensure children are protected from “inappropriate
content, online abuse and exploitation”, and that technology should not be used as a substitute for personal interaction.
It is also important to ensure the availability of appropriate use strategies for each stage of child development. UNICEF more
broadly suggests that we need to have mechanisms for assessing and continually monitoring the impact of AI systems
on children in AI policies and strategies. Digital technologies can promote children’s right to culture, leisure and play
by, according to UNGC25, providing “access to broader array of cultural resources” and an “attractive and engaging
medium for children.” The UNGC25 points out that such interactions need to involve “data protection, privacy and safety by design.”
 Children have been identified as a promising potential user group for robots because of their openness to novelty
and active imaginations, as well as their strong tendency to anthropomorphize, which can all support interactions with
robots. In designing robots, we must consider the potential harms that might come to a child due to the robot’s existence
between the world of the animate and the inanimate – its “third ontology” in the terminology of HRI researchers [54, 57]. Due to its design and embodiment, the robot might appear to the child to be animate, but might exhibit more
mechanistic behaviours, or lack certain responses or functions that are exhibited by animate beings. In terms of child
development, it is important that the robot’s design makes the machine nature of the robot clear to the child, taking into account potential effects on the child’s moral and social development [52, 53, 80].
 An area of significant development in cHRI is educational robotics, which pertains directly to a child’s “right to
education” as a means to their development. UNGC25 explains that there is a need to “extend educational programs
to include digital technology and related challenges and opportunities and make access to education more available”
through digital technology. Robotics as a type of AI system has several opportunities for use in such an expanded
view of children’s education at various ages. Robots themselves are being developed as education technology and a way for more diverse students to get engaged in STEM [14, 69]. Robots also figure in some research as a way for
children to access educational materials through a different mode of interaction, with the robot as a peer or tutor, or a
way to present educational materials in a more socially interactive and physically co-present way in comparison to
computers or iPad applications [63, 97]. As mentioned above, robots can also provide tele-education opportunities for
children to access remote locations through the robot. It is, of course, always important to consider the robot not as a
replacement for human instruction, even though robots have been shown to be more efficacious in learning outcomes
than non-embodied digital technologies, but as an addition to instructional interactions with peers and teachers.
 There is also the possibility that robots can help children expand their horizons. Robots as entertainment and
education tools can provide children with access to novel experiences of other places, cultures, and species [5, 44], or provide new forms of learning [22] and play [4, 101, 102]. Researchers have recently started thinking about how robots can be designed with a "growth mindset” in mind [30], and how they can support children’s creativity [45].
 However, as we develop such educational and entertainment cHRI, we should consider whether the inevitably limited
interaction capabilities of robots will end up limiting children’s autonomy and development in ways we cannot yet
foresee. For example, the design of a storytelling robot that displays only Western fairytales can restrict rather than
support diverse forms of play and cultural expression in cross-cultural communication, while a collaborative robotic
platform for storytelling that takes in input from children of many cultures can expand access and global understanding.
 In addition, we need to consider the impacts of robot bias towards children, not only of robots designed for children
to use but of robots that children might have access to in their homes, schools and public spaces more generally.
Voice-activated assistants already interact with children, but do not necessarily adapt the type of data they provide or collect to their younger users. If we consider the possibility of having autonomous cars in the future, how will they be able to interact with children, either as pedestrians on the street or as passengers, taking into consideration their own needs and perspectives (e.g. [23])? Lastly, it is also important to consider the need to support children’s autonomy throughout their interactions with robots, and to make sure that the robot does not end up in the role of task-master, preventing the child from procrastinating or creating their own leisure time and pacing.

4.5 Respect for the views of the child
The UN AI guidelines document suggests that the meaningful participation of children in the design of new technologies, and in the development of policies relating to AI technology design and use, needs to be supported. This type of
work should include, per the UNGC25, efforts to make sure children have equitable and age-appropriate access to
information: the "availability of information based on child’s stage of development, language and cultural background;
plurality of content in different formats; and diverse and not just commercial content". The UNGC25 also suggests that children should receive “training and support... to participate in digital interaction” and be able to appropriately and successfully express their views about and with the use of digital media.
 Researchers are increasingly engaging children in the design of robots. Work in STEM education provided early
results here, pointing to novel uses for robots that go beyond physical assistance and enable social interaction and
strengthen interpersonal ties [39, 81]. In our own educational research, we’ve seen the importance of providing children
with opportunities to adapt and personalize robots to their own interests and needs [43]. We have also seen the potential
of including children in the design of robots to give them a greater sense of self-efficacy and contribution to society, as
well as to help them learn the connections between social factors and technical design [44]. In one classroom in the
United States, children designed telepresence robots that could patrol hallways as a way to address concerns about
the rise of gun violence in American schools [42]. This suggests that children’s participation is important not only for improving the design of technology to better suit their needs, but also that such participatory activities have in their own right a potential benefit for the intellectual, mental, and social development of children. To support this type of
development, researchers have started identifying robot design and evaluation methods specifically suited to use with
children (e.g. [4, 6, 23, 102]).