An Enriched Emoji Picker to Improve Accessibility in Mobile Communications - Maria Teresa Paratore
An Enriched Emoji Picker to Improve Accessibility in Mobile Communications
Maria Teresa Paratore
mariateresa.paratore@isti.cnr.it, claudia.buzzi@iit.cnr.it, marina.buzzi@iit.cnr.it, barbara.leporini@isti.cnr.it
Bari, 09-03-2021
Computer-Mediated Communication
Computer-Mediated Communication (CMC) is defined as a form of human communication carried out through networks of computers.
▪ CMC may be synchronous (instant-messaging applications) or asynchronous (email, social media, blogs)
▪ It has become pervasive and ubiquitous (it replaced most face-to-face interactions during the Covid-19 pandemic)
People who cannot access, or have limited access to, this kind of communication suffer limitations in their social life (school, work, leisure, etc.)
Textual CMC
Pros:
▪ Keeping in touch with colleagues, friends and relatives
▪ Synchronous or asynchronous exchange of information
▪ Ubiquitous
Cons:
▪ Lack of empathy, which is the main drawback with respect to face-to-face communication
Adding Empathy to Text
▪ CAPITALIZATION to simulate a loud voice
▪ Typing nonverbal interjections (ehm, uh-oh, aargh!)
▪ Voluntary delays in interactions
▪ Emoticons: :-) :-( ;-)
▪ Emojis
Emoticons are sequences of characters and are NOT subject to any standardization; emojis are single-character pictographs, supervised and standardized.
Emojis’ Unicode Classification – Rel. 13.1
Smileys & Emotions: 156
People & Body: 2049
Components: 9
Animals & Nature: 140
Food & Drink: 129
Travel & Places: 215
Activities: 84
Objects: 250
Symbols: 220
Flags: 269
Total: 3521 emojis – 156 directly related to emotions
Emojis - Main Problems
▪ The huge number of emojis makes it difficult to search for a specific one
▪ Redundancy (i.e. many similar pictographs describing the same emotional state)
▪ Different renderings on different platforms/applications, leading to ambiguities between sender and receiver
▪ Misleading textual definitions, leading to ambiguities when using a screen reader
Encoding & Rendering – Some Samples
[Windows 10 and Twitter renderings omitted]
Unicode character | Description
U+1F642 | slightly smiling face
U+1F60A | smiling face with smiling eyes
U+263A  | smiling face
U+1F603 | grinning face with big eyes
U+1F600 | grinning face
U+1F601 | beaming face with smiling eyes
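As an aside, the code point and its Unicode description are platform-independent; only the pictograph drawn on screen differs between vendors. A minimal Kotlin sketch (sample values taken from the table above) that builds the emoji strings directly from their code points:

```kotlin
// Minimal sketch: building emoji strings from their Unicode code points.
// The code point and its official description are fixed by the standard;
// only the rendered pictograph depends on the platform's emoji font.
fun emojiFromCodePoint(codePoint: Int): String =
    String(Character.toChars(codePoint))

fun main() {
    val samples = mapOf(
        0x1F642 to "slightly smiling face",
        0x1F60A to "smiling face with smiling eyes",
        0x1F601 to "beaming face with smiling eyes"
    )
    for ((cp, description) in samples) {
        println("U+%X  %s  %s".format(cp, emojiFromCodePoint(cp), description))
    }
}
```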
Picking an Emoji – Usability and Accessibility
Use case: adding an “emotional” emoji while typing a tweet.
• The user has to browse among a huge number of images arranged on a grid
• A screen reader will read the default description associated with each emoji
• Default textual descriptions may be misleading or confusing (e.g. the Unicode description “Face with Look of Triumph”)
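On Android, the label announced by TalkBack comes from a view's content description; a hypothetical sketch of replacing a confusing default with an emotion-oriented label (the function and label names are illustrative, not taken from the actual picker):

```kotlin
import android.widget.ImageButton

// Hypothetical sketch: override the default label read by TalkBack with an
// emotion-oriented one, so the screen reader announces e.g. "angry" rather
// than a confusing Unicode name such as "face with look of triumph".
fun labelEmojiButton(button: ImageButton, emotionLabel: String) {
    // TalkBack reads contentDescription when the view gains accessibility focus.
    button.contentDescription = emotionLabel
}
```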
Developing a Model for a Novel Emoji Picker
Our goals:
▪ Simplify the process of expressing emotions through emojis
▪ Improve accessibility for users with visual impairments
▪ Reduce communication ambiguities (for any kind of user)
The model combines theories of emotions, a spatial GUI and auditory cues.
Theory of Emotions - Spatial Model
12-Point Circumplex Model of Affect (12-PAC)
X axis: valence (pleasure)
Y axis: arousal (excitement)
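To make the spatial idea concrete, a minimal Kotlin sketch that derives a widget position from an emotion's angle on the valence/arousal circumplex; the angles, radius and evenly spaced layout are illustrative assumptions, not the picker's actual values:

```kotlin
import kotlin.math.cos
import kotlin.math.sin

// Minimal sketch of the circumplex layout: each emotion occupies an angle
// in the valence/arousal plane, and its on-screen position is derived from
// that angle. Angle values here are illustrative only.
data class CircumplexPosition(val x: Float, val y: Float)

fun positionOnCircumplex(
    angleDegrees: Double,
    radius: Float,
    centerX: Float,
    centerY: Float
): CircumplexPosition {
    val radians = Math.toRadians(angleDegrees)
    val x = centerX + radius * cos(radians).toFloat()  // valence axis (pleasure)
    val y = centerY - radius * sin(radians).toFloat()  // arousal axis; screen Y grows downward
    return CircumplexPosition(x, y)
}

fun main() {
    // Example: 12 evenly spaced slots, 30 degrees apart.
    for (slot in 0 until 12) {
        val p = positionOnCircumplex(slot * 30.0, radius = 100f, centerX = 0f, centerY = 0f)
        println("slot $slot -> (%.1f, %.1f)".format(p.x, p.y))
    }
}
```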
Choosing Sounds for the 12 Core Emotions
In non-verbal communication, the basic (primary) emotions are conveyed by six well-defined facial expressions: anger, disgust, fear, sadness, surprise, happiness.
• The Ekman Faces are a validated set of photographs that represent “universal” (i.e. cross-cultural) emotions by means of facial expressions.
Choosing Sounds for the 12 Core Emotions
The Montreal Affective Voices (MAV) – auditory counterparts of the Ekman Faces:
• Eight basic emotions: anger, disgust, fear, pain, sadness, surprise, happiness and sensual pleasure
• Validated set of 90 audio samples (8 emotions + 1 neutral burst, spoken by 10 different actors)
Multi-dimensional Semantic Space (Cowen et al.):
• Space of 24 semantic dimensions
• 2032 vocal bursts classified in an interactive map
Interactive map of vocal bursts: https://s3-us-west-1.amazonaws.com/vocs/map.html#
Choosing Sounds for the 12 Core Emotions
For each gender, 8 vocalizations from the MAV model (surprised, satisfied, serene, relaxed, unhappy, disgusted, upset, scared), plus, to complete the 12-PAC model, 4 vocal bursts extracted from the samples provided by Cowen’s research (excited, elated, fatigued, gloomy).
The Resulting Model: Spatial Distribution & Audio Stimuli
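A minimal sketch of how each core emotion could be tied to both its circumplex slot and a vocal burst. The emotion names come from the previous slide; the angle values, resource ids and the use of Android's SoundPool are illustrative assumptions, not the picker's actual implementation:

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.SoundPool

// Illustrative sketch: each of the 12 core emotions is associated with a
// slot on the circumplex (an angle) and a vocal-burst audio resource.
// Angle values and resource ids are placeholders.
data class CoreEmotion(val name: String, val angleDegrees: Double, val soundResId: Int)

class EmotionAudioCues(context: Context, emotions: List<CoreEmotion>) {
    private val soundPool = SoundPool.Builder()
        .setMaxStreams(1)
        .setAudioAttributes(
            AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_ACCESSIBILITY)
                .build()
        )
        .build()

    // Map each emotion name to the id returned by SoundPool.load().
    private val loadedIds: Map<String, Int> =
        emotions.associate { it.name to soundPool.load(context, it.soundResId, 1) }

    // Play the vocal burst for an emotion, e.g. when its sector gains focus.
    fun playCueFor(emotionName: String) {
        loadedIds[emotionName]?.let { soundPool.play(it, 1f, 1f, 1, 0, 1f) }
    }
}
```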
The Resulting Widget
Testing the Model - Android GUIs 1/2
Testing the Model - Android GUIs 2/2
Testing the Model - Questionnaire
• SUS standard questionnaire
• Additional questions on the model:
  a. The position of the emotions on the screen helped me find what I wanted to pick (Q1)
  b. The audio cues (exclamations) helped me find what I wanted to pick (Q2)
  c. The audio cues (exclamations) gave a good description of the emotions (Q3)
• Demographic questions (gender, age, level of education and whether participants had visual impairments)
Both the SUS and the model questionnaire used a 5-point Likert scale.
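For reference, SUS responses on the 5-point scale are converted to a 0-100 score with the standard SUS formula; a minimal sketch (the example responses are made up for illustration, not data from the study):

```kotlin
// Standard SUS scoring: 10 items answered on a 5-point scale
// (1 = strongly disagree, 5 = strongly agree).
// Odd items contribute (response - 1), even items (5 - response);
// the sum is multiplied by 2.5 to obtain a 0-100 score.
fun susScore(responses: List<Int>): Double {
    require(responses.size == 10) { "SUS has exactly 10 items" }
    val sum = responses.mapIndexed { index, response ->
        if (index % 2 == 0) response - 1   // items 1, 3, 5, 7, 9
        else 5 - response                  // items 2, 4, 6, 8, 10
    }.sum()
    return sum * 2.5
}

fun main() {
    // Illustrative responses only.
    val example = listOf(4, 2, 5, 1, 4, 2, 5, 2, 4, 1)
    println("SUS score: ${susScore(example)}")  // prints 85.0
}
```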
Results – SUS Scores
Evaluation group         | M     | SD
Sighted (N=14)           | 79.46 | 21.6
Visually impaired (N=10) | 69.25 | 18.41
Results – Model Evaluation
Q1 - The position of the emotions on the screen helped me find what I wanted to pick
Q2 - The audio cues (exclamations) helped me find what I wanted to pick
Q3 - The audio cues (exclamations) gave a good description of the emotions
• Positive overall evaluation: both sighted and visually impaired participants rated all three aspects significantly above the midpoint of the Likert scale (neither agree nor disagree)
• No significant differences between sighted and visually impaired participants in the ratings on Q1, Q2 and Q3
• Q1 is affected by gender (women more positive than men)
• Other demographic characteristics do not affect the overall results
• No significant differences due to age, but the number of participants in each age group was probably not sufficient for a robust analysis
Conclusions
Usability: the values obtained for the SUS score are encouraging, since the results collected showed that both sighted and visually impaired participants rated the emoji picker’s usability as more than acceptable.
Model evaluation: the evaluation of the spatial and auditory model was positive for both sighted and visually impaired users.
Future Work
• Exporting our testing app to the iOS platform will enable us to recruit more testers and to compare different assistive technologies (i.e., TalkBack vs VoiceOver).
• Ambiguities in the interpretation of emojis may persist on the recipient’s side; this topic will be a subject of our research in the near future (e.g. a recipient-side “translation layer”).
• Finding effective solutions to express the intensity of emotions and adding more customization options (e.g., personalized associations between emotions and vocal bursts).
Thank you! Any questions?
mariateresa.paratore@isti.cnr.it
https://play.google.com/store/apps/details?id=it.cnr.iit.emojipicker