[bohyemianote: May.16.2021] CHI 2021

CHI 2021, May 8–13 online

CHI 2021 (May 8–13) was held online this year due to COVID-19. Fortunately, even though I registered very late, in early May, the virtual format meant I could still participate. Despite some challenges, thanks to the support of my design team, AI Platform Design & Research at Microsoft, I registered successfully and learned a lot over the week. I was lucky!

This was my first time attending CHI, though I had previously attended InfoVis conferences, such as IEEE InfoVis 2019 in Vancouver. With that experience as a reference point, I began my (virtual) CHI 2021 journey on Sunday, May 9th.

During this journey, I focused mainly on content related to ML, AI, Explainable AI/Interpretable ML, Data Visualization, Personal Informatics, and more.

The opening keynote was “See What I Mean: Making Waves with the Blind” by Chieko Asakawa, an IBM Fellow and a distinguished professor at CMU whose research area is accessibility. In her keynote, she presented how technology has improved the quality of her life as a blind person. She said her biggest fear was losing independence, and that the two most inconvenient parts of her life were “inaccessible information and uncomfortable mobility.”

One of her accessibility challenges stemmed from mobile app buttons that lack labels. Without labels, her screen reader kept announcing “Button, Button, Button…” instead of identifying what she was selecting. The non-native English speakers in the audience, including myself, could indirectly relate to her difficulties through our own experiences with information and language barriers. Despite those similarities, her information barriers could be completely different from mine, which is why we need to learn directly from people about their perceptual differences. She also suggested ways for blind users to find problems and fix them as one direction for the topic “Toward Accessible Apps.” Another of her great projects is the AI Suitcase; I recall being amazed by the demo. I highly recommend watching her presentation for yourself.

Another inspiring keynote was “Which Humans? Innovation, Equity, and Imagination in Human-Centered Design” by Ruha Benjamin, a Professor of African American Studies at Princeton University. Her presentation had two sections. In the first, she covered questions and issues raised in the HCI community, the design field, and beyond. She explained that frictionless design can hide social frictions, citing minimalist design as posing similar challenges, along with the perspective of graphic designer Cheryl D. Miller, who compared dominant graphic design principles to forms of social oppression.

After introducing these cases, Ruha stated: “This is not just a personal interpretation unique to Miller, but started as a political intention to reinforce hierarchies which then became naturalized as simply good design over time”; “dominant designs hide and perpetuate the violent frictions of our world”; and “science and technology are powerful tools for naturalizing racist and sexist hierarchies.”

Watching this part of her presentation, mixed thoughts came to mind: “Are we really designing IT systems for humans, whether users, other audiences, or even designers and engineers?” and “Don’t we excessively and blindly pursue universalism, such as standardization, in developing and designing IT products while advocating user-friendly approaches or frictionless designs?”

In the second part, she began with the title “Racism distorts how we see and how we are seen.” In a section titled “Everyday anti-blackness,” she unveiled disturbing examples, including “Florida police caught using mug shots of black men for target practice” and “Both white and black preschool teachers are biased against black boys.” According to one of my recent readings, DATA FEMINISM (by Catherine D’Ignazio and Lauren F. Klein), “Black women are over 3 times more likely than white women to die from pregnancy or childbirth-related causes.” I was as surprised when I read that part of the book as when I saw the examples of “anti-blackness” in her lecture.

Transitioning to problems online, she also posed interesting questions: “How racist division continues to be naturalized… how we continue to perpetuate it and make it seem as if these are immutable characteristics of people,” and “Science and technology are powerful tools for naturalizing racist and sexist hierarchies.” Beyond these hidden cases, technology also fails to recognize the unique challenges of transgender individuals and other LGBT people, since it is often designed by those with a limited understanding of diversity along the lines of ethnicity and sexuality. Technology subtly enables this kind of discrimination.

There is still a long road ahead in answering all the questions about ethical concerns within the IT industry, including addressing the unique challenges of socially disadvantaged groups that modern technology overlooks. While we have made headway in some regards, there is still much room for improvement in raising awareness of this issue and educating people thoroughly about it.

Her recent book is RACE AFTER TECHNOLOGY. Personally, I would like to reread ALGORITHMS OF OPPRESSION (by Safiya Umoja Noble). Safiya’s 2018 lecture at the New Museum in New York was also inspiring to me; it raised many questions about how newer technologies, including search engines, can lead to wide-ranging oppression spanning race, gender, and sexuality in our society.

“How to Design AI to Work Together with People” was the topic of Sunday’s panel talk. The panelists were Dakuo Wang, Pattie Maes, Xiangshi Ren, Ben Shneiderman, Yuanchun Shi, and Qianying Wang, and they held an open debate with the audience. Due to a technical problem, I couldn’t enter the room. I hope to catch a similar discussion by the panel later, or the recording if CHI uploads it.

Besides keynotes and panel talks, there were many learning opportunities. Although I was unable to participate in this workshop, I recommend that interested readers look into the Explainable AI workshop, “Operationalizing Human-Centered Perspectives in Explainable AI”: https://hcxai.jimdosite.com/

“SIG: Special Interest Group on Visualization Grammars” was also interesting. I am quite a newbie on this topic, so this SIG was a great chance to learn about recent issues in visualization grammars. We covered questions such as “What is a visualization grammar?”, “What/how can we learn from real-world use and adoption of visualization grammars?”, “How do we balance formalism, ease & expressibility?”, and more. Here is the SIG link: https://sig-visgrammarnetlify.app/
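
For readers as new to the topic as I was: a visualization grammar describes a chart declaratively, as a mapping from data fields to visual channels, rather than as imperative drawing code. As a rough illustration (my own example, not from the SIG; the data and field names are made up), a minimal Vega-Lite spec for a bar chart might look like this:

```json
{
  "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
  "description": "Counts per category as a bar chart",
  "data": {
    "values": [
      {"category": "A", "count": 28},
      {"category": "B", "count": 55}
    ]
  },
  "mark": "bar",
  "encoding": {
    "x": {"field": "category", "type": "nominal"},
    "y": {"field": "count", "type": "quantitative"}
  }
}
```

Changing `"mark": "bar"` to `"point"`, or swapping the `x` and `y` encodings, yields a different chart from the same data, which hints at the expressibility the SIG questions were getting at.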

An integral part of CHI was the paper sessions: 10-minute presentations followed by discussion, across various topics. Among many outstanding works, the two Best Papers seemed especially worth reviewing.

[B] Designing Interactive Transfer Learning Tools for ML Non-Experts

https://youtu.be/bYG9EATbnsw https://dl.acm.org/doi/10.1145/3411764.3445096

[B] Understanding Data Accessibility for People with Intellectual and Developmental Disabilities

https://youtu.be/LZncJYXRY8U https://dl.acm.org/doi/10.1145/3411764.3445743

Furthermore, I came across a couple of COVID-19 projects. In particular, one paper looked interesting, and, although I am unsure whether my assumption is correct, I think the paper “Many Faced Hate: A Cross Platform Study of Content Framing and Information Sharing by Online Hate Groups” is best read in conjunction with this Honorable Mention paper:

[H] Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online

https://youtu.be/zVlwJQu8pRo https://dl.acm.org/doi/10.1145/3411764.3445211

If you go to the page below and register for free, you can access all videos and materials, organized along the CHI 2021 daily schedule: https://programs.sigchi.org/chi/2021

The lists below were selected based on my interests: ML, AI, Explainable AI/Interpretable ML, Data Visualization, Personal Informatics, and more. Feel free to refer to them for your post-CHI 2021 review!

Tags: [HS]: highly related to my job and interests; [B]: Best Paper; [H]: Honorable Mention

2021 CHI homepage

https://programs.sigchi.org/chi/2021

https://acmchi.delegateconnect.co/

[AI/ML] Human, ML & AI

[HS] Manipulating and Measuring Model Interpretability

https://youtu.be/OCYTLkQOV2E https://dl.acm.org/doi/10.1145/3411764.3445315

[HS] Engaging Teachers to Co-Design Integrated AI Curriculum for K-12 Classrooms

https://youtu.be/VBOXeOxA5Gw https://dl.acm.org/doi/10.1145/3411764.3445377

Towards Fairness in Practice: A Practitioner-Oriented Rubric for Evaluating Fair ML Toolkits

https://dl.acm.org/doi/10.1145/3411764.3445604

Effect of Information Presentation on Fairness Perceptions of Machine Learning Predictors

https://youtu.be/AGd8ik4k5Bw https://dl.acm.org/doi/10.1145/3411764.3445256

[AI/ML] Computational AI Development and Explanation

[HS] Data-Centric Explanations: Explaining Training Data of Machine Learning Systems to Promote Transparency

https://youtu.be/pURSOBUP0KY

https://dl.acm.org/doi/10.1145/3411764.3445736

[HS] Player-AI Interaction: What Neural Network Games Reveal About AI as Play

https://youtu.be/2ud2125rZwk https://programs.sigchi.org/chi/2021/program/content/47452

[HS] Assessing the Impact of Automated Suggestions on Decision Making: Domain Experts Mediate Model Errors but Take Less Initiative

https://youtu.be/iqahsetvD58 https://dl.acm.org/doi/10.1145/3411764.3445522

[HS] [H] Evaluating the Interpretability of Generative Models by Interactive Reconstruction

https://youtu.be/zbPeMT-ssXo https://dl.acm.org/doi/10.1145/3411764.3445296

[HS] [H] Expanding Explainability: Towards Social Transparency in AI systems

https://youtu.be/OkxvNoZyoDw https://dl.acm.org/doi/10.1145/3411764.3445188

[HS] Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs

https://youtu.be/CGbKmlTzRLI https://dl.acm.org/doi/10.1145/3411764.3445088

Method for Exploring Generative Adversarial Networks (GANs) via Automatically Generated Image Galleries

https://youtu.be/PvXMX7RIylI https://dl.acm.org/doi/10.1145/3411764.3445714

Human Reliance on Machine Learning Models When Performance Feedback is Limited: Heuristics and Risks

https://youtu.be/zMYS17AafA0 https://dl.acm.org/doi/10.1145/3411764.3445562

AutoDS: Towards Human-Centered Automation of Data Science

https://youtu.be/yXnUzM22Tps https://dl.acm.org/doi/10.1145/3411764.3445526

Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance

https://youtu.be/0k7FgDUIGjs https://dl.acm.org/doi/10.1145/3411764.3445717

Whither AutoML? Understanding the Role of Automation in Machine Learning Workflows

https://youtu.be/mmptWeCS4Bk https://dl.acm.org/doi/10.1145/3411764.3445306

[AI/ML] UX and Interaction Design and Research: Techniques, Insights & Prototyping / Reflection, Behavior, Change & Learning

[HS][B] Designing Interactive Transfer Learning Tools for ML Non-Experts

https://youtu.be/bYG9EATbnsw https://dl.acm.org/doi/10.1145/3411764.3445096

[AI/ML] Case Studies: HCI in Practice

[HS] Towards Explainable AI: Assessing the Usefulness and Impact of Added Explainability Features in Legal Document Summarization

https://youtu.be/9zZLmi8Wrmo https://dl.acm.org/doi/10.1145/3411763.3443441

[HS] AI Trust Score: A User-Centered Approach to Building, Designing, and Measuring the Success of Intelligent Workplace Features

https://youtu.be/PWUCEowum8o https://dl.acm.org/doi/10.1145/3411763.3443452

[VIS] Novel Visualization Techniques

[HS] Quantitative Data Visualisation on Virtual Globes

https://youtu.be/YChLrxlL8ss https://dl.acm.org/doi/10.1145/3411764.3445152

[HS][H] Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction

https://youtu.be/KAjCiMAKf4I https://dl.acm.org/doi/10.1145/3411764.3445421

[HS] Datamations: Animated Explanations of Data Analysis Pipelines

https://youtu.be/d0mn9iGDKbo https://dl.acm.org/doi/10.1145/3411764.3445063

Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations

https://youtu.be/CYRVJFBLmH0 https://dl.acm.org/doi/10.1145/3411764.3445400

It’s a Wrap: Toroidal Wrapping of Network Visualisations Supports Cluster Understanding Tasks

https://youtu.be/VwP9Lb60qZU https://dl.acm.org/doi/10.1145/3411764.3445439

[VIS] Understanding Visualizations

[HS][H] Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online

https://youtu.be/zVlwJQu8pRo https://dl.acm.org/doi/10.1145/3411764.3445211

[HS] Mapping the Landscape of COVID-19 Crisis Visualizations

https://youtu.be/GaUQPf9pNkQ https://dl.acm.org/doi/10.1145/3411764.3445381

User Ex Machina: Simulation as a Design Probe in Human-in-the-Loop Text Analytics

https://youtu.be/1VVkQ7pSc3k https://dl.acm.org/doi/10.1145/3411764.3445425

[H] Fits and Starts: Enterprise Use of AutoML and the Role of Humans in the Loop

https://youtu.be/1ftEYAMGVVY https://dl.acm.org/doi/10.1145/3411764.3445775

Understanding Narrative Linearity for Telling Expressive Time-Oriented Stories

https://youtu.be/hRQlRshA8OA https://dl.acm.org/doi/10.1145/3411764.3445344

[B] Understanding Data Accessibility for People with Intellectual and Developmental Disabilities

https://youtu.be/LZncJYXRY8U
https://dl.acm.org/doi/10.1145/3411764.3445743

Does Interaction Improve Bayesian Reasoning with Visualization?

https://youtu.be/58WFIWHQ2CQ https://dl.acm.org/doi/10.1145/3411764.3445176

[VIS] Designing Effective Visualizations

[HS] Learning to Automate Chart Layout Configurations Using Crowdsourced Paired Comparison

https://youtu.be/MqW_5b_R-jw https://dl.acm.org/doi/pdf/10.1145/3411764.3445179

[HS] Data Animator: Authoring Expressive Animated Data Graphics

https://youtu.be/W7xt4A_NE_0 https://dl.acm.org/doi/pdf/10.1145/3411764.3445747

[HS] mTSeer: Interactive Visual Exploration of Models on Multivariate Time-series Forecast

https://youtu.be/ET25ixDYhgs https://dl.acm.org/doi/10.1145/3411764.3445083

Leveraging Text-Chart Links to Support Authoring of Data-Driven Articles with VizFlow

https://youtu.be/qCb3aw7pQXo https://dl.acm.org/doi/pdf/10.1145/3411764.3445354

Integrated Visualization Editing via Parameterized Declarative Templates

https://youtu.be/0jI6ADaPNdQ https://dl.acm.org/doi/10.1145/3411764.3445356

ConceptScope: Organizing and Visualizing Knowledge in Documents based on Domain Ontology

https://youtu.be/x-n3iC-9bsk https://dl.acm.org/doi/10.1145/3411764.3445396

[Personal Data&Health/UX/Perception/Journalism] Personal Health Data

“They don’t always think about that”: Translational Needs in the Design of Personal Health Informatics Applications

https://youtu.be/L255Qb8aqbA https://dl.acm.org/doi/10.1145/3411764.3445587

[Personal…] Health & Behavior Change

Self-E: Smartphone-Supported Guidance for Customizable Self-Experimentation

https://youtu.be/KQwozp6sOJc https://dl.acm.org/doi/10.1145/3411764.3445100

[Personal…] Combining Digital and Analogue Presence in Online Work

Journalistic Source Discovery: Supporting The Identification of News Sources in User Generated Content

https://youtu.be/KAlvX8P2clg https://dl.acm.org/doi/10.1145/3411764.3445266

[Personal…] Health, Communication, and Social Life

Do Politicians Talk about Politics? Assessing Online Communication Patterns of Brazilian Politicians

https://youtu.be/oLVSZb31q7

https://dl.acm.org/doi/10.1145/3412326

[Personal…] Video, XR, Perception, & Visualization

[H] From Detectables to Inspectables: Understanding Qualitative Analysis of Audiovisual Data

https://youtu.be/QM8SmArEOBM https://dl.acm.org/doi/10.1145/3411764.3445458

Hyemi Song, Sr. Designer (Data Vis)@Microsoft, Bohyemian Lab / Former Data Vis. Specialist@MIT Senseable City Lab, UX designer@Naver, MFA@RISD hyemisong.com