[bohyemianote: May.16.2021] CHI 2021
CHI 2021 (May 8 to 13) was held online this year due to COVID-19. Fortunately, although I registered very late, in early May, I was still able to participate because the conference was virtual. Despite some challenges, and thanks to the support of my design team, AI Platform Design & Research at Microsoft, I registered successfully and learned a lot over the past week. I was lucky!
This was my first time participating in CHI, even though I had previously attended visualization conferences such as IEEE InfoVis 2019 in Vancouver. With my experience at InfoVis as a reference, I started my (virtual) journey at CHI 2021 on Sunday, May 9th.
During this journey, I mainly focused on content related to ML, AI, Explainable AI/Interpretable ML, Data Visualization, Personal Informatics, and more.
The opening keynote, “See What I Mean: Making Waves with the Blind,” was delivered by Chieko Asakawa, an IBM Fellow and a distinguished professor at CMU whose research area is accessibility. In her keynote, she presented how technology has improved the quality of her life as a blind person. She said her biggest fear was losing independence, and that the two most inconvenient parts of her life were “inaccessible information and uncomfortable mobility.”
One of her accessibility challenges resulted from unlabeled buttons in mobile apps. Because of the missing labels, her screen reader kept saying “Button, Button, Button…” instead of identifying what she was selecting. The non-native English speakers in the audience, including myself, could indirectly relate to her difficulties through our own experiences with information and language barriers. Despite some similarities, though, her information barriers could be completely different from mine, which is why we need to learn directly from people about their perceptual differences. She also suggested “ways to find problems and fix them by blind users” as one direction for the topic “Toward Accessible Apps.” Another of her great projects is the AI Suitcase; I recall being amazed by the demo. I highly recommend viewing her presentation for yourself.
Another inspiring keynote was “Which Humans? Innovation, Equity, and Imagination in Human-Centered Design” by Ruha Benjamin, a Professor of African American Studies at Princeton University. Her presentation was broken into two sections. In the first, she covered questions and issues raised in the HCI community, the design field, and beyond. She explained that frictionless design can hide social frictions, citing issues in minimalist design as a similar challenge, along with the perspective of graphic designer Cheryl D. Miller, who compared dominant graphic design principles to forms of social oppression.
After introducing these cases, Ruha stated: “This is not just a personal interpretation unique to Miller, but started as a political intention to reinforce hierarchies which then became naturalized as simply good design over time”; “dominant designs hide and perpetuate the violent frictions of our world”; and “science and technology are powerful tools for naturalizing racist and sexist hierarchies.”
As I watched this part of her presentation, mixed thoughts came to mind, such as: “Are we really designing IT systems for humans, whether users, other audiences, or even the designers and engineers themselves?” and “Don’t we excessively and blindly pursue universalism, such as standardization, in developing and designing IT products by advocating user-friendly approaches or frictionless designs?”
In the second part, she began a story titled “Racism distorts how we see and how we are seen.” In a section called “Everyday anti-blackness,” she unveiled disturbing examples, including “Florida police caught using mug shots of black men for target practice” and “Both white and black preschool teachers are biased against black boys.” According to one of my recent readings, DATA FEMINISM (by Catherine D’Ignazio and Lauren F. Klein), “Black women are over 3 times more likely than white women to die from pregnancy or childbirth-related causes.” I was as surprised when I read that part of the book as when I saw the examples of “anti-blackness” in her lecture.
Transitioning to problems online, she also posed thought-provoking questions: “How does racist division continue to be naturalized? How do we continue to perpetuate it and make it seem as if these are immutable characteristics of people?” Beyond such overt cases, technology also fails to recognize the unique challenges of transgender individuals and other LGBT people, since it is often designed by those with a limited understanding of diversity along the lines of ethnicity, gender, and sexuality. Technology subtly enables this kind of discrimination.
There is still a long road ahead in answering ethical questions within the IT industry and in addressing the unique challenges of socially disadvantaged groups that modern technology overlooks. While we have made headway in some regards, there is still much room for improvement in raising awareness of these issues and educating people about them thoroughly.
Her recent book is RACE AFTER TECHNOLOGY. Personally, I would like to reread ALGORITHMS OF OPPRESSION (by Safiya Umoja Noble). Noble’s 2018 lecture at the New Museum in New York was also inspiring to me; it raised many questions about how newer technologies, including search engine systems, can lead to wide-ranging oppression spanning race, gender, and sexuality in our society.
“How to Design AI to Work Together with People” was the topic of the panel talk on Sunday. The panelists, Dakuo Wang, Pattie Maes, Xiangshi Ren, Ben Shneiderman, Yuanchun Shi, and Qianying Wang, held an open discussion with the audience. Due to a technical problem, I couldn’t enter the room. I hope to attend a similar panel discussion later, or to watch the recording if CHI uploads it.
Besides the keynotes and panel talks, there were many learning opportunities. Although I was unable to participate in this workshop myself, I would recommend that people interested in Explainable AI look up “Operationalizing Human-Centered Perspectives in Explainable AI”: https://hcxai.jimdosite.com/
“SIG: Special Interest Group on Visualization Grammars” was also interesting. I am quite a newbie on this topic, so this SIG was a great chance to learn about current issues around visualization grammars. We covered questions such as “What is a visualization grammar?”, “What and how can we learn from real-world use and adoption of visualization grammars?”, “How do we balance formalism, ease, and expressibility?”, and more. Here is the SIG link: https://sig-visgrammar.netlify.app/
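For readers as new to this as I was, the core idea of a visualization grammar is that you declaratively describe *what* to show (marks and encodings), and an interpreter decides *how* to draw it. Here is a toy sketch of that idea; this mini-grammar is made up for illustration and is not Vega-Lite or any real system:

```python
# Toy illustration of the "visualization grammar" idea: a declarative spec
# names a mark type and field encodings; an interpreter renders it.
# (Hypothetical mini-grammar, not any real visualization library.)

data = [{"category": "A", "value": 3}, {"category": "B", "value": 7}]

spec = {
    "mark": "bar",
    "encoding": {"x": "category", "y": "value"},
}

def render_ascii(spec, data):
    """Interpret the spec: draw one ASCII bar per data row."""
    x_field = spec["encoding"]["x"]
    y_field = spec["encoding"]["y"]
    lines = []
    for row in data:
        lines.append(f'{row[x_field]} | {"#" * row[y_field]}')
    return "\n".join(lines)

print(render_ascii(spec, data))
# A | ###
# B | #######
```

Real grammars such as Vega-Lite work on the same principle, but with far richer marks, scales, and composition rules, which is exactly where the SIG’s questions about formalism versus ease come in.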
An integral part of CHI was the paper sessions, which consisted of 10-minute presentations followed by discussion on various topics. Among many outstanding works, these two Best Papers seemed especially worth reviewing:
[B] Designing Interactive Transfer Learning Tools for ML Non-Experts
[B] Understanding Data Accessibility for People with Intellectual and Developmental Disabilities
Furthermore, I noticed a couple of COVID-19 projects, and this paper in particular looked interesting. I am unsure whether my assumption is correct, but I suspect “Many Faced Hate: A Cross Platform Study of Content Framing and Information Sharing by Online Hate Groups” is best read in conjunction with this Honorable Mention paper:
[H] Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online
If you go to the page below and register for free, you can access all videos and materials, organized along the CHI 2021 daily schedule: https://programs.sigchi.org/chi/2021
The lists below were selected based on my interests: ML, AI, Explainable AI/Interpretable ML, Data Visualization, Personal Informatics, and more. Feel free to refer to them for your own post-CHI 2021 review!
Tags: [HS]: highly related to my job and interests, [B]: Best Paper, [H]: Honorable Mention
2021 CHI homepage
[AI/ML] Human, ML & AI
[HS] Manipulating and Measuring Model Interpretability
[HS] Engaging Teachers to Co-Design Integrated AI Curriculum for K-12 Classrooms
Towards Fairness in Practice: A Practitioner-Oriented Rubric for Evaluating Fair ML Toolkits
Effect of Information Presentation on Fairness Perceptions of Machine Learning Predictors
[AI/ML] Computational AI Development and Explanation
[HS] Data-Centric Explanations: Explaining Training Data of Machine Learning Systems to Promote Transparency
[HS] Player-AI Interaction: What Neural Network Games Reveal About AI as Play
[HS] Assessing the Impact of Automated Suggestions on Decision Making: Domain Experts Mediate Model Errors but Take Less Initiative
[HS] [H] Evaluating the Interpretability of Generative Models by Interactive Reconstruction
[HS] [H] Expanding Explainability: Towards Social Transparency in AI systems
[HS] Beyond Expertise and Roles: A Framework to Characterize the Stakeholders of Interpretable Machine Learning and their Needs
Method for Exploring Generative Adversarial Networks (GANs) via Automatically Generated Image Galleries
Human Reliance on Machine Learning Models When Performance Feedback is Limited: Heuristics and Risks
AutoDS: Towards Human-Centered Automation of Data Science
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance
Whither AutoML? Understanding the Role of Automation in Machine Learning Workflows
[AI/ML] UX and Interaction Design and Research: Techniques, Insights & Prototyping / Reflection, Behavior, Change & Learning
[HS][B] Designing Interactive Transfer Learning Tools for ML Non-Experts
[AI/ML] Case Studies: HCI in Practice
[HS] Towards Explainable AI: Assessing the Usefulness and Impact of Added Explainability Features in Legal Document Summarization
[HS] AI Trust Score: A User-Centered Approach to Building, Designing, and Measuring the Success of Intelligent Workplace Features
[VIS] Novel Visualization Techniques
[HS] Quantitative Data Visualisation on Virtual Globes
[HS][H] Data@Hand: Fostering Visual Exploration of Personal Data on Smartphones Leveraging Speech and Touch Interaction
[HS] Datamations: Animated Explanations of Data Analysis Pipelines
Collecting and Characterizing Natural Language Utterances for Specifying Data Visualizations
It’s a Wrap: Toroidal Wrapping of Network Visualisations Supports Cluster Understanding Tasks
[VIS] Understanding Visualizations
[HS][H] Viral Visualizations: How Coronavirus Skeptics Use Orthodox Data Practices to Promote Unorthodox Science Online
[HS] Mapping the Landscape of COVID-19 Crisis Visualizations
User Ex Machina: Simulation as a Design Probe in Human-in-the-Loop Text Analytics
[H] Fits and Starts: Enterprise Use of AutoML and the Role of Humans in the Loop
Understanding Narrative Linearity for Telling Expressive Time-Oriented Stories
[B] Understanding Data Accessibility for People with Intellectual and Developmental Disabilities
Does Interaction Improve Bayesian Reasoning with Visualization?
[VIS] Designing Effective Visualizations
[HS] Learning to Automate Chart Layout Configurations Using Crowdsourced Paired Comparison
[HS] Data Animator: Authoring Expressive Animated Data Graphics
[HS] mTSeer: Interactive Visual Exploration of Models on Multivariate Time-series Forecast
Leveraging Text-Chart Links to Support Authoring of Data-Driven Articles with VizFlow
Integrated Visualization Editing via Parameterized Declarative Templates
ConceptScope: Organizing and Visualizing Knowledge in Documents based on Domain Ontology
[Personal Data&Health/UX/Perception/Journalism] Personal Health Data
“They don’t always think about that”: Translational Needs in the Design of Personal Health Informatics Applications
[Personal…] Health & Behavior Change
Self-E: Smartphone-Supported Guidance for Customizable Self-Experimentation
[Personal…] Combining Digital and Analogue Presence in Online Work
Journalistic Source Discovery: Supporting The Identification of News Sources in User Generated Content
[Personal…] Health, Communication, and Social Life
Do Politicians Talk about Politics? Assessing Online Communication Patterns of Brazilian Politicians
[Personal…] Video, XR, Perception, & Visualization
[H] From Detectables to Inspectables: Understanding Qualitative Analysis of Audiovisual Data