
Affective conveyance assessment of AI-generative static visual user interfaces based on valence-arousal emotion model

  • Jing Chen
  • Huimin Tao
  • Jiahui Wu
  • Quanjingzi Yuan
  • Lin Ma
  • Dengkai Chen
  • Mingjiu Yu

Research output: Contribution to journal › Article › peer-review

Abstract

Generative AI can rapidly create user interfaces (UIs) with distinct emotional tones, yet few studies rigorously test how effectively such UIs convey emotion. Using the Valence–Arousal (VA) framework, we prompted generative AI to produce 40 static visual UIs targeting specific emotions and evaluated them with a mixed-methods protocol in which participants completed Check-All-That-Apply (CATA) descriptors while eye-tracking recorded saccade speed and pupil diameter. Analyses showed that UIs generated from different prompts formed three perceptual categories—positive valence, negative/high arousal, and negative/low arousal—with partial overlap between positive prompts (e.g., “Delighted” and “Relaxed”) and clearer distinctions for negative prompts (“Alarmed”, “Bored”), a pattern mirrored by differences in scanning speed. These findings indicate that AI-generated UIs can embed meaningful affective cues that shape how users feel when viewing on-screen elements, and the combination of subjective and physiological measures offers a practical framework for emotion-focused UI evaluation while motivating further work on refining prompt specificity, incorporating diverse emotion models, and testing broader user demographics.

Original language: English
Article number: 103261
Journal: Displays
Volume: 91
State: Published - Jan 2026

Keywords

  • Affective conveyance assessment
  • Emotion model
  • Generative AI
  • User interface

