BIG DATA & SOCIETY, vol. 13, no. 1, 2026 (SSCI, Scopus)
The article addresses the age-based biases embedded in artificial intelligence systems (AI ageism) from a sociotechnical perspective. Building on studies of digital ageism and research on the sociocultural imprint of generative AI, we interviewed five popular generative AI chatbots as communicative partners to expose and question potential ageist and sexist stereotypes (namely, those surrounding digital users) embedded in and reproduced by these systems, using sexism as a touchstone for comparison with ageism. Results show that the chatbots operate under two double standards. The first concerns the "political correctness" to which they may have been socialized: the chatbots avoid (digital) sexism but not (digital) ageism, probably because of different levels of cultural sensitivity among designers, trainers, and users, or as a reproduction of biases in the training data. The second concerns the chatbots' utilities, which are connoted differently depending on users' age and gender, demonstrating how ageist and sexist stereotypes interfere with the way generative AI systems organize and reproduce information about human sociality.