Entangled Futures of Languages and Technologies: Speculative and Inclusive Approaches to AI and Linguistic Diversity
Mariam Nadirashvili
UC Riverside
Spanish
This project examines the speculative potential of large language models (LLMs) to engage with the epistemologies embedded in minoritized languages, leveraging those frameworks for new forms of knowledge generation. By integrating perspectives from SOCALAB (Spanish for California) at UC Riverside, the Department of Speculative Trainers at the Center for Artificial Intelligence and Experimental Futures (CAIEF) at UC Davis, and the Center for the Humanities and Machine Learning (HUML) at UC Santa Barbara, the initiative reimagines LLMs as co-speculative companions that could overcome biases and generate knowledge that draws on the different ways knowledge is represented across languages. Through advanced prompting strategies, interdisciplinary collaboration, and a focus on multilingual scales (e.g., languages with few speakers such as Danish, languages undergoing recovery such as Asturian, minoritized languages such as Spanish in the US, and languages with meaning-based characters such as Chinese), the project explores how LLMs can be used to address global challenges such as climate change and social inequities, fostering new methodologies and epistemic horizons from the perspectives of both diverse LLM users and machine learning processes.
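As one illustration of the kind of cross-lingual prompting the project envisions, the sketch below poses the same speculative question in several of the languages mentioned above and compares a multilingual model's responses. The model (bigscience/bloom-560m), the prompts, and the sampling settings are illustrative assumptions, not part of the project's actual protocol.

```python
from transformers import pipeline

# Minimal sketch: ask the same speculative question in several languages
# and compare how a multilingual model frames its answers.
# Model choice and prompts are illustrative assumptions only.
generator = pipeline("text-generation", model="bigscience/bloom-560m")

prompts = {
    "Spanish (US)": (
        "Imagina una ciudad de California en 2060 que ya no depende del "
        "lucro climático. Describe un día allí."
    ),
    "Danish": (
        "Forestil dig en dansk kystby i 2060 uden klimaprofit. "
        "Beskriv en dag der."
    ),
    "Chinese": "想象2060年一个不再依赖气候牟利的城市，描述那里的一天。",
}

for language, prompt in prompts.items():
    # Sample a short continuation for each language and print it side by side.
    result = generator(prompt, max_new_tokens=100, do_sample=True, temperature=0.9)
    print(f"--- {language} ---")
    print(result[0]["generated_text"])
```

Comparing the continuations side by side is one simple way to surface how framings of the same future differ across languages, which is the kind of epistemic contrast the project proposes to study.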
This project aims to strengthen digital dialectology and foster companionship between humans and AIs for affirmative speculation, addressing questions such as: How can speculative technologies like LLMs help researchers imagine futures beyond climate profiteering? How can humans cultivate relationships with LLMs to develop specific protocols and actionable plans for solutions? How can the worldviews embedded in minoritized languages and epistemologies contribute to the design of these solutions?