Generative Artificial Intelligence (GenAI) is widely regarded as a transformative tool in education, providing rapid access to vast amounts of information. However, there are concerns regarding its potential to disseminate misinformation and undermine Indigenous data sovereignty, issues that are critical for Indigenous communities when AI-generated texts misrepresent their identities and knowledge. Machine learning models have been shown to perpetuate biases, often marginalising historically underrepresented groups. The exclusion of Indigenous voices from the development of GenAI raises significant ethical concerns, particularly in relation to cultural misrepresentation and the appropriation of Indigenous narratives.
As AI-driven tools such as ChatGPT become increasingly integrated into educational and public discourse, their role in shaping perceptions of Australian First Nations peoples warrants critical examination. Our research specifically investigated how GenAI responds when explicitly, and problematically, instructed to adopt the persona of an Australian First Nations person. This study employs a collaborative autoethnographic methodology to examine how four researchers reflect on and respond to the ways GenAI tools represent Australian First Nations peoples. Through collective and culturally grounded analysis of the researchers' individual experiences with AI-generated content, the study critically explores the ethical and representational challenges posed by GenAI.
Findings revealed that GenAI outputs were often superficial, generalised, and culturally insensitive. The First Nations content analysis identified a tendency to homogenise Australian First Nations identities, reinforcing stereotypes rather than authentically reflecting Australian First Nations perspectives. This raises concerns about digital colonialism and the misappropriation of Australian First Nations knowledge, as AI-generated content often draws from Western narratives rather than Australian First Nations worldviews.
Researcher reflections further emphasised ethical risks, misinformation, cultural inaccuracy, and a lack of complexity as key concerns, stressing the need for transparent and culturally responsive AI practices. This study contributes to the discourse on AI ethics and Australian First Nations representation.