Generative AI and Australian First Nations Representation: Ethical Concerns and Cultural Implications
Journal article   Open access   Peer reviewed

Natalie McMaster, Renee Morrison, Ree Jordan and Hope O'Chin
Australasian Journal of Technology Education, Vol. 10, pp. 1–15
2025
Published Version (PDF, 336.56 kB), CC BY 4.0, Open Access: https://ajte.org/index.php/AJTE/article/view/122

Abstract

Keywords: Humanities and social sciences curriculum and pedagogy (excl. economics, business and management); Higher education; Generative Artificial Intelligence; AI generated texts; First Nations; Indigenous; Generative AI; misappropriation; culture; expectations for technology education

Generative Artificial Intelligence (GenAI) is widely regarded as a transformative tool in education, providing rapid access to vast amounts of information. However, there are concerns regarding its potential to disseminate misinformation and undermine Indigenous data sovereignty—issues that are critical for Indigenous communities when AI-generated texts misrepresent their identities and knowledge. Machine learning models have been shown to perpetuate biases, often marginalising historically underrepresented groups. The exclusion of Indigenous voices from the development of GenAI raises significant ethical concerns, particularly in relation to cultural misrepresentation and the appropriation of Indigenous narratives.

As AI-driven tools such as ChatGPT become increasingly integrated into educational and public discourse, their role in shaping perceptions of Australian First Nations peoples warrants critical examination. Our research has specifically investigated how GenAI responds when explicitly instructed—problematically—to adopt the persona of an Australian First Nations person. This study employs a collaborative autoethnographic methodology to examine how four researchers reflect on and respond to the ways GenAI tools represent Australian First Nations peoples. Through collective and culturally grounded analysis of the researchers’ individual experiences with AI-generated content, the study critically explores the ethical and representational challenges posed by GenAI.

Findings revealed that GenAI outputs were often superficial, generalised, and culturally insensitive. The First Nations content analysis identified a tendency to homogenise Australian First Nations identities, reinforcing stereotypes rather than authentically reflecting Australian First Nations perspectives. This raises concerns about digital colonialism and the misappropriation of Australian First Nations knowledge, as AI-generated content often draws from Western narratives rather than Australian First Nations worldviews.

Researcher reflections further emphasised ethical risks, misinformation, cultural inaccuracy, and a lack of complexity as key concerns, stressing the need for transparent, culturally responsive AI practices. This study contributes to the discourse on AI ethics and Australian First Nations representation.
