We know ‘the system’ has long failed Aboriginal people – so why not cyberpunk it?
Indigenous disruption of cruel government policies could further the cause of self-determination and might even help save the planet
“The purpose of a system is what it does” is a concept coined by the cybernetician Anthony Stafford Beer. It describes how the intentions of a system’s creators, or even a system’s perceived primary functions, don’t always have a causal relationship with how the system actually behaves or is interacted with over its lifecycle.
Beer’s idiom has implications not just for technological systems but for the way we can understand failings in social service systems, justice systems and so on. It is important to note, however, that technology rarely exists in a socioeconomic vacuum; political, social and economic factors influence the ways in which technology is integrated into other systems. Technological systems aren’t separable from justice systems, cultural factors and the like.
We see poignant examples of Beer’s idiom in the ways machine learning, facial recognition technology and cameras have been adopted by the justice system here in Australia. Technologies such as Clearview AI, which allows anyone with an account to scan a photo of an unknown person and locate additional images and identifying information about them from across the internet, raise ethical questions about individual privacy. The rhetoric, of course, is that police need access to these technologies to keep citizens safe from some amorphous danger. Yet in one unrelated incident in Queensland, police accessed the private details of a domestic violence victim in their database and shared them with the victim’s abusive partner. This example is just one of many showing that the rhetoric or intention of keeping us safe isn’t necessarily what these systems actually produce.
Could it be argued that the inherent properties and functions of technologies themselves are the reason systems can be used for purposes other than their stated intentions? The story of the AK-47 assault rifle comes to mind: a rifle designed for resilience across many different climates and environments, and one that is relatively easy to construct and mass-produce. An engineering masterpiece. Its inventor, Mikhail Kalashnikov, said he felt remorse at the way terrorist groups had adopted the weapon, as he had intended it as a tool for defence. But the major divergence from most consumer-level digital technologies here is obvious: weapons are made to kill, maim or intimidate. These aren’t “side-effects” of the technology. Is there a responsibility, then, on corporations and engineers to consider the socioeconomic and geopolitical contexts in which their products are designed, developed and distributed?
I can only answer these questions with more questions, but I think the inverse of the issue is possible: an Indigi-futurism/cyberpunk disruption. Indigenous futurism is a subgenre of science fiction that explores possibilities through speculative histories and/or futures of Indigenous people. It critiques projections of Indigenous people as lacking the will or capacity to understand and use technology, and explores what ideas like decolonisation might look like for Indigenous communities when put into practice. Cyberpunk is similarly a genre of science fiction, set in dystopic futures where corporations blur the boundary between business and government. It depicts technologies such as artificial intelligence and cybernetic augmentation (machine parts that complement human biological functions) as ubiquitous in everyday society, and usually features characters or groups who mount some form of resistance to these imposed systems of control.
Indigenous people already exist in the dystopias of science fiction literature, or at least in their prototypes. Indigenous people have been (and continue to be) the first to experience dystopic government and corporate overreach, from the testing of nuclear and biological weapons to being the test subjects for cruel programs of social and cultural upheaval such as the stolen generations and the Northern Territory intervention, alongside countless occurrences of sacred site destruction and land dispossession. Beer’s idiom persists in these examples, because there has always been a moral justification, no matter how flimsy, for these actions. How often have we heard “it was for their own good”, or, once public sentiment has shifted against such policies, “it was bad, but they had good intentions”?
The inverse of these systems emerges from these histories: from the tension of the systems imposed upon us, and from the systems born of our more than 60,000 years of surviving in some of the harshest conditions on the planet.
We should be concerned about the ways in which technologies are being adopted for nefarious purposes. Technologies such as blockchains, cryptocurrencies and non-fungible tokens come with considerable economic risks, scams and environmental costs. Artificial intelligence and machine learning are already being used by police forces for facial recognition to identify protesters at marches, among many other ethically dubious purposes.
However, I hold a perhaps naïve optimism for Indigi-futurism and the ways in which we can cyberpunk these systems. We can bury our heads in the sand and leave the response to the glacial pace of government policy, or we can hack these systems and adapt them to create economic development opportunities and new modes of artistic expression, and to further the cause of Indigenous self-determination. By combining these systems with our culture’s 60,000 years of knowledge, we might even be able to save the planet.