Indigenous communities are among those most vulnerable to the adverse impacts of the COVID-19 pandemic in Canada. Those impacts have been exacerbated by Canada’s increased reliance on technology and internet connectivity for communication, government processes, health care and social services. This shift has drawn new attention to the “digital divide”: Indigenous communities in rural and remote areas of the country have long struggled to access the services they need.

While the Canadian government has developed a Connectivity Strategy that commits to improving internet access in Indigenous communities, it must also consult and work together with Indigenous communities in all areas of Canada’s tech policy, including artificial intelligence (AI) policies. Canada must leverage the lessons learned from the rapid adoption of technology during the COVID-19 pandemic so as not to further alienate or discriminate against vulnerable groups, especially when it comes to data-driven technologies such as AI.

More specifically, when it comes to AI, a technology that simulates aspects of human cognition, Canada should integrate Indigenous knowledge and leverage decolonization theory to mitigate the adverse impacts of AI on Indigenous communities while simultaneously supporting efforts toward more equitable AI.

Today, AI is automating repetitive, data-oriented tasks, many of which exist in government-delivered public services. As the Canadian government continues to explore the use of AI in government programs and services, there is a particular need to address biases and shortcomings in the way these programs and services are currently delivered.

What is particularly concerning about AI for Indigenous communities is the issue of bias. The data that AI systems use to learn and make decisions can encode historical or social inequities, and those responsible for building or operating AI systems can consciously or unconsciously introduce their own biases into these systems. Although AI offers great opportunities to increase efficiency in service delivery and redirect resources toward tasks that require human cognition, implementing it without first addressing issues of bias or racism in society and government decision-making can cause added harm to Indigenous communities.
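To make the data-bias mechanism concrete, the following is a minimal, hypothetical sketch in Python. It is not drawn from any real government system; the scenario, feature names and numbers are invented for illustration. It shows how a model trained on historically biased decisions can reproduce that bias even when group membership is excluded from its inputs, because a correlated proxy feature carries the same signal.

```python
# Hypothetical illustration: a model trained on biased historical decisions
# reproduces the bias through a proxy feature, even with "group" removed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # 1 = historically marginalized group
proxy = group + rng.normal(0, 0.5, n)    # correlated proxy (e.g., postal region)
merit = rng.normal(0, 1, n)              # true eligibility, identical across groups

# Historical decisions applied the same merit threshold but penalized group 1.
historical_approval = (merit - 0.8 * group + rng.normal(0, 0.3, n)) > 0

# Train on the historical outcomes *without* the group column.
X = np.column_stack([merit, proxy])
model = LogisticRegression().fit(X, historical_approval)

# The model still approves group 1 at a lower rate: the proxy feature
# lets it learn the historical penalty from the data alone.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
```

In this toy setting, removing the sensitive attribute does not remove the bias, which is why audits of training data and outcomes, not just model inputs, matter for public-sector AI.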

AI in Canada

In March 2017, Canada became the first country in the world to launch a national AI strategy, committing $125 million over five years. The funds were largely invested to increase the number of AI researchers and graduates across Canada; establish three clusters of scientific excellence; and deepen our understanding of the societal impact of AI and its economic, ethical, policy and legal implications. In April 2019, the Canadian government implemented the Directive on Automated Decision-Making to ensure that its use of AI in administrative decisions improves service delivery in a manner that is legal, transparent and accountable. However, there remain some questions about the Directive — for example, how individuals might seek recourse for decisions made by AI (as evidenced by the introduction of automated decision-making in Canada’s Temporary Resident Visa applications).

As Canada continues to develop AI research and talent, we must acknowledge the country’s historical and ongoing colonial relationship with Indigenous Peoples. That relationship, including the policies that continue to marginalize and extract from Indigenous communities, must inform the development and implementation of AI in Canadian service delivery to ensure it does not infringe on individuals’ rights or cause undue harm.

While new technologies present novel opportunities for Canadians, as demonstrated by the renewable energy industry, those opportunities often come at the expense of Indigenous communities. Experts argue that AI has the potential to be particularly harmful to vulnerable communities: automation and machine learning can leverage data in ways that privilege some social groups over others, both reinforcing and deepening systemic inequities and biases. An example of this can be seen in what Margaret Hu describes as “Algorithmic Jim Crow”: “Equal vetting and database screening of all citizens and non-citizens will make it appear that fairness and equality principles are preserved on the front end. Algorithmic Jim Crow, however, will enable discrimination on the back end in the form of designing, interpreting and acting upon vetting and screening systems in ways that result in a disparate impact.”

Furthermore, the private-sector organizations engaged in developing and deploying AI technologies in response to COVID-19 showcase how non-state actors are increasingly gaining influence in government decision-making. For example, BlueDot, a Canadian software company that uses natural language processing and machine learning to gather insights on the spread of infectious diseases, was among the first organizations to raise the alarm about the emerging COVID-19 pandemic. In March 2020, the Canadian government announced it would leverage BlueDot’s disease analytics platform to model and monitor the spread of COVID-19.

The challenge is that private-sector organizations are neither bound by the same responsibilities nor accountable to the public in the same way that the Canadian government is. When it comes to the use of AI in the public sector, therefore, the government must take particular care to establish standards for protecting individuals’ rights that private-sector suppliers must meet. For example, Public Services and Procurement Canada and the Treasury Board of Canada Secretariat could review the existing list of interested and pre-qualified AI suppliers to determine which companies can provide responsible AI services from an Indigenous perspective.

The Indigenous perspective

To avoid reproducing the current imbalance of power and the subsequent cycle of harm toward Indigenous Peoples, Canada must apply a decolonial lens to its development of the national strategy for AI. As it stands, AI can extend colonial practices of exploitation, extraction and control by limiting Indigenous Peoples’ sovereignty over their data. Decolonization theory offers the tools to address the potential adverse impacts of AI on Indigenous communities and deconstruct this power imbalance. These tools include decentring Western thought and ideas, examining current processes to prioritize the needs of marginalized communities, and recognizing the value of alternative or marginalized forms of knowledge.

Within the Canadian context, Indigenous communities’ knowledge, insights, practices and experiences should be prioritized in the development of AI and AI policy, including research and design processes. Dick Bourgeois-Doyle refers to this knowledge as “Two-Eyed AI”. Two-Eyed AI draws on the concept of “Two-Eyed Seeing” developed by Mi’kmaw Elders to approach complex issues with the strengths of both Indigenous ways of knowing and Western knowledge. This approach is designed to support integrated thinking and acknowledge the value of Indigenous skills and perspectives. The integration of Indigenous knowledge in AI and AI policy, through methods such as Two-Eyed Seeing, works to decentre Western thinking in the sector and address the colonial nature of AI, which will, in turn, support ongoing efforts toward ethical and equitable AI in Canada.

Indigenous knowledge also offers new perspectives on how problems in AI are defined and understood. Because Indigenous communities are directly affected by technological developments and the adverse impacts of AI, they are likely to have a unique perspective on these ethical challenges and potential solutions. Similarly, as Karina Kesserwan’s work on Indigenous perspectives in AI highlights, the features of Indigenous languages and oral traditions, understandings of animate and inanimate entities, and the concept of stewardship for future generations can be leveraged to promote sustainability and better outcomes.

What can Canada do?

As Canada strives to be a leader in the AI realm, policy-makers should consider ways to integrate Indigenous knowledge through co-development and collaboration with a diverse range of Indigenous communities. In 2019, the federally funded Canadian Institute for Advanced Research (CIFAR), tasked with developing the Pan-Canadian AI Strategy, held a workshop on Indigenous Protocol and AI. The workshop culminated in a position paper on AI from an Indigenous perspective that is designed to be a jumping-off point for the creation and design of more ethical AI. While this work is a good start, there are concrete actions Canada should take in developing policies related to AI:

  • Address biases and systemic racism in policies, programs and services before introducing AI to deliver those same policies, programs and services.
  • Ensure that Indigenous experts and community representatives are included, and properly compensated, throughout each step of the policy-development process to incorporate Indigenous perspectives.
  • Establish a Council of Indigenous leaders, scholars and community representatives to review Canada’s national strategy for AI.
  • Promote the inclusion of Indigenous content not only in all areas of Canada’s AI strategy but also in all areas of computer science training at the post-secondary level.
  • Mandate cultural competency training for AI experts and policy-makers involved in the Pan-Canadian AI Strategy.
  • Require private-sector actors that are developing and supplying AI products and services to complete cultural competency training and understand the direct socio-economic impacts of the tools they develop.

The Canadian government has a unique opportunity to leverage Indigenous perspectives in the development of AI and AI policy. This opportunity is reinforced by the government’s responsibility to Indigenous Peoples, outlined in section 35 of the Constitution Act, 1982, and in the UN Declaration on the Rights of Indigenous Peoples, which Canada has committed to upholding. At the same time, because private companies do not have the same incentives as governments to consider vulnerable and marginalized communities, it is up to the Canadian government to establish the above recommendations as a baseline requirement for private-sector actors seeking to be suppliers of AI goods and services.

By placing greater value on Indigenous perspectives in the development of AI and AI policy, Canada can excel in its efforts to build ethical and equitable AI.

This piece was researched and written by two non-Indigenous settlers on the traditional territory of many nations, including the Mississaugas of the Credit, the Anishnabeg, the Chippewa, the Haudenosaunee and the Wendat peoples. While grounded in research, the recommendations put forth can offer only this perspective. We disclose this positionality as we work not only to unlearn our colonial mentalities but also to actively engage with anti-colonial methodologies within this context.