Immigration, Refugees and Citizenship Canada (IRCC) is currently exploring the use of artificial intelligence (AI) to assist in processing the high volume of immigration applications received every year. Anyone following immigration news over the past two years is aware of the increase in refugee claimants in Canada, largely due to the shift in US immigration policy, but the Financial Post also recently reported that Canada’s population increased by half a million over the past year, the highest increase since 1957. The majority of this population growth has been driven by immigration, which has helped to offset the impact of an aging population and growing labour shortages.
The effect of this influx is that our systems are backlogged, and IRCC is struggling to find a way to deal with the increasing demands of our immigration system and labour needs. When I started working in immigration, we thought that a backlog of six weeks to receive a positive Labour Market Opinion [now Labour Market Impact Assessment (LMIA)] was horrendously long. We couldn’t have imagined that the backlog would continue to grow to the point that employers can now wait up to six months to receive an LMIA for a critical employee, and that employers would need to pay CAD $1,000 for the privilege of waiting.
Labour shortages don’t just affect the private sector; they affect the government as well. So, more applications need to be processed, with fewer people to do the work, when time is of the essence. What is the solution? Right now, IRCC suggests AI. However, this approach is fraught with problems, and experts argue there is potential for human rights violations, particularly when the technology is applied to applications from vulnerable people such as refugees.
The challenge is that someone needs to teach the AI system by feeding historical immigration data and case results into it. The system then uses this raw data to learn how to make application decisions going forward. There is an assumption that AI will be rational and impartial, and it will be, but only to the degree that the data entered is unbiased. Human and systemic bias exists in that raw data. The fear is that the use of AI will simply perpetuate bias, because the system cannot, at this time, critically think about or evaluate the information it is given.
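To make the concern concrete, here is a minimal sketch (in Python, using scikit-learn) of how a model trained on biased historical decisions reproduces that bias. The data, the region flag, and the numbers are entirely hypothetical; this illustrates the mechanism, not IRCC’s actual system.

```python
# Hypothetical illustration: a model trained on biased decisions
# reproduces the bias. All data below is fabricated.
from sklearn.linear_model import LogisticRegression

# Each row is [years_of_experience, from_region_x]. Suppose past
# officers refused region-X applicants more often, independent of
# their qualifications.
X_train = [
    [5, 0], [4, 0], [2, 0], [1, 0],   # applicants from elsewhere
    [5, 1], [4, 1], [2, 1], [1, 1],   # applicants from region X
]
y_train = [1, 1, 1, 0,   # mostly approved
           0, 1, 0, 0]   # mostly refused, despite similar profiles

model = LogisticRegression().fit(X_train, y_train)

# Two equally qualified applicants; only the region flag differs.
# The model assigns a lower approval probability purely because of
# the pattern it learned from the skewed history.
print(model.predict_proba([[4, 0]])[0][1])  # higher
print(model.predict_proba([[4, 1]])[0][1])  # lower
```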
In addition, the parameters of the established criteria may unintentionally result in over- or under-inclusivity, giving unexpected results. Unless someone carefully monitors the output of the system, many cases may be decided erroneously.
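A toy example of the over/under-inclusivity problem, again with a hypothetical rule and cutoff rather than actual IRCC criteria: a single hard threshold screens out a near-miss applicant, while loosening it admits profiles the policy never intended to capture.

```python
# Hypothetical rule: the 24-month cutoff is illustrative only.
def meets_experience_rule(months_experience: int, cutoff: int = 24) -> bool:
    """Screen an application using a hard experience cutoff."""
    return months_experience >= cutoff

# Under-inclusive: 23 months with otherwise strong credentials is
# screened out by one rigid parameter.
print(meets_experience_rule(23))              # False

# Over-inclusive: lowering the cutoff to rescue near-misses now
# admits applicants the policy never contemplated.
print(meets_experience_rule(13, cutoff=12))   # True
```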
The result of this conundrum is that we need AI but we can’t rely on it. AI is neither saviour nor devil, but a very useful tool when implemented in a considered and reflective manner. As immigration delves into the world of AI, the reviewing process should work to ensure a few things:
- that expectations of the system and its limitations are realistic;
- that thoughtful and specific selection criteria are entered into the system, and that those criteria are proactively updated in response to changes in law, court decisions, and shifts in socially acceptable perspectives; and
- that, before a decision is rendered, a processing officer reviews the application to ensure that the decision makes sense and is supported in law (a minimal sketch of this review loop follows the list).
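The third point amounts to a human-in-the-loop design: the system only recommends, and no decision issues without officer sign-off. The sketch below is illustrative Python; the names and structure are assumptions, not IRCC’s architecture.

```python
# Human-in-the-loop sketch: the system recommends, an officer decides.
# All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    application_id: str
    suggested_outcome: str    # e.g. "approve" or "refuse"
    reasons: list             # criteria the system relied on

def render_decision(rec: Recommendation, officer_confirms: bool) -> str:
    """Only a reviewing officer can turn a recommendation into a decision."""
    if not officer_confirms:
        return f"{rec.application_id}: returned for full manual review"
    return f"{rec.application_id}: decision rendered ({rec.suggested_outcome})"

rec = Recommendation("A-1234", "refuse", ["insufficient work experience"])
print(render_decision(rec, officer_confirms=False))  # escalated, not decided
```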
Officers already use clear selection criteria and check-box systems to decide applications, and the current result is a great deal of inconsistency. Moving these basic assessment criteria into an AI system that evaluates and flags complex issues should actually make the process more predictable, but we need processing officers to critically consider and evaluate the results. We need to ensure that the human factors of logic, empathy, and common sense remain entrenched in the immigration process. After all, how can you make a Humanitarian and Compassionate application when there is no human to apply compassion?