CANADIAN IMMIGRATION IN THE ERA OF ARTIFICIAL INTELLIGENCE: WILL COMPUTERS TAKE THE PLACE OF HUMANS IN MAKING DECISIONS?

Last week, the American public television network PBS aired a wide-ranging program on the remarkable advances in information systems in recent years, which can now not only store huge amounts of data, but also analyze them independently and make predictions on that basis. Just a few days earlier, in Bucharest, Mr. Brad Smith, President of Microsoft, defined Artificial Intelligence as “a computerized system that can learn from experiences (of some games, for example) or data, by identifying patterns in the data with which it is fed and thus making decisions.” He stressed that the last decade has marked an important qualitative leap, namely endowing computers with the ability to understand the world and, as such, to transform it.
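
For readers less familiar with the jargon, the toy sketch below (in Python, with invented data) illustrates that definition in its simplest possible form: a program that is fed labelled past examples, identifies a pattern of similarity among them, and then “decides” a new case by analogy with the closest precedent.

```python
# A toy illustration of Mr. Smith's definition: the "system" below is shown
# labelled past examples (patterns), then "decides" a new case by finding the
# most similar precedent. The data is invented purely for illustration.

# Each past case: (years_of_experience, language_test_score) -> outcome
past_cases = [
    ((2, 6.0), "refused"),
    ((5, 7.5), "approved"),
    ((8, 8.0), "approved"),
    ((1, 5.5), "refused"),
]

def decide(new_case):
    """Return the outcome of the most similar past case (1-nearest-neighbour)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, outcome = min(past_cases, key=lambda item: distance(item[0], new_case))
    return outcome

print(decide((6, 7.0)))  # -> "approved": the decision simply mirrors the closest precedent
```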

This progress was made possible by the massive increase in computing power, by the emergence of the “cloud,” which makes it possible to manage this computing power (eliminating the need for each of us to build our own server and data center at considerable expense), as well as by the explosion of digital data, which means we will enter 2020 with 25 times more data than in 2010.

There is much talk today about the revolution that the coming move to 5G networks will spark: a world in which our fridge communicates with the oven and the dishwasher, but also with the library, sending information to Amazon’s “Alexa” or to Siri, from where it is routed to the grocery store or the bookstore, as well as to the bank for payment. The consequence is that our entire household becomes accessible to suppliers, who can orient their marketing policy accordingly, while we are exposed at the same time to a massive intrusion into our personal life, even if it starts from the best intentions, such as sending messages designed to rationalize our diet.

This is not science fiction, but the image of radical transformations that will occur in the immediate future. And artificial intelligence (AI) has already penetrated, often unnoticed, not only the private sector, but also the structure and work of officials in various fields. Some countries are very advanced in this regard, such as Estonia, which has implemented a computer program that replaces the judge in cases the authorities of that country consider suitable for standardization, in particular litigation over claims under seven thousand euros. Estonia has also fully computerized the issuing of identity documents, the voting system and tax administration, reducing by 80% the personnel these operations used to require.

Of course, Canada could not allow itself to be left behind. In his presentation at the recent National Conference of CAPIC (Canadian Association of Professional Immigration Consultants), Mr. Mario Bellissimo, a Toronto-based Barrister and Solicitor, outlined with keen analytical insight not only the framework of the changes made by IRCC (Immigration, Refugees and Citizenship Canada) since 2014, but also their legal and broader human consequences, voicing legitimate concerns and proposing possible solutions to avoid the depersonalization of the immigration system. With the kind permission of the author, I reproduce below, within the limits of the available space, the main ideas of his presentation, with special reference to the effects of AI on some fundamental aspects of the selection process. Following his line of thinking, we will briefly examine how the introduction of AI/ML in the field of immigration can lead to: 1) the loss of discretion in the assessment of individual cases; 2) the undermining of the principle of “procedural fairness”; 3) the potential perpetuation of past prejudice and discrimination; and 4) a lack of transparency.

(1) The risk of eliminating human intervention in the assessment of individual situations was highlighted by Dr. Vic Satzewich of McMaster University, in the only academic study devoted to the IRCC decision-making process, which underscored the major importance of human intervention in “implementing government policy,” involving not only the observance of general rules, but also the consideration of the individual circumstances of each case. Developing this idea, Mr. Bellissimo asked to what extent it is acceptable to make room for artificial intelligence: to entrust it with a supporting role, or even the sorting of information, or to allow AI to make decisions entirely on its own? To answer, he used an example from the field of spousal sponsorship, arguing that a negative decision made by AI on the basis of previously collected data about candidates with multiple divorces may be incorrect in light of the individual circumstances of a case in which all those divorces resulted from the abusive behavior of the former spouses. The Supreme Court of Canada has ruled, for example, that a person suffering from a condition that would impose high medical costs cannot be declared medically inadmissible unless all the individual circumstances that define his or her situation are taken into account. The question is: can AI comply with this directive, no matter how rich the body of previous data it was supplied with?
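
To make the concern concrete, here is a rough, purely hypothetical sketch of the mechanism Mr. Bellissimo describes; the features, weights and threshold are invented, and no claim is made about how IRCC’s actual tools are built. The point is simply that whatever the model has not been given as a feature, such as the abusive context behind the divorces, cannot be weighed at all.

```python
# Hypothetical sketch of the concern raised above: a model trained on past
# spousal-sponsorship outcomes may learn "many previous divorces -> refuse"
# as a pattern. Feature names and weights are invented for illustration only.

LEARNED_WEIGHTS = {          # what a model might distil from past files
    "previous_divorces": -1.5,
    "years_known_sponsor": +0.4,
    "joint_finances": +1.0,
}
THRESHOLD = 0.0

def model_recommendation(case: dict) -> str:
    score = sum(LEARNED_WEIGHTS[f] * case.get(f, 0) for f in LEARNED_WEIGHTS)
    return "approve" if score >= THRESHOLD else "refuse"

applicant = {
    "previous_divorces": 3,     # all caused by the former spouses' abuse --
    "years_known_sponsor": 4,   # a circumstance the model has no feature for,
    "joint_finances": 1,        # so it cannot be weighed at all
}
print(model_recommendation(applicant))  # -> "refuse", regardless of the abusive context
```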

I would add another situation that seems significant to me: IRPA, the Canadian immigration statute (Immigration and Refugee Protection Act), allows the officials in charge of selecting economic immigrants to approve an applicant who does not reach the required minimum score, if the officer considers that he or she still has a real chance of becoming successfully established in Canada. In the past, this provision was applied quite frequently by the immigration officers conducting selection interviews, based on their study of the file but also, and mainly, on their direct interaction with the applicant. Since the Express Entry system initiated the introduction of AI technology, I have not heard of such cases, although I am convinced that many candidates would have deserved it.
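
The contrast can be illustrated with a simple hypothetical sketch: an automated cut-off applies the pass mark mechanically, while the officer’s substituted evaluation under IRPA can look past it. The pass mark and the candidate data below are invented for illustration.

```python
# Illustrative contrast between a purely automated cut-off and the discretion
# IRPA gives an officer to approve a candidate below the pass mark. The pass
# mark and the scores used here are examples, not official figures.

PASS_MARK = 67  # illustrative minimum score

def automated_decision(score: int) -> str:
    # A machine applying only the rule: below the mark, refuse.
    return "approve" if score >= PASS_MARK else "refuse"

def officer_decision(score: int, likely_to_establish: bool) -> str:
    # The human officer may substitute their own evaluation when, after the
    # interview, they are satisfied the applicant can become established.
    if score >= PASS_MARK or likely_to_establish:
        return "approve"
    return "refuse"

print(automated_decision(64))                          # -> "refuse"
print(officer_decision(64, likely_to_establish=True))  # -> "approve"
```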

(2) The risk that predictive analysis will affect procedural fairness is obvious if we consider that the principle of “procedural fairness,” which underlies all common-law systems (including Canada’s), requires that every individual who is the subject of an administrative procedure (including the selection of immigrants) have the opportunity to present his or her case completely and correctly, and that the decisions affecting that person be the result of a fair, impartial and open evaluation process.

The question that arises is whether, with the introduction of artificial intelligence, the obligation of immigration officials to inform candidates not only of the doubts that trouble them, but also of the mechanism that generated those doubts, can still be met. It is hard to believe that predictive analysis based on previous data and experiences, which are biased by the very way they were assembled, can satisfy these requirements, which remain absolutely mandatory in light of the consistent jurisprudence of the Supreme Court of Canada. Citing the famous decision in the Baker case, Mr. Bellissimo points out that the entire decision-making process will be distorted by this biased configuration, even if the final decision is made by an official and not by a machine.

It is even harder to believe that AI will be able to satisfy the requirement to explain to the person concerned the motivation behind the decision, a shortcoming that ultimately affects the candidate’s ability to challenge such a decision and hollows out the requirement that applicants themselves participate in the decision-making process, as established by the Federal Court in El Maghraoui. In Mr. Bellissimo’s opinion, this shortcoming will in the future be the main argument of those who challenge decisions taken by, or with the help of, artificial intelligence.
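
What a minimally adequate explanation might look like can be sketched as follows; the factors and weights are hypothetical, but the idea is that an applicant should at least learn which elements pushed the score down, rather than receive an unexplained number.

```python
# A minimal sketch of the transparency the case law demands: if a tool scores a
# file, the applicant should at least see which factors counted against them,
# not just the final number. Features and weights here are hypothetical.

WEIGHTS = {"travel_history": 0.8, "ties_to_home_country": 1.2, "funds": 0.9}

def score_with_reasons(case: dict):
    contributions = {f: WEIGHTS[f] * case.get(f, 0) for f in WEIGHTS}
    total = sum(contributions.values())
    # Surface the weakest factors as human-readable "reasons" -- the kind of
    # motivation an applicant would need in order to respond or to challenge
    # the decision.
    reasons = sorted(contributions, key=contributions.get)[:2]
    return total, reasons

total, reasons = score_with_reasons(
    {"travel_history": 0, "ties_to_home_country": 1, "funds": 1}
)
print(total, reasons)  # ~2.1, ['travel_history', 'funds'] -- the factors that weighed least
```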

(3) The risk of perpetuating the prejudices of the past is another possible consequence of introducing AI into the selection of immigrants, since, by the immigration authorities’ own admission, the set of technological tools introduced by IRCC relies on machine-learning techniques that analyze the results of “thousands of past applications.” In other words, insofar as these past results incorporate prejudice or discrimination contrary to current moral and legal standards, there is a risk of reproducing them in future decisions, unless corrections are made by the human factor, which is not easy as long as the decision-making process remains opaque.

Canadian courts have faced such situations in areas other than immigration (such as criminal justice), but the potential risk of perpetuating past discrimination cannot be overlooked in the field of immigration either, given the manner in which the instruments used by IRCC are built. For example, if the decision-making performed by AI is based on 1,000 past decisions on temporary visa applications, the question arises whether the database thus constituted also encompasses decisions that were later revised precisely because of the discrimination they contained. And how can the AI correct the database by eliminating decisions that, even if never challenged, contain discriminatory outcomes?
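
The curation step this implies can be sketched as follows; the record structure is invented, and the sketch simply shows that removing tainted decisions presupposes that someone recorded which decisions were tainted, information that does not exist for decisions that were never challenged.

```python
# Sketch of the curation step implied above: before 1,000 past decisions are fed
# to a learning tool, the ones later overturned for discrimination would have to
# be removed -- which only works if that outcome was ever recorded. The record
# structure below is invented for illustration.

past_decisions = [
    {"id": 1, "outcome": "refused",  "overturned_for_discrimination": True},
    {"id": 2, "outcome": "approved", "overturned_for_discrimination": False},
    {"id": 3, "outcome": "refused",  "overturned_for_discrimination": False},
    # ... decisions that were never challenged carry no such flag at all,
    # which is exactly the gap the paragraph above points to.
]

training_set = [d for d in past_decisions
                if not d.get("overturned_for_discrimination", False)]

print(len(training_set), "of", len(past_decisions), "decisions retained for training")
```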

Mr. Bellissimo rightly suggests that it is to the advantage of the immigration authorities to give up the secrecy that has so far surrounded the recourse to AI/ML, and to involve the practitioners who can better identify the risk factors in this process.

(4) As is clear from the above, the lack of transparency is another major concern regarding the use of AI/ML technology. Little is known about the selection of the information that IRCC feeds into the machine-learning process, or about the manner in which, and the limits within which, this information is shared with other entities. Such matters should, however, become public knowledge if fair treatment of applicants is genuinely intended. In Mr. Bellissimo’s opinion, in order to protect the interests of all participants in the immigration process, the details of the implementation of AI should be submitted for Parliament’s approval.

The inescapable conclusion is that artificial intelligence is here to stay, but that its introduction should lead neither to the elimination of the human factor from the decision-making process, nor to the violation of the rule-of-law principles referred to above. This requires the collaboration of all stakeholders, starting with the immigration authorities, but without excluding the community of practitioners, socio-cultural organizations and each of us, Canadian citizens and residents, all of whom are immigrants or immigrants’ descendants.
