Canada’s New Federal Directive Makes Ethical AI a National Issue

At the ideal intersection of technology and civil service, every government process would be automated, streamlining benefits, applications, and outcomes for every citizen in a digitally enabled country.

That approach demands a significant layer of protocol to ensure citizens feel empowered about how decisions are made and how their government addresses their needs digitally. Canada is currently a world leader in AI, thanks largely to major government investments like the Pan-Canadian Artificial Intelligence Strategy. The field is pervasive; there is hardly an industry it has not disrupted, from mining to legal aid. Government is no different. In fact, government might be one of the most obvious places where automated decision processes can save time and money.

The dilemma that arises from a government's adoption of AI is an amplification of the problems facing any organization embracing the burgeoning technology: How do you ensure this AI platform or service fairly and adequately caters to the needs of its clients? A company like Facebook uses AI for a number of reasons, such as ad targeting or facial recognition in photos. Sure, its algorithms may result in creepily accurate ads popping up in the news feed, but the ethics of its machine learning solutions really only affect a person's privacy, or lack thereof in recent years.


A government, on the other hand, must take a vast array of considerations into account as it begins to adopt new technologies like AI. Governments deal with privacy, of course, but they also deal with health care, immigration, criminal activity, and more. So the problem for them revolves less around "Which kind of AI solution should we use?" and more around "Which ones shouldn't we use?"

The AI platforms a government can’t touch are the ones that offer little to no transparency and are riddled with bias and uncertainty. If a decision from the government is rendered through an automated process, a citizen has a right to understand how that decision came to be. There can be no protection of IP and no closely-guarded source code. For example, if an applicant for a potential criminal pardon is denied that pardon by an AI system trained with historical data, that applicant deserves to understand exactly why they may have been turned down.

The Canadian government’s solution to this issue is the Directive on Automated Decision-Making, released earlier this week. Alluded to in late 2018 by then-Minister of Digital Government Scott Brison, it is a manual describing how the government will use AI to guide decisions within several departments. At the heart of the directive is the Algorithmic Impact Assessment (AIA), a tool that determines exactly what kind of human intervention, peer review, monitoring, and contingency planning an AI tool built to serve citizens will require.

A machine’s guide to ethics

The Canadian government’s path to implementing ethical practices in its AI decision-making processes began roughly 14 months ago, when Michael Karlin, the team lead for data policy at the Department of National Defence, noticed a blind spot in how the government handled its data. A team was formed to develop a framework, and it grew from there, working entirely in the open through platforms like GitHub and receiving feedback from private companies, other governments, and post-secondary institutions such as MIT and Oxford.

When that team, which is now led by Canada’s chief information officer Alex Benay, looked around at other governments, they realized there was no tool in place that could accurately measure the impact an automated process may have on the citizens it was created to serve. There are obviously inherent risks depending on how AI interacts with the people it serves: some applications are low-stakes, while others carry serious consequences.

“If a chatbot is going to tell you a skating rink is open and you show up and it’s not open and that chatbot was wrong, that’s low risk,” Benay explains. “But if you’re going to automate immigration decisions, that’s as high of a risk as we can get.”

The newly released directive quantifies these risks and offers up the AIA as a tool that determines exactly what kind of intervention a company building an AI solution for the government may need. Companies access the AIA online and fill out a survey of more than 60 questions about their platform; once finished, they receive an impact level. The survey asks questions like “Does the system enable override of human decisions?” and “Is there a process in place to document how data quality issues were resolved during the design process?”

Two more example questions from the AIA.

The impact level of an AI platform contracted by the government is ranked one through four. The higher the rank, the greater the impact the automated decision process has on the rights, health, and economic interests of individuals and communities. Processes that make decisions involving criminal activity or an individual’s ability to exit and enter the country, for example, will immediately be assessed at an impact level of three or four.

At certain impact levels, intervention is required in several forms. If an automated decision process receives an impact assessment of level four, it will require two independent peer reviews, a public plain-language notice, a human intervention failsafe, and recurring training courses.
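To make the mechanics concrete, here is a minimal sketch in Python of how a survey-driven assessment like the AIA could map answers to an impact level and a set of required safeguards. The questions, weights, thresholds, and requirement lists are illustrative assumptions only; the actual AIA questionnaire and scoring are defined by the Treasury Board, not by this code.

```python
# Illustrative sketch only: the questions, weights, thresholds, and requirement
# lists below are assumptions for demonstration, not the AIA's actual scoring rules.

QUESTIONS = {
    "affects_rights_or_freedoms": 4,        # e.g. pardons or immigration decisions
    "decision_is_fully_automated": 3,       # no human in the loop by default
    "cannot_be_overridden_by_human": 3,
    "uses_sensitive_personal_data": 2,
    "no_documented_data_quality_process": 2,
}

# Loosely modelled on the directive's description: level 4 requires two independent
# peer reviews, a plain-language notice, a human intervention failsafe, and
# recurring training. Lower levels get lighter requirements (assumed here).
REQUIREMENTS = {
    1: ["plain-language notice"],
    2: ["plain-language notice", "one peer review"],
    3: ["plain-language notice", "one peer review", "human intervention failsafe"],
    4: ["plain-language notice", "two independent peer reviews",
        "human intervention failsafe", "recurring training"],
}


def impact_level(answers: dict) -> int:
    """Score the yes/no answers and bucket the total into levels 1 through 4."""
    score = sum(weight for question, weight in QUESTIONS.items() if answers.get(question))
    if score >= 10:
        return 4
    if score >= 6:
        return 3
    if score >= 3:
        return 2
    return 1


def assess(answers: dict) -> tuple:
    """Return the impact level and the safeguards it triggers."""
    level = impact_level(answers)
    return level, REQUIREMENTS[level]


if __name__ == "__main__":
    # A chatbot reporting skating-rink hours: low stakes.
    rink_bot = {"uses_sensitive_personal_data": False}
    # An automated immigration decision: as high a risk as it gets.
    immigration = {
        "affects_rights_or_freedoms": True,
        "decision_is_fully_automated": True,
        "cannot_be_overridden_by_human": True,
        "uses_sensitive_personal_data": True,
    }
    for name, answers in [("rink chatbot", rink_bot), ("immigration system", immigration)]:
        level, safeguards = assess(answers)
        print(f"{name}: impact level {level}, requires {safeguards}")
```

Run as written, the sketch scores the skating-rink chatbot at level 1 and the hypothetical immigration system at level 4, mirroring the low-risk versus high-risk contrast Benay describes.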

“The AIA assessment is really a tool that shows your level of severity and how you should treat the issue,” says Benay. “You probably don’t need peer review for that skating rink problem, but will definitely need a peer review group before you automate borders or social services.”

This directive may make it seem like the government is simply bracing for an onslaught of tech disruption, but the priority is actually the opposite: with a topic as nuanced as ethics and AI, the government is focused above all on transparency and accountability.

“What we’re trying to do with the AIA is ensure that humans remain in control so that they can appeal decisions.”

“What we don’t want is a black box algorithm from a company that doesn’t want to tell us how that algorithm made the decision,” says Benay. “That’s not something we can abide by from a service approach where people are supposed to have recourse if a service they received was unjust.”

The AIA itself is a major achievement not only for the Canadian government but for governments around the world. It is the first tool of its kind and was adopted by Mexico four months ago, despite not even being officially released until this week. Benay and his team are in talks with several members of the D9, including Portugal and the U.K., as those countries look to implement the tool.

As important as the directive and the AIA are in setting the pace for ethics in government practice, the regulations may be a point of contention for some of Canada’s preferred AI partners, especially enterprise firms such as Amazon Web Services and Microsoft.

Sourcing the problem

For a company to provide an automated decision process to the government, the new directive compels it to release its source code to the government. This makes sense in theory, as it all relies on “the concept of explainability,” as Benay puts it.

“We need to show citizens how a decision was made through an algorithm,” he says. “If we get to a point where, for example, there is a court case involving that automated decision, we might need to release the algorithm.”

This point of contention is perhaps the true heart of the ethical AI discussion. There needs to be a balance between transparent decisions and the protection of IP. If an AI company wants the government as a client, it has to accept that transparency is valued above all, and that kind of requirement could scare some contractors away.

Think of it like this: in theory, if Amazon were to provide an AI solution to the Canadian government, the only thing standing between any citizen and Amazon’s source code, at least for that particular project, would be a freedom of information request. In an age of closely guarded white-labeled solutions and “borrowed” code, such easy access to source material is a difficult problem to dissect. Access will be reviewed on a case-by-case basis to guard the “trade secrets or commercial confidential information” of a third-party supplier, but even that will be up for interpretation. The source code remains protected by copyright for those who access it, but code can be manipulated enough to obfuscate its origin and create headaches for the original publishers.

The Toronto offices of Element AI, a preferred AI vendor for the Government of Canada.

“It’s a delicate discussion,” Benay says. “I know IP is the bloodline of revenue for these companies. But if you’re going to do business with the government and have direct interaction with citizens, you have added responsibility as a private sector company. That’s the balance we need to find. I’m not sure we found it. Maybe we did, but we’ll continue to work transparently.”

There are only a few ways source code will be protected from release, and they happen to relate to the government departments this new AI directive will not cover. All agents of Parliament are exempt, which more or less encompasses everything in the realm of security and competitive intelligence. At some point in the future, algorithms and AI will be used to help track down illegal activity, and for obvious reasons, the government cannot publicly reveal the source code behind those decisions.

“If you’re going to do business with the government and have direct interaction with citizens, you have added responsibility as a private sector company.”

Benay admits that everything surrounding this directive may not be in its final form. Because the AI ecosystem shifts so rapidly, the government is committing to updating the directive every six months and is also in the process of establishing an AI advisory board to ensure future decisions are made responsibly.

“Normally we would have five, six, even seven years to work through something like this,” says Benay. “We have had to work through this in months. It’s a very new form of policy and governance perspective to work through issues this quickly. But you’re seeing the response. For our AI day last year, we had 125 people in the room. For AI day this year, we had 1,000.”

Trendsetters

Over the next few years, there will be no shortage of discussion surrounding ethics and AI. The real question is when that discussion turns from pre-emptive action about the future of automated decisions to defensive retaliation, trying to protect citizens and the economy from already-established AI solutions. A tool like the AIA is not a defensive mechanism just yet, but it is as close to one as has ever been developed.

“There are too many things we don’t know,” says Benay. “You don’t want to get to a position where we are relinquishing control of decisions to machines without knowing everything about them. The scary part for us around the world is that a lot of governments don’t seem to realize the level of automation of their own values that they are putting forward when dealing with potential vendors that have black box code or IP.”


There is at least a bit of good news to take away. Every member of the D9 is deliberating whether to implement Canada’s AIA, and Mexico has already done so. The goal is for as many nations as possible to use it and develop a collective brain trust behind the directive.

“If we can get a decent amount of countries using this tool and growing it and nurturing it, we’re putting government in a better position to deal with automation, privacy, and all of these issues you’re seeing,” says Benay. “It’s very hard to be a medium size country in the world right now. With this tool out in the open, other countries can rally to it and begin using it. It’s a world first in a public sector government’s approach to AI.”

“It’s probably our most transparent piece of policy that we’ve ever developed for administrative policies in the Treasury Board Secretariat.”

At a more local level, the Government of Canada is asking all qualified AI vendors to make a public pledge that they will only work with departments that have completed an AIA. A few companies have already taken the pledge, including MindBridge AI, ThinkData Works, and CognitiveScale, and Benay’s team will formally send out letters within a few weeks.

Public introspection regarding ethics and AI is nothing new. In mid-2018, a coalition of human rights and technology organizations published The Toronto Declaration, asking everyone involved with AI processes to “keep our focus on how these technologies will affect individual human beings and human rights,” because “in a world of machine learning systems, who will bear accountability for harming human rights?” In 2017, Canadian AI experts backed a ban on killer robots, perhaps the most straightforward way to support ethical AI. France recently joined Canada in publicly committing to the ethical understanding of AI. Finally, some of the biggest Canadian AI firms, including Integrate.ai and Element AI, continually and vocally support the intersection of ethics and AI.

“I don’t know if everyone understands how big of an issue the ethical side of AI is when it comes to understanding fairness and bias,” Integrate.ai CEO and founder Steve Irvine told Techvibes earlier this year. “It’s not that robots will take over the world. Instead, it will be using datasets from historical data that a lot of companies have collected, where there will be an inherent bias of the past. Without adjusting for that, we risk perpetuating the stereotypes we’ve been trying to progress away from for decades.”

Canada’s publication of this directive is an important step towards the normalization of how deeply ingrained ethics and AI should be. With massive corporations like Facebook and Google constantly under scrutiny in terms of how they handle privacy and data—while at the same time leading the creation and dissemination of some of the most impactful technology in the world—it’s important to constantly consider how new advances in technology influence a regular citizen’s life.

“You’re seeing a lot of things in the papers around the mega-corporations and how they choose to do privacy or not do privacy,” says Benay. “Not that battle lines are being drawn around values, but there are deep and hard conversations to be had around this stuff, and all we’re trying to do is make sure we’re well-positioned nationally to make sure we’re capable of having those conversations.”