Adopting AI in the Public Sector: Turning risks into opportunities through thoughtful design

AI has the potential to assist in the delivery of crucial government services and to improve decision-making systems.
Asher Zafar
Project Lead, Data Science
Sarah Villeneuve
Alumni, Policy Analyst
April 25, 2018

What can AI do for governments?

We’ve all heard about the potential for artificial intelligence (AI) and machine learning (ML) to make all sorts of tasks more efficient and effective—from driving to interpreting medical imaging. This same potential exists for improving government services and decision-making across many areas, including child protection, infrastructure maintenance, application adjudication, and decisions on whether or not to grant bail.

However, implementing AI within government to aid service delivery and decision-making carries the risk of transferring existing challenges related to transparency, systemic bias and privacy from current human-driven systems to new data stores and algorithms.

The lack of clarity within government as to how to manage these challenges has slowed the adoption of AI, but we consider these challenges worth tackling. If AI-enabled systems in government are designed and used appropriately, their unique capabilities could mitigate existing systemic shortcomings and deliver safer, fairer, and more efficient public services. That said, AI is not part of every solution, and governments must identify valuable, ethical, and feasible use cases. If the design and application of AI systems are not well thought out, they could exacerbate current systemic challenges.

The need for AI-enabled government

The Canadian federal government has shown leadership in preparing our economy to harness the benefits of AI, including funding for business-led innovation superclusters, research institutes, CIFAR’s Pan-Canadian AI Strategy, and skills training.

Canadian governments should also focus on their own unrealized potential to benefit from AI applications. While Canada compares well to its peers in digital government and in government AI readiness for public service delivery, it ranks 68th globally in government procurement of advanced technology in a survey of global executives, with inefficient government bureaucracy cited as the most problematic factor for doing business.

Governments around the world lag behind their private sector counterparts in AI adoption despite the existence of many socially beneficial applications. For example, microtargeting advertisements is a common practice, yet microtargeting government services to those who need them most is not.

There may be many causes of this lag, including risk aversion, status quo bias, rigid existing policies, organizational designs and cultures that are slow to change, and a talent base unsure of how to use such technologies safely. Political considerations may also be in tension with program and policy design considerations, making decisions about how best to deploy AI challenging.

Success factors for AI in government

Strategy and Implementation

AI-enabled government may sound premature in light of the nascent, but growing, state of digital government in Canada. However, given the rapidly increasing capability of AI systems to improve constituent outcomes, it is reasonable for governments to consider a proactive strategy. Organizations with proactive AI strategies, robust AI adoption, and strong digital capabilities achieve better performance.

Such a strategy should assess the public use cases of AI and deploy AI systems where they could lead to fairer, more effective, and more efficient services and decision-making. Governments should consider creating multidisciplinary teams that specialize in designing, procuring, and operationalizing AI systems, as leading governments have done with centralized digital services.

A proactive strategy must also address challenges and risks in the areas of AI system transparency, bias, and privacy. Below, we put forward some new considerations for governments to address these issues.

Transparency

While some AI models can be challenging to interpret, good design and systems can, in some cases, make AI-supported decisions more transparent and fair than those made by humans alone. Governments should consider leveraging their open government plans to release relevant, redacted open source code, data, and/or reports from operationalized AI systems, including performance information. Useful public information on an AI system could include the following items, depending on how the system is being applied and how sensitive it is:

  • The system’s purpose, governance, and intended outputs
  • Its performance against outcome metrics (e.g., accuracy, root-mean-squared error, cost function) for different constituencies
  • A description of the training data and how it was sourced or engineered
  • Documented open source code and methodology documents explaining how data fields and algorithms were used and how models were tuned
  • The distribution and accuracy of outputs among constituent groups of interest (e.g., by geography, demographics, gender, or income)
  • The importance of each variable to the model
  • The variables that contributed most to a particular decision for a citizen, which is possible even in complex neural networks (see the sketch after this list)
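
The last two items point to concrete reporting practices. As a minimal sketch, assuming a hypothetical adjudication dataset (applications.csv) with illustrative column names such as region and approved, and using scikit-learn, a team could report accuracy by constituent group and estimate each variable’s importance roughly as follows. This is an illustration under stated assumptions, not a prescribed implementation:

```python
# Minimal sketch: per-group performance reporting and variable importance.
# The dataset, file name, and column names are hypothetical illustrations.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("applications.csv")                  # hypothetical data
features = ["income", "household_size", "wait_days"]  # illustrative fields
X, y, groups = df[features], df["approved"], df["region"]

X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(
    X, y, groups, test_size=0.3, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Accuracy broken out by constituent group (here, geography)
for region, idx in X_te.groupby(g_te).groups.items():
    acc = accuracy_score(y_te.loc[idx], model.predict(X_te.loc[idx]))
    print(f"{region}: accuracy = {acc:.2f}")

# Global importance of each variable to the model, via permutation
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(features, imp.importances_mean):
    print(f"{name}: importance = {score:.3f}")
```

Per-decision explanations (the final item above) would require additional tooling, but the same principle applies: these outputs are computable artifacts that can be published alongside the system.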

This type of public reporting on performance and equity is not in place for many government services and decisions today, and AI can be an impetus to improve transparency. As more governments implement AI systems and release such information, we expect that standards and tools to make such reporting easier will emerge from a global ecosystem.

Bias and Equity

Algorithms risk perpetuating our existing biases if designed inappropriately (e.g., trained on data that reflects past discriminatory human decisions, or relying on variables that act as proxies for a marginalized group). Human decisions are influenced by conscious and unconscious biases that may not be traceable, which makes it challenging to identify how discrimination factors into any given human-made decision. While these same biases can manifest in the design and/or data used by AI systems, there they can be identified and mitigated, allowing AI to augment human judgment in appropriate applications.

AI systems open the door to embedding definitions of fairness and codified policies, such as anti-discrimination requirements, directly into their design, and they can report on or even optimize constituent outcomes within those requirements. Codifying and reporting on these requirements could become part of policy development, procurement, and consultation processes, and help make progress toward equity and efficiency more measurable and transparent.
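
As one illustration of what codifying a fairness requirement can look like, the sketch below checks a single, commonly cited heuristic: the disparate impact or “80% rule,” which compares selection rates across groups. The data, group labels, and threshold here are assumptions for illustration only; choosing the right fairness definition is itself a policy decision:

```python
# Minimal sketch: a codified fairness check using the "80% rule"
# (disparate impact ratio). Data and threshold are illustrative.
import numpy as np

def selection_rate(decisions: np.ndarray) -> float:
    """Share of positive (e.g., approved) decisions in a group."""
    return float(decisions.mean())

def disparate_impact(group_a: np.ndarray, group_b: np.ndarray) -> float:
    """Ratio of the lower group selection rate to the higher one."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high

# Hypothetical model decisions (1 = approved) for two demographic groups
group_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])
group_b = np.array([1, 0, 0, 1, 0, 0, 1, 0])

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common rule of thumb, not a legal standard
    print("flag for review: selection rates differ substantially")
```

A check like this could run automatically before any model update is deployed, with the results published as part of the transparency reporting described above.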

Privacy

AI-enabled systems can also make it possible to keep individual information more secure. Greater centralization of data stores does present a risk of misuse at scale; however, it also allows governments to professionalize and standardize cybersecurity and privacy practices. A mechanism such as differential privacy is more effective with larger data stores and can be considered as a means to safeguard integrity and foster public trust in government systems while enabling more access to data for evidence-based decision-making. Such an approach can shift AI systems from being privacy risks to being privacy solutions.
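
To make the idea concrete, the sketch below applies the Laplace mechanism, a textbook building block of differential privacy, to a hypothetical count query. The epsilon values and the query are illustrative assumptions; a real deployment would require careful sensitivity analysis and privacy budget accounting:

```python
# Minimal sketch: the Laplace mechanism for a differentially private count.
# Epsilon values and the query are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=42)

def private_count(true_count: int, epsilon: float) -> float:
    """Add Laplace noise with scale = sensitivity / epsilon.
    A count query has sensitivity 1: adding or removing one
    person changes the count by at most 1."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g., "how many residents of district X receive benefit Y?"
true_count = 1283
for eps in (0.1, 1.0, 10.0):  # smaller epsilon = stronger privacy, more noise
    print(f"epsilon={eps}: reported count = {private_count(true_count, eps):.1f}")
```

The trade-off is explicit and tunable: a smaller epsilon gives stronger privacy guarantees at the cost of noisier published statistics.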

AI can amplify or mitigate our status quo challenges with transparency, bias, and privacy in government services and decision-making. We have proposed initial ideas on how thoughtful governance and design can mitigate these challenges and improve upon the status quo; however, more thinking and more solutions are necessary for governments to ethically reap the benefits of AI for their constituents.

If you would like to learn more about this emerging technology, we encourage you to read our introduction to AI for policymakers. To participate in, or learn more about our work in this area, please get in touch at aipublicpolicy@brookfieldinstitute.ca.

For media enquiries, please contact Nina Rafeek Dow, Marketing + Communications Specialist at the Brookfield Institute for Innovation + Entrepreneurship.
