How To Use An AI Assurance Framework
by VideoTranslator Support | May 15, 2022

Today we are going to be looking at AI Assurance Frameworks! Clearly, this is a fantastic topic that we are all super interested in 😃

Ok - it’s not exactly riveting stuff, but it is important stuff!

What Is An AI Assurance Framework?

The good folks over at the NSW Department of Customer Service have done excellent work on this front. From their documentation:

“The AI Assurance Framework will help you design, build and use AI technology appropriately. The framework contains questions that you will need to answer at every stage of your project and while you are operating an AI system. If you cannot answer the questions, the framework will let you know how to get help.”

What is happening here is pretty clear when you look at the context. Like many governments worldwide, the NSW (New South Wales) Government understands that AI can be a very useful tool.

But how do you deploy AI solutions at scale for enterprise use cases? That is where the AI Assurance Framework comes into play.
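
To make the idea concrete, here is a minimal sketch (in Python) of how a stage-gated question checklist like this could be modelled. The stage names and questions are our own illustrative assumptions, not the framework's actual content:

```python
from dataclasses import dataclass, field

@dataclass
class AssuranceQuestion:
    text: str
    answer: str | None = None  # None means the team cannot answer it yet

@dataclass
class ProjectStage:
    name: str
    questions: list[AssuranceQuestion] = field(default_factory=list)

    def unanswered(self) -> list[AssuranceQuestion]:
        # These are the points where the framework directs you to get help.
        return [q for q in self.questions if q.answer is None]

# Illustrative stages and questions only -- not the framework's real wording.
stages = [
    ProjectStage("Design", [AssuranceQuestion("What problem is the AI system solving?")]),
    ProjectStage("Build", [AssuranceQuestion("How is training data quality assured?")]),
    ProjectStage("Operate", [AssuranceQuestion("How is model performance monitored?")]),
]

for stage in stages:
    for question in stage.unanswered():
        print(f"[{stage.name}] Needs help before proceeding: {question.text}")
```

The key point the sketch captures: every stage has questions attached, and any question you cannot answer becomes a prompt to seek help before moving on.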

Who Should Use An AI Assurance Framework?

AI Assurance Frameworks are generally tailored to a specific audience. In government technology circles, this means:

  • project teams who are using AI systems in their solutions
  • operational teams who are managing AI systems
  • senior officers who are accountable for the design and use of AI systems
  • internal assessors conducting agency self-assessments and the AI review body

An AI Assurance Framework is NOT required if both of the following hold (see the sketch after this list):

  • you are using an AI system that is a widely available commercial application, and
  • you are not customizing this AI system in any way or using it in a way other than as intended.
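
To internalise the exemption rule, note that both conditions must hold at once. Here is a minimal sketch encoding that logic; the function name and flags are our own, not part of the framework:

```python
def assurance_framework_required(
    widely_available_commercial_app: bool,
    customized_or_used_off_label: bool,
) -> bool:
    # Illustrative reading of the exemption: the framework is NOT required
    # only when the system is off-the-shelf AND used exactly as intended.
    exempt = widely_available_commercial_app and not customized_or_used_off_label
    return not exempt

# An off-the-shelf tool used as intended -> framework not required
print(assurance_framework_required(True, False))  # False
# The same tool customized with your own model -> framework applies
print(assurance_framework_required(True, True))   # True
```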

[Image: who should use an AI Assurance Framework]

What Resources Might You Need In Addition To The AI Assurance Framework?

Generally, you should have access to a number of resources that can be used in addition to the AI Assurance Framework.

These include:

  • AI Ethics Policy Framework
  • AI Strategy Document

Typically, an AI Strategy Document should already be in place, providing guidance on what the organisation/enterprise is looking to get out of AI. This is the starting point.

Next, once you have an idea about an AI project you are looking to pursue, that is when the AI Assurance Framework comes into play.

Finally, once you have a product/service that is being worked on, you need to think about whether the product or service addresses any AI Ethics issues that might have come up in the process.

Note that in the image below, AI Ethics is the last stop. This does not mean you don’t consider AI Ethics ramifications until the end of the project!

What it means is that you consider AI Ethics all the way through, and then use the last step to make sure the product is appropriate prior to large-scale production usage.

[Image: AI Strategy, AI Assurance Framework, and AI Ethics]
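
As a rough illustration of that sequence, here is a sketch of a lifecycle where ethics review runs at every stage, with a final sign-off gate at the end. The stage names and checks are assumptions for illustration only:

```python
# Illustrative lifecycle only: ethics review runs at every stage, and the
# final stage is the explicit gate before large-scale production usage.
STAGES = ["AI Strategy", "AI Assurance Framework", "Build & operate", "AI Ethics sign-off"]

def ethics_review_passes(stage: str) -> bool:
    # Placeholder for the continuous ethics review performed at each stage.
    print(f"Ethics considerations reviewed during: {stage}")
    return True

for stage in STAGES:
    if not ethics_review_passes(stage):
        raise RuntimeError(f"Ethics issue surfaced at '{stage}' -- pause the project")
print("Final gate passed: product cleared for production")
```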

Operational vs. Non-Operational AI

Operational AI

Operational AI systems are those that have a real-world effect. The purpose is to generate an action, either prompting a human to act, or the system acting by itself.

Operational AI systems often work in real-time (or near real-time) using a live environment for their source data.

Not all operational AI systems are high risk. An example of lower-risk operational AI is the digital information boards that show the time of arrival of the next bus.

Operational AI that uses real-time data to recommend or make a decision that adversely impacts a human will likely be considered High or Very high risk.

The question for operational AI is whether the AI delivers the best outcome for the citizen and provides key insight into decision-making; essentially, the question is around community benefit.
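
As a rough sketch of that rule of thumb (the risk labels and inputs are our own simplification, not the framework's actual scoring):

```python
def operational_risk_tier(uses_real_time_data: bool,
                          adversely_impacts_a_person: bool) -> str:
    # Rough rule of thumb from the discussion above: real-time decisions
    # that adversely affect a person are likely to land in the High or
    # Very high risk tiers.
    if uses_real_time_data and adversely_impacts_a_person:
        return "High / Very high"
    return "Lower (still assess consciously)"

# A bus arrival board: real-time data, no adverse impact on a person
print(operational_risk_tier(True, False))  # Lower (still assess consciously)
# Real-time decisioning about a person's access to a service
print(operational_risk_tier(True, True))   # High / Very high
```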

Non-operational AI

Non-operational AI systems do not use a live environment for their source data. Most frequently, they produce analysis and insight from historical data.

Non-operational AI typically represents a lower level of risk. However, the risk level needs to be carefully and consciously determined, especially where there is a possibility that AI insights and outputs may be used to influence important future policy positions.

For non-operational AI, the question is whether the use of AI includes safeguards to manage data bias and data quality risks, and follows best practice; essentially, the critical question is around fairness.
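
As an illustration of what a simple data-bias safeguard might look like, the sketch below flags groups that are under-represented in a historical dataset. The 10% threshold is an arbitrary assumption:

```python
from collections import Counter

def under_represented_groups(records: list[dict], attribute: str,
                             min_share: float = 0.10) -> list[str]:
    # Illustrative safeguard: flag any group whose share of the historical
    # data falls below min_share (the 10% threshold is arbitrary). Skewed
    # data can skew the insights that feed future policy positions.
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return [group for group, n in counts.items() if n / total < min_share]

records = [{"region": "metro"}] * 95 + [{"region": "rural"}] * 5
print(under_represented_groups(records, "region"))  # ['rural']
```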

AI Risk Factors - A Spectrum Of Risks!

In summary, an AI Assurance Framework is a really important part of delivering a complex AI project.

You need (a) an AI Strategy Guide, (b) an AI Assurance Framework, and (c) an AI Ethics Guide, and across these three it should be possible for you to be aware of, measure, and react to challenges in your enterprise AI project.

Once you have these, consider your risks across the AI spectrum (see the sketch after this list). This means:

  • Non-operational AI systems generally have a lower risk profile.
  • If it is a non-operational AI, is it fair?
  • Operational AI systems generally have a higher risk profile.
  • If it is an operational AI, does it have a community benefit?
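
Pulling the spectrum together, a first-pass triage could look something like this sketch; the labels and questions are our shorthand for the points above, not an official classification:

```python
def first_pass_triage(is_operational: bool) -> dict:
    # Our shorthand for the spectrum above -- not an official tool.
    if is_operational:
        return {"risk_profile": "generally higher",
                "key_question": "Does it have a community benefit?"}
    return {"risk_profile": "generally lower",
            "key_question": "Is it fair (data bias and quality managed)?"}

print(first_pass_triage(is_operational=True))
print(first_pass_triage(is_operational=False))
```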

Have a think about these issues! In future blog posts, we are looking to expand on these topics.

Let us know if you would like to hear about something specific: contact us at hello@videotranslator.ai and we will do our best to help you!
