The Australian Skills Quality Authority (ASQA) is committed to ensuring that Artificial Intelligence (AI) enhances the effectiveness and quality of our services while maintaining the highest standards of ethics, safety and public trust. Our intent is to leverage AI technologies to:

  • support our broader goal of working together for better regulation
  • support our ongoing transformation of digital and data systems and capabilities
  • drive innovation and improved operational efficiency.

How we will use AI

ASQA is engaging with AI in a way that is responsible and prioritises human rights, the protection of personal and sensitive data, and keeps humans at the centre of decision making. We commit to the responsible use of AI, adhering to ethical and governance standards and policy directives, and ensuring our approach is transparent as we work to leverage opportunities in AI technologies.

Monitoring and accountability 

ASQA's AI initiatives are overseen by the AI Accountable Official, who also serves as ASQA's Chief Security Officer and Privacy Champion. ASQA is establishing governance arrangements to ensure that AI use is appropriately and ethically managed, governed and monitored, that benefits are realised, and that use complies with applicable Commonwealth laws and policies, including the requirements of the Policy for the responsible use of AI in government and the Protective Security Policy Framework.

Usage patterns

ASQA will employ AI in the following ways:

  • Workplace productivity: use of tools such as automatic document summarisation and virtual assistants to streamline workflows and improve efficiency
  • Analytics for insights: use of tools to identify, produce or understand insights within structured or unstructured materials via comprehensive data analysis. 

ASQA does not currently employ AI capability in public-facing services, including our regulatory functions or decision-making processes. While AI may be used to assist in, for example, automating routine tasks and streamlining internal processes, any final decisions or actions are made by a human. This ensures that there is always a human directly involved (Human in the Loop, or HITL) to review and validate any outcomes generated by AI systems, in order to maintain accountability and accuracy.

As ASQA deploys public-facing AI capability it will ensure its data and AI governance practices include privacy, ethics, and security assessments that aim to protect the public against any negative impacts of AI.

In coming years, ASQA plans to leverage AI technologies as part of its ongoing transformation of digital and data systems and capabilities.

Domains

ASQA's current AI focus is on workplace productivity, including tools to improve, automate and streamline routine tasks and to support data and analytical insights.

Ensuring responsible use

ASQA safeguards against risks and ensures the responsible use of AI by publishing an AI Transparency Statement at least annually, providing visibility into how AI is used and managed.

Contact information

For enquiries or feedback regarding ASQA's use of AI, please contact us at digitaltransformation@asqa.gov.au.

Review and updates

This AI Transparency Statement was last updated on 18 March 2026. It will be reviewed and updated annually or when significant changes occur.  
