Whether you’re applying for an artificial intelligence-related job or not, it’s worth knowing about Australian government documents on the subject, as many areas of the APS are already using AI. Benefits of using AI are seen to include more efficient and accurate agency operations, better data analysis and evidence-based decisions, and improved service delivery for people and business.
The use of AI presents challenges that require a combination of technical, social and legal capabilities and expertise. These cut across core government functions such as data and technology governance, privacy, human rights, diversity and inclusion, ethics, cyber security, audit, intellectual property, risk management, digital investment and procurement.
While many recognise the potential benefits of AI, it’s worth considering the downside, from minor to major. The Guardian’s Global technology editor, Dan Milmo, wrote that the British-Canadian computer scientist, Prof Geoffrey Hinton, “often touted as a ‘godfather’ of artificial intelligence, has shortened the odds (10% to 20%) of AI wiping out humanity over the next three decades, warning the pace of change is much faster than expected”.
Historian Yuval Noah Harari says, “AI isn’t a tool – it’s an agent.” Providing a historical perspective on AI developments, he explains in his book Nexus: A Brief History of Information Networks from the Stone Age to AI: “The invention of AI is potentially more momentous than the invention of the telegraph, the printing press or even writing because AI is the first technology that is capable of making decisions and generating ideas by itself.” (p. 399) He cautions against adopting either a naïve or populist view of information. Rather we should “commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms.” (p. 404) Many of the documents listed below include ways to ensure these mechanisms are put in place.
To help with understanding what AI is, its applications, its risks and how the Australian government is responding to developments, here are 19 documents (2019–2024) from a range of agencies that cover policies, ethics, issues, guidelines, and opportunities. This knowledge can help inform applications and interviews.
APSC’s HR advice on AI applicants
As well as understanding the applications and risks of AI, it’s also worth being aware of HR advice from the APSC about its use in job applications. How to spot an AI applicant suggests clues that can raise suspicions about a possible AI applicant. One clue is unusually strong writing: while excellent writing skills are certainly possible, they can indicate “a level of perfection and polish that is uncommon and unrealistic for most human applicants”.
Then there’s the question of ethics in using AI for job applications. HR professionals need to be aware of the risks and challenges posed by AI applications, such as ensuring a fair and unbiased process, and verifying the authenticity of such applications.
Understanding AI
The Policy for the responsible use of AI in government uses the Organisation for Economic Co-operation and Development (OECD) definition:
“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
The CSIRO provides an explanation of AI. Resources include AI articles and details of its AI research.
The Australian Signals Directorate has produced AI resources for small, medium and large businesses and government organisations. These resources include four documents.
An Introduction to Artificial Intelligence, November 2023
The purpose of this document is to “provide readers with an understanding of what AI is and how it may impact the digital systems and services they use”.
Deploying AI Systems Securely, April 2024
This document is for organisations deploying and operating AI systems designed and developed by another entity.
Engaging with Artificial Intelligence, January 2024
This publication provides organisations with guidance on how to use AI systems securely. The paper summarises important threats related to AI systems and prompts organisations to consider steps they can take to engage with AI while managing risk.
Guidelines for Secure AI System Development, November 2023
This document is aimed primarily at providers of AI systems who are using models hosted by an organisation, or are using external application programming interfaces (APIs).
Key government AI documents 2019–2024
Commonwealth Ombudsman, Automated Decision-making Better Practice Guide, 2019
This guide states it “is intended to be a practical tool for agencies and includes a checklist designed to assist managers and project officers during the design and implementation of new automated systems, and with ongoing assurance processes once a system is operational. The principles in the guide apply whether an agency is building an automated system in-house or has contracted with an external provider to build the system.”
Australian Human Rights Commission: Whitepaper: Human Rights and Technology, 2019
Placing society at the core of AI development, this report analysed the opportunities, challenges and prospects that AI technologies present, and explored considerations such as workforce, education, human rights and our regulatory environment. Related publications are listed.
List of Critical Technologies in the National Interest, May 2023
Following a public consultation process, the List of Critical Technologies in the National Interest was developed. The list focuses on key enabling technology fields (with examples) that will have a high impact on the national interest. These fields represent technologies for which Australia:
- has research, intellectual or industrial strengths and capabilities to be supported and championed
- needs uninterrupted access through trusted supply chains
- must retain strategic capability or maintain awareness.
National framework for the assurance of artificial intelligence in government, A joint approach to safe and responsible AI by the Australian, state and territory governments, June 2024
Based on Australia’s AI Ethics Principles, this framework sets foundations for a nationally consistent approach to AI assurance. It will “assist governments to develop, procure and deploy AI in a safe and responsible way.”
The framework includes material on:
- Complementary initiatives, such as those from the OECD, the Bletchley Declaration on AI Safety, and the Seoul Declaration for safe, innovative and inclusive AI.
- Governance.
- Data governance.
- Procurement.
- Standards.
- Implementing AI ethics principles, with links to relevant documents.
- Additional resources.
Australian Council of Learned Academies (ACOLA), The Effective and Ethical Development of Artificial Intelligence, An opportunity to improve our wellbeing, July 2019
This project examined the potential that artificial intelligence (AI) technologies have in enhancing Australia’s wellbeing, lifting the economy, improving environmental sustainability and creating a more equitable, inclusive and fair society.
Interim guidance on generative AI for Government agencies, July 2023
The Digital Transformation Agency explains that: “Generative AI is technology that generates content such as text, images, audio and code in response to user prompts.” Interim guidance on its use was developed for staff within Commonwealth government agencies.
Policy for the responsible use of AI in government, September 2024
The policy’s aim is to “ensure that government plays a leadership role in embracing AI for the benefit of Australians while ensuring its safe, ethical and responsible use, in line with community expectations”.
The policy includes a section on risk assessment for AI. Potential risks listed include accessibility, unfair discrimination, privacy concerns, security concerns, intellectual property and reputational risk.
Standard for AI transparency statements
Under the Policy for the responsible use of AI in government, agencies must make publicly available a statement outlining their approach to AI adoption as directed by the Digital Transformation Agency (DTA).
AI transparency statements help agencies to meet policy aims by providing a foundational level of transparency on their use of AI. They publicly disclose:
- how AI is used and managed by the agency
- a commitment to safe and responsible use
- compliance with the policy.
Voluntary AI Safety Standard, September 2024
The Department of Industry, Science and Resources provides a general introduction to AI, with links to several documents.
The Voluntary AI Safety Standard gives practical guidance to all Australian organisations on how to safely and responsibly use and innovate with artificial intelligence (AI).
This publication includes:
- the 10 voluntary AI guardrails and how to use them
- examples of when to apply the guardrails
- how the standard was developed
- the standard’s foundational concepts and legal context.
It also includes definitions, links to tools and resources, and information on how AI interacts with other business guidance and regulations.
The Australian Responsible AI Index 2024
Sponsored by the National Artificial Intelligence Centre (NAIC), this report provides insights into how Australian organisations are adopting Responsible AI (RAI) practices.
The index categorises organisations into four maturity levels – Emerging, Developing, Implementing, and Leading – based on their adoption of key RAI practices such as fairness, accountability, transparency, explainability, and safety.
Department of Prime Minister and Cabinet, How might artificial intelligence affect the trustworthiness of public service delivery?, October 2023
This report identified that current trust in AI is low, and that developing community trust would be a key enabler of government adoption of AI technology.
Evaluation of whole-of-government trial into generative AI, October 2024
The Digital Transformation Agency (DTA) embarked on a whole-of-government trial into generative artificial intelligence (AI). It made Microsoft 365 Copilot (formerly Copilot for Microsoft 365) available to over 7,600 staff across 60+ government agencies. Insights from this trial evaluation will inform further adoption of generative AI across government. The results identified benefits and challenges.
The Artificial Intelligence Ethics Principles, October 2024
The Artificial Intelligence (AI) Ethics Principles guide businesses and governments to responsibly design, develop and implement AI. The eight principles are designed to ensure AI is safe, secure and reliable.
Select Committee on Adopting Artificial Intelligence (AI), November 2024
This committee inquired into and reported on the opportunities and impacts arising from the uptake of AI technologies in Australia.
The report is structured as follows:
- Chapter 1 – Introduction and background
- Chapter 2 – Regulating the AI industry in Australia
- Chapter 3 – Developing the AI industry in Australia
- Chapter 4 – Impacts of AI on industry, business and workers
- Chapter 5 – Automated decision-making
- Chapter 6 – Impacts of AI on the environment