AI, Universal Design and Government: getting it right from the start

 

Artificial intelligence is already shaping how government communicates, designs services, analyses information and supports decision-making. That creates real opportunities, but it also creates real risks.

If AI is introduced without enough thought, it can make services harder to access, harder to understand, less transparent and less fair. It can create new barriers for people with disability, amplify bias, erode privacy and reduce trust.

A universal design approach helps shift the conversation. Instead of asking only whether AI is efficient or innovative, it asks: who might be excluded, who might be harmed, and what needs to be built in from the beginning so the system works for a wider range of people?

For Victorian government organisations, that approach also supports compliance with existing obligations around accessibility, discrimination, privacy, human rights and accountable public administration. Current Victorian guidance on generative AI in the public sector, Australian Government policy on responsible AI use in government, and privacy guidance from the Office of the Australian Information Commissioner (OAIC) all point in the same direction: AI should be used carefully, transparently and with appropriate safeguards.

 

Start with the job the AI is actually doing

The first question is not “Can we use AI?” but “What exactly are we asking it to do?”

Some uses are relatively low risk. Others are not.

AI should be approached much more cautiously where it:

  • influences access to services
  • shapes eligibility, triage or priority decisions
  • affects complaints handling or compliance responses
  • is used in high-stakes public communication
  • deals with personal or sensitive information
  • interacts directly with the public in ways that may be misunderstood as authoritative

The Australian Government’s policy for responsible AI use in government takes a risk-based approach and focuses on AI at the use case level, not just the tool level. That is a useful mindset for local and state government too.

A good starting point is:

  • be clear about the purpose
  • be clear about the limits
  • be clear about who is accountable
  • be clear about where human judgement must remain in place
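The use-case-level, risk-based mindset above can be sketched as a simple screening checklist. The flags, tiers and wording below are illustrative assumptions for demonstration, not an official assessment framework.

```python
# Illustrative sketch of a use-case-level AI risk screen.
# The flags and tiers are assumptions, not an official framework.

HIGHER_RISK_FLAGS = {
    "influences_service_access",
    "shapes_eligibility_or_triage",
    "affects_complaints_or_compliance",
    "high_stakes_public_communication",
    "handles_personal_or_sensitive_info",
    "public_facing_may_seem_authoritative",
}

def screen_use_case(flags: set[str]) -> str:
    """Return an indicative risk tier for a proposed AI use case."""
    raised = flags & HIGHER_RISK_FLAGS
    if not raised:
        return "lower risk: document purpose, limits and accountability"
    if len(raised) == 1:
        return "elevated risk: add human review and monitoring"
    return "higher risk: formal assessment and ongoing human oversight"

# A drafting task raises no higher-risk flags; a triage tool raises two.
print(screen_use_case({"drafting_internal_notes"}))
print(screen_use_case({"shapes_eligibility_or_triage",
                       "handles_personal_or_sensitive_info"}))
```

The point of a sketch like this is not automation of the judgement, but making the screening questions explicit and repeatable before a tool is adopted.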

 

Do not force people into AI-only channels

This is one of the most important universal design principles for AI in government.

AI can help support service delivery, but it should not become the only door in. People need different ways to access information and support. Some people will prefer or need a human conversation. Some will need a phone option. Some may struggle with written prompts, chat interfaces or sensory load, or may lack digital confidence or trust in automated systems.

That means:

  • AI should not be the only way to access a service
  • people should be able to move easily from AI to a human
  • public-facing AI should use plain English and predictable language
  • interfaces should work with assistive technology
  • outputs should be understandable, not just technically correct

This approach supports both accessibility expectations and anti-discrimination obligations. In Victoria, the Equal Opportunity Act 2010 protects people from discrimination in public life, and disability discrimination protections also apply more broadly.

 

Accessibility is not just about the interface

It is possible for an AI tool to sit inside an accessible website and still produce inaccessible results.

That is why accessibility has to be considered at two levels:

  • the interface people use
  • the content or decisions the AI produces

For example, an AI assistant may still create barriers if it:

  • produces overly complex or verbose answers
  • uses ambiguous or bureaucratic language
  • summarises inaccurately
  • misses important context for people with disability
  • generates inaccessible documents or image-heavy outputs without alternatives
  • responds poorly to people who communicate differently

For AI used in digital services, a good baseline is to make sure the surrounding experience aligns with current digital accessibility standards such as WCAG 2.2, while also recognising that technical conformance alone is not enough.

 

Bias and unfairness need active attention

AI can reflect and amplify the patterns, assumptions and exclusions already present in data, systems and institutions.

That matters especially in government, where even small biases can have real consequences for people’s access to services, confidence in institutions and ability to participate equally.

A disability-inclusive approach means asking:

  • could this tool disadvantage people with disability?
  • does it work differently for different groups?
  • what happens to people with low literacy, cognitive disability or limited English?
  • are there compounded impacts for people at the intersection of disability, race, poverty, age or gender?
  • who is most likely to be misread, deprioritised or excluded?

This is not something that can be solved by good intentions alone. It needs testing, monitoring and review. It also needs humility. A system can look efficient and still be unfair.

 

Privacy matters even more with AI

One of the clearest messages in current Australian guidance is that organisations need to be extremely careful about privacy when using commercially available AI products.

OAIC’s guidance is explicit that privacy obligations still apply when AI products involve personal information, and its practical materials warn against putting personal or sensitive information into publicly available generative AI tools because of the significant privacy risks involved.

In practice, that means government staff should be careful about:

  • pasting identifiable personal information into public AI tools
  • using AI to summarise case notes or sensitive correspondence without approval
  • assuming enterprise tools are risk-free without checking settings and controls
  • using AI in ways that create new inferred information about a person
  • failing to understand what a tool stores, retains or shares
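One practical precaution behind the first point above is a simple stop-and-check step before text leaves an official system. The sketch below is a naive pattern-based flag for obviously identifiable details; the patterns are illustrative assumptions, and matching like this cannot reliably detect personal information, so it is no substitute for approved tools, policy and human judgement.

```python
# Naive pre-check for obviously identifiable details before text is
# pasted into an external AI tool. Illustrative only: pattern matching
# cannot reliably detect personal information.

import re

PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone-like number": re.compile(r"\b(?:\+?61|0)[\d\s-]{8,}\b"),
}

def flag_possible_personal_info(text: str) -> list[str]:
    """Return the names of patterns that matched, as a prompt to stop and check."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

hits = flag_possible_personal_info(
    "Please summarise the complaint from jane.citizen@example.com, ph 0412 345 678."
)
print(hits)
```

A check like this is a reminder, not a control: anything it flags should be removed or handled through an approved channel, and anything it misses is still the author's responsibility.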

For Victorian public sector organisations, current state guidance and resources from the Office of the Victorian Information Commissioner (OVIC) also stress the need for privacy, security and information management controls, especially for enterprise AI tools operating inside official systems.

 

Procurement is one of the biggest leverage points

A lot of AI risk enters through procurement.

If accessibility, privacy, transparency and human oversight are not built into contracts, briefs and vendor questions, they often become much harder to fix later.

When assessing AI-enabled products or services, it helps to ask:

  • how does the vendor address accessibility?
  • what testing has been done with disabled users?
  • what are the known limitations or failure modes?
  • what data is used, stored or retained?
  • can outputs be audited or challenged?
  • can the system be overridden by staff?
  • can the organisation exit the tool if problems emerge?
  • does the tool support recordkeeping, accessibility and review requirements?

This approach is consistent with both the Australian Government’s responsible AI policy and existing privacy guidance on selecting AI products.

 

Human oversight still matters

Government cannot outsource responsibility to a model.

Even when AI is used to draft, summarise, classify, prioritise or recommend, there still needs to be clear human accountability for:

  • what the tool is used for
  • how outputs are checked
  • when staff override or reject outputs
  • how errors are corrected
  • how complaints or challenges are handled
  • when the tool should not be used at all

The Victorian Government’s current guidance for the Victorian Public Service (VPS) emphasises safe and responsible use of generative AI for official work purposes, while the Australian Government policy emphasises transparency, accountability and risk-based oversight. That direction is important because trust in government depends on people knowing that decisions remain reviewable and contestable.

 

Testing should include real users, not just technical checks

An AI system can pass internal testing and still fail badly in practice.

That is why testing should include:

  • disabled users
  • people using assistive technology
  • people with different communication styles
  • people with low digital confidence
  • diverse real-world scenarios, not only ideal ones
  • review of both interface and output quality

This is where co-design and lived experience become especially valuable. Technical teams often miss barriers that users identify immediately.

A universal design approach means involving people early enough to shape:

  • procurement criteria
  • design decisions
  • safeguards
  • escalation pathways
  • content style
  • testing scenarios
  • evaluation questions

 

Good governance is practical, not abstract

Most organisations do not need a philosophical statement about AI nearly as much as they need practical internal controls.

Practical internal controls might include:

  • approved and prohibited use cases
  • guidance for staff on what can and cannot be entered into tools
  • accessibility expectations for any public-facing AI use
  • procurement requirements for AI-enabled software
  • review thresholds for higher-risk uses
  • incident reporting and complaints pathways
  • records of where AI is being used and for what purpose
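The last control above implies keeping a register of where AI is used and for what purpose. A minimal register entry might look something like the sketch below; the field names are assumptions about what such a record could capture, not a mandated schema.

```python
# Minimal sketch of an AI use-case register entry.
# Field names are illustrative assumptions, not a mandated schema.

from dataclasses import dataclass, field

@dataclass
class AIUseCaseEntry:
    name: str                     # what the tool is used for
    tool: str                     # product or model in use
    approved: bool                # within the approved use cases?
    public_facing: bool           # does it interact with the public?
    handles_personal_info: bool   # triggers privacy controls if True
    accountable_officer: str      # named human accountability
    review_date: str              # when the use will next be reviewed
    known_limits: list[str] = field(default_factory=list)

entry = AIUseCaseEntry(
    name="Drafting plain-English summaries of published reports",
    tool="enterprise generative AI assistant",
    approved=True,
    public_facing=False,
    handles_personal_info=False,
    accountable_officer="Communications Manager",
    review_date="2026-06-30",
    known_limits=["may summarise inaccurately; outputs checked by staff"],
)

print(entry.name, "- accountable:", entry.accountable_officer)
```

Even a simple record like this makes the other controls workable: you cannot set review thresholds, report incidents or answer complaints about AI use you have not recorded.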

Public Record Office Victoria also now points agencies to current AI guidance, including Victorian and Australian government resources, which reinforces that AI should be treated as a governance and recordkeeping issue as well as a technology issue.

 

A better question for government

The most useful question is probably not “Should government use AI?” but:

Under what conditions can AI be used in ways that improve public services without reducing access, fairness, accountability or trust?

A universal design lens helps answer that question by keeping the focus on people, barriers and practical safeguards.

A universal design approach asks government to design for:

  • more than one way in
  • more than one way to participate
  • more than one kind of user
  • more than one way to seek review or support

That is not just better design. It is also much more consistent with the responsibilities government already has under accessibility, discrimination, privacy and human rights frameworks.


Written by Virginia Richardson with the assistance of ChatGPT
