Poorly designed and implemented AI could reduce the performance of public services
Beyond demonstrating integrity and empathy in the design and delivery of AI-enabled public services, there are other factors that are important for trustworthiness – in particular, that the use of AI improves, or at least maintains, the performance of public services. Performance may decline when:
- Individuals or agencies lack competence in developing and using AI.
- Agencies fail to incorporate diverse experiences and perspectives in AI design.
- Agencies fail to establish lines of accountability for AI outcomes. In particular, where there is potential for adverse impacts on an individual, the reasoning underpinning the AI models used must be transparent and explainable, and appropriate human supervision must be exercised, including having a human involved in any adverse decision.[47] The individual must also have a practical means of challenging the outcome.
- Agencies fail to accommodate the nature of the population and its digital experience and connectivity. For example, people in very remote areas are more likely to have mobile-only internet access (32.6% of people in very remote areas, compared with 10.5% nationally), which can hinder their ability to effectively access some government services.[48]
Data quality is critical for ensuring that AI improves public service delivery
The quality of an AI model’s outputs is driven by the quality of its data, making it important that agencies create, manage, use and maintain high-quality, accurate and representative datasets, and apply robust data governance practices.[49] The performance of public service delivery will decline if the data used to train and deploy AI models is poor. As noted above, AI learns from patterns and biases present in data. If the data used to train AI is incomplete, biased or unrepresentative, the AI’s output can reflect those shortcomings.
Workshop participants also raised concerns around how data shared today might affect access to services tomorrow. For example, if individuals who had previously shared data chose to opt out of future sharing, would a system that had learned about them then continue to view them as the person they were prior to opting out? Similarly, they noted that if data collection systems were not comprehensive, which might be the case in the context of health, mental health and other serious life issues, then individuals might receive services or interventions that were not appropriate for their current circumstances.
New skills will be needed across the APS to steward the community through the transformations to public service delivery that will take place with AI
Digital technologies and data are transforming how the APS operates and delivers services. The APS will need a range of skills and capabilities if it is to adopt and use AI in ways that improve public services, and steward the community through the transformations to public service delivery that AI will bring about. Various cohorts of the APS will require upskilling and reskilling as their jobs evolve through digital transformation. For individuals, this includes skills like critical thinking and creativity to use AI effectively. At an agency level, it includes leadership on how AI will be used, as well as frameworks and processes for managing risks and data and for establishing responsibilities for decisions about AI.
Communication skills will also be critical: agencies will need to engage effectively with both technical and non-technical audiences about AI. The recent OAIC Australian Community Attitudes to Privacy Survey found that 71% of Australians consider it essential that people are told that AI is being used.[50] This means that frontline staff will need both knowledge of how AI is being used in their agency and the communication skills to ensure end users receive and experience a better level of service. In particular, frontline staff will need to be able to explain the output of AI systems to others in a clear, understandable and audience-appropriate way.
Insight 3.1: Trustworthiness will be eroded if artificial intelligence makes it harder for people to access and engage with public services
AI-enabled public service delivery offers opportunities to streamline access to services for many in the community. However, for others, providing information, querying decisions and navigating through to human support may become much harder. Given that users of public services are often at a vulnerable point in their lives, the ease of their interaction with a service may determine if they ultimately access the support and assistance they are entitled to, or even seek support again.
Insight 3.2: Experiencing bias and discrimination significantly erodes trustworthiness
The risks of bias and discrimination with AI are well-known, including the implications of using biased datasets to train AI and of failing to include diverse perspectives in the design process. Workshop participants argued that trustworthiness would be lost if the use of AI perpetuated unintentional biases and stereotypes. They also noted that some people, having experienced a biased process or outcome, are unlikely to access a service again.
However, experts in workshops noted that biases in AI can be detected and corrected, while the biases that exist in the current human-based system are sometimes impossible to detect and/or correct. Given this, failing to address biases and discrimination as they arise will significantly erode trustworthiness.