Why grounded, facility-specific AI is the only appropriate model for aged care
There is a version of AI adoption in aged care that looks like progress but is quietly dangerous. A nurse asks a general-purpose AI assistant about medication interactions. A care coordinator looks up a resident's consent requirements on ChatGPT. A team leader Googles a policy question at 11pm and gets an answer that sounds authoritative, but has nothing to do with their facility, their procedures, or the regulatory framework they are accountable under.
This is not a hypothetical. It is happening right now, across the sector. And as Australia's aged care system enters one of the most significant reform periods in its history, the gap between what general-purpose AI can offer and what aged care providers actually need has never been wider — or more consequential.
The reform context: more complexity, same workforce
The Strengthened Aged Care Quality Standards came into effect on 1 November 2025, alongside the new Aged Care Act 2024. The new standards are substantially more detailed and measurable than their predecessors — seven standards, 33 outcomes, and 137 discrete compliance actions that providers must be able to evidence at any point. This is not incremental change. It is a wholesale reshaping of what accountability in aged care looks like.
For providers, the regulatory burden has expanded dramatically. At the same time, the workforce challenges that have defined the sector for years have not eased. Staff are stretched. Information is scattered across multiple systems. And the expectation — from regulators, families, and residents themselves — is that care decisions are made quickly, correctly, and in accordance with current policy.
Into this environment, AI has arrived as a seemingly obvious solution. The promise is real: faster answers, reduced administrative burden, better-informed staff. But the way in which AI is deployed matters enormously. In aged care, deploying the wrong kind of AI does not just waste money. It can cause direct harm.
"All three government consultations acknowledge that the use of AI in healthcare is 'high risk', due to the direct impact on patient safety."
— MinterEllison, summarising the Australian Government's Department of Health, TGA, and DISR consultations, 2024
The problem with general-purpose AI in a care setting
When a staff member uses a general AI tool — whether that is ChatGPT, a consumer AI assistant, or any model trained on publicly available internet data — they receive an answer grounded in the internet at large. That answer may be statistically plausible. It may even sound authoritative. But it has no relationship to:
- The specific policies and procedures of the provider
- The current Strengthened Quality Standards and their specific outcome requirements
- The individual resident's care plan, history, and documented preferences
- The provider's obligations under the Serious Incident Response Scheme (SIRS)
In a sector where incorrect information can mean a missed medication, a failure to report a notifiable incident, or a breach of resident rights, the confidence with which a general AI delivers a wrong answer is not a minor issue. It is a governance failure waiting to happen.
The Australian Commission on Safety and Quality in Health Care raised this directly in its AI Clinical Use Guide, released in August 2025. The Guide warns that AI tools can produce outputs that are biased or simply incorrect — a phenomenon known as 'hallucination' — and that these errors can lead to inappropriate treatment decisions, overlooked patient information, or clinical risk where the AI's training data does not represent the actual patient cohort.
"There are documented cases where AI tools have disadvantaged certain patient groups due to underrepresentation in the training data. This bias raises important ethical and equity concerns, along with potential clinical risks."
— Australian Commission on Safety and Quality in Health Care, AI Clinical Use Guide, August 2025
The Commission was unequivocal in its guidance: clinicians must critically evaluate AI outputs and recognise that these tools support, but do not replace, clinical judgment. That principle is sound — but it assumes the AI output is at least anchored to the right information domain. A tool trained on the open internet is not anchored to aged care policy, Australian regulation, or a specific facility's procedures at all.
The privacy dimension: a risk most providers are underestimating
Beyond clinical accuracy, there is a second layer of risk that aged care providers must urgently confront: data privacy.
When staff enter resident information — names, diagnoses, behaviours, medication histories — into a general-purpose AI tool, that data does not stay inside the organisation. Depending on the platform and its terms of service, it may be used to train future models, stored on overseas servers, or exposed in ways entirely inconsistent with the provider's obligations under the Privacy Act 1988 and the Aged Care Act 2024.
The Australian Government's 2024 amendments to the Privacy Act introduced new transparency obligations around automated decision-making — including requirements that covered entities disclose, within their privacy policies, the nature of decisions made with computer assistance that could substantially affect individuals' rights or interests. In aged care, where residents may have limited capacity to understand or consent to AI use, this obligation carries particular weight.
"Aged care providers handle some of Australia's most sensitive personal information — health records, behavioural data, and intimate details of daily living. Unlike other sectors, aged care serves individuals who may have limited capacity to understand or consent to AI use, making privacy protection both a legal obligation and an ethical imperative."
— AusMed, AI and Privacy in Aged Care, 2024
The risk is not theoretical. When ChatGPT emerged in late 2022, many aged care providers discovered their existing privacy policies had no framework for governing generative AI usage by staff at all. Those without adaptive governance structures were left scrambling. As AI tools continue to proliferate, the providers who build deliberate, policy-grounded frameworks will be significantly better positioned — both in terms of regulatory compliance and resident trust.
What 'grounded' AI actually means — and why it matters
The concept of 'grounded' AI refers to a model that draws its answers from a defined, curated, and controlled set of sources — rather than the open internet. In a healthcare or aged care context, this means:
- Answers are derived from the facility's own policies and procedures
- Responses reference the specific regulatory standards that apply to the provider
- Resident data remains within the secure, controlled environment — it is never sent to a general model
- The AI cannot generate an answer on a topic where it has no authoritative source — it will not speculate
This is not a modest distinction. It is the difference between an AI that reflects your organisation and one that reflects the internet. For a nurse asking about a medication protocol at 3am, those two things are not equivalent.
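The refusal behaviour described above — answer only from approved sources, never speculate — can be illustrated with a deliberately minimal sketch. Everything here is hypothetical: the `APPROVED_SOURCES` entries, the keyword matching, and the `answer_query` function are illustrative stand-ins, not the retrieval machinery of any real product.

```python
# Illustrative sketch of 'grounded' question answering: the assistant may only
# answer from an approved knowledge base, and refuses rather than guessing.
# All names and policy text below are hypothetical placeholders.

APPROVED_SOURCES = {
    "medication-policy": "Medications are administered only by authorised staff "
                         "in accordance with the facility's Medication Management Policy.",
    "sirs-reporting": "Priority 1 incidents must be reported to the Commission "
                      "within 24 hours under the SIRS.",
}

def answer_query(question: str) -> str:
    """Return an answer only if an approved source matches; never speculate."""
    matches = [
        text for key, text in APPROVED_SOURCES.items()
        if any(word in question.lower() for word in key.split("-"))
    ]
    if not matches:
        # No authoritative source: refuse instead of producing a plausible guess.
        return "No approved source covers this question. Please escalate to your supervisor."
    return matches[0]

print(answer_query("What is the SIRS reporting timeframe?"))
print(answer_query("Can I give paracetamol for a headache?"))
```

The second query deliberately falls outside the knowledge base, and the sketch refuses — the opposite of a general-purpose model, which would produce a confident answer either way.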
The case for grounded AI is further reinforced by the Australian Government's own policy direction. The Department of Industry, Science and Resources' September 2024 mandatory guardrails proposals paper identified six principal risks unique to AI: autonomy, general cognitive capabilities, adaptability, speed and scale, opacity, and high realism. All six of these risks are significantly amplified when AI is deployed without grounding — when it can answer questions about anything, from any source, with equal apparent confidence.
"In healthcare settings, AI must be used safely with the right regulatory approach. The proposed mandatory guardrails focused on accountability across the AI lifecycle, privacy and data quality, and human oversight and contestability."
— Department of Industry, Science and Resources, Mandatory Guardrails for AI in High-Risk Settings, September 2024
The workforce reality: staff need confidence, not caution
There is a practical workforce argument for grounded AI that sits alongside the governance and privacy case. At the HIC 2024 seminar, 66% of healthcare leaders cited staff readiness as their single biggest barrier to AI adoption. Across the broader health sector, only 30% of Australians say they trust AI more than they fear it.
In aged care, these trust barriers are even more pronounced. The sector serves residents who are among the most vulnerable members of the community. The workforce is acutely aware of this. Staff are not resistant to better tools — they are resistant to tools they cannot trust. And a tool that might give them an answer from the internet, rather than from their organisation's actual policy, is a tool that rational, responsible staff will learn to distrust and work around.
The design principle for AI in aged care should therefore not be 'powerful' — it should be 'trustworthy.' Staff need to know, with confidence, that when they ask a question, the answer is grounded in something real: their policies, their standards, their obligations. When that confidence exists, adoption follows. When it doesn't, the tool becomes a liability rather than an asset.
What good looks like: the design principles for responsible AI in aged care
Drawing on the regulatory landscape, the government's own consultation findings, and the practical realities of the sector, a set of clear design principles emerges for AI that is genuinely fit for purpose in aged care:
1. Source control
The AI must draw exclusively from defined, approved sources — the provider's policies, the Quality Standards, and integrated care data. It must not access the open internet or generate answers beyond its authorised knowledge base.
2. Data containment
Residents' personally identifiable information (PII) must never leave the controlled environment. All processing should occur locally, with PII automatically stripped or controlled before any query is processed.
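A simplified sketch of what "stripped before any query is processed" can mean in practice. The resident register, placeholder tokens, and date pattern here are assumptions for illustration only — production de-identification is considerably more sophisticated than this.

```python
import re

# Minimal sketch of PII redaction before a query leaves the local environment,
# assuming a known (hypothetical) resident register. Real deployments need far
# more robust de-identification than name matching and a date regex.

RESIDENT_NAMES = ["Margaret Chen", "John Abara"]  # hypothetical register

def redact_query(query: str) -> str:
    """Replace known resident names and date-like strings with placeholders."""
    for name in RESIDENT_NAMES:
        query = query.replace(name, "[RESIDENT]")
    # Redact date-like strings (e.g. 12/03/1941) that may be dates of birth.
    query = re.sub(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]", query)
    return query

print(redact_query("Does Margaret Chen (DOB 12/03/1941) have a falls plan?"))
# → "Does [RESIDENT] (DOB [DATE]) have a falls plan?"
```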
3. Role-based access
Different staff have different access rights to resident and policy information. The AI must reflect and enforce these boundaries — a care worker should not be able to inadvertently access clinical data they are not authorised to see, even via a conversational query.
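The enforcement logic is conceptually simple: a query is only answerable if the asker's role is explicitly permitted to see that category of information, with denial as the default. The roles and topic labels below are hypothetical, not drawn from any real system.

```python
# Hedged sketch of role-based access applied to conversational queries.
# Role names and topic labels are illustrative placeholders.

ROLE_PERMISSIONS = {
    "care_worker": {"care_plan_summary", "policy"},
    "registered_nurse": {"care_plan_summary", "policy", "clinical_notes", "medications"},
}

def can_answer(role: str, topic: str) -> bool:
    """Deny by default: answer only if the role is explicitly permitted the topic."""
    return topic in ROLE_PERMISSIONS.get(role, set())

assert can_answer("registered_nurse", "medications")
assert not can_answer("care_worker", "clinical_notes")  # denied, even via chat
assert not can_answer("visitor", "policy")              # unknown role: denied
```

The point of the deny-by-default design is that a conversational interface cannot become a side door around permissions that already exist in the underlying systems.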
4. Auditability
Every interaction should be traceable. Regulators are moving toward expecting evidence of continuous compliance, not just audit-time snapshots. An AI system that logs what was asked, what was answered, and on what basis creates an audit trail that supports accountability.
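The "what was asked, what was answered, and on what basis" requirement maps naturally onto a structured log entry per interaction. The field names and example values below are illustrative assumptions, not a prescribed schema.

```python
import datetime
import json

# Sketch of an auditable interaction log: every query records who asked, what
# was answered, and which approved source grounded the answer. Field names
# and example values are illustrative only.

AUDIT_LOG = []

def log_interaction(user: str, question: str, answer: str, source: str) -> dict:
    """Append one traceable record per AI interaction."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,          # who asked
        "question": question,  # what was asked
        "answer": answer,      # what was answered
        "source": source,      # the approved document that grounded the answer
    }
    AUDIT_LOG.append(entry)
    return entry

entry = log_interaction(
    user="rn.smith",
    question="What is the SIRS reporting timeframe?",
    answer="Priority 1 incidents must be reported within 24 hours.",
    source="SIRS Policy (internal)",
)
print(json.dumps(entry, indent=2))
```

A log in this shape is what turns an AI assistant from a black box into evidence: at audit time, the provider can show not just that an answer was given, but which authorised source it came from.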
5. Standards alignment
The AI should be configured to the current Strengthened Quality Standards, with the ability to update as the regulatory framework evolves. It should not require a complete system overhaul each time the Standards are updated.
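One way to make standards alignment survive regulatory change is to keep the mapping between outcomes and internal policies as versioned configuration rather than code, so an update swaps the data and leaves the system intact. The outcome identifiers and document names below are placeholders, not the actual numbering of the Strengthened Quality Standards.

```python
# Illustrative sketch: the standards-to-policy mapping lives in versioned data,
# so a regulatory update replaces the configuration, not the system. Outcome
# identifiers and document names are hypothetical placeholders.

STANDARDS_CONFIG = {
    "version": "strengthened-2025",
    "outcomes": {
        "2.9": ["Medication Management Policy"],
        "5.4": ["Clinical Deterioration Procedure"],
    },
}

def policies_for_outcome(config: dict, outcome: str) -> list:
    """Look up which internal policies evidence a given outcome."""
    return config["outcomes"].get(outcome, [])

assert policies_for_outcome(STANDARDS_CONFIG, "2.9") == ["Medication Management Policy"]
assert policies_for_outcome(STANDARDS_CONFIG, "9.9") == []  # unmapped outcome
```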
"The Commission pursues its goals through effective risk-based and proportionate regulation. Providers are required to report compliance data on a quarterly basis."
— Aged Care Quality and Safety Commissioner, January 2025
The governance imperative: from the boardroom to the bedside
For boards and executive teams, the AI question is ultimately a governance question. As the new Aged Care Act places greater accountability on governing bodies — not just operational management — the tools that staff use to make decisions become a governance matter.
A board that approves the use of general-purpose AI in a clinical or compliance context without understanding what data is leaving the organisation, or what sources the AI is drawing from, is a board that has not discharged its governance obligations. This is not alarmism. The Australian Government has explicitly identified AI in healthcare as high-risk, has funded a multi-agency review of the regulatory framework, and has signalled its intention to introduce mandatory guardrails.
Providers who get ahead of this — who build deliberate, grounded, policy-aligned AI into their operations now — will not only be better positioned for compliance. They will also be better positioned for the audit environment that the Strengthened Quality Standards are creating: one that expects continuous improvement, documented evidence, and genuine accountability from the boardroom to the bedside.
Conclusion: The question is not whether to use AI. It's which AI.
Australia's aged care sector is under more regulatory scrutiny, more operational pressure, and more public expectation than at any point in its history. AI has a genuine role to play in helping providers meet these demands — but only if it is designed for the task.
General-purpose AI trained on the internet is not that tool. It cannot know your policies. It cannot reflect your procedures. It cannot protect your residents' data. And it cannot be held accountable when it gets something wrong.
Grounded, facility-specific AI — built on a controlled knowledge base, aligned to the Quality Standards, and operating within a secure environment — is a fundamentally different proposition. It is not more restrictive. It is more trustworthy. And in aged care, trustworthiness is not a nice-to-have. It is the whole point.
About Governa.ai
Governa.ai is an AI-powered governance and compliance platform purpose-built for Australian aged care providers. Its AI assistant, Norma, draws exclusively from each provider's own policies and the Strengthened Aged Care Quality Standards — never from the open internet. Governa.ai supports residential, community, and home care providers to move from reactive compliance to continuous, evidence-based governance. Visit governa.ai to learn more.