Anthropic has signed a memorandum of understanding with the Australian government, formalizing its intent to cooperate on artificial intelligence safety and support the goals of Australia's National AI Plan. The signing took place in Canberra, where Anthropic CEO Dario Amodei met personally with Prime Minister Anthony Albanese.
Alongside the signing, AU$3 million in funding was announced for partnerships with leading Australian scientific institutions. The funds will be directed toward applying Claude in two areas: improving the diagnosis and treatment of diseases, and supporting education and research in computer science.
What's in the Agreement
A central part of the memorandum is cooperation with the Australian AI Safety Institute. Anthropic plans to share research findings on the capabilities of new models and their associated risks, participate in joint safety evaluations, and collaborate with Australian academic institutions.
Simply put, this is similar to Anthropic's arrangements with AI safety institutes in the US, UK, and Japan: the developer provides governments with advance access to its models and technical information, allowing state bodies to form their own understanding of the development trajectory of advanced AI systems.
Additionally, Anthropic will provide the Australian government with data from its Economic Index – the company's research initiative that tracks how AI is being integrated into the economy, its impact, and its implications for the labor market. The initial focus will be on sectors critical to the Australian economy: natural resources, agriculture, healthcare, and financial services.
Interestingly, data from the same index shows that Australians are already using Claude for a wider range of tasks than users in most other countries. Among English-speaking nations, Australia leads in the diversity of applications – from management and business tasks to life sciences and everyday needs. Moreover, users often collaborate with the model, creating complex prompts to solve professional challenges.
A separate point in the agreement is the intention to explore investment opportunities in data center infrastructure and energy across the country – in line with the Australian government's recently announced expectations in this area.
"Australia's investment in AI safety makes it a natural partner for the responsible development of the technology. This memorandum formalizes our collaboration," noted Dario Amodei, adding that he is particularly inspired by the work of Australian research institutions using Claude in disease diagnosis and treatment.
Science and Medicine: Where the Three Million Will Go
Anthropic is extending its "AI for Science" program to Australia. An initial package – AU$3 million in Claude API credits – has been distributed among four institutions.
The Australian National University (ANU) will use Claude in two areas. A multidisciplinary team from the John Curtin School of Medical Research is analyzing genetic sequencing data to aid in the study of rare diseases. Concurrently, the ANU School of Computing is integrating Claude into new courses to train the next generation of Australian developers and scientists.
The Garvan Institute of Medical Research plans to accelerate genomic discoveries through two major projects. The first, in collaboration with the University of New South Wales, aims to create systems that translate human genetic variation data into practical knowledge about disease mechanisms at the cellular level – with the goal of finding new treatments. The second project, carried out with the Centre for Population Genomics (a partnership between Garvan and the Murdoch Children's Research Institute), is designed to automate complex genetic analysis, which is currently the main "bottleneck" in diagnosing rare genetic diseases in children.
The Murdoch Children's Research Institute, in turn, will apply Claude to its stem cell program – to improve the identification of therapeutic targets for treating heart disease in children.
The Curtin University Institute for Data Science – Australia's largest university-based data science research center – will use Claude to scale collaboration with the academic community and in research projects spanning healthcare, humanities, business, law, natural sciences, and engineering.
Startups Are Also on Board
Separately, Australia's first AI API credit program for venture-backed deep tech startups was announced. Participants working in drug discovery, materials science, climate modeling, and medical diagnostics can receive up to US$50,000 (approximately AU$72,000) in Claude credits, as well as access to resources and community support.
Anthropic has framed its visit to Australia as the beginning of a long-term collaboration and investment in the broader Asia-Pacific region. The company plans to open an office in Sydney soon, with its local team and leadership to be announced.
Essentially, this story is about how a major AI developer is building partnerships with a government not in a "we'll do it all ourselves" format, but through collaborative work: sharing knowledge, conducting joint risk assessments, and supporting local science. Whether this model proves effective in practice – only time will tell. But the approach itself, where AI safety agreements are formalized at the government level and backed by specific research projects, seems to be a solidifying trend in the industry.