In Australia, AI has largely been positioned as a way to stretch limited skills capacity in high-cost specialist roles, rather than as a headcount reduction tool. Despite widespread tech layoffs globally and locally over the past few years, Australia’s skills shortage has remained largely unchanged.
Cuts at large technology firms have often weakened the broader ecosystem, impacting smaller suppliers and subcontractors alongside the firms making redundancies. Rather than releasing excess capacity into the market, layoffs have tended to redistribute pressure across an already constrained talent base.
Skill shortages remain a key concern for CIOs, according to tech marketing firm Foundry’s State of the CIO survey. In the survey, 54 percent of CIOs cited skill and staff shortages as their number one challenge, one that diverts time away from innovative and strategic work. More than a third also reported difficulty finding candidates with the desired AI and machine learning experience.
Where AI reduces labour pressure
The rate of AI deployment or usage does not automatically translate into reduced labour pressure. In some cases, AI can fully take over a technical task, while in others it acts as an assistive tool that specialists still need to operate and oversee. While OpenAI and other foundation model builders claim AI will soon be capable of handling a broad range of tasks, for now there is a clearer playbook for which activities can be automated and which are better suited to augmentation.
The clearest gains tend to appear when AI removes repeatable, well-defined work from specialist teams, allowing scarce expertise to be deployed selectively rather than continuously. This is most common where demand is high and predictable, and where additional volume does not require proportional increases in supervision. Knowledge retrieval, internal support, and routine analytical requests are common examples.
At the other end of the spectrum, many AI tools do not reduce labour pressure at all, as they are used alongside specialists rather than in place of them. Examples include assistive coding tools that boost individual productivity, or AI embedded into existing workflows that still require specialist review. In these cases, AI can increase oversight, governance, and quality control requirements, raising rather than lowering demand for specialist input.
How to pressure test an AI deployment
For organisations looking to pressure test AI productivity in 2026, adoption and usage metrics alone are weak signals. As noted above, high product usage does not necessarily lead to lower labour pressure, particularly if the underlying delivery model remains unchanged.
More meaningful indicators are those that tie directly to economic outcomes. For AI deployments, these include a slowdown in headcount growth for specialist roles, reduced contractor or consulting spend, and team-wide productivity improvements such as shorter delivery times.
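As a rough illustration of how these indicators could be tracked, the sketch below turns them into simple before/after percentage deltas. All field names, figures, and thresholds here are hypothetical, not drawn from any particular survey or organisation:

```python
# Hypothetical sketch: expressing the economic indicators above as
# before/after percentage changes. All names and numbers are illustrative.

from dataclasses import dataclass


@dataclass
class Period:
    specialist_headcount: int   # FTEs in specialist roles
    contractor_spend: float     # contractor/consulting spend for the period
    avg_delivery_days: float    # mean delivery time for comparable work


def impact_indicators(before: Period, after: Period) -> dict:
    """Return percentage changes for each indicator.

    Negative values for spend and delivery time suggest improvement;
    flat or slowing headcount growth suggests AI is absorbing demand
    that would otherwise require new hires.
    """
    def pct_change(old: float, new: float) -> float:
        return round(100 * (new - old) / old, 1)

    return {
        "headcount_change_pct": pct_change(
            before.specialist_headcount, after.specialist_headcount),
        "contractor_spend_change_pct": pct_change(
            before.contractor_spend, after.contractor_spend),
        "delivery_time_change_pct": pct_change(
            before.avg_delivery_days, after.avg_delivery_days),
    }


# Illustrative example: headcount grew only slightly while contractor
# spend and delivery times fell.
before = Period(specialist_headcount=40, contractor_spend=1_200_000,
                avg_delivery_days=30)
after = Period(specialist_headcount=42, contractor_spend=900_000,
               avg_delivery_days=24)
print(impact_indicators(before, after))
# → {'headcount_change_pct': 5.0, 'contractor_spend_change_pct': -25.0,
#    'delivery_time_change_pct': -20.0}
```

The point of a structure like this is that the inputs are economic outcomes, not product usage statistics, so the numbers cannot improve simply because more staff are logging into an AI tool.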
This gap between adoption and impact is evident in McKinsey’s State of AI report. While 88 percent of organisations report regular AI use, nearly two-thirds say those deployments are still in an experimental or pilot phase. Despite widespread enthusiasm and a “see what sticks” approach to implementation, relatively few businesses are currently extracting material value. Gartner has also estimated that 30 percent of AI projects will be abandoned after proof of concept due to escalating costs and weak business value.
For organisations assessing whether their AI investments have delivered a meaningful return, the focus should not be on internal adoption alone, but on whether there has been an observable reduction in spending on specialists, through hiring or contracting, alongside measurable improvements in team productivity.