The Speed Nobody Is Accounting For

Liz T.
February 1, 2026 · 8 min read
In brief

The professionals most at risk are not those with the fewest skills. They are those with mid-level knowledge work expertise and no differentiation beyond execution — the precise layer being automated. The window to reposition is not infinite. It is, in fact, the thing moving fastest.

The pattern no one wants to name

There is a comfortable story circulating in most career conversations right now. It goes something like this: AI is changing things, yes, but gradually. There will be time to adapt. The skills that have served professionals well will remain relevant, perhaps augmented, perhaps slightly reshaped — but fundamentally intact.

That story is not supported by what is actually happening.

The single most underestimated variable in the AI transition is not the capability of the technology. It is the speed at which adoption is compressing the runway between "emerging trend" and "baseline expectation." What took previous technological shifts a decade to normalise is taking this one two to three years. Entire categories of work are not being slowly transformed. They are being structurally repriced, and in some cases, structurally removed.

The professionals and organisations who will fare best over the next four years are not necessarily the most technically sophisticated. They are the ones who have correctly read the pace of change — and have stopped waiting for it to slow down before acting.

Two simultaneous forces: obsolescence and emergence

It is important to hold two things at once, because the conversation tends to collapse into one or the other.

On one side, a meaningful number of roles as currently constructed will not survive the decade in their present form. Not because the underlying human need disappears, but because the execution layer of those roles — the gathering, organising, summarising, routing, and processing of information — is being absorbed by intelligent systems at a pace that most job descriptions have not yet caught up with. The roles most exposed are not the ones people assume. It is not only data entry or basic customer service. It is mid-level knowledge work: the analyst who aggregates reports, the coordinator who manages information flow between teams, the generalist who adds value through breadth rather than depth.

On the other side — and this is equally important — new categories of work are forming that did not exist three years ago and will be fully established disciplines by 2030. The emergence is real. The opportunity is genuine. But it will not be distributed evenly. It will accrue to the professionals who have been paying attention to what the technology actually requires of humans, rather than to what it replaces.

The tool landscape: what changes and when

Understanding the career implications requires understanding how the tools themselves are evolving, because the shape of the technology determines the shape of the demand.

2026: The age of agents begins

In 2026, the dominant shift is from AI as a conversational assistant to AI as an autonomous actor. Systems can now plan multi-step tasks, call on external tools, delegate to other systems, and complete workflows without a human prompting each step. This is not a marginal upgrade. It is a structural change in what AI can be given responsibility for. The copilot that helped you draft an email two years ago has been succeeded by a system that can manage an entire communications workflow end to end — drafting, sending, following up, logging, and reporting — without a single human instruction beyond the initial objective.
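The shift from assistant to actor can be sketched as a plan-and-execute loop: one objective in, a sequence of tool calls out, with every step logged rather than prompted. The sketch below is purely illustrative — the `Agent` class, the tool names, and the fixed plan are hypothetical stand-ins, not any real product or framework, and a production system would generate its plan with a model rather than hard-code it.

```python
# Minimal sketch of an agentic loop: given one objective, the agent plans
# steps, dispatches each step to a "tool", and records an audit trail,
# with no human prompting between steps. All names are illustrative.

def draft(task):
    return f"drafted: {task}"

def send(task):
    return f"sent: {task}"

def follow_up(task):
    return f"followed up: {task}"

TOOLS = {"draft": draft, "send": send, "follow_up": follow_up}

class Agent:
    def __init__(self, tools):
        self.tools = tools
        self.log = []

    def plan(self, objective):
        # A real system would derive this plan with a language model;
        # here it is fixed to keep the sketch self-contained.
        return [("draft", objective), ("send", objective), ("follow_up", objective)]

    def run(self, objective):
        for tool_name, task in self.plan(objective):
            result = self.tools[tool_name](task)
            self.log.append(result)  # every step is logged for later audit
        return self.log

agent = Agent(TOOLS)
print(agent.run("quarterly update email"))
```

The point of the sketch is the shape, not the code: the human supplies one objective at the top, and responsibility for the intermediate steps has moved inside the loop.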

For professionals, the immediate implication is clear: any role whose primary value is coordinating information between systems is now operating on borrowed time.

2027–2028: Multi-agent networks and the death of SaaS as we know it

The next wave is arguably more consequential than the first, precisely because it is less visible to most working professionals.

Today, the average knowledge worker operates across a fragmented stack of software tools — a CRM, a project management platform, a communication suite, a document system, a reporting layer. Each tool has its own interface. Each requires its own fluency. Each creates its own data silo. The accumulated friction of navigating this stack is something organisations have simply accepted as the cost of doing business.

That cost is about to be eliminated — not by better software, but by agent networks that sit above the software entirely.

By 2027 and into 2028, coordinated networks of specialised AI agents will increasingly manage these stacks on behalf of organisations. One agent handles incoming data. Another routes and prioritises. Another synthesises and reports. Another flags anomalies and escalates. They do not need separate logins. They do not need training programmes. They do not forget the process documentation or go on leave.
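The division of labour described above — one agent ingesting, another prioritising, another synthesising, another escalating — amounts to a pipeline of specialised stages, each handing its output to the next. The following is a hypothetical sketch of that structure, assuming toy stand-ins for each agent; the stage names and the "urgent keyword" logic are invented for illustration only.

```python
# Hypothetical sketch of a multi-agent network as a staged pipeline:
# each specialised "agent" owns one stage and hands its output onward.
# Stage logic here is deliberately trivial stand-in code.

def ingest(items):
    # Incoming-data agent: normalise raw items into records.
    return [{"text": t, "urgent": "outage" in t} for t in items]

def prioritise(records):
    # Routing agent: urgent records move to the front of the queue.
    return sorted(records, key=lambda r: not r["urgent"])

def synthesise(records):
    # Reporting agent: condense the queue into a summary.
    top = records[0]["text"] if records else None
    return {"summary": f"{len(records)} items", "top": top}

def escalate(report):
    # Anomaly agent: flag the report if the top item looks urgent.
    report["escalated"] = report["top"] is not None and "outage" in report["top"]
    return report

PIPELINE = [ingest, prioritise, synthesise, escalate]

def run_network(items):
    data = items
    for stage in PIPELINE:
        data = stage(data)
    return data

print(run_network(["status report", "server outage in eu-west", "invoice query"]))
```

What matters is that no stage has an interface a human logs into: the coordination that once consumed human hours is the pipeline itself.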

The implications for SaaS as a business model are profound. When an intelligent agent layer can interface with any system through an API, the value of the interface itself — the thing most SaaS companies have been selling — collapses. What survives is the underlying data, the workflow logic, and the domain-specific intelligence embedded in the system. What does not survive is the human time spent navigating the interface, which is precisely what much of today's knowledge work consists of.

For careers, this shift raises a question that most professionals are not yet asking: if the tool landscape I have built my expertise around becomes largely abstracted away, what is my remaining contribution? The answer is not "learn the new tools." It is to develop the judgment, governance, and strategic capacity that sits above any tool — the layer that agent networks cannot yet provide.

This period will also see the beginning of a significant structural change in how organisations are staffed. Teams that once required four or five people to manage a function will be restructured around one or two people who direct, audit, and take accountability for an agent-driven process. The headcount reduction will not always be abrupt. In many cases it will happen through attrition — roles that are not backfilled when someone leaves, because the work has been quietly absorbed.

2029–2030: The intelligence layer becomes infrastructure

Toward the end of the decade, the more speculative but directionally consistent developments come into view. Synthetic data platforms will become foundational enterprise infrastructure — organisations generating the training data their AI systems need rather than waiting for it to accumulate organically. Digital twin environments will allow consequential decisions to be simulated before they are made, creating demand for specialists who can design, interpret, and govern these simulations. Ambient intelligence — AI embedded not in discrete tools but in the operating environment itself — will begin to reshape physical workplaces, not just digital ones.

These are not science fiction. They are the logical extension of trajectories already in motion, and they will create entirely new professional disciplines for those positioned to enter them.

The roles that will matter most by 2030

Several of the most valuable roles of 2030 do not yet have established job titles. That is precisely why they represent the greatest opportunity for professionals willing to position early.

Agent operations manager. The management of autonomous agent systems — overseeing their outputs, correcting their errors, setting their parameters, and taking accountability for their decisions — will become one of the most in-demand operational competencies across every industry. This is fundamentally a human judgment role, not a technical one. It requires understanding what these systems get wrong, not just what they get right.

AI behaviour auditor. The governance and audit of AI behaviour will emerge as a distinct professional discipline. As organisations deploy systems that make consequential decisions at scale, the need for structured accountability — who checks the system, how, and against what standard — will become a regulatory and reputational necessity, not an optional ethical exercise.

Human-AI team lead. The capacity to manage blended teams of human and automated contributors, assign work intelligently across both, and create coherent accountability will become a core management competency. The organisations building this capability now, before it is a formal discipline, will have a structural advantage in the talent market by 2028.

Synthetic data curator. As AI models become more domain-specific — in law, medicine, finance, logistics — the scarcest resource will not be compute or model capability. It will be high-quality, bias-checked, domain-relevant training data. The professionals who can build and steward that data will occupy a role analogous to what a skilled research librarian was to the pre-digital knowledge economy: underappreciated until the system cannot function without them.

AI interaction architect. Beyond prompt engineering — which is already becoming a commodity skill — there will be demand for professionals who design how AI agents behave across entire customer journeys or business processes. This is closer to systems design or service design than to anything currently in the technology job market, and it will draw heavily on disciplines that have not historically sat near the technology function.

Digital twin specialist. Professionals who can design, populate, and govern simulated environments — helping organisations test decisions, model risks, and explore scenarios before committing resources — will emerge as a distinct and valued specialism, particularly in industries where the cost of a wrong decision is high: infrastructure, healthcare, financial services, and urban planning.

What hiring will actually look like by 2030

The composition of hiring is already shifting in ways that will become pronounced by the end of the decade.

AI tool fluency will cease to be a differentiator and will function as a baseline filter — present in every job description, unremarkable in every candidate profile. The professional who leads with AI proficiency as a primary credential by 2028 will be in a position analogous to the professional who led with "Microsoft Office skills" in 2010. It signals competence at the floor, not distinction above it.

What will be genuinely scarce, and therefore genuinely valued, are the capabilities that resist automation not because they are protected by regulation or convention, but because they require qualities that AI systems structurally cannot replicate: contextual ethical judgment, the ability to build trust in high-stakes human relationships, creative direction grounded in lived experience, and the willingness to take accountable decisions in genuinely ambiguous situations where there is no clear precedent in the training data.

The professionals most at risk are not those with the least education or the fewest technical skills. They are those with mid-level knowledge work expertise and no clear differentiation beyond execution — roles where the execution layer is precisely what is being automated. This is a difficult reality, and it is one that the career advisory field has been too slow to name clearly.

The professionals best positioned are those developing what might be called a dual literacy: deep domain knowledge in a field where AI is being applied, combined with a working understanding of how AI systems behave, where they fail, and how to direct and evaluate them. This combination is currently rare. By 2030, it will be the expected foundation of serious professional practice in most knowledge-intensive fields.

The strategic imperative

The career advice that served professionals well for the past two decades — build expertise, accumulate credentials, deepen your specialism — remains partially true but is no longer sufficient on its own. Depth without adaptability is increasingly fragile. Credentials without currency become obsolete faster than the institutions that confer them can update their curricula.

The professionals who will look back on this period well are the ones who resisted the temptation to wait for clarity before moving. Who began developing governance literacy before it became a formal discipline. Who understood agent systems before they became standard infrastructure. Who built domain plus AI fluency before it became the job requirement rather than the differentiator.

The window for that kind of ahead-of-curve positioning is not infinite. It is, in fact, the thing that is moving faster than almost anyone in the career advisory space is currently acknowledging. The organisations and individuals who treat the next 24 months as a period of orientation rather than action will find themselves, by 2028, not catching up to a trend — but recovering from a structural disadvantage that will be genuinely difficult to close.

Research context: Estimates from Gartner, McKinsey, and the World Economic Forum suggest that AI will create in the region of 150 million+ new jobs globally by 2030 while displacing a significant share of current knowledge work functions. Projections indicate that a substantial proportion of enterprise applications will incorporate autonomous agent systems within the next two to three years, and that blended human-AI teams are expected to become standard operating structure in the majority of large organisations by 2028. The generative AI market is projected to reach approximately $523 billion by 2030. These figures carry meaningful uncertainty and should be read as directional indicators rather than precise forecasts.
