Anthropic job posting broke the geopolitical internet
Everyone in the political risk industry is applying for the same role: to train Claude to take their job.
The most coveted geopolitical risk roles are usually in the advisory world, where you get to fly in nice seats to brief smart clients and earn a Big 4-type salary doing challenging work. Yet, as the consulting industry comes under serious pressure from tech advancement, the opportunity to do geopolitical work in-house becomes much more intriguing.
It is against this backdrop that I note I have been sent Anthropic’s job posting for a geopolitical intelligence analyst by nearly a dozen people. I’ve been sent it by my wife. I’ve been sent it by other people’s spouses. I’ve been sent it by employees who I hope didn’t apply. I’ve been sent it by friends of friends who think it’s something I should know about. I’ve been sent it by enemies who want me to fold up shop at Unruly and stop trying to take their market, retreating instead to the internal affairs of Anthropic.
The job posting itself has even been reported as news. Never have I seen the geopolitical risk internet (a small corner of the universe to be sure) in such a tizzy over a role.
Here are four reasons why this is important.
First, it underscores that even Anthropic won’t simply use Claude to do geopolitical work! The geopolitical world has begun to dabble in the art of AI, using LLMs to support their work. But when arguably the hottest company in AI and vibe coding still needs humans to “conduct horizon scanning” and “provide geopolitical context for international conferences”, we all must realize that the general-purpose models, given where they stand today, do not yet automate geopolitics. (I would argue that purpose-built solutions are much better at this).
Second, geopolitical risk work is now a bedrock of corporate activity. Big energy companies have hired geopolitical analysts for a long time, but tech companies typically didn’t. Today, only a small percentage of companies overall have geopolitical teams, and those teams are often left to wrestle with strategy, government affairs and the like for influence. Few companies place geopolitical analysis at the core of their strategic decision-making.
But this job is a bit different. Anthropic itself is a geopolitical football being kicked around by Washington. The job explicitly works on “international expansion and facility siting.” As data centers become flag-bearing national outposts of American companies, choosing where to be and where not to be is much more than risk management. It represents some of the most strategic choices a company will make.
Whether this actually elevates the role of the in-house analyst remains to be seen, but it underscores that you can no longer do corporate or external affairs without geopolitical affairs.
Third, the opportunity solidifies that the most exciting jobs in the geopolitical risk space may now be inside corporate, if not inside tech and AI specifically. For decades, the dream of many political science students was to land a seat at a major geopolitical advisory firm and work up to being a Managing Director or Partner. I followed this path. Yet most consulting firms in the space are now looking to hire ex-government movers-and-shakers, not analysts, because they believe that access cannot be replicated by AI. Thus, the pathways to train up to become excellent are closing as AI eats more of the entry-level work.
In fact, the number of geopolitical providers I’ve talked to who are running scared about what AI will do to their business model underscores that even those sought-after political risk industry roles may only be a temporary safe haven. At Unruly, we are finding we can serve more and more companies with AI-generated geopolitical risk research. That means the role of the human in the model must evolve considerably, which offers a different value proposition to would-be analysts.
Of course, a key selling point of such a role would be building this at Anthropic itself. If you suddenly had access to all the tools and all the compute, you could probably develop some pretty novel ways to model and understand the world. While Anthropic may be an outlier, corporations are going to offer those types of compute packages far more than the consulting industry will in the future (unless you work for an actual geopolitical tech company, like ours). Thus, if you want to solve big problems, you need big compute, and big compute means big budget, which means big corporate.
Finally, the excitement around the job posting implies that even the most risk-averse industry - the risk industry - gets excited by a once-in-a-lifetime bet. The job advertises top compensation at $220,000, which is less than what a top analyst would earn in the consulting world, and even lower than government Senior Executive Service salaries. But presumably the job comes with options on Anthropic shares, which is like buying a lottery ticket where five of your six numbers have already been drawn and matched. You’re just waiting for the sixth number so you know how much you’ve won.
You don’t need to be an intelligence analyst to realize that would be an intelligent move. Especially when there are far fewer lottery tickets available to geopolitical analysts than, say, AI engineers.
Based on all the hubbub, I would guess that roughly half the people in my professional universe applied for this job. To the winner: I hope you have a good few years training Claude before it takes your job too.
-SW