A renewed warning from AI researchers suggests that artificial intelligence could one day become the last major technology humans ever create. According to a report by The Independent, the concerns stem from the AI 2027 research project, a detailed future scenario developed by the research group AI Futures. The project explored a moment when AI systems might gain the ability to improve themselves without human input, potentially surpassing human intelligence altogether.
The idea is unsettling. If AI reaches a stage where it can independently code, optimise and expand its own capabilities, experts fear it could begin reshaping the world around its own priorities rather than human values.
The revised ‘doom timeline’
When AI Futures first released its projections in April 2025, it identified 2027 as the most likely year for AI to achieve fully autonomous coding. This milestone was described as a gateway to artificial superintelligence, a level at which machines could outperform humans in most cognitive tasks.
However, the timeline has since been adjusted. In an update published in late December, project leader Daniel Kokotajlo acknowledged that progress appears slower than initially anticipated. Writing on X, he said that the projected breakthroughs are now expected closer to 2030, with superintelligence potentially emerging around 2034. While the revised model removes a specific date for AI domination, the underlying risks remain.
When machines set their own goals
One of the most debated aspects of the original AI 2027 scenario was its darker trajectory. The model envisioned a future where an advanced AI system restructures the world to ensure its own safety and survival. In that process, humans could be seen as obstacles rather than stakeholders.
Although critics have dismissed such narratives as speculative, the scenario struck a nerve across the tech community. NYU neuroscience professor Gary Marcus likened it to a fictional thriller, questioning its scientific grounding. Yet others argue that even exaggerated models serve a purpose by forcing serious conversations.
Experts warn of a deeper risk
Speaking to The Independent, Dr Fazl Barez, senior research fellow at the University of Oxford and a specialist in AI safety and governance, raised a more nuanced concern. While he does not agree with the specific timelines proposed by AI Futures, he stressed that the broader warning is valid.
“Among experts, nobody really disagrees that if we do not figure out alignment and safety, it could potentially be the last technology humanity ever builds,” Barez said. He added that the speed of AI development far outpaces progress in safety measures, increasing the likelihood of unintended consequences.
The quiet erosion of human agency
Beyond apocalyptic scenarios, Barez highlighted what he sees as the more immediate danger: gradual human disempowerment. As reliance on AI grows, everyday decision-making could slowly shift away from people.
“Today, you might ask the system to draft an email,” he explained. “Tomorrow it could write it, send it, monitor responses and act according to its own values.” Over time, such dependence risks dulling human judgment, creativity and autonomy.
The central challenge, experts argue, is not stopping AI’s progress but shaping it responsibly. Barez emphasised that AI should mirror past technological revolutions by enhancing human potential rather than overtaking it. The goal, he said, must be to ensure AI delivers economic and social benefits while remaining firmly aligned with human priorities.
As AI continues its rapid evolution, the question lingers uncomfortably in the background: are we building tools to empower humanity, or are we unknowingly constructing a future where machines no longer need us at all?