Breaking news

AI Security Takes Centre Stage: Hackers Warn Systems Are Still Shockingly Vulnerable

2025 marks a dramatic shift in the AI landscape—what was once a dialogue about AI “safety” has quickly transformed into a focus on AI “security.”

Since the debut of ChatGPT in late 2022, conversations around AI have often veered into the hypothetical, with alarmist warnings about existential threats: rogue AI causing global crises, or out-of-control systems undermining humanity. But in a surprising turn, the real and immediate security risks AI poses have begun to dominate discussions.

The State Of AI Security: Far From Secure

Security experts are making it clear: AI systems remain frighteningly easy to manipulate. These tools—designed to power everything from chatbots to self-driving cars—are still riddled with vulnerabilities. At this point, hackers can trick large language models (LLMs) into providing detailed guides on cyberattacks or exposing sensitive data. The risk is not just theoretical—deepfake videos could spread fake news, or chatbots could be weaponized for scams. These aren’t future threats—they’re happening now.

Even as companies scramble to patch AI security holes, a report from the 2024 Def Con hackers’ conference points out that current defenses are woefully inadequate. Despite the best efforts of ethical hackers, AI models continue to be alarmingly easy to break into, with major flaws still slipping under the radar.

Why Red-Teaming Isn’t Enough

At the heart of AI security efforts is a practice called “red teaming,” where companies stress-test their models by simulating potential attacks. The aim is to uncover weaknesses like misinformation, privacy leaks, or manipulation of model behavior. However, experts like Sven Cattell, founder of Def Con’s AI Village, aren’t convinced. Cattell argues that the current process is deeply flawed—AI systems are too complex and unpredictable for red-teaming to catch every potential vulnerability. He points out that no team, regardless of its size or expertise, can predict all how AI might be exploited. As he puts it, the unknowns in AI security will always outpace testing efforts.

Collaboration Is Key To AI Security

The way forward, Cattell insists, is collaboration. Just like traditional cybersecurity, AI security requires shared knowledge and a more coordinated approach to identifying and fixing vulnerabilities. Without a standardized system for reporting AI flaws and a public database to track these issues, the security of these systems will remain in jeopardy. Without this cooperation, AI will never be fully secure.

To truly safeguard AI models, experts urge the creation of dedicated frameworks, allowing developers to share vulnerabilities and fix them collectively. This is not just about building a secure system; it’s about creating a culture of collaboration across industries to prevent AI from being exploited by malicious actors.

In a world where AI’s role continues to expand, its security must become just as sophisticated as the systems it powers. Now is the time to act before these vulnerabilities spiral into real-world dangers.

The AI Agent Revolution: Can the Industry Handle the Compute Surge?

As AI agents evolve from simple chatbots into complex, autonomous assistants, the tech industry faces a new challenge: Is there enough computing power to support them? With AI agents poised to become integral in various industries, computational demands are rising rapidly.

A recent Barclays report forecasts that the AI industry can support between 1.5 billion and 22 billion AI agents, potentially revolutionizing white-collar work. However, the increase in AI’s capabilities comes at a cost. AI agents, unlike chatbots, generate significantly more tokens—up to 25 times more per query—requiring far greater computing power.

Tokens, the fundamental units of generative AI, represent fragmented parts of language to simplify processing. This increase in token generation is linked to reasoning models, like OpenAI’s o1 and DeepSeek’s R1, which break tasks into smaller, manageable chunks. As AI agents process more complex tasks, the tokens multiply, driving up the demand for AI chips and computational capacity.

Barclays analysts caution that while the current infrastructure can handle a significant volume of agents, the rise of these “super agents” might outpace available resources, requiring additional chips and servers to meet demand. OpenAI’s ChatGPT Pro, for example, generates around 9.4 million tokens annually per subscriber, highlighting just how computationally expensive these reasoning models can be.

In essence, the tech industry is at a critical juncture. While AI agents show immense potential, their expansion could strain the limits of current computing infrastructure. The question is, can the industry keep up with the demand?

Become a Speaker

Become a Speaker

Become a Partner

Subscribe for our weekly newsletter