Artificial Intelligence


NEWS 
  • AI-Automated Threat Hunting Brings GhostPenguin Out of the Shadows

    December 8, 2025

    Hunting high-impact, advanced malware is a difficult task. It becomes even harder and more time-consuming when defenders focus on low-detection or zero-detection samples. Every day, a huge number of files are sent to platforms like VirusTotal, and the relevant ones often get lost in all that noise. Identifying malware with low or no detections is ...

  • New Prompt Injection Attack Vectors Through MCP Sampling

    December 5, 2025

    This article examines the security implications of the Model Context Protocol (MCP) sampling feature in the context of a widely used coding copilot application. MCP is a standard for connecting large language model (LLM) applications to external data sources and tools. We show that, without proper safeguards, malicious MCP servers can exploit the sampling feature for ...

  • Unraveling Water Saci’s New Multi-Format, AI-Enhanced Attacks Propagated via WhatsApp

    December 2, 2025

Brazil has seen a recent surge of threats delivered via WhatsApp. As observed in Trend Micro's previously published research on the SORVEPOTEL malware and the broader Water Saci campaign, this popular platform has been used to launch sophisticated campaigns. Unsuspecting users receive convincing messages from trusted contacts, often crafted to exploit social ...

  • OpenAI Data Breach Exposes User Data

    December 1, 2025

A few days ago, on November 26, right before Thanksgiving, OpenAI, the maker of ChatGPT, confirmed a security breach that began in early November and affected its users, specifically those connected through OpenAI's APIs. What caused the data breach? “On November 9, 2025, Mixpanel became aware of an attacker that gained unauthorized ...

  • The Dual-Use Dilemma of AI: Malicious LLMs

    November 25, 2025

    A fundamental challenge with large language models (LLMs) in a security context is that their greatest strengths as defensive tools are precisely what enable their offensive power. This issue is known as the dual-use dilemma, a concept typically applied to technologies like nuclear physics or biotechnology, but now also central to AI. Any tool powerful enough ...

  • Understanding the future of offensive AI in cybersecurity

    November 19, 2025

    As we step into an era where artificial intelligence (AI) plays an increasingly significant role in cybersecurity, discussions surrounding its offensive capabilities are becoming more prominent. A recent report by Anthropic—a leading AI research lab—has sparked the latest conversation on this topic, with questions raised about their claim that an AI-assisted attack they observed was ...

  • Bournemouth University receives £2.3 million to boost regional and national cyber security

    November 19, 2025

Bournemouth University has been awarded nearly £2.3 million by the Office for Students to develop a new Cyber Competence Centre that will address regional and national cyber skills gaps. As well as upgrading the university’s existing facilities, the investment will be used to launch a new, AI-powered Security Operations Centre of the Future for students to ...

  • Take fight to the enemy, US cyber boss says

    November 18, 2025

    America is fed up with being the prime target for foreign hackers. So US National Cyber Director Sean Cairncross says Uncle Sam is going on the offensive – he just isn’t saying when. Speaking at the Aspen Cyber Summit in Washington, D.C., on Tuesday, Cairncross said his office is currently working on a new National Cyber ...

  • SesameOp: Novel backdoor uses OpenAI Assistants API for command and control

    November 3, 2025

    Microsoft Incident Response – Detection and Response Team (DART) researchers uncovered a new backdoor that is notable for its novel use of the OpenAI Assistants Application Programming Interface (API) as a mechanism for command-and-control (C2) communications. Instead of relying on more traditional methods, the threat actor behind this backdoor abuses OpenAI as a C2 channel as ...

  • Clearview AI faces criminal heat for ignoring EU data fines

    October 28, 2025

Privacy advocates at Noyb filed a criminal complaint against Clearview AI for scraping social media users’ faces without consent to train its AI algorithms. Austria-based Noyb (None of Your Business) is targeting the US company and its executives, arguing that, if the complaint succeeds, the individuals who authorized the data collection could face criminal penalties, including imprisonment. The complaint ...
