Foreign Hackers Are Using Google’s Gemini in Attacks on the US

by Bella Baker
The rapid rise of DeepSeek, a Chinese generative AI platform, heightened concerns this week over the United States' AI dominance as Americans increasingly adopt Chinese-owned digital services. Amid ongoing criticism over the alleged security risks posed by TikTok's ties to China, DeepSeek's own privacy policy confirms that it stores user data on servers in the country.

Meanwhile, security researchers at Wiz discovered that DeepSeek left a critical database exposed online, leaking over 1 million records, including user prompts, system logs, and API authentication tokens. As the platform promotes its cheaper R1 reasoning model, security researchers tested 50 well-known jailbreaks against DeepSeek's chatbot and found that its safety protections lag behind those of its Western competitors.

Brandon Russell, the 29-year-old cofounder of the Atomwaffen Division, a neo-Nazi guerrilla organization, is on trial this week over an alleged plot to knock out Baltimore’s power grid and trigger a race war. The trial provides a look into federal law enforcement’s investigation into a disturbing propaganda network aiming to inspire mass casualty events in the US and beyond.

An informal group of West African fraudsters calling themselves the Yahoo Boys are using AI-generated news anchors to extort victims, producing fabricated news reports falsely accusing them of crimes. A WIRED review of Telegram posts reveals that these scammers create highly convincing fake news broadcasts to pressure victims into paying ransoms by threatening public humiliation.

That’s not all. Each week, we round up the security and privacy news we didn’t cover in depth ourselves. Click on the headlines to read the full stories. And stay safe out there.

According to a report by The Wall Street Journal, hacking groups with known ties to China, Iran, Russia, and North Korea are leveraging AI chatbots like Google Gemini to assist with tasks such as writing malicious code and researching potential attack targets.

While Western officials and security experts have long warned about AI's potential for malicious use, the Journal, citing a Wednesday report from Google, noted that dozens of hacking groups across more than 20 countries are primarily using the platform as a research and productivity tool—focusing on efficiency rather than on developing sophisticated and novel hacking techniques.

Iranian groups, for instance, used the chatbot to generate phishing content in English, Hebrew, and Farsi. China-linked groups used Gemini for tactical research into technical concepts like data exfiltration and privilege escalation. In North Korea, hackers used it to draft cover letters for remote technology jobs, reportedly in support of the regime’s effort to place spies in tech roles to fund its nuclear program.

This is not the first time foreign hacking groups have been found using chatbots. Last year, OpenAI disclosed that five such groups had used ChatGPT in similar ways.

On Friday, WhatsApp disclosed that nearly 100 journalists and civil society members were targeted by spyware developed by the Israeli firm Paragon Solutions. The Meta-owned company alerted affected individuals, stating with “high confidence” that at least 90 users had been targeted and “possibly compromised,” according to a statement to The Guardian. WhatsApp did not reveal where the victims were located, including whether any were in the United States.

The attack appears to have used a “zero-click” exploit, meaning victims were infected without needing to open a malicious link or attachment. Once a phone is compromised, the spyware—known as Graphite—grants the operator full access, including the ability to read end-to-end encrypted messages sent via apps like WhatsApp and Signal.


