Plus the AI cybersecurity news
Last October I spoke to an editor about my focus on AI cybersecurity news and she told me AI wasn’t a big deal in the industry. I knew from my work as an AI media researcher that she was wrong. Five months on, there’s rarely a cybersecurity article that doesn’t mention AI. I hate to say I told you so but…
Microsoft launches cybersecurity generative AI tool
Microsoft has launched an AI tool built on OpenAI’s latest GPT-4 generative AI model to help cybersecurity professionals identify breaches, connect threat signals, and better analyze data. The assistant, called ‘Security Copilot’, presents a simple prompt box that helps security analysts with tasks like analyzing vulnerabilities, summarizing incidents, and sharing information with coworkers. The assistant uses Microsoft’s security-specific model, described by the company as “a growing set of security-specific skills” fed with more than 65 trillion signals daily. The launch comes on the back of a string of announcements from Microsoft about integrating AI into its offerings.
Google launches cybersecurity generative AI tool
Not to be outdone by Microsoft, Google’s cloud division has introduced Security AI Workbench, which leverages generative AI models for better visibility into the threat landscape. Security AI Workbench addresses three top security challenges: the talent gap, threat overload, and toilsome tools. It will feature partner integrations to bring critical security functionality, threat intelligence, and workflow to customers. The platform’s new AI-powered tools aim to prevent new infections with point-in-time incident analysis and threat detection. Built on Google Cloud’s Vertex AI infrastructure, the platform lets customers make their private data available at inference time while retaining control over that data.
Republican National Committee releases AI-generated attack ad
The Republican National Committee used AI to create a 30-second ad depicting what a second term for President Joe Biden would look like. Composed of fake images and news reports, the ad showed fictional crisis after crisis, from a Chinese invasion of Taiwan to a financial shutdown and thousands of migrants crossing the US border. As the technology stands today, AI-generated faces are still relatively easy to spot, but it won’t be long before the technology advances to the point where it is impossible to tell what is real and what is fake. Safeguards have been suggested, such as establishing content provenance by watermarking images. But those too can be faked, and there is no universal standard for detecting real or fake content. So far, AI policy and safeguards exist only in California and Texas, covering pornographic deepfakes and deepfakes targeting candidates for political office. The Biden administration is expected to introduce regulation, but for now the EU is leading the way with its AI Act, which sets out guidelines and rules for the use of AI.
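To make the watermarking idea above concrete, here is a minimal sketch of least-significant-bit (LSB) watermarking, one simple way to embed provenance information directly in pixel data. This is an illustration only, with function names of my own invention; real provenance standards such as C2PA attach cryptographically signed metadata rather than altering pixels.

```python
def embed_watermark(pixels, message):
    """Hide each bit of `message` in the lowest bit of successive pixels.

    `pixels` is a flat list of 0-255 grayscale values standing in for
    real image data; bits are stored least-significant-bit first.
    """
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite the lowest bit only
    return out


def extract_watermark(pixels, length):
    """Read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        data.append(value)
    return bytes(data)


# Stand-in for a tiny 16x16 grayscale image, mid-gray everywhere.
pixels = [128] * 256
marked = embed_watermark(pixels, b"AI-generated")
recovered = extract_watermark(marked, 12)  # b"AI-generated"
```

The sketch also shows why the article’s caveat holds: because the mark lives in the lowest bit of each pixel, anyone who re-encodes or lightly edits the image destroys it, which is exactly the weakness of naive watermarking as a safeguard.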
ChatGPT writes code for malware
Japanese cybersecurity experts have discovered that ChatGPT will write code for malware when given a prompt that makes it believe it is in developer mode. ChatGPT completed the code in minutes, a serious threat should cybercriminals adopt the technique. Meanwhile, the rapid advancement of ChatGPT has pushed multiple vendors around the world to rush to embed the generative AI tool into products that protect organizations.
Deepfake fraud on the rise
A recent deepfake fraud survey conducted by ID verification software maker Regula found that AI-generated identity fraud is rising. The survey found that 37 percent of organizations have experienced synthesized-voice fraud and 29 percent say they have been victims of deepfake videos. Regula stated that 91 percent of U.S. firms consider fake biometric artifacts such as deepfake video and voice a growing security threat.
France and Singapore join forces to develop AI for cybersecurity
France’s Ministry of the Armed Forces and Singapore’s Ministry of Defence will develop AI capabilities for cyber defense in a joint mission. The research collaboration includes work on natural language processing, AI for geospatial analysis, and image and video monitoring to identify potential threats. The two-country joint research lab is the first of its kind in Singapore.
Ginger Liu is the founder of Ginger Media & Entertainment, a Ph.D. Researcher in artificial intelligence and visual arts media, and an author, journalist, artist, and filmmaker. Listen to the Podcast.