Safety Risks Identified in AI-Powered Search Engines
/ 1 min read
🧠🔍 Emerging AI-Powered Search Engines face significant safety risks. Recent advances in Large Language Models (LLMs) have improved AI-Powered Search Engines (AIPSEs), but these systems also risk disseminating harmful or unverified information.

A study quantifying safety risks across seven AIPSEs found that they often generate content containing malicious URLs, even in response to benign queries. The research indicates that querying with a direct URL heightens risk, while natural language queries can mitigate it. Case studies on online document spoofing and phishing demonstrate how easily AIPSEs can be deceived.

To address these issues, the study proposes an agent-based defense system combining a content refinement tool with a URL detector. This defense effectively reduces risk, though at the cost of some available information. The findings underscore the urgent need for stronger safety measures in AIPSEs.
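To make the defense idea concrete, here is a minimal sketch (not the paper's implementation) of how a URL detector and a content refinement step might work together: URLs in a generated answer are checked against a blocklist and redacted before the answer reaches the user. The domains, function names, and blocklist here are all invented for illustration.

```python
import re

# Hypothetical blocklist; a real detector would consult a threat-intelligence feed.
MALICIOUS_DOMAINS = {"evil-downloads.example", "phish-login.example"}

URL_PATTERN = re.compile(r"https?://([^/\s]+)[^\s]*")

def detect_malicious_urls(text):
    """URL detector: return URLs in `text` whose domain is on the blocklist."""
    flagged = set()
    for match in URL_PATTERN.finditer(text):
        domain = match.group(1).lower()
        if domain in MALICIOUS_DOMAINS:
            flagged.add(match.group(0))
    return flagged

def refine_content(text):
    """Content refinement: redact flagged URLs before the answer is shown."""
    refined = text
    for url in detect_malicious_urls(text):
        refined = refined.replace(url, "[link removed: flagged as unsafe]")
    return refined

answer = ("Download the tool from https://evil-downloads.example/setup.exe "
          "or read the docs at https://docs.example.org/guide")
print(refine_content(answer))
```

As the study notes, filtering like this reduces risk but can also withhold legitimate information when the detector flags a safe link by mistake.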
