Crushing the Axios supply chain threat with Tenable Hexa AI: Use cases for agentic AI
See how you can use Tenable Hexa AI to determine in minutes if you're impacted by the Axios npm supply chain attack. Learn how easy it is to automate configuration of scans, identify impacted assets, prioritize remediation, and more using agentic AI from Tenable.
Uncover prompt injection and insider threats with Tenable One Model Refusal Detection
Tenable One's new Model Refusal Detection turns an LLM's refusal to execute a risky or suspicious prompt into a high-fidelity early warning signal. It helps you uncover and stop prompt injection attacks, insider threats, and other risky behaviors before they escalate into a breach.
Security for AI: A guide to managing the risks of vibe coding and AI in software development
Get a template for an AI coding acceptable use policy with security controls and a list of 25 security questions to ask software developers and “citizen developers” about their AI use. Mitigate the security risks of vibe coding and using AI in software development with Tenable One.
Introducing Tenable Hexa AI: Agentic AI for exposure management
Meet Tenable Hexa AI, the agentic engine of the Tenable One exposure management platform. Learn how Tenable Hexa AI automates complex security workflows and turns exposure intelligence into orchestrated response, helping security teams meaningfully reduce cyber risk.
Don't confuse asset inventory with exposure management
Asset discovery tells you what IT assets exist in your environment. Exposure management tells you what will get you breached. If your platform can't connect vulnerabilities, identities, misconfigurations, and AI systems into real attack paths, you don't have exposure management. You have inventory.
LeakyLooker: Hacking Google Cloud’s Data via Dangerous Looker Studio Vulnerabilities
Tenable Research revealed "LeakyLooker," a set of nine novel cross-tenant vulnerabilities in Google Looker Studio. These flaws could have let attackers exfiltrate or modify data across Google services like BigQuery and Google Sheets. Google has since remediated all identified issues.
Gartner® names Tenable a top company to watch right now for AI-powered exposure assessment in 2025 report
In its report "The AI Vendor Race: Top Companies for AI-Powered Exposure Assessment," Gartner wrote that Tenable's "breadth of asset and attack surface coverage, AI applications, and reputation in vulnerability assessment" position the company as a leader in AI-powered exposure assessment.
What Anthropic’s Latest Model Reveals About the Future of Cybersecurity
AI can find vulnerabilities with unprecedented speed, but discovery alone doesn’t reduce cyber risk. We need exposure prioritization, contextual risk analysis, and AI-driven remediation to transform findings into security outcomes.
From Clawdbot to Moltbot to OpenClaw: Security Experts Detail Critical Vulnerabilities and 6 Immediate Hardening Steps for the Viral AI Agent
Moltbot, the viral AI agent, is riddled with critical vulnerabilities, exposed control interfaces, and malicious extensions that put users' sensitive data at risk. Understand the immediate security practices you can implement to mitigate this enormous agentic AI security risk.
Introducing Tenable One AI Exposure: A new standard for securing AI use at scale
With Tenable One AI Exposure, continuously discover and monitor all AI use across your organization, including shadow AI, agents, browser plugins, and more. Map complex AI workflows to uncover high-impact exposures, and monitor compliance with your security and AI usage policies.
Security for AI: How Shadow AI, Platform Risks, and Data Leakage Leave Your Organization Exposed
Your employees are using AI whether you’ve sanctioned it or not. And even if you’ve carefully vetted and approved an enterprise-grade AI platform, you’re still at risk of attacks and data leakage.
Microsoft Copilot Studio Security Risk: How Simple Prompt Injection Leaked Credit Cards and Booked a $0 Trip
The no-code power of Microsoft Copilot Studio introduces a new attack surface. Tenable AI Research demonstrates how a simple prompt injection attack against an AI agent bypasses security controls, leading to data leakage and financial fraud. We provide five best practices to secure your AI agents.