Strategic council of CISOs, CTOs, and AI researchers to shape the secure deployment, governance, and hardening of AI systems across global enterprises
Tuskira, the AI-native cybersecurity platform built to optimize and unify security operations, today announced the formation of the AI Security Council (AISC), a strategic coalition of forward-thinking CISOs, CTOs, researchers, and security practitioners committed to defining how AI will be secured, governed, and defended across the modern enterprise.
The AISC is purpose-built to confront the rising complexity of autonomous threats and the systemic risks that come with AI adoption. Council members are practitioners designing the next generation of secure-by-design architectures, governance models, and operational guardrails for AI systems. Their collective goal is to ensure AI strengthens, rather than destabilizes, the global security ecosystem.
Focus areas include the detection of AI-generated threats, autonomous response orchestration, model integrity and hardening, data provenance within large-language-model ecosystems, and cross-border AI governance. Through private workshops, quarterly threat briefings, and collaborative intelligence reports, members will translate research and field insight into practical blueprints enterprises can use to operationalize secure AI at scale.
“We believe the complexity of today’s AI-driven threats demands a new model of collective intelligence,” said Piyush Sharma, CEO and co-founder of Tuskira. “The AI Security Council is built on the idea that open collaboration, diverse perspectives, and shared field insights are essential to staying ahead of adversaries. Our members bring a deep commitment to professionalism and progress, ensuring every exchange is rooted in integrity. Above all, we’re here to challenge the status quo and lead with purpose, shaping a more secure, inclusive, and resilient AI-powered future.”
“Cybersecurity cannot be a one-man show, especially given today’s era of AI-driven threats,” said Cassandra Mack, CISO at Tensorwave. “It requires collective intelligence, shared insight, and coordinated action. That’s exactly what the AI Security Council is built to deliver: a trusted forum where the brightest minds in security come together to solve our most urgent challenges.”
“Responsible AI isn’t just about explainability or performance; it’s about governance that anticipates misuse before it happens,” said Peter Holcomb, Founder & CEO of Optimo IT. “At the AI Security Council, we’re tackling the hard engineering questions: How do we validate model integrity in real time? How do we embed compliance without stifling innovation? We’re not observing from the sidelines; we’re building the guardrails and digital immune systems that will define the future of secure, scalable AI.”
“The AISC gives us a platform to move beyond anecdotal learnings,” said Amy Lemberger, Senior Consultant at Lemberger & Associates. “By sharing threat telemetry, testing control gaps together, and publishing frameworks born out of real operational stress, we’re accelerating maturity across the board. There’s real power in this kind of field-first collaboration; it’s already reshaping how I approach detection engineering in the environments I support.”
Additional Resources:
- To learn more about Tuskira’s AI Security Council members, please visit: www.tuskira.ai/ai-security-council
- Check out Tuskira’s upcoming webinar with members of the AI Security Council on November 5, 2025, at 8 AM PT/11 AM ET. Register here.
About Tuskira
Tuskira turns fragmented security operations into preemptive, AI-driven defense. Its multi-agent AI platform deploys domain-specific AI Analysts that simulate attacks, validate defenses, and reduce real risk using telemetry from across your environment. By building a digital twin and running continuous simulations, Tuskira delivers measurable improvements in analyst productivity, threat response speed, and control effectiveness.
