
7.7K Downloads | 67 Episodes
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live!
Youtube channel: https://www.youtube.com/LMGsecurity
Register Here: https://lmgsecurity.zoom.us/webinar/register/WN_4FpdxB0VQo6aURK1p7_k_g
Episodes

17 minutes ago
Anthropic accidentally exposed the source code for its Claude Code CLI—and while no customer data or model weights were involved, the impacts are significant.
In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down what actually leaked, why the agent layer matters more than most people realize, and what happened next—including the rapid emergence of new open-source alternatives like Claw Code.
They also answer key questions from a client:
1. What risks should organizations be thinking about because of this leak?
2. Does this change how AI coding tools should be monitored?
3. What are some practical recommendations for educating end users and developers?
The conversation focuses on real-world impact: execution risk, supply chain exposure, and the growing need for governance around “vibe coding” tools.
Key Takeaways
1. Treat AI coding agents like controlled execution environments - These tools can read files, execute commands, and modify code. Govern them like CI/CD or automation systems with constrained permissions and segmentation.
2. Assume attackers are studying this architecture right now - The leak removes guesswork. Expect more targeted prompt injection and tool abuse as adversaries analyze how these systems behave internally.
3. Prioritize immediate risks: malicious repos and supply chain abuse - Threat actors are already using this as a lure. Monitor for typosquatting, dependency confusion, and “leaked” tools distributing malware (a dependency-check sketch follows this list).
4. Ensure developers know what’s official—and what isn’t - Make sure teams can distinguish between official tools and alternatives. If using open-source variants, vet the source, maintainers, and security model.
5. Take this as an opportunity to formalize AI governance for coding and development tools - Many organizations are still experimenting. Define policies, logging, and oversight now, especially around how these tools are approved and used.
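The typosquatting and dependency-confusion risk in takeaway 3 is one of the few items here you can partially automate. Below is a minimal sketch, not an official tool, that compares the dependencies in a requirements.txt file against an internal allowlist and flags names that look like near-miss typosquats of approved packages. The allowlist contents and file path are illustrative assumptions.

```python
# Hypothetical sketch: flag dependencies that are not on an approved allowlist,
# or that look like near-miss typosquats of approved package names.
import difflib
import re

APPROVED = {"requests", "flask", "numpy", "pandas", "cryptography"}  # example allowlist

def parse_requirements(path: str) -> list[str]:
    """Extract bare package names from a requirements.txt-style file."""
    names = []
    with open(path) as fh:
        for line in fh:
            line = line.split("#", 1)[0].strip()
            if not line:
                continue
            match = re.match(r"^[A-Za-z0-9._-]+", line)
            if match:
                names.append(match.group(0).lower())
    return names

def check(names: list[str]) -> None:
    for name in names:
        if name in APPROVED:
            continue
        # Close matches to approved names are classic typosquatting candidates.
        near = difflib.get_close_matches(name, sorted(APPROVED), n=1, cutoff=0.8)
        if near:
            print(f"SUSPICIOUS: '{name}' resembles approved package '{near[0]}'")
        else:
            print(f"UNAPPROVED: '{name}' is not on the allowlist -- review before use")

if __name__ == "__main__":
    check(parse_requirements("requirements.txt"))
```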

Tuesday Apr 14, 2026
Anthropic’s Project Glasswing and its unreleased Mythos model signal a potential turning point in cybersecurity: AI that can find—and potentially exploit—software vulnerabilities at unprecedented scale.
In this episode of Cyberside Chats, Sherri Davidoff and Tom Pohl break down what this means for organizations today. If AI can uncover decades-old bugs in seconds, what happens to patching cycles, vulnerability management, and the balance between attackers and defenders?
They explore the uncomfortable reality: we may be entering a period where vulnerabilities are discovered faster than organizations can fix them—and where access to powerful AI tools could determine who wins and loses in cybersecurity.
From continuous patching to network segmentation and vendor accountability, this episode focuses on what security leaders need to do right now to prepare for a rapidly shifting threat landscape.
Key Takeaways
1. Reduce your internet exposure - If a system doesn’t need to be publicly accessible, don’t put it on the internet. Move services behind firewalls, VPNs, or restricted access controls wherever possible. Attack surface matters more than ever (see the exposure-check sketch after this list).
2. Vet your vendors’ security practices - Don’t just trust that vendors are handling security well. Ask how they:
- Secure their development lifecycle (SDLC)
- Detect and respond to vulnerabilities
- Patch and distribute fixes
Vendor risk is now a direct extension of your own risk.
3. Budget for ongoing maintenance of custom code - Custom applications aren’t “done” at deployment. Plan for:
- Regular security testing
- Continuous patching
- Developer time to fix vulnerabilities
Software is a living system and requires ongoing care and feeding.
4. Segment your network to limit attacker movement - Assume attackers will get in. The goal is to stop them from moving laterally:
- Separate critical systems
- Limit privileged account access
- Control how systems communicate
Containment is just as important as prevention.
5. Update your incident response plan for zero-day reality - Your IR plan should assume:
- Exploits may exist before patches are available
- Detection may lag behind compromise
Prepare for faster response, imperfect information, and active exploitation of unknown vulnerabilities.
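Takeaway 1 ("reduce your internet exposure") is easier to act on if you routinely verify what is actually reachable. Here is a minimal sketch; the host names and port list are placeholders, and any real external scan should be run from outside your network with proper authorization.

```python
# Minimal exposure check: attempt TCP connections to a few common service ports
# on hosts you believe should NOT be publicly reachable. Hosts/ports are examples.
import socket

HOSTS = ["app.example.com", "vpn.example.com"]   # placeholder hostnames
PORTS = [22, 445, 3389, 8080]                    # common admin/service ports

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in HOSTS:
        for port in PORTS:
            if is_open(host, port):
                print(f"EXPOSED: {host}:{port} accepts connections")
```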
Resources & References
1. Anthropic – Project Glasswing - https://www.anthropic.com/glasswing
2. Anthropic – Mythos Preview - https://red.anthropic.com/2026/mythos-preview/
3. Historical example discussed: Microsoft bug tracking system breach (2017)
4. Example referenced: ProxyShell (Microsoft Exchange vulnerabilities and rapid exploitation)

Tuesday Apr 07, 2026
We don’t break in, we badge in
In this episode, Matt interviews Tom and Derek from our pen test team to break down why attackers often don’t need to hack their way in at all.
While most organizations invest heavily in tools like EDR and SIEM, Tom and Derek share how they regularly get inside buildings using nothing more than confidence, a good story, and sometimes even a box of donuts. From posing as copier technicians to tailgating behind employees, their experiences show that people are often the easiest way into an organization.
And once they’re in, things escalate fast. Physical access can quickly turn into network access, whether it’s plugging in a device, jumping on an unlocked workstation, or moving through the environment with far fewer restrictions than an external attacker would face.
The big takeaway is simple. Real-world testing exposes what audits miss. Doors get propped open, employees try to be helpful, and small gaps add up in ways most organizations never see on paper.
If you’re not testing your people and your physical controls, you’re only testing part of your security.
Key takeaways:
1. Attackers target people first, not systems - Social engineering consistently bypasses even mature technical controls.
2. Physical access equals full compromise - Once inside your facility, most security controls can be circumvented quickly.
3. Untested controls should be assumed to fail - If you’re not running social engineering or physical assessments, you don’t know your real risk.
4. Culture is a security control - Employees must feel empowered to challenge, verify, and report suspicious behavior.
5. Real-world testing reveals what audits miss - Offensive social engineering exposes how attacks succeed, not just theoretical vulnerabilities.

Tuesday Mar 31, 2026
Stryker Attack Analysis: Cybersecurity and insurance perspectives
A $25 billion medical device company brought to a standstill—without a zero-day exploit.
In this episode of Cyberside Chats, Sherri Davidoff is joined by cyber insurance expert Bridget Quinn Choi to unpack the Stryker cyberattack and what it reveals about modern enterprise risk. From compromised admin credentials to the abuse of Microsoft Entra and Intune, this incident highlights how attackers are increasingly using trusted tools to cause widespread disruption.
We explore what likely happened, why this wasn’t a “sophisticated” attack in the traditional sense, and how a single identity compromise can cascade into operational shutdown. Bridget brings a unique perspective from the cyber insurance world—explaining how insurers evaluate risk, why some large companies choose to go without coverage, and what organizations lose when they do.
We also dig into phishing-resistant MFA, governance of powerful admin tools, and the evolving role of insurance as both a financial backstop and a driver of better security practices.
If your organization relies on centralized identity and device management systems, this is a conversation you can’t afford to miss.
Key Takeaways for Security Leadership
1. Use Cyber Insurance as a Security Maturity Lever - Don’t treat cyber insurance as a checkbox—it can actively strengthen your security program. Use underwriting requirements to benchmark your controls, ask brokers and carriers where you differ from peers, and take advantage of included services like threat intelligence and incident response support. Approach renewal as a security review, not just a policy purchase.
2. Treat Self-Insurance as a Strategic Risk Decision—Not a Cost Savings Measure - If you’re considering self-insuring cyber risk, account for what you’re giving up: external validation of your controls, a built-in incident response ecosystem, and coordinated support during a crisis. This should be a board-level discussion focused on whether the organization can handle a major operational outage—not just absorb the financial loss.
3. Secure Your Device Management Systems—Because They Can Control Everything at Once - Systems used to manage laptops, servers, and mobile devices can push changes across your entire organization. If attackers gain access, they can disrupt operations at scale. Treat these as central control hubs, limit administrative access, and apply strong monitoring and authentication controls.
4. Require Dual Approval for High-Impact Administrative Actions - Add a second layer of human verification for actions that could impact many systems, such as device wipes or large-scale changes. This introduces intentional friction that helps prevent catastrophic mistakes or misuse (a sketch of this pattern follows the list).
5. Move to Phishing-Resistant MFA for Privileged Access - Traditional MFA can be bypassed. For high-risk accounts, adopt phishing-resistant methods like passkeys or hardware-backed authentication, and prioritize these protections for users with administrative access.
6. Make Sure You Can Actually Recover—Not Just Back Up - Backups only matter if they work under pressure. Test your ability to restore critical systems, ensure backups are protected from attackers, and measure how long recovery actually takes in a real-world scenario.
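Takeaway 4 (dual approval for high-impact actions) can be enforced in tooling as well as in process. The sketch below shows the general pattern only; it is not based on any Intune or Entra feature, and the class names and approver model are assumptions for illustration.

```python
# Illustrative dual-approval gate for a high-impact administrative action.
# Not an Intune/Entra API -- purely a sketch of the control pattern.
from dataclasses import dataclass, field

@dataclass
class HighImpactAction:
    description: str
    approvals: set = field(default_factory=set)

    def approve(self, approver_id: str) -> None:
        self.approvals.add(approver_id)

    def execute(self, requested_by: str) -> None:
        # Require two approvers, neither of whom is the requester.
        independent = self.approvals - {requested_by}
        if len(independent) < 2:
            raise PermissionError(
                f"'{self.description}' needs 2 independent approvals, has {len(independent)}"
            )
        print(f"Executing: {self.description}")

if __name__ == "__main__":
    wipe = HighImpactAction("Bulk device wipe: 500 endpoints")
    wipe.approve("alice")
    wipe.approve("bob")
    wipe.execute(requested_by="charlie")  # succeeds: two independent approvals
```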
Resources
1. Stryker cyberattack reporting (New York Times) https://www.nytimes.com/2026/03/12/world/middleeast/stryker-iran-cyberattack.html
2. CISA alert on endpoint management system hardening https://www.cisa.gov/news-events/alerts/2026/03/18/cisa-urges-endpoint-management-system-hardening-after-cyberattack-against-us-organization
3. SecurityWeek coverage of the Stryker incident https://www.securityweek.com/medtech-giant-stryker-crippled-by-iran-linked-hacker-attack/
4. Lumos analysis of the Stryker hack https://www.lumos.com/blog/stryker-hack
5. Microsoft Intune security best practices https://techcommunity.microsoft.com/blog/intunecustomersuccess/best-practices-for-securing-microsoft-intune/4502117

Tuesday Mar 24, 2026
Mass Exploitation 2.0: Web Platforms Under Attack
Mass exploitation vulnerabilities are back—and they’re evolving. In this Cyberside Chats Live episode, we break down the recently disclosed React2Shell vulnerability and the confirmed LexisNexis incident, where attackers exploited an unpatched web application to access cloud infrastructure and exfiltrate data.
But this isn’t new. From SQL Slammer to Log4Shell to ProxyShell, we’ve seen this pattern before: widely deployed, internet-facing systems + simple exploits + automation = rapid, large-scale compromise.
Most importantly, we focus on what matters for organizations today: how to reduce exposure, how to prepare for the next mass exploitation event, and why you should assume compromise the moment one of these vulnerabilities emerges.
Key Takeaways for Security Leaders
1. Inventory and monitor all internet-facing systems. Maintain a current, validated inventory of externally accessible applications and services—because you can’t secure what you don’t know is exposed.
2. Reduce unnecessary exposure at the network edge. Remove or restrict public access to administrative interfaces and systems that do not need to be internet-facing.
3. Build and rehearse a rapid-response playbook for mass-exploitation vulnerabilities. Define roles, timelines, and actions for the first 24–72 hours so your team can move immediately when the next major vulnerability drops.
4. Contact critical vendors and suppliers during major vulnerability events. Don’t wait—proactively verify whether your vendors are affected and whether your data may be at risk through third- or fourth-party exposure.
5. Assume vulnerable internet-facing systems may already be compromised. When mass exploitation begins, attackers are moving at internet speed—patching alone is not enough. Investigate, hunt for persistence, and validate that systems are clean (a KEV lookup sketch follows this list).
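For takeaway 5, one practical signal is whether a newly announced flaw already appears in CISA's Known Exploited Vulnerabilities catalog (linked in the resources below). Here is a rough sketch; the feed URL and JSON field names are assumptions and should be verified against CISA's current documentation before you rely on it.

```python
# Rough sketch: check whether specific CVE IDs appear in the CISA KEV catalog.
# The feed URL and JSON field names below are assumptions -- confirm them
# against CISA's documentation before relying on this.
import json
import urllib.request

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def load_kev() -> dict:
    with urllib.request.urlopen(KEV_URL, timeout=30) as resp:
        data = json.load(resp)
    return {entry["cveID"]: entry for entry in data.get("vulnerabilities", [])}

def check_cves(cve_ids: list) -> None:
    kev = load_kev()
    for cve in cve_ids:
        entry = kev.get(cve)
        if entry:
            print(f"{cve}: KNOWN EXPLOITED ({entry.get('vendorProject')} {entry.get('product')}, "
                  f"added {entry.get('dateAdded')}) -- treat exposed systems as compromised")
        else:
            print(f"{cve}: not in KEV (which does not mean it is safe)")

if __name__ == "__main__":
    check_cves(["CVE-2021-34473"])  # ProxyShell, as an example
```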
Resources
1. React2Shell vulnerability coverage (BleepingComputer) https://www.bleepingcomputer.com/news/security/react2shell-flaw-exploited-to-breach-30-orgs-77k-ip-addresses-vulnerable/
2. LexisNexis breach details (BleepingComputer) https://www.bleepingcomputer.com/news/security/lexisnexis-confirms-data-breach-as-hackers-leak-stolen-files/
3. Compromised web hosting panels in cybercrime markets (BleepingComputer) https://www.bleepingcomputer.com/news/security/compromised-site-management-panels-are-a-hot-item-in-cybercrime-markets/
4. CISA Known Exploited Vulnerabilities Catalog https://www.cisa.gov/known-exploited-vulnerabilities-catalog

Tuesday Mar 17, 2026
Is Anthropic a Pentagon “Supply Chain Risk”?
Anthropic has been labeled a “Supply-Chain Risk to National Security” after refusing two uses of its models: mass surveillance of Americans and lethal autonomous warfare without human oversight. But is Anthropic really a supply-chain risk, and how does this designation affect businesses that use Claude? In this episode, Sherri Davidoff and Matt Durrin unpack the timeline behind the Pentagon’s designation, what Anthropic claims is actually driving the conflict, and what’s known (and not known) about any underlying technical risk. They compare the situation to Kaspersky—where the supply-chain concern centered on privileged security software, foreign-state leverage, and update-channel risk—then bring it back to the enterprise questions that matter: vendor dependency, continuity planning, and what changes when an AI provider becomes politically or contractually constrained.
Key Takeaways for Security Leaders
1. Treat AI vendors as critical dependencies, not just tools.
If a frontier AI provider is embedded in coding, search, documentation, analytics, or agentic workflows, a legal or procurement shock can become an operational disruption. Track where you are dependent on a single model provider and where that dependency would hurt most.
2. For your highest-value uses, define fallback workflows ahead of time.
You may not be able to replace every provider quickly, but you should know what happens if a key AI service becomes unavailable, restricted, or no longer acceptable for regulatory or contractual reasons. For the workflows that matter most, decide in advance how the work gets done without that vendor (a minimal fallback sketch follows this list).
3. Keep guardrails in place when AI is involved in critical changes.
AI can speed up engineering, operations, and decision-making, but that speed can create new failure modes if approvals, testing, rollback, and human review get weakened. Be especially careful in environments where AI-assisted or agentic systems can make infrastructure, code, security, or configuration changes.
4. Inventory where AI has real privilege.
The risk is much higher when AI can execute code, access sensitive data, approve actions, or trigger automations. Focus your review on those integrations first, because those are the places where vendor problems or internal AI mistakes are most likely to turn into real incidents.
5. Make your teams define the actual vendor risk they are worried about.
A vendor can create very different kinds of risk: technical compromise risk, foreign-control risk, continuity risk, or procurement/governance risk. Forcing that distinction helps teams respond more clearly and avoid treating every controversy like a hidden software compromise.
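Takeaway 2 is easier to operationalize if critical workflows call AI providers through a thin abstraction rather than directly. The sketch below shows the pattern only; the provider functions are placeholders, not real vendor SDK calls.

```python
# Pattern sketch: route requests through an ordered list of provider callables
# so a workflow keeps running if the primary vendor becomes unavailable.
# The provider functions here are placeholders, not real vendor SDK calls.
from typing import Callable

Provider = Callable[[str], str]

def primary_provider(prompt: str) -> str:
    raise RuntimeError("primary provider unavailable")  # simulate an outage

def fallback_provider(prompt: str) -> str:
    return f"[fallback model] response to: {prompt}"

def complete(prompt: str, providers: list) -> str:
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # in production, catch specific errors and log them
            last_error = exc
    raise RuntimeError("all providers failed") from last_error

if __name__ == "__main__":
    print(complete("Summarize this incident report.", [primary_provider, fallback_provider]))
```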
Resources
1. Statement from Dario Amodei on our discussions with the Department of War (Anthropic, Feb. 26, 2026) https://www.anthropic.com/news/statement-department-of-war
2. Where things stand with the Department of War (Anthropic, Mar. 5, 2026) https://www.anthropic.com/news/where-stand-department-war
3. Anthropic v. U.S. Department of War et al. — Complaint for Declaratory and Injunctive Relief (N.D. Cal., filed Mar. 9, 2026) (court filing PDF) https://cand.uscourts.gov/cases-e-filing/cases/326-cv-01996/anthropic-pbc-v-us-department-war-et-al
4. BOD 17-01: Removal of Kaspersky-branded Products (CISA/DHS, Sept. 13, 2017) https://www.dhs.gov/archive/news/2017/09/13/dhs-statement-issuance-binding-operational-directive-17-01
5. Amazon holds engineering meeting following AI-related outages (Financial Times, Mar. 2026) https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de

Tuesday Mar 03, 2026
Google Gemini Changed the Rules: Are Your API Keys Exposed?
For years, many Google API keys were treated as “public” project identifiers embedded in client-side code and protected mainly through referrer and API restrictions. But a recent discovery suggests Gemini changes that risk model: researchers found nearly 3,000 publicly exposed Google API keys that were still “live” and could be used to interact with Gemini endpoints, creating a new path to unauthorized usage, quota exhaustion, and potentially costly API charges.
In this episode of Cyberside Chats, we unpack what “changed the rules” actually means, why this is a classic cloud governance problem (old assumptions meeting new capabilities), and what to check right now. The bottom line: AI features are quietly expanding the blast radius of credentials you never intended to treat as secrets.
Key Takeaways
1. Audit legacy API keys before and after enabling AI services - Inventory every API key across your cloud projects and confirm it is still required, properly scoped, and has a clear owner. Treat AI enablement as a formal trigger event to reassess any previously published or embedded keys in that same project (see the key-scanning sketch after this list).
2. Treat API keys as sensitive credentials in the AI era - Even if a vendor once described a key as “not a secret,” AI endpoints materially increase financial and potential data exposure risk. Apply rotation, monitoring, strict quotas, and real-time billing alerts accordingly.
3. Enforce least privilege at the API level - Referrer or IP restrictions alone are insufficient. Every key should be explicitly limited to only the APIs it requires. “Allow all APIs” should not exist in production.
4. Isolate AI development from production application projects - Avoid enabling AI services in long-lived projects that contain public-facing keys. Use separate projects, accounts, or subscriptions for AI experimentation and production workloads to reduce blast radius and cost exposure.
5. Update third-party risk management to include AI-driven credential and cost risk - Ask vendors how API keys are scoped, restricted, rotated, and monitored, especially for AI services. Confirm that AI environments are isolated from production systems and that abnormal AI usage or billing spikes are actively monitored.
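For takeaway 1, a quick way to start is scanning your own repositories for embedded Google API keys. The sketch below looks for the well-known "AIza" key prefix pattern; it only surfaces candidate strings, and every hit still needs manual review, scoping, and rotation.

```python
# Minimal secret-scanning sketch: walk a directory tree and flag strings that
# match the typical Google API key format (AIza + 35 chars). This only finds
# candidates; each hit still needs manual review, scoping, and rotation.
import os
import re

GOOGLE_KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")
SKIP_DIRS = {".git", "node_modules", "venv"}

def scan(root: str) -> None:
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        for match in GOOGLE_KEY_PATTERN.findall(line):
                            print(f"{path}:{lineno}: possible Google API key {match[:8]}...")
            except OSError:
                continue

if __name__ == "__main__":
    scan(".")
```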
Resources:
1. Google API Keys Weren’t Secrets. But then Gemini Changed the Rules (Truffle Security) https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
2. Previously harmless Google API keys now expose Gemini AI data (BleepingComputer)
3. DEF CON 31 – “Private Keys in Public Places” (Tom Pohl) (YouTube) https://www.youtube.com/watch?v=7t_ntuSXniw
4. Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security (LMG Security)
5. Google Cloud docs: API keys overview & best practices (Google) https://docs.cloud.google.com/api-keys/docs/overview

Tuesday Feb 24, 2026
Opus 4.6: Changing the Pace of Software Exploitation
Claude Opus 4.6 is generating serious buzz for one reason: it can rapidly spot zero-day vulnerabilities out of the box, suggesting that long-trusted software may no longer be as “safe by default” as security teams assume.
At the same time, Microsoft’s February patch cycle included an unusually high number of zero-days already under active exploitation — real-world evidence that the race is already accelerating, and the window between discovery and impact is shrinking.
In this Cyberside Chats Live, we’ll connect the dots on what this means for defenders in 2026: a shrinking window between discovery and exploitation, shifting assumptions about “well-tested” software, and practical ways to rethink patch prioritization, detection, and exposure management.
Key Takeaways:
1. Plan for exploitation before disclosure - The era of negative-day vulnerabilities is here: flaws may be discovered and weaponized before the broader security community even knows they exist. Assume exploitation could precede public advisories. Build response models around mitigation speed, not just patch timelines.
2. Prioritize exposure, not just severity - In a compressed exploit cycle, CVSS alone won’t protect you. Focus first on internet-facing systems, identity infrastructure, and high-privilege assets. If you cannot quickly identify what is externally reachable, that visibility gap becomes strategic risk (see the prioritization sketch after this list).
3. Assume compromise on exposed assets and monitor accordingly - If attackers can exploit vulnerabilities before the world knows they exist, you may be compromised without a CVE to point to. Increase monitoring on internet-facing systems and critical apps for signs of intrusion: unexpected processes, new admin accounts, unusual authentication patterns, suspicious outbound connections, and persistence mechanisms.
4. Treat compensating controls as first-line defense - When patches aren’t available or cannot be deployed immediately, rapid mitigations matter. Restrict access, disable vulnerable features, deploy firewall and WAF protections, and tighten segmentation. Mitigation agility should be operational, tested, and pre-authorized.
5. Prepare for containment when no patch exists - If exploitation is confirmed and no fix is available, leadership decisions must happen quickly. Define in advance who can isolate systems, disable services, revoke credentials, or temporarily disrupt operations. Shorten containment decision cycles before you need them.
6. Rehearse a “negative-day” tabletop - Run a scenario where exploitation is active, no patch exists, and public disclosure hasn’t occurred. Measure how fast you can reduce exposure, hunt internally, and communicate with executives. This exercise will expose friction points that policies alone will not.
7. Integrate AI into your vendor risk model - If AI is accelerating vulnerability discovery and code generation, your third parties are likely using it too. Update vendor due diligence to assess how AI-generated code is reviewed, secured, and tested. Ask about model governance, secure development controls, and vulnerability response timelines. If you lack visibility into how vendors manage AI risk, that gap becomes part of your attack surface.
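Takeaway 2 argues for weighting exposure, not just severity. One simple way to express that is a scoring function that boosts internet-facing and high-privilege assets; the sketch below uses made-up weights purely to illustrate the idea, not a recommended formula.

```python
# Illustrative prioritization: combine CVSS with exposure and privilege flags.
# The weights are arbitrary examples, not a recommended formula.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cvss: float             # 0.0 - 10.0
    internet_facing: bool
    high_privilege: bool     # e.g., identity infrastructure, admin consoles

def priority(f: Finding) -> float:
    score = f.cvss
    if f.internet_facing:
        score *= 1.5         # exposed systems get hit first in mass exploitation
    if f.high_privilege:
        score *= 1.3         # compromise here cascades furthest
    return round(score, 1)

if __name__ == "__main__":
    findings = [
        Finding("internal file server", cvss=9.8, internet_facing=False, high_privilege=False),
        Finding("VPN gateway", cvss=8.1, internet_facing=True, high_privilege=True),
    ]
    for f in sorted(findings, key=priority, reverse=True):
        print(f"{priority(f):>5}  {f.asset}")
```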
Resources:
1. Anthropic – Evaluating and Mitigating the Growing Risk of LLM-Discovered 0-Days (Feb 5, 2026) https://red.anthropic.com/2026/zero-days/
2. Zero Day Initiative – February 2026 Security Update Review https://www.zerodayinitiative.com/blog/2026/2/10/the-february-2026-security-update-review
3. SecurityWeek – 6 Actively Exploited Zero-Days Patched by Microsoft (Feb 2026) https://www.securityweek.com/6-actively-exploited-zero-days-patched-by-microsoft-with-february-2026-updates/
4. Tenable – Claude Opus and AI-Driven Vulnerability Discovery Analysis https://www.tenable.com/blog/Anthropic-Claude-Opus-AI-vulnerability-discovery-cybersecurity
5. OpenAI releases crypto security tool as Claude blamed for $2.7m Moonwell bug (DL News) https://www.dlnews.com/articles/defi/openai-releases-crypto-security-tool/

Looking for more cybersecurity resources?
Check out our additional resources:
Blog: https://www.LMGsecurity.com/blog/
Top Controls Reports: https://www.LMGsecurity.com/top-security-controls-reports/
Videos: www.youtube.com/@LMGsecurity
