
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live!
Youtube channel: https://www.youtube.com/LMGsecurity
Register Here: https://lmgsecurity.zoom.us/webinar/register/WN_4FpdxB0VQo6aURK1p7_k_g
Episodes

21 minutes ago
Security Debt: The Risk Nobody is Reporting
In this live episode of Cyberside Chats, we dig into security debt and why it continues to sit behind so many major incidents. This is the risk that builds quietly over time when controls are available but never turned on, systems aren’t fully decommissioned, or ownership is unclear.
Using recent examples like Stryker, along with Change Healthcare and Colonial Pipeline, we walk through how attackers don’t always need sophisticated techniques. In many cases, they just take advantage of gaps that have been sitting there for years. We also introduce a simple framework to think about security debt across identity, lifecycle, architecture, governance, and operations, and why most real-world incidents cut across more than one of these areas.
We close with a look at how things are changing. With AI accelerating exploit development, the window to fix these issues is getting smaller. What used to be a manageable delay is quickly becoming real exposure.
Audience takeaways
- Require dual approval for destructive admin actions. Any system where one administrator can wipe, delete, or lock out at scale — Intune, Entra, identity providers, backup consoles, remote management tools — should require a second administrator to approve the action before it executes. Microsoft's Multi Admin Approval does this for Intune. Most identity and backup platforms have an equivalent. Turn it on. Stryker is the case study for what happens when you don't. (Addresses: Governance debt primarily; reduces Identity and Architecture debt blast radius.)
- Enforce phishing-resistant MFA on every administrator and every remote-access path. Not "available," not "recommended" — enforced, with no exceptions. Every admin account. Every VPN. Every Citrix or similar remote portal. Change Healthcare is the case study for what a single missing MFA checkbox costs. (Addresses: Identity debt.)
- Separate admin work from daily work. Admins should use dedicated, hardened devices for privileged tasks — never the same laptop they use for email and browsing. An infostealer on an admin's everyday device is how privileged credentials walk out the door; isolating admin sessions removes that path. Microsoft calls this pattern Privileged Access Workstations; other vendors have equivalents. This directly addresses how attackers likely got Stryker's admin credentials in the first place. (Addresses: Architecture debt; reduces Identity debt.)
- Cut your patch SLA in half and plan capacity accordingly. Whatever your current median time-to-remediate is for critical vulnerabilities, assume you need to hit half of it within the next year. The Mythos research shows attacker timelines are compressing from weeks to hours. Your patch program needs budget, automation, and process changes to keep up — not pep talks. (Addresses: Operational debt.)
- Put expiration dates on every security exception and review them quarterly. If your exception register contains entries with no expiration date, no owner, or a "revisit in the future" stub — those are governance debt. Every open exception should have an expiration date, a named owner, and a scheduled review. Exceptions are fine; forever-exceptions are not. This is also how you close the loop on lifecycle debt: an EOS system running past its decommission date is just an exception someone never wrote down. (Addresses: Governance debt and Lifecycle debt.)
References
For listeners who want to dig into the source material referenced in this episode:
- CISA Alert — Endpoint Management System Hardening After Cyberattack Against US Organization (March 18, 2026). The official CISA advisory issued in the wake of the Stryker incident, including specific guidance on Multi Admin Approval for high-impact actions like device wiping. cisa.gov/news-events/alerts/2026/03/18/cisa-urges-endpoint-management-system-hardening-after-cyberattack-against-us-organization
- CISA Binding Operational Directive 26-02 — Mitigating Risk From End-of-Support Edge Devices (February 5, 2026). The federal directive that defines deadlines for inventorying and decommissioning unsupported edge infrastructure — a useful baseline for anyone managing lifecycle debt. cisa.gov/news-events/directives/bod-26-02-mitigating-risk-end-support-edge-devices
- Andrew Witty Written Testimony, House Energy & Commerce Subcommittee on Oversight (April 30, 2024). UnitedHealth Group CEO's congressional testimony confirming the Change Healthcare breach occurred via a Citrix portal that did not have multi-factor authentication enabled. energycommerce.house.gov/events/oversight-and-investigations-subcommittee-hearing-examining-the-change-healthcare-cyberattack

Tuesday Apr 21, 2026
Claude Code Leak: What Security Leaders Need to Know About AI Coding Agents
Anthropic accidentally exposed the source code for its Claude Code CLI—and while no customer data or model weights were involved, the impacts are significant.
In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down what actually leaked, why the agent layer matters more than most people realize, and what happened next—including the rapid emergence of new open-source alternatives like Claw Code.
They also answer key questions from a client:
1. What risks should organizations be thinking about because of this leak?
2. Does this change how AI coding tools should be monitored?
3. What are some practical recommendations for educating end users and developers?
The conversation focuses on real-world impact: execution risk, supply chain exposure, and the growing need for governance around “vibe coding” tools.
Key Takeaways
1. Treat AI coding agents like controlled execution environments. These tools can read files, execute commands, and modify code. Govern them like CI/CD or automation systems with constrained permissions and segmentation.
2. Assume attackers are studying this architecture right now. The leak removes guesswork. Expect more targeted prompt injection and tool abuse as adversaries analyze how these systems behave internally.
3. Prioritize immediate risks: malicious repos and supply chain abuse. Threat actors are already using this as a lure. Monitor for typosquatting, dependency confusion, and “leaked” tools distributing malware.
4. Ensure developers know what’s official—and what isn’t. Make sure teams can distinguish between official tools and alternatives. If using open-source variants, vet the source, maintainers, and security model.
5. Take this as an opportunity to formalize AI governance for coding and development tools. Many organizations are still experimenting. Define policies, logging, and oversight now, especially around how these tools are approved and used.
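The typosquatting concern in takeaway 3 is easy to automate. Below is a minimal sketch (Python standard library only; the allowlist names are made-up examples, not a vetted list) that flags dependency names which closely resemble, but do not exactly match, an approved package name.

```python
from difflib import SequenceMatcher

# Illustrative allowlist of approved tool/package names (examples only).
OFFICIAL = {"claude-code", "openai", "anthropic"}

def suspicious_lookalikes(dependencies: list[str],
                          threshold: float = 0.8) -> list[tuple[str, str]]:
    """Flag dependency names that are near-matches for an official name --
    a common typosquatting pattern. Exact matches are considered safe."""
    hits = []
    for dep in dependencies:
        if dep in OFFICIAL:
            continue
        for official in OFFICIAL:
            if SequenceMatcher(None, dep, official).ratio() >= threshold:
                hits.append((dep, official))
    return hits
```

A check like this can run in CI against a project's lockfile, so a one-character typo surfaces before the dependency is ever installed.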

Tuesday Apr 14, 2026
Anthropic’s Project Glasswing and its unreleased Mythos model signal a potential turning point in cybersecurity: AI that can find—and potentially exploit—software vulnerabilities at unprecedented scale.
In this episode of Cyberside Chats, Sherri Davidoff and Tom Pohl break down what this means for organizations today. If AI can uncover decades-old bugs in seconds, what happens to patching cycles, vulnerability management, and the balance between attackers and defenders?
They explore the uncomfortable reality: we may be entering a period where vulnerabilities are discovered faster than organizations can fix them—and where access to powerful AI tools could determine who wins and loses in cybersecurity.
From continuous patching to network segmentation and vendor accountability, this episode focuses on what security leaders need to do right now to prepare for a rapidly shifting threat landscape.
Key Takeaways
1. Reduce your internet exposure - If a system doesn’t need to be publicly accessible, don’t put it on the internet. Move services behind firewalls, VPNs, or restricted access controls wherever possible. Attack surface matters more than ever.
2. Vet your vendors’ security practices - Don’t just trust that vendors are handling security well. Ask how they:
- Secure their development lifecycle (SDLC)
- Detect and respond to vulnerabilities
- Patch and distribute fixes
- Vendor risk is now a direct extension of your own risk.
3. Budget for ongoing maintenance of custom code - Custom applications aren’t “done” at deployment. Plan for:
- Regular security testing
- Continuous patching
- Developer time to fix vulnerabilities
- Software is a living system and requires ongoing care and feeding.
4. Segment your network to limit attacker movement - Assume attackers will get in. The goal is to stop them from moving laterally:
- Separate critical systems
- Limit privileged account access
- Control how systems communicate
- Containment is just as important as prevention.
5. Update your incident response plan for zero-day reality - Your IR plan should assume:
- Exploits may exist before patches are available
- Detection may lag behind compromise
- Prepare for faster response, imperfect information, and active exploitation of unknown vulnerabilities.
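The zero-day assumption in takeaway 5 can be quantified. Here is a tiny sketch (Python; timestamps are hypothetical) of the exposure-window metric the episode implies: the time between a working exploit existing and your fix actually being deployed.

```python
from datetime import datetime, timedelta

def exposure_window(exploit_available: datetime, patched: datetime) -> timedelta:
    """Time during which a system was exploitable: from the moment a working
    exploit existed until the fix was deployed (zero if patched first)."""
    return max(patched - exploit_available, timedelta(0))
```

Tracking this number per incident, rather than just time-to-patch, shows whether your response is keeping pace as AI compresses exploit development from weeks to hours.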
Resources & References
1. Anthropic – Project Glasswing - https://www.anthropic.com/glasswing
2. Anthropic – Mythos Preview - https://red.anthropic.com/2026/mythos-preview/
3. Historical example discussed: Microsoft bug tracking system breach (2017)
4. Example referenced: ProxyShell (Microsoft Exchange vulnerabilities and rapid exploitation)

Tuesday Apr 07, 2026
We don’t break in, we badge in
In this episode, Matt interviews Tom and Derek from our pen test team to break down why attackers often don’t need to hack their way in at all.
While most organizations invest heavily in tools like EDR and SIEM, Tom and Derek share how they regularly get inside buildings using nothing more than confidence, a good story, and sometimes even a box of donuts. From posing as copier technicians to tailgating behind employees, their experiences show that people are often the easiest way into an organization.
And once they’re in, things escalate fast. Physical access can quickly turn into network access, whether it’s plugging in a device, jumping on an unlocked workstation, or moving through the environment with far fewer restrictions than an external attacker would face.
The big takeaway is simple. Real-world testing exposes what audits miss. Doors get propped open, employees try to be helpful, and small gaps add up in ways most organizations never see on paper.
If you’re not testing your people and your physical controls, you’re only testing part of your security.
Key takeaways:
1. Attackers target people first, not systems - Social engineering consistently bypasses even mature technical controls.
2. Physical access equals full compromise - Once inside your facility, most security controls can be circumvented quickly.
3. Untested controls should be assumed to fail - If you’re not running social engineering or physical assessments, you don’t know your real risk.
4. Culture is a security control - Employees must feel empowered to challenge, verify, and report suspicious behavior.
5. Real-world testing reveals what audits miss - Offensive social engineering exposes how attacks succeed, not just theoretical vulnerabilities.

Tuesday Mar 31, 2026
Stryker Attack Analysis: Cybersecurity and insurance perspectives
A $25 billion medical device company brought to a standstill—without a zero-day exploit.
In this episode of Cyberside Chats, Sherri Davidoff is joined by cyber insurance expert Bridget Quinn Choi to unpack the Stryker cyberattack and what it reveals about modern enterprise risk. From compromised admin credentials to the abuse of Microsoft Entra and Intune, this incident highlights how attackers are increasingly using trusted tools to cause widespread disruption.
We explore what likely happened, why this wasn’t a “sophisticated” attack in the traditional sense, and how a single identity compromise can cascade into operational shutdown. Bridget brings a unique perspective from the cyber insurance world—explaining how insurers evaluate risk, why some large companies choose to go without coverage, and what organizations lose when they do.
We also dig into phishing-resistant MFA, governance of powerful admin tools, and the evolving role of insurance as both a financial backstop and a driver of better security practices.
If your organization relies on centralized identity and device management systems, this is a conversation you can’t afford to miss.
Key Takeaways for Security Leadership
1. Use Cyber Insurance as a Security Maturity Lever. Don’t treat cyber insurance as a checkbox—it can actively strengthen your security program. Use underwriting requirements to benchmark your controls, ask brokers and carriers where you differ from peers, and take advantage of included services like threat intelligence and incident response support. Approach renewal as a security review, not just a policy purchase.
2. Treat Self-Insurance as a Strategic Risk Decision—Not a Cost Savings Measure. If you’re considering self-insuring cyber risk, account for what you’re giving up: external validation of your controls, a built-in incident response ecosystem, and coordinated support during a crisis. This should be a board-level discussion focused on whether the organization can handle a major operational outage—not just absorb the financial loss.
3. Secure Your Device Management Systems—Because They Can Control Everything at Once. Systems used to manage laptops, servers, and mobile devices can push changes across your entire organization. If attackers gain access, they can disrupt operations at scale. Treat these as central control hubs, limit administrative access, and apply strong monitoring and authentication controls.
4. Require Dual Approval for High-Impact Administrative Actions. Add a second layer of human verification for actions that could impact many systems, such as device wipes or large-scale changes. This introduces intentional friction that helps prevent catastrophic mistakes or misuse.
5. Move to Phishing-Resistant MFA for Privileged Access. Traditional MFA can be bypassed. For high-risk accounts, adopt phishing-resistant methods like passkeys or hardware-backed authentication and prioritize these protections for users with administrative access.
6. Make Sure You Can Actually Recover—Not Just Back Up. Backups only matter if they work under pressure. Test your ability to restore critical systems, ensure backups are protected from attackers, and measure how long recovery actually takes in a real-world scenario.
Resources
1. Stryker cyberattack reporting (New York Times) https://www.nytimes.com/2026/03/12/world/middleeast/stryker-iran-cyberattack.html
2. CISA alert on endpoint management system hardening https://www.cisa.gov/news-events/alerts/2026/03/18/cisa-urges-endpoint-management-system-hardening-after-cyberattack-against-us-organization
3. SecurityWeek coverage of the Stryker incident https://www.securityweek.com/medtech-giant-stryker-crippled-by-iran-linked-hacker-attack/
4. Lumos analysis of the Stryker hack https://www.lumos.com/blog/stryker-hack
5. Microsoft Intune security best practices https://techcommunity.microsoft.com/blog/intunecustomersuccess/best-practices-for-securing-microsoft-intune/4502117

Tuesday Mar 24, 2026
Mass Exploitation 2.0: Web Platforms Under Attack
Mass exploitation vulnerabilities are back—and they’re evolving. In this Cyberside Chats Live episode, we break down the recently disclosed React2Shell vulnerability and the confirmed LexisNexis incident, where attackers exploited an unpatched web application to access cloud infrastructure and exfiltrate data.
But this isn’t new. From SQL Slammer to Log4Shell to ProxyShell, we’ve seen this pattern before: widely deployed, internet-facing systems + simple exploits + automation = rapid, large-scale compromise.
Most importantly, we focus on what matters for organizations today: how to reduce exposure, how to prepare for the next mass exploitation event, and why you should assume compromise the moment one of these vulnerabilities emerges.
Key Takeaways for Security Leaders
1. Inventory and monitor all internet-facing systems. Maintain a current, validated inventory of externally accessible applications and services—because you can’t secure what you don’t know is exposed.
2. Reduce unnecessary exposure at the network edge. Remove or restrict public access to administrative interfaces and systems that do not need to be internet-facing.
3. Build and rehearse a rapid-response playbook for mass-exploitation vulnerabilities. Define roles, timelines, and actions for the first 24–72 hours so your team can move immediately when the next major vulnerability drops.
4. Contact critical vendors and suppliers during major vulnerability events. Don’t wait—proactively verify whether your vendors are affected and whether your data may be at risk through third- or fourth-party exposure.
5. Assume vulnerable internet-facing systems may already be compromised. When mass exploitation begins, attackers are moving at internet speed—patching alone is not enough. Investigate, hunt for persistence, and validate that systems are clean.
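Takeaways 1 and 5 combine naturally: once you have an inventory of internet-facing systems, cross-reference it against known-exploited CVEs. A minimal sketch (Python; the inventory and CVE IDs below are made-up examples — in practice you would load CISA's Known Exploited Vulnerabilities catalog JSON feed and your own asset data):

```python
def exposed_assets(inventory: dict[str, set[str]],
                   kev_cves: set[str]) -> dict[str, set[str]]:
    """For each internet-facing asset, return the known-exploited CVEs it
    carries. Assets with no KEV matches are omitted from the result."""
    return {
        host: cves & kev_cves
        for host, cves in inventory.items()
        if cves & kev_cves
    }
```

Any host this returns should be treated per takeaway 5: assume possible compromise, hunt for persistence, and validate cleanliness rather than just patching.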
Resources
1. React2Shell vulnerability coverage (BleepingComputer) https://www.bleepingcomputer.com/news/security/react2shell-flaw-exploited-to-breach-30-orgs-77k-ip-addresses-vulnerable/
2. LexisNexis breach details (BleepingComputer) https://www.bleepingcomputer.com/news/security/lexisnexis-confirms-data-breach-as-hackers-leak-stolen-files/
3. Compromised web hosting panels in cybercrime markets (BleepingComputer) https://www.bleepingcomputer.com/news/security/compromised-site-management-panels-are-a-hot-item-in-cybercrime-markets/
4. CISA Known Exploited Vulnerabilities Catalog https://www.cisa.gov/known-exploited-vulnerabilities-catalog

Tuesday Mar 17, 2026
Is Anthropic a Pentagon “Supply Chain Risk”?
Anthropic has been labeled a “Supply-Chain Risk to National Security” after refusing two uses of its models: mass surveillance of Americans and lethal autonomous warfare without human oversight. But is Anthropic really a supply-chain risk, and how does this designation affect businesses that use Claude? In this episode, Sherri Davidoff and Matt Durrin unpack the timeline behind the Pentagon’s designation, what Anthropic claims is actually driving the conflict, and what’s known (and not known) about any underlying technical risk. They compare the situation to Kaspersky—where the supply-chain concern centered on privileged security software, foreign-state leverage, and update-channel risk—then bring it back to the enterprise questions that matter: vendor dependency, continuity planning, and what changes when an AI provider becomes politically or contractually constrained.
Key Takeaways for Security Leaders
1. Treat AI vendors as critical dependencies, not just tools.
If a frontier AI provider is embedded in coding, search, documentation, analytics, or agentic workflows, a legal or procurement shock can become an operational disruption. Track where you are dependent on a single model provider and where that dependency would hurt most.
2. For your highest-value uses, define fallback workflows ahead of time.
You may not be able to replace every provider quickly, but you should know what happens if a key AI service becomes unavailable, restricted, or no longer acceptable for regulatory or contractual reasons. For the workflows that matter most, decide in advance how the work gets done without that vendor.
3. Keep guardrails in place when AI is involved in critical changes.
AI can speed up engineering, operations, and decision-making, but that speed can create new failure modes if approvals, testing, rollback, and human review get weakened. Be especially careful in environments where AI-assisted or agentic systems can make infrastructure, code, security, or configuration changes.
4. Inventory where AI has real privilege.
The risk is much higher when AI can execute code, access sensitive data, approve actions, or trigger automations. Focus your review on those integrations first, because those are the places where vendor problems or internal AI mistakes are most likely to turn into real incidents.
5. Make your teams define the actual vendor risk they are worried about.
A vendor can create very different kinds of risk: technical compromise risk, foreign-control risk, continuity risk, or procurement/governance risk. Forcing that distinction helps teams respond more clearly and avoid treating every controversy like a hidden software compromise.
Resources
1. Statement from Dario Amodei on our discussions with the Department of War (Anthropic, Feb. 26, 2026) https://www.anthropic.com/news/statement-department-of-war
2. Where things stand with the Department of War (Anthropic, Mar. 5, 2026) https://www.anthropic.com/news/where-stand-department-war
3. Anthropic v. U.S. Department of War et al. — Complaint for Declaratory and Injunctive Relief (N.D. Cal., filed Mar. 9, 2026) (court filing PDF) https://cand.uscourts.gov/cases-e-filing/cases/326-cv-01996/anthropic-pbc-v-us-department-war-et-al
4. BOD 17-01: Removal of Kaspersky-branded Products (CISA/DHS, Sept. 13, 2017) https://www.dhs.gov/archive/news/2017/09/13/dhs-statement-issuance-binding-operational-directive-17-01
5. Amazon holds engineering meeting following AI-related outages (Financial Times, Mar. 2026) https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f771de

Tuesday Mar 03, 2026
Google Gemini Changed the Rules: Are Your API Keys Exposed?
For years, many Google API keys were treated as “public” project identifiers embedded in client-side code and protected mainly through referrer and API restrictions. But a recent discovery suggests Gemini changes that risk model: researchers found nearly 3,000 publicly exposed Google API keys that were still “live” and could be used to interact with Gemini endpoints, creating a new path to unauthorized usage, quota exhaustion, and potentially costly API charges.
In this episode of Cyberside Chats, we unpack what “changed the rules” actually means, why this is a classic cloud governance problem (old assumptions meeting new capabilities), and what to check right now. The bottom line: AI features are quietly expanding the blast radius of credentials you never intended to treat as secrets.
Key Takeaways
1. Audit legacy API keys before and after enabling AI services - Inventory every API key across your cloud projects and confirm it is still required, properly scoped, and has a clear owner. Treat AI enablement as a formal trigger event to reassess any previously published or embedded keys in that same project.
2. Treat API keys as sensitive credentials in the AI era - Even if a vendor once described a key as “not a secret,” AI endpoints materially increase financial and potential data exposure risk. Apply rotation, monitoring, strict quotas, and real-time billing alerts accordingly.
3. Enforce least privilege at the API level - Referrer or IP restrictions alone are insufficient. Every key should be explicitly limited to only the APIs it requires. “Allow all APIs” should not exist in production.
4. Isolate AI development from production application projects - Avoid enabling AI services in long-lived projects that contain public-facing keys. Use separate projects, accounts, or subscriptions for AI experimentation and production workloads to reduce blast radius and cost exposure.
5. Update third-party risk management to include AI-driven credential and cost risk - Ask vendors how API keys are scoped, restricted, rotated, and monitored, especially for AI services. Confirm that AI environments are isolated from production systems and that abnormal AI usage or billing spikes are actively monitored.
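Takeaways 1 and 3 describe an audit that is straightforward to script. Here is a minimal sketch (Python; the key records and the "ALL" sentinel are illustrative — real data would come from your cloud provider's key-management API) that flags keys with no API restrictions or an allow-all scope, the exact "should not exist in production" condition named above.

```python
def overbroad_keys(keys: list[dict]) -> list[str]:
    """Flag API keys that allow all APIs or carry no API restrictions.
    Each key record is expected to have a 'name' and an 'api_restrictions'
    field (a set of allowed API service names, or None)."""
    flagged = []
    for key in keys:
        allowed = key.get("api_restrictions")
        if not allowed or "ALL" in allowed:
            flagged.append(key["name"])
    return flagged
```

Running this before and after enabling AI services in a project makes "AI enablement as a trigger event" an actual process step rather than a policy sentence.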
Resources:
1. Google API Keys Weren’t Secrets. But then Gemini Changed the Rules (Truffle Security)
https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
2. Previously harmless Google API keys now expose Gemini AI data (BleepingComputer)
3. DEF CON 31 – “Private Keys in Public Places” (Tom Pohl) (YouTube) https://www.youtube.com/watch?v=7t_ntuSXniw
4. Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security (LMG Security)
5. Google Cloud docs: API keys overview & best practices (Google) https://docs.cloud.google.com/api-keys/docs/overview

Looking for more cybersecurity resources?
Check out our additional resources:
Blog: https://www.LMGsecurity.com/blog/
Top Controls Reports: https://www.LMGsecurity.com/top-security-controls-reports/
Videos: www.youtube.com/@LMGsecurity
