
3K Downloads · 61 Episodes
Stay ahead of the latest cybersecurity trends with Cyberside Chats! Listen to our weekly podcast every Tuesday at 6:30 a.m. ET, and join us live once a month for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity professional or an executive looking to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you stay informed and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Join us monthly for an interactive Cyberside Chats: Live! Our next session will be announced soon.
Episodes

21 minutes ago
For years, many Google API keys were treated as “public” project identifiers embedded in client-side code and protected mainly through referrer and API restrictions. But a recent discovery suggests Gemini changes that risk model: researchers found nearly 3,000 publicly exposed Google API keys that were still “live” and could be used to interact with Gemini endpoints, creating a new path to unauthorized usage, quota exhaustion, and potentially costly API charges.
In this episode of Cyberside Chats, we unpack what “changed the rules” actually means, why this is a classic cloud governance problem (old assumptions meeting new capabilities), and what to check right now. The bottom line: AI features are quietly expanding the blast radius of credentials you never intended to treat as secrets.
Key Takeaways
1. Audit legacy API keys before and after enabling AI services - Inventory every API key across your cloud projects and confirm it is still required, properly scoped, and has a clear owner. Treat AI enablement as a formal trigger event to reassess any previously published or embedded keys in that same project.
2. Treat API keys as sensitive credentials in the AI era - Even if a vendor once described a key as “not a secret,” AI endpoints materially increase financial and potential data exposure risk. Apply rotation, monitoring, strict quotas, and real-time billing alerts accordingly.
3. Enforce least privilege at the API level - Referrer or IP restrictions alone are insufficient. Every key should be explicitly limited to only the APIs it requires. “Allow all APIs” should not exist in production.
4. Isolate AI development from production application projects - Avoid enabling AI services in long-lived projects that contain public-facing keys. Use separate projects, accounts, or subscriptions for AI experimentation and production workloads to reduce blast radius and cost exposure.
5. Update third-party risk management to include AI-driven credential and cost risk - Ask vendors how API keys are scoped, restricted, rotated, and monitored especially for AI services. Confirm that AI environments are isolated from production systems and that abnormal AI usage or billing spikes are actively monitored.
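The first takeaway can be partly automated. Below is a minimal sketch (not an official Google tool) for scanning text for strings matching the widely documented Google API key pattern: the `AIza` prefix followed by 35 URL-safe characters, the same heuristic used by public secret scanners. Verify the pattern against current Google documentation before relying on it.

```python
import re

# Heuristic used by public secret scanners: Google Cloud API keys begin
# with "AIza" followed by 35 URL-safe characters. Confirm this pattern
# against current Google documentation before relying on it.
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings of `text` that look like Google API keys."""
    return GOOGLE_API_KEY_RE.findall(text)
```

Run something like this over source trees, config repos, and client-side bundles; any hit should be checked for API restrictions, a clear owner, and a rotation plan.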
Resources:
1. Google API Keys Weren’t Secrets. But then Gemini Changed the Rules (Truffle Security)
https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules
2. Previously harmless Google API keys now expose Gemini AI data (BleepingComputer)
3. DEF CON 31 – “Private Keys in Public Places” (Tom Pohl) (YouTube) https://www.youtube.com/watch?v=7t_ntuSXniw
4. Exposed Secrets, Broken Trust: What the DOGE API Key Leak Teaches Us About Software Security (LMG Security)
5. Google Cloud docs: API keys overview & best practices (Google) https://docs.cloud.google.com/api-keys/docs/overview

Tuesday Feb 24, 2026
Opus 4.6: Changing the Pace of Software Exploitation
Claude Opus 4.6 is generating serious buzz for one reason: it can rapidly spot zero-day vulnerabilities out of the box, suggesting that long-trusted software may no longer be as “safe by default” as security teams assume.
At the same time, Microsoft’s February patch cycle included an unusually high number of zero-days already under active exploitation — real-world evidence that the race is already accelerating, and the window between discovery and impact is shrinking.
In this Cyberside Chats Live, we’ll connect the dots on what this means for defenders in 2026: a shrinking window between discovery and exploitation, shifting assumptions about “well-tested” software, and practical ways to rethink patch prioritization, detection, and exposure management.
Key Takeaways:
1. Plan for exploitation before disclosure - The era of negative-day vulnerabilities is here: flaws may be discovered and weaponized before the broader security community even knows they exist. Assume exploitation could precede public advisories, and build response models around mitigation speed, not just patch timelines.
2. Prioritize exposure, not just severity - In a compressed exploit cycle, CVSS alone won’t protect you. Focus first on internet-facing systems, identity infrastructure, and high-privilege assets. If you cannot quickly identify what is externally reachable, that visibility gap becomes strategic risk.
3. Assume compromise on exposed assets and monitor accordingly - If attackers can exploit vulnerabilities before the world knows they exist, you may be compromised without a CVE to point to. Increase monitoring on internet-facing systems and critical apps for signs of intrusion: unexpected processes, new admin accounts, unusual authentication patterns, suspicious outbound connections, and persistence mechanisms.
4. Treat compensating controls as first-line defense - When patches aren't available or cannot be deployed immediately, rapid mitigations matter. Restrict access, disable vulnerable features, deploy firewall and WAF protections, and tighten segmentation. Mitigation agility should be operational, tested, and pre-authorized.
5. Prepare for containment when patches may not exist - If exploitation is confirmed and no fix is available, leadership decisions must happen quickly. Define in advance who can isolate systems, disable services, revoke credentials, or temporarily disrupt operations. Shorten containment decision cycles before you need them.
6. Rehearse a “negative-day” tabletop - Run a scenario where exploitation is active, no patch exists, and public disclosure hasn’t occurred. Measure how fast you can reduce exposure, hunt internally, and communicate with executives. This exercise will expose friction points that policies alone will not.
7. Integrate AI into your vendor risk model - If AI is accelerating vulnerability discovery and code generation, your third parties are likely using it too. Update vendor due diligence to assess how AI-generated code is reviewed, secured, and tested. Ask about model governance, secure development controls, and vulnerability response timelines. If you lack visibility into how vendors manage AI risk, that gap becomes part of your attack surface.
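The exposure-first prioritization in takeaways 1 and 2 can be sketched as a simple scoring model. The weights below are illustrative assumptions, not a standard; tune them to your environment.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    cvss: float
    internet_facing: bool = False
    identity_infrastructure: bool = False
    actively_exploited: bool = False  # e.g., listed in CISA KEV

def triage_score(f: Finding) -> float:
    """Exposure-weighted priority: evidence of exploitation and external
    reachability outrank raw severity. Weights are illustrative only."""
    score = f.cvss
    if f.actively_exploited:
        score += 10.0
    if f.internet_facing:
        score += 5.0
    if f.identity_infrastructure:
        score += 3.0
    return score

def rank(findings: list[Finding]) -> list[Finding]:
    """Highest-priority findings first."""
    return sorted(findings, key=triage_score, reverse=True)
```

Under this model, a moderate-severity flaw on an internet-facing gateway with active exploitation outranks a critical-severity flaw on an internal-only system, which is the point of exposure-first triage.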
Resources:
1. Anthropic – Evaluating and Mitigating the Growing Risk of LLM-Discovered 0-Days (Feb 5, 2026) https://red.anthropic.com/2026/zero-days/
2. Zero Day Initiative – February 2026 Security Update Review https://www.zerodayinitiative.com/blog/2026/2/10/the-february-2026-security-update-review
3. SecurityWeek – 6 Actively Exploited Zero-Days Patched by Microsoft (Feb 2026) https://www.securityweek.com/6-actively-exploited-zero-days-patched-by-microsoft-with-february-2026-updates/
4. Tenable – Claude Opus and AI-Driven Vulnerability Discovery Analysis https://www.tenable.com/blog/Anthropic-Claude-Opus-AI-vulnerability-discovery-cybersecurity
5. OpenAI releases crypto security tool as Claude blamed for $2.7m Moonwell bug
https://www.dlnews.com/articles/defi/openai-releases-crypto-security-tool/

Tuesday Feb 17, 2026
Nancy Guthrie’s Recovered Footage: The Reality of Residual Data
After the FBI announced it had recovered previously inaccessible video from Nancy Guthrie's disconnected Google Nest doorbell, one thing became clear: deleted surveillance footage may not really be deleted, and law enforcement (or threat actors) could potentially access it.
The case remains ongoing and deeply serious. For enterprise security leaders, the lesson is bigger than a consumer camera: modern systems often retain residual data across devices, local buffers, and vendor backends, even when teams believe it has been removed. In this episode of Cyberside Chats, we examine what that means for corporate environments, including IoT and physical security systems, data retention and legal exposure, vendor access models, and incident response realities when “deleted” data can still be recovered.
This case underscores a complex reality: data can remain accessible long after we believe it's gone, sometimes as a source of risk and sometimes as an invaluable asset.
Key Takeaways:
1. Treat vendors as part of your data perimeter - Review contracts and platform settings to understand who can access footage or logs, what “support access” entails, what data is retained in backend systems, and how data is handled during incident response or legal requests.
2. Control encryption keys and access paths - Know who holds encryption keys, how administrative access is granted and monitored, and whether “end-to-end encryption” claims align with your threat model and regulatory requirements.
3. Include IoT and security devices in your data inventory - Cameras, badge systems, and smart building technology are data systems. Document on-device storage, cloud sync behavior, local buffers, and backend retention — not just cloud repositories.
4. Align retention decisions with legal and regulatory risk - Longer retention may aid investigations but increases eDiscovery scope, breach exposure, and privacy obligations. Retention should be a deliberate business risk decision made with Legal and Compliance.
5. Test whether deletion actually works - Validate purge workflows across vendor platforms and internal systems, including backups and disaster recovery, because “logical deletion” often isn’t “forensic deletion.” Build policies around how long data persists in replicas, backups, buffers, and vendor systems — and plan accordingly in both incident response and governance strategy.
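Takeaway 5, testing whether deletion actually works, can be validated with a simple sweep: after a purge, attempt retrieval from every store where a copy could survive. A toy sketch, with in-memory dictionaries standing in for real platforms:

```python
def residual_copies(record_id: str, stores: dict[str, dict]) -> list[str]:
    """Return the names of every store (primary, replicas, backups,
    vendor backends) that still holds the record after a purge.
    An empty result is the only acceptable outcome."""
    return [name for name, store in stores.items() if record_id in store]
```

In practice each "store" would be a retrieval attempt against a vendor API, a backup catalog, or a DR replica; the point is that purge validation must enumerate every location, not just the primary.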
Resources:
1. Tom’s Guide – How did the FBI get Nancy Guthrie’s Google Nest camera footage if it was disabled — and what does it mean for your privacy? https://www.tomsguide.com/computing/online-security/how-did-the-fbi-get-nancy-guthries-google-nest-camera-footage-if-it-was-disabled-and-what-does-it-mean-for-your-privacy
2. CNET – Amazon’s Ring cameras push deeper into police and government surveillance https://www.cnet.com/home/security/amazons-ring-cameras-push-deeper-into-police-and-government-surveillance/
3. NBC News – Ring doorbell camera employees mishandled customer videos, FTC says https://www.nbcnews.com/business/consumer/ring-doorbell-camera-employees-mishandled-customer-videos-rcna87103
4. Federal Trade Commission – Ring Refunds https://www.ftc.gov/enforcement/refunds/ring-refunds
5. R Street Institute – Apple pulls end-to-end encryption feature from UK after demands for law enforcement access https://www.rstreet.org/commentary/apple-pulls-end-to-end-encryption-feature-from-uk-after-demands-for-law-enforcement-access/
6. Exposing the Secret Office 365 Forensics Tool – An ethical crisis in the digital forensics industry came to a head last week with the release of new details on Microsoft’s undocumented “Activities” API. https://www.lmgsecurity.com/exposing-the-secret-office-365-forensics-tool/

Tuesday Feb 10, 2026
Ransomware Gangs Are Teaming Up
Ransomware gangs aren't operating alone anymore, and the lines between them are increasingly blurry.
In this episode of Cyberside Chats, we look at how modern ransomware groups collaborate, specialize, and team up to scale attacks faster. Using ShinyHunters’ newly launched data leak website as an example, we discuss how different crews handle access, social engineering, and data exposure, and why overlapping roles make attribution, defense, and response harder.
We also explore what this shift means for security leaders, from training and identity protection to preparing for data extortion that doesn’t involve encryption.
Key Takeaways
1. Harden identity and SaaS workflows, not just endpoints - Review help desk procedures, SSO flows, OAuth permissions, and admin access. Many recent incidents succeed without malware or exploits.
2. Train staff for voice phishing and IT impersonation - Add vishing scenarios to security awareness programs, especially for help desk and IT-adjacent roles.
3. Limit blast radius across cloud and SaaS platforms - Enforce least privilege, audit third-party integrations, and regularly review OAuth scopes and token lifetimes.
4. Plan for data extortion without ransomware - Update incident response plans and tabletop exercises to assume data theft and public exposure, even when no systems are encrypted.
5. Practice executive decision-making under data exposure pressure - Tabletop exercises should include legal, communications, and leadership discussions about public leaks, reputational risk, and extortion demands.
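Takeaway 3's OAuth scope review can start as a simple audit pass over granted permissions. The scope names below are placeholders; substitute whatever your identity provider actually issues.

```python
# Placeholder scope names; map these to your identity provider's real scopes.
HIGH_RISK_SCOPES = {"full_access", "admin", "offline_access"}

def flag_overbroad_grants(grants: dict[str, set[str]]) -> dict[str, set[str]]:
    """`grants` maps app name -> granted OAuth scopes. Returns the apps
    holding any high-risk scope, listing only the risky scopes, as
    candidates for least-privilege review."""
    return {app: scopes & HIGH_RISK_SCOPES
            for app, scopes in grants.items()
            if scopes & HIGH_RISK_SCOPES}
```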
Resources
1. Panera Bread Breach Linked to ShinyHunters and Voice Phishing
https://mashable.com/article/panera-bread-breach-shinyhunters-voice-phishing-14-million-customers
2. BreachForums Database Leak Exposes 324,000 Accounts
3. BreachForums Disclosure and ShinyHunters
https://blog.barracuda.com/2026/01/26/breachforums-disclosure-shinyhunters
4. Scattered LAPSUS$ Hunters: 2025’s Most Dangerous Cybercrime
5. Microsoft Digital Defense Report
https://www.microsoft.com/security/business/security-insider/microsoft-digital-defense-report

Tuesday Feb 03, 2026
Top Threat of 2026: The AI Visibility and Control Gap
AI is no longer a standalone tool—it is embedded directly into productivity platforms, collaboration systems, analytics workflows, and customer-facing applications. In this special Cyberside Chats episode, Sherri Davidoff and Matt Durrin break down why lack of visibility and control over AI has emerged as the first and most pressing top threat of 2026.
Using real-world examples like the EchoLeak zero-click vulnerability in Microsoft 365 Copilot, the discussion highlights how AI can inherit broad, legitimate access to enterprise data while operating outside traditional security controls. These risks often generate no alerts, no indicators of compromise, and no obvious “incident” until sensitive data has already been exposed or misused.
Listeners will walk away with a practical framework for understanding where AI risk hides inside modern environments—and concrete steps security and IT teams can take to centralize AI usage, regain visibility, govern access, and apply long-standing security principles to this rapidly evolving attack surface.
Key Takeaways
1. Centralize AI usage across the organization. Require a clear, centralized process for approving AI tools and enabling new AI features, including those embedded in existing SaaS platforms.
2. Gain visibility into AI access and data flows. Inventory which AI tools, agents, and features are in use, which users interact with them, and what data sources they can access or influence.
3. Restrict and govern AI usage based on data sensitivity. Align AI permissions with data classification, restrict use for regulated or highly sensitive data sets, and integrate AI considerations into vendor risk management.
4. Apply the principle of least privilege to AI systems. Treat AI like any other privileged entity by limiting access to only what is necessary and reducing blast radius if credentials or models are misused.
5. Evaluate technical controls designed for AI security. Consider emerging solutions such as AI gateways that provide enforcement, logging, and observability for prompts, responses, and model access.
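Takeaway 1 implies maintaining an approved-tool registry and comparing it against what is actually observed in the environment (e.g., from proxy or CASB logs). A minimal sketch of that comparison; the tool names are assumptions about how your logging identifies each product:

```python
def find_shadow_ai(observed: set[str], approved: set[str]) -> list[str]:
    """Return AI tools seen in the environment that were never centrally
    approved. Comparison is case-insensitive."""
    approved_lc = {t.lower() for t in approved}
    return sorted(t for t in observed if t.lower() not in approved_lc)
```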
Resources
1. Microsoft Digital Defense Report 2025
2. NIST AI Risk Management Framework
https://www.nist.gov/itl/ai-risk-management-framework
3. Microsoft 365 Copilot Zero-Click AI Vulnerability (EchoLeak)
https://www.infosecurity-magazine.com/news/microsoft-365-copilot-zeroclick-ai/
4. Adapting to AI Risks: Essential Cybersecurity Program Updates
https://www.LMGsecurity.com/resources/adapting-to-ai-risks-essential-cybersecurity-program-updates/
5. Microsoft on Agentic AI and Embedded Automation (2026)

Tuesday Jan 27, 2026
The Verizon Outage and the Cost of Concentration
The recent Verizon outage underscores a growing risk in today’s technology landscape: when critical services are concentrated among a small number of providers, failures don’t stay isolated.
In this live discussion, we’ll connect the Verizon outage to past telecom and cloud disruptions to examine how infrastructure dependency creates cascading business impact. We’ll also explore how large-scale outages intersect with security threats targeting telecommunications, where availability, confidentiality, and integrity failures increasingly overlap.
The session will close with actionable takeaways for strengthening resilience and risk planning across cybersecurity and IT programs.
Key Takeaways
1. Diversify your technology infrastructure. Relying on a single carrier, cloud provider, or bundled service creates a single point of failure. Purposeful diversification across providers can reduce the impact of large-scale outages and improve overall resilience.
2. Treat outages as security incidents, not just reliability problems. Large-scale telecom and cloud outages directly disrupt authentication, monitoring, and incident response, and should trigger security workflows—not just IT troubleshooting.
3. Identify and document your dependencies on carriers and cloud providers. Many security controls rely on SMS, voice, cloud identity, or single regions; understanding these dependencies ahead of time prevents dangerous blind spots during outages.
4. Plan and test incident response without phones, SMS, or primary cloud access. Assume your normal communication and authentication methods will fail and ensure your teams know how to coordinate securely when core services are unavailable.
5. Expect outages to increase fraud and social engineering activity. Attackers exploit confusion and urgency during service disruptions, so security teams should prepare staff for impersonation and “service restoration” scams during major outages.
6. Use widespread outages as learning opportunities. Review what happened, assess how your organization was—or could have been—impacted, identify potential areas for improvement, and update incident response, communications, and resilience plans accordingly.
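Takeaway 3's dependency mapping can begin with a check for accounts whose only MFA factors ride on a carrier. A sketch over a hypothetical export of enrolled methods; the method labels are assumptions about your identity platform's export format:

```python
CARRIER_DEPENDENT = {"sms", "voice"}  # factors that fail with the carrier

def carrier_only_mfa(enrollments: dict[str, set[str]]) -> list[str]:
    """`enrollments` maps username -> enrolled MFA methods. Returns users
    who would be locked out of MFA during a telecom outage because every
    enrolled factor depends on the carrier."""
    return sorted(user for user, methods in enrollments.items()
                  if methods and methods <= CARRIER_DEPENDENT)
```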
Resources
1. Verizon official network outage update https://www.verizon.com/about/news/update-network-outage
2. Forrester: Verizon outage reignites reliability concerns https://www.forrester.com/blogs/verizon-outage-reignites-reliability-concerns/
3. CNN: Verizon outage disrupted phone and internet service nationwide https://www.cnn.com/2026/01/15/tech/verizon-outage-phone-internet-service
4. AP News: Verizon outage disrupted calling and data services nationwide https://apnews.com/article/85d658a4fb6a6175cae8981d91a809c9
5. CNN: AT&T outage shows how dependent daily life has become on mobile networks (2024) https://www.cnn.com/2024/02/23/tech/att-outage-customer-service

Tuesday Jan 20, 2026
The FTC has issued an order against General Motors for collecting and selling drivers’ precise location and behavior data, gathered every few seconds and marketed as a safety feature. That data was sold into insurance ecosystems and used to influence pricing and coverage decisions — a clear reminder that how organizations collect, retain, and share data now carries direct security, regulatory, and financial risk.
In this episode of Cyberside Chats, we explain why the GM case matters to CISOs, cybersecurity leaders, and IT teams everywhere. Data proliferation doesn’t just create privacy exposure; it creates systemic risk that fuels identity abuse, authentication bypass, fake job applications, and deepfake campaigns across organizations. The message is simple: data is hazardous material, and minimizing it is now a core part of cybersecurity strategy.
Key Takeaways:
1. Prioritize data inventory and mapping in 2026
You cannot assess risk, select controls, or meet regulatory obligations without knowing what data you have, where it lives, how it flows, and why it is retained.
2. Reduce data to reduce risk
Data minimization is a security control that lowers breach impact, compliance burden, and long-term cost.
3. Expect that regulators care about data use, not just breaches
Enforcement increasingly targets over-collection, secondary use, sharing, and retention even when no breach occurs.
4. Create and actively use a data classification policy
Classification drives retention, access controls, monitoring, and protection aligned to data value and regulatory exposure.
5. Design identity and recovery assuming personal data is already compromised
Build authentication and recovery flows that do not rely on the secrecy of SSNs, dates of birth, addresses, or other static personal data.
6. Train teams on data handling, not just security tools
Ensure engineers, IT staff, and business teams understand what data can be collected, how long it can be retained, where it may be stored, and how it can be shared.
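Takeaways 1 and 2 come together in a retention sweep: once data is inventoried, records held past policy can be flagged for deletion. A minimal sketch, with the record format an assumption for illustration:

```python
from datetime import date, timedelta

def past_retention(records, retention_days: int, today: date) -> list[str]:
    """`records` is an iterable of (record_id, collected_on) pairs.
    Returns the IDs of records held longer than the retention window
    and therefore due for deletion under a data-minimization policy."""
    cutoff = today - timedelta(days=retention_days)
    return [rid for rid, collected_on in records if collected_on < cutoff]
```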
Resources:
1. California Privacy Protection Agency — Delete Request and Opt-Out Platform (DROP)
2. FTC Press Release — FTC Takes Action Against General Motors for Sharing Drivers’ Precise Location and Driving Behavior Data
3. California Delete Act (SB 362) — Overview
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB362
4. Texas Attorney General — Data Privacy Enforcement Actions
https://www.texasattorneygeneral.gov/news/releases
5. Data Breaches by Sherri Davidoff
https://www.amazon.com/Data-Breaches-Opportunity-Sherri-Davidoff/dp/0134506782

Tuesday Jan 13, 2026
Venezuela’s Blackout: Cybercrime Domino Effect
When Venezuela experienced widespread power and internet outages, the impact went far beyond inconvenience—it created a perfect environment for cyber exploitation.
In this episode of Cyberside Chats, we use Venezuela’s disruption as a case study to show how cyber risk escalates when power, connectivity, and trusted services break down. We examine why phishing, fraud, and impersonation reliably surge after crises, how narratives around cyber-enabled disruption can trigger copycat or opportunistic attacks, and why even well-run organizations resort to risky security shortcuts when normal systems fail.
We also explore how attackers weaponize emergency messaging, impersonate critical infrastructure and connectivity providers, and exploit verification failures when standard workflows are disrupted. The takeaway is simple: when infrastructure collapses, trust erodes—and cybercrime scales quickly to fill the gap.

Looking for more cybersecurity resources?
Check out our additional resources:
Blog: https://www.LMGsecurity.com/blog/
Top Controls Reports: https://www.LMGsecurity.com/top-security-controls-reports/
Videos: https://www.youtube.com/@LMGsecurity
