Monday, December 29, 2025

From Filing Cabinets to Quantum Data Governance


This is a continuation of the AI generated podcast curated by Joe Bartolo, J.D. This episode was created from draft excerpts of a book authored by Joe Bartolo, J.D., entitled "Quantum Uncertainty - The Future of Information Governance", which examines the critical intersection of Information Governance (IG), eDiscovery, generative AI, and the emerging field of quantum computing. It details how traditional data management frameworks must evolve to address cybersecurity risks, data privacy obligations like GDPR, and the immense processing power of quantum networks. The author highlights the "Quantum Apocalypse," where advancing technology threatens to break current encryption standards, necessitating a transition to quantum-resistant cryptography. By bridging legal, technical, and organizational silos, the sources advocate for a proactive Quantum Governance strategy to manage complex data landscapes. Ultimately, the material serves as a roadmap for professionals to navigate the shift from classical data oversight to a future defined by quantum entanglement and rapid technological change.

The published book on this topic authored by Joe Bartolo, J.D., "Quantum Uncertainty - The Future of Information Governance," is available on Amazon.



Tuesday, December 23, 2025

Happy Festivus - AI Generated Podcast with an airing of the grievances and tips for Epic Feats of Strength


https://open.spotify.com/episode/0x6J9qozRNugecfvvHc2cm?si=NxJgN0SoQxy7-cUmORn9Pg


This AI generated podcast was created from a source document provided by Grok, in reply to a prompt asking it to provide an airing of the grievances for Festivus, applicable to AI governance and Information Governance. Enjoy the Festivus podcast, and may you accomplish epic feats of strength.

Monday, December 22, 2025

6-7 AI Predictions from subject matter expert Gary Marcus - and a look back at his past 2025 predictions

https://open.spotify.com/episode/0e3laeUpeMA9UloFXrLkxr?si=4610a3adc7124848 

In this article, Gary Marcus evaluates the accuracy of his previous AI forecasts while presenting his new expectations. He contends that the industry is hitting a technical plateau, noting that current models still struggle with reasoning, hallucinations, and physical logic. Marcus anticipates that artificial general intelligence will remain out of reach this year, as the economic returns for software developers fail to match the massive hype. He further suggests that investor enthusiasm may finally cool as the limits of scaling laws become more apparent to the public. Additionally, the text highlights concerns regarding regulatory gaps, increasing energy demands, and the ongoing legal battles over copyrighted training data.


Friday, December 19, 2025

CISO 2025 Report - Examines Relationship Between CISO and the Corporate Board

https://open.spotify.com/episode/6KvBpz10o7ZsRy9d2Rzbaa?si=5c_jOMYKRKSO4EQmXtW9og 

This AI Generated podcast is a continuation of the series, "AI Governance, Quantum Uncertainty and Data Privacy Frontiers", and was created from a 2025 CISO Report by Splunk, a Cisco company, which examines the evolving but often misaligned relationship between Chief Information Security Officers (CISOs) and corporate boards of directors. While technical leaders are gaining more presence in the boardroom, significant discrepancies remain regarding budget adequacy, performance metrics, and the prioritization of business enablement versus technical operations. The text emphasizes that CISOs must adopt stronger business acumen and communication skills to secure necessary funding and demonstrate how security investments drive return on investment (ROI). Furthermore, the report explores the dual nature of artificial intelligence, viewing it as both a powerful defensive tool and a sophisticated weapon for cybercriminals. Industry and regional data highlight that organizations with security expertise at the board level achieve better strategic alignment and overall digital resilience. This comprehensive analysis serves as a guide for security leaders to bridge communication gaps and align their goals with broader corporate strategy.


Thursday, December 18, 2025

AI Incident Database Documents Seriously Concerning AI Behaviors

https://open.spotify.com/episode/2qbttXvAwVhxphIFWSfEMZ?si=ZL3faabsTP-MJcSZs-staQ 


This AI generated podcast was created from excerpts shared on the AI Incident Database website, which details numerous instances of failures in automated and artificial intelligence (AI) systems across various sectors. A significant portion highlights algorithmic bias and discrimination, including instances of Google's AI labeling Black individuals as "gorillas," racial bias in advertising and risk assessment tools, and issues with facial recognition software wrongly identifying people or exhibiting racial and ethnic bias. Furthermore, the text documents numerous failures in autonomous vehicles, such as Tesla crashes while on Autopilot, a self-driving Uber hitting a pedestrian, and a security robot driving into a fountain. There are multiple examples of content moderation failures by platforms like YouTube and Facebook, which inappropriately censored content, promoted explicit material to children, or failed to remove hate speech. Finally, the sources cover system malfunctions with real-world financial or bureaucratic consequences, exemplified by the "Flash Crash" in financial markets, the accidental firing of an employee by an automated system, and the controversial "robo-debt" system in Australia.


Tuesday, December 16, 2025

OpenAI issues Code Red


https://open.spotify.com/episode/3qrXWYlsyMk6mIP4nl1vdO?si=_e-cfbZtSPW2BIivEE9d_g


This AI generated podcast, Episode 30, Season 1, was created from a provided text, an excerpt from a document titled "openai code red.pdf," which consists of a social media post by the Center for Humane Technology (CHT) and subsequent comments discussing the competitive pressure and ethical concerns facing OpenAI. Specifically, the post highlights OpenAI's declaration of a "Code Orange" and a more urgent "Code Red," reportedly initiated by CEO Sam Altman following the launch of Google's Gemini chatbot, to prioritize increasing user engagement over safety. Critics in the comments argue that this focus on growth, even amidst wrongful death lawsuits and mounting AI harms, demonstrates a flawed incentive structure and a need to rebuild AI ethics from an architectural perspective rather than relying on reactive policies. The conversation underscores the tension between market dominance and the responsible governance of artificial intelligence.

Monday, December 15, 2025

Doug Austin - eDiscovery Today - Blog Post Outlining Joe Bartolo's process for creating informative AI generated Infographics



Thank you kindly to Doug Austin for sharing the blog post today on his eDiscovery Today blog. The post discusses some useful information regarding a process that can be used to have generative AI create helpful infographics. Hope you enjoy the article, and hope you find Google's NotebookLM to be useful to your efforts. #genai #notebooklm #infographic #aceds #edrm #arma #ldi