Friday, January 23, 2026

Episode 14 (S2) - Harvard Business Review Looks at True AI Use

Episode 14 (S2) - Harvard Business Review - Jan/Feb 2026



https://open.spotify.com/episode/6yD5pe8Cz6HnbY91OQt8qD?si=VcUGVAPJQ6CUyA6U8dBOtw

This is the continuation of the AI generated podcast series, "AI Governance, Quantum Uncertainty and Data Privacy Frontiers". This episode was created from an article in Harvard Business Review's January/February 2026 edition, written by Cyril Bouquet, Christopher J. Wright and Julian Nolan. This episode looks at the ambitious AI goals and plans of Fortune 1000 organizations such as GM and Apple, and examines the actual use of AI by those organizations to determine if they are following their intended plans.

The provided text explores why many artificial intelligence initiatives fail despite heavy investment, attributing these struggles to a lack of harmony between ambitious goals and organizational capabilities. By contrasting the experiences of companies like General Motors and Apple, the authors illustrate that success depends on a firm's control over its value chain and the breadth of its technology stack. To address these challenges, the article introduces a strategic framework consisting of four distinct approaches: focused differentiation, vertical integration, collaborative ecosystems, and platform leadership. The narrative emphasizes that human engagement and systemic alignment are more critical to scaling innovation than the complexity of the algorithms themselves. Ultimately, the sources suggest that AI should be viewed as a tool to realize strategy rather than a standalone objective.

Wednesday, January 21, 2026

AI Security Snapshot - A look at AI Security in 2025

https://open.spotify.com/episode/245S2gezrfiBKZ8ENgG8uJ?si=9cad070437fa4776 



This is a continuation of the AI Generated podcast, "AI Governance, Quantum Uncertainty and Data Privacy Frontiers." In this episode, a discussion is created from a report issued by the Cloud Security Alliance and Google Cloud, which examines the current landscape of AI security and governance.

The provided sources examine the current landscape of AI security and governance, focusing on a collaborative report from the Cloud Security Alliance and Google Cloud. This research highlights a significant divide between organizations with formal governance policies and those without, noting that established frameworks are the primary drivers of confidence and maturity. A major shift is occurring as security teams transition from reactive observers to early adopters of artificial intelligence to enhance threat detection and incident response. While enterprise adoption of large language models is accelerating and consolidating around a few major providers, many leaders remain concerned about sensitive data exposure and regulatory compliance. Ultimately, the materials emphasize that robust governance is the essential foundation for organizations to move safely from experimental pilots to full-scale AI production.


Tuesday, January 20, 2026

Episode 11 (S2) - National Weather Service using AI - Hallucinates fake town name in Idaho (Whata Bod)

 

This is a continuation of the AI Generated podcast, "AI Governance, Quantum Uncertainty and Data Privacy Frontiers." In this episode, a recent AI hallucination that made the news is discussed. In an article by Victor Tangermann appearing on the Futurism website in January 2026, an incident is discussed where a generative AI solution was used to assist with weather forecasting. Unfortunately, in another incident of AI hallucinating, the solution created a non-existent town, adding the fictional "Whata Bod," Idaho to a forecast map. Fortunately, the mildly inappropriate name was as risqué as the hallucination got.

If you are planning to travel to "Whata Bod"...good luck finding the place...maybe generative AI can help you find it, since it seems to know where it is.


This podcast episode discusses recent reports that highlight a significant failure at the National Weather Service, where artificial intelligence was used to generate weather maps that included hallucinated town names. These sources explain that staffing shortages led the agency to rely on generative AI, resulting in the creation of fictional locations like "Whata Bod" in Idaho. While the agency claims these instances are rare, experts warn that such technological blunders can severely damage public trust and institutional authority. The incident reflects a broader concern regarding the hasty adoption of AI within government sectors without sufficient human oversight. Ultimately, these documents serve as a cautionary tale about the unreliability of AI-generated visuals in critical public safety communications.


Friday, January 16, 2026

Episode 10 (S2) - What is Quantum Computing's Potential

 https://open.spotify.com/episode/2kgN0K6lGxmrXAK2OKKUK4?si=pS5ffLgbTxOZ_eM10nJ4vw


Episode 10 (S2) - What's Quantum Computing's Potential?

This is a continuation of the AI generated podcast series curated by Joe Bartolo, J.D. This Episode 10 of Season 2 was created from an article written by author Tim Bajarin in December 2025, which explores the potential of quantum computing and discusses certain quantum mechanics concepts that could revolutionize the storage and transfer of data.


Quantum Computing: Foundations and Future Implications


Tim Bajarin explores the transformative potential of quantum computing, a field utilizing the laws of physics to solve problems that are impossible for traditional hardware. Unlike standard bits, qubits utilize superposition and entanglement to process vast amounts of data simultaneously rather than in sequence. While these systems currently face limitations known as the NISQ (noisy intermediate-scale quantum) era, characterized by environmental noise and high error rates, their eventual maturity could revolutionize drug discovery, cryptography, and material science. The source emphasizes that while quantum tools are not intended to replace personal devices, they will offer unprecedented optimization and simulation capabilities. Understanding these core concepts is framed as essential literacy for navigating a future where fault-tolerant quantum systems reshape the global technological landscape.
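To make the superposition and entanglement ideas above slightly more concrete, here is a standard textbook sketch in LaTeX notation (my own illustration, not drawn from Bajarin's article). A single qubit holds a weighted blend of both classical values, two entangled qubits share one inseparable state, and n qubits span 2^n amplitudes at once:

    % One qubit: a superposition of |0> and |1> with complex amplitudes
    \[ \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle,
       \qquad \lvert\alpha\rvert^2 + \lvert\beta\rvert^2 = 1 \]
    % Two entangled qubits (a Bell state): measuring one fixes the other
    \[ \lvert\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\left(\lvert 00\rangle + \lvert 11\rangle\right) \]
    % n qubits: 2^n amplitudes evolve together, the source of quantum parallelism
    \[ \lvert\psi_n\rangle = \sum_{x\in\{0,1\}^n} c_x \lvert x\rangle,
       \qquad \sum_x \lvert c_x\rvert^2 = 1 \]

Measurement collapses the superposition to a single outcome, which is why quantum speedups come from carefully choreographed interference rather than from simply "reading out" all 2^n values at once.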






Thursday, January 15, 2026

Episode 9 (S2) - Is AI Destroying Institutions?

 https://open.spotify.com/episode/4XZSVjnFolYWImMSLmnzRn?si=R-BDdj0AQSmU0aX6d9M2qQ


Episode 9 (S2) - Is AI Destroying Institutions?

This is a continuation of the AI generated podcast series entitled "AI Governance, Quantum Uncertainty and Data Privacy Frontiers." This is episode 9 in season 2, and was generated from various content, including an online newsletter curated by esteemed subject matter expert Gary Marcus, and also an article by Woodrow Hartzog and Jessica Silbey, professors at Boston University School of Law.

This overview examines a podcast episode and a supporting newsletter by Gary Marcus regarding the existential threat generative artificial intelligence poses to democratic institutions. The sources center on a scholarly paper by Woodrow Hartzog and Jessica Silbey, who argue that AI is inherently designed to degrade public infrastructure like healthcare, journalism, and education. Rather than being a neutral tool for efficiency, the technology is described as an active agent of institutional collapse that prioritizes optimization over human accountability. The authors originally intended to find a positive angle but ultimately concluded that AI’s current functionality is antithetical to civic life. Consequently, the text serves as an urgent warning that the widespread adoption of AI may permanently enfeeble the foundational structures of society.



Wednesday, January 14, 2026

Law Firm Challenges - Architecture Before Automation: Solving the Legal AI Crisis


 https://open.spotify.com/episode/6H2NE6054cMCRFOPbk4inL?si=36c9c038955244e4

This is a continuation of the ongoing AI generated podcast series curated by Joe Bartolo on Spotify. This episode was generated from a social media post by Lashay Dodd in December 2025, and some comments from generative AI solutions in January 2026. This episode focuses on the fact that law firms often don't have the proper infrastructure in place to support the AI solutions they will likely need to implement.

The provided text argues that law firms fail to successfully implement artificial intelligence not because of the technology itself, but due to a lack of operational architecture. These sources explain that layering advanced tools over disorganized workflows, vague job roles, and scattered data only serves to amplify existing internal chaos. While the primary author advocates for designing infrastructure before adopting automation, the supplemental analysis suggests that AI can actually serve as a diagnostic tool to identify these structural flaws. Ultimately, the text asserts that long-term efficiency is only possible when a firm transitions from a "tribal knowledge" culture to a structured environment where human judgment and machine speed are properly aligned. Successful adoption requires leaders to prioritize systemic blueprints and incentive redesigns over simply purchasing the next popular software solution.

Tuesday, January 13, 2026

Dedicated to my late Cousin Anthony Butera - 40th Anniversary of his Passing - My Love/Hate of Technology

It was 40 years ago today that my cousin Anthony, only a few months older than me, tragically passed away from complications he suffered in a terrible car accident on New Year’s Day, 1986. Having two younger sisters and no brothers, Anthony was the closest thing I had to a brother in my life. Despite my own early talent with computers, his talent in that area always exceeded mine. I can only imagine how much he would have accomplished had his life not been cruelly cut short. This writing is dedicated to his memory on this anniversary of his passing.

I live in a strange kind of love-hate relationship with technology, and at this point it basically defines who I am as a professional.

Everything I do, as an information governance consultant, and in my past roles as legal counsel, comes down to risk. Not just the obvious "Are we going to get sued?" kind of risk, but the subtler forms: regulatory exposure, reputational damage, operational disruption, even the risk of misunderstanding complex technology in front of a judge who is still wrestling with email and metadata, let alone qubits. For a long time I used to say my work was about "cost and risk," but the more I dug in, the more I realized cost is just another risk vector. It is one more variable in a massive equation where the unknowns keep multiplying.

My love story with computers started early. In the early 1980s, when most people still treated school computers like mysterious, fragile boxes, I was a high school kid in South Brunswick teaching the teachers how to use theirs. I loved the feeling that this little machine could do so much more than people expected, faster, cleaner, more elegantly. Back then, the worst-case scenarios were losing a file, crashing a program, or causing a power outage (which I did once… oops). Now, that same category of machine sits at the center of cybersecurity incidents, data breaches, cross border data fights, and messy litigation that can turn on a single mismanaged email.

Today, computers are not background tools in my world; they are crime scenes, key witnesses, and sometimes co-conspirators. When I discuss cybersecurity, eDiscovery, internal investigations, data breach audits, FOIA requests, and social media governance, I am really talking about the same thing from different angles: how much risk is hidden in all this data, and how we keep it from exploding at the worst possible time. Every decision about data, where it lives, how long it is kept, who can touch it, how it is secured, carries legal consequences. I might be helping a client with M&A due diligence one day, worrying about data migration and legacy systems, and the next day focused on remote data collection, self-collection pitfalls, or redaction failures that could accidentally expose privileged or sensitive information.

The love part is that I genuinely enjoy this work. In addition, the people I have had the good fortune of meeting within the legal technology community are among the finest I've ever met. Some of my industry colleagues have become like extended family to me over the past 20 years in which I have focused primarily on technology. I also like the intellectual challenge of untangling a messy data lake, designing sane retention policies, or wrestling with multilingual data sets where one document can contain multiple languages and nested links to other files. I like translating all of this into plain language for judges, executives, or regulators who do not live in the weeds. There is real satisfaction in helping someone see that information governance, data protection, and legal operations are not just overhead; they are survival skills.

But there is a dark edge to that love, and that is where the hate comes in. Technology has given us tools like generative AI that can draft, summarize, and search in ways that were nearly unthinkable a few years ago. It is exhilarating to explore generative AI solutions for legal practice, to think about how they can help lawyers be more efficient, more creative, and more informed. However, the very same tools hallucinate cases out of thin air, create deepfakes that can contaminate evidentiary records, and encourage people to place blind trust in systems they barely understand. When you work at the intersection of technology and law, you do not just see “cool new tech,” you see new failure modes, malpractice, sanctions, regulatory fines, and a long tail of unintended consequences.

Quantum computing is the next chapter in this story, and it might be the most extreme version of the love-hate dynamic. The concepts that drive it, superposition, uncertainty, entanglement, decoherence, are fascinating to me. I can happily nerd out about qubits and qudits, about how these strange quantum states can represent and process information in ways that make classical bits look quaint. I took AP Physics in high school, and although I was a political science major in college at NYU, I have always been fascinated by quantum mechanics.

The upside is huge, breakthroughs in medicine, optimization, materials science, and more. But I can also see the legal and governance nightmare waiting in the wings. How do you explain superposition and entanglement to a judge in a way that matters for causation, reliability, or admissibility? How do you talk about decoherence and still claim you have a stable, reproducible process behind a key piece of evidence?

And then there is the risk that quantum computing blows up our existing security assumptions. The encryption that underpins global finance, government communications, and everyday privacy is not guaranteed to survive contact with sufficiently powerful quantum machines. That is why post-quantum encryption and "quantum governance" are not just buzzwords to me; they are the early lines of defense in a world where attackers can "harvest now, decrypt later." The upside of quantum is enormous, but so is the downside: it has the same dual-use character as generative AI, only with potentially deeper structural consequences.
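For readers who want the "harvest now, decrypt later" threat model spelled out, here is a toy Python sketch of my own (the class and method names are hypothetical placeholders, not a real cryptographic library or anyone's actual attack code):

    # Toy model of "harvest now, decrypt later" (hypothetical names;
    # illustrative only, not a real cryptographic implementation).
    from dataclasses import dataclass, field

    @dataclass
    class Adversary:
        """Records encrypted traffic today, hoping to decrypt it later."""
        archive: list = field(default_factory=list)

        def harvest(self, ciphertext: bytes) -> None:
            # Step 1 (today): intercept and store traffic, unreadable for now.
            self.archive.append(ciphertext)

        def records_exposed(self, quantum_capable: bool) -> int:
            # Step 2 (years later): if the traffic was protected only by RSA
            # or elliptic-curve keys, a large fault-tolerant quantum computer
            # running Shor's algorithm could recover every archived session.
            return len(self.archive) if quantum_capable else 0

    eavesdropper = Adversary()
    eavesdropper.harvest(b"...RSA-protected session captured in 2026...")
    print(eavesdropper.records_exposed(quantum_capable=True))  # -> 1

The governance takeaway is the mirror image of the attack: any data whose confidentiality must outlive the arrival of quantum decryption should be migrating to post-quantum algorithms now, not when the first such machine is announced.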

When you pile all of this together, cybersecurity, data protection, data transfers, data privacy, records management, internal investigations, computer forensics, AI governance, quantum governance, social media governance, body and drone cameras, self-driving vehicles, AI robotics, multilingual data, hyperlinked file structures, the pattern becomes clear. My work is about living in the middle of that complexity and trying to keep the whole system from spiraling into chaos. Every project is a balancing act. Collect too much data and you increase cost, privacy exposure, and breach risk; collect too little and you are facing sanctions or an incomplete factual record. Redact too aggressively and you look evasive; redact too lightly and you leak something you can never take back.

So when I say I love and hate technology, what I really mean is that I love what it lets us do and I hate how easy it is to get hurt by it. I love the creativity, the speed, the power, and the sheer intellectual puzzle of it all. I hate the fact that every new solution seems to generate a new class of challenges, that every innovation comes with a fine print of new risks that legal, compliance, and governance professionals have to parse and manage. Technology is both my toolset and my adversary, my career and my cautionary tale. Risks related to the use of technology have cost me most of my hair, and what is left is rapidly turning gray. I am not complaining; I have chosen to stand right at that fault line, trying to make sense of it for clients, for courts, and, frankly, for myself. I certainly wish my late cousin were still here with us to help me grapple with all of these technology related risks.

May my cousin Anthony Butera (1966–1986) rest in peace and thank you for reading this narrative.