Tuesday, January 27, 2026

Attorneys and the duty of technical competence, as well as ethical responsibilities, related to the use of AI.

https://open.spotify.com/episode/4pjjUUllVqrUQE6kzh9vMX?si=c544503353114872&nd=1&dlsi=2eb7c28e5f7847f1 




Episode 16 (S2) - AI & Duty of Competence for Attorneys

This is a continuation of the AI generated podcast, AI Governance, Quantum Uncertainty and Data Privacy Frontiers. Episode 16 of Season 2 looks at the duty of competence and ethical guidelines related to the use of AI by attorneys. The sources include an article by Anna Conley in the North Carolina Journal of Law & Technology, Volume 27, Issue 1 (2025), and a second source, a PowerPoint presentation shared in June 2025 by the law firm Williams & Connolly, which included panelists Craig D. Singer and Randall J. Riskin, both partners in the firm, as well as Jaquelyn Stanley, Senior Compliance Counsel at Pfizer.
This presentation examines the ethical challenges and professional duties associated with using generative artificial intelligence in legal practice. The panelists highlight critical risks, such as hallucinations and the potential for revealing confidential client information through insecure platforms. Legal professionals must navigate specific obligations regarding competence, supervision, and transparency when integrating these tools into tasks like research and drafting. The materials also review recent court sanctions for inaccurate AI filings and provide guidance on complying with evolving bar association rules. Ultimately, the sources advocate for best practices that prioritize human oversight and the protection of client interests. #genai #dutyofcompetence #ethics #aiethics #aigov #infogov #dataprivacy #dataprotection #edrm #aceds #iapp #arma #legaltech





Friday, January 23, 2026

Episode 14 (S2) - Harvard Business Review Looks at True AI Use

 Episode 14 - (S2) - Harvard Business Review - Jan/Feb 2026



https://open.spotify.com/episode/6yD5pe8Cz6HnbY91OQt8qD?si=VcUGVAPJQ6CUyA6U8dBOtw

This is the continuation of the AI generated podcast series, "AI Governance, Quantum Uncertainty and Data Privacy Frontiers". This episode was created from an article in the January/February 2026 edition of Harvard Business Review, written by Cyril Bouquet, Christopher J. Wright and Julian Nolan. This episode looks at the ambitious AI goals and plans of Fortune 1000 organizations such as GM and Apple, and examines the actual use of AI by those organizations to determine if they are following their intended plans.

The provided text explores why many artificial intelligence initiatives fail despite heavy investment, attributing these struggles to a lack of harmony between ambitious goals and organizational capabilities. By contrasting the experiences of companies like General Motors and Apple, the authors illustrate that success depends on a firm's control over its value chain and the breadth of its technology stack. To address these challenges, the article introduces a strategic framework consisting of four distinct approaches: focused differentiation, vertical integration, collaborative ecosystems, and platform leadership. The narrative emphasizes that human engagement and systemic alignment are more critical to scaling innovation than the complexity of the algorithms themselves. Ultimately, the sources suggest that AI should be viewed as a tool to realize strategy rather than a standalone objective.

Wednesday, January 21, 2026

AI Security Snapshot - A look at AI Security in 2025

https://open.spotify.com/episode/245S2gezrfiBKZ8ENgG8uJ?si=9cad070437fa4776 



This is a continuation of the AI Generated podcast, "AI Governance, Quantum Uncertainty and Data Privacy Frontiers." In this episode, a discussion is created from "AI Security Snapshot: Governance and Adoption Trends 2025," a report issued by the Cloud Security Alliance and Google Cloud, which examines the current landscape of AI security and governance.

The provided sources examine the current landscape of AI security and governance, focusing on a collaborative report from the Cloud Security Alliance and Google Cloud. This research highlights a significant divide between organizations with formal governance policies and those without, noting that established frameworks are the primary drivers of confidence and maturity. A major shift is occurring as security teams transition from reactive observers to early adopters of artificial intelligence to enhance threat detection and incident response. While enterprise adoption of large language models is accelerating and consolidating around a few major providers, many leaders remain concerned about sensitive data exposure and regulatory compliance. Ultimately, the materials emphasize that robust governance is the essential foundation for organizations to move safely from experimental pilots to full-scale AI production.


Tuesday, January 20, 2026

Episode 11 (S2) - National Weather Service using AI - Hallucinates fake town name in Idaho (Whata Bod)

 

This is a continuation of the AI Generated podcast, "AIGovernance, Quantum Uncertainty and Data Privacy Frontiers." In this episode, a recent AI Hallucination that made the news is discussed. In an article by Victor Tangermann appearing on the Futurism website in January 2026,an incident is discussed where a generative AI solution was used to assist with weather forecasting. Unfortunately, in another incident of AI hallucinating, the solution being utilized created a non-existent town, adding the fictional "Whata Bod" Idaho. Fortunately, the name wasn't any more risqué thanthe inappropriate town name it created.

If you are planning to travel to "Whata Bod"...good luck finding the place...maybe generative AI can help you find it, since it seems to know where it is.


This podcast episode discusses recent reports that highlight a significant failure at the National Weather Service, where artificial intelligence was used to generate weather maps that included hallucinated town names. These sources explain that staffing shortages led the agency to rely on generative AI, resulting in the creation of fictional locations like "Whata Bod" in Idaho. While the agency claims these instances are rare, experts warn that such technological blunders can severely damage public trust and institutional authority. The incident reflects a broader concern regarding the hasty adoption of AI within government sectors without sufficient human oversight. Ultimately, these documents serve as a cautionary tale about the unreliability of AI generated visuals in critical public safety communications.


Friday, January 16, 2026

Episode 10 (S2) - What is Quantum Computing's Potential

 https://open.spotify.com/episode/2kgN0K6lGxmrXAK2OKKUK4?si=pS5ffLgbTxOZ_eM10nJ4vw


Episode 10 (S2) - What's Quantum Computing's Potential?

This is a continuation of the AI generated podcast series curated by Joe Bartolo, J.D. This Episode 10 of Season 2 was created from an article written by author Tim Bajarin in December 2025, which explores the potential of quantum computing and discusses certain quantum mechanics concepts that could revolutionize the storage and transfer of data.


Quantum Computing: Foundations and Future Implications


Tim Bajarin explores the transformative potential of quantum computing, a field that harnesses the laws of quantum mechanics to solve problems that are intractable for traditional hardware. Unlike standard bits, qubits utilize superposition and entanglement to process vast amounts of data simultaneously rather than sequentially. While these systems currently face the limitations of the NISQ (noisy intermediate-scale quantum) era, characterized by environmental noise and high error rates, their eventual maturity could revolutionize drug discovery, cryptography, and material science. The source emphasizes that while quantum tools are not intended to replace personal devices, they will offer unprecedented optimization and simulation capabilities. Understanding these core concepts is framed as essential literacy for navigating a future where fault-tolerant quantum systems reshape the global technological landscape.
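
For readers new to the notation, superposition has a compact textbook form (a standard formulation, not something drawn from Bajarin's article). A single qubit's state is

\[ |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1, \]

where a measurement yields 0 with probability \(|\alpha|^2\) and 1 with probability \(|\beta|^2\). A register of n entangled qubits occupies a 2^n-dimensional state space, which is why quantum hardware can represent so many possibilities at once.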






Thursday, January 15, 2026

Episode 9 (S2) - Is AI Destroying Institutions?

 https://open.spotify.com/episode/4XZSVjnFolYWImMSLmnzRn?si=R-BDdj0AQSmU0aX6d9M2qQ


Episode 9 (S2) - Is AI Destroying Institutions?

This is a continuation of the AI generated podcast series entitled "AI Governance, Quantum Uncertainty and Data Privacy Frontiers." This is episode 9 in season 2, and was generated from various content, including an online newsletter curated by esteemed subject matter expert Gary Marcus, and also an article by Woodrow Hartzog and Jessica Silbey, law professors at Boston University.

This overview examines a podcast episode and a supporting newsletter by Gary Marcus regarding the existential threat generative artificial intelligence poses to democratic institutions. The sources center on a scholarly paper by Woodrow Hartzog and Jessica Silbey, who argue that AI is inherently designed to degrade public infrastructure like healthcare, journalism, and education. Rather than being a neutral tool for efficiency, the technology is described as an active agent of institutional collapse that prioritizes optimization over human accountability. The authors originally intended to find a positive angle but ultimately concluded that AI’s current functionality is antithetical to civic life. Consequently, the text serves as an urgent warning that the widespread adoption of AI may permanently enfeeble the foundational structures of society.



Wednesday, January 14, 2026

Law Firm Challenges - Architecture Before Automation: Solving the Legal AI Crisis


 https://open.spotify.com/episode/6H2NE6054cMCRFOPbk4inL?si=36c9c038955244e4

This is a continuation of the ongoing AI generated podcast series curated by Joe Bartolo on Spotify. This episode was generated from a social media post by Lashay Dodd in December 2025, and some comments from generative AI solutions in January 2026. This episode focuses on the fact that law firms often don't have the proper infrastructure in place to support the AI solutions they will likely need to implement.

The provided text argues that law firms fail to successfully implement artificial intelligence not because of the technology itself, but due to a lack of operational architecture. These sources explain that layering advanced tools over disorganized workflows, vague job roles, and scattered data only serves to amplify existing internal chaos. While the primary author advocates for designing infrastructure before adopting automation, the supplemental analysis suggests that AI can actually serve as a diagnostic tool to identify these structural flaws. Ultimately, the text asserts that long-term efficiency is only possible when a firm transitions from a "tribal knowledge" culture to a structured environment where human judgment and machine speed are properly aligned. Successful adoption requires leaders to prioritize systemic blueprints and incentive redesigns over simply purchasing the next popular software solution.

Tuesday, January 13, 2026

Dedicated to my late Cousin Anthony Butera - 40th Anniversary of his Passing - My Love/Hate of Technology

It was 40 years ago today that my cousin Anthony, only a few months older than me, tragically passed away from complications he suffered in a terrible car accident on New Year’s Day, 1986. Having two younger sisters and no brothers, Anthony was the closest thing I had to a brother in my life. Despite my own early talent with computers, his talent in that area always exceeded mine. I can only imagine how much he would have accomplished had his life not been cruelly cut short. This writing is dedicated to his memory on this anniversary of his passing.

I live in a strange kind of love-hate relationship with technology, and at this point it basically defines who I am as a professional.

Everything I do, as an information governance consultant, and in my past roles as legal counsel, comes down to risk. Not just the obvious "Are we going to get sued?" kind of risk, but the subtler forms: regulatory exposure, reputational damage, operational disruption, even the risk of misunderstanding complex technology in front of a judge who is still wrestling with email and metadata, let alone qubits. For a long time I used to say my work was about "cost and risk," but the more I dug in, the more I realized cost is just another risk vector. It is one more variable in a massive equation where the unknowns keep multiplying.

My love story with computers started early. In the early 1980s, when most people still treated school computers like mysterious, fragile boxes, I was a high school kid in South Brunswick teaching the teachers how to use theirs. I loved the feeling that this little machine could do so much more than people expected: faster, cleaner, more elegant. Back then, the worst-case scenarios were losing a file, crashing a program, or causing a power outage (which I did once… oops). Now, that same category of machine sits at the center of cybersecurity incidents, data breaches, cross-border data fights, and messy litigation that can turn on a single mismanaged email.

Today, computers are not background tools in my world; they are crime scenes, key witnesses, and sometimes co-conspirators. When I discuss cybersecurity, eDiscovery, internal investigations, data breach audits, FOIA requests, and social media governance, I am really talking about the same thing from different angles: how much risk is hidden in all this data, and how we keep it from exploding at the worst possible time. Every decision about data (where it lives, how long it is kept, who can touch it, how it is secured) carries legal consequences. I might be helping a client with M&A due diligence one day, worrying about data migration and legacy systems, and the next day focused on remote data collection, self-collection pitfalls, or redaction failures that could accidentally expose privileged or sensitive information.

The love part is that I genuinely enjoy this work. In addition, the people I have had the good fortune of meeting within the legal technology community are among the finest I've ever met. Some of my industry colleagues have become like extended family over the past 20 years in which I have focused primarily on technology. I also like the intellectual challenge of untangling a messy data lake, designing sane retention policies, or wrestling with multilingual data sets where one document can contain multiple languages and nested links to other files. I like translating all of this into plain language for judges, executives, or regulators who do not live in the weeds. There is real satisfaction in helping someone see that information governance, data protection, and legal operations are not just overhead; they are survival skills.

But there is a dark edge to that love, and that is where the hate comes in. Technology has given us tools like generative AI that can draft, summarize, and search in ways that were nearly unthinkable a few years ago. It is exhilarating to explore generative AI solutions for legal practice, to think about how they can help lawyers be more efficient, more creative, and more informed. However, the very same tools hallucinate cases out of thin air, create deepfakes that can contaminate evidentiary records, and encourage people to place blind trust in systems they barely understand. When you work at the intersection of technology and law, you do not just see "cool new tech"; you see new failure modes: malpractice, sanctions, regulatory fines, and a long tail of unintended consequences.

Quantum computing is the next chapter in this story, and it might be the most extreme version of the love-hate dynamic. The concepts that drive it (superposition, uncertainty, entanglement, decoherence) are fascinating to me. I can happily nerd out about qubits and qudits, about how these strange quantum states can represent and process information in ways that make classical bits look quaint. I took AP Physics in high school, and although I was a political science major in college at NYU, I have always been fascinated by quantum mechanics.

The upside is huge: breakthroughs in medicine, optimization, materials science, and more. But I can also see the legal and governance nightmare waiting in the wings. How do you explain superposition and entanglement to a judge in a way that matters for causation, reliability, or admissibility? How do you talk about decoherence and still claim you have a stable, reproducible process behind a key piece of evidence?

And then there is the risk that quantum computing blows up our existing security assumptions. The encryption that underpins global finance, government communications, and everyday privacy is not guaranteed to survive contact with sufficiently powerful quantum machines. That is why post-quantum encryption and "quantum governance" are not just buzzwords to me; they are the early lines of defense in a world where attackers can "harvest now, decrypt later." The upside of quantum is enormous, but so is the downside: it has the same dual-use character as generative AI, only with potentially deeper structural consequences.

When you pile all of this together (cybersecurity, data protection, data transfers, data privacy, records management, internal investigations, computer forensics, AI governance, quantum governance, social media governance, body and drone cameras, self-driving vehicles, AI robotics, multilingual data, hyperlinked file structures), the pattern becomes clear. My work is about living in the middle of that complexity and trying to keep the whole system from spiraling into chaos. Every project is a balancing act. Collect too much data and you increase cost, privacy exposure, and breach risk; collect too little and you are facing sanctions or an incomplete factual record. Redact too aggressively and you look evasive; redact too lightly and you leak something you can never take back.

So when I say I love and hate technology, what I really mean is that I love what it lets us do and I hate how easy it is to get hurt by it. I love the creativity, the speed, the power, and the sheer intellectual puzzle of it all. I hate the fact that every new solution seems to generate a new class of challenges, that every innovation comes with a fine print of new risks that legal, compliance, and governance professionals have to parse and manage. Technology is both my toolset and my adversary, my career and my cautionary tale. Risks related to the use of technology have cost me most of my hair, and what is left is rapidly turning gray. I am not complaining; I have chosen to stand right at that fault line, trying to make sense of it for clients, for courts, and, frankly, for myself. I certainly wish my late cousin were still here with us to help me grapple with all of these technology-related risks.

May my cousin Anthony Butera (1966–1986) rest in peace and thank you for reading this narrative.

 

 

37 Dimensions of Light Reached in Recent Quantum Experiment

 https://www.linkedin.com/posts/joe-bartolo-4433126_quantum-quantumcomputing-quantumrisk-activity-7416830805941055490-K6ZP?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAEdJPMBJ2msVSRVP7m90GS7R5fJsy_fVGs

37 dimensions reached by light in a recent quantum system experiment. An AI generated podcast discussion (Episode 7, S2) of the quantum computing experiment and managing quantum risk. Post-quantum encryption standards are under development and refinement. New light experiments are assisting in making advancements in the stability of quantum computing environments. Are we doing enough to address potential risks from such technologies? What about the risks posed by conducting these sub-atomic experiments in the first place?
Personally, I have read and watched enough bad/crazy sci-fi that it is difficult for me not to be concerned.






Friday, January 9, 2026

Episode 5 (S2) - AI Governance in 2025 Recap - EU vs. U.S.

 https://open.spotify.com/episode/3JTOdumPlcbds8DKTgOoCw



This is an AI Generated podcast created from prompts posed to Grok's AI and Google Gemini AI. The responses from the generative AI chatbots were used as the basis for creating this generative AI podcast session. The podcast looks at key differences between the EU's approach to the governance of AI and the approach being followed by the United States.


Thursday, January 8, 2026

Episode 4 (S2) - The Anatomy of a Data Breach - Aflac's Breach in 2025 Exposes over 22.5 Million People's Info


https://open.spotify.com/episode/3ObS55XtHeO8y9loUF3k3b?si=cxVLmaWcRWaVw2oWRhpBOA

This is episode 4 (Season 2) of the ongoing AI generated podcast series. This episode looks at articles appearing from December 29, 2025 through January 2, 2026, which discussed the Aflac data breach that impacted more than 22 million people in 2025. In June 2025, the major insurance provider Aflac experienced a massive cyberattack that compromised the sensitive data of approximately 22.65 million individuals. These reports detail how hackers accessed personal details such as Social Security numbers, medical records, and contact information, potentially making it the largest health-related breach of the year. Although the company quickly contained the incident and avoided a ransomware shutdown, security experts believe the sophisticated criminal group Scattered Spider may be responsible. In response, Aflac is providing identity theft protection and credit monitoring to those affected while simultaneously facing numerous class action lawsuits for alleged negligence. These sources underscore the growing financial and legal risks facing the insurance industry as it battles a rising tide of coordinated cybercrime.



Wednesday, January 7, 2026

China is building an AI supercomputer in space


 https://open.spotify.com/episode/7ynNvWrlRaZOr6VINmXFNC?si=rJi-Z23MTRuc6BAFmqEjoQ


This is an AI generated podcast created from a Popular Mechanics article published in January 2026, which explores a modern space race centered on establishing artificial intelligence infrastructure and supercomputing power in low-Earth orbit. This shift toward orbital data centers is driven by the need for environmental sustainability, as space-based systems can utilize solar energy to reduce the massive water and power consumption of terrestrial facilities. Major global players, including China and American corporations like Starcloud and Google, are currently testing advanced hardware and training large language models in the harsh conditions of outer space. While entities like Starcloud have successfully deployed high-performance GPUs, Chinese collaborations are simultaneously building satellite constellations to lay the groundwork for future orbital supercomputers. Despite technical challenges such as radiation and extreme temperatures, experts anticipate fully operational supercomputers will inhabit space by the 2030s. This technological evolution represents a significant transition from the lunar missions of the past to a future defined by space-based AI dominance.

Tuesday, January 6, 2026

Post-Quantum Challenges - There Are Many

 



This AI generated podcast was created from a narrative provided by Grok's generative AI chatbot that outlines the emerging field of quantum governance, focusing on the ethical, legal, and security frameworks necessary to manage advanced computing technologies. These sources highlight how quantum capabilities like superposition and entanglement challenge existing standards, specifically regarding GDPR compliance, digital forensics, and eDiscovery obligations. Experts warn that the fragile nature of quantum data, which degrades over time and changes when observed, conflicts with traditional requirements for evidence preservation and the chain of custody. To address these risks, organizations like the World Economic Forum, NIST, and EDRM are advocating for post-quantum cryptography and updated legal protocols. The collective discourse emphasizes a shift toward crypto-agility and proactive policy adaptation to ensure that innovation does not undermine global data privacy or judicial integrity. Key industry figures, including Joe Bartolo, curator of this AI generated podcast series, are noted for bridging the gap between complex quantum physics and practical regulatory compliance.
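
Since "crypto-agility" can sound like a buzzword, here is a minimal sketch of the design idea in Python. It is purely illustrative, not drawn from the episode or any post-quantum library: each sealed record carries the name of the algorithm that produced it, so a deprecated algorithm can later be swapped for a post-quantum one by re-sealing the data instead of redesigning the system. The HMAC-based "sealing" and the algorithm names are stand-ins for real ciphers or signature schemes.

import hashlib
import hmac
import os

# Hypothetical registry: algorithm name -> tagging function. A post-quantum
# scheme would be registered here the same way, without touching callers.
ALGORITHMS = {
    "hmac-sha256": lambda key, data: hmac.new(key, data, hashlib.sha256).digest(),
    "hmac-sha3-512": lambda key, data: hmac.new(key, data, hashlib.sha3_512).digest(),
}

def seal(alg: str, key: bytes, data: bytes) -> dict:
    # Record which algorithm sealed the payload, enabling later migration.
    return {"alg": alg, "data": data, "tag": ALGORITHMS[alg](key, data)}

def verify(record: dict, key: bytes) -> bool:
    # Look up the recorded algorithm; unknown names fail safe.
    fn = ALGORITHMS.get(record["alg"])
    return fn is not None and hmac.compare_digest(fn(key, record["data"]), record["tag"])

key = os.urandom(32)
record = seal("hmac-sha256", key, b"client file")
assert verify(record, key)

# Migration is a re-seal under the newer algorithm, not a system rewrite.
record = seal("hmac-sha3-512", key, record["data"])
assert verify(record, key)

The point of the pattern is that "which algorithm" is data, not code; that is what would let an organization rotate to NIST's post-quantum standards when its risk assessment says it is time.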

https://open.spotify.com/episode/18l9cTaNnWYD6QVDImwC6v?si=LZNz-o-mTQS_Dhk-LrB0GA