
AI Workgroup - Session 2: Avoiding the Traps - October 2025

Thursday, October 09, 2025 12:25 PM | Victoria Brenes (Administrator)

Date: 10/5/2025

Panelists: Margaret Spence, Alyssa Chuck

Tags: AI workgroup, AI, compliance, risk, talent development 

Follow along with our handout!

AI-Generated Summary:

The group explored AI risks and compliance, reviewing outputs from large language models and discussing the importance of careful prompting and verification when using generative AI tools.

AI Risk Assessment and Compliance

The meeting opened with a discussion of AI risks and compliance, with panelists Margaret Spence and Alyssa Chuck sharing their assessments of outputs from large language models (LLMs) on generative AI risks. Margaret noted that the input prompt was insufficiently detailed, while Alyssa highlighted the robustness of the responses and emphasized the importance of basic AI literacy. Both agreed that the LLMs missed significant risks, and Alyssa expressed concern about the lack of citations to European AI policies. The discussion also touched on the need for better vendor vetting and the potential pitfalls of relying on free, easily accessible AI tools like ChatGPT. The session concluded with conversations about AI's impact on various fields, including design work with Canva AI, and with plans for new membership types and educational initiatives focused on AI training and project sharing.

Generative AI and Information Retrieval

The group discussed the use of generative AI and search engines for information retrieval. Margaret emphasized the importance of effective prompting and highlighted that different AI models, such as ChatGPT and Perplexity, provide varying levels of useful information. She suggested asking AI models what one should know about a topic as a strategy to receive relevant guidance. Alyssa noted the distinction between generative AI and search engines, pointing out that search engines often lead users to external sources like Wikipedia, which may not be suitable for research purposes. Margaret warned about AI hallucinations and suggested double-checking AI-generated content by searching for the information on Google to verify its accuracy and originality. Steve Yudewitz noted that even when AI is asked to double-check its responses, it may still make mistakes. The group also discussed the need for critical thinking when using AI tools and the importance of verifying sources, especially when dealing with copyrighted material.

AI Adoption and Bias Challenges

Margaret and Alyssa discussed their experiences with different AI models, finding Gemini the most practical and Claude the least reliable. They emphasized the importance of understanding AI laws and regulations, particularly for talent development professionals who are being asked to train others on these models. Margaret highlighted the need for organizations to strike a balance between AI guardrails and learning opportunities, as excessive restrictions can hinder AI adoption. They also discussed the challenges of unconscious bias in AI prompts and outputs, with Margaret sharing data on how women and neurodiverse individuals are disproportionately affected by AI disruptions in the workplace.

Bias Mitigation in Generative AI

The group discussed strategies for mitigating bias in generative AI, with Margaret emphasizing the importance of questioning AI outputs and Alyssa suggesting the use of thumbs up/down buttons with detailed explanations. George announced the formation of three breakout rooms focused on different AI tools (ChatGPT, Gemini, and Claude), and helped participants select their preferred rooms. After the breakout sessions, the conversation ended with a brief Q&A session followed by an optional networking period.

AI Evolution and Human Touch

The group discussed the impact of AI on various fields, with Margaret emphasizing that Claude will continue to be a leading tool for coding and writing due to its superior capabilities. They explored how AI tools like ChatGPT, Copilot, and others are evolving and potentially replacing traditional software and search engines. The discussion also touched on the importance of representation in AI-generated content and the need for sensitivity in AI applications, as highlighted by Victoria Brenes and Alyssa. The conversation ended with a reflection on the value of human touch in communications, with members emphasizing the irreplaceable role of human interaction and personalization.

Canva AI 

The group discussed using Canva AI for design work, with several members sharing their experiences and tips. Alyssa explained how to effectively use Canva AI for images by providing simple, metaphor-based prompts, while Margaret revealed that Claude is now integrated with Canva as its backend.

Earn Your Digital Badge!

To get credit for this session and move toward your official AI Workgroup Digital Badge, please fill out the quick Follow Up Survey.



