- Is this more than an LLM wrapper? (It’s a red flag if I think I could reasonably achieve the same outcomes using my existing LLM subscriptions, given a strong cup of coffee and a meetings-free afternoon.)
- Is it super-clear to me where and how AI is used and where and how it isn’t?
- Does this product or service get me results that meet my (very high) quality bar, faster and cheaper than however I’m getting to that quality bar today?
- Does the way the product or service reaches those high-quality results at the very least not *add* to my ethical concerns about AI's impact on the most vulnerable among us?
- Is the non-deterministic, prone-to-wander nature of current generative AI tech given due consideration? The red flag: a “successful” implementation relies on AI’s output looking the same way every time, over a sustained period. Who’s checking to make sure it’s staying consistent? (The answer can’t be “You, Katelyn.”)
- Is there an accountability structure in place? When baked-in automation gets something wrong, who makes it right? Is there a plan in place to fix both small and big mistakes, and does that plan reduce my own exposure?

I need solid “Yes” answers down the line to proceed. What does your decision-making matrix look like on this front? What are your AI product red flags?
By the way, our Tuesday morning AI chats are not only still going (RSVP here for the meeting link), we now have an asynchronous version in the form of a LinkedIn group! Join us any time as we talk ethics, practical considerations, and the changing cultural and regulatory landscape of generative AI.
Thanks,
Katelyn
Photo by Kenny Eliason on Unsplash