Welcome to the fourth installment in my series on AI product management frameworks. So far, I've explored the evolution of AI products, the classic CRISP-DM framework, and CRISP-ML for machine learning. Today, I turn our attention to generative AI products and the unique challenges they present for product managers, from prompt engineering to subjective quality evaluation.
The Generative AI Product Management Challenge
Generative AI represents a paradigm shift in AI product development. Unlike traditional machine learning, which focuses on prediction and classification, generative models create entirely new content—whether text, images, code, or other media.
This shift introduces several critical challenges for product managers:
Foundation model dynamics: Most generative AI products build on foundation models (such as GPT-4, Claude, or Stable Diffusion) rather than being built from scratch
Prompt engineering: Success often depends on effective prompt design rather than traditional feature engineering
Subjective evaluation: Output quality can be highly subjective and context-dependent
Creative applications: Use cases often involve creative tasks previously considered exclusively human
Ethical considerations: New concerns around copyright, misinformation, and appropriate content generation
These characteristics demand a reimagined approach to product management, one that CRISP-GEN AI helps provide.
CRISP-GEN AI: Adapting for Generative Models
CRISP-GEN AI represents an adaptation of the CRISP-DM framework tailored explicitly for generative AI projects. Let's examine how each phase evolves to address the unique aspects of generative AI products.
The PM's Guide to the CRISP-GEN AI Phases
1. Business Understanding for Generative AI
The business understanding phase for generative AI products requires consideration of unique factors.
The Product Manager's Role:
Use case suitability assessment: Determine whether generative AI is the appropriate approach for the business need
Interaction design strategy: Define how users will interact with the generative system (prompts, iterations, feedback)
Expectation management: Set realistic expectations about generative capabilities and limitations
Ethical boundary definition: Establish clear guidelines for appropriate content generation
Competitive positioning: Differentiate from the growing landscape of generative AI products
PM Deliverables:
Generative AI use case validation
User interaction flow diagrams
Stakeholder expectation document
Ethical guidelines and content policies
Competitive positioning statement
This phase helps product managers ensure generative AI is the right solution and set appropriate expectations.
2. Data Understanding for Generative AI
The data understanding phase for generative AI products focuses on different considerations compared to traditional ML.
The Product Manager's Role:
Foundation model selection criteria: Define requirements for selecting appropriate base models
Fine-tuning data assessment: Evaluate what custom data will be needed for fine-tuning
Data diversity planning: Ensure training data represents diverse perspectives and use cases
Data rights clearance: Verify appropriate permissions for training data
Retrieval corpus planning: For RAG (Retrieval-Augmented Generation), define knowledge base requirements (a minimal retrieval sketch follows this phase)
PM Deliverables:
Foundation model selection criteria
Fine-tuning data acquisition plan
Data diversity assessment framework
Data rights clearance documentation
Knowledge base requirements (for RAG)
This phase helps product managers plan the data foundation for generative capabilities.
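To make the RAG deliverable concrete, here is a minimal retrieval sketch in Python. It uses a toy bag-of-words similarity purely for illustration; a real knowledge base would use embedding models and a vector store, and the chunk size and top-k values shown are assumptions, not recommendations.

```python
# Minimal sketch of a retrieval step for RAG, using a toy bag-of-words
# similarity in place of a real embedding model. Chunk size and scoring
# are illustrative assumptions, not recommendations.
from collections import Counter
import math

def chunk(text: str, max_words: int = 50) -> list[str]:
    """Split a document into fixed-size word chunks for the retrieval corpus."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def similarity(a: str, b: str) -> float:
    """Cosine similarity over word counts; a real system would use embeddings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the top-k chunks to prepend to the prompt as grounding context."""
    return sorted(corpus, key=lambda c: similarity(query, c), reverse=True)[:k]
```

The PM-relevant point: chunking granularity and the number of retrieved passages are product decisions that directly shape answer quality and cost.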
3. Data Preparation for Generative AI
Data preparation for generative AI often involves techniques that differ from traditional machine learning.
The Product Manager's Role:
Fine-tuning dataset curation: Guide the selection and preparation of examples for fine-tuning
Retrieval corpus development: Oversee the creation of knowledge bases for RAG
Data filtering policy: Establish guidelines for filtering inappropriate content from training data (a simple filtering sketch follows this phase)
Synthetic data strategy: Determine if and how synthetic data will supplement real data
Resource allocation: Manage resources for potentially compute-intensive data preparation
PM Deliverables:
Fine-tuning dataset specification
Retrieval corpus development plan
Data filtering policy
Synthetic data strategy
Data preparation resource plan
Effective data preparation sets the foundation for generative capabilities that align with product goals.
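As a rough illustration of what a data filtering policy can compile down to, here is a minimal sketch. The blocklist contents and length thresholds are placeholders I have invented for the example; production pipelines typically layer classifier-based filters and human spot checks on top of rules like these.

```python
# Minimal sketch of a data filtering pass over fine-tuning examples.
# The blocklist and length thresholds are placeholder policy choices;
# production filters typically combine classifiers with human review.
BLOCKED_TERMS = {"example_slur", "example_pii_marker"}  # hypothetical policy list

def passes_policy(example: str, min_words: int = 5, max_words: int = 2000) -> bool:
    """Apply simple length and content checks defined by the filtering policy."""
    words = example.lower().split()
    if not (min_words <= len(words) <= max_words):
        return False
    return not any(term in words for term in BLOCKED_TERMS)

def filter_dataset(examples: list[str]) -> list[str]:
    """Keep only examples that satisfy the content policy."""
    return [ex for ex in examples if passes_policy(ex)]
```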
4. Modeling Through Prompt Engineering
The modeling phase in generative AI shifts from algorithm development to prompt engineering and model adaptation.
The Product Manager's Role:
Prompt strategy development: Define the approach to prompt design (templates, few-shot learning, etc.), working with the appropriate team members in your organization (a template sketch follows this phase)
Fine-tuning strategy: Determine the extent and focus of model fine-tuning
Parameter optimization: Guide decisions about generation parameters such as temperature and top-p
Prompt management system: Design systems for managing and versioning prompts
Evaluation criteria: Establish how generated outputs will be evaluated
PM Deliverables:
Prompt strategy document
Fine-tuning specification
Parameter guidelines for different use cases
Prompt management requirements
Generation evaluation framework
This phase helps product managers guide the development of effective generation capabilities.
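Here is a minimal sketch of what the prompt strategy and parameter deliverables might look like in code. The template wording, the few-shot example, and the parameter values are illustrative assumptions; the actual settings should come out of your evaluation phase, not this sketch.

```python
# Minimal sketch of a prompt template with few-shot examples and per-use-case
# generation parameters. All values and wording are illustrative assumptions.
from string import Template

FEW_SHOT_EXAMPLES = [
    ("Summarize: The meeting covered Q3 targets...", "Q3 targets were discussed..."),
]

PROMPT_TEMPLATE = Template(
    "You are a helpful assistant.\n"
    "$examples\n"
    "User request: $request\n"
    "Response:"
)

# Hypothetical per-use-case settings a PM might document as a deliverable.
GENERATION_PARAMS = {
    "factual_qa":    {"temperature": 0.2, "top_p": 0.9},   # low randomness
    "brainstorming": {"temperature": 0.9, "top_p": 0.95},  # higher diversity
}

def build_prompt(request: str) -> str:
    """Render the few-shot examples and the user request into a final prompt."""
    shots = "\n".join(f"Example input: {q}\nExample output: {a}" for q, a in FEW_SHOT_EXAMPLES)
    return PROMPT_TEMPLATE.substitute(examples=shots, request=request)
```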
5. Evaluation Beyond Metrics
Evaluating generative AI requires a different approach than traditional ML evaluation.
The Product Manager's Role:
Multi-faceted evaluation: Design evaluation frameworks that capture different quality dimensions
Human evaluation protocols: Develop processes for human assessment of generated content
Red teaming: Organize adversarial testing to identify potential misuse or harmful outputs
Comparative evaluation: Set up A/B testing between different prompt strategies or models (a comparison sketch follows this phase)
User feedback integration: Design mechanisms to capture and incorporate user feedback
PM Deliverables:
Comprehensive evaluation framework
Human evaluation guidelines
Red team testing protocol
A/B testing plan
User feedback collection mechanism
Thorough evaluation helps product managers ensure that generative capabilities meet user needs and quality standards.
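As a simple illustration of comparative evaluation, the sketch below averages human ratings for two prompt strategies. The ratings are made up for the example; a real A/B test would use a defined rubric, multiple raters, and a significance test before declaring a winner.

```python
# Minimal sketch of comparing two prompt strategies with human ratings.
# Ratings are placeholders; in practice they come from an evaluation
# protocol with defined quality dimensions and multiple raters.
from statistics import mean

def compare_strategies(ratings_a: list[int], ratings_b: list[int]) -> str:
    """Report which prompt strategy scored higher on a 1-5 human rating scale."""
    avg_a, avg_b = mean(ratings_a), mean(ratings_b)
    winner = "A" if avg_a > avg_b else "B"
    return f"Strategy A: {avg_a:.2f}, Strategy B: {avg_b:.2f} -> prefer {winner}"

# Example with made-up scores for two prompt variants:
print(compare_strategies([4, 5, 3, 4], [3, 3, 4, 3]))
```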
6. Deployment with Guardrails
Deploying generative AI products requires careful consideration of guardrails and monitoring.
The Product Manager's Role:
Content filtering implementation: Ensure appropriate content filters are in place (a guardrail sketch follows this phase)
Usage monitoring plan: Define what usage patterns will be tracked
Feedback loops: Establish mechanisms for users to report issues with generated content
Gradual rollout strategy: Plan phased deployment to manage risks
User education: Develop materials to help users interact effectively with generative features
PM Deliverables:
Content filtering requirements
Usage monitoring specification
User feedback mechanisms
Staged rollout plan
User education materials
Careful deployment helps manage the unique risks associated with generative AI.
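Here is a minimal sketch of deployment guardrails, assuming a hypothetical generate callable standing in for whatever model API your product uses. The violates_policy check is a placeholder; production systems rely on moderation classifiers rather than phrase matching.

```python
# Minimal sketch of a guardrail wrapper around a generation call: input/output
# filtering, usage logging, and a safe fallback. `generate` is a hypothetical
# stand-in for whatever model API your product uses.
import logging

logging.basicConfig(level=logging.INFO)
FALLBACK_MESSAGE = "Sorry, I can't help with that request."

def violates_policy(text: str) -> bool:
    """Placeholder content check; real systems use moderation classifiers."""
    return "blocked_phrase" in text.lower()

def guarded_generate(generate, prompt: str, user_id: str) -> str:
    """Wrap generation with input/output filtering and usage monitoring."""
    if violates_policy(prompt):
        logging.info("blocked_input user=%s", user_id)
        return FALLBACK_MESSAGE
    output = generate(prompt)
    if violates_policy(output):
        logging.info("blocked_output user=%s", user_id)
        return FALLBACK_MESSAGE
    logging.info("served user=%s prompt_len=%d", user_id, len(prompt))
    return output
```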
7. Continuous Learning and Improvement
While not always formalized as a separate phase, continuous improvement is critical for generative AI products.
The Product Manager's Role:
Prompt refinement process: Establish processes for ongoing prompt optimization
Model update criteria: Define when to update or fine-tune the underlying models
Performance monitoring: Track generation quality and user satisfaction over time (a monitoring sketch follows this phase)
Emerging use case identification: Identify new use cases based on user behavior
Competitive monitoring: Track advances in foundation models and competitor offerings
PM Deliverables:
Prompt refinement framework
Model update criteria
Performance monitoring dashboard requirements
Use case evolution tracking
Competitive monitoring process
This ongoing focus ensures generative AI products remain competitive and valuable over time.
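To illustrate performance monitoring, here is a minimal sketch that tracks a rolling average of user ratings and flags regressions against a baseline. The window size and baseline threshold are assumptions to be calibrated against your own historical data.

```python
# Minimal sketch of tracking generation quality over time with a rolling
# average of user ratings (1-5), flagging regressions against a baseline.
# Window size and threshold are illustrative assumptions.
from collections import deque

class QualityMonitor:
    def __init__(self, window: int = 100, baseline: float = 4.0):
        self.ratings = deque(maxlen=window)  # most recent user ratings
        self.baseline = baseline

    def record(self, rating: int) -> None:
        self.ratings.append(rating)

    def regressed(self) -> bool:
        """True when the rolling average falls below the agreed baseline."""
        if len(self.ratings) < self.ratings.maxlen:
            return False  # not enough data yet
        return sum(self.ratings) / len(self.ratings) < self.baseline
```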
Product Management Challenges Unique to Generative AI
Generative AI presents several challenges that are either unique or significantly amplified compared to traditional ML products:
1. Managing the "Wow" Factor vs. Practical Utility
Generative AI often creates an initial "wow" reaction that can mask practical limitations.
PM Strategy:
Focus early user testing on practical use cases rather than demos
Measure sustained usage after initial novelty wears off
Define clear utility metrics beyond subjective impressions
2. Balancing Creativity and Constraints
Generative AI products must strike a balance between creative freedom and appropriate guardrails.
PM Strategy:
Define clear content policies that still allow for creative expression
Create tiered permissions for different user segments
Implement progressive disclosure of advanced capabilities
3. Managing Foundation Model Dependencies
Most generative AI products build on third-party foundation models, creating new dependencies.
PM Strategy:
Develop contingency plans for model API changes
Monitor foundation model updates and their impact
Consider multi-model strategies to reduce dependency, as the fallback sketch below illustrates
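A minimal sketch of the multi-model fallback idea, assuming each provider is a callable with the same text-in, text-out signature. Real providers differ in APIs and capabilities, so an adapter layer would normally sit in front of each.

```python
# Minimal sketch of a multi-model fallback to reduce foundation-model
# dependency. The uniform callable signature is an assumption; real
# providers need adapters to look this interchangeable.
def generate_with_fallback(prompt: str, providers: list) -> str:
    """Try each provider in priority order; fail over on errors."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # e.g., API change, outage, rate limit
            last_error = err
    raise RuntimeError("all providers failed") from last_error
```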
4. Handling Unpredictable Outputs
Despite guardrails, generative models can produce unexpected outputs.
PM Strategy:
Implement user-friendly feedback mechanisms
Create clear escalation paths for problematic outputs
Design appropriate human review processes
5. User Experience for Co-Creation
Generative AI often involves collaboration between the user and AI rather than simple tool usage.
PM Strategy:
Design interfaces that facilitate iterative refinement
Create clear mental models for how the AI works
Provide appropriate controls for guiding the generation
Implementing CRISP-GEN AI: Practical Considerations
As you apply CRISP-GEN AI to your generative AI products, consider these implementation recommendations:
1. Prompt-Centric Product Development
Treat prompts as a core product asset:
Version control your prompts (see the registry sketch after this list)
A/B test different prompt strategies
Document prompt design decisions
Create prompt templates for consistency
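One lightweight way to treat prompts as versioned assets is a simple registry, sketched below. The field names and the name@version convention are assumptions; many teams get the same discipline by keeping prompt files in git with documented change rationale.

```python
# Minimal sketch of treating prompts as versioned product assets.
# Fields and naming conventions are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PromptVersion:
    name: str          # stable identifier, e.g. "summarize_v3"
    template: str      # the prompt text itself
    version: str       # version string for rollbacks and A/B tests
    rationale: str     # documented design decision
    created: date = field(default_factory=date.today)

registry: dict[str, PromptVersion] = {}

def register(p: PromptVersion) -> None:
    """Store a prompt under name@version so experiments can pin exact text."""
    registry[f"{p.name}@{p.version}"] = p
```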
2. Rapid Iteration Cycles
Generative AI enables faster iteration than traditional ML:
Implement short feedback loops for prompt refinement
Create mechanisms for quick user feedback
Develop processes for prompt performance tracking
3. Human-in-the-Loop Processes
Design appropriate human oversight:
Define when human review is required (see the routing sketch after this list)
Create efficient review interfaces
Balance automation and human judgment
Measure review consistency
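Here is a minimal sketch of review routing under assumed inputs: a model confidence score, a policy flag, and a use-case stakes label. The threshold is a placeholder for whatever your review policy specifies.

```python
# Minimal sketch of routing outputs to human review. The confidence score
# and flag sources are hypothetical; thresholds belong in your review policy.
def needs_human_review(confidence: float, policy_flagged: bool,
                       high_stakes_use_case: bool, threshold: float = 0.8) -> bool:
    """Escalate when the model is unsure, content is flagged, or stakes are high."""
    return policy_flagged or high_stakes_use_case or confidence < threshold
```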
4. Product Metrics Beyond Technical Performance
Develop holistic success metrics (computed in the sketch after this list):
User satisfaction with generated content
Iteration cycles before acceptance
Time saved compared to non-AI alternatives
Novel applications discovered by users
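As an illustration, the sketch below computes the first three metrics above from per-session logs. The field names are assumptions about what your product instrumentation captures.

```python
# Minimal sketch of computing holistic metrics from per-session logs.
# Field names are illustrative assumptions about product instrumentation.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    satisfied: bool        # user accepted the final output
    iterations: int        # regeneration cycles before acceptance
    minutes_saved: float   # vs. the non-AI baseline for this task

def summarize(sessions: list[Session]) -> dict[str, float]:
    return {
        "satisfaction_rate": mean(s.satisfied for s in sessions),
        "avg_iterations": mean(s.iterations for s in sessions),
        "avg_minutes_saved": mean(s.minutes_saved for s in sessions),
    }
```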
Business Impact of Effective Generative AI Product Management
Implementing CRISP-GEN AI can deliver significant business benefits:
Faster time to market: Streamlined processes for leveraging foundation models
Higher user satisfaction: Better alignment between generative capabilities and user needs
Reduced risk: More effective guardrails and evaluation processes
Competitive differentiation: Focus on unique use cases rather than generic capabilities
Operational efficiency: More effective prompt management and iteration
The Future of Generative AI Product Management
As generative AI continues to evolve, product management practices will need to adapt:
Multi-agent systems: Managing products where multiple AI agents collaborate
Custom model development: Balancing foundation models with custom capabilities
Multimodal generation: Managing products that work across text, image, audio, and video
Adaptive user experiences: Interfaces that adjust based on user behavior and feedback
Responsible AI at scale: Managing ethical considerations across large user bases
CRISP-GEN AI offers a structured approach to the unique challenges of managing generative AI products. By adapting traditional frameworks to address prompt engineering, subjective evaluation, and ethical considerations, it helps product managers deliver successful generative AI products.
As the generative AI landscape continues to evolve rapidly, having a systematic approach to product development becomes even more critical. CRISP-GEN AI offers that structure while maintaining the flexibility to adapt to emerging capabilities and use cases.
In my next post, I’ll conduct a comparative analysis of CRISP-DM, CRISP-ML, and CRISP-GEN AI, identifying the strengths each framework brings to different types of AI products and how product managers can select the right approach for their specific needs.
This is the fourth post in my series on "Enhancing CRISP-DM for Modern AI Product Management."