AskPoker.ai Log #1: Reality Check and Pivot to Concierge MVP

This week I learned some hard truths about my assumptions building AskPoker.ai. Time for radical honesty about what’s working and what isn’t.
For context, AskPoker.ai is my attempt to build a conversational AI poker coach that gives recreational players instant, solver-backed analysis of their hands. Instead of wrestling with complex software like PioSOLVER, players can just describe their hand in plain English and get clear strategic advice. I’ve been working on this for about 6 months, and this is my first public update on the journey. You can read more about the original vision in my announcement post.
What I Tried: Forum Engagement Experiment
My plan was simple: test demand by answering poker hands on Reddit and 2+2 with AskPoker analysis. Use the tool to generate insights, then sign responses with “made with askpoker.ai” to see if people were interested in the platform.
The goal was to validate the core value proposition while driving some early traffic. What I found instead was a masterclass in why assumptions need real-world testing.
Hard Truths Discovered
1. The Product Isn’t Ready
Table size limitation: My backend solver only handles 5-max tables, not the 6+ player tables common in most online games. When users post hands from full ring games, the solver has no data, so the LLM just improvises answers based on general poker knowledge.
I knew this was a constraint going in, but experiencing it in practice hit different. I now understand I must solve this limitation eventually (though interestingly, in the French market where 5-max cash games dominate, this might not be as critical).
Scope mismatch: I built the tool to analyze single decisions (“Should I call this river bet?”), but users want comprehensive hand reviews. They want to know where their decisions went wrong across multiple streets, not just get advice on one spot.
Insufficient backend data: The solver gives me the optimal action, but lacks the rich context needed for proper explanations - full ranges, equity calculations, expected value, fold equity. Without this data, the LLM fills in gaps with educated guesses that sometimes sound confident but are wrong.
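To make that gap concrete, the missing context could be captured as a structured payload handed to the LLM alongside the recommended action. This is a hypothetical sketch of such a schema (names and fields are my own illustration, not the actual AskPoker backend):

```python
from dataclasses import dataclass


@dataclass
class SolverSpot:
    """Hypothetical structured solver output: the context an LLM needs
    to explain a decision instead of filling gaps with guesses."""

    optimal_action: str                    # e.g. "call"
    action_frequencies: dict[str, float]   # e.g. {"call": 0.7, "fold": 0.3}
    hero_range: list[str]                  # combos hero can plausibly hold here
    villain_range: list[str]               # combos villain represents
    equity: float                          # hero's equity vs villain's range
    expected_value_bb: float               # EV of the optimal action, in big blinds


spot = SolverSpot(
    optimal_action="call",
    action_frequencies={"call": 0.7, "fold": 0.3},
    hero_range=["AhKh", "QsQc"],
    villain_range=["JdTd", "9h9c"],
    equity=0.38,
    expected_value_bb=2.1,
)
```

With a payload like this, an explanation of “why call?” can cite actual numbers rather than generic poker wisdom.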
Basic bugs: Simple things break, like currency-to-blind conversion. I expected the LLM to handle this automatically when calling the backend, but it doesn’t.
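The conversion itself is trivial once it’s done explicitly in code instead of being left to the LLM. A minimal sketch (function name and rounding choice are mine, not the actual backend):

```python
def to_big_blinds(amount: float, big_blind: float) -> float:
    """Convert a currency amount (e.g. a $7.50 bet) into big-blind units,
    rounded to one decimal place for readability."""
    if big_blind <= 0:
        raise ValueError("big blind must be positive")
    return round(amount / big_blind, 1)


# A $7.50 bet at $0.25/$0.50 stakes is 15 big blinds.
to_big_blinds(7.50, 0.50)  # → 15.0
```

Doing this deterministically before the solver call removes one whole class of silent errors from the pipeline.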
2. Market Reality Check
Forum fragmentation: Even supposedly active communities like 2+2 and the poker subreddits aren’t exactly buzzing. There’s discussion, but it’s not the vibrant marketplace of poker questions I had imagined.
French market gap: I found essentially no active French poker discussion platforms. This surprised me - I expected at least some local communities.
User behavior insight: Players consume more content (YouTube, training sites) than they engage in forum discussions. This suggests my marketing approach needs to meet users where they actually spend time, not where I assumed they’d be.
3. The Concierge MVP Reality Check
Here’s where brutal honesty matters: my initial assessment was too rosy.
Actual results: About 75% of my responses were ignored, ~15% got genuine positive responses from users (actual replies thanking me for the analysis), and ~10% were called out for containing errors where the LLM had hallucinated explanations.
Reddit truth: People might not always give positive feedback when you help them, but they never let wrong analysis slide. Every mistake gets caught and corrected, often harshly.
Key insight: Demand absolutely exists for expert-level analysis in accessible language, but the quality bar is unforgiving. There’s no room for “pretty good” when giving strategic advice.
The Pivot: From Product to Process
Why Concierge MVP Makes Sense
The product is too broken for self-service user testing, but the manual approach validates something crucial: the core value proposition works when executed correctly.
More importantly, I need to answer a fundamental question: Are recreational players ready to trust an AI coach? This is about more than just technical capability - it’s about user psychology and market readiness.
While I’m confident I can solve the technical issues given enough time, creating trust is a different challenge entirely. This requires marketing and communication skills that aren’t in my natural wheelhouse, making validation even more critical.
My new process: users message me via email or Discord for hand analysis. I use the backend tool to generate insights, then manually refine and explain the analysis. This creates a controlled environment to test whether players accept AI-driven coaching when it’s delivered thoughtfully.
New Strategy: Content-Led Growth
Blog foundation: Generate poker-specific content using the LLM+solver combination. This content becomes source material for TikTok scripts and other marketing efforts - creating a scalable content system.
TikTok expansion: Meet users where they actually are, not in forums. Poker content performs well on TikTok, and the format suits quick hand analysis and strategy tips.
Novel LLM usage: This isn’t just automating existing tasks - I’m using AI as both an exploration agent (to probe solver outputs) and as a writer (to explain complex concepts clearly). This combination creates content capabilities that didn’t exist before.
Transparent AI positioning: Brand myself as “Human + AI” poker analysis. Be upfront about using AI tools while emphasizing human judgment and experience.
Technical Priorities (Despite “No Product Development”)
I said I’d stop building features, but I need to fix the solver-LLM pipeline to eliminate hallucinations. The focus shifts from user-facing features to content creation tools:
- Fix solver data output to provide complete strategic context
- Build reliable solver-LLM pipeline for scalable content creation
- Develop the infrastructure needed for consistent, accurate analysis
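One way to attack the hallucination problem is to pass only solver-verified facts into the prompt and explicitly forbid the LLM from going beyond them. A rough sketch of that idea (the prompt wording and helper name are my own illustration, not the actual pipeline):

```python
def build_grounded_prompt(spot_facts: dict[str, str]) -> str:
    """Assemble a prompt that constrains the LLM to solver-verified facts,
    so explanations cannot drift into invented numbers."""
    facts = "\n".join(f"- {key}: {value}" for key, value in spot_facts.items())
    return (
        "Explain this poker decision for a recreational player.\n"
        "Use ONLY the solver facts below; if a detail is missing, "
        "say so instead of guessing.\n\n"
        f"Solver facts:\n{facts}"
    )


prompt = build_grounded_prompt({
    "optimal action": "call",
    "call frequency": "70%",
    "hero equity vs villain range": "38%",
})
```

This doesn’t eliminate hallucination on its own, but it makes every claim in the output traceable back to a solver fact, which is exactly what the forum experiment showed readers will check.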
What’s Next
Weekly devlogs: Continue documenting discoveries and pivots transparently.
Manual hand analysis: Build audience and expertise through direct engagement while testing AI coach acceptance.
Content system development: Create the infrastructure for scalable marketing through AI-assisted content creation.
Eventual transition: Move from concierge to automated product once the core experience is proven and technical limitations are resolved.
Closing Reflection
Sometimes the best product development is realizing when to stop developing and start validating. This week taught me that building in public means being honest about failures, not just successes.
The forum experiment didn’t go as planned, but it revealed something more valuable than validation: it showed me exactly what needs to be fixed and where my assumptions were wrong. That’s worth more than a hundred positive feedback comments on a fundamentally flawed approach.
Next week: Testing how far I can push the LLM+solver integration to create a scalable marketing system. Building out the new website with proper landing page and CMS, then seeing what kind of poker content I can generate automatically.
Try the conversational expertise approach yourself at askpoker.ai, or contact me on Discord.