Available now  ·  India + Germany

Apurv Adarsh

Product Manager — Internal Tools · Ops Automation · GenAI

Ex-Amazon PM intern (Munich, Pan-EU retail). Ex-engineer (DXC, 4 years). HHL Leipzig MBA. I build analytics pipelines, workflow systems, and GenAI-assisted products that ops teams actually adopt — and I measure adoption, not just delivery.

+30% — Forecast accuracy (Overstock KPI system)
35 → 5 min — Banner build time (GenAI RAG tool)
4 → 30+ — Stakeholder adoption (Dashboard rollout)
~30 min — Saved per user/day (Receipt automation)
Case studies (4 projects)
Global E-commerce Co. · Analytics · KPI Design
Overstock KPI & Dashboard System

Teams couldn't align on overstock drivers — decisions were slow, markdown risk was high, and the existing reports arrived too late to act on. I reframed this as a decision-quality problem, not a reporting one.

+30% forecast accuracy
−10% markdown exposure
4 → 30+ stakeholders adopted
What I owned
KPI tree definition, composite metric logic (stock health + weeks of cover + PO pipeline + demand trend), dashboard requirements, metric glossary, adoption plan. Partnered with BI throughout.
What I did
  • Built an Overstock Probability metric — a composite signal that caught risk earlier than any single indicator alone
  • Beta-tested with senior VMs for weeks before launch, iterating until prediction accuracy hit ~75%
  • Embedded into existing WBR workflow (no new tool, no new login)
  • Designed automated weekly digest — stakeholders arrived to reviews already informed
  • Scaled adoption team-by-team: monitors team first, then PC super-team, then 30+ stakeholders
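The composite logic described above could be sketched roughly as follows. This is a minimal illustration only: the real signal definitions, weights, and thresholds are internal, and every value here is an invented assumption.

```python
# Hypothetical sketch of a composite "Overstock Probability" score.
# Signal names, normalisation, and weights are illustrative assumptions,
# not the production metric logic.

def overstock_probability(
    stock_health: float,       # 0..1, higher = healthier stock position
    weeks_of_cover: float,     # current inventory / weekly demand
    po_pipeline_weeks: float,  # inbound POs expressed as weeks of demand
    demand_trend: float,       # week-over-week demand growth, e.g. -0.1 = -10%
    target_cover: float = 8.0,
) -> float:
    """Blend several signals into a single 0..1 overstock risk score."""
    # Effective cover includes inventory already on order — the risk that a
    # plain weeks-of-cover (WoC) view misses, per the tradeoff noted below.
    effective_cover = weeks_of_cover + po_pipeline_weeks

    # Normalise each signal to a 0..1 risk contribution.
    cover_risk = min(max((effective_cover - target_cover) / target_cover, 0.0), 1.0)
    trend_risk = min(max(-demand_trend * 2.0, 0.0), 1.0)  # falling demand raises risk
    health_risk = 1.0 - stock_health

    # Illustrative weights; in practice these would be tuned against
    # observed markdown outcomes during the beta period.
    return round(0.5 * cover_risk + 0.3 * trend_risk + 0.2 * health_risk, 3)
```

A composite like this can flag risk even when each individual signal still looks acceptable, which is the point of the "caught risk earlier than any single indicator" claim above.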
Tradeoffs managed
Metric trust risk → wrote public glossary with every input and threshold defined
Adoption risk → embedded in existing workflow, decision-view not just data-view
Data dependency risk → monitoring checks + explicit freshness header in digest
Composite vs simple → composite was necessary; WoC alone missed PO pipeline risk
Global E-commerce Co. · GenAI · RAG · Product Build
GenAI / RAG Deal-Banner Automation

Banner creation was slow, inconsistent, and expensive at scale. QA outcomes varied across brands with no systematic control. I owned the full PRD, model tradeoffs, retrieval design, guardrails, and phased rollout.

35 → 5 min build time per asset
−40–50% cost per banner
80–85% QA pass rate (from 65–70%)
3 → 30+ users scaled
What I owned
PRD + workflow design, model tradeoff decisions (quality / latency / cost), retrieval grounding strategy, guardrails design, eval criteria, rollout plan across brands.
What I did
  • Baselined time, cost, QA pass rate, and adoption before building anything
  • Designed GenAI + retrieval workflow — grounding outputs in approved templates and brand rules
  • Defined eval criteria aligned to QA, iterated on quality checks with users
  • Phased rollout: 1 brand pilot → 5 brands → 30+ users
  • Fixed a data-leak issue with the security team mid-rollout without pausing adoption
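The grounding and guardrail steps above can be sketched in miniature. The template store, brand rules, and checks below are invented for illustration; the production prompts, retrieval index, and acceptance criteria are not public, and the model call is stubbed as slot-filling.

```python
# Illustrative sketch of a retrieval-grounded banner workflow with guardrails.
# All data and thresholds here are assumptions made for the example.

APPROVED_TEMPLATES = {
    "flash_sale": "{brand}: {discount} off {category} - today only",
}
BRAND_RULES = {"max_len": 60, "banned": ["guaranteed", "best ever"]}

def retrieve_template(deal_type: str) -> str:
    """Retrieval step: ground generation in approved copy, not free-form text."""
    return APPROVED_TEMPLATES[deal_type]

def generate_banner(deal_type: str, brand: str, discount: str, category: str) -> str:
    # "Template pinning": the model (stubbed here) can only fill slots,
    # not rewrite the approved template — one hallucination guardrail.
    return retrieve_template(deal_type).format(
        brand=brand, discount=discount, category=category
    )

def passes_guardrails(banner: str) -> bool:
    """QA-aligned acceptance checks, run before the retained human review step."""
    if len(banner) > BRAND_RULES["max_len"]:
        return False
    return not any(phrase in banner.lower() for phrase in BRAND_RULES["banned"])
```

Outputs that fail the automated checks never reach reviewers, which is how a hard-constraint layer keeps the quality/latency/cost tradeoff from degrading QA pass rates.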
Tradeoffs managed
Quality vs latency vs cost → retrieval grounding + hard constraints on model outputs
Hallucination risk → guardrails + template-pinning + human review step retained
Adoption trust → "options not answers" UX framing + QA-aligned acceptance criteria
Privacy/compliance → sanitised inputs/outputs, limited data exposure surface
Global E-commerce Co. · Workflow Automation · Ops
Email / Receipt Ingestion → Daily Digest

Manual processing of inbound emails and receipts was error-prone, time-consuming, and had no audit trail. I owned the pipeline design, failure-mode handling, and the "rules-based over AI" product decision.

~30 min saved per user/day
3 → 30–40 users scaled
What I owned
Workflow schema (ingestion → extraction → validation → routing), failure-mode handling, daily digest format design, adoption plan.
What I did
  • Designed the full ingestion pipeline with explicit failure modes and fallback paths
  • Automated end-of-day digest format matched to existing team operating rhythm
  • Deliberately chose rules-based over AI — prototype testing showed equivalent accuracy at lower cost and complexity
  • Iterated on exception patterns and user feedback post-launch
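A rules-based extractor with a validation layer and manual-review fallback, as described above, can be sketched like this. The field patterns and the confidence rule are illustrative assumptions, not the production rules.

```python
# Minimal sketch of rules-based receipt extraction with routing:
# extraction -> validation -> route to digest or manual review.
import re

AMOUNT = re.compile(r"(?:total|amount)\s*[:=]?\s*\$?(\d+(?:\.\d{2})?)", re.I)
DATE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")

def extract_receipt(text: str) -> dict:
    """Process one inbound email/receipt body into a routed record."""
    amount = AMOUNT.search(text)
    date = DATE.search(text)
    record = {
        "amount": float(amount.group(1)) if amount else None,
        "date": date.group(1) if date else None,
    }
    # Validation layer: any missing required field means low confidence,
    # so the record goes to the manual-review queue, not the daily digest.
    record["route"] = "digest" if all(record.values()) else "manual_review"
    return record
```

Because every branch is an explicit rule, failures are inspectable and replayable, which is the "easier to debug" half of the rules-vs-AI decision noted below.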
Tradeoffs managed
AI vs rules-based → chose rules: equivalent accuracy, faster build, lower cost, easier to debug
Parsing edge cases → validation layer + manual review fallback for low-confidence extractions
Reliability → audit logs + retry patterns built in from the start
Adoption → digest format mapped exactly to how teams already closed their day
DXC Technology · Enterprise · Agile Delivery
Acting Product Owner — €2M Modernisation Roadmap

A legacy billing system modernisation for a national broadband programme was stalled by competing stakeholder priorities and painful deployments. I stepped into the PO role and rebuilt delivery predictability.

50+ requirements consolidated
+30% sprint velocity
−40% deploy time
−30% system errors
What I owned
Requirement alignment across 50+ stakeholders (3 business units), backlog prioritisation, sprint inputs, UAT/release readiness gates. Java + Camunda + ActiveMQ architecture context.
What I did
  • Consolidated 50+ requirements into epics and a phased €2M roadmap
  • Rebuilt refinement and acceptance criteria processes to increase sprint predictability
  • Drove UAT and release gates to eliminate late-stage surprises
  • Partnered on CI/CD improvements and cross-vendor integration delivery
  • Restored project credibility after a team-lead gap — delivery stabilised within 3 months
Tradeoffs managed
Scope pressure vs delivery → phased roadmap with explicit tradeoffs negotiated with BT CIO
Speed vs quality → acceptance criteria + UAT gates as non-negotiable release conditions
Dependency risk → early alignment across BT, QA vendor, and offshore delivery teams
Active research — PR OS, India-first B2B SaaS · Ongoing
Product Discovery · India · B2B · Workflow OS · Active
PR OS — India-First PR Workflow System

A workflow system of record for India PR agencies — journalist CRM, follow-up queue, interaction timeline, and manager dashboard. The real pain isn't content creation; it's fragmented follow-ups across email, phone, and WhatsApp with no shared system of record. The wedge is operational discipline, not AI writing.

Key discovery findings
  • ~30 journalists/week contacted across 3–4 campaigns; reply rates of 0–10 per 100 outreach attempts
  • Follow-ups tracked in memory and email drafts — most journalists need 3+ touchpoints before responding
  • Channel sequence: email → phone → WhatsApp. No tool is built for this three-channel India reality
  • Managers have zero live view of ownership or campaign stage without calling the executive directly
  • Weekly reports rebuilt manually from email threads every Friday — takes 1–2 hours per person
14-day pilot design ready
₹1k/mo indicated WTP per user
8 data objects defined
MVP scope
Journalist CRM · Interaction timeline · Daily follow-up queue · Fast call logging · Templates · Campaign view · Manager dashboard · Reporting export
GitHub builds: github.com/apurv912
Skills & tools
Product
Discovery & Research · PRDs & User Stories · JTBD · Personas · RICE Prioritisation · Roadmapping · A/B Testing · Go-to-Market
AI / GenAI
RAG Pipelines · Prompt Engineering · Model Evaluation · Guardrails · LangChain · Amazon Bedrock · n8n
Analytics
SQL · KPI Design · Funnel Analysis · Dashboarding · GA4 · QuickSight · Mixpanel
Delivery & Tools
Agile / Scrum · Stakeholder Mgmt · Backlog Governance · Jira · Figma · Confluence · Postman
Apurv Adarsh · Available immediately · India + Germany · PM / TPM / Product Ops