Almost Timely News: 🗞️ How to Use Generative AI For Retail Analytics (2026-03-01)
Or how I’m optimizing my World of Warcraft auctions

:: View in Browser

The Big Plug

👉 I’ve got a new course! GEO 101 for Marketers.

Content Authenticity Statement

10% of this week’s newsletter content was originated by me, the human. You’ll see outputs from Claude Code in the video. Learn why this kind of disclosure is a good idea and might be required for anyone doing business in any capacity with the EU in the near future.

Watch This Newsletter On YouTube 📺

Click here for the video 📺 version of this newsletter on YouTube »

Click here for an MP3 audio 🎧 only version »

What’s On My Mind: How to Use Generative AI For Retail Analytics

In this week’s newsletter, let’s use AI to build some advanced retail analytics, specifically for my video game character in World of Warcraft.

By way of background, the video game World of Warcraft has an in-game marketplace called the Auction House. I’ve written lots of articles in the past about it and why I play the game, and the new expansion, Midnight, just dropped. It’s an amusing side hobby of mine to try to run a profitable resale business, reselling in-game items to other players. However, I haven’t sat down with my data recently to see how to tune my in-game business, nor have I done an inventory to see what sells and what’s a waste of time. As we move into a new expansion, I have over 1,100 items up for sale, and not all of them sell well - or at all. It’d be good to know what to keep and what to throw overboard.

Prior to generative AI, we would have had to do this analysis with classical data science tools like the programming language R or statistical environments like SPSS, which seems like vast overkill and a ton of work for a video game.
And to be clear, this is all for fun - you can trade in-game currency for real-world currency, but the exchange rate is basically 15,000:1; 15,000 units of in-game currency convert to one unit of real-world currency, and on a daily basis I earn around 3,000 units, so… this isn’t a money maker. It’s just for fun.

So today, let’s build a retail analytics system for my in-game auctions business. There’s obviously some applicability to the real world and real-world retail businesses, but that’s a topic for a different time.

Part 1: The 5P Framework by Trust Insights

Before we begin, we should define clearly what we’re doing. If we have a great project plan, generative AI will be able to put this all together for us fairly easily. If we don’t, we’re going to spin our wheels a ton. In the age of agentic AI, there is no framework more powerful or useful for agentic AI prompting than the 5P Framework by Trust Insights. The framework is:
Let’s step through mine.
Now, this is a good start. My next step is to feed the 5P Framework into generative AI to help me fill in the gaps. I’ll literally take this entire section, put it into AI, and have it ask me questions about things I’ve overlooked or forgotten.

Part 2: Finding All The Data

Once I’ve got my fully baked project plan, it’s time to do the research and get the data. For any project like this, I always start with Deep Research. In this case, I want deep research on what I don’t know about retail analytics. It’s been a hot minute since I’ve had to do retail analytics, and most of my knowledge is foundational, things like RFM analysis. I don’t know what’s happened in the space in the last few years, nor do I know what corresponding code and tools exist that we could build with. There are a ton of companies that offer retail analytics as software platforms and services, but this is for World of Warcraft. I’m not buying a commercial service to make my mage’s sales of eversinging dust more profitable. So part of the research has to be what free and open source solutions - especially Python libraries - are out there that I could use. One of the cardinal principles of good agentic AI use is to not reinvent the wheel.

To do this research, the best framework is the Trust Insights CASINO Deep Research Framework, because it covers all the bases of what we’d want to know. I’ll put the prompt at the end of the newsletter because it’s crazy long. For a project of any importance, it’s a good idea to ensemble your research - that is, run the same research report on multiple platforms. For this project, as a demonstration, I had these platforms all use the same prompt:
This type of research will typically take up to two hours to compile, so while it compiles, we can go about getting the rest of the data. TradeSkillMaster exports its data as CSV files, so we’ll need those as well. To make things easier for the AI, we should assemble a data dictionary first. This is as simple as telling the model what’s in each of the accounting files. There are three data files to work with:
And here’s what the headers and first rows look like: Accounting_sales.csv:
Accounting_purchases.csv:
Accounting_expired.csv:
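Once the exports and data dictionary are in hand, a first descriptive pass over the sales file is easy to script. Here’s a minimal stdlib-only sketch; note that the column names (itemName, quantity, price, time) and the sample rows are assumptions for illustration, not the actual TradeSkillMaster export schema, so substitute the headers from your own files.

```python
import csv
import io
from collections import defaultdict

# Hypothetical rows in the rough shape of an Accounting_sales.csv export.
# Column names and values are invented for this sketch.
SAMPLE_SALES = """itemName,quantity,price,time
Eversinging Dust,20,1500,1767225600
Brutish Riverpaw Axe,1,52000,1767225700
Eversinging Dust,35,1480,1767312000
"""

def revenue_by_item(csv_text: str) -> dict:
    """Sum revenue (price * quantity) per item - a first descriptive cut
    that shows which listings actually earn gold."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["itemName"]] += int(row["price"]) * int(row["quantity"])
    return dict(totals)
```

The same pattern extends to the purchases and expired files to compute cost of goods and failure-to-sell rates per item.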
Part 3: Building the Code

This is what almost everyone gets wrong about vibe coding - they jump right in and start making stuff. We’re doing the opposite. We’re taking the time to gather all the ingredients first, like a chef doing their mise en place, so that when we’re ready to go, it’s a relatively straightforward process for the AI to work.

I always start projects the same way, with pre-defined folders, skills, and agents. I have a custom-built Python QA agent, along with best practices for coding in Python, that we’ll use for this project. That is one of my secret-sauce things: I have a lot of pre-built deep research on best practices for coding languages, best practices for QA, et cetera, that I’ve already turned into skills, agents, and knowledge blocks for my Claude environment, so I don’t have to redo those every time. That’s one of the most valuable things you can do. If you know there’s a specific task that you do a lot, do the deep research once, maybe refresh it every quarter or every year, but have it on hand, and you will dramatically improve the quality of your agentic AI outputs.

Our first step is to build three documents. The first is a research synthesis. For this, I typically use Claude Code with my /factcheck skill (which you can download for free here). Once we have a complete picture unifying all the different research reports, the second is the Product Requirements Document (PRD). This is the project management document that governs how this project should work. It has to be informed by the 5P Framework from earlier PLUS the research document.
PRDs in general should contain user stories, functional requirements (what the code is supposed to do), non-functional requirements (how it should do those things, like efficiency, speed, and security), domain requirements (industry-specific things), technical requirements (what nuts and bolts are needed), design patterns (what to do), antipatterns (what not to do), milestones, and KPIs. Ideally, a PRD contains everything a project manager would need to run the project (or at least set it up) in one document.

Once you’ve got a PRD, it’s straightforward to generate a workplan. The workplan is the step-by-step implementation of the PRD. Workplans are very granular, so granular that you might even have things like “commit the code at this point” or “build a unit test for this” throughout. I like to generate workplans as if I’m going to hand them to the most junior developer on the team - everything has to be spelled out. Why? Because the more we spell out in our planning documents, the less AI has to guess.

Once we’ve got our research, PRD, and workplan, we literally feed it all into a system like Google Antigravity, Claude Code, OpenAI Codex, Qwen Code, or the utility of our choice, and we let it run. And run. And run. The system ideally won’t have many questions for us because our planning documents should have answered them. At this point, it’s just implementing the plans.

You might reasonably say that this is vast overkill for a video game analysis tool, but this is the process I use for almost all my AI-powered analysis. The more planning you do up front, the more likely it is that AI will one-shot the entire thing, because all the thinking has already been done. That’s also why there are four levels of planning: the 5P Framework, the research synthesis, the PRD, and the workplan.
Each layer of planning gives the machines time to fact-check themselves, think things through, and consider alternatives, so that by the time you’re at the workplan phase, almost everything has been thought through. If you just jumped in with the data directly, you’d end up with so much chewing gum, duct tape, and baling wire barely holding your code together that it would quickly become unmaintainable. Any change you wanted to make would be almost impossible because the system would be so incredibly brittle.

This level of planning also allows us to use agentic AI at full power. When you plan everything out, you can put the system in autonomous mode and tell it to check its work at every stage against the plans. If the plans are good, as ours are, there are success criteria throughout that give AI the ability to know whether it has succeeded or not. It doesn’t have to ask us to review anything - it knows what success looks like. For example, I might have a success rule like “no file can have a cyclomatic complexity grade of C or lower”. Cyclomatic complexity measures how many independent paths there are through a piece of code, and more is worse; you generally want any given piece of code to do one thing well. This is an objective benchmark that AI agents can understand and measure against.

Part 4: Reviewing the Analysis and Next Steps

Once the code is written, debugged, tested, and run through automated QA, it’s ready to be used. This part is probably the easiest part, at least for me as the stakeholder World of Warcraft auctioneer. I now know what to stop listing and what to do more of. Here’s the thing about this process: the end result looks effortless because of all the planning up front. In the video version, you don’t see that the research parts took close to six hours to build (I turned them on at 9 AM on a Saturday morning and they were done around 3 PM).
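The cyclomatic complexity gate mentioned above is easy to make machine-checkable. In practice you would point the agent at a dedicated linter such as radon, but as a rough sketch of what is actually being counted, here is a simplified stdlib-only estimator (an approximation for illustration, not radon’s exact algorithm):

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity of a Python snippet:
    1 plus the number of decision points found in the AST."""
    tree = ast.parse(source)
    count = 1
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            # a chain of n `and`/`or` operands adds n - 1 branches
            count += len(node.values) - 1
        elif isinstance(node, ast.comprehension):
            # each `for` clause plus each `if` filter is a branch
            count += 1 + len(node.ifs)
        elif isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While,
                               ast.ExceptHandler, ast.Assert)):
            count += 1
    return count
```

Under radon’s default bands (A: 1-5, B: 6-10, C: 11-20), a “no grade C or lower” rule works out to keeping each block at a complexity of roughly 10 or less.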
It looks easy because the boring stuff never makes it into the video - no one wants a 12-hour livestream of mostly waiting around for agents to do their work. And my Brutish Riverpaw Axe has clearly got to go. It’s just not selling.

Part 5: Wrapping Up

It is a very small stretch of the imagination to see how any retail business could swap out World of Warcraft transaction data for its own. All the fundamentals remain the same. The process remains the same. You start with your 5P Framework by Trust Insights. You build your research. You build a requirements document for the piece of software you want to solve this problem. And you end up with a workplan - and ultimately software - that gives you the advice about what you need to do to improve your business.

Generative AI did not do the data analysis. Generative AI built the software to do the data analysis. I chose this approach specifically because I knew this data set was beyond generative AI’s capabilities. In general, if you’re asking it to work with more than 20 rows of data, you should be having it write software instead of trying to do the data analysis itself. That’s a good rule of thumb.

But what this gives you as a business person, as a marketer, as a salesperson, is the ability to do deep analysis on pretty much any kind of data you have and understand what you should be doing more of and what you should be doing less of. That’s how you optimize well. So take these lessons from retail analytics in World of Warcraft and apply them to your business. Think about all the frameworks you could be using to analyze your data, your inventory, and what sells. And have AI write the code to do the deep statistical analysis and ultimately tell you how to achieve your goals.

How Was This Issue?

Rate this week’s newsletter issue with a single click/tap. Your feedback over time helps me figure out what content to create for you.
Here’s The Unsubscribe

It took me a while to find a convenient way to link it up, but here’s how to get to the unsubscribe. If you don’t see anything, here’s the text link to copy and paste: https://almosttimely.substack.com/action/disable_email

Share With a Friend or Colleague

Please share this newsletter with two other people. Send this URL to your friends/colleagues: https://www.christopherspenn.com/newsletter

For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.

ICYMI: In Case You Missed It

Here’s content from the last week in case things fell through the cracks:
On The Tubes

Here’s what debuted on my YouTube channel this week:
Skill Up With Classes

These are just a few of the classes I have available over at the Trust Insights website that you can take.

Premium

Free
Advertisement: New AI Book!

In Almost Timeless, generative AI expert Christopher Penn provides the definitive playbook. Drawing on 18 months of in-the-trenches work and insights from thousands of real-world questions, Penn distills the noise into 48 foundational principles - durable mental models that give you a more permanent, strategic understanding of this transformative technology. In this book, you will learn to:
Stop feeling overwhelmed. Start leading with confidence. By the time you finish Almost Timeless, you won’t just know what to do; you will understand why you are doing it. And in an age of constant change, that understanding is the only real competitive advantage.

👉 Order your copy of Almost Timeless: 48 Foundation Principles of Generative AI today!

Get Back To Work!

Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.
Advertisement: New GEO 101 Course

When I talk to folks like you, being recommended by AI is one of your top marketing concerns in 2026. We’ve taken everything we’ve learned from OpenAI’s documentation, Google’s technical papers, patents, sample code, plus our years of experience in generative AI to assemble a high-impact 90-minute course: GEO 101 for Marketers. In this course, you’ll learn:
This course is meant to be used. In addition to the course itself, you’ll also receive:
And best of all, this is our most affordable course yet. GEO 101 for Marketers is USD 99 and is available today.

👉 Enroll here in GEO 101 for Marketers!

How to Stay in Touch

Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:
Listen to my theme song as a new single:

Advertisement: Ukraine 🇺🇦 Humanitarian Fund

The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs your ongoing support.

👉 Donate today to the Ukraine Humanitarian Relief Fund »

Events I’ll Be At

Here are the public events where I’m speaking and attending. Say hi if you’re at an event also:
There are also private events that aren’t open to the public. If you’re an event organizer, let me help your event shine. Visit my speaking page for more details. Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.

Required Disclosures

Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them. Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them. My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.

Thank You

Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness. Please share this newsletter with two other people.

See you next week,

Christopher S. Penn

Appendix: My Retail Analytics Deep Research Prompt

Context

A data-driven retail company selling commodity consumer goods (groceries, household products, everyday consumer items) in a B2C model needs to build an advanced analytics platform. The company has standard retail transaction data: product identifiers, SKU details, sale date/time, quantities sold, unit prices, customer identifiers, and associated buyer demographics where available - the typical schema of a point-of-sale or e-commerce transaction system. Data arrives via CSV exports and REST API integrations.
The company wants to move beyond basic sales reporting to a full analytics stack spanning descriptive analytics (what is selling), diagnostic analytics (why it is selling), predictive analytics (what will sell), and prescriptive analytics (what actions to take to maximize profit and minimize costs). The platform will be built in Python 3.11 by a skilled Python developer. No suitable internal research compilation exists. The developer needs a single authoritative reference document that maps academic retail analytics methods to production-ready Python implementations, organized as an engineering blueprint rather than a narrative literature review. Today’s date is 2026-02-28.

Audience

The primary audience is a senior Python developer (Python 3.11) with strong software engineering skills, intermediate statistical knowledge, and working familiarity with machine learning concepts. This developer will:
The developer does NOT need explanations of basic Python syntax, general ML concepts, or what a CSV file is. The developer DOES need: specific algorithm names, library function signatures, hyperparameter guidance, feature engineering recipes, pipeline architecture patterns, and decision criteria for choosing between competing methods. A secondary audience is a technical project manager who will use the research to scope the project, estimate effort, and prioritize the build sequence.

Scope

Source Requirements

Criterion | Requirement
Source type | Peer-reviewed journal articles and peer-reviewed conference proceedings. Preprints (e.g., arXiv) are acceptable ONLY if a corresponding peer-reviewed version exists; cite both versions.
Publication date | On or after 2020-01-01
DOI | Every cited paper MUST have a valid DOI. Format:

Python Library Requirements

Criterion | Requirement
License | Free and Open Source Software (FOSS) only. Permissive (MIT, BSD, Apache 2.0) or copyleft (GPL) licenses are acceptable. No proprietary or commercial-only libraries.
Maintenance | Every cited library MUST have at least one release or commit to its main branch on or after 2024-01-01. Verify this by checking PyPI release dates or GitHub/GitLab commit history.
Compatibility | Must run on Python 3.11
Citation format | For each library, provide: PyPI package name, current version number, license, date of most recent release, and a one-line install command (

Exclusions
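The Maintenance criterion above is mechanically verifiable against PyPI. As a sketch, here is a check written against the general shape of PyPI’s JSON API (https://pypi.org/pypi/&lt;package&gt;/json), with a synthetic, trimmed payload standing in for a live HTTP call; real payloads carry many more fields than shown.

```python
import json
from datetime import date, datetime

# Synthetic stand-in for a PyPI JSON API response (invented package name).
PAYLOAD = json.loads("""
{
  "info": {"name": "examplelib", "version": "2.3.0"},
  "releases": {
    "2.2.0": [{"upload_time": "2023-11-02T10:00:00"}],
    "2.3.0": [{"upload_time": "2024-06-15T08:30:00"}]
  }
}
""")

def maintained_since(payload: dict, cutoff: date = date(2024, 1, 1)) -> bool:
    """True if any release file was uploaded on or after the cutoff date."""
    uploads = [
        datetime.fromisoformat(f["upload_time"]).date()
        for files in payload["releases"].values()
        for f in files
    ]
    return bool(uploads) and max(uploads) >= cutoff
```

A research agent can run this check per candidate library and discard anything whose latest upload predates the cutoff.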
Intent

Primary Objectives

Produce a research compilation that directly answers these questions, organized by analytics layer:

Layer 1 — Descriptive Analytics (What is happening?)
Layer 2 — Diagnostic Analytics (Why is it happening?)

4. What feature engineering techniques are specific to retail transaction data (temporal features, lag features, promotional encoding, calendar effects, price features)?
5. What feature selection methods best reduce noise and identify true drivers in retail data?
6. How should a developer identify and handle confounding variables, covariates, and multicollinearity in retail data?
7. What causal inference methods apply to retail (e.g., measuring true promotional lift, price elasticity estimation)?

Layer 3 — Predictive Analytics (What will happen?)

8. What are the current best demand forecasting methods for retail SKU-level data?
9. What customer lifetime value (CLV) and churn prediction methods are current?
10. What methods predict optimal inventory levels based on sales patterns?

Layer 4 — Prescriptive Analytics (What should we do?)

11. What price optimization and dynamic pricing methods are implementable in Python?
12. What assortment optimization techniques tell a retailer which products to stock more of, and which to discontinue?
13. What promotional optimization methods determine when and how to run promotions for maximum ROI?
14. What multi-objective optimization approaches balance competing retail objectives (profit maximization, cost minimization, customer satisfaction)?

Cross-Cutting Concerns

15. What model interpretability and explainability methods are appropriate for retail analytics (SHAP, LIME, etc.)?
16. What pipeline orchestration and MLOps patterns are recommended for retail analytics systems?
17. What evaluation metrics and validation strategies are specific to retail forecasting and optimization models?

Downstream Use

The research output will be used directly to:
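As one concrete instance of what the diagnostic questions ask for, price elasticity (question 7) in its simplest constant-elasticity form is just the slope of a log-log regression of quantity on price. A minimal sketch, assuming clean paired price/quantity observations and no confounders (the complications questions 6 and 7 exist precisely to address):

```python
import math

def log_log_elasticity(prices: list, quantities: list) -> float:
    """Own-price elasticity estimated as the OLS slope of
    log(quantity) regressed on log(price), i.e. the textbook
    constant-elasticity demand formulation."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(q) for q in quantities]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var
```

An elasticity more negative than -1 means demand is price-sensitive enough that a price cut raises total revenue; production methods would layer controls and causal identification on top of this naive fit.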
Every piece of information in the output must be actionable toward one of these five uses.

Narrator

Adopt the voice of a senior staff data scientist at a major retail analytics firm - someone who has published at KDD and ICML but spends most of their time building production systems. Your register is:
Outcome

Output Format

Deliver the output as a single Markdown document structured as follows:

Required Document Structure

Per-Section Internal Structure

Within each subsection (e.g., 3.1 Demand Forecasting), use this structure:
Required Appendices

Appendix A: Complete Paper Reference Table — A flat table of every paper cited, with columns: Title, Authors, Year, Venue, DOI URL, Open Access PDF URL, Analytics Layer(s), Primary Technique.

Appendix B: Library-to-Technique Mapping Matrix — A cross-reference matrix where rows are Python libraries and columns are analytics techniques. Mark which library implements which technique.

Appendix C: Code Samples — For any technique described in the research where no maintained FOSS Python library exists, provide a self-contained Python 3.11 code sample that implements the core algorithm. Include docstrings, type hints, and inline comments. Each code sample must be runnable as-is with only the standard library and the declared dependencies.

Verification Requirements

Before including any source in the final output:
Minimum Deliverable Thresholds
End of CASINO Prompt

Invite your friends and earn rewards

If you enjoy Almost Timely Newsletter, share it with your friends and earn rewards when they subscribe.
