This week, the pattern is clear: AI is moving from tools to systems.
Claude is getting more capacity. Codex is starting to look like a command center for agent work. Rufus is giving shoppers more pricing intelligence directly on Amazon. Tariff effects are still working their way through real costs and margins.
For ecommerce sellers, the lesson is simple: you do not need more random AI experiments. You need repeatable AI workflows tied to your real business.
And that starts with one skill most sellers still skip: giving the AI a real brief.
That's why this week's deep dive is the GOLD prompt — the framework I use to turn ChatGPT, Claude, Codex, and OpenClaw from "generic answer machines" into useful ecommerce operators.


Anthropic just signed a compute partnership with SpaceX and is using the new capacity to raise limits across the board.
Claude Code's 5-hour rate limits are doubling for Pro, Max, Team, and seat-based Enterprise plans. Anthropic is also removing the peak-hours limit reduction on Claude Code for Pro and Max accounts, and raising API rate limits for Claude Opus models.
The SpaceX deal gives Anthropic access to more than 300 megawatts of new capacity — enough to power more than 220,000 NVIDIA GPUs coming online within the month. Ars Technica has more on the deal.
TAKEAWAY: More Claude capacity means agents can run longer, deeper workflows. But more compute does not fix a weak operating system. If your ecommerce workflow is vague, the AI will just produce more vague output faster. Use the extra limits on repeatable tasks: PPC audits, listing reviews, competitor monitoring, customer service drafts, and weekly margin checks.

I've been testing Codex over the last few days and it feels like a super app — pieces of Claude Code and ChatGPT pulled into one workflow. I'll share more after I run it through more real tasks.
OpenAI describes Codex as a "command center for agentic coding" with worktrees, cloud environments, parallel agents, Skills, Automations, and background work — all connected through your account.
Even though Codex is framed as a coding tool, the bigger pattern is what matters for sellers.
TAKEAWAY: Codex matters because it shows where AI work is going: multiple agents, background tasks, reusable skills, and command-center workflows. That same pattern is coming to ecommerce. The seller who wins will not be the one testing 20 tools. It will be the one building 5–10 reliable workflows that run every week.
Deep Dive: The GOLD Prompt — How to Turn AI From Chatbot Into Ecommerce Operator
Most sellers get bad AI output because they give bad briefs.
They type something vague like "analyze my PPC" and wonder why the answer is generic. It's not the AI's fault. Garbage in, garbage out.
Inside an ecommerce business, an AI agent is only useful if it knows 4 things:
What business result matters
What deliverable you need
What it is not allowed to do
What real data it should use
That's the difference between a chatbot and an operating system.
GOLD is the framework. G, O, L, D — 4 letters, one extra rule at the end. I teach it on every OCEA cohort call before we build any OpenClaw skills, because the brief matters before the tool.
G — Goal: What do you actually want to achieve?
Be specific about the outcome.
"Analyze my PPC" is not a goal. That's a topic. The agent has no idea what success looks like.
Ask yourself: why am I doing this in the first place?
Is the goal profitability?
Is the goal organic ranking?
Is the goal launching a new product?
Is the goal saving time?
Is the goal handing your VA a clean file to execute on?
Those are 5 completely different jobs. They should not get the same answer. Bid optimization for ranking and bid optimization for profitability lead to opposite recommendations.
O — Output: What does the deliverable actually look like?
Tell the agent exactly what format you want.
A table?
A CSV file your VA can execute on Monday morning?
A 5-step action plan with priorities?
A bulleted summary for a team review?
The cleaner you define the output, the less guessing the agent has to do.
Early on I'd ask for "a PPC analysis" and get a wall of text. Now I say: "Give me a CSV with 3 columns: search term, current bid, recommended bid." That's a deliverable. If you don't define what "done" looks like, the agent will guess. And it will guess wrong.
L — Limitations: What should it NOT do?
This is where most sellers get lazy — and where most bad outputs come from.
Tell your agent what's off limits. Real examples from my own PPC workflows:
Don't recommend any bid change greater than 25% in 1 step
Don't touch Sponsored Brands — only Sponsored Products
Don't recommend changes based on fewer than 10 clicks of data
Don't fill in gaps by guessing — only use the data I provided
Don't give generic advice that applies to any seller
Don't upload anything to Seller Central — human review first
Limitations are what turn AI from a risky intern into a useful assistant.
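If your workflow eventually gets automated, these limitations translate directly into code. Here's a minimal Python sketch of what that guardrail step could look like, vetting an agent's recommended bid changes before a human ever sees them. The column names are hypothetical, and nothing here touches Seller Central — it just enforces the first three rules above:

```python
MAX_CHANGE_PCT = 0.25   # no bid change greater than 25% in one step
MIN_CLICKS = 10         # no changes based on fewer than 10 clicks of data

def vet_recommendations(rows):
    """Filter and clamp agent bid recommendations per the limitations.

    Each row is a dict with hypothetical keys: 'search_term',
    'current_bid', 'recommended_bid', 'clicks'. Returns only rows
    that are safe to pass on for human review.
    """
    vetted = []
    for row in rows:
        if int(row["clicks"]) < MIN_CLICKS:
            continue  # not enough data to act on; drop the recommendation
        current = float(row["current_bid"])
        recommended = float(row["recommended_bid"])
        # Clamp any change larger than 25% back to the 25% boundary.
        cap = current * MAX_CHANGE_PCT
        clamped = max(current - cap, min(current + cap, recommended))
        vetted.append({**row, "recommended_bid": round(clamped, 2)})
    return vetted
```

The point is not this exact code. It's that limitations you can state clearly in a prompt are also limitations you can enforce mechanically later.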
D — Details: The business context.
This is the fuel. Feed it your real data. The more specific, the better the output.
For an Amazon PPC analysis, that means:
Your ASINs and product category
Your target ACOS and TACOS
Your margin per unit after FBA fees, ad spend, and COGS
Your current organic rank for your main keywords
Your top 3 competitor ASINs
Your search term report from the last 30 days
Your VA's role and what they can execute weekly
Think of it like a SWOT analysis. Tell the agent your strengths, your constraints, your competition, the threats in your market. An agent that knows your business thinks like a specialist. An agent with no context thinks like a generalist. That's the difference between an output you can use and one you throw away.
The one rule you always add at the end:
"Ask me any questions if you're unclear."
This line matters more than people realize. It gives the agent permission to stop and ask before it guesses. Without it, the AI rushes ahead like an eager intern and fills in gaps with assumptions. That's where hallucinations come from.
A smart agent that asks 1 clarifying question will give you a better result than one that charges ahead with wrong assumptions.
Bad prompt vs. GOLD prompt — PPC audit example
Bad prompt:
"Look at my PPC and tell me what to do."
Without data and without a goal, the agent is just guessing. You'll get generic advice that could apply to any seller in any category.
GOLD prompt:
G: Analyze my PPC campaigns, identify where I'm wasting budget, and give me a prioritized action plan.
O: A report with 3 sections: (1) top wasted-spend search terms to add as negatives, (2) ad groups with ACOS above my target that need bid reductions, (3) high-converting terms where I'm underbidding and need to raise bids. Include estimated monthly savings or gains for each. Format as a CSV my VA can execute on.
L: Don't recommend any bid change greater than 25% in 1 step. Don't touch Sponsored Brands — only Sponsored Products. Don't recommend changes based on fewer than 10 clicks of data. Don't upload anything to Seller Central. Human review first.
D: Here is my 30-day search term report [attach CSV]. My target ACOS is 25%. My product sells for $34.99. My organic rank for my main keyword is position 4. I have a VA who executes bulk changes weekly. My main competitors are [ASIN 1, ASIN 2, ASIN 3].
Ask me any questions if you're unclear.
The bad prompt gets you a wall of text. The GOLD prompt gets you a CSV your VA can execute on Monday morning.
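Once you've written the brief once, it can become a template. Here's a minimal sketch, in Python, of assembling a GOLD brief from its four parts plus the closing rule. The function name and field layout are my own illustration, not any official API:

```python
def gold_prompt(goal, output, limitations, details):
    """Assemble a GOLD brief: Goal, Output, Limitations, Details,
    plus the standing rule that lets the agent ask before it guesses."""
    sections = [
        f"G - Goal: {goal}",
        f"O - Output: {output}",
        "L - Limitations:\n" + "\n".join(f"- {l}" for l in limitations),
        "D - Details:\n" + "\n".join(f"- {d}" for d in details),
        "Ask me any questions if you're unclear.",
    ]
    return "\n\n".join(sections)

# Next week, only the Details change; the rest of the brief stays fixed.
brief = gold_prompt(
    goal="Analyze my PPC campaigns and give me a prioritized action plan.",
    output="A CSV with 3 columns: search term, current bid, recommended bid.",
    limitations=["No bid change greater than 25% in 1 step",
                 "No changes based on fewer than 10 clicks of data"],
    details=["Target ACOS is 25%", "Product sells for $34.99"],
)
```

Swapping in fresh data each week while the Goal, Output, and Limitations stay fixed is exactly what makes the brief repeatable.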
From brief to system
Here's where most sellers stop too early.
They write 1 good prompt, get 1 decent answer, and move on.
That is not a system.
A system means you can run the same workflow again next week with new data. For example:
Every Monday: export 30-day PPC search term report
Drop it into your PPC audit skill
Agent identifies wasted spend, bid changes, and new keyword opportunities
VA reviews the CSV
Human approves final changes before upload
Results get checked the following week
That is where AI becomes useful. Not because the prompt was clever. Because the workflow repeats.
The sellers who are ahead right now are not the ones with the most AI subscriptions. They're the ones who took 1 or 2 workflows, built them properly, and run them every week.
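For the more technical readers: the Monday workflow above can be sketched as a small script. This is an illustration only — the column names, spend threshold, and file layout are assumptions, the agent step is left out, and a human still approves everything before upload:

```python
import csv

def weekly_ppc_audit(report_path, out_path):
    """Sketch of the Monday workflow: read the 30-day search term
    report, flag wasted spend, and write a CSV for the VA to review.
    Column names ('search_term', 'spend', 'clicks', 'orders') and the
    $5 spend floor are hypothetical."""
    with open(report_path, newline="") as f:
        rows = list(csv.DictReader(f))
    # Flag wasted spend: terms with zero orders above a spend floor.
    flagged = [r for r in rows
               if int(r["orders"]) == 0 and float(r["spend"]) > 5.00]
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["search_term", "spend", "clicks", "suggested_action"])
        writer.writeheader()
        for r in flagged:
            writer.writerow({
                "search_term": r["search_term"],
                "spend": r["spend"],
                "clicks": r["clicks"],
                "suggested_action": "add as negative (human review)",
            })
    return len(flagged)
```

Whether the flagging step is a script, an agent, or both, the shape is the same: fixed inputs in, reviewable CSV out, every single week.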

Amazon's AI shopping assistant Rufus now shows 30, 90, and 365 days of price history on product pages in the US, UK, and India. Amazon said more than 50 million customers have already used the feature.
Shoppers can see the price history directly on the product page or just ask Rufus whether an item has been on sale recently. Fake markdowns and discount theater are a lot harder to pull off when a buyer can see your full pricing record in 2 clicks.
TAKEAWAY: Amazon is training shoppers to ask AI before they buy. That means your pricing, promos, reviews, images, and listing claims need to survive AI-assisted comparison. Fake urgency and inflated discounts will get weaker. Real value, clean positioning, and honest promo strategy will matter more.

Dallas Fed researchers said the price effects from realized 2025 tariff changes peaked in Q1 2026. Realized tariff collections have been lagging the announced policy changes — which means seller margin pressure can keep showing up well after the headline cycle moves on.
The pain doesn't always hit when the announcement drops. It can show up later in your landed cost, your pricing, and your cash flow.
TAKEAWAY: This is where AI systems should connect to the real numbers: landed cost, contribution margin, TACOS, inventory turns, reorder timing, and price changes. If you are not feeding those numbers into your weekly workflows, your AI is just giving advice in the dark.
In case you missed it:
→ What Andy Jassy didn't say to shareholders this year — https://www.modernretail.co/technology/what-amazon-ceo-andy-jassys-annual-letter-to-shareholders-didnt-say/
→ Amazon is changing how you can prove a discount is real — https://www.ecommercebytes.com/2026/04/08/amazon-upends-discount-pricing-with-new-reference-price-rule/
→ Google launches Universal Commerce Protocol with Shopify, Etsy, Walmart, and Target — https://blog.google/products/ads-commerce/agentic-commerce-ai-tools-protocol-retailers-platforms/
The AI tools are getting stronger.
But stronger tools do not automatically create stronger businesses.
The ecommerce sellers who benefit most from this next wave will be the ones who can turn messy business context into clear instructions, reusable workflows, and human-reviewed execution.
Start with GOLD: Goal. Output. Limitations. Details.
Then add: Ask me any questions if you're unclear.
That is how you move from playing with AI to building an AI system for your ecommerce business.
Talk soon,
Gary
P.S. If you want my GOLD prompt template for ecommerce sellers, reply with GOLD and I'll send it over. It's the same framework I teach inside OCEA before we build any OpenClaw skills — because the brief matters before the tool.