
The Say-Do Gap: Why Consumer Research Fails Agriculture (And How to Fix It)


Ask consumers if they care about sustainability. Most say yes.

Ask if they'd pay more for sustainably produced food. Most say yes.

Watch what they actually buy. Price wins almost every time.

This is the say-do gap—the persistent disconnect between what consumers tell researchers and what they actually do at the point of purchase. For agricultural organizations spending millions on consumer research, this gap represents a fundamental problem: the data you're paying for may be wrong.

What the Say-Do Gap Is (And Isn't)

The say-do gap isn't consumers lying. It's subtler than that.

When a researcher asks "Would you pay $2 more for organic chicken?", the respondent imagines themselves as the person they want to be—someone who cares about quality, makes thoughtful choices, prioritizes health and environment. In that hypothetical moment, they genuinely believe they'd pay more.

But purchasing decisions happen in different contexts. Standing in a grocery store at 6:30 PM, tired from work, kid in tow, checking the budget app—the $2 difference lands differently. The aspirational self gives way to the practical self. The organic chicken stays on the shelf.

This isn't hypocrisy. It's human psychology. Survey responses capture stated preferences—what people believe they want or think they should want. Behavior exposes revealed preferences—what people actually choose when they have to make tradeoffs.

For commodity boards and agricultural organizations, the difference is enormous.

Why This Matters for Agriculture

Marketing Investments Based on Fiction

Every major commodity board conducts consumer research. Millions of checkoff dollars fund surveys, focus groups, and tracking studies. These studies consistently find that consumers care about:

  • Animal welfare

  • Environmental sustainability

  • Local production

  • Farmer livelihoods

  • Nutritional quality

But actual purchase behavior tells a different story. Private label products grow market share. Discount retailers take volume from premium grocers. Price promotions drive more sales than values-based messaging.

If your marketing strategy is built on research that captures aspiration rather than behavior, you're optimizing for the wrong target.

Sustainability Claims That Don't Move Product

Sustainability messaging is particularly prone to say-do gaps. Consumers express strong support for sustainable agriculture in surveys. Food companies respond with sustainability initiatives, certifications, and messaging.

The marketplace response? Modest at best.

This doesn't mean sustainability doesn't matter. It means the way consumers express they care differs from the purchase criteria they actually apply. Research that only captures the first misses the second.

Trade Show Feedback That Misleads

Agricultural trade shows are expensive. Organizations invest heavily in booth presence, sampling, and feedback collection. Buyers often provide positive feedback—the product is interesting, they'd consider carrying it, they'll follow up.

Follow-up rates tell a different story. The enthusiasm expressed at the booth often doesn't translate to purchase orders. Buyers said one thing; their behavior said another.

Why Traditional Research Falls Short

Traditional research methods create conditions that amplify say-do gaps:

Survey Design Primes Aspiration

Standard survey methodology asks respondents to consider factors they might not think about unprompted. "How important is animal welfare when purchasing chicken?" forces the respondent to have an opinion on animal welfare—even if they've never thought about it at the meat case.

This doesn't capture natural decision-making. It captures prompted rationalization.

Focus Groups Perform for the Room

Focus groups compound the problem. In group settings, social desirability effects intensify. Participants perform for each other—expressing views that sound sophisticated, ethical, and thoughtful.

The person who says "honestly, I just buy whatever's cheapest" faces social pressure. Most participants never admit it.

Self-Report Can't Capture Context

Real purchasing decisions happen in context—the store environment, time pressure, competing priorities, mood, budget constraints, who else is in the cart. Surveys can't replicate this context. They capture abstracted preferences, not situated decisions.

Revealed Preference Data Is Hard to Get

The gold standard for understanding behavior is observing behavior. But behavioral data—actual purchase data, scanner data, consumption data—is expensive, fragmented, and often inaccessible to commodity organizations.

So the industry defaults to self-report, knowing it's flawed.

Signs Your Research Has a Say-Do Problem

How do you know if your research is capturing aspiration rather than behavior? Watch for these patterns:

High Stated Importance, Low Behavior Change

Your research shows 75% of consumers say they "definitely" or "probably" would pay more for [attribute]. But sales of products with that attribute remain flat. The stated importance isn't translating to behavior.

Category Trends Don't Match Research Findings

Your tracking study shows growing consumer concern about [issue]. But category-level data shows consumption patterns unchanged. Consumers say they're changing; the market says they're not.

Campaign Results Underperform Research Predictions

Your message testing showed strong positive response to a campaign concept. You ran the campaign. Results were mediocre. The research predicted one thing; the market delivered another.

Price Sensitivity Disconnect

Your research shows price is a lower priority than quality, health, or sustainability. Your market experience shows price promotions drive more volume than anything else. One of these is wrong.

Approaches That Close the Gap

No methodology perfectly bridges stated and revealed preferences. But some approaches get closer than others:

Behavioral Research Designs

Research that observes behavior—even simulated behavior—gets closer to revealed preferences than pure self-report.

Simulated shopping environments present respondents with realistic purchasing scenarios. Instead of asking "how important is organic?", present a shelf with organic and conventional options at different price points. Observe what they choose.

Conjoint analysis and discrete choice experiments force tradeoffs. Respondents can't say everything is important; they must choose between attributes. This reveals priorities more reliably than importance ratings.

Real-world experiments test actual behavior. A/B testing with real products, regional test markets, and promotional variation generate behavioral data rather than stated preference data.
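The tradeoff logic above can be sketched in code. This is a minimal, hypothetical illustration—not a real choice-modeling workflow—showing how observed picks from competing shelf options yield choice shares instead of importance ratings. All data and labels are invented for the example.

```python
# A minimal sketch of tradeoff-based analysis: instead of asking
# "how important is organic?", tally what respondents actually pick
# when organic and conventional options compete at different prices.
# All data below is hypothetical, for illustration only.
from collections import Counter

def choice_shares(tasks):
    """Each task is (options, chosen_index); options are dicts with
    'label' and 'price'. Returns the share of choices per label."""
    counts = Counter()
    for options, chosen in tasks:
        counts[options[chosen]["label"]] += 1
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Three simulated shelf tasks: organic vs. conventional chicken.
tasks = [
    ([{"label": "organic", "price": 9.99},
      {"label": "conventional", "price": 7.99}], 1),  # wide gap: cheaper wins
    ([{"label": "organic", "price": 8.49},
      {"label": "conventional", "price": 7.99}], 0),  # narrow gap: organic wins
    ([{"label": "organic", "price": 10.99},
      {"label": "conventional", "price": 7.99}], 1),  # wide gap: cheaper wins
]

print(choice_shares(tasks))  # organic chosen in 1 of 3 tasks
```

In a real discrete choice study the same idea is scaled up and fed into a choice model (e.g., conditional logit) to estimate price sensitivity, but even this toy tally shows why forced tradeoffs reveal more than "everything is important" ratings.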

Behavioral Data Integration

Combining survey research with behavioral data provides a reality check on self-report:

Panel data from retail loyalty programs captures actual purchases. Linking survey responses to purchase behavior reveals which self-reports predict behavior and which don't.

Scanner data shows category-level purchasing patterns. If your research says consumers are trading up and scanner data shows private label growth, trust the scanner data.

Consumption diaries where respondents track actual eating behavior get closer to revealed preference than retrospective self-report.
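Linking self-report to purchase records can be sketched directly. The example below is hypothetical—invented respondent IDs, invented basket shares, and an arbitrary threshold—but it shows the shape of the check: of the people who said yes, how many actually bought?

```python
# Hypothetical sketch: join stated willingness-to-pay against panel
# purchase records and measure the say-do gap directly.
# Respondent IDs, values, and the threshold are all illustrative.
stated = {"r1": True, "r2": True, "r3": False, "r4": True}  # "would pay more"
purchases = {  # share of premium items in each respondent's basket
    "r1": 0.05, "r2": 0.40, "r3": 0.02, "r4": 0.10,
}

def say_do_gap(stated, purchases, threshold=0.25):
    """Fraction of 'yes' respondents whose purchases don't back it up."""
    yes = [r for r, said_yes in stated.items() if said_yes]
    inconsistent = [r for r in yes if purchases[r] < threshold]
    return len(inconsistent) / len(yes)

print(say_do_gap(stated, purchases))  # 2 of 3 'yes' respondents fall short
```

The same join works at scale with loyalty-panel data: the respondents whose self-reports predict behavior tell you which survey questions are worth trusting.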

Implicit Measurement

Some research methods aim to capture preferences respondents won't or can't articulate:

Implicit association tests measure automatic associations that self-report misses.

Response latency analysis tracks how quickly respondents answer. Immediate responses may reflect genuine preferences; delayed responses may reflect social desirability calculation.

Biometric measurement tracks emotional and physiological response rather than self-report.

Digital Twin / Synthetic Research

AI-powered research offers a different approach. Rather than asking consumers directly, synthetic research uses AI to simulate consumer populations—"digital twins" calibrated against real demographic and behavioral data.

This approach has several advantages for closing the say-do gap:

No social desirability effects. AI simulations don't perform for researchers or each other.

Behavioral calibration. Digital twins can be calibrated against actual behavioral data, not just stated preferences.

Scenario testing at scale. You can test how simulated consumers respond to different prices, messages, and contexts—revealing sensitivity that self-report obscures.

Speed and iteration. Rather than running one expensive study, you can test dozens of hypotheses quickly.

Synthetic research doesn't replace all traditional research. But for directional insight that's less susceptible to say-do gaps, it offers a valuable complement.

Practical Recommendations

Accept That All Research Is Limited

Self-report research has value. It reveals how consumers think about categories, what language they use, what concerns they articulate. This matters for messaging and communication even if it doesn't perfectly predict behavior.

The mistake is treating stated preferences as behavior predictions. They're not. Use them for what they're good at.

Triangulate Relentlessly

Don't rely on a single methodology or data source. Combine:

  • Self-report for language and framing

  • Behavioral data for what actually happens

  • Market data for category-level trends

  • Sales data for campaign response

When sources converge, you can trust the finding. When they diverge, trust behavior over claims.

Pressure-Test Findings Against Market Reality

Before acting on research findings, ask: does this match what we see in the market? If research says consumers want X and market data says X isn't selling, investigate the discrepancy before doubling down.

Design for Tradeoffs

Research that lets respondents say "everything is important" generates useless data. Force tradeoffs. Make respondents choose. Revealed priorities under constraint are more predictive than stated importance without constraint.

Test in Market Before Scaling

Treat research as hypothesis generation, not fact finding. Before committing major resources based on research findings, test in market. Small-scale pilots, regional tests, and A/B experiments provide behavioral validation that surveys can't.

The Competitive Advantage of Behavioral Insight

Most agricultural organizations rely on traditional research approaches. They commission surveys, run focus groups, and track stated preferences. They're all subject to the same say-do gap limitations.

Organizations that develop behavioral insight capabilities—integrating behavioral data, using revealed-preference methodologies, validating findings against market reality—will make better decisions than those operating on stated preference alone.

This isn't about abandoning research. It's about recognizing its limitations and supplementing it with approaches that get closer to actual behavior.

The say-do gap can't be eliminated. But it can be narrowed. And for organizations spending millions on marketing based on consumer insight, narrowing that gap is worth the investment.


Andreas Duess

A recognized expert in AI-driven strategy and consumer insight, Andreas has spent 20+ years helping agriculture and food brands navigate change. He is a sought-after keynote speaker (USAEDC, USA Rice, American Peanut Council) and a visiting lecturer at Ivey Business School.