How we work

Methodology

No first-impression praise. No reviews from a demo. Here's the rubric that decides what gets published, and when.

The three-week rule

Before a review ships on Taverne, the tool must have been actively used for three weeks minimum, on a real use case — not a demo project built for the review.

Why three weeks: that's the window where the honeymoon fades and real friction emerges. Unpredictable latency, quota limits you didn't see coming, the export bug that only shows up on a 50+ MB file. A one-week review is dressed-up advertising.

For guides and comparisons, the rule is softer — every cited tool must have been used in production, but not necessarily three weeks each.

Test criteria

Every review covers at least these dimensions:

  • Output quality: accuracy, fluency, ability to handle Quebec and French nuance.
  • Speed: average latency, p95 on heavy tasks.
  • Real limits: quotas, context size, systematic refusals, notable hallucinations.
  • Integrations: what in your stack it plays nicely with, and what it doesn't.
  • Privacy: data reuse policy, enterprise options, Loi 25 compliance.
  • Real price: not the marketing price but the annualized cost for typical SMB usage (see the worked example after this list).
  • Support: response time, quality of the humans behind it (real replies, not chatbots).
  • Evolution: update cadence, feature churn, vendor stability signals.
  • Quebec adoption: how many Quebec SMBs in our circle use it, plus qualitative feedback.
  • Alternatives: at least two direct competitors compared on the same use case.
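To make "real price" concrete, here is a minimal sketch of the arithmetic behind it. Every number below is invented for illustration and describes no specific vendor.

```python
# Illustration of how the "real price" diverges from the sticker price.
# All numbers are made up for the example; they describe no specific vendor.

monthly_sticker = 20.00        # advertised per-seat price, billed monthly
seats = 8                      # a typical small-team deployment
overage_per_month = 35.00      # average quota/API overruns observed in testing
onboarding_one_time = 500.00   # migration and setup, counted in year one

annual_sticker = monthly_sticker * seats * 12
annual_real = annual_sticker + overage_per_month * 12 + onboarding_one_time

print(f"Sticker price, year one: {annual_sticker:,.2f} $")  # 1,920.00 $
print(f"Real cost, year one:     {annual_real:,.2f} $")     # 2,840.00 $
```

The gap between those two lines is what the "Real price" dimension is there to expose.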

Sources and fact-checking

Every number, quote, and date in our articles comes from one of these:

  • Official vendor documentation, linked directly in the article.
  • Public pricing page, verified at the date listed on the tool's spec sheet.
  • Structured NotebookLM research for broad topics (market state, multi-vendor comparisons, history).
  • Internal tests with original screenshots, never generic stock imagery.
  • Direct conversations with Quebec SMB users — quoted with permission.

The attributable_facts field on each tool spec stores every claim with its source and last verification date.
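As a rough sketch of that field (the name attributable_facts comes from our tool specs; the exact shape shown here is illustrative, not the real schema), an entry might look like:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AttributableFact:
    """One claim from an article, tied to its source and verification date.

    Illustrative shape only; the real tool-spec schema may differ.
    """
    claim: str           # the exact sentence or number as published
    source_url: str      # official doc or pricing page backing the claim
    last_verified: date  # when a human or script last confirmed it

example = AttributableFact(
    claim="Pro plan: 20 $ US/month per seat",
    source_url="https://example.com/pricing",  # placeholder URL
    last_verified=date(2026, 5, 2),
)
```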

Freshness cycle — 90 days

A published article isn't frozen. Every 90 days, each tool spec and each review goes through a quick check:

  1. Automated re-verification of prices: the verify-pricing script with Claude Haiku opens a PR on any drift (see the sketch after this list).
  2. Manual re-verification of affiliate links, complemented by a daily cron job.
  3. Updated screenshots when the interface visibly changed.
  4. Addition of an Updated [date] block to flag substantive changes.
  5. Update of the dateModified field in schema:Article.
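A minimal sketch of the price-drift check in step 1, under stated assumptions: the real verify-pricing script is not shown here, and the extraction helper below is a naive stand-in for the Claude Haiku step.

```python
# Hypothetical sketch of the 90-day price check (step 1 above).
# extract_first_price is a crude stand-in for the LLM extraction step.

import re
import urllib.request

def fetch_pricing_page(url: str) -> str:
    """Download the vendor's public pricing page."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def extract_first_price(html: str) -> str | None:
    """Grab the first dollar amount on the page. The real script asks
    the model to read the page and return a structured price instead."""
    match = re.search(r"\$\s?\d+(?:[.,]\d{2})?", html)
    return match.group(0) if match else None

def check_price_drift(name: str, pricing_url: str, published_price: str) -> None:
    live_price = extract_first_price(fetch_pricing_page(pricing_url))
    if live_price and live_price != published_price:
        # In the real pipeline this opens a PR instead of printing, so
        # any drift becomes a reviewable change, never a silent edit.
        print(f"[drift] {name}: published {published_price}, live {live_price}")
```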

The bottom of every article shows Published [date] · Updated [date]. That's the freshness signal Google and LLMs weight most in 2026.
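For reference, the schema:Article markup from steps 4 and 5 boils down to two date properties. A minimal sketch of the payload, expressed as Python for consistency (headline and dates are placeholders):

```python
# Minimal schema.org Article payload showing the two date fields from
# steps 4 and 5. Headline and dates are placeholders.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Example review title",  # placeholder
    "datePublished": "2026-02-01",       # set once, never changed
    "dateModified": "2026-05-02",        # bumped on every substantive edit
}

print(json.dumps(article_schema, indent=2))
```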

Editorial independence

Already covered on the transparency page, repeated here because it matters:

No vendor, no affiliate program, no commission changes a review's verdict. If a tool we recommended degrades, the article is updated to reflect it, up to and including retracting the recommendation.

Errors and corrections

If you spot a factual error, write to david_cyr@hotmail.fr with:

  • The exact URL of the article.
  • The wrong sentence or number.
  • Your source for the correction (official link, screenshot, dated document).

We fix clear factual errors within 7 days; corrections needing testing or deeper verification within 30 days. Any substantive correction is flagged in the article — not silently.

Last updated · May 2, 2026