At first glance, it looks like a standard robots.txt. But look closer: it tells a fascinating story about data protection, competitive moats, and Japan's unique web culture.

```
User-agent: *
Disallow: /search/
Disallow: /rgsearch/
Disallow: /kw/
Disallow: /syop/
Disallow: /rr/
Disallow: /list/
Disallow: /rvw/
Disallow: /photo/
Disallow: /map/
Disallow: /guide/
Disallow: /sitemap/
Disallow: /navi/
Disallow: /rank/
Disallow: /shop/%A5%EA%A5%B9%A5%C8
Disallow: /bshop/
Disallow: /rstd/
Disallow: /west/
Disallow: /tokyo/
Disallow: /osaka/
Disallow: /aichi/
Disallow: /kyoto/
Disallow: /hyogo/
Disallow: /hokkaido/
Disallow: /fukuoka/
Disallow: /miyagi/
Disallow: /chiba/
Disallow: /saitama/
Disallow: /kanagawa/
Disallow: /shizuoka/
Disallow: /hiroshima/
```

## What Tabelog is really saying

1. "Search results are off-limits." The /search/ and /list/ paths are blocked. Blocking search-result pages is common practice for large sites, since it prevents infinite crawl loops. For Tabelog, though, it's also strategic: search-result pages contain ranked restaurant lists, their core IP. Letting search engines index those pages would let competitors reverse-engineer the ranking algorithm.
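These directives can be checked mechanically. Here's a minimal sketch using Python's standard-library `urllib.robotparser`, parsing a handful of the rules quoted above (the user-agent string `MyBot` and the `/about/` path are stand-ins for illustration, not anything Tabelog publishes):

```python
from urllib.robotparser import RobotFileParser

# A few of the Disallow rules quoted from tabelog.com/robots.txt
ROBOTS_TXT = """\
User-agent: *
Disallow: /search/
Disallow: /rvw/
Disallow: /photo/
Disallow: /rank/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Every user agent falls under the wildcard (*) group
for path in ("/search/sushi/", "/rvw/123456/", "/rank/", "/about/"):
    allowed = rp.can_fetch("MyBot", "https://tabelog.com" + path)
    print(f"{path:16} -> {'allowed' if allowed else 'blocked'}")
```

The first three paths come back blocked; `/about/`, having no matching rule, comes back allowed — which is exactly the "everything not named is implicitly permitted" logic discussed below.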
| Want to crawl? | Allowed? |
|----------------|----------|
| Restaurant detail pages | ✅ (implicitly, via no explicit block) |
| Search results | ❌ |
| Review pages | ❌ |
| Photo galleries | ❌ |
| Regional index pages | ❌ |
| Ranking lists | ❌ |

For a site built on user contributions and openness, Tabelog's robots.txt is remarkably closed. But that's the point. In a market where restaurant data is a strategic asset (competitors include Google Maps, Retty, and Gurunavi), a robots.txt becomes a legal-engineering hybrid: "We've told you not to crawl these paths. If you do, you're violating our terms and potentially Japan's Unfair Competition Prevention Act."

## Final take

If you're building a crawler for Tabelog, don't bother negotiating with robots.txt: it's not a negotiation. It's a warning. Real access requires official APIs or commercial partnerships. The robots.txt is just the polite "Keep Out" sign before the electric fence.

For SEOs: Tabelog will rank for restaurant names anyway, because user behavior (searching "Sushi Tokyo Tabelog") overrides crawl directives. But for anyone wanting structured data at scale, the robots file says everything you need to know: "No."
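One closing curiosity from the file itself: the lone percent-encoded entry, `Disallow: /shop/%A5%EA%A5%B9%A5%C8`, is a small relic of Japan's older web. Those bytes aren't UTF-8; they decode as EUC-JP, a pre-Unicode Japanese encoding, to the katakana word リスト ("list"). That reading is my own inference from the byte pattern, not something Tabelog documents, but it's easy to verify with the Python standard library:

```python
from urllib.parse import unquote_to_bytes

# Percent-decode the raw bytes from the Disallow line, then decode as EUC-JP
raw = unquote_to_bytes("%A5%EA%A5%B9%A5%C8")
print(raw)                   # b'\xa5\xea\xa5\xb9\xa5\xc8'
print(raw.decode("euc-jp"))  # リスト
```

A blocked path whose very spelling predates UTF-8 says something about how long these crawl defenses have been accreting.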