Morningscore Bot
Overview
The Morningscore Bot is an automated crawler operated by Morningscore, an SEO platform that helps website owners track rankings, analyze on-site SEO, and improve their visibility in search engines.
The bot performs on-site SEO analysis on behalf of Morningscore customers. Crawls are initiated only when a customer adds a website to their Morningscore account. The bot analyzes page structure, metadata, links, and content to provide SEO recommendations.
User-Agent
The Morningscore Bot identifies itself with the following User-Agent string.
Exact format
The User-Agent always begins with:
Mozilla/5.0 (Morningscore Bot/1.0)
This prefix may be followed by browser-like suffixes (e.g. AppleWebKit/537.36 (KHTML, like Gecko) Chrome/... Safari/...).
The consistent identifier is Morningscore Bot/1.0.
Match patterns for identification
Site owners and security systems can identify Morningscore Bot traffic using:
| Method | Pattern |
|---|---|
| Substring | Morningscore Bot/1.0 |
| Regex | Mozilla/5\.0 \(Morningscore Bot/1\.0\) |
| robots.txt User-agent | Morningscore Bot |
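The substring and regex patterns from the table above can be applied in code. The following is a minimal sketch in Python; the function name and the example User-Agent suffix are illustrative, not part of any Morningscore tooling:

```python
import re

# Regex from the table above: the fixed Mozilla/5.0 prefix
# plus the stable "Morningscore Bot/1.0" identifier.
MORNINGSCORE_RE = re.compile(r"Mozilla/5\.0 \(Morningscore Bot/1\.0\)")

def is_morningscore_bot(user_agent: str) -> bool:
    """Return True if the User-Agent belongs to the Morningscore Bot.

    The substring check alone is sufficient, since
    "Morningscore Bot/1.0" is the consistent identifier even when
    browser-like suffixes follow the prefix.
    """
    return "Morningscore Bot/1.0" in user_agent

# Example UA: the required prefix followed by a browser-like suffix.
ua = "Mozilla/5.0 (Morningscore Bot/1.0) AppleWebKit/537.36 (KHTML, like Gecko)"
print(is_morningscore_bot(ua))            # True
print(bool(MORNINGSCORE_RE.search(ua)))   # True
```

The substring match is the simpler and more robust option; the regex is useful where a stricter check against the full prefix is wanted.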
Service Purpose
The Morningscore Bot crawls websites for Search Engine Optimization (SEO) analysis. It:
- Analyzes page structure, meta tags, headings, and content
- Detects broken links and crawlability issues
- Extracts sitemaps and link structure
- Provides recommendations to improve search engine visibility
Crawls are performed only for websites that have been explicitly added to a Morningscore customer account. The bot does not crawl sites without customer authorization. However, we do not verify domain ownership before crawling.
Crawling Etiquette
robots.txt compliance
The Morningscore Bot respects robots.txt:
- Fetches and parses robots.txt before crawling
- Identifies itself as Morningscore Bot for User-agent-specific rules
- Honors all Allow and Disallow directives
- Respects Crawl-delay when specified (capped at 60 seconds between requests)
- Does not crawl paths explicitly disallowed for Morningscore Bot or *
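To illustrate how these rules are consumed, a site owner could add a User-agent group like the following to robots.txt (the paths and delay value are examples only):

```
User-agent: Morningscore Bot
Disallow: /private/
Crawl-delay: 10
```

With this in place, the bot would skip /private/ and wait 10 seconds between requests to the site.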
Crawl rate
- When a site specifies Crawl-delay for Morningscore Bot, the bot limits requests accordingly (up to 60 seconds between requests)
- When no crawl delay is specified, the bot uses adaptive concurrency (typically 2–20 concurrent requests per domain) to avoid overloading servers
- The bot does not attempt to crawl at rates that could be mistaken for a DDoS attack
Paths avoided
The bot avoids crawling:
- Images: .png, .jpg, .jpeg, .gif, .svg, .webp, .avif, .heic, .ico, .bmp, .tiff
- Binary and media files: .pdf, .zip, .mp4, .mp3, .exe, .dmg, and other common non-HTML file extensions
- Calendar links: URLs containing ?ical=, &ical=, or ical=1
- Protocol links: tel: and mailto: links
- robots.txt disallowed paths: any path disallowed for Morningscore Bot or * in robots.txt
The bot focuses on HTML pages and XML sitemaps relevant to SEO analysis.
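The skip rules above (extensions, calendar parameters, and protocol links) can be combined into a single URL filter. This is a hedged sketch under the assumptions stated in the comments; the function name and extension set mirror this section, not Morningscore's code:

```python
from urllib.parse import parse_qs, urlsplit

# Extensions listed in this section; the "other common non-HTML
# extensions" clause is open-ended, so this set is not exhaustive.
SKIPPED_EXTENSIONS = {
    # Images
    ".png", ".jpg", ".jpeg", ".gif", ".svg", ".webp",
    ".avif", ".heic", ".ico", ".bmp", ".tiff",
    # Binary and media files
    ".pdf", ".zip", ".mp4", ".mp3", ".exe", ".dmg",
}


def should_crawl(url: str) -> bool:
    """Return False for URLs the bot avoids: non-HTML files,
    calendar links, and tel:/mailto: protocol links."""
    parts = urlsplit(url)
    # Protocol links
    if parts.scheme in ("tel", "mailto"):
        return False
    # Calendar links: covers ?ical=, &ical=, and ical=1
    # (keep_blank_values catches the empty "ical=" form)
    if "ical" in parse_qs(parts.query, keep_blank_values=True):
        return False
    # Non-HTML file extensions
    path = parts.path.lower()
    if any(path.endswith(ext) for ext in SKIPPED_EXTENSIONS):
        return False
    return True


print(should_crawl("https://example.com/page"))           # True
print(should_crawl("https://example.com/logo.png"))       # False
print(should_crawl("https://example.com/events?ical=1"))  # False
print(should_crawl("mailto:info@example.com"))            # False
```

robots.txt disallow rules are not shown here; in practice they would be checked separately, before any of these per-URL filters.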
Sensitive paths
The bot does not target sensitive or private paths by design. It follows robots.txt and only crawls URLs discovered from sitemaps or links on pages the customer has authorized for analysis.