Error when crawling a website

Unfortunately, this sometimes happens. There are pages, or even entire websites, that our crawler cannot access for various technical reasons, most often because of access rules or limitations set by the website itself.

If the crawler gets stuck on a specific page and returns an error, try excluding that page's path in the crawler settings.

In rare cases, when the error prevents the website from being crawled at all, we suggest importing the sitemap.xml file instead, which is usually a valid workaround for building a sitemap. You can either enter the domain URL or provide the direct URL of the sitemap.xml file. It is typically located at www.yourdomainname.com/sitemap.xml, though the exact location may vary for some sites.
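
If you would like to confirm where a site's sitemap lives before importing it, a quick check like the sketch below can help. This is a minimal example using Python's standard library; the domain shown is a placeholder, and the import feature itself accepts either the domain URL or the direct sitemap URL as described above.

```python
# Minimal sketch: check whether a sitemap.xml responds at the default path.
# "www.yourdomainname.com" is a placeholder domain, not a real site.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def find_sitemap(domain):
    """Return the sitemap URL if it responds at the default location, else None."""
    url = f"https://{domain}/sitemap.xml"
    try:
        # Send a HEAD request so we only fetch the status code, not the file body.
        with urlopen(Request(url, method="HEAD"), timeout=10) as response:
            if response.status == 200:
                return url
    except (HTTPError, URLError):
        pass  # Not at the default path; the site may publish it elsewhere.
    return None

print(find_sitemap("www.yourdomainname.com"))
```

If this check finds nothing, the sitemap may be listed under a different path (some sites declare it in their robots.txt file), in which case you can paste that direct URL instead.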