The information in this article is relevant only for Pro users at this time. For information about booking websites for Lite users, refer to the Guesty Booking Engine.
Some site owners choose to block AI crawlers, such as ChatGPT and Bard, from crawling their sites in order to prevent them from learning from or using their website content. You can block these AI user-agents in much the same way you would block Google crawlers: by replacing the default robots.txt file with a new file that specifies disallow rules for specific AI user-agents.
Warning
The Guesty Website platform does not validate custom files. For example, if a corrupt file is uploaded, it will still be served.
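Because uploads are served as-is, it is worth sanity-checking a finished robots.txt locally before uploading it. The minimal sketch below, using Python's standard-library urllib.robotparser, checks the rules created in the steps that follow; the file path and the SomeOtherBot agent name are placeholders.
from urllib import robotparser

# Parse the local file before uploading; the platform will not validate it.
with open("robots.txt") as f:
    parser = robotparser.RobotFileParser()
    parser.parse(f.read().splitlines())

# Blocked agents should print False; any other agent should print True.
for agent in ("GPTBot", "Google-Extended", "SomeOtherBot"):
    print(agent, parser.can_fetch(agent, "https://www.example.com/"))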
To block both ChatGPT and Google-Extended crawlers:
- Create a new robots.txt file. We recommend following Google’s instructions on how to create a robots.txt file.
- Add the following code to the new robots.txt file. Note that crawlers process robots.txt from top to bottom, so we do not recommend adding the wildcard group at the top; place it last, after the specific user-agents.
# Sitemap is also available on /sitemap.xml
Sitemap: http://www.example.com/sitemap.xml

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
# An empty Disallow leaves all other crawlers unrestricted
Disallow:
- (Optional) If you need to add other groups, follow the same format:
User-agent: ????
Disallow: /
Add each new group before the wildcard User-agent: * group, as in the example below.
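For example, to also block Common Crawl's crawler, you could add the following group above the wildcard group (CCBot is Common Crawl's published user-agent token; confirm the exact token in each crawler's own documentation):
User-agent: CCBot
Disallow: /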
- Replace the default robots.txt file with the new file. To learn how, see Custom sitemap, robots.txt & other files. Note that in order to replace the default file, the Source URL must match the file name exactly.
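Once the new file is live, you can confirm that it is served and parsed as expected. Below is a minimal sketch, again using Python's standard-library urllib.robotparser; the domain is a placeholder for your own site.
from urllib import robotparser

# Fetch and parse the deployed robots.txt (replace the domain with your own).
parser = robotparser.RobotFileParser("https://www.example.com/robots.txt")
parser.read()

# Both blocked crawlers should report False for any page on the site.
print(parser.can_fetch("GPTBot", "https://www.example.com/"))           # expected: False
print(parser.can_fetch("Google-Extended", "https://www.example.com/"))  # expected: False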