The robots.txt file tells web robots (such as site audit tools or search engine crawlers) which of your website's pages they may crawl. It can also be used to tell them which pages not to crawl, or to block specific robots from crawling your site at all.
Besides the exclusion and inclusion rules, your robots.txt will need to include a reference to your website's sitemap. If no reference is found, the site auditor will show a "Missing Sitemap.xml Reference" warning.
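For illustration, a minimal robots.txt that includes a sitemap reference might look like this (the Disallow path and sitemap URL are placeholders; substitute your own):

```
User-agent: *
Disallow: /admin/

Sitemap: https://example.com/sitemap.xml
```

The Sitemap directive takes the full URL of your sitemap file, and can be listed anywhere in robots.txt.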
Verify "Robots.txt not Found" reported by our site auditor
The "Robots.txt not Found" warning reported by our site auditor means that the URL "example.com/robots.txt" could not be found. This usually means that your robots.txt file is located at a different URL (in which case you'd need to move it to your site's root) or that your website is missing the file altogether.
The robots.txt file must be named "robots.txt" and must be located in your website's top-level directory; in other words, at the root of your main domain and of each of your subdomains (if applicable). Note that if you've enabled the "Include Sub-Domains" option in your site auditor settings panel, you'll need a robots.txt file on your main domain as well as on each subdomain; otherwise the crawler will report a "Robots.txt not Found" warning.
Remember that the file name is case sensitive, so it must be all lowercase. For example, your robots.txt file should be located at example.com/robots.txt, not example.com/Robots.txt.
You can easily verify this by typing your main domain or subdomain followed by /robots.txt (e.g. example.com/robots.txt) into your browser's address bar.
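If you prefer to script the check, here is a minimal sketch using only the Python standard library; the function names are illustrative, not part of any auditor API:

```python
from urllib.parse import urlsplit, urlunsplit
from urllib.request import urlopen

def robots_url(site_url: str) -> str:
    """Build the expected robots.txt URL at the root of the given site."""
    parts = urlsplit(site_url)
    # robots.txt must sit in the top-level directory, named in lowercase.
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

def robots_found(site_url: str, timeout: float = 5.0) -> bool:
    """Return True if the site's robots.txt responds with HTTP 200."""
    try:
        with urlopen(robots_url(site_url), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Example: robots_url("https://example.com/some/page")
# yields "https://example.com/robots.txt"
```

Run robots_found once for your main domain and once for each subdomain covered by your audit; any False result corresponds to a URL that would trigger the "Robots.txt not Found" warning.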