Google wants the 25-year-old Robots Exclusion Protocol (robots.txt) to become "THE" official Internet standard and has released the source code of its robots.txt parser on GitHub. Is this action coming from Google's interest in improving the web, or is Google worried that a third party will develop something popular that does not completely align with Google's interests?
Regardless, standardizing how crawlers interpret robots.txt has implications across the web, especially for sites that tune their robots.txt files to influence search-engine rankings.
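For context, a minimal robots.txt file looks something like the sketch below. The User-agent, Allow, and Disallow directives are the core rules Google's draft spec formalizes; the paths and the site layout here are hypothetical examples, not anything from the article. Part of the motivation for a formal standard is that the original 1994 convention left corner cases undefined, so different crawlers have interpreted the same file differently.

    # Hypothetical example: allow all crawlers everywhere,
    # except for one directory we don't want indexed.
    User-agent: *
    Disallow: /private/

    # Rules can also target a specific crawler by name;
    # the most specific matching group wins for that crawler.
    User-agent: Googlebot
    Allow: /private/press/
    Disallow: /private/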
https://thenextweb.com/google/2019/07/02/google-wants-to-make-the-25-year-old-robots-txt-protocol-an-internet-standard/ [thenextweb.com]