Disallow Rule Must Start with a Slash
When you specify a path in the robots.txt file, the rule must start with a slash, not a * wildcard. This has always been the case, but it was only recently added to the documentation and the Search Console testing tool.
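For illustration (the paths here are hypothetical), a rule the testing tool accepts versus one it would reject:

```
User-agent: *
# Valid: the path begins with a slash
Disallow: /private/

# Invalid: a path with no leading slash, such as "private/" or
# "*private", will not be accepted by the testing tool
```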
Disallowed URLs Can Be Indexed
Even if a URL is disallowed in robots.txt, it can still show up in the index.
Disallow Doesn’t Prevent Indexing
A disallowed URL can still be indexed and shown in search results if Google has sufficient external signals pointing to it.
Disallow Prevents PageRank from Being Passed
PageRank can be inherited by a disallowed URL, but it can’t be passed on.
You Can Escape URLs in Robots.txt
In robots.txt you can percent-escape URLs if you want; the escaped and unescaped forms are treated as equivalent.
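One way to see the equivalence is with Python’s standard library (a sketch of percent-encoding itself, not of Google’s robots.txt parser; the path is hypothetical):

```python
from urllib.parse import quote, unquote

# A percent-escaped path and its literal form decode to the same string,
# which is why the two spellings act as equivalent rules.
escaped = "/caf%C3%A9/"
literal = "/café/"

print(unquote(escaped) == literal)          # decoding the escaped form gives the literal
print(quote(literal, safe="/") == escaped)  # encoding the literal gives the escaped form
```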
Submit Updated Robots.txt via Search Console
If you submit your robots.txt file via the Search Console robots.txt testing tool, Google will refetch it immediately instead of waiting for the normal daily check.
Noindex Pages Can’t Accumulate PageRank
Noindex pages can’t accumulate PageRank for the site, even though they can be crawled, so this isn’t an advantage over disallowing.
Use Disallow to Improve Crawling Efficiency
John recommends against using robots.txt for this, because it prevents Google from consolidating authority signals, but he notes there are occasions when crawling efficiency is more important.
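As a hypothetical example of the crawl-efficiency case, blocking an internal-search URL space that can generate near-endless parameter combinations (the paths and parameter name are illustrative, not from the source):

```
User-agent: *
# Hypothetical: internal search results and sort parameters can produce
# huge numbers of URL variants that waste crawl budget
Disallow: /search/
Disallow: /*?sort=
```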
Disallowed URLs Don’t Pass PageRank
If a URL is disallowed in robots.txt, it won’t be crawled, and therefore can’t pass any PageRank.