With R you can crawl a website and automatically check the technical characteristics of its pages: title and metadata, but also count the number of words, compute the site's internal PageRank, or extract URLs from XML sitemaps.
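
For instance, here is a minimal sketch of those checks using the rvest, xml2 and igraph packages; the example.com URLs and the tiny link list are placeholders, not real crawl data:

    library(rvest)   # HTML crawling and scraping
    library(xml2)    # XML parsing (sitemaps)
    library(igraph)  # graph analysis (internal PageRank)

    # Fetch one page and pull out its technical characteristics
    page       <- read_html("https://example.com/")
    title      <- html_text(html_element(page, "title"))
    meta_desc  <- html_attr(html_element(page, "meta[name='description']"), "content")
    word_count <- length(strsplit(html_text2(html_element(page, "body")), "\\s+")[[1]])

    # Extract every URL listed in an XML sitemap
    sitemap <- read_xml("https://example.com/sitemap.xml")
    urls    <- xml_text(xml_find_all(sitemap, "//*[local-name() = 'loc']"))

    # Internal PageRank from an internal-link edge list (from -> to);
    # a real crawler would produce this data frame for the whole site
    links <- data.frame(from = c("/", "/", "/about"),
                        to   = c("/about", "/contact", "/contact"))
    g  <- graph_from_data_frame(links, directed = TRUE)
    pr <- sort(page_rank(g)$vector, decreasing = TRUE)

page_rank() from igraph computes the classic PageRank scores, so the same edge list your crawler produces is enough to rank pages by internal link equity.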

Many SEO tools such as Semrush, Ahrefs, Botify, OnCrawl, Google Ads, … have APIs. With R you can request their data in batches, for example to process thousands of keywords or check the quality of a hundred domain names all at once. If you run a big website, you can also uncover interesting information in the Googlebot entries of your web server logs, and R can help you cross-check them with other data sources. All the “tidyverse” packages are very handy for dealing with these big datasets.
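
As a hedged sketch of the batching idea with httr: the endpoint, parameter names and API key variable below are made up for the example, since every tool documents its own URL scheme:

    library(httr)

    keywords <- c("r seo", "crawl website", "xml sitemap")

    # One request per keyword; endpoint and parameters are placeholders
    fetch_keyword <- function(kw) {
      resp <- GET("https://api.example-seo-tool.com/v1/keyword",
                  query = list(phrase = kw, key = Sys.getenv("SEO_API_KEY")))
      stop_for_status(resp)          # fail loudly on HTTP errors
      content(resp, as = "parsed")   # parsed JSON as an R list
    }

    # lapply walks the whole keyword list in one batch run
    results <- lapply(keywords, fetch_keyword)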
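
And a sketch of the log cross-check with readr and dplyr; the column names assume a standard combined (Apache/Nginx) access-log layout, and crawl.csv stands in for a hypothetical export from your own crawler:

    library(readr)
    library(dplyr)

    # read_log() understands the quoting and bracket conventions of
    # access logs; these names assume the usual combined log layout
    logs <- read_log("access.log",
                     col_names = c("ip", "ident", "user", "timestamp",
                                   "request", "status", "bytes",
                                   "referer", "user_agent"))

    # Keep only Googlebot hits and count them per requested URL
    googlebot <- logs %>%
      filter(grepl("Googlebot", user_agent)) %>%
      mutate(url = sub("^\\S+ (\\S+).*$", "\\1", request)) %>%
      count(url, sort = TRUE)

    # Cross-check with another data source, e.g. your own crawl export:
    # crawl <- read_csv("crawl.csv")            # hypothetical file
    # left_join(crawl, googlebot, by = "url")   # NA = never hit by Googlebot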
