Web crawlers (also called web spiders or web robots) are programs or scripts that automatically browse websites, collect information and build an index of it.
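For readers curious what such a script looks like in practice, the sketch below is a minimal, hypothetical crawl loop written in Python: it fetches a page, records its text in an index and queues the links it finds for later visits. The start URL, page limit and parsing logic are illustrative assumptions only, not the tooling of any particular company.

```python
# Minimal illustrative crawl loop: fetch a page, index its text,
# then follow the links it contains (standard library only).
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkAndTextParser(HTMLParser):
    """Collects hyperlinks and visible text from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_data(self, data):
        self.text.append(data.strip())


def crawl(start_url, max_pages=10):
    """Breadth-first crawl starting from start_url (values are illustrative)."""
    index = {}                      # url -> extracted text
    queue = deque([start_url])
    seen = {start_url}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        except OSError:
            continue                # skip pages that cannot be fetched
        parser = LinkAndTextParser()
        parser.feed(html)
        index[url] = " ".join(t for t in parser.text if t)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index
```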
In our Data Economy, web crawlers permit companies to obtain and exploit large volumes of data which, once analyzed, can have a significant impact on their productivity and profitability.
The question is: Can web crawlers freely crawl any type of data from any website?
In this edition of our China newsletter, Zhang Beibei and Isabelle Doyon provide answers on the technical and legal limits to the use of web crawlers.