== Changes ==

This crawler was frequently blocked because it queried Google directly and parsed the results with Beautiful Soup. In addition, this implementation would only collect eight results for each location. To prevent the crawler from being blocked and to collect more results, we decided to switch to Selenium.
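For reference, the blocked approach looked roughly like the following sketch: a direct request to Google whose response is parsed with Beautiful Soup. This is an illustrative reconstruction rather than the actual incubator_scrape_data.py code; the requests/bs4 imports, the query parameters, the User-Agent header, the example search term, and the <h3> selector are all assumptions.

<pre>
# Illustrative reconstruction (not the project's actual code) of a direct
# Google query parsed with Beautiful Soup -- the approach that was
# frequently blocked and returned only the first page of results.
import requests
from bs4 import BeautifulSoup

def scrape_google(search_term):
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": search_term},
        headers={"User-Agent": "Mozilla/5.0"},  # assumption: browser-like UA
    )
    soup = BeautifulSoup(resp.text, "html.parser")
    # Result titles are typically wrapped in <h3> tags (an assumption about
    # Google's markup); only the first page is parsed, hence only ~8 results.
    return [h3.get_text() for h3 in soup.select("h3")]

print(scrape_google("incubator Houston TX"))  # hypothetical example query
</pre>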
 
== Things to note/What needs work ==
The scraper coded using BeautifulSoup does not work; it is frequently blocked by Google. The scraper coded using Selenium pushes a prebuilt URL into Google rather than typing the search term into the search box and hitting enter. The Selenium script also does not collect results from multiple pages; I believe it currently only collects results from the first page.
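Below is a minimal sketch, not the project's actual incubator_selenium_scrape.py, of how the Selenium script could type the query and hit Enter and then click through several result pages. The element identifiers ("q" for the search box, "pnnext" for the Next link, <h3> for result titles), the fixed sleep, and the example query are assumptions about Google's markup and may need adjusting.

<pre>
# Sketch only: type the search term, press Enter, then paginate.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException
import time

driver = webdriver.Chrome()  # assumes chromedriver is on the PATH
driver.get("https://www.google.com")

# Type the search term into the search box and hit Enter, instead of
# pushing a prebuilt results URL into the browser.
search_box = driver.find_element_by_name("q")  # "q" is an assumption
search_box.send_keys("incubator Houston TX")   # hypothetical example query
search_box.send_keys(Keys.RETURN)

results = []
for _ in range(3):  # collect the first three pages rather than just one
    time.sleep(2)   # crude wait for the results to render
    results.extend(h3.text for h3 in driver.find_elements_by_tag_name("h3"))
    try:
        driver.find_element_by_id("pnnext").click()  # Google's "Next" link
    except NoSuchElementException:
        break  # no further pages

driver.quit()
print(results)
</pre>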
 
== How to Run ==
The scripts incubator_scrape_data.py and incubator_selenium_scrape.py were coded on a Mac in a virtualenv using Python 3.6.5.
The following packages were loaded into the environment for the Selenium script (a quick version check is sketched after the list):
* numpy 1.16.2
* pandas 0.24.2
* pip 19.1.1
* python-dateutil 2.8.0
* pytz 2019.1
* selenium 3.141.0
* setuptools 41.0.0
* six 1.12.0
* urllib3 1.24.1
* wheel 0.33.1
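
An optional way to confirm that the virtualenv matches these versions before running incubator_selenium_scrape.py is a short check along these lines (not part of the original scripts); the expected version strings are taken from the list above.

<pre>
# Optional sanity check of the virtualenv used for the Selenium script.
import numpy
import pandas
import selenium

print("numpy   ", numpy.__version__)     # expected 1.16.2
print("pandas  ", pandas.__version__)    # expected 0.24.2
print("selenium", selenium.__version__)  # expected 3.141.0
</pre>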