There are four types of URL Finders; all obtain URLs, but by different methods and for distinct purposes. Some of this work was recompiled and edited during Summer 2018. See below for more information.
All of the URL Finders are in Bulk->McNair->Software->Scripts->URL Finders:
*<code>E:\McNair\Software\Scripts\URLFinders\URL Compiler.py</code>
*<code>E:\McNair\Software\Scripts\URLFinders\Specific Search URL Finder.py</code>
 
=Summer 2018 URL Finder work=
Excel master datasets are in:
<code>E:\McNair\Projects\Accelerators\Summer 2018</code>

Code and files specific to this URL finder are in:
<code>E:\McNair\Projects\Accelerators\Summer 2018\url finder</code>
 
====Results====
I used STEP1_crawl.py and STEP2_findcorrecturl.py to add approximately 1,000 more URLs to 'The File to Rule Them All.xlsx'.
 
====Testing====
 
In this file (sheet 'Most Recent Merged Data'; note that this is just a copy of 'Cohorts Final' in 'The File to Rule Them All'):
<code>E:\McNair\Projects\Accelerators\Summer 2018\Merged W Crunchbase Data as of July 17.xlx</code>
 
We filter for companies (~4,000) that did not receive VC, are not in Crunchbase, and do not have URLs.
Using a Google crawler (STEP1_crawl.py) and a URL matching script (STEP2_findcorrecturl.py), we try to find as many URLs as possible. A rough sketch of the filtering step is shown below.
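For reference, here is a minimal pandas sketch of that filtering step. The file name and column names in it ('merged_data.xlsx', 'Received VC', 'In Crunchbase', 'URL', 'Company Name') are placeholders, not the actual dataset's names, since the real sheet layout is not documented here:

<pre>
# A rough sketch of the filtering step using pandas. The file name and the
# column names below ('merged_data.xlsx', 'Received VC', 'In Crunchbase',
# 'URL', 'Company Name') are placeholders, NOT the dataset's actual names.
import pandas as pd

df = pd.read_excel("merged_data.xlsx",
                   sheet_name="Most Recent Merged Data")

# Keep companies with no VC, no Crunchbase record, and no URL.
needed = df[(df["Received VC"] == 0)
            & (df["In Crunchbase"] == 0)
            & (df["URL"].isna())]

# Write one company name per line for the crawler to read.
needed["Company Name"].to_csv("ACTUALNEEDEDCOMPANIES.txt",
                              index=False, header=False)
</pre>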
 
To test, I ran about 40 companies from "smallcompanylist.txt", using only the company name as the search term and taking the first 4 valid results (see the DONT_COLLECT list in the code). The Google crawler and URL matcher were able to correctly identify around 20 URLs. It also misidentifies some URLs that look very similar to the company name, but it is accurate for the most part if the name is not too generic. I then ran the 20 unfound company names through the crawler again, this time using the company name plus "startup" as the search term. This did not identify any more correct URLs.
 
It seems reasonable to assume that if a company's URL cannot be found within the first 4 valid search results, then that company probably does not have a URL at all. This is the case for many of the 20 unfound URLs from my test run above.
 
====Actual Run Info====
 
The companies we needed to find URLs for are in a file called 'ACTUALNEEDEDCOMPANIES.txt'.
 
The first four results for every company, as found by STEP1_crawl.py, are in 'ACTUAL_crawled_company_urls.txt'.
 
The results after the matching done by STEP2_findcorrecturl.py are in 'ACTUAL_finalurls.txt'.
 
Note that in the end, I decided to take only URLs that were given a match score greater than 0.9, by setting this restriction in STEP2_findcorrecturl.py. I then manually removed any duplicates and inaccurate results. If you want, you can set the threshold lower in STEP2 and use STEP3_clean.py to find the URL with the highest score for each company.
 
The point of this URL finder is to find timing information for companies. Timing information can be found on Whois; see http://mcnair.bakerinstitute.org/wiki/Whois_Parser#Summer_2018_Work for information on running the Whois parser.
 
====Using Python files====
'''To use STEP1_crawl.py''':
INPUT: a list of company names (or anything else) you would like to find websites for by searching Google
OUTPUT: a list of company names and the top X results from Google for each
 
1. Change LIST_FILEPATH in line 26 to be the name of the file that contains the list of things you would like to search.
 
2. Change NUMRESULT to be however many results you would like from Google.
 
3. Adjust DONT_COLLECT to include any websites that you don't want collected as results.
 
4. If you would like to add another search keyword, add it on line 87, which is <code>queries.append(name + "whatever you want here")</code>.
 
5. Change line 127 to be the name of your output file. A rough sketch of the whole STEP1 workflow is shown below.
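For reference, here is a minimal sketch of what a STEP1-style crawler does. It is not the original script: it assumes the third-party googlesearch package (pip install googlesearch-python), and the DONT_COLLECT domains shown are just examples:

<pre>
# A rough sketch of a STEP1-style crawler, NOT the original script.
# It assumes the third-party 'googlesearch' package
# (pip install googlesearch-python); STEP1_crawl.py may crawl differently.
from googlesearch import search

LIST_FILEPATH = "smallcompanylist.txt"    # step 1: file of names to search
NUMRESULT = 4                             # step 2: results to keep per name
DONT_COLLECT = ("linkedin.com", "facebook.com",   # step 3: domains to skip
                "crunchbase.com", "wikipedia.org")
OUTPUT_FILE = "crawled_company_urls.txt"  # step 5: output file name

with open(LIST_FILEPATH) as names, open(OUTPUT_FILE, "w") as out:
    for name in (line.strip() for line in names if line.strip()):
        queries = [name]
        # queries.append(name + " startup")   # step 4: extra search keyword
        hits = []
        for query in queries:
            for url in search(query, num_results=10):
                if any(bad in url for bad in DONT_COLLECT):
                    continue                  # skip unwanted sites
                hits.append(url)
                if len(hits) >= NUMRESULT:
                    break
            if len(hits) >= NUMRESULT:
                break
        out.write(name + "\t" + "\t".join(hits) + "\n")
</pre>

Note that Google rate-limits automated searches, so for long company lists you may need to add a sleep between queries.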
 
'''To use STEP2_findcorrecturl.py''':
INPUT: the output file from STEP1
OUTPUT: a file formatted the same as the output of STEP1, but URLs that do not match above the threshold you set are replaced with "no match"
 
1. Change file f to be the output file name from STEP1. Change g to be the desired name of the output file for this part.
 
2. In the if statement on line 44, set your desired threshold. Anything greater than 0.6 is generally considered a decent match; 0.75 is safer, and my use of 0.9 ensures almost exact matches. However, if your list of companies (or whatever you are searching) includes very common names, then even a 90%+ match might not be what you're looking for. A rough sketch of the matching logic is shown below.
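For reference, here is a minimal sketch of STEP2-style matching. The 0.6 rule of thumb above matches difflib's own convention for SequenceMatcher ratios, so this sketch scores each URL's domain against the company name with difflib; whether the real STEP2_findcorrecturl.py computes its score the same way is an assumption:

<pre>
# A rough sketch of STEP2-style matching, NOT the original script. It scores
# each URL's domain against the company name with difflib (whose docs treat
# a ratio above 0.6 as a close match); whether STEP2_findcorrecturl.py
# computes its score this way is an assumption.
import difflib
from urllib.parse import urlparse

THRESHOLD = 0.9   # the line-44-style cutoff; lower it for more, noisier hits

f = open("ACTUAL_crawled_company_urls.txt")   # output file from STEP1
g = open("finalurls_sketch.txt", "w")         # output file for this part

for line in f:
    fields = line.rstrip("\n").split("\t")
    name, urls = fields[0], fields[1:]
    results = []
    for url in urls:
        domain = urlparse(url).netloc.replace("www.", "").split(".")[0]
        score = difflib.SequenceMatcher(None,
                                        name.lower().replace(" ", ""),
                                        domain.lower()).ratio()
        # Write the ratio score too, so a STEP3-style cleaner can use it.
        results.append(f"{url} ({score:.2f})" if score > THRESHOLD
                       else "no match")
    g.write(name + "\t" + "\t".join(results) + "\n")

f.close()
g.close()
</pre>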
 
'''To use STEP3_clean.py''':
 
Note that this is an optional step; whether to use it depends on the accuracy level you need and what kind of data you crawled earlier. I chose not to use it and instead set a more restrictive threshold in STEP2.
 
1. Change file f to be the output file from STEP2 (you should delete anything that says "no match", and when you run STEP2 you must also write the ratio score to the text file). Change g to be the desired name of the output file for this part.
 
The output should be a text file containing each company name and the URL that received the highest score in STEP2. In case of more than one URL with the highest score, the script takes the first one. A rough sketch of this cleaning step is shown below.
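For reference, here is a minimal sketch of this cleaning step, assuming the tab-separated name/URL/score layout produced by the STEP2 sketch above; the real STEP3_clean.py may expect a different format:

<pre>
# A rough sketch of STEP3-style cleaning, NOT the original script. It assumes
# STEP2 output in the tab-separated "name\turl (score)\t..." form produced
# by the sketch above; the real STEP3_clean.py may expect a different layout.
import re

f = open("finalurls_sketch.txt")         # output file from STEP2, with scores
g = open("cleaned_urls_sketch.txt", "w") # output file for this part

for line in f:
    fields = line.rstrip("\n").split("\t")
    name, candidates = fields[0], fields[1:]
    best_url, best_score = "", -1.0
    for cand in candidates:
        m = re.match(r"(\S+) \(([\d.]+)\)$", cand)
        if not m:
            continue                     # skips any leftover "no match"
        url, score = m.group(1), float(m.group(2))
        if score > best_score:           # strict '>' keeps the first on ties
            best_url, best_score = url, score
    g.write(name + "\t" + best_url + "\n")

f.close()
g.close()
</pre>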
=URL FINDER #1 - URL Matcher=