Revision as of 10:27, 19 July 2016


McNair Project
URL Finder (Tool)
Copyright © 2016 edegan.com. All Rights Reserved.


Description

Notes: The URL Finder Tool is an automated program that locates, retrieves, and matches URLs to corresponding startup companies using the Google API. It is developed in Python 2.7.

Input: CSV file containing a list of startup company names

Output: Matched URL for each company in the CSV file.

How to Use

1) Assign "path1" = the path of the input CSV file

2) Assign "out_path" = the directory in which to dump all the downloaded JSON files

3) Assign "path2" = the path of the new output file

4) Run the program
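The setup steps above amount to three assignments at the top of the script. A minimal sketch, using the variable names from this page; the file locations are placeholders, not the actual paths used by the project:

```python
# Illustrative placeholder paths; substitute your own locations.
path1 = "companies.csv"             # input CSV of startup company names
out_path = "json_dumps/"            # directory for the downloaded JSON search results
path2 = "companies_with_urls.csv"   # output CSV with matched URLs
```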

Development Notes

7/7: Project start

  • I am using the pandas library to read and write CSV files in order to access the input CSV file. From there, I simplify the company names using several functions from the helper program, glink, which strip company identifiers such as "Co.", "Inc.", and "LLC" and format the names so they are accessible to the Google Search API.
  • I then search each company name with the Google Search API and collect a number of URLs returned by the custom search. All of these URLs are written to a JSON file.
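The two steps above (cleaning the name, then building the search request) might be sketched as follows. The suffix list and function names are illustrative stand-ins, not the actual glink code, and `api_key`/`cx` are assumed to be the user's Google Custom Search credentials:

```python
import urllib.parse

# Illustrative suffix list; glink.remCorp's actual list is not shown on this page.
CORP_SUFFIXES = ("inc", "llc", "co", "corp", "ltd")

def clean_name(name):
    """Drop trailing corporate identifiers so the name searches cleanly."""
    words = [w.strip(",.") for w in name.lower().split()]
    while words and words[-1] in CORP_SUFFIXES:
        words.pop()
    return " ".join(words)

def search_url(name, api_key, cx):
    """Build a Google Custom Search request URL for one cleaned company name."""
    params = urllib.parse.urlencode({"key": api_key, "cx": cx, "q": name})
    return "https://www.googleapis.com/customsearch/v1?" + params
```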

# strip corporate identifiers from each name, then download search results per company
fec['name_clean'] = fec["newname"].map(glink.remCorp)
fec['download_status'] = fec['name_clean'].map(glink.gdownload)

  • Attempted to use the program on 1,500 startup company names but ran into a KeyError with the JSON files: I am not able to access specific keys in each data file.

7/8:

  • Created conditionals for keys in the JSON dictionaries. Successfully ran the tool on my 50 companies and then again on 1,500 companies. Changed the ratio to .75 and higher to elicit URLs that were close but not exact, and got more results.
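The two fixes above (guarding against missing JSON keys, and matching on a similarity ratio with a .75 cutoff) could look something like this. This is a sketch under assumptions: the actual tool's matching function is not shown on this page, and the use of difflib's `SequenceMatcher` for the ratio is a guess:

```python
from difflib import SequenceMatcher

def best_url(company, results, cutoff=0.75):
    """Pick the search-result link whose title best matches the company name.

    `results` is assumed to be the parsed JSON from a Google Custom Search
    response; `.get()` guards against absent keys (the earlier KeyError).
    """
    best, best_score = None, 0.0
    for item in results.get("items", []):   # conditional: key may be missing
        title = item.get("title", "")
        score = SequenceMatcher(None, company.lower(), title.lower()).ratio()
        if score >= cutoff and score > best_score:
            best, best_score = item.get("link"), score
    return best
```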

7/14

  • Created a function, "about_us_url", that takes the URL of a company obtained with the function above and identifies whether the company has an "about" page.
  • The function tests whether the company URL exists with either "about" or "about-us" as the sub-URL. If it does, the new URL is recorded next to the old URL in a new column, "about_us_url".
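The check described above can be sketched as follows. The `url_exists` parameter is a hypothetical injection point (the real tool presumably issues an HTTP request directly); it is passed in here so the liveness check can be swapped out or stubbed:

```python
def about_us_url(url, url_exists):
    """Return the first live 'about' page for `url`, or None if neither exists.

    `url_exists` is a callable taking a URL and returning True/False;
    it stands in for whatever HTTP check the actual tool performs.
    """
    base = url.rstrip("/")
    for suffix in ("/about", "/about-us"):   # the two sub-URLs tried by the tool
        candidate = base + suffix
        if url_exists(candidate):
            return candidate
    return None
```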


7/18

  • Created a function, "company_description", that takes a URL and gives back all of the substantial text blocks on the page (used to find company descriptions).
    • Uses BeautifulSoup to access and explore HTML files.
    • The function explores the HTML source code of the URL and finds all parts of the source code with the <p> tag, which indicates a text paragraph.
    • Then, the function goes through each paragraph, and if it is above a certain number of characters (to eliminate short, unnecessary information), the function adds the description to a new column of the CSV file under "description".
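The paragraph-filtering step above can be sketched with the standard library's HTML parser. This is a stand-in for the tool's BeautifulSoup logic (not shown on this page), and the `min_chars` threshold is an illustrative value, not the tool's actual cutoff:

```python
from html.parser import HTMLParser

class ParagraphExtractor(HTMLParser):
    """Collect the text of <p> tags longer than `min_chars` characters.

    Mirrors the described approach: walk the HTML, keep only substantial
    paragraphs as candidate company descriptions.
    """
    def __init__(self, min_chars=80):
        super().__init__()
        self.min_chars = min_chars
        self.paragraphs = []
        self._in_p = False
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p, self._buf = True, []

    def handle_data(self, data):
        if self._in_p:
            self._buf.append(data)

    def handle_endtag(self, tag):
        if tag == "p" and self._in_p:
            self._in_p = False
            text = "".join(self._buf).strip()
            if len(text) >= self.min_chars:   # drop short, unnecessary blocks
                self.paragraphs.append(text)
```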