Revision as of 16:32, 14 November 2017
| Demo Day Page Parser | |
|---|---|
| Project Information | |
| Project Title | Demo Day Page Parser |
| Owner | Peter Jalbert |
| Start Date | |
| Deadline | |
| Primary Billing | |
| Notes | |
| Has project status | Active |
Copyright © 2016 edegan.com. All Rights Reserved.
Project Specs
The goal of this project is to combine Selenium-based web crawling with machine learning to identify good candidate Demo Day web pages for accelerators. Relevant information on the project can be found on the Accelerator Data page.
Code Location
The code directory for this project can be found at:
E:\McNair\Software\Accelerators
The Selenium-based crawler can be found in the file:
DemoDayCrawler.py
This script runs a Google search for each accelerator name combined with keywords, and saves the resulting URLs and HTML pages for later processing.
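The crawler's core loop can be sketched as follows. This is a minimal illustration, not the actual contents of DemoDayCrawler.py: the helper names, the use of Chrome, the output directory, and the default keywords are all assumptions for the example.

```python
# Hypothetical sketch of a Selenium crawler that searches Google for
# accelerator names plus keywords and saves each results page as HTML.
# Function names, keywords, and paths are illustrative assumptions.
import os
from urllib.parse import quote_plus

def build_query(accelerator, keywords=("demo", "day")):
    """Combine an accelerator name with search keywords into one query string."""
    return " ".join([accelerator, *keywords])

def search_url(query):
    """Build a Google search URL for the given query string."""
    return "https://www.google.com/search?q=" + quote_plus(query)

def crawl(accelerators, out_dir="html_pages"):
    """Fetch and save the Google results page for each accelerator."""
    # Imported here so the pure helpers above work without Selenium installed.
    from selenium import webdriver

    os.makedirs(out_dir, exist_ok=True)
    driver = webdriver.Chrome()  # assumes chromedriver is on PATH
    try:
        for name in accelerators:
            driver.get(search_url(build_query(name)))
            # Save the raw results page; a fuller crawler would also
            # extract and follow the result links.
            path = os.path.join(out_dir, name.replace(" ", "_") + ".html")
            with open(path, "w", encoding="utf-8") as f:
                f.write(driver.page_source)
    finally:
        driver.quit()
```

Keeping the query-building helpers separate from the Selenium driver code makes them easy to test and reuse without launching a browser.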
A script to convert HTML to plain text can be found in the file:
htmlToText.py
This script reads HTML files from one directory and writes plain-text versions to another directory.
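The HTML-to-text step can be sketched with the standard library alone. This is an illustrative reimplementation, not the contents of htmlToText.py; the class and function names are assumptions.

```python
# Hypothetical sketch of an HTML-to-text converter using only the
# standard library's html.parser. Names are illustrative assumptions.
import os
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> contents."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    """Return the visible text of an HTML string, one fragment per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

def convert_directory(src_dir, dst_dir):
    """Read every .html file in src_dir and write a .txt twin to dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if name.lower().endswith(".html"):
            src = os.path.join(src_dir, name)
            with open(src, encoding="utf-8", errors="ignore") as f:
                text = html_to_text(f.read())
            out = os.path.splitext(name)[0] + ".txt"
            with open(os.path.join(dst_dir, out), "w", encoding="utf-8") as f:
                f.write(text)
```

A library such as BeautifulSoup would handle malformed pages more robustly, but the stdlib parser keeps the sketch dependency-free.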