Projects Under Review
Projects to be checked for code, etc.:
- LinkedIn Crawler (Python) (crunchbase_founders.py is actually located at E:\McNair\Projects\Accelerators\Spring 2017\crunchbase_founders.py)
- Mapping on R (need to download)
- SDC Normalizer (this should be fine; can't find the hubs file that he ran it on)
- Start Up Address Finder Algorithm (Tool) (algorithm section not complete; also don't know where the script is located)
- The Matcher (Tool) (works)
- Twitter Follower Finder (Tool) (not sure how to get the database/run this)
- Twitter News Finder (Tool) (location is not specified and there are no run instructions; it could not be found in McNair/Projects/TwitterCrawler either)
- Twitter Webcrawler (Tool) (not sure how to get the database/run this)
- URL Finder (Tool) (notes 1-4 below; see the path-handling sketch after this list)
  1. IOError: File E:\McNair\Projects\Accelerators\cohorts.csv does not exist because the Accelerators folder has been restructured; however, cohorts.csv cannot be found in the restructured folders either.
  2. IOError: File E:\McNair\Software\Scripts\URLFindersCopyabouttest.cvs does not exist.
  3. Works, but out_path in glink, as well as path1read, path1write, and out_path in URL Compiler.py, had to be changed.
  4. The input and output file locations have changed; they are now in the \McNair\Projects\Hubs\summer 2016\Searching folder.
- Whois Parser (works when tested on the Industry Classifier)
- Moroccan Parliament Web Crawler (Scrapy may need to be installed; see the Scrapy note after this list)
- Eventbrite Webcrawler (Tool)
- Govtrack Webcrawler (Tool) (probably works, but the wiki it runs against, http://www.edegan.com/wiki/index.php, is currently unreachable: "edegan.com’s server DNS address could not be found.")
- Industry Classifier (works, but the project page is inaccurate; the file is actually located at McNair/Projects/Accelerators/Spring 2017/Industry_Classifier)
- Interactive Maps - The Whole Process (probably works, but when the Whois Parser is run on it, the output file is empty)
- Google Scholar Crawler (works, but the script is actually named crawler.py and lives in the Google_Scholar_Crawler folder in E:\McNair\Software; the script scholar.py only exists on his GitHub)
- Collecting SBIR Data (the script concat_excel.py is actually at E:\McNair\Projects\SBIR\Data\Aggregate SBIR\concat_excel.py rather than E:\McNair\Projects\SBIR\concat_excel.py; see the concatenation sketch after this list)
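
The IOErrors in URL Finder notes 1 and 2 come from hard-coded input paths that stopped resolving after the folder restructuring. Below is a minimal sketch of one way to keep all paths in one place and fail with a readable message; the paths are taken from the notes above, and the helper itself is hypothetical rather than part of the project's code:

```python
import os
import sys

# Hypothetical helper: keep every input/output path in one place so a folder
# restructure only requires editing these constants (or passing them as arguments).
COHORTS_CSV = r"E:\McNair\Projects\Accelerators\cohorts.csv"    # old location from note 1
SEARCH_DIR = r"E:\McNair\Projects\Hubs\summer 2016\Searching"   # new location from note 4

def require_file(path):
    """Exit with a clear message instead of a bare IOError/FileNotFoundError."""
    if not os.path.isfile(path):
        sys.exit(f"Input file not found: {path} -- update the path constants at the top of this script.")
    return path

if __name__ == "__main__":
    require_file(COHORTS_CSV)
    print("All input files found; safe to run the URL finder steps.")
```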
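
For the Moroccan Parliament Web Crawler, the open question is whether Scrapy is installed on the machine; it can be checked with `python -c "import scrapy; print(scrapy.__version__)"` and installed with `pip install scrapy`. The skeleton below is purely a placeholder showing the structure Scrapy expects; the spider name and URL are not from the project:

```python
import scrapy

class ExampleSpider(scrapy.Spider):
    # Placeholder spider: illustrates the structure Scrapy expects,
    # not the project's actual crawler.
    name = "example"
    start_urls = ["https://example.org"]

    def parse(self, response):
        # Yield the page title as a simple item.
        yield {"title": response.css("title::text").get()}

# Run with: scrapy runspider this_file.py -o output.json
```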
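
The Collecting SBIR Data note above only concerns where concat_excel.py lives; for reference, a script with that name presumably stacks the individual SBIR workbooks into one table. A minimal sketch of that idea using pandas, assuming the workbooks sit in the Aggregate SBIR folder noted above (the output filename is made up):

```python
import glob
import os
import sys

import pandas as pd

DATA_DIR = r"E:\McNair\Projects\SBIR\Data\Aggregate SBIR"   # location noted above
OUT_FILE = os.path.join(DATA_DIR, "sbir_combined.csv")      # hypothetical output name

frames = []
for path in sorted(glob.glob(os.path.join(DATA_DIR, "*.xlsx"))):
    df = pd.read_excel(path)                      # reading .xlsx requires openpyxl
    df["source_file"] = os.path.basename(path)    # keep provenance of each row
    frames.append(df)

if not frames:
    sys.exit(f"No .xlsx files found in {DATA_DIR}")

pd.concat(frames, ignore_index=True).to_csv(OUT_FILE, index=False)
print(f"Wrote {OUT_FILE}")
```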
The pages below need to be made into project pages, or have their content extracted and then turned into a project page.
- Geocode.py (http://mcnair.bakerinstitute.org/wiki/Geocode.py)
- Enclosing_Circle_Algorithm_(Rework)#Step_By_Step_Use (http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm_(Plotting))
  - Just plotting on a map