Difference between revisions of "Peter Jalbert (Work Log)"

From edegan.com
<font size="5">'''Fall 2016'''</font>

===Fall 2017===

<onlyinclude>

09/27/2016 15:00-18:00: Set up Staff wiki page, work log page; registered for Slack, Microsoft Remote Desktop; downloaded Selenium on personal computer, read Selenium docs. Created wiki page for Moroccan Web Driver Project.

[[Peter Jalbert]] [[Work Logs]] [[Peter Jalbert (Work Log)|(log page)]]

09/29/2016 15:00-18:00: Re-enrolled in Microsoft Remote Desktop with proper authentication, set up the Selenium environment and Komodo IDE on the Remote Desktop, and wrote a Selenium program that goes to a link and opens the print dialog box. Developed a computational recipe for a different approach to the problem.

2017-12-21: Last-minute adjustments to the Moroccan Data. Continued working on [[Selenium Documentation]].

09/30/2016 12:00-14:00: The Selenium program selects the view-PDF option on the website and goes to the PDF webpage. The program then switches its handle to the new page. CTRL+S is sent to the page to launch the save dialog window, but text cannot be sent to that window. Brainstormed ways around this issue: explored Chrome options for saving automatically without a dialog window, and looked into other libraries besides Selenium that may help.
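One way around the save-dialog problem mentioned above is to configure Chrome to download files to a fixed directory without prompting. A minimal sketch, assuming recent Selenium and Chrome (the directory path and the choice to open PDFs externally are illustrative, not the project's actual settings):

```python
def chrome_download_prefs(download_dir):
    """Chrome preferences that save files straight to download_dir,
    skip the save dialog, and download PDFs instead of previewing them.
    The directory path is whatever the caller chooses."""
    return {
        "download.default_directory": download_dir,
        "download.prompt_for_download": False,
        "plugins.always_open_pdf_externally": True,
    }


def make_driver(download_dir):
    """Build a Chrome driver with no-dialog download behavior.
    Selenium is imported inside the function so chrome_download_prefs
    stays usable without Selenium installed."""
    from selenium import webdriver

    opts = webdriver.ChromeOptions()
    opts.add_experimental_option("prefs", chrome_download_prefs(download_dir))
    return webdriver.Chrome(options=opts)
```

With this in place, navigating to a PDF URL downloads the file directly and no CTRL+S is needed.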
2017-12-20: Working on Selenium Documentation. Wrote 2 demo files. The wiki page is available [http://www.edegan.com/wiki/Selenium_Documentation here]. Created 3 spreadsheets for the Moroccan data.

10/3/2016 13:00 - 16:00: Moroccan Web Driver projects completed for driving the Monarchy proposed bills, the House of Representatives proposed bills, and the Ratified bills sites. Began devising a naming system for the files that does not require scraping; tinkered with naming through regular expression parsing of the URL. The structure for the oral questions and written questions drivers is set up, but needs fixes due to differences between the sites. Fixed a bug on the McNair wiki for the women's biz team where an email was plain text instead of an email link. Took a glimpse at the Kuwait Parliament website, and it appears to be very different from the Moroccan setup.

2017-12-19: Finished fixing the Demo Day Crawler. Changed and installed files as appropriate to make the LinkedIn crawler compatible with the RDP. Removed some of the bells and whistles.

10/6/2016 13:30 - 18:00: Discussed with Dr. Elbadawy the desired file names for the Moroccan data download. The consensus was that the bill programs are ready to launch once the files can be named properly, and that the questions data must be retrieved using a web crawler, which I need to learn how to implement. The naming of files currently draws errors in going from Arabic, to URL, to download, to filename; debugging is in process. Also built a demo Selenium program for Dr. Egan that drives the McNair blog site on an infinite loop.

2017-12-18: Continued finding errors with the Demo Day Crawler analysis. Rewrote the parser to remove any search terms that were in the top 10,000 most common English words according to Google. Finished uploading and submitting Moroccan data.
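The common-word filtering step above can be sketched as a set lookup (the word list here is a tiny stand-in for Google's 10,000-word list):

```python
def drop_common_terms(terms, common_words):
    """Remove search terms that appear in a common-words list, as in the
    Demo Day parser rewrite. common_words is any iterable of words; the
    real run used Google's 10,000 most common English words."""
    common = {w.lower() for w in common_words}
    return [t for t in terms if t.lower() not in common]
```

Matching is case-insensitive so that "The" and "the" are treated alike.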
  
10/7/2016 12:00 - 14:00: Learned Unicode and UTF-8 encoding and decoding for Arabic text. Still working on transforming an ASCII URL into printable Unicode.
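The ASCII-to-Unicode step amounts to percent-decoding the UTF-8 bytes embedded in the URL, which the standard library handles directly (the sample URL below is made up; real bill URLs differ):

```python
from urllib.parse import unquote


def filename_from_url(url):
    """Recover a printable Unicode filename from a percent-encoded URL
    by decoding its last path segment."""
    last_segment = url.rstrip("/").rsplit("/", 1)[-1]
    return unquote(last_segment)  # %-escapes -> UTF-8 bytes -> str
```

`unquote` decodes as UTF-8 by default, which is what Arabic page URLs typically use.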
2017-12-15: Found errors with the Demo Day Crawler. Fixed scripts to download Moroccan Law Data.

10/11/2016 15:00 - 18:00: Fixed the Arabic bug; files can now be saved with Arabic titles. Monarchy bills downloaded and ready for shipment, House of Representatives bills mostly downloaded, ratified bills prepared for download. Started learning the scrapy library in Python for web scraping. Discussed the idea of screenshotting questions instead of scraping.

2017-12-14: Uploading Morocco Parliament Written Questions. Creating script for next Morocco Parliament download. Began writing Selenium documentation. Continuing to download TIGER data.

10/13/2016 13:00-18:00: Completed download of Moroccan Bills. Working on either a web driver screenshot approach or a web crawler approach to download the Moroccan oral and written questions data. Began building a web crawler for the Oral and Written Questions site. Edited the Moroccan Web Driver/Crawler wiki page. [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]

2017-12-06: Running Morocco Parliament Written Questions script. Analyzing Demo Day Crawler results. Continued downloading for TIGER geocoder.

10/14/2016 12:00-14:00: Finished the Oral Questions crawler and the Written Questions crawler. Waiting for further details on whether that data needs to be tweaked in any way. Updated the Moroccan Web Driver/Web Crawler wiki page. [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]

2017-11-28: Debugging Morocco Parliament Crawler. Running Demo Day Crawler for all accelerators and 10 pages per accelerator. TIGER geocoder is back to a Forbidden Error.

10/18/2016 15:00-18:30: Finished code for the Oral Questions web driver and Written Questions web driver using Selenium. Now the dates of questions can be found using the crawler, and the PDFs of the questions will be downloaded using Selenium. [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]

2017-11-27: Rerunning Morocco Parliament Crawler. Fixed KeyTerms.py and running it again. Continued downloading for TIGER geocoder.

10/20/2016 13:00-18:00: Continued to download data for the Moroccan Parliament Written and Oral Questions. Updated wiki page. Started working on the Twitter project with Christy. [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]

2017-11-20: Continued running the [http://www.edegan.com/wiki/Demo_Day_Page_Parser Demo Day Page Parser]. Fixed KeyTerms.py and trying to run it again. The Forbidden Error continues with the TIGER Geocoder. Began image download for Image Classification on cohort pages. Clarifying specs for the Morocco Parliament crawler.

10/21/2016 12:00-14:00: Continued to download data for the Moroccan Parliament Written and Oral Questions. Looked over [http://mcnair.bakerinstitute.org/wiki/Christy_Warden_(Twitter_Crawler_Application_1) Christy's Twitter Crawler] to see how I can be helpful. Dr. Egan asked me to think about how to potentially make multiple tools to get cohorts and other sorts of data from accelerator sites (see the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator List]). He also asked me to look at the [http://mcnair.bakerinstitute.org/wiki/Govtrack_Webcrawler_(Wiki_Page) GovTrack Web Crawler] for potential ideas on how to bring this project to fruition.

2017-11-16: Continued running the [http://www.edegan.com/wiki/Demo_Day_Page_Parser Demo Day Page Parser]. Fixed KeyTerms.py and trying to run it again. The Forbidden Error continues with the TIGER Geocoder. Began image download for Image Classification on cohort pages. Clarifying specs for the Morocco Parliament crawler.

11/1/2016 15:00-18:00: Continued to download Moroccan data in the background. Went over code for the GovTrack Web Crawler, continued learning Perl. [http://mcnair.bakerinstitute.org/wiki/Govtrack_Webcrawler_(Wiki_Page) GovTrack Web Crawler] Began the Kuwait Web Crawler/Driver.

2017-11-15: Continued running the [http://www.edegan.com/wiki/Demo_Day_Page_Parser Demo Day Page Parser]. Wrote a script to extract counts that were greater than 2 from the Keyword Matcher. Continued downloading for the [http://www.edegan.com/wiki/Tiger_Geocoder TIGER Geocoder]. Finished re-formatting work logs.

11/3/2016 13:00-18:00: Continued to download Moroccan data in the background. Dr. Egan fixed systems requirements to run the GovTrack Web Crawler. Made significant progress on the Kuwait Web Crawler/Driver for the Middle East Studies Department.

2017-11-14: Continued running the [http://www.edegan.com/wiki/Demo_Day_Page_Parser Demo Day Page Parser]. Wrote an HTML-to-text parser; see the Parser Demo Day Page for the file location. Continued downloading for the [http://www.edegan.com/wiki/Tiger_Geocoder TIGER Geocoder].
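An HTML-to-text parser of the kind described above can be built on the standard library's `html.parser` alone; this sketch collects visible text and skips script/style contents (a simplification of whatever the project file actually does):

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect visible text from HTML, ignoring script and style blocks."""

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside <script> or <style>
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def html_to_text(html):
    """Return the visible text of an HTML string, space-joined."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

This keeps no layout information, which is fine when the goal is downstream keyword matching.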
  
11/4/2016 12:00-14:00: Continued to download Moroccan data in the background. Finished writing the initial Kuwait Web Crawler/Driver for the Middle East Studies Department, which then asked for additional embedded files from the Kuwait website. [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]

2017-11-13: Built the [http://www.edegan.com/wiki/Demo_Day_Page_Parser Demo Day Page Parser].

11/8/2016 15:00-18:00: Continued to download Moroccan data in the background. Finished writing code for the embedded files on the Kuwait site. Spent time debugging frame errors due to the dynamically generated content. Never found an answer to the bug; instead found a workaround that sacrificed run time for the ability to work. [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]

2017-11-09: Running demo version of the Demo Day crawler (Accelerator Google Crawler). Fixing work log format.

11/10/2016 13:00-18:00: Continued to download Moroccan data and Kuwait data in the background. Began work on the [http://mcnair.bakerinstitute.org/wiki/Google_Scholar_Crawler Google Scholar Crawler]. Wrote a crawler for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Project] to get the HTML files of hundreds of accelerators. The crawler ended up failing; it appears to have been due to HTTPS.

2017-11-07: Created a file with 0s and 1s detailing whether Crunchbase has the founder information for an accelerator. Details posted as a TODO on the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List] page. Still waiting for feedback on the PostGIS installation from the [http://www.edegan.com/wiki/Tiger_Geocoder Tiger Geocoder]. Continued working on the Accelerator Google Crawler.

11/11/2016 12:00-2:00: Continued to download Moroccan data in the background. Attempted to find bug fixes for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Project] crawler.

2017-11-06: Contacted the Geography Center for the US Census Bureau, [https://www.census.gov/geo/about/contact.html here], and began an email exchange on PostGIS installation problems. Began working on the [http://www.edegan.com/wiki/Selenium_Documentation Selenium Documentation]. Also began working on an Accelerator Google Crawler that will be used with Yang and ML to find Demo Days for cohort companies.

11/15/2016 15:00-18:00: Finished download of Moroccan Written Question PDFs. Wrote a parser with Christy to be used for parsing bills from Congress and eventually executive orders. Found a bug in the system Python that was worked out and rebooted.

2017-11-01: Attempted to continue downloading, but ran into HTTP Forbidden errors. Listed the errors on the [http://www.edegan.com/wiki/Tiger_Geocoder Tiger Geocoder page].

11/17/2016 13:00-18:00: Wrote a crawler to retrieve information about executive orders and their corresponding PDFs. They can be found [http://mcnair.bakerinstitute.org/wiki/E%26I_Governance_Policy_Report here]. Next step is to run code to convert the PDFs to text files, then use the parser fixed by Christy.

2017-10-31: Began downloading blocks of data for individual states for the [http://www.edegan.com/wiki/Tiger_Geocoder Tiger Geocoder] project. Wrote out the new wiki page for installation, and began writing documentation on usage.

11/18/2016 12:00-2:00: Converted Executive Order PDFs to text files using Adobe Acrobat DC. See the [http://mcnair.bakerinstitute.org/wiki/E%26I_Governance_Policy_Report wiki page] for details.

2017-10-30: With Ed's help, was able to get the national data from TIGER installed onto a database server. The process required much jumping around and changing users; everything we learned is outlined in [http://www.edegan.com/wiki/Database_Server_Documentation#Editing_Users the database server documentation] under "Editing Users".

11/22/2016 15:00-18:00: Transferred downloaded Morocco Written Bills to the provided SeaGate drive. Made a "gentle" F6S crawler to retrieve HTMLs of possible accelerator pages, documented [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) here].

2017-10-25: Continued working on the [http://www.edegan.com/wiki/PostGIS_Installation Tiger Geocoder installation].
  
11/29/2016 15:00-18:00: Began pulling data from the accelerators listed [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) here]. Made text files for about 18 accelerators.
2017-10-24: Threw some addresses into a database to try the address normalizer and geocoder. Some installation may be needed; details on the installation process can be found on the [http://www.edegan.com/wiki/PostGIS_Installation PostGIS Installation page].
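The normalizer and geocoder are SQL functions shipped with the PostGIS TIGER geocoder extension. A sketch of the round trip from Python, assuming an open DB-API connection (e.g. psycopg2) to the database where the extension is installed; the one-result limit and cursor handling are assumptions:

```python
# SQL for the PostGIS TIGER geocoder extension (see the PostGIS
# Installation page). normalize_address and geocode are extension
# functions; any address passed in is placeholder data.
NORMALIZE_SQL = "SELECT pprint_addy(normalize_address(%s));"
GEOCODE_SQL = (
    "SELECT g.rating, ST_X(g.geomout) AS lon, ST_Y(g.geomout) AS lat "
    "FROM geocode(%s, 1) AS g;"  # 1 = max_results
)


def geocode_address(conn, address):
    """Return (rating, lon, lat) for one address string, or None if the
    geocoder finds no match."""
    with conn.cursor() as cur:
        cur.execute(GEOCODE_SQL, (address,))
        return cur.fetchone()
```

Lower `rating` values indicate better matches, so keeping only the first row of `geocode()` output is a reasonable default.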
  
12/1/2016 13:00-18:00: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Built a tool for the [http://mcnair.bakerinstitute.org/wiki/E%26I_Governance_Policy_Report E&I Governance Report Project] with Christy that adds a column of data showing whether or not the bill has been passed.

2017-10-23: Finished the Yelp crawler for the [http://www.edegan.com/wiki/Houston_Innovation_District Houston Innovation District Project].

12/2/2016 12:00-14:00: Built and ran a web crawler for the Center for Middle East Studies on Kuwait. Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project].

2017-10-19: Continued work on the Yelp crawler for the [http://www.edegan.com/wiki/Houston_Innovation_District Houston Innovation District Project].

12/6/2016 15:00-18:00: Learned how to use git. Committed software projects from the semester to the McNair git repository. Projects can be found at: [http://mcnair.bakerinstitute.org/wiki/E%26I_Governance_Policy_Report Executive Order Crawler], [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Foreign Government Web Crawlers], [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) F6S Crawler and Parser].

2017-10-18: Continued work on the Yelp crawler for the [http://www.edegan.com/wiki/Houston_Innovation_District Houston Innovation District Project].

12/7/2016 15:00-18:00: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project].

2017-10-17: Constructed ArcGIS maps for the agglomeration project. Finished maps of points for every year in the state of California. Finished maps of Route 128. Began working on a Selenium Yelp crawler to get cafe locations within the 610 Loop.

12/8/2016 14:00-18:00: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project].

2017-10-16: Assisted Harrison on the USITC project. Looked for natural language processing tools to extract complainants and defendants, along with their locations, from case files. Experimented with pulling based on part-of-speech tags, as well as using geotext or geograpy to pull locations from a case segment.

2017-10-13: Updated various project wiki pages.
  
<font size="5">'''Spring 2017'''</font>

2017-10-12: Continued work on Patent Thicket project, awaiting further project specs.

2017-10-05: Emergency ArcGIS creation for Agglomeration project.

1/10/2017 14:30-17:15: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].

2017-10-04: Emergency ArcGIS creation for Agglomeration project.

1/11/2017 10:00-12:00: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].

2017-10-02: Worked on ArcGIS data. See Harrison's Work Log for the details.

1/12/2017 14:30-17:45: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].

2017-09-28: Added collaborative editing feature to PyCharm.

1/17/2017 14:30-17:15: Continued making text files for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].

2017-09-27: Worked on big database file.

1/18/2017 10:00-12:00: Downloaded pdfs in the background for the [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].
2017-09-25: New task -- create a text file with company, description, and company type.
#[http://www.edegan.com/wiki/VC_Database_Rebuild VC Database Rebuild]
#psql vcdb2
#table name, sdccompanybasecore2
#Combine with Crunchbasebulk
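A sketch of the export step this task describes, assuming a psycopg2-style connection to vcdb2; the column names are assumptions, so check the actual schema of sdccompanybasecore2 first:

```python
# Assumed column names; verify with \d sdccompanybasecore2 in psql vcdb2.
EXPORT_SQL = (
    "SELECT companyname, description, companytype "
    "FROM sdccompanybasecore2;"
)


def rows_to_text(rows):
    """Render (company, description, type) rows as tab-separated lines,
    with NULLs written as empty fields."""
    return "\n".join(
        "\t".join("" if field is None else str(field) for field in row)
        for row in rows
    )


def export(conn, path):
    """Query the database and write the tab-separated text file."""
    with conn.cursor() as cur:
        cur.execute(EXPORT_SQL)
        rows = cur.fetchall()
    with open(path, "w", encoding="utf-8") as f:
        f.write(rows_to_text(rows))
```

Tab separation avoids the quoting problems that commas cause in free-text descriptions.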
 
#TODO: Write wiki on LinkedIn crawler, write wiki on creating accounts.

2017-09-21: Wrote wiki on the LinkedIn crawler, met with Laura about the patents project.

2017-09-20: Finished running the LinkedIn crawler. Transferred data to the RDP. Will write wikis next.

2017-09-19: Began running the LinkedIn crawler. Helped Yang create an RDP account, get permissions, and get the wiki set up.

2017-09-18: Finished implementation of the Experience Crawler, continued working on the Education Crawler for LinkedIn.

2017-09-14: Continued implementing the LinkedIn Crawler for profiles.

2017-09-13: Implemented the LinkedIn Crawler for the main portion of profiles. Began working on crawling the Experience section of profiles.

2017-09-12: Continued working on the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler for Accelerator Founders Data]. Added to the wiki on this topic.

2017-09-11: Continued working on the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler for Accelerator Founders Data].

2017-09-06: Combined founders data retrieved with the Crunchbase API with the crunchbasebulk data to get LinkedIn URLs for different accelerator founders. For more information, see [http://www.edegan.com/wiki/Crunchbase_Data here].

2017-09-05: Post-Harvey. Finished retrieving founder names from the Crunchbase API. Next step is to query the crunchbasebulk database to get LinkedIn URLs. For more information, see [http://www.edegan.com/wiki/Crunchbase_Data here].

2017-08-24: Began using the Crunchbase API to retrieve founder information for accelerators. Halfway through compiling a dictionary that translates accelerator names into proper Crunchbase API URLs.
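The name-to-URL dictionary can be seeded with a slug guess like the following; the endpoint constant and the slug rule are assumptions (real Crunchbase permalinks sometimes differ, which is why hand-checking was needed):

```python
import re

# Hypothetical endpoint prefix; the real API version and key handling
# belong in the project's config.
CRUNCHBASE_ORGS = "https://api.crunchbase.com/v3.1/organizations/"


def accelerator_api_url(name):
    """Guess a Crunchbase permalink-style URL from an accelerator name:
    lowercase, runs of non-alphanumerics collapsed to single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return CRUNCHBASE_ORGS + slug
```

Entries whose guessed slug returns a 404 get a hand-written override in the dictionary.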
2017-08-23: Decided with Ed to abandon LinkedIn crawling for retrieving accelerator founder data, and instead use Crunchbase. Spent the day navigating the crunchbasebulk database and seeing what useful information it contains.

2017-08-22: Discovered that LinkedIn profiles cannot be viewed through LinkedIn if the target is 3rd degree or further. However, if entering LinkedIn through a Google search, the profile can still be viewed if the user has previously logged into LinkedIn. Devising a workaround crawler that utilizes Google search. Continued the blog post [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) here] under Section 4.

2017-08-21: Began work on extracting founders for accelerators through the LinkedIn Crawler. Discovered that Python 3 is not installed on the RDP, so the virtual environment for the project cannot be fired up. Continued working on an Ubuntu machine.

</onlyinclude>
===Spring 2017===

2017-05-01: Continued work on the HTML Parser. Uploaded all semester projects to the git server.

2017-04-20: Finished the HTML Parser for the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler]. Ran the HTML parser on accelerator founders. Data is stored in projects/accelerators/LinkedIn Founder Data.

2017-04-19: Made updates to the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler] wiki page. Ran the LinkedIn Crawler on accelerator data. Working on an HTML parser for the results from the LinkedIn Crawler.

2017-04-18: Ran the LinkedIn Crawler on matches between the Crunchbase Snapshot and the accelerator data.

2017-04-17: Worked on ways to get correct search results from the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler]. Worked on an HTML Parser for the results from the LinkedIn Crawler.

2017-04-13: Worked on debugging the logout procedure for the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler]. Began formulating a process to search for founders of startups using a combination of the LinkedIn Crawler and the data resources from the [http://www.edegan.com/wiki/Crunchbase_2013_Snapshot CrunchBase Snapshot].

2017-04-12: Worked on bugs with the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler].

2017-04-11: Completed a functional [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) crawler of LinkedIn Recruiter Pro]. Basic search functions work and download profile information for a given person.

2017-04-10: Began writing a functioning crawler of LinkedIn.

2017-04-06: Continued debugging and documenting the [http://www.edegan.com/wiki/LinkedIn_Crawler_(Python) LinkedIn Crawler]. Wrote a test program that logs in, searches for a query, navigates through search pages, and logs out. The Recruiter program can now log in and search.

2017-04-05: Began work on the LinkedIn Crawler. Researched launching a Python virtual environment.
  
1/19/2017 14:30-17:45: Downloaded pdfs in the background for the [http://mcnair.bakerinstitute.org/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project]. Created a parser for the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project], completed creation of the final data set (yay!). Began working on the cohort parser.

2017-04-03: Finished debugging points for the Enclosing Circle Algorithm. Added command-line functionality to the Industry Classifier.

1/23/2017 10:00-12:00: Worked on the parser for cohort data of the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Preliminary code is written; working on debugging.

2017-03-29: Worked on debugging points for the Enclosing Circle Algorithm.

1/24/2017 14:30-17:15: Worked on the parser for cohort data of the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. The cohort data file was created and debugging is almost complete. Will begin work on the Google accelerator search soon.

2017-03-28: Finished running the Enclosing Circle Algorithm. Worked on removing incorrect points from the data set (see above).

1/25/2017 10:00-12:00: Finished the parser for cohort data of the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Some data files still need proofreading, as they are not in an acceptable format. Began working on the Google sitesearch project.

2017-03-27: Worked on debugging the Enclosing Circle Algorithm. Implemented a way to remove interior circles, and determined that translation to latitude and longitude coordinates resulted in slightly off-center circles.

1/26/2017 14:30-17:45: Continued working on the Google sitesearch project. Discovered Crunchbase, changed project priorities: (1) split the accelerator data up by flag; (2) use Crunchbase to get web URLs for cohorts; (3) make an Internet Archive Wayback Machine driver. Located the [http://mcnair.bakerinstitute.org/wiki/Whois_Parser Whois Parser].

2017-03-23: Finished debugging the brute force algorithm for the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm]. Implemented a method to plot the points and circles on a graph. Analyzed the runtime of the brute force algorithm.

1/30/2017 10:00-12:00: Optimized the enclosing circle algorithm through memoization. Developed a script to read addresses from the accelerator data and return latitude and longitude coordinates.
2017-03-21: Coded a brute force algorithm for the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm].
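The brute-force idea rests on the fact that a minimal enclosing circle is determined either by two points (as a diameter) or by three points (their circumcircle), so trying all pairs and triples and keeping the smallest covering circle is correct, if slow (roughly O(n^4)). A sketch of that approach, not the project's actual implementation:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+


def _circle_two(a, b):
    """Circle with segment ab as diameter: (center, radius)."""
    center = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    return center, dist(a, b) / 2


def _circle_three(a, b, c):
    """Circumcircle of three points, or None if they are collinear."""
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    if d == 0:
        return None
    ux = ((a[0]**2 + a[1]**2) * (b[1] - c[1]) + (b[0]**2 + b[1]**2) * (c[1] - a[1])
          + (c[0]**2 + c[1]**2) * (a[1] - b[1])) / d
    uy = ((a[0]**2 + a[1]**2) * (c[0] - b[0]) + (b[0]**2 + b[1]**2) * (a[0] - c[0])
          + (c[0]**2 + c[1]**2) * (b[0] - a[0])) / d
    center = (ux, uy)
    return center, dist(center, a)


def enclosing_circle(points):
    """Brute force: smallest circle from all pairs (diameters) and
    triples (circumcircles) that covers every point, or None for
    fewer than two points."""
    eps = 1e-9  # tolerance for points on the boundary
    candidates = [_circle_two(p, q) for p, q in combinations(points, 2)]
    candidates += [c for t in combinations(points, 3)
                   if (c := _circle_three(*t)) is not None]
    best = None
    for center, r in candidates:
        if all(dist(center, p) <= r + eps for p in points):
            if best is None or r < best[1]:
                best = (center, r)
    return best
```

Welzl's randomized algorithm does the same job in expected linear time, which is why the brute force version is mainly useful as a correctness check.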
  
1/31/2017 14:30-17:15: Built the WayBack Machine Crawler. Updated documentation for the coordinates script. Updated profile page to include locations of code.

2017-03-20: Worked on debugging the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm].

2/1/2017 10:00-12:00:

Notes from Session with Ed: Project on US university patenting and entrepreneurship programs (writing code to identify universities in assignees); search Wikipedia (XML, then bulk download) for student population, faculty population, etc.

2017-03-09: Continued running the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] on the Top 50 Cities. Finished a script to draw Enclosing Circles on a Google Map.

2017-03-08: Continued running the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] on the Top 50 Cities. Created a script to draw the outcome of the Enclosing Circle Algorithm on Google Maps.

The circle project for VC data will end up being a joint project with the accelerator data.

Pull descriptions for VC. Founders of accelerators are on LinkedIn. The crawler cannot get caught by LinkedIn (pretend to not be a bot). Can eventually get academic backgrounds through LinkedIn.

2017-03-07: Redetermined the top 50 cities which Enclosing Circle should be run on. Data on the [http://www.edegan.com/wiki/Top_Cities_for_VC_Backed_Companies Top 50 Cities for VC Backed Companies can be found here.] Ran the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] on the Top 50 Cities.

Pull business registration data (Stern/Guzman Algorithm).

GIS on top of geocoded data.

Maps that work on the wiki or blog (CartoDB, Maps API, and R).

NLP projects: Description Classifier.
  
2/2/2017 14:30-15:45: Out sick; independent research and work from the RDP. Brief research into the [http://jorgeg.scripts.mit.edu/homepage/wp-content/uploads/2016/03/Guzman-Stern-State-of-American-Entrepreneurship-FINAL.pdf Stern-Guzman algorithm]. Research into [http://mcnair.bakerinstitute.org/wiki/interactive_maps Interactive Maps]. No helpful additions to the map embedding problem.

2017-03-06: Ran a script to determine the top 50 cities which Enclosing Circle should be run on. Fixed the VC Circles script to take in a new data format.

2/7/2017 14:30-17:15: Fixed bugs in parse_cohort_data.py, the script for parsing the cohort data from the [http://mcnair.bakerinstitute.org/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Added descriptive statistics to the cohort data Excel file.

2017-03-02: Cleaned up data for the VC Circles Project. Created a histogram of the data in Excel. See the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Began work on the [http://www.edegan.com/wiki/LinkedInCrawlerPython LinkedIn Crawler].

2/8/2017 10:00-12:00: Worked on the Neural Net for the [http://mcnair.bakerinstitute.org/wiki/Industry_Classifier Industry Classifier Project].

2017-03-01: Created statistics for the VC Circles Project.

2/13/2017 10:00-12:00: Worked on the Neural Net for the [http://mcnair.bakerinstitute.org/wiki/Industry_Classifier Industry Classifier Project].

2017-02-28: Finished downloading geocoded data for VC Data as part of the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Found a bug in the Enclosing Circle Algorithm.

2/14/2017 14:30-17:15: Worked on the application of the Enclosing Circle algorithm to the VC study. Working on bug fixes in the Enclosing Circle algorithm. Created a wiki page for the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm].

2017-02-27: Continued to download geocoded data for VC Data as part of the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Assisted work on the [http://www.edegan.com/wiki/Industry_Classifier Industry Classifier].

2/15/2017 10:00-12:00: Finished the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] applied to the VC study. The Enclosing Circle algorithm still needs adjustment, but the program runs with the temporary fixes.

2017-02-23: Continued to download geocoded data for VC Data as part of the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Installed a C++ compiler for Python. Ran tests on the difference between Python and C-wrapped Python.

2/16/2017 14:30-17:45: Reworked the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] to create a file of geocoded data. Began work on wrapping the algorithm in C to improve speed.

2017-02-22: Continued to download geocoded data for VC Data as part of the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Helped out with the [http://www.edegan.com/wiki/Industry_Classifier Industry Classifier Project].

2/20/2017 10:00-12:00: Continued to download geocoded data for VC Data as part of the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Assisted work on the [http://mcnair.bakerinstitute.org/wiki/Industry_Classifier Industry Classifier].

2017-02-21: Continued to download geocoded data for VC Data as part of the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Researched C++ compilers for Python so that the Enclosing Circle Algorithm could be wrapped in C. Found a recommended one [https://www.microsoft.com/en-us/download/details.aspx?id=44266 here].

2/21/2017 14:30-17:15: Continued to download geocoded data for VC Data as part of the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Researched C++ compilers for Python so that the Enclosing Circle Algorithm could be wrapped in C. Found a recommended one [https://www.microsoft.com/en-us/download/details.aspx?id=44266 here].
+
2017-02-20: Continued to download geocoded data for VC Data as part of the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Assisted work on the [http://www.edegan.com/wiki/Industry_Classifier Industry Classifier].
  
2/22/2017 10:00-12:00: Continued to download geocoded data for VC Data as part of the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Helped out with [http://mcnair.bakerinstitute.org/wiki/Industry_Classifier Industry Classifier Project].
+
2017-02-16: Reworked [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] to create a file of geocoded data. Began work on wrapping the algorithm in C to improve speed.
  
2/23/2017 14:30-17:45: Continued to download geocoded data for VC Data as part of the [http://mcnair.bakerinstitute.org/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] Project. Installed C++ Compiler for Python. Ran tests on difference between Python and C wrapped Python.
+
2017-02-15: Finished [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm] applied to the VC study. Enclosing Circle algorithm still needs adjustment, but the program runs with the temporary fixes.
  
2017-02-14: Worked on the application of the Enclosing Circle algorithm to the VC study. Working on bug fixes in the Enclosing Circle algorithm. Created wiki page for the [http://www.edegan.com/wiki/Enclosing_Circle_Algorithm Enclosing Circle Algorithm].
  
2017-02-13: Worked on Neural Net for the [http://www.edegan.com/wiki/Industry_Classifier Industry Classifier Project].
  
2017-02-08: Worked on Neural Net for the [http://www.edegan.com/wiki/Industry_Classifier Industry Classifier Project].
  
2017-02-07: Fixed bugs in parse_cohort_data.py, the script for parsing the cohort data from the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Added descriptive statistics to cohort data excel file.
  
2017-02-02: Out sick, independent research and work from RDP. Brief research into the [http://jorgeg.scripts.mit.edu/homepage/wp-content/uploads/2016/03/Guzman-Stern-State-of-American-Entrepreneurship-FINAL.pdf Stern-Guzman algorithm]. Research into [http://www.edegan.com/wiki/interactive_maps Interactive Maps]. No helpful additions to map embedding problem.
  
2017-02-01: Notes from Session with Ed: Project on US university patenting and entrepreneurship programs (writing code to identify universities in assignees), search Wikipedia (XML then bulk download), student pop, faculty pop, etc.
Circle project for VC data will end up being a joint project to join accelerator data.  
Pull descriptions for VC. Founders of accelerators on LinkedIn. LinkedIn cannot be caught (pretend not to be a bot). Can eventually get academic backgrounds through LinkedIn.
Pull business registration data, Stern/Guzman Algorithm.
GIS on top of geocoded data.
Maps that work on the wiki or blog (CartoDB), Maps API and R.
NLP Projects, Description Classifier.
  
2017-01-31: Built WayBack Machine Crawler. Updated documentation for coordinates script. Updated profile page to include locations of code.
  
2017-01-30: Optimized enclosing circle algorithm through memoization. Developed script to read addresses from accelerator data and return latitude and longitude coordinates.
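The memoization idea from this entry, sketched minimally with `functools.lru_cache` (the point data and function names are illustrative, not the project's actual code):

```python
from functools import lru_cache

# Hypothetical point set; the real script geocoded (lat, lon) pairs
# from the accelerator address data.
POINTS = ((29.7604, -95.3698), (30.2672, -97.7431), (32.7767, -96.7970))

@lru_cache(maxsize=None)
def dist(i, j):
    """Distance between points i and j, cached so that repeated subset
    evaluations inside the enclosing-circle search never recompute a pair."""
    (x1, y1), (x2, y2) = POINTS[i], POINTS[j]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
```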
  
2017-01-26: Continued working on the Google sitesearch project. Discovered Crunchbase, which changed project priorities: priority 1, split accelerator data up by flag; priority 2, use Crunchbase to get web URLs for cohorts; priority 3, make an Internet Archive Wayback Machine driver. Located the [http://www.edegan.com/wiki/Whois_Parser Whois Parser].
  
2017-01-25: Finished parser for cohort data of the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Some data files still need proofreading as they are not in an acceptable format. Began working on Google sitesearch project.
  
2017-01-24: Worked on parser for cohort data of the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Cohort data file created, debugging is almost complete. Will begin work on the google accelerator search soon.
  
2017-01-23: Worked on parser for cohort data of the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Preliminary code is written, working on debugging.
  
2017-01-19: Downloaded pdfs in the background for the [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project]. Created parser for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project], completed creation of the final data set (yay!). Began working on the cohort parser.
  
2017-01-18: Downloaded pdfs in the background for the [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].
  
2017-01-13: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].
  
2017-01-12: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].
  
2017-01-11: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].
  
2017-01-10: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Downloaded pdfs in the background for the [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Government Crawler Project].
  
===Fall 2016===
  
2016-12-08: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project].
  
2016-12-07: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project].
  
2016-12-06: Learned how to use git. Committed software projects from the semester to the McNair git repository. Projects can be found at: [http://www.edegan.com/wiki/E%26I_Governance_Policy_Report Executive Order Crawler], [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Foreign Government Web Crawlers], [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) F6S Crawler and Parser].
  
2016-12-02: Built and ran web crawler for Center for Middle East Studies on Kuwait. Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project].
  
2016-12-01: Continued making text files for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Seed List project]. Built tool for the [http://www.edegan.com/wiki/E%26I_Governance_Policy_Report E&I Governance Report Project] with Christy. Adds a column of data that shows whether or not the bill has been passed.
  
2016-11-29: Began pulling data from the accelerators listed [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) here]. Made text files for about 18 accelerators.
  
2016-11-22: Transferred downloaded Morocco Written Bills to provided SeaGate Drive. Made a "gentle" F6S crawler to retrieve HTMLs of possible accelerator pages documented [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) here].
  
2016-11-18: Converted Executive Order PDFs to text files using adobe acrobat DC. See [http://www.edegan.com/wiki/E%26I_Governance_Policy_Report Wikipage] for details.
  
2016-11-17: Wrote a crawler to retrieve information about executive orders, and their corresponding pdfs. They can be found [http://www.edegan.com/wiki/E%26I_Governance_Policy_Report here.] Next step is to run code to convert the pdfs to text files, then use the parser fixed by Christy.  
  
2016-11-15: Finished download of Moroccan Written Question pdfs. Wrote a parser with Christy to be used for parsing bills from Congress and eventually executive orders. Found bug in the system Python that was worked out and rebooted.
  
2016-11-11: Continued to download Moroccan data in the background. Attempted to find bug fixes for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Project] crawler.
  
2016-11-10: Continued to download Moroccan data and Kuwait data in the background. Began work on [http://www.edegan.com/wiki/Google_Scholar_Crawler Google Scholar Crawler]. Wrote a crawler for the [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator Project] to get the HTML files of hundreds of accelerators. The crawler ended up failing; it appears to have been due to HTTPS.  
  
2016-11-08: Continued to download Moroccan data in the background. Finished writing code for the embedded files on the Kuwait Site. Spent time debugging the frame errors due to the dynamically generated content. Never found an answer to the bug, and instead found a workaround that sacrificed run time for the ability to work. [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]
  
2016-11-04: Continued to download Moroccan data in the background. Finished writing initial Kuwait Web Crawler/Driver for the Middle East Studies Department. Middle East Studies Department asked for additional embedded files in the Kuwait website. [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]
  
2016-11-03: Continued to download Moroccan data in the background. Dr. Egan fixed systems requirements to run the GovTrack Web Crawler. Made significant progress on the Kuwait Web Crawler/Driver for the Middle East Studies Department.  
  
2016-11-01: Continued to download Moroccan data in the background. Went over code for GovTracker Web Crawler, continued learning Perl. [http://www.edegan.com/wiki/Govtrack_Webcrawler_(Wiki_Page) GovTrack Web Crawler] Began Kuwait Web Crawler/Driver.
  
2016-10-21: Continued to download data for the Moroccan Parliament Written and Oral Questions. Looked over [http://www.edegan.com/wiki/Christy_Warden_(Twitter_Crawler_Application_1) Christy's Twitter Crawler] to see how I can be helpful. Dr. Egan asked me to think about how to potentially make multiple tools to get cohorts and other sorts of data from accelerator sites. See [http://www.edegan.com/wiki/Accelerator_Seed_List_(Data) Accelerator List] He also asked me to look at the [http://www.edegan.com/wiki/Govtrack_Webcrawler_(Wiki_Page) GovTrack Web Crawler] for potential ideas on how to bring this project to fruition.
  
2016-10-20: Continued to download data for the Moroccan Parliament Written and Oral Questions. Updated Wiki page. Started working on Twitter project with Christy. [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]
  
2016-10-18: Finished code for Oral Questions web driver and Written Questions web driver using selenium. Now, the data for the dates of questions can be found using the crawler, and the pdfs of the questions will be downloaded using selenium. [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]
  
2016-10-14: Finished Oral Questions crawler. Finished Written Questions crawler. Waiting for further details on whether that data needs to be tweaked in any way. Updated the Moroccan Web Driver/Web Crawler wiki page. [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]
  
2016-10-13: Completed download of Moroccan Bills. Working on either a web driver screenshot approach or a webcrawler approach to download the  Moroccan oral and written questions data. Began building Web Crawler for Oral and Written Questions site. Edited Moroccan Web Driver/Crawler wiki page. [http://www.edegan.com/wiki/Moroccan_Parliament_Web_Crawler Moroccan Web Driver]
  
2016-10-11: Fixed Arabic bug; files can now be saved with Arabic titles. Monarchy bills downloaded and ready for shipment. House of Representatives bills mostly downloaded, ratified bills prepared for download. Started learning the scrapy library in Python for web scraping. Discussed idea of screenshotting questions instead of scraping.
  
2016-10-07: Learned Unicode and UTF-8 encoding and decoding for Arabic. Still working on transforming an ASCII URL into printable Unicode.
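The ASCII-URL-to-printable-Unicode step amounts to percent-decoding UTF-8 bytes, which the standard library handles directly. A minimal sketch (the URL and function name are illustrative, not the project's actual code):

```python
from urllib.parse import unquote

def readable_name(url):
    """Percent-decode the last path segment of an ASCII URL into printable
    Unicode (e.g. an Arabic bill title), suitable for use as a filename."""
    # unquote interprets %XX escapes as UTF-8 bytes by default.
    return unquote(url.rsplit("/", 1)[-1])
```

Used on a hypothetical link such as `http://example.com/%D9%85%D8%B1%D8%AD%D8%A8%D8%A7`, this yields the Arabic word the escapes encode.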
  
2016-10-06: Discussed with Dr. Elbadawy the desired file names for the Moroccan data download. The consensus was that the bill programs are ready to launch once the files can be named properly, and the questions data must be retrieved using a web crawler, which I need to learn how to implement. The naming of files is currently drawing errors in going from Arabic, to URL, to download, to filename. Debugging is in process. Also built a demo Selenium program for Dr. Egan that drives the McNair blog site on an infinite loop.
  
2016-10-03: Moroccan Web Driver projects completed for driving of the Monarchy proposed bills, the House of Representatives proposed bills, and the Ratified bills sites. Began devising a naming system for the files that does not require scraping. Tinkered with naming through regular expression parsing of the URL. Structure for the oral questions and written questions drivers is set up, but needs fixes due to the differences in the sites. Fixed bug on the McNair wiki for the women's biz team where an email was plain text instead of an email link. Took a glimpse at the Kuwait Parliament website, and it appears to be very different from the Moroccan setup.
  
2016-09-30: Selenium program selects view pdf option from the website, and goes to the pdf webpage. Program then switches handle to the new page. CTRL S is sent to the page to launch save dialog window. Text cannot be sent to this window. Brainstorm ways around this issue. Explored Chrome Options for saving automatically without a dialog window. Looking into other libraries besides selenium that may help.
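The save-dialog dead end above is usually sidestepped by configuring Chrome to download without prompting. A sketch of the relevant preference keys (the directory is a placeholder; the Selenium wiring is shown only in comments, since the actual script isn't recorded here):

```python
def silent_download_prefs(download_dir):
    """Chrome preference keys that download files without a save dialog."""
    return {
        "download.default_directory": download_dir,
        "download.prompt_for_download": False,        # no save dialog window
        "download.directory_upgrade": True,
        # Download PDFs instead of opening Chrome's built-in viewer.
        "plugins.always_open_pdf_externally": True,
    }

# With Selenium this would be wired up roughly as:
#   options = webdriver.ChromeOptions()
#   options.add_experimental_option("prefs", silent_download_prefs(r"E:\downloads"))
#   driver = webdriver.Chrome(options=options)
```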
  
2016-09-29: Re-enroll in Microsoft Remote Desktop with proper authentication, set up Selenium environment and Komodo IDE on Remote Desktop, wrote program using Selenium that goes to a link and opens up the print dialog box. Developed computational recipe for a different approach to the problem.
  
2016-09-26: Set up Staff wiki page, work log page; registered for Slack, Microsoft Remote Desktop; downloaded Selenium on personal computer, read Selenium docs. Created wiki page for Moroccan Web Driver Project.
  
==Notes==
  
 
*Ed moved the Morocco Data to E:\McNair\Projects from C:\Users\PeterJ\Documents
 

Latest revision as of 17:51, 20 May 2019

Fall 2017

Peter Jalbert Work Logs (log page)

2017-12-21: Last minute adjustments to the Moroccan Data. Continued working on Selenium Documentation.

2017-12-20: Working on Selenium Documentation. Wrote 2 demo files. Wiki Page is available here. Created 3 spreadsheets for the Moroccan data.

2017-12-19: Finished fixing the Demo Day Crawler. Changed and installed files as appropriate to make the LinkedIn crawler compatible with the RDP. Removed some of the bells and whistles.

2017-12-18: Continued finding errors with the Demo Day Crawler analysis. Rewrote the parser to remove any search terms that were in the top 10000 most common English words according to Google. Finished uploading and submitting Moroccan data.
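The common-word filter described here can be sketched as a set-membership pass (the word list is stubbed in the test; the real run used Google's top-10,000 English words):

```python
def filter_terms(terms, common_words):
    """Drop search terms that appear in a common-words list, keeping only
    distinctive terms (e.g. accelerator or company names)."""
    common = {w.lower() for w in common_words}
    return [t for t in terms if t.lower() not in common]
```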

2017-12-15: Found errors with the Demo Day Crawler. Fixed scripts to download Moroccan Law Data.

2017-12-14: Uploading Morocco Parliament Written Questions. Creating script for next Morocco Parliament download. Begin writing Selenium documentation. Continuing to download TIGER data.

2017-12-06: Running Morocco Parliament Written Questions script. Analyzing Demo Day Crawler results. Continued downloading for TIGER geocoder.

2017-11-28: Debugging Morocco Parliament Crawler. Running Demo Day Crawler for all accelerators and 10 pages per accelerator. TIGER geocoder is back to Forbidden Error.

2017-11-27: Rerunning Morocco Parliament Crawler. Fixed KeyTerms.py and running it again. Continued downloading for TIGER geocoder.

2017-11-20: Continued running Demo Day Page Parser. Fixed KeyTerms.py and trying to run it again. Forbidden Error continues with the TIGER Geocoder. Began Image download for Image Classification on cohort pages. Clarifying specs for Morocco Parliament crawler.

2017-11-16: Continued running Demo Day Page Parser. Fixed KeyTerms.py and trying to run it again. Forbidden Error continues with the TIGER Geocoder. Began Image download for Image Classification on cohort pages. Clarifying specs for Morocco Parliament crawler.

2017-11-15: Continued running Demo Day Page Parser. Wrote a script to extract counts that were greater than 2 from Keyword Matcher. Continued downloading for TIGER Geocoder. Finished re-formatting work logs.

2017-11-14: Continued running Demo Day Page Parser. Wrote an HTML to Text parser. See Parser Demo Day Page for file location. Continued downloading for TIGER Geocoder.
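An HTML-to-text pass of the kind described can be written with the standard library alone. A minimal sketch (not the actual parser, whose location is on the Parser Demo Day Page):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects text nodes from an HTML document, skipping script/style."""
    def __init__(self):
        super().__init__()
        self._skip = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html):
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```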

2017-11-13: Built Demo Day Page Parser.

2017-11-09: Running demo version of Demo Day crawler (Accelerator Google Crawler). Fixing work log format.

2017-11-07: Created file with 0s and 1s detailing whether crunchbase has the founder information for an accelerator. Details posted as a TODO on Accelerator Seed List page. Still waiting for feedback on the PostGIS installation from Tiger Geocoder. Continued working on Accelerator Google Crawler.

2017-11-06: Contacted Geography Center for the US Census Bureau, here, and began email exchange on PostGIS installation problems. Began working on the Selenium Documentation. Also began working on an Accelerator Google Crawler that will be used with Yang and ML to find Demo Days for cohort companies.

2017-11-01: Attempted to continue downloading, but ran into HTTP Forbidden errors. Listed the errors on the Tiger Geocoder Page.

2017-10-31: Began downloading blocks of data for individual states for the Tiger Geocoder project. Wrote out the new wiki page for installation, and beginning to write documentation on usage.

2017-10-30: With Ed's help, was able to get the national data from Tiger installed onto a database server. The process required much jumping around and changing users, and all the things we learned are outlined in the database server documentation under "Editing Users".

2017-10-25: Continued working on the TigerCoder Installation.

2017-10-24: Throw some addresses into a database, use address normalizer and geocoder. May need to install things. Details on the installation process can be found on the PostGIS Installation page.

2017-10-23: Finished Yelp crawler for Houston Innovation District Project.

2017-10-19: Continued work on Yelp crawler for Houston Innovation District Project.

2017-10-18: Continued work on Yelp crawler for Innovation District Project.

2017-10-17: Constructed ArcGIS maps for the agglomeration project. Finished maps of points for every year in the state of California. Finished maps of Route 128. Began working on a Selenium Yelp crawler to get cafe locations within the 610 Loop.

2017-10-16: Assisted Harrison on the USITC project. Looked for natural language processing tools to extract complainants and defendants along with their locations from case files. Experimented with pulling based on part-of-speech tags, as well as using geotext or geograpy to pull locations from a case segment.

2017-10-13: Updated various project wiki pages.

2017-10-12: Continued work on Patent Thicket project, awaiting further project specs.

2017-10-05: Emergency ArcGIS creation for Agglomeration project.

2017-10-04: Emergency ArcGIS creation for Agglomeration project.

2017-10-02: Worked on ArcGIS data. See Harrison's Work Log for the details.

2017-09-28: Added collaborative editing feature to PyCharm.

2017-09-27: Worked on big database file.

2017-09-25: New task -- Create text file with company, description, and company type.

  1. VC Database Rebuild
  2. psql vcdb2
  3. table name, sdccompanybasecore2
  4. Combine with Crunchbasebulk

  TODO: Write wiki on linkedin crawler, write wiki on creating accounts.

2017-09-21: Wrote wiki on Linkedin crawler, met with Laura about patents project.

2017-09-20: Finished running linkedin crawler. Transferred data to RDP. Will write wikis next.

2017-09-19: Began running linkedin crawler. Helped Yang create RDP account, get permissions, and get wiki setup.

2017-09-18: Finished implementation of Experience Crawler, continued working on Education Crawler for LinkedIn.

2017-09-14: Continued implementing LinkedIn Crawler for profiles.

2017-09-13: Implemented LinkedIn Crawler for main portion of profiles. Began working on crawling Experience section of profiles.

2017-09-12: Continued working on the LinkedIn Crawler for Accelerator Founders Data. Added to the wiki on this topic.

2017-09-11: Continued working on the LinkedIn Crawler for Accelerator Founders Data.

2017-09-06: Combined founders data retrieved with the Crunchbase API with the crunchbasebulk data to get linkedin urls for different accelerator founders. For more information, see here.

2017-09-05: Post Harvey. Finished retrieving names from the Crunchbase API on founders. Next step is to query crunchbase bulk database to get linkedin urls. For more information, see here.

2017-08-24: Began using the Crunchbase API to retrieve founder information for accelerators. Halfway through compiling a dictionary that translates accelerator names into proper Crunchbase API URLs.
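The name-to-URL dictionary could be built with a simple slug rule. A sketch (the endpoint format is an assumption about the Crunchbase v3 API, and real Crunchbase permalinks do not always follow a mechanical rule, which is why the real dictionary needed hand-compiling):

```python
import re

# Assumed endpoint shape; the actual API version and path may differ.
API_BASE = "https://api.crunchbase.com/v3.1/organizations/"

def to_api_url(name):
    """Guess a Crunchbase-style permalink from an accelerator name:
    lowercase, keep alphanumerics, collapse everything else to hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")
    return API_BASE + slug

accelerator_urls = {n: to_api_url(n) for n in ["Y Combinator", "500 Startups"]}
```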

2017-08-23: Decided with Ed to abandon LinkedIn crawling to retrieve accelerator founder data, and instead use crunchbase. Spent the day navigating the crunchbasebulk database, and seeing what useful information was contained in it.

2017-08-22: Discovered that LinkedIn Profiles cannot be viewed through LinkedIn if the target is 3rd degree or further. However, if entering LinkedIn through a Google search, the profile can still be viewed if the user has previously logged into LinkedIn. Devising a workaround crawler that utilizes Google search. Continued blog post here under Section 4.

2017-08-21: Began work on extracting founders for accelerators through LinkedIn Crawler. Discovered that Python3 is not installed on RDP, so the virtual environment for the project cannot be fired up. Continued working on Ubuntu machine.


Spring 2017

2017-05-01: Continued work on HTML Parser. Uploaded all semester projects to git server.

2017-04-20: Finished the HTML Parser for the LinkedIn Crawler. Ran HTML parser on accelerator founders. Data is stored in projects/accelerators/LinkedIn Founder Data.

2017-04-19: Made updates to the LinkedIn Crawler Wikipage. Ran LinkedIn Crawler on accelerator data. Working on an html parser for the results from the LinkedIn Crawler.

2017-04-18: Ran LinkedIn Crawler on matches between Crunchbase Snapshot and the accelerator data.

2017-04-17: Worked on ways to get correct search results from the LinkedIn Crawler. Worked on an HTML Parser for the results from the LinkedIn Crawler.

2017-04-13: Worked on debugging the logout procedure for the LinkedIn Crawler. Began formulation of process to search for founders of startups using a combination of the LinkedIn Crawler with the data resources from the CrunchBase Snapshot.

2017-04-12: Work on bugs with the LinkedIn Crawler.

2017-04-11: Completed functional crawler of LinkedIn Recruiter Pro. Basic search functions work and download profile information for a given person.

2017-04-10: Began writing functioning crawler of LinkedIn.

2017-04-06: Continued working on debugging and documenting the LinkedIn Crawler. Wrote a test program that logs in, searches for a query, navigates through search pages, and logs out. Recruiter program can now login and search.

2017-04-05: Began work on the LinkedIn Crawler. Researched on launching Python Virtual Environment.

2017-04-03: Finished debugging points for the Enclosing Circle Algorithm. Added Command Line functionality to the Industry Classifier.
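Command-line functionality of the kind added here is typically a thin argparse layer. A sketch (the flag names are hypothetical, not the Industry Classifier's actual interface):

```python
import argparse

def build_parser():
    """Illustrative command-line front end for a classifier script."""
    p = argparse.ArgumentParser(
        description="Classify company descriptions by industry.")
    p.add_argument("input", help="path to a file of company descriptions")
    p.add_argument("-o", "--output", default="classified.txt",
                   help="where to write the predicted labels")
    p.add_argument("--train", action="store_true",
                   help="retrain the model before classifying")
    return p
```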

2017-03-29: Worked on debugging points for the Enclosing Circle Algorithm.

2017-03-28: Finished running the Enclosing Circle Algorithm. Worked on removing incorrect points from the data set (see above).

2017-03-27: Worked on debugging the Enclosing Circle Algorithm. Implemented a way to remove interior circles, and determined that translation to latitude and longitude coordinates resulted in slightly off center circles.

2017-03-23: Finished debugging the brute force algorithm for Enclosing Circle Algorithm. Implemented a method to plot the points and circles on a graph. Analyzed runtime of the brute force algorithm.

2017-03-21: Coded a brute force algorithm for the Enclosing Circle Algorithm.
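The brute-force code itself isn't preserved in the log. The standard brute-force approach for a single minimal enclosing circle relies on the optimal circle being determined by two or three boundary points; a sketch under that assumption (a reconstruction, not the project's file):

```python
import itertools
import math

def circle_from_two(p, q):
    # smallest circle through two points: they form a diameter
    center = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return (center[0], center[1], math.dist(p, q) / 2)

def circle_from_three(p, q, r):
    # circumcircle via the standard determinant formula; None if collinear
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), p))

def contains_all(circle, pts, eps=1e-9):
    cx, cy, r = circle
    return all(math.dist((cx, cy), pt) <= r + eps for pt in pts)

def brute_force_mec(pts):
    """O(n^4) search: try every 2- and 3-point circle, keep the smallest
    one that covers every point."""
    best = None
    for a, b in itertools.combinations(pts, 2):
        c = circle_from_two(a, b)
        if contains_all(c, pts) and (best is None or c[2] < best[2]):
            best = c
    for a, b, c3 in itertools.combinations(pts, 3):
        c = circle_from_three(a, b, c3)
        if c and contains_all(c, pts) and (best is None or c[2] < best[2]):
            best = c
    return best
```

The cubic number of candidate circles times the linear containment check is what makes the runtime analysis mentioned above worthwhile.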

2017-03-20: Worked on debugging the Enclosing Circle Algorithm.

2017-03-09: Continued running Enclosing Circle Algorithm on the Top 50 Cities. Finished script to draw Enclosing Circles on a Google Map.

2017-03-08: Continued running Enclosing Circle Algorithm on the Top 50 Cities. Created script to draw outcome of the Enclosing Circle Algorithm on Google Maps.

2017-03-07: Redetermined the top 50 cities on which the Enclosing Circle Algorithm should be run. Data on the Top 50 Cities for VC Backed Companies can be found here. Ran Enclosing Circle Algorithm on the Top 50 Cities.

2017-03-06: Ran script to determine the top 50 cities on which the Enclosing Circle Algorithm should be run. Fixed the VC Circles script to take in a new data format.

2017-03-02: Cleaned up data for the VC Circles Project. Created histogram of data in Excel. See Enclosing Circle Algorithm Project. Began work on the LinkedIn Crawler.

2017-03-01: Created statistics for the VC Circles Project.

2017-02-28: Finished downloading geocoded data for VC Data as part of the Enclosing Circle Algorithm Project. Found bug in Enclosing Circle Algorithm.

2017-02-27: Continued to download geocoded data for VC Data as part of the Enclosing Circle Algorithm Project. Assisted work on the Industry Classifier.

2017-02-23: Continued to download geocoded data for VC Data as part of the Enclosing Circle Algorithm Project. Installed C++ Compiler for Python. Ran tests on difference between Python and C wrapped Python.

2017-02-22: Continued to download geocoded data for VC Data as part of the Enclosing Circle Algorithm Project. Helped out with Industry Classifier Project.

2017-02-21: Continued to download geocoded data for VC Data as part of the Enclosing Circle Algorithm Project. Researched into C++ Compilers for Python so that the Enclosing Circle Algorithm could be wrapped in C. Found a recommended one here.

2017-02-20: Continued to download geocoded data for VC Data as part of the Enclosing Circle Algorithm Project. Assisted work on the Industry Classifier.

2017-02-16: Reworked Enclosing Circle Algorithm to create a file of geocoded data. Began work on wrapping the algorithm in C to improve speed.

2017-02-15: Finished Enclosing Circle Algorithm applied to the VC study. Enclosing Circle algorithm still needs adjustment, but the program runs with the temporary fixes.

2017-02-14: Worked on the application of the Enclosing Circle algorithm to the VC study. Working on bug fixes in the Enclosing Circle algorithm. Created wiki page for the Enclosing Circle Algorithm.

2017-02-13: Worked on Neural Net for the Industry Classifier Project.

2017-02-08: Worked on Neural Net for the Industry Classifier Project.

2017-02-07: Fixed bugs in parse_cohort_data.py, the script for parsing the cohort data from the Accelerator Seed List project. Added descriptive statistics to cohort data excel file.

2017-02-02: Out sick, independent research and work from RDP. Brief research into the Stern-Guzman algorithm. Research into Interactive Maps. No helpful additions to map embedding problem.

2017-02-01: Notes from session with Ed: Project on US university patenting and entrepreneurship programs (writing code to identify universities in assignees); search Wikipedia (XML bulk download) for student population, faculty population, etc. The circle project for VC data will become a joint project that joins in the accelerator data. Pull descriptions for VC. Find founders of accelerators on LinkedIn; the crawler cannot be caught (pretend not to be a bot); can eventually get academic backgrounds through LinkedIn. Pull business registration data for the Stern/Guzman Algorithm. Layer GIS on top of geocoded data; build maps that work on the wiki or blog (CartoDB, Maps API, and R). NLP projects: Description Classifier.

2017-01-31: Built WayBack Machine Crawler. Updated documentation for coordinates script. Updated profile page to include locations of code.


2017-01-30: Optimized enclosing circle algorithm through memoization. Developed script to read addresses from accelerator data and return latitude and longitude coordinates.
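The log doesn't record exactly which computation was memoized; a minimal sketch of the idea, assuming the optimization cached the covering cost of point subsets that recur across candidate partitions (names and the cheap half-diameter stand-in are hypothetical):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def covering_radius(points):
    """Cost of covering `points` with one circle.

    `points` must be a hashable tuple of (x, y) pairs so results can be
    cached; half the set's diameter is used as a cheap stand-in cost.
    """
    if len(points) < 2:
        return 0.0
    return max(math.dist(p, q) for p in points for q in points) / 2

def best_two_way_split(points):
    # Score every split of the sorted points into a left and right group.
    # The same subsets recur across repeated calls, so lru_cache avoids
    # recomputing their radii.
    pts = tuple(sorted(points))
    return min(
        covering_radius(pts[:i]) + covering_radius(pts[i:])
        for i in range(1, len(pts))
    )
```

Caching only pays off when the partition search revisits subsets, which is exactly the situation in a recursive enclosing-circle decomposition.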

2017-01-26: Continued working on Google sitesearch project. Discovered Crunchbase, which changed the project priorities: (1) split the accelerator data up by flag; (2) use Crunchbase to get web URLs for cohorts; (3) make an Internet Archive Wayback Machine driver. Located Whois Parser.


2017-01-25: Finished parser for cohort data of the Accelerator Seed List project. Some data files still need proofreading as they are not in an acceptable format. Began working on Google sitesearch project.

2017-01-24: Worked on parser for cohort data of the Accelerator Seed List project. Cohort data file created, debugging is almost complete. Will begin work on the google accelerator search soon.

2017-01-23: Worked on parser for cohort data of the Accelerator Seed List project. Preliminary code is written, working on debugging.

2017-01-19: Downloaded pdfs in the background for the Moroccan Government Crawler Project. Created parser for the Accelerator Seed List project, completed creation of final data set (yay!). Began working on cohort parser.

2017-01-18: Downloaded pdfs in the background for the Moroccan Government Crawler Project.

2017-01-13: Continued making text files for the Accelerator Seed List project. Downloaded pdfs in the background for the Moroccan Government Crawler Project.

2017-01-12: Continued making text files for the Accelerator Seed List project. Downloaded pdfs in the background for the Moroccan Government Crawler Project.

2017-01-11: Continued making text files for the Accelerator Seed List project. Downloaded pdfs in the background for the Moroccan Government Crawler Project.

2017-01-10: Continued making text files for the Accelerator Seed List project. Downloaded pdfs in the background for the Moroccan Government Crawler Project.


===Fall 2016===

2016-12-08: Continued making text files for the Accelerator Seed List project.

2016-12-07: Continued making text files for the Accelerator Seed List project.

2016-12-06: Learned how to use git. Committed software projects from the semester to the McNair git repository. Projects committed: Executive Order Crawler, Foreign Government Web Crawlers, F6S Crawler and Parser.

2016-12-02: Built and ran web crawler for Center for Middle East Studies on Kuwait. Continued making text files for the Accelerator Seed List project.

2016-12-01: Continued making text files for the Accelerator Seed List project. Built tool for the E&I Governance Report Project with Christy. The tool adds a column of data indicating whether or not each bill has been passed.
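A minimal sketch of what such a column-adding tool could look like. The `bill_id` column name and the CSV layout are assumptions, not the actual E&I data format:

```python
import csv
import io

def add_passed_column(csv_text, passed_ids):
    """Return csv_text with an extra 'passed' column.

    Each row is flagged True/False depending on whether its bill_id
    appears in the set of passed bills.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames + ["passed"],
                            lineterminator="\n")
    writer.writeheader()
    for row in reader:
        row["passed"] = row["bill_id"] in passed_ids
        writer.writerow(row)
    return out.getvalue()
```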

2016-11-29: Began pulling data from the accelerators listed here. Made text files for about 18 accelerators.

2016-11-22: Transferred downloaded Morocco Written Bills to provided SeaGate Drive. Made a "gentle" F6S crawler to retrieve HTMLs of possible accelerator pages documented here.

2016-11-18: Converted Executive Order PDFs to text files using Adobe Acrobat DC. See the wiki page for details.

2016-11-17: Wrote a crawler to retrieve information about executive orders and their corresponding PDFs. They can be found here. Next step is to run code to convert the PDFs to text files, then use the parser fixed by Christy.

2016-11-15: Finished download of Moroccan Written Question PDFs. Wrote a parser with Christy to be used for parsing bills from Congress and eventually executive orders. Found a bug in the system Python; it was worked out and the machine rebooted.

2016-11-11: Continued to download Moroccan data in the background. Attempted to find bug fixes for the Accelerator Project crawler.

2016-11-10: Continued to download Moroccan data and Kuwait data in the background. Began work on Google Scholar Crawler. Wrote a crawler for the Accelerator Project to get the HTML files of hundreds of accelerators. The crawler ended up failing; it appears to have been due to HTTPS.

2016-11-08: Continued to download Moroccan data in the background. Finished writing code for the embedded files on the Kuwait site. Spent time debugging the frame errors caused by dynamically generated content. Never found the root cause of the bug; instead settled on a workaround that sacrifices run time for reliability. Moroccan Web Driver

2016-11-04: Continued to download Moroccan data in the background. Finished writing initial Kuwait Web Crawler/Driver for the Middle East Studies Department. Middle East Studies Department asked for additional embedded files in the Kuwait website. Moroccan Web Driver

2016-11-03: Continued to download Moroccan data in the background. Dr. Egan fixed systems requirements to run the GovTrack Web Crawler. Made significant progress on the Kuwait Web Crawler/Driver for the Middle East Studies Department.

2016-11-01: Continued to download Moroccan data in the background. Went over code for the GovTrack Web Crawler and continued learning Perl. Began the Kuwait Web Crawler/Driver.

2016-10-21: Continued to download data for the Moroccan Parliament Written and Oral Questions. Looked over Christy's Twitter Crawler to see how I can be helpful. Dr. Egan asked me to think about how to potentially make multiple tools to get cohorts and other sorts of data from accelerator sites; see Accelerator List. He also asked me to look at the GovTrack Web Crawler for potential ideas on how to bring this project to fruition.

2016-10-20: Continued to download data for the Moroccan Parliament Written and Oral Questions. Updated Wiki page. Started working on Twitter project with Christy. Moroccan Web Driver

2016-10-18: Finished code for Oral Questions web driver and Written Questions web driver using selenium. Now, the data for the dates of questions can be found using the crawler, and the pdfs of the questions will be downloaded using selenium. Moroccan Web Driver

2016-10-14: Finished Oral Questions crawler. Finished Written Questions crawler. Waiting for further details on whether that data needs to be tweaked in any way. Updated the Moroccan Web Driver/Web Crawler wiki page. Moroccan Web Driver

2016-10-13: Completed download of Moroccan Bills. Working on either a web driver screenshot approach or a webcrawler approach to download the Moroccan oral and written questions data. Began building Web Crawler for Oral and Written Questions site. Edited Moroccan Web Driver/Crawler wiki page. Moroccan Web Driver

2016-10-11: Fixed Arabic bug; files can now be saved with Arabic titles. Monarchy bills downloaded and ready for shipment. House of Representatives bills mostly downloaded, ratified bills prepared for download. Started learning the Scrapy library in Python for web scraping. Discussed idea of screenshotting questions instead of scraping.

2016-10-07: Learned Unicode and UTF-8 encoding and decoding for Arabic text. Still working on transforming an ASCII URL into printable Unicode.
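The core of the transformation is percent-decoding: an "ASCII" URL carries the Arabic title as percent-encoded UTF-8 bytes. A minimal sketch of recovering a readable filename (the function name is hypothetical, and UTF-8 as the site's encoding is an assumption):

```python
from urllib.parse import unquote

def filename_from_url(url):
    """Recover a readable Arabic filename from a percent-encoded URL.

    Takes the last path segment and percent-decodes it; unquote()
    interprets the escaped bytes as UTF-8 by default.
    """
    last_segment = url.rstrip("/").rsplit("/", 1)[-1]
    return unquote(last_segment)
```

For example, the segment `%D9%85%D8%B1%D8%AD%D8%A8%D8%A7` decodes to the Arabic word مرحبا.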

2016-10-06: Discussed with Dr. Elbadawy the desired file names for the Moroccan data download. The consensus was that the bill programs are ready to launch once the files can be named properly, and that the questions data must be retrieved using a web crawler, which I need to learn how to implement. The naming of files currently throws errors in going from Arabic, to URL, to download, to filename. Debugging in process. Also built a demo Selenium program for Dr. Egan that drives the McNair blog site on an infinite loop.

2016-10-03: Moroccan Web Driver projects completed for driving the Monarchy proposed bills, the House of Representatives proposed bills, and the Ratified bills sites. Began devising a naming system for the files that does not require scraping; tinkered with naming through regular-expression parsing of the URL. Structure for the oral questions and written questions drivers is set up, but needs fixes due to differences between the sites. Fixed bug on McNair wiki for women's biz team where an email was plain text instead of an email link. Took a glimpse at the Kuwait Parliament website; it appears to be very different from the Moroccan setup.

2016-09-30: Selenium program selects the view-PDF option from the website and goes to the PDF webpage. The program then switches handle to the new page. CTRL+S is sent to the page to launch the save dialog window, but text cannot be sent to this window. Brainstormed ways around this issue. Explored Chrome options for saving automatically without a dialog window. Looking into other libraries besides Selenium that may help.
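One candidate direction for the dialog-free save is Chrome's download preferences. The key names below come from Chrome's preference files and their effect varies by version, so treat this sketch as an assumption rather than the fix the project ultimately used; the path is hypothetical:

```python
# Candidate Chrome preferences for saving PDFs without a Save As dialog.
chrome_prefs = {
    "download.default_directory": r"E:\McNair\Projects\downloads",  # hypothetical path
    "download.prompt_for_download": False,       # suppress the Save As dialog
    "plugins.always_open_pdf_externally": True,  # download PDFs instead of previewing
}

# With Selenium this dict would be attached roughly like:
#   options = webdriver.ChromeOptions()
#   options.add_experimental_option("prefs", chrome_prefs)
#   driver = webdriver.Chrome(options=options)
```

If these preferences hold, the driver never needs to send CTRL+S at all: navigating to the PDF URL triggers a silent download into the configured directory.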

2016-09-29: Re-enroll in Microsoft Remote Desktop with proper authentication, set up Selenium environment and Komodo IDE on Remote Desktop, wrote program using Selenium that goes to a link and opens up the print dialog box. Developed computational recipe for a different approach to the problem.

2016-09-26: Set up Staff wiki page, work log page; registered for Slack, Microsoft Remote Desktop; downloaded Selenium on personal computer, read Selenium docs. Created wiki page for Moroccan Web Driver Project.

===Notes===

  • Ed moved the Morocco Data to E:\McNair\Projects from C:\Users\PeterJ\Documents
  • C Drive files moved to E:\McNair\Users\PeterJ