All the crunchbase3 data from drive E is now also in drive Z:/crunchbase3.
A version of the crunchbase3 database is live on the PostgreSQL server in Z:/crunchbase3. However, a few CSV files have not been copied into the SQL database because of data type errors. This is a small problem, but Hiep will need to spend some time to fix it; he will work on it next week (March 28th).
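The exact type errors are not recorded here, but one typical fix is to relax the offending column's type and re-run the copy; a minimal sketch, with a hypothetical column (funding_round is one of the affected files, raised_amount is an assumed column name):
ALTER TABLE funding_round ALTER COLUMN raised_amount TYPE text USING raised_amount::text;
\copy funding_round FROM 'funding_round.csv' CSV HEADER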
'''03/28/2019 update'''
Right now, a modified load_crunchbase.sql is in both Z:/crunchbase3 and E:/projects/crunchbase3. The dataset, data types, and data columns have changed a lot compared to the previous version. The columns that have not yet been added to the PostgreSQL db are marked between two lines of ################'s in the SQL script. Since the data has changed so much compared to previous years, running \i load_crunchbase.sql might not be very useful, and one may need to copy one table at a time by pasting the SQL script into the terminal (see the sketch below). The files that have not yet been copied to the PostgreSQL server are: degrees.csv, events.csv, funding_round.csv, funds.csv, investors.csv, ipos.csv, jobs.csv, organizations.csv, people.csv.
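Copying one table at a time means pasting that table's CREATE TABLE block and its matching \copy line into psql; a minimal sketch, with an invented, simplified column list (the real one is in load_crunchbase.sql):
CREATE TABLE degrees (
    uuid text PRIMARY KEY,
    person_uuid text,
    degree_type text,
    subject text,
    completed_on date
);
\copy degrees FROM 'degrees.csv' CSV HEADER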
'''03/29/2019 update'''
All 17 datasets from the API have been copied to the PostgreSQL server in drive Z under /bulk/crunchbase3. To make the date-time columns parse properly in Postgres, all the quoted empty strings ("") in the CSV files have been replaced by NULL with the command line
sed 's/""//g' file.csv >file_clean.csv
The script that I used to do that is in the file clean_data.sh in E:/projects/crunchbase3. A shorter script that handles all the files in the directory is possible but might not be necessary, since not all files require such an edit; a sketch follows.
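A minimal sketch of such a loop, assuming cleaned copies are given a _clean suffix as above (the output naming is an assumption):
for f in *.csv; do
    # same substitution as clean_data.sh, applied to every CSV in the directory
    sed 's/""//g' "$f" > "${f%.csv}_clean.csv"
done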
All the scripts in load_crunchbase.sql have been updated. The script now works correctly with the current data crawled from the Crunchbase API, and each \COPY command reports the correct number of rows copied from its CSV file. I have also double-checked each table by comparing the Postgres version of the data with the pandas version of the data.
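On the Postgres side, that check amounts to comparing each table's row count against the COPY n line printed by \copy and against the row count of the corresponding pandas DataFrame; the table name here is just one of the datasets listed above:
SELECT COUNT(*) FROM organizations;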
To see and use the data in the postgres server:
1) Connect to reseacher@199.188.177.215. A password is required (ask Prof Egan for details).