Crunchbase Database | |
---|---|
Project Information | |
Has title | Crunchbase Database |
Has owner | Hiep Nguyen |
Has start date | 2019/03/13 |
Has deadline date | 2019/03/22 |
Has project status | Active |
Dependent(s): | Ecosystem Organization Classifier, Incubator Seed Data |
Files and Dbase
Files are in:
- E:\projects\crunchbase3
- Z:\crunchbase3
The database is crunchbase3
The old project page is Crunchbase Data. File locations listed there as Z:/bulk/ should now be read as Z:/bulk/mcnair/. For example, there is an old load script at /bulk/mcnair/crunchbase/crunchbaseData/load_crunchbase.sql
Crunchbase Pro
https://www.crunchbase.com/login
Login details:
- mcnair@rice.edu (get the password from Ed)
Getting and cleaning data
The URL for API calls is https://api.crunchbase.com/v3.1/csv_export/csv_export.tar.gz?user_key=[API KEY GOES HERE]
The (premium) API key is located in E:\projects\crunchbase3
The bash script that downloads and extracts the data (1.9 GB) is at E:\projects\crunchbase3\get_data.sh
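A minimal sketch of what such a script can look like is below (this is an assumption, not the verbatim contents of get_data.sh; the api_key.txt file name is a placeholder):

#!/bin/bash
# Read the user key from a local file (placeholder name) and download the full CSV export
KEY=$(cat api_key.txt)
curl -o csv_export.tar.gz "https://api.crunchbase.com/v3.1/csv_export/csv_export.tar.gz?user_key=${KEY}"
# Extract the archive into the current directory (the CSVs land in data/)
tar -xvzf csv_export.tar.gz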
Alternatively, we can download and extract the data directly from the Windows command prompt by typing the following commands
curl -O "https://api.crunchbase.com/v3.1/csv_export/csv_export.tar.gz?user_key=[API key goes here]"
tar -xvf "csv_export.tar.gz_user_key=[API key goes here]"
Current csv files from crunchbase data
data\acquisitions.csv
data\category_groups.csv
data\degrees.csv
data\events.csv
data\event_appearances.csv
data\funding_rounds.csv
data\funds.csv
data\investments.csv
data\investment_partners.csv
data\investors.csv
data\ipos.csv
data\jobs.csv
data\organizations.csv
data\organization_descriptions.csv
data\org_parents.csv
data\people.csv
data\people_descriptions.csv
The SQL script get_data.sql from last year has been copied to the current crunchbase3 directory. However, the two databases are now very different and adjustments are necessary. To keep track of the data types in each CSV file that gets copied into the SQL tables, a file get_type.py is included in E:\projects\crunchbase3. This Python script prints the first 5 rows of every data frame in the current directory.
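For a quick look from the shell instead (a rough equivalent, not the Python script itself; the data/ path assumes the extracted CSVs listed above), something like the following prints the first few rows of each file:

for f in data/*.csv; do echo "== $f =="; head -n 5 "$f"; done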
All of the crunchbase3 data from drive E is now also on drive Z, under Z:/crunchbase3
Since the data has changed a lot compared to previous years, running \i load_crunchbase.sql might not be very useful, and one may need to load one table at a time by pasting the relevant SQL into the terminal.
All 17 datasets from the API have been copied to the PostgreSQL server on drive Z under /bulk/crunchbase3. So that the date-time columns parse properly in Postgres, all quoted empty strings ("") in the CSV files were stripped out (leaving empty fields, which \COPY reads as NULL) with the command line
sed 's/""//g' file.csv >file_clean.csv
The script I used to do this is clean_data.sh in E:/projects/crunchbase3. A shorter script that applies the edit to every file in the directory is possible (see the sketch below), but it might not be necessary, since not all files require the edit.
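A minimal sketch of such a loop, assuming the cleaned copies should sit next to the originals with a _clean suffix (the data/ path and the suffix are assumptions, not taken from clean_data.sh):

#!/bin/bash
# Strip quoted empty strings from every CSV so that \COPY loads those fields as NULL
for f in data/*.csv; do
    sed 's/""//g' "$f" > "${f%.csv}_clean.csv"
done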
Working with the database
All the scripts in load_crunchbase.sql have been updated. The file now works with the current data (as of 03/29/2019) crawled from the Crunchbase API, and the correct number of rows copied from each CSV file is noted at the end of each \COPY command.
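For reference, a \COPY line in such a load script typically looks like the following (the table and file names here are illustrative, not copied from load_crunchbase.sql; with CSV format, empty unquoted fields are read as NULL by default):

\COPY organizations FROM 'organizations.csv' WITH CSV HEADER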
To see and use the data in the postgres server:
1) Connect to researcher@199.188.177.215. A password is required (ask Prof Egan for details)
2) Go to /bulk/crunchbase3
cd /bulk/crunchbase3
3) Connect to the database and list its tables
psql crunchbase3
\dt
4) Perform regular SQL queries
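For example, a quick sanity check at the psql prompt (the organizations table name is an assumption, matching the organizations.csv file above):

SELECT COUNT(*) FROM organizations;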