VCDB24

VCDB24 is the 2024 and final iteration of my VentureXpert-based Venture Capital DataBase. Thomson Reuters discontinued access to VentureXpert through SDC Platinum on December 31st, 2023 (see also: SDC Normalizer). This iteration contains data up until then. Each VCDB includes investments, funds, startups, executives, exits, locations, and more. The previous build was VCDB23, but the best previous instructions are from VCDB20.

Processing Steps

Get the source data:

  1. Copy over the rpt, ssh, and pl files to E:\projects\vcdb24\SDC, and bulk edit the ssh files.
    1. Make the final date 12/31/2023 and change vcdb23 to vcdb24 (see the sketch after this list)
  2. Run the ssh files against SDC Platinum one last time on 31 December 2023.
  3. Run the SDC Normalizer script (one of the pl files) on each output
    1. Fix the header row in USFirms1980.txt before normalizing (the Capital Under Management column name is too long)
    2. Remove double quotes from USFund1980-normal.txt, USFundExecs1980-normal.txt, USPortCo1980-normal.txt, USFirmBranchOffices1980.txt
    3. After normalization, combine the private and public M&A file sets into two separate files, then replace \tnp\t and \tnm\t with \t\t in each (see the one-liners after this list).
    4. For RoundOnOneLine, remove the footer, run NormalizeFixedWidth.pl first, then RoundOnOneLine.pl, and then fix the header.
    5. PortCoLongDescription must be pre-processed from the command line and then post-processed in Excel (see below, as well as VCDB20H1 and Vcdb4#Long_Description).
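
A minimal sketch of the bulk edit in step 1.1, assuming the request files carry a .ssh extension and sit in E:\projects\vcdb24\SDC (check how the old end date is written in them before adding a second substitution for it):
# Rename vcdb23 to vcdb24 throughout the session files, keeping .bak backups;
# add a similar s/// for the end date once the old date string has been confirmed.
perl -pi.bak -e 's/vcdb23/vcdb24/g' *.ssh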
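
For steps 3.2 and 3.3, one-liners in the same style as the pipeline further below will do. The two M&A file names here are placeholders for whatever the combined files are called:
# Step 3.2: strip double quotes in place
perl -pi -e 's/"//g' USFund1980-normal.txt USFundExecs1980-normal.txt USPortCo1980-normal.txt USFirmBranchOffices1980.txt
# Step 3.3: blank out np/nm; each substitution runs twice so adjacent matches
# that share a tab are both caught (file names below are placeholders)
perl -pi -e 's/\tnp\t/\t\t/g; s/\tnp\t/\t\t/g; s/\tnm\t/\t\t/g; s/\tnm\t/\t\t/g' MAPrivate-combined.txt MAPublic-combined.txt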

Create the postgres database:

  1. Create a new database on mother (createdb vcdb24) and set up a directory for the input files: bulk\vcdb24
  2. Copy Load.sql over to the sql folder, edit it, and run it section by section (see the sketch below).
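
Load.sql presumably creates each table and then bulk-loads the matching normalized file. A hedged sketch of what a single load line can look like, with a placeholder table name, path, and options (the real column lists and options live in Load.sql itself):
\copy portco FROM 'bulk/vcdb24/USPortCo1980-normal.txt' WITH (FORMAT csv, DELIMITER E'\t', HEADER)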

PortCoLongDescription

Process the Long Description data as follows:

  1. Remove the header and footer, and then save as Process.txt using UNIX line endings and UTF-8 encoding.
  2. Run the first section (producing Out5.txt) of the regex process below
  3. Import into Excel to make tab-delimited
  4. Remove double quotes " from just the description field
  5. Put in a new header
  6. Save as In5.txt with UNIX/UTF-8
  7. Run the second section of the regex process. It deals with the spaces in the description and the cases where there is no description.
  8. Try importing USPortCoLongDesc1980Cleaned.txt. It should be fine.
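# Stage 1: mark each record start with ###, collapse long runs of padding,
# strip newlines to join wrapped description lines, split the records back
# out one per line, and turn the space after the trailing 4-digit field into a tab.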
cat Process.txt | perl -pe 's/^([^ ])/###\1/g' > Out1.txt
cat Out1.txt | perl -pe 's/\s{65,}/ /g' > Out2.txt
cat Out2.txt | perl -pe 's/\n//g' > Out3.txt
cat Out3.txt | perl -pe 's/###/\n/g' > Out4.txt
cat Out4.txt | perl -pe 's/(\d{4}) $/\1\t/g' > Out5.txt
...
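# Stage 2 (run on In5.txt from Excel): protect the trailing tab on records with
# no description by swapping it for ###, collapse repeated whitespace to a
# single space, then restore the protected tab.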
cat In5.txt | perl -pe 's/(\d{4})\t$/\1###/g' > Out6.txt
cat Out6.txt | perl -pe 's/\s{2,}/ /g' > Out7.txt
cat Out7.txt | perl -pe 's/###/\t/g' > USPortCoLongDesc1980Cleaned.txt

Geocoding

Part of Load.sql requires that we update the geocoding, adding longitude and latitude for PortCos and firm offices that we haven't seen before.

The last time this was run was vcdb20. Accordingly:

  • In vcdb20, export the portcogeo, firmgeo, and bogeo tables (a sketch follows this list)
  • Import them as portcogeo_vcdb20, firmgeo_vcdb20, and bogeo_vcdb20
  • Build the portcogrowthneedsgeo, firmneedsgeo, and firmboneedsgeo files for geocoding
  • Log into the Google Cloud Console and set up an API key. Note that:
    • Up to $200/month should be free
    • Geocoding costs $5.00 USD per 1,000 lookups
    • The rate limit is 3,000 QPM
  • In E:\tools\Geocode, run the scripts: Geocode.py for portcos and GeocodeOneKey.py for everything else.
    • Strip the header line out of the input file(s)
    • python Geocode.py portcogrowthneedsgeo-NoHeader.txt
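
A sketch of the table export in the first bullet above, assuming tab-delimited dumps with headers; the matching imports into vcdb24 as portcogeo_vcdb20, firmgeo_vcdb20, and bogeo_vcdb20 need CREATE TABLE statements with the same columns, which are not reproduced here:
psql vcdb20 -c "\copy portcogeo TO 'portcogeo_vcdb20.txt' WITH (FORMAT csv, DELIMITER E'\t', HEADER)"
psql vcdb20 -c "\copy firmgeo TO 'firmgeo_vcdb20.txt' WITH (FORMAT csv, DELIMITER E'\t', HEADER)"
psql vcdb20 -c "\copy bogeo TO 'bogeo_vcdb20.txt' WITH (FORMAT csv, DELIMITER E'\t', HEADER)"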