Vcdb4

Revision as of 13:03, 25 September 2019


Project
Vcdb4
Project logo 02.png
Project Information
Has title vcdb4
Has owner Ed Egan
Has start date
Has deadline date
Has project status Active
Copyright © 2019 edegan.com. All Rights Reserved.


Source Files

Files are in:

E:\projects\vcdb4

The old files from VentureXpert Database are in the subfolder Student Work, and their latest work is in Updated.

We need a set of pulls (according to E:\projects\vcdb3\OriginalSQL\LoadingScriptsV1.sql), which are documented below, as well as some lookup tables (CPI may need updating) and some joined tables (which would have to be updated separately) in MatchingEntrepsV3.sql:

  • PortCoSBIR: PortCoSBIR.txt
  • PortCoPatent: PortCoPatent.txt

And to update RevisedDBaseCode.sql, we'll need to:

  • Join in the Crunchbase (which needs updating)
  • Update the Geocoordinates

Note that this data could support new or updated versions of various other projects.

The build should be done as quickly but as cleanly as possible: it is needed right away, but it will also likely need to be updated in January 2020 to reflect 2019's year end.

SDC Platinum Requests

Everything was updated to 09/22/2019 as the final date. Some files were renamed for clarity. Each result is a triplet of .ssh, .rpt, and .txt files. The following scripts, reports and their outputs are in E:\projects\vcdb4\SDC:

  • USVCRound1980
  • USVCPortCo1980
  • USVCRoundOnOneLine1980
  • USVCFund1980
  • USVCFirms1980
  • USPortCoLongDesc1980
  • USVCFirmBranchOffices1980
  • USIPO1980
  • USVCPortCoExecs1980
  • USVCFundExecs1980
  • USMAPrivate100pc1985
  • USMAPrivate100pc2013

The two USMAPrivate100pc queries are different. The first pulls just date announced, date effective, target name, target state and tv. The second adds basic acquirer information from 2013 forward (to allow retroactive revision by Thomson for 5+ years) and can be combined with MAUSTargetComp100pc1985-July2018.txt (after adjusting the spacing) to make USMAPrivate100pc2013Full. For some reason, the query always fails with an out of memory message when trying to pull the whole thing.

  • USVCRound1980 was updated to remove fields that should have been in USVCPortCo1980 only. When normalizing, be sure to copy down only the key fields.
  • USMAPrivate100pc1985 was updated to reflect the MAs load in LoadingScriptsV1; there wasn't a good original. We are using 1985 forward, as data issues prevent download/extraction of the 1980-1984 data. Year Completed was added as a check variable but might have been the source of issues, so it was removed; Date Effective can be used instead.
  • USIPO1980 was updated to allow all exchanges (not just NNA). Completion couldn't be required in the search, so that will have to be done in the database.
  • USVCFund1980 was updated because some variables -- those concerned with the fund's name and fund address -- had changed name.
  • USVCRoundOnOneLine1980 was fixed so that it is just the key (coname, statecode, datefirst) and the round info field, so that it works with the RoundOnOneLine.pl script.
  • Finally, note that USPortCoLongDesc1980 needs processing separately (see below).

Long Description

The instructions on Retrieving_US_VC_Data_From_SDC#Scripts_and_other_info were modified as follows:

  1. Remove the header and footer, and then save as Process.txt using UNIX line endings and UTF-8 encoding.
  2. Run the Regex process (note that I modified it slightly)
  3. Manual Clean
  4. Remove double quotes " from just the description field
  5. Put in a new header with a very long description column
  6. Run the normalizer
  7. Remove duplicate spaces from the description column by pushing the data through Excel and running the last regex (save as In5.txt with UNIX/UTF-8)
  8. Remove quote marks from Out5.txt, resave, and then put back into Excel to create USVCPortCoLongDesc1980Cleaned.txt
# Mark the start of each record (lines that don't begin with a space)
cat Process.txt | perl -pe 's/^([^ ])/###\1/g' > Out1.txt
# Collapse runs of 65+ whitespace characters to a single space
cat Out1.txt | perl -pe 's/\s{65,}/ /g' > Out2.txt
# Join all lines together
cat Out2.txt | perl -pe 's/\n//g' > Out3.txt
# Split back into one line per record
cat Out3.txt | perl -pe 's/###/\n/g' > Out4.txt
...
# Collapse duplicate spaces in the description
cat In5.txt | perl -pe 's/\s{2,}/ /g' > Out5.txt

Round On One Line

The process is: run USVCRoundOnOneLine1980.ssh with USVCRoundOnOneLine1980.rpt to generate USVCRoundOnOneLine1980.txt, then remove the footer and:

perl Normalizer.pl -file="USVCRoundOnOneLine1980-NoFoot.txt"
 # copy down the key columns (0,1,2)
perl RoundOnOneLine.pl -file="USVCRoundOnOneLine1980-NoFoot-normal.txt"
 # then put the header back in!
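The "copy down the key" step between the two scripts can be pictured as follows. This is a sketch only, assuming the normalized file is tab-delimited with the key columns (0, 1, 2) left blank on continuation rows:

```python
# Sketch: "copy down" key columns of a normalized tab-delimited file.
# Assumes continuation rows leave the key columns blank.
def copy_down_keys(rows, key_cols=(0, 1, 2)):
    """Fill blank key columns with the value from the row above."""
    last = {}
    out = []
    for row in rows:
        row = list(row)
        for c in key_cols:
            if c < len(row) and row[c].strip():
                last[c] = row[c]       # remember the latest non-blank key
            elif c in last and c < len(row):
                row[c] = last[c]       # copy it down onto this row
        out.append(row)
    return out
```

The same idea applies to every file below that is marked for copy-down: only the key columns are filled, never the data columns.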

Everything else

Just run the Normalizer. Only copy down key fields -- never copy down anything else as it will introduce data errors. The primary and foreign key fields, which may still need cleaning in SQL to be valid, are as follows (they are marked with * if they should be copied down):

  • USVCRound1980 -- coname*, statecode*, datefirst*, rounddate
  • USVCPortCo1980 -- coname, statecode, datefirst
  • USVCRoundOnOneLine1980 -- Coname*, statecode*, datefirst*, rounddate, fundname
  • USVCFund1980 -- fundname, firmname
  • USVCFirms1980 -- firmname
  • USPortCoLongDesc1980 -- coname*, statecode*, datefirst*
  • USVCFirmBranchOffices1980 -- firmname
  • USIPO1980 -- issuer
  • USVCPortCoExecs1980 -- coname*, statecode*, datefirst*
  • USVCFundExecs1980 -- fundname*, and maybe fundyear*
  • USMAPrivate100pc2013 -- dateeffective, targetname, acquirorname

Note that USMAPrivate100pc2013 and USIPO1980 have some non-numerics in their value fields, and we are generally going to have to take care of some type issues.
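One way to handle the non-numeric value fields is to coerce them before (or during) loading. A minimal sketch, where the exact junk values ("$", commas, text like "nm", blanks) are assumptions rather than observed from the pulls:

```python
import re

# Sketch: coerce a value field with stray non-numerics (e.g. "$1,200.5",
# "nm", "") to a float, or None so it can be loaded as a SQL NULL.
def to_numeric(value):
    cleaned = re.sub(r"[^0-9.\-]", "", value)  # drop $, commas, stray text
    try:
        return float(cleaned)
    except ValueError:
        return None
```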

Loading the data

First, create a dbase:

createdb vcdb4

Then load the data by running Load.sql.

The following were nuances of this process:

  • Replace all double quotes with nothing in PortCo, Fund, and FundExecs
  • Renormalize firm by fixing the header -- the Capital Under Mgmt field was too close to the next field -- and remove two bad area codes (lines 931 and 11298).
  • Be careful not to replace single quotes in company names as it will destroy the keys (or do it everywhere!).

Note that our roundbase doesn't currently have (non-key) company-level fields, unlike before. But this shouldn't matter, as we were going to throw them out anyway when we build Round.

The next phase is to standardize the names and match the portcos to the IPOs and MAs. Then clean up all of the core tables. That takes us to the end of the old LoadingScriptV1.sql. We then need to do the equivalent of updating GeoLoad.sql. This time, we are going to use the old data, as before, but try updating any portco or firm that doesn't have at least 4dp (11.1m resolution) or perhaps 5dp (1.11m resolution) accuracy [1]. We have to fix the PortCo keys first, but we can only run 2,500 queries through Google each day for free per API key, so this should be a batch job.
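The "at least 4dp" test can be sketched as a decimal-place count. This assumes the old coordinates are available as text (as pulled from the vcdb3 geo table); the function name is ours:

```python
# Sketch: flag coordinates that need re-geocoding because they carry
# fewer than 4 decimal places (roughly 11.1m resolution).
def needs_regeocode(lat, lng, min_dp=4):
    def dp(s):
        s = s.strip()
        return len(s.split(".", 1)[1]) if "." in s else 0
    return dp(lat) < min_dp or dp(lng) < min_dp
```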

Updating GeoCoding

The vcdb3 portcogeo table was saved as vcdb3portcogeo.txt and uploaded to vcdb4. Then a new table, companyneedsgeo, was created with valid addresses that could be geolocated. There were fewer than 200 repeated addresses (see examples below) out of 5,519 to be geocoded.

SELECT address, count(*) FROM companyneedsgeo GROUP BY address HAVING COUNT(*) >1;
/*
 address	count
501 Massachusetts Avenue, Cambridge, MA, 02139	4
479 Jessie Street, San Francisco, CA, 94103	4
745 Atlantic Avenue, Boston, MA, 02111	4
80 State Street, Albany, NY, 12207	4
953 Indiana Street, San Francisco, CA, 94107	5
79 Madison Avenue, New York, NY, 10016	5
400 Technology Square, Tenth Floor, Cambridge, MA, 02139	6
440 North Wolfe Road, Sunnyvale, CA, 94085	7
 */

Then run Geocode.py on the output file (broken into batches of 2,500 address queries).

python3 Geocode.py companyneedsgeo1-2499.txt
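Because the free tier allows only 2,500 queries per day per API key, the export has to be split into daily batches. A sketch (the file-naming convention below just follows the companyneedsgeo1-2499.txt example loosely):

```python
# Sketch: split the address list into batches of at most 2,500,
# one per daily free-tier run of Geocode.py.
def split_batches(lines, batch_size=2500):
    return [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]
```

With the 5,519 addresses above, this yields three batches (2,500, 2,500, and 519), i.e. three daily runs.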

Matching

Generally everything should be matched to itself first. Matching should be done using mode=2:

perl .\Matcher.pl -file1="DistinctConame.txt" -file2="DistinctConame.txt" -mode=2
perl .\Matcher.pl -mode=2 -file1="DistinctTargetName.txt" -file2="DistinctTargetName.txt" 
perl .\Matcher.pl -mode=2 -file1="IPODistinctIssuer.txt" -file2="IPODistinctIssuer.txt"
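Matcher.pl's mode=2 rules aren't documented on this page, but the self-match step can be pictured as grouping names that share a standardized key. This is a hypothetical sketch only -- the suffix list and standardization are assumptions, and the real matcher may differ:

```python
import re
from collections import defaultdict

SUFFIXES = {"inc", "llc", "corp", "co", "ltd"}  # assumed suffix list

def standardize(name):
    """Lowercase, strip punctuation, and drop common company suffixes."""
    words = re.sub(r"[^a-z0-9 ]", " ", name.lower()).split()
    return " ".join(w for w in words if w not in SUFFIXES)

def self_match(names):
    """Return groups of distinct names that share a standardized key."""
    groups = defaultdict(list)
    for n in names:
        groups[standardize(n)].append(n)
    return [g for g in groups.values() if len(g) > 1]
```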

Then match between and review:

 perl .\Matcher.pl -mode=2 -file1="PortCoMatchInput.txt" -file2="MAMatchInput.txt"

The M&A review does the following (10406):

  • Check if Hall (not Multi), datefirstinv < announceddate, and statecode = statecode. Take when all three hold. (8064)
  • Throw out when statecode != statecode OR when announceddate < datefirstinv. (1879)
  • Of the remaining (463), take the minimum announceddate.
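The three rules above can be sketched as a single pass over the candidate matches. The field names are assumptions, and dates are represented as (year, month, day) tuples here purely so they compare correctly:

```python
# Sketch of the M&A review rules: keep clean Hall matches, throw out
# state/date conflicts, and keep the earliest announced date otherwise.
def review(matches):
    keep, remaining = [], {}
    for m in matches:
        same_state = m["portco_state"] == m["target_state"]
        date_ok = m["datefirstinv"] < m["announceddate"]
        if m["matchtype"] == "Hall" and same_state and date_ok:
            keep.append(m)            # take when all three hold
        elif not same_state or m["announceddate"] < m["datefirstinv"]:
            continue                  # throw out
        else:
            # ambiguous leftovers: min announceddate per portco
            key = (m["coname"], m["portco_state"])
            cur = remaining.get(key)
            if cur is None or m["announceddate"] < cur["announceddate"]:
                remaining[key] = m
    return keep + list(remaining.values())
```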

This can be done faster in SQL than in Excel. Be aware that the join back must use statecode to deal with cases like:

Mobile Technologies LLC	Mobile Technologies	PA	4/28/2011	8/12/2013
Mobile Technology Inc.	Mobile Technology Inc	CA	12/1/1985	6/28/1990

Also, there is an issue with multiples not showing up as multiple matches, e.g.:

ARCA BIOPHARMA Inc	ARCA biopharma Inc	Hall	ARCA BIOPHARMA Inc	ARCA BIOPHARMA Inc	CO	3/1/2006	ARCA biopharma Inc	ARCA biopharma Inc	CO	9/25/2008	1	1	1	3	0	1
ARCA BIOPHARMA Inc	ARCA biopharma Inc	Hall	ARCA BIOPHARMA Inc	ARCA Biopharma Inc	CO	2/3/2003	ARCA biopharma Inc	ARCA biopharma Inc	CO	9/25/2008	1	1	1	3	0	1