Reproducible Patent Data
This project continues Redesigning Patent Database and aims to provide faster, more centralized code for working with data from the United States Patent and Trademark Office (USPTO). With an end-to-end pipeline, we can reproduce or update the data at any time without worrying about unintentional side effects or missing records.
Project Information | |
---|---|
Project Title | Reproducible Patent Data |
Owner | Oliver Chang |
Start Date | May 17 |
Deadline | |
Primary Billing | |
Notes | |
Project Status | Active |
Subsumes | Redesigning Patent Database, Patent Assignment Data Restructure |

Copyright © 2016 edegan.com. All Rights Reserved.
Progress
- Downloader: done
- Splitter: done
- Parser: done
- Setup PostgreSQL JDBC: done
- Create naive schema based on previous approaches: done
- Create new data structures: done
- Database Insert (modify models/ files with some mapping to database fields): done
- Create tooling for minions: skipped
- Investigate parallel speedup (e.g. multithread, mmap): done
- First 5 zipcode digits; centroid?: hackily done
- Patent ID: done-ish
- Create XPath queries for reissue, design patents (only utility right now): split off (see Equivalent_XPath_and_APS_Queries)
- Create semantic parser for APS files: see above
- Data Cleanup (reference Marcela and Sonia's work)
- Data Source Merger (only USPTO granted, maintfee, assignment not USPTO applications or Harvard Dataverse or Lex Machina currently)
- Setup pipeline script to complete all of these steps in series
- Add constraints to database tables, e.g. correct types, foreign keys, not null, lookup tables
- Add deduplication
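The "pipeline script to complete all of these steps in series" item above could be sketched as a simple Java driver that runs each stage in order. The stage names below are illustrative only, not the project's actual class names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Hypothetical end-to-end driver; the real stage classes live under src/. */
public class Pipeline {
    /** Runs each stage in order and returns the names of completed stages. */
    public static List<String> run() {
        List<String> completed = new ArrayList<>();
        for (String stage : Arrays.asList("Downloader", "Splitter", "Parser", "DatabaseInsert")) {
            // In the real project each stage would be invoked here
            // (e.g. by calling its entry point); this sketch only
            // records the intended execution order.
            completed.add(stage);
        }
        return completed;
    }

    public static void main(String[] args) {
        System.out.println(String.join(" -> ", run()));
    }
}
```

A driver like this is what makes the pipeline reproducible: a single entry point replaces remembering to run each stage by hand.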
Directory Layout
Where is the Data?
Directories
All of the information for this project is located at E:\McNair\Projects\SimplerPatentData
There are several interesting directories:
- data/downloads/: USPTO bulk data, unmodified, straight from the scraper.
- data/extracts/: a strict subset of the information stored in data/downloads/. It is the result of running a bulk 7-Zip job on that directory to unzip everything into a flat structure. Note that these files keep the USPTO modified-by time, since that metadata is stored in the zipfiles. To extract files in this format, select all of the zipfiles and set up an extraction job as in this screenshot.
- data/backups/: a 7-Zip'd backup of the corresponding directory in extracts.
- src/: the main code repository for the Java project.
Input Files
All of the text-only Red Book files for granted patents from 1976 to 2016, inclusive, are included.
- Granted patent XML files, by year: E:\McNair\Projects\SimplerPatentData\data\extracts\granted\
- Application data (2001 to 2016, inclusive): E:\McNair\Projects\SimplerPatentData\data\extracts\applications\
- Assignment data: E:\McNair\Projects\SimplerPatentData\data\extracts\granted\
- Maintenance fee data: E:\McNair\Projects\SimplerPatentData\data\downloads\maintenance\
Where is the Code?
The code has the same parent directory as the data, so it is at E:\McNair\Projects\SimplerPatentData\src. You might notice a lot of single-entry directories; this is an idiomatic Java pattern used for package separation. If you use IntelliJ or another IDE, these directories are a bit less annoying.
The development environment is Java 8 JDK, IntelliJ Ultimate IDE, Maven build tools, and git VCS.
The git repository can be found at https://rdp.mcnaircenter.org/codebase/Repository/ReproduciblePatent
Prior Art
This tool is not so much concerned with adding new functionality; rather, it aims to take a number of scattered Perl scripts and build a faster system that is easier to work with. As such, its functionality is largely adapted from those scripts:
- Downloader: E:\McNair\Software\Scripts\Patent\USPTO_Parser.pl
- XML Splitter: E:\McNair\PatentData\splitter.pl
- XML Parsing: E:\McNair\PatentData\Processed\xmlparser_4.5_4.4_4.3.pl and E:\McNair\PatentData\Processed\*.pm
In addition, I used several non-standard Java libraries listed below:
- Unirest for easy HTTP requests (MIT License)
- Google Guava for immutable collections and Stream utilities (Apache v2.0 License)
- jsoup for HTML parsing (MIT License)
- Apache Commons Codec (Apache v2.0 License)
- Apache Commons Lang v3 (Apache v2.0 License)
- Jetbrains Annotations for enhanced null checks (Apache v2.0 License)
- PostgreSQL JDBC (BSD 3-clause per https://github.com/pgjdbc/pgjdbc-jre7/blob/master/LICENSE)
If you use Maven, these dependencies are declared in the project file and should be set up automatically.
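For reference, the corresponding pom.xml entries would look roughly like the following. Versions match the jars visible on the runtime classpath in the Using Code section; check the project's actual pom.xml for the authoritative list:

```xml
<dependencies>
  <dependency>
    <groupId>com.mashape.unirest</groupId>
    <artifactId>unirest-java</artifactId>
    <version>1.4.9</version>
  </dependency>
  <dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>21.0</version>
  </dependency>
  <dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.10.2</version>
  </dependency>
  <dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.10</version>
  </dependency>
  <dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
  </dependency>
  <dependency>
    <groupId>org.jetbrains</groupId>
    <artifactId>annotations</artifactId>
    <version>15.0</version>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.1.1</version>
  </dependency>
</dependencies>
```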
Using Code
Any file with a line that reads public static void main(String[] args) { can be run as a standalone program. The easiest way to do this is to open the project and then the file in IntelliJ and click the little green play arrow next to that line.
The code can also be run via the standard javac and java commands, but since this project has a complicated structure, you end up having to run commands like
"C:\Program Files\Java\jdk1.8.0_131\bin\java" "-javaagent:C:\Users\OliverC\IntelliJ IDEA 2017.1.3\lib\idea_rt.jar=62364:C:\Users\OliverC\IntelliJ IDEA 2017.1.3\bin" -Dfile.encoding=UTF-8 -classpath "C:\Program Files\Java\jdk1.8.0_131\jre\lib\charsets.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\deploy.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\access-bridge-64.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\cldrdata.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\dnsns.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\jaccess.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\jfxrt.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\localedata.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\nashorn.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunec.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunjce_provider.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunmscapi.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\sunpkcs11.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\ext\zipfs.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\javaws.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jce.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jfr.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jfxswt.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\jsse.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\management-agent.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\plugin.jar;C:\Program Files\Java\jdk1.8.0_131\jre\lib\resources.jar;C:\Program 
Files\Java\jdk1.8.0_131\jre\lib\rt.jar;E:\McNair\Projects\SimplerPatentData\target\classes;C:\Users\OliverC\.m2\repository\com\mashape\unirest\unirest-java\1.4.9\unirest-java-1.4.9.jar;C:\Users\OliverC\.m2\repository\org\apache\httpcomponents\httpclient\4.5.2\httpclient-4.5.2.jar;C:\Users\OliverC\.m2\repository\org\apache\httpcomponents\httpcore\4.4.4\httpcore-4.4.4.jar;C:\Users\OliverC\.m2\repository\commons-logging\commons-logging\1.2\commons-logging-1.2.jar;C:\Users\OliverC\.m2\repository\org\apache\httpcomponents\httpasyncclient\4.1.1\httpasyncclient-4.1.1.jar;C:\Users\OliverC\.m2\repository\org\apache\httpcomponents\httpcore-nio\4.4.4\httpcore-nio-4.4.4.jar;C:\Users\OliverC\.m2\repository\org\apache\httpcomponents\httpmime\4.5.2\httpmime-4.5.2.jar;C:\Users\OliverC\.m2\repository\org\json\json\20160212\json-20160212.jar;C:\Users\OliverC\.m2\repository\com\google\guava\guava\21.0\guava-21.0.jar;C:\Users\OliverC\.m2\repository\org\jsoup\jsoup\1.10.2\jsoup-1.10.2.jar;C:\Users\OliverC\.m2\repository\commons-codec\commons-codec\1.10\commons-codec-1.10.jar;C:\Users\OliverC\.m2\repository\org\jetbrains\annotations\15.0\annotations-15.0.jar;C:\Users\OliverC\.m2\repository\org\apache\commons\commons-lang3\3.5\commons-lang3-3.5.jar;C:\Users\OliverC\.m2\repository\org\postgresql\postgresql\42.1.1\postgresql-42.1.1.jar" org.bakerinstitute.mcnair.uspto_assignments.XmlDriver
to include all of the runtime dependencies and it's just not worth it.
Altering Code
- Use the IntelliJ command Reformat Code (found in the menus at Code > Reformat Code)
- Use the Optimize Imports function found under the same menu
- Use spaces for indentation
- Loosely try to keep lines below 120 characters
- Commit changes to the Git remote repository "bonobo"
Schema Reconciliation
Dates Used | Format | Location | Supported by Parser? | Utility | Reissue | Design | Plant |
---|---|---|---|---|---|---|---|
January 1976 to December 2001 | APS | data/extracts/granted/vintage | Yes | ✓ | ~ | ~ | ✗ |
Ignored; use concurrently recorded APS data | N/A | N/A | N/A | N/A | | | |
January 2002 to December 2004 | XML Version 2.5 | data/extracts/granted/blunderyears | Yes | ✓ | ~ | ~ | ✗ |
January 2005 to December 2005 | XML Version 4.0 ICE | data/extracts/granted/modern | Yes | ✓ | ~ | ~ | ✗ |
January 2006 to December 2006 | XML Version 4.1 ICE | data/extracts/granted/modern | Yes | ✓ | ~ | ~ | ✗ |
January 2007 to December 2012 | XML Version 4.2 ICE | data/extracts/granted/modern | Yes | ✓ | ~ | ~ | ✗ |
January 2013 to September 24, 2013 | XML Version 4.3 ICE | data/extracts/granted/modern | Yes | ✓ | ~ | ~ | ✗ |
October 8, 2013 to December 2014 | XML Version 4.4 ICE | data/extracts/granted/modern | Yes | ✓ | ~ | ~ | ✗ |
January 2015 to December 2016 | XML Version 4.5 ICE | data/extracts/granted/modern | Yes | ✓ | ~ | ~ | ✗ |
APS Rosetta Stone
The Advanced Patent System (APS) is a fixed-width text format used to store historical patent grant data. The documentation is poor, with pages missing seemingly at random. Luckily, we only care about the content contained here: File:PatentFullTextAPSDoc GreenBook pgs13-22.pdf.
It's worth mentioning that APS includes an elaborate text markup system for chemical formulae, basic text styling, tables, etc., which can produce seemingly garbled text that is in fact perfectly well-formed.
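To give a flavor of the format, here is a minimal sketch of splitting one APS line into a field code and its value. It assumes each line carries a short paragraph code at the start followed by the field text (e.g. "TTL  some title"); treat the exact column offsets as an assumption and consult the Green Book excerpt above for the authoritative layout:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

/**
 * Minimal sketch of an APS line reader. Assumes a field code in the
 * first four characters followed by the field's text; the real column
 * layout is defined in the Green Book documentation.
 */
public class ApsLine {
    /** Splits one APS line into (code, value); value is empty for bare codes. */
    public static Map.Entry<String, String> parse(String line) {
        String code = line.length() >= 4 ? line.substring(0, 4).trim() : line.trim();
        String value = line.length() > 5 ? line.substring(5) : "";
        return new SimpleEntry<>(code, value);
    }

    public static void main(String[] args) {
        System.out.println(parse("TTL  Method for widget calibration"));
    }
}
```

A real parser also has to track which section (e.g. a patent record versus an inventor block) the current code belongs to, since the same code can appear in different contexts.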
Database
Because there isn't a compelling reason not to, I used the existing PostgreSQL infrastructure on the RDP. The "Java Way" of interacting with databases is the Java Database Connectivity API (JDBC), an implementation-agnostic API for interacting with databases. This project uses the stock Postgres JDBC driver, version 42.1.1.
- Create an empty database:
$ createdb --username=postgres patents_june_2017 # password is tabspaceenter
- Create tables via script at
E:\McNair\Projects\SimplerPatentData\src\db\NaiveSchema.sql
- Prior Example
E:\McNair\Software\Scripts\Patent\createTables.sql
- Aim to create a completely naive schema with as few constraints as possible--iteratively add more constraints in the future
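The "completely naive schema" idea can be illustrated with a hedged SQL sketch. Table and column names here are purely illustrative; the authoritative DDL is NaiveSchema.sql above:

```sql
-- Illustrative only: minimal constraints, mostly text columns,
-- so that inserts never fail on messy source data.
CREATE TABLE granted_patents (
    patent_id   text,
    grant_date  text,   -- kept as text initially; cast to date later
    title       text,
    assignee    text
);
-- Constraints (NOT NULL, foreign keys, proper types) are layered on iteratively.
```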
Since writing raw SQL is cumbersome and error-prone, I added some abstraction layers that make it much easier to add bulk data quickly. Using Postgres's CopyManager class, we buffer SQL COPY rows in memory (as many as possible) and then flush them. To understand how the abstraction layers work, see the code in E:\McNair\Projects\SimplerPatentData\src\main\java\org\bakerinstitute\mcnair\postgres. For a simple, self-contained example, see E:\McNair\Projects\SimplerPatentData\src\main\java\org\bakerinstitute\mcnair\uspto_assignments\GeonamesZips.java; for an example of extending the abstraction layer to handle more complex scenarios, see E:\McNair\Projects\SimplerPatentData\src\main\java\org\bakerinstitute\mcnair\models\GrantedPatent.java.
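To give a flavor of what such an abstraction layer has to do, here is a self-contained sketch of formatting rows for Postgres's COPY text format (tab-delimited, with backslash escapes). The real classes in the postgres package handle the buffering plus the actual CopyManager.copyIn call, which is omitted here so the example runs without a database; class and method names below are hypothetical:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/**
 * Sketch of building rows for PostgreSQL's COPY ... FROM STDIN text format.
 * The real pipeline would hand the buffered rows to org.postgresql.copy.CopyManager;
 * this class only shows the escaping and buffering side.
 */
public class CopyRowBuffer {
    private final StringBuilder buffer = new StringBuilder();

    /** Escapes a single field per the COPY text-format rules. */
    static String escape(String field) {
        if (field == null) {
            return "\\N"; // COPY's representation of SQL NULL
        }
        return field.replace("\\", "\\\\")
                    .replace("\t", "\\t")
                    .replace("\n", "\\n")
                    .replace("\r", "\\r");
    }

    /** Appends one tab-delimited row to the in-memory buffer. */
    public void addRow(List<String> fields) {
        buffer.append(fields.stream()
                .map(CopyRowBuffer::escape)
                .collect(Collectors.joining("\t")));
        buffer.append('\n');
    }

    public String contents() {
        return buffer.toString();
    }

    public static void main(String[] args) {
        CopyRowBuffer rows = new CopyRowBuffer();
        rows.addRow(Arrays.asList("9000000", "Houston\tTX", null));
        System.out.print(rows.contents());
    }
}
```

Buffering formatted rows like this and flushing them in large batches is what makes COPY dramatically faster than row-by-row INSERT statements.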
Address Data
- Question: "In which zipcodes are the most patents granted?"
- Hacky answer:
  \COPY (select postcode, count(documentid) as c from june_2017_zipcode_join group by postcode order by c desc) TO '/bulk/zipcodes-oliver/postcodes_ranked.tsv' (format csv, delimiter E'\t')
  (accessible at Z:/zipcodes-oliver/postcodes_ranked.tsv)
- Zipcode granularity; finer detail wanted if possible
- Filter non-US and null addresses
- Need to clean up addresses
- Extract zipcode, or reverse-geolocate to find it
- "The 4-5 digit reel number refers to the microfilm reel number of the assignment entry in physical USPTO records; similarly the 1-4 digit frame number refers to the location of the assignment entry on the reel number in physical USPTO records. Thus, each assignment recorded with the USPTO has a unique reel number and frame number combination. While both reel number and frame number are sequential, there are missing values in the sequence because each only specifies the first page of the assignment records and records may have multiple pages." from https://www.uspto.gov/sites/default/files/documents/USPTO_Patents_Assignment_Dataset_WP.pdf, pg12, footnote 38
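The "extract zipcode" step in the list above could start from a simple regex sketch like this. It handles US ZIP and ZIP+4 forms only and can false-positive on other 5-digit tokens, so real address cleanup needs much more than this:

```java
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch of pulling a US ZIP or ZIP+4 code out of a free-form address string. */
public class ZipExtractor {
    // 5 digits, optionally followed by -4 digits, as a standalone token.
    private static final Pattern ZIP = Pattern.compile("\\b(\\d{5})(?:-\\d{4})?\\b");

    /** Returns the last ZIP-looking token, since ZIPs usually end a US address. */
    public static Optional<String> extract(String address) {
        Matcher m = ZIP.matcher(address);
        String last = null;
        while (m.find()) {
            last = m.group(1);
        }
        return Optional.ofNullable(last);
    }

    public static void main(String[] args) {
        System.out.println(extract("6100 Main St, Houston, TX 77005-1892"));
    }
}
```

Taking the last match rather than the first avoids grabbing a 5-digit street number at the start of the address, though it is still only a heuristic.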
Related Pages
- US Address Verification, Summer 2017 based on tables from Assignment Data Restructure
- Assignment Data Restructure, Spring 2017 by Marcela and Sonia
- Redesigning Patent Database, Spring 2017 by Shelby
- Patent Data Cleanup, June 2016 by Marcela
- Patent Data, Spring 2016 by Marcela
- Lex Machina
- USPTO Patent Litigation Research Dataset by Ed
- Patent Litigation and Review by Marcela
- Bag of Words Analysis
- Existing Database Schema
- My Work Log