LP Extractor Protocol

| Project Information | |
| --- | --- |
| Has title | LP Extractor Protocol |
| Has owner | Lasya Rajan |
| Has start date | |
| Has deadline date | |
| Has project status | Active |
| Subsumed by | Listing Page Extractor |
| Has sponsor | Kauffman Incubator Project |
| Has project output | Tool |
Summary
The LP Extractor Protocol currently envisages marking data locations on webpages, converting webpages into a simplified Domain Specific Language (DSL), and then encoding the DSL into a matrix. The markings of data locations would be encoded into a companion matrix. Both matrices will then be fed into a neural network, which is trained to produce the markings given the DSL. To date, we have conducted a literature review that has found papers describing similar "paired input" networks, and we are in the process of refining our understanding of the pre-existing code and work related to each step.
Files location:
E:\projects\Kauffman Incubator Project\03 Automate the extraction of information\RajanLasya_ExtractionProtocols_03.21
Proposed Method
According to “Project Goal V2” (E:\mcnair\Projects\Incubators), we considered three broadly defined methods to organize and extract useful information from an HTML web page. The first method is text processing: analyzing and classifying the textual content of the HTML page. The second is image-based pattern recognition, likely through an off-the-shelf model that can extrapolate key HTML elements from web page screenshots. The third, and most novel, method is to structurally analyze the HTML tree and express that simplified HTML structure in a Domain Specific Language (DSL). We are currently pursuing this third method.
HTML Tree Structure Analysis
Structurally analyzing the HTML tree structure of a web page and expressing it in a DSL is the most innovative method of the three, and it would require more than simply adapting off-the-shelf models. First, the DSL itself would need to be designed to optimize abstraction of the target domain, a web page. (See Domain Specific Language Research.) Then, the DSL would need to be integrated into the machine learning pipeline by encoding it into an appropriately formatted input for a neural network, such as a vector or matrix. Three proposed methods for this encoding are an adjacency matrix, an edges-to-vertices matrix, and DFS (depth-first search) algorithms.
DFS Encoding
Currently, we are leaning towards utilizing DFS algorithms. A DFS algorithm starts at the root node (or an arbitrary node, for a graph) and traverses each branch fully before backtracking to the last split at which the branch began. Such an algorithm can traverse any given tree and record a 1 when a new node is found and a 0 when that node is fully explored, creating a numerical representation of the tree that can then be entered into a vector or matrix. A DFS traversal has an efficiency of O(n).
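To make this concrete, here is a minimal Python sketch of the 1/0 DFS encoding; the dict-of-children tree representation and the node names are illustrative assumptions, not the project's actual DOM format.

```python
# Minimal sketch of the 1/0 DFS encoding described above.
# The dict-of-children tree representation is assumed for illustration.

def dfs_encode(tree, root):
    """Append 1 when a new node is found, 0 when it is fully explored."""
    encoding = []

    def visit(node):
        encoding.append(1)                    # new node discovered
        for child in tree.get(node, []):
            visit(child)
        encoding.append(0)                    # node fully explored

    visit(root)
    return encoding

# Toy DOM-like tree: html -> (head, body), body -> div.
tree = {"html": ["head", "body"], "body": ["div"]}
print(dfs_encode(tree, "html"))  # [1, 1, 0, 1, 1, 0, 0, 0]
```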
Adjacency Matrix
By interpreting the tree as a graph, we can utilize an adjacency matrix to encode it. Each element of the matrix records whether its corresponding pair of vertices is adjacent in the graphical representation. In its simplest form, for a vertex set V, the matrix is square with dimensions |V| x |V|, and because a tree has no self-loops, the diagonal elements are all zero. This approach has an algorithmic efficiency of O(n^2).
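A minimal sketch of this encoding with NumPy, using a made-up four-node tree (the node names and edges are assumptions for illustration):

```python
# Sketch of the |V| x |V| adjacency matrix encoding with NumPy.
import numpy as np

nodes = ["html", "head", "body", "div"]                       # assumed example
edges = [("html", "head"), ("html", "body"), ("body", "div")]

index = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)), dtype=int)
for u, v in edges:
    A[index[u], index[v]] = 1
    A[index[v], index[u]] = 1    # undirected tree: matrix is symmetric

print(A)   # diagonal stays zero, since a tree has no self-loops
```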
Edges to Vertices Matrix
Any tree with n nodes has n-1 edges. For every edge, we can record the two end vertices, resulting in a matrix of dimensions (n-1) x 2. This matrix approach has an algorithmic efficiency of O(n).
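The same assumed toy tree encoded as an edge list, again only a sketch:

```python
# Sketch of the (n-1) x 2 edges-to-vertices encoding with NumPy.
import numpy as np

nodes = ["html", "head", "body", "div"]
edges = [("html", "head"), ("html", "body"), ("body", "div")]

index = {n: i for i, n in enumerate(nodes)}
E = np.array([[index[u], index[v]] for u, v in edges])

print(E.shape)   # (3, 2): n - 1 = 3 edges for n = 4 nodes
print(E)
```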
Supervised Learning Approach (HTML to DSL)
Additionally, the HTML tree structure analysis method will require a subprocess by which to parse a complex HTML page into our DSL. An example of a similar process is pix2code, in which a DSL context and a GUI screenshot are fed into an architecture containing Long Short-Term Memory (LSTM) layers and a CNN-based vision model, which outputs a DSL token. After training with paired inputs is complete, this architecture can take an empty context and a GUI input and output DSL code.
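To illustrate the paired-input idea, here is a heavily simplified Keras sketch in the spirit of pix2code: a CNN encodes the GUI screenshot, an LSTM encodes the one-hot DSL context, and a second LSTM decodes the merged features into the next DSL token. All layer sizes, the image resolution, the context length, and the vocabulary size are illustrative assumptions, not the published pix2code hyperparameters.

```python
# Simplified paired-input (image + DSL context) model in the spirit of
# pix2code. Every size below is an assumption chosen for illustration.
from tensorflow.keras import layers, models

VOCAB_SIZE = 20      # assumed number of DSL tokens
CONTEXT_LEN = 48     # assumed length of the DSL context window

# Vision model: CNN over the GUI screenshot.
image_in = layers.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
x = layers.RepeatVector(CONTEXT_LEN)(x)      # tile image features per time step

# Language model: LSTM over the one-hot encoded DSL context.
context_in = layers.Input(shape=(CONTEXT_LEN, VOCAB_SIZE))
y = layers.LSTM(128, return_sequences=True)(context_in)

# Decoder: merge both streams and predict the next DSL token.
merged = layers.concatenate([x, y])
z = layers.LSTM(128)(merged)
token_out = layers.Dense(VOCAB_SIZE, activation="softmax")(z)

model = models.Model(inputs=[image_in, context_in], outputs=token_out)
model.compile(optimizer="rmsprop", loss="categorical_crossentropy")
model.summary()
```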
Literature
HTML Tree Structure Analysis
- This is the documentation for the pix2code architecture mentioned above.
- This article provides an overview of various web data extraction techniques. Section 2.2 describes a process of extracting a web page's DOM structure, and Section 2.3 includes a supervised extraction process that has some aspects in common with pix2code and other paired-input architectures.
- This approach to web content extraction focuses exclusively on less structured web pages, and classifying text blocks within those pages. In Section 3: Data Collection, an algorithm written in JavaScript is used to inspect DOM elements and organize them by parent element.
- This survey of various web data extraction methods includes a section on tree-based analysis in Chapter 2: Techniques.
- This paper presents a method of content extraction that analyzes the relationships between DOM elements based on a "chars-node ratio" that displays the relationship between text content and tags content in each node of the DOM tree. The authors of this paper implemented this technique in an open-source Firefox plugin.
- The approach delineated in this article parses HTML documents into DOM trees using openXML. Then, a recursive content extractor processes each DOM tree and removes any non-content nodes.
- This paper proposes a Vision-based Page Segmentation (VIPS) algorithm that creates an extracted content structure, with each node of the structure corresponding to a block of coherent content on the page. Although this method focuses on semantically dividing the page, the algorithm uses page layout features to detect the page's structure.
- This article surveys eleven existing web extraction tools that utilize DOM structure to pull out relevant information. Some of these tools rely solely on the DOM tree, while others combine visual features with the DOM tree to extract content.
- This paper articulates a method that uses visual information, rather than DOM tree structure, to extract information from a webpage. Although this is a different method from our proposed DOM-based approach, the page segmentation algorithm in Section 5 (Page Segmentation Algorithm) includes various factors that may be useful in our simplified HTML/DSL interpretation of the DOM tree structure.
DFS Encoding
- This paper discusses different methods of encoding graph structures into low-dimensional embeddings that can be exploited by machine learning models. Section 2.2.2 (Random walk approaches) specifically compares the accuracy of using the random walk approach to traverse a graph, as opposed to BFS and DFS approaches.
- node2vec was mentioned briefly in the above Hamilton et al. paper. node2vec is a scalable encoding algorithm that focuses on preserving network neighborhoods of nodes. The definition of a neighborhood can be manipulated depending on the application context. In Section 3.1 (Classic Search Strategies), node2vec is compared to BFS and DFS approaches.
- V2V (Vector to Vertex) is a learning approach similar to node2vec, except that it takes the random-walk sequence results from graph data and encodes them using a Continuous Bag of Words (CBOW) approach to create V2V vectors.
- This paper explains how to apply the skip-gram model to learning node representations of graph-structured data. This new encoder-decoder model can be trained to generate representations for any arbitrary random walk sequence.
- In a similar process to other neural network encoders, this proposed architecture first generates sequential data from graphs using BFS shortest-path and random-walk algorithms. It then trains LSTM autoencoders to embed these graph sequences into a vector space.
- This article describes another approach to learning graph-level representations, this time through a combination of supervised and unsupervised learning components.
General
- This article describes methods to simplify the content of a noisy HTML page, specifically through using machine learning to predict whether a block is content or non-content. This allows the classifier to remove boilerplate information.
- This article classifies various web data extraction techniques into five different types of tools, plus one category of web-extraction-specific languages. Section 3.2 (HTML-aware Tools) describes several existing tools for parsing HTML tree structures when building wrappers. Section 3.4 (NLP-based Tools) includes several methods of text analysis that may be relevant to this project.
Implementation
This section contains possible implementation libraries and tools for various components of the extractor.
HTML Tree Structure Analysis
- A simple Python library that can parse HTML files into "Beautiful Soup objects," which are essentially tree structure objects. Likely too limited in functionality for automated extraction, but might be useful in developing/testing the DSL.
- Similar to Beautiful Soup, but has more extensive and efficient functionality. Extracts HTML data by creating "selectors" specified by CSS or XPath expressions.
pix2code
- GitHub repo that contains the original reference implementation of the pix2code architecture; see the pix2code paper above. Video demo of the trained neural network.
- An attempt to improve pix2code through the use of autoencoders between the two LSTM layers.
- Another take on pix2code, with a Bootstrap variant that converts web page screenshots to HTML and has the potential to generalize to new design mock-ups.
- pix2code implemented in PyTorch; not yet ready for general usage.
- A project to build an inverse architecture to pix2code, with the objective of creating a GAN (Generative Adversarial Network) to replace pix2code.
DFS Encoding
- GitHub repo that contains the reference implementation of the node2vec algorithm as a Python module; see the node2vec paper above.
- A toolkit containing node2vec implemented in a TensorFlow-based framework.
- Here is an accessible, elementary introduction to node2vec.
- NetworkX is a Python package for loading, visualizing, and processing graph data. It includes built-in functions for DFS traversal and for constructing adjacency and edges-to-vertices matrices; see the sketch after this list.
- Gensim is a Python library used to analyze plain-text documents for semantic structure. Is required for node2vec.
- NumPy is a computing package that includes an N-dimensional array object (useful in encoding) and many other functions to process data. Is required for pix2code.
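As a quick illustration of the NetworkX functions mentioned in the NetworkX item above (the example graph is a made-up stand-in for a DOM tree):

```python
# Illustrative NetworkX calls for DFS traversal and matrix construction.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("html", "head"), ("html", "body"), ("body", "div")])

# Depth-first traversal order from the root node.
print(list(nx.dfs_preorder_nodes(G, source="html")))

# Adjacency matrix (returned sparse; .todense() expands it).
print(nx.adjacency_matrix(G).todense())

# Edges-to-vertices (incidence) matrix: one column per edge.
print(nx.incidence_matrix(G).todense())
```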
DSL Development
- Lucid is a DSL implemented in Haskell for writing HTML. It represents DOM elements as functions, and uses specific notation to differentiate between data elements and code elements.
General
- In conjunction with TensorFlow, Keras will support the deep learning components of the project. Is required for pix2code.
- From the Yao, Zuo paper, this Github repo contains an algorithm for labelling the collected dataset using clustering, training the SVM with the labeled dataset, and using SVM model to extract content from new webpages. Implemented in JavaScript, CoffeeScript, and Python.
- The h5py package can be used to store large amounts of numerical data, and it integrates well with NumPy.
Useful tutorials
- Since we will be using a two-layer LSTM in TensorFlow, this article might be useful.
Proposed Model
- Here is a visualization of the model that we might want to use for our extractor:
[Image: Extractor-Model.png (first diagram of extractor model)]
DSL Encoder
To encode the structure of the DSL scripts, we can try using one-hot vectors, as in the sketch below. More details can be found here and on the DSL Encoding page.
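A minimal sketch of that one-hot scheme; the toy DSL vocabulary and script below are invented for illustration:

```python
# Minimal one-hot encoding of a DSL token sequence with NumPy.
# The vocabulary and the sample script are invented for illustration.
import numpy as np

vocab = ["<START>", "<END>", "header", "row", "single", "double"]
token_to_id = {tok: i for i, tok in enumerate(vocab)}

script = ["<START>", "header", "row", "single", "<END>"]
one_hot = np.zeros((len(script), len(vocab)), dtype=int)
for t, tok in enumerate(script):
    one_hot[t, token_to_id[tok]] = 1

print(one_hot)   # one row per token, with a single 1 at its vocab index
```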