The assembly jar file also includes the [WebGraph](https://webgraph.di.unimi.it/) and [LAW](https://law.di.unimi.it/software.php) packages required to compute [PageRank](https://en.wikipedia.org/wiki/PageRank) and [Harmonic Centrality](https://en.wikipedia.org/wiki/Centrality#Harmonic_centrality).
### Javadocs
The Javadocs are created by `mvn javadoc:javadoc`. Then open the file `target/site/apidocs/index.html` in a browser.
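For example (assuming Maven is installed and the commands are run from the repository root; the open command varies by operating system):

```shell
# Build the Javadocs, then open the entry page.
# xdg-open is the Linux variant; on macOS use `open` instead.
mvn javadoc:javadoc
xdg-open target/site/apidocs/index.html
```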
## Memory and Disk Requirements
Note that the webgraphs are usually multiple gigabytes in size. Processing them requires
- a sufficient Java heap size ([Java option](https://docs.oracle.com/en/java/javase/21/docs/specs/man/java.html#extra-options-for-java) `-Xmx`)
- enough disk space to store the graphs and temporary data.
The exact requirements depend on the graph size and on the task (graph exploration, ranking, etc.).
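As an illustrative sketch, the heap size is raised via `-Xmx` when launching the JVM. The jar name, class, and arguments below are placeholders, not the project's actual entry points:

```shell
# Placeholder invocation: give the JVM 16 GB of heap for graph processing.
# Replace the jar, class, and arguments with the actual tool you are running.
java -Xmx16g -cp cc-webgraph.jar \
     org.commoncrawl.webgraph.explore.GraphExplorer <graph-basename>
```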
## Construction and Ranking of Host- and Domain-Level Web Graphs
```
jshell> sl() // list predecessors (vertices connected via incoming links)
```
## Using the Java Classes
The Java classes `GraphExplorer` and `Graph` bundle a set of methods which help to explore the graphs:
- load the webgraph, its transpose and the vertex map
- access the vertices and their successors or predecessors
- utilities to import or export lists of vertices or counts from or to a file
The methods are bundled in the classes of the Java package `org.commoncrawl.webgraph.explore`. To get an overview of all provided methods, inspect the source code or see the section [Javadocs](README.md#javadocs) in the main README for how to browse the Javadocs. Only a few examples are presented here.
We start again by launching the JShell and loading a webgraph.

First, the vertices in the webgraphs are represented by numbers, so we need to translate between vertex label and ID:
```
jshell> g.vertexLabelToId("org.wikipedia")
$46 ==> 115107569
jshell> g.vertexIdToLabel(115107569)
$47 ==> "org.wikipedia"
```
One important note: Common Crawl's webgraphs list the host or domain names in [reverse domain name notation](https://en.wikipedia.org/wiki/Reverse_domain_name_notation). The vertex lists are sorted by the reversed names in lexicographic order and then numbered consecutively. This gives a close-to-perfect compression of the webgraphs themselves. Most of the arcs are close in terms of locality because subdomains or sites of the same region (by country-code top-level domain) are listed in one contiguous block. Cf. the paper [The WebGraph Framework I: Compression Techniques](https://vigna.di.unimi.it/ftp/papers/WebGraphI.pdf) by Paolo Boldi and Sebastiano Vigna.
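Reversing a host name is plain string manipulation. The following minimal Java sketch (this helper is illustrative, not part of cc-webgraph) shows the mapping; applying it twice restores the original name:

```java
// ReverseDomain.java — illustrative sketch, not part of cc-webgraph:
// convert a host name to the reverse domain name notation used in the
// vertex labels, e.g. "www.wikipedia.org" -> "org.wikipedia.www".
import java.util.Arrays;
import java.util.Collections;

public class ReverseDomain {

    /** Reverses the dot-separated components of a host name. */
    static String reverse(String host) {
        String[] parts = host.split("\\.");
        Collections.reverse(Arrays.asList(parts)); // reverses the backing array in place
        return String.join(".", parts);
    }

    public static void main(String[] args) {
        System.out.println(reverse("www.wikipedia.org")); // org.wikipedia.www
        System.out.println(reverse("org.wikipedia"));     // wikipedia.org
    }
}
```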
Now, let's see how many other domains are linked from Wikipedia:
```
jshell> g.outdegree("org.wikipedia")
$46 ==> 2106338
```
Another note: Common Crawl's webgraphs are based on sample crawls of the web. Like the crawls themselves, the webgraphs are not complete, and Wikipedia may in reality link to far more domains. Still, 2 million linked domains is not a small sample.
The Graph class also gives you access to the successors of a vertex, as an array or stream of integers, but also as a stream of strings (vertex labels).
Technically, webgraphs only store successor lists, but the Graph class holds two graphs: the "original" one and its transpose. In the transposed graph, "successors" are "predecessors" and "outdegree" means "indegree". Some lower-level methods take one of the two webgraphs as an argument; there it makes a difference whether you pass `g.graph` or `g.graphT`, for example to a method which translates vertex IDs to labels and extracts the top-level domain.
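The duality between the two graphs can be illustrated in plain Java, independent of the WebGraph API (a conceptual sketch, not the actual Graph implementation): the predecessors of a vertex are exactly its successors in the transposed graph.

```java
// TransposeDemo.java — conceptual sketch, not the cc-webgraph implementation:
// transposing an adjacency-list graph turns successor lists into
// predecessor lists.
import java.util.ArrayList;
import java.util.List;

public class TransposeDemo {

    /** Builds the transpose: arc (u, v) becomes arc (v, u). */
    static List<List<Integer>> transpose(List<List<Integer>> succ) {
        List<List<Integer>> pred = new ArrayList<>();
        for (int i = 0; i < succ.size(); i++) pred.add(new ArrayList<>());
        for (int u = 0; u < succ.size(); u++)
            for (int v : succ.get(u)) pred.get(v).add(u);
        return pred;
    }

    public static void main(String[] args) {
        // arcs: 0 -> 1, 0 -> 2, 2 -> 1
        List<List<Integer>> g = List.of(List.of(1, 2), List.of(), List.of(1));
        // successors of vertex 1 in the transpose == predecessors of 1
        System.out.println(transpose(g).get(1)); // [0, 2]
    }
}
```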
The same can be done for predecessors using the method `Graph::predecessorTopLevelDomainCounts`.
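Conceptually, counting top-level domains over a label stream boils down to taking the first component of each reversed name and grouping. A plain-Java sketch (not the actual cc-webgraph implementation):

```java
// TldCounts.java — conceptual sketch, not the cc-webgraph implementation:
// count top-level domains in a stream of vertex labels given in reverse
// domain name notation, where the TLD is the first dot-separated component.
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class TldCounts {

    static Map<String, Long> tldCounts(Stream<String> reversedLabels) {
        return reversedLabels
                .map(label -> label.split("\\.", 2)[0]) // "org.wikipedia" -> "org"
                .collect(Collectors.groupingBy(tld -> tld, Collectors.counting()));
    }

    public static void main(String[] args) {
        Map<String, Long> counts =
                tldCounts(Stream.of("org.wikipedia", "org.wikimedia", "com.example"));
        System.out.println(counts.get("org")); // 2
    }
}
```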
Dealing with large successor or predecessor lists can be painful, and viewing them in a terminal window is practically impossible. We have already discussed how to compress such a list into top-level domain counts. Alternatively, you could select the labels by prefix, but even then the list may be huge. In that case the best option is to write the stream output (vertex labels or top-level domain frequencies) into a file and view it later with a file viewer, or process it further with any other tool.
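Writing a label stream to a file needs only standard Java I/O. A small sketch (the helper below is illustrative, not the Graph class's actual export method):

```java
// WriteLabels.java — illustrative sketch, not the cc-webgraph export helper:
// dump a stream of vertex labels to a text file, one label per line.
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class WriteLabels {

    /** Writes each stream element as one line of the output file. */
    static void writeLines(Stream<String> lines, Path out) throws IOException {
        try (PrintWriter w = new PrintWriter(Files.newBufferedWriter(out))) {
            lines.forEach(w::println);
        }
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("labels", ".txt");
        writeLines(Stream.of("org.wikipedia", "org.wikimedia"), out);
        System.out.println(Files.readAllLines(out)); // [org.wikipedia, org.wikimedia]
    }
}
```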
We hope these few examples help you to have fun exploring the graphs, or to develop your own pipeline to extract insights from them.
Finally, thanks to the authors of the [WebGraph framework](https://webgraph.di.unimi.it/) and of [pyWebGraph](https://github.com/mapio/py-web-graph) for their work on these powerful tools and for the inspiration they provided for these examples.