Posts categorized “Billion Triple Challenge”.

Heat-maps of Semantic Web Predicate usage

It’s all Cygri’s fault — he encouraged me to add schema namespaces to the general areas on the semantic web cluster-tree. Once again, I horribly misjudged how long this was going to take. I thought the general idea was simple enough, and I already had the data; one hour should do it. Now, one full day later, I have:

FOAF Predicates on the Semantic Web

It’s the same map as last time, laid out using graphviz’s neato as before. The heat-map of the properties was computed from the feature-vector of predicate counts. First I mapped all predicates to their “namespace”, by the slightly-dodgy-but-good-enough heuristic of taking the part of the URI before the last # or / character. Then I split the map into a grid of NxN points (I think I used N=30 in the end) and computed a new feature vector for each point. This vector is the sum of the mapped vectors of all the domains, each divided by the distance. I.e. (if you prefer math) each point’s vector becomes:

\displaystyle V_{x,y}= \sum_d\frac{V_d}{\sqrt{D( (x,y),  pos_d)}}

Where D is the distance (here simple 2D euclidean), d ranges over the domains, pos_d is that domain’s position in the figure and V_d is that domain’s feature vector. Normally it would be more natural to decrease the effect by the squared distance, but this gave less attractive results, and I ended up square-rooting it instead. The color is now simply one column of the resulting matrix, normalised and mapped to a nice pylab colormap.
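The weighting above can be sketched in a few lines of numpy. This is an illustrative reconstruction, not the actual (messy) script; the domain positions and feature vectors here are invented placeholders:

```python
import numpy as np

def heatmap(positions, vectors, n=30):
    """For each point in an n x n grid, sum the domains' feature
    vectors, each weighted by 1/sqrt(distance to the point)."""
    grid = np.zeros((n, n, vectors.shape[1]))
    for x in range(n):
        for y in range(n):
            # plain 2D euclidean distance from this grid point to every domain
            dist = np.sqrt(((positions - (x, y)) ** 2).sum(axis=1))
            w = 1.0 / np.sqrt(dist + 1e-9)  # + eps: a grid point may sit on a domain
            grid[x, y] = (vectors * w[:, None]).sum(axis=0)
    return grid

# two invented domains on a 30x30 canvas, three predicate features each
pos = np.array([[5.0, 5.0], [25.0, 20.0]])
vec = np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.2]])
hm = heatmap(pos, vec)
# one column of the result, normalised, is what gets fed to the colormap
col = hm[:, :, 0] / hm[:, :, 0].max()
```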

Now this was the fun and interesting part, and it took maybe 1 hour. As predicted. NOW, getting this plotted along with the nodes from the graph turned out to be a nightmare. Neato gave me the coordinates for the nodes, but would change them slightly when rendering to PNGs. Many hours of frustration later I ended up drawing all of it again with pylab, which worked really well. I would publish the code for this, but it’s so messy it makes grown men cry.

NOW I am off to analyse the result of the top-level domain interlinking on the billion triple data. The data-collection just finished running while I did this. … As he said.

Visualising predicate usage on the Semantic Web

So, not quite a billion triple challenge post, but the data is the same. I had the idea that I could compare the Pay-Level-Domains (PLDs) of the contexts of the triples based on which predicates are used within each one. Then, once I had the distance-metric, I could use FastMap to visualise it. It would be a quick hack, it would look smooth and great and be fun. In the end, many hours later, it wasn’t quick, the visual is not smooth (i.e. it doesn’t move) and I don’t know if it looks so great. It was fun though. Just go there and look at it:

PayLevelDomains cluster-tree

As you can see it’s a large PNG with the new-and-exciting ImageMap technology used to position the info-popups, or rather to activate the JavaScript used for the popups. I tried at first with SVG, but I couldn’t get SVG, XHTML and JavaScript to play along; I guess in Firefox 5 it will work. The graph was laid out and generated by Graphviz‘s neato, which also generated the imagemap.

So what do we actually see here? In short, a tree where domains that publish similar Semantic Web data are close to each other in the tree and have similar colours. In detail: I took all the PLDs that contained over 1,000 triples (around 7,500 of them) and counted the number of triples for each of the 500 most frequent predicates in the dataset. (These 500 predicates cover ≈94% of the data.) This gave me a vector-space with 500 features for each of the PLDs, i.e. something like this:

geonames:nearbyFeature   dbprop:redirect   foaf:knows
0.01                     0.8               0.1
0                        0                 0.9
0.75                     0                 0.1

Each value is the fraction of triples from that PLD that used that predicate. In this vector space I used the cosine-similarity to compute a distance matrix for all PLDs. With this distance matrix I thought I could apply FastMap, but it worked really badly and looked like this:

Fastmapping the PLDs
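The cosine step itself is tiny; a numpy sketch, using the toy values from the example vector-space above:

```python
import numpy as np

def cosine_distance_matrix(X):
    """1 - cosine similarity between every pair of row vectors."""
    unit = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - unit @ unit.T

# rows: PLDs; columns: fraction of triples using each predicate
plds = np.array([[0.01, 0.8, 0.1],
                 [0.0,  0.0, 0.9],
                 [0.75, 0.0, 0.1]])
D = cosine_distance_matrix(plds)  # symmetric, with (near-)zero diagonal
```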

So instead of FastMap I used maketree from the complearn tools. It generates trees from a distance matrix and gives very good results, but it is an iterative optimisation and takes forever on large instances. Around this time I realised I wasn’t going to be able to visualise all 7,500 PLDs, and cut it down to the 2000, 1000, 500, 100, 50 largest PLDs. Now this worked fine, but the result looked like a bog-standard graphviz graph, and it wasn’t very exciting (i.e. not at all like this colourful thing). Then I realised that since I actually had numeric feature vectors in the first place, I wasn’t restricted to using FastMap to make up coordinates, so I used PCA to map the input vector-space to a 3-dimensional space, normalised the values to [0;255] and used these as RGB values for colour. Ah – lovely pastel.
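A sketch of the PCA-to-RGB trick (here via numpy’s SVD; the input is random stand-in data, not the real 500-dimensional PLD vectors):

```python
import numpy as np

def pca_rgb(X):
    """Project rows of X onto their top 3 principal components,
    then rescale each component to [0, 255] for use as RGB."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    coords = Xc @ Vt[:3].T                      # first three PCs
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    return ((coords - lo) / (hi - lo + 1e-12) * 255).astype(int)

# random stand-in for the real PLD feature vectors: 10 PLDs, 5 predicates
X = np.random.RandomState(0).rand(10, 5)
colours = pca_rgb(X)                            # one RGB triple per PLD
```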

I think I underestimated the time this would take by at least a factor of 20. Oh well. Time for lunch.

The Subject Matter (or it’s a scam – there are only 900M!)

This is the next part of the BTC statistics; this time I look at the subjects of the triples. Oh my, isn’t it exciting. Actually, I’ve had all the numbers for this ready for a while, but holidays and real work have kept me from typing it up. So, BTC overall contains:

  • 128,079,322 unique subjects
  • 118,205,618 have more than a single triple
  • 19,037,202 more than 10
  • 1,302,353 more than 100
  • 25,741 more than 1000
  • 223 more than 10000

Out of these 128M subjects, 59,423,933 are blank nodes. Only 17,089 of them are file:// URIs; I really expected many more to have snuck in. At first sight it may seem very odd that so many subjects have more than 1000 triples — what could those possibly be? However, looking at the 10 subjects with the most triples makes it clear:

triples subject
138,618 swrc:InProceedings
195,167 dctype:Text
209,623 foaf:Document
362,161 foaf:holdsAccount

Most of these are parts of schemas, i.e. properties or classes (perhaps all of them? I don’t know enough about CYC usage to say). Looking at the data, out of the hundreds of thousands of triples about foaf:holdsAccount, for instance, 180,552 are:

foaf:holdsAccount rdf:type rdfs:Property .

And 180,390 are the triple:

foaf:holdsAccount rdf:type owl:InverseFunctionalProperty .

Of course, each of these is in a different context. At first I thought this meant that someone was keeping hundreds of thousands of copies of the FOAF ontology around, but then all the other FOAF properties and classes would also be the subject of lots of triples. Looking at the contexts these triples came from, there are 180,574 contexts containing the first triple, and 180,389 of them are from Kanzaki’s flickr2foaf script (the remaining are 150 variations on and 30-odd random contexts). However, the output from flickr2foaf does not include the schema information; it only uses foaf:holdsAccount (and many foaf:OnlineAccount instances). My guess as to what happened: someone crawled this. Each profile, such as mine, will contain rdfs:seeAlso links to all my flickr contacts, and each of those pages will use foaf:holdsAccount. The crawler then applied some sort of inference that materialised the triples above, adding them once for each context they appeared in. This inference cannot be basic RDFS inference, since it also adds owl:InverseFunctionalProperty, and it has not been applied to all the BTC data, only to some contexts. I wonder if there is a way to recover which contexts it has been applied to, and then perhaps to find out which triples are redundant, i.e. could be re-inferred from the other triples?

Now, all these triples about foaf:holdsAccount and CYC concepts also tell us something else: this isn’t really the Billion Triple Challenge. Since many of the triples are duplicates, it is the Billion Quad Challenge, which I guess is not so catchy. A few more CPU cycles spent on piping things through sort and uniq (my favourite activity!) and I know that out of the original 1,151,383,508 quads, there are actually only 1,150,846,965 unique quads, i.e. about 500K duplicates, and, more interestingly, there are only 906,166,056 unique triples, i.e. 245M duplicates. I guess it’s not the Billion Triple Challenge either :) — now with only 900M triples it should be easy!
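In miniature, the dedup count is just this (toy quads standing in for the real sort/uniq job over the full dump):

```python
# each quad is (subject, predicate, object, context)
quads = [
    ("s1", "p1", "o1", "c1"),
    ("s1", "p1", "o1", "c2"),   # same triple, different context
    ("s1", "p1", "o1", "c2"),   # exact duplicate quad
]
unique_quads = set(quads)                 # drops exact duplicates
unique_triples = {q[:3] for q in quads}   # drops the context, then dedups
# here: 3 quads, 2 unique quads, 1 unique triple
```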

(BTW: No graphs this time, sorry! Also — I know I said I would talk about the literal values this time, but I changed my mind, next time!)


Gianluca Demartini asked an interesting question: why are nearly half the subjects blank nodes? I don’t really know, but I can speculate. 46% of the subject IDs are blank nodes, and these account for ≈30% of the triples in the dataset. I was hoping these 30% would be badly distributed, i.e. that there were a few blank nodes with lots and lots of triples, but alas, the blank-node/triple distribution breaks down like this:

  • 57,457,905 – over 1
  • 1,931,363 – over 10
  • 189,487 – over 100
  • 3,901 – over 1000
  • 50 – over 10000

You need to include the 43,916,862 largest bnode descriptions to cover 90% of these triples, i.e. we cannot quickly ignore the biggest ones and move on with our lives. I won’t give you the top N bnodes, since these are more or less randomly generated IDs, but looking at some of the “largest” bnodes they all look like sitemap files that have been converted to RDF — for example, the largest blank node is _:genid1http-3A-2F-2Fwww-2Eindexedvisuals-2Ecom-2Findexedvisuals-2Exml, which appears to be an RDF version of the sitemap for www.indexedvisuals.com.

Now, this bnode alone is the subject of 32,984 triples, and all of these apart from one are triples with another bnode as the object. I guess this is the case for many of the largest bnodes, and probably for many of those object nodes in turn. (Although a highly scientific grep for bnode IDs that contain “sitemap” returns only about 100K cases — a better count is underway.)

So in conclusion — bah! Who knows? Who needs bnodes anyway? :)


I did a proper count of how many of the blank nodes are sitemap nodes like the indexedvisuals one above, and it’s only 27! :) There goes that theory. These 27 do account for 71,985 triples with the 0.84url predicate, but this is still a tiny amount of the data. In the next post we will also see that a huge percentage of these bnodes have proper types, giving additional evidence that they are genuine, interesting parts of the data, not just some weird artifact.

Billions and billions and billions (on a map)

Time for a few more BTC statistics, this time looking at the contexts. The BTC data comes from 50,207,171 different URLs, out of these:

  • 35,423,929 yielded more than a single triple
  • 10,278,663 yielded more than 10 triples, covering 85% of the full data
  • 1,574,458 more than 100, covering 63%
  • 133,369 more than 1000, covering 30%
  • 3,759 more than 10000, covering 7%

The biggest contexts were as follows:

triples context

It’s pretty cool that someone crawled 7 million triples with Aperture and put it online :) – the link is 404 now though, so you can’t easily check what it was. Also, none of the huge dbpedia pages seem to give any info; I am not quite sure what is going on there. Perhaps some encoding trouble somewhere?

As the official BTC statistics page already shows, it is more interesting when you group the contexts by host. Computing the same Pay-Level-Domains as they did, I get the hosts contributing the most triples as:

triples context

Again, this is computed from the whole dataset, not just a subset, but interestingly it differs quite a lot from the “official” statistics; in fact, I’ve “lost” over 100M triples from dbpedia. I am not sure why this happens. A handful of context URLs were so strange that python’s urlparse module didn’t produce a hostname, but they only account for about 100,000 triples. Summing over the hosts I did find, I get the right number of triples (i.e. one billion :). So unless there is something fundamentally wrong with the way I find the PLD, I am almost forced to conclude that the official stats are WRONG!
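For the curious, the hostname step looked roughly like this. Note this sketch uses the modern urllib.parse rather than the Python 2 urlparse module I actually ran, and the two-label cut is a naive stand-in for proper PLD extraction (which needs the public-suffix list):

```python
from urllib.parse import urlparse

def pay_level_domain(url):
    """Hostname cut down to its last two labels. Naive: a correct
    version needs the public-suffix list (bbc.co.uk would wrongly
    come out as co.uk here)."""
    host = urlparse(url).hostname
    if not host:
        return None   # the strange URLs where no hostname is found
    return ".".join(host.split(".")[-2:])

pay_level_domain("http://data.example.org/resource/x")   # 'example.org'
```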

UPDATE: The official numbers must be wrong, because if you sum them all you get 1,504,548,700, i.e. over 1.5 billion triples for just the top 50 domains alone. This cannot be true, since the actual number of triples is “just” 1,151,383,508.

More fun than the table above is using the hostip database to geocode the IPs of these servers and put them on a map. Now, the hostip database is not perfect; in fact, it’s pretty poor, and some hosts with A LOT of triples are missing (such as …). I could perhaps have used the country codes of the URLs as a fall-back solution, but I was too lazy.

Now for drawing the map I thought I could use Many Eyes, but it turned out not to be as easy as I imagined. After uploading the dataset I found that although Many Eyes has a map visualisation, it does not use lat/lon coordinates but relies instead on country names. Here is what it would have looked like if done by lat/lon; you have to imagine the world map though:

Trying again, I used the hostip database to get the country of each host, added up the numbers for each country (Many Eyes does not do any aggregation) and uploaded a triples-by-country dataset. This I could visualise on a map, shading each country according to the number of triples, but it’s kinda boring:

Giving up on Many Eyes, I tried the Google Visualisation API instead. Surely they would have a smooth zoomable map visualisation? Not quite. They have a map, but it’s flash-based, only supports “zooming” into pre-defined regions and does a complete reload when changing region. Also, it only supports 400 data points. All the data is embedded in the JavaScript though. I couldn’t get it to embed here, so click:


Now, I am sure I could hack something together that would use proper Google Maps and would actually let you zoom nicely, etc., BUT I think I’ve really spent enough time on this now.

Keep your eyes peeled for the next episode where we find out why the semantic web has more triples of length 19 than any other.

BTC Statistics I

As I said, I wanted to try looking into the billion triple challenge data using unix command-line tools. The ISWC deadline set me back a bit, but now I’ve got it going.

First step was to get rid of those pesky literals, as they contain all sorts of crazy characters that make my lazy parsing tricky. A bit of python later and I converted:

<> <> "Edd Dumbill" <> .
<> <> "edd" <> .
<> <> "Henry Story" <> .


<> <> "000_1" <> .
<> <> "000_2" <> .
<> <> "000_3" <> .

i.e. each literal was replaced with chunknumber_literalnumber, and the actual literals were stored in another file. Now it was open for simply splitting the files by space and using cut, awk, sed, sort, uniq, etc. to do everything I wanted. (At least, that’s what I thought; as it turned out the initial data contained URIs with spaces, and my “parsing” broke … then I fixed it by replacing > < with >\t<, used tab as the field delimiter, and I was laughing. The data has now been fixed, but I kept my original since I was too lazy to download 17GB again.)
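A rough reconstruction of the literal-stripping step (the regex and the chunk numbering scheme are my best guess, made to match the example output above):

```python
import re

# matches an N-Triples/N-Quads literal, including escaped quotes
LITERAL = re.compile(r'"(?:[^"\\]|\\.)*"')

def strip_literals(lines, chunk):
    """Replace each literal with "chunknumber_literalnumber" and
    collect the originals for the separate literals file."""
    literals, out = [], []
    for line in lines:
        def repl(m):
            literals.append(m.group(0))
            return '"%03d_%d"' % (chunk, len(literals))
        out.append(LITERAL.sub(repl, line))
    return out, literals

out, lits = strip_literals(['<s> <p> "Edd Dumbill" <c> .'], 0)
# out is now ['<s> <p> "000_1" <c> .'] and lits is ['"Edd Dumbill"']
```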

So, now I’ve computed a few random statistics, nothing amazingly interesting yet. I’ll put out a bit here at a time; today: THE PREDICATES!

The full data set contains 136,188 unique predicates. Of these:

  • 112,966 occur more than once
  • 62,937 more than 10 times
  • 24,125 more than 100
  • 8,178 more than 1000
  • 2,045 more than 10000

623 of them have URIs starting with <file://> – they will certainly be very useful for the semantic web.

Note that although 136k different predicates seems like a great deal, many of them are hardly used at all; in fact, if you only look at the top 10,000 most used predicates, you still cover 92% of the triples.

As also mentioned on the official BTC stats page, the most used predicates are:

triples predicate
143,293,758 rdf:type
53,869,968 rdfs:seeAlso
35,811,115 foaf:knows
32,895,374 foaf:nick
23,266,469 foaf:weblog
22,326,441 dc:title
19,565,730 akt:has-author
19,157,120 sioc:links_to
18,257,337 skos:subject

Note that these are computed from the whole corpus, not just a sample, and for instance for the top property there is a difference of a massive 13,139. That means the official stats are off by almost 0.01%! I don’t know how we can work under these conditions…

Moving on, I assigned each predicate to a namespace. I did this by matching them with the list at, and if the URI didn’t start with any of those I made the namespace the URI up to the last # or /, whichever appeared later. The most used namespaces were:

triples namespace
244,854,345 foaf
224,325,132 dbpprop
167,911,029 rdf
80,721,580 rdfs
64,313,022 akt
63,850,346 geonames
58,675,733 dc
44,572,003 rss
31,502,395 sioc
21,156,972 skos
14,801,992 geo
9,812,367 content
8,623,124 owl
6,813,536 xhtml
5,443,549 nie
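The up-to-the-last-#-or-/ heuristic in code (the known-prefix argument stands in for the list mentioned above):

```python
def namespace(uri, known=()):
    """Map a predicate URI to a namespace: use a known prefix if one
    matches, otherwise cut at the last '#' or '/', whichever is later."""
    for prefix in known:
        if uri.startswith(prefix):
            return prefix
    cut = max(uri.rfind("#"), uri.rfind("/"))
    return uri[:cut + 1] if cut >= 0 else uri

namespace("http://xmlns.com/foaf/0.1/knows")   # 'http://xmlns.com/foaf/0.1/'
namespace("http://example.org/ns#name")        # 'http://example.org/ns#'
```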

I included the top 15, since number 15 is the NEPOMUK Information Element Ontology, and I found it funny that it is used so widely. Another funny thing is that RDFS is used more than 10x as much as OWL (even ignoring the RDF namespace, which defines things like rdf:Property, also used by schemas). I tried to plot this data as well, since Knud pointed out that you need a nice long-tail graph these days. However, for both predicates and namespaces there is a (relatively) huge number of things that occur only once or twice; if you plot a histogram these dominate the whole graph, even with a logarithmic Y axis. In the end I plotted the run length encoding of the data, i.e. how many namespaces occur once, twice, three times, etc.:

Here the X axis shows the number of occurrences and the Y axis shows how many things occur that often. I.e. the top-left point is all the random noise that occurs once, such as file:/cygdrive/c/WINDOWS/Desktop/rdf.n3, file:/tmp/filem8INvE and other useful URLs. The bottom two right points are foaf and dbprop.
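The “run length encoding” here is just a frequency-of-frequencies count; a minimal sketch with invented counts:

```python
from collections import Counter

# invented per-namespace usage counts standing in for the real data
usage = [1, 1, 1, 2, 2, 3, 100, 100000]
freq_of_freq = Counter(usage)
# x: how many times a namespace occurs; y: how many namespaces occur that often
points = sorted(freq_of_freq.items())
# a log-log plot of these points is what the figure shows
```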

I don’t know about the graph – I have a feeling it lies somehow, in a way a histogram doesn’t. But I don’t know. Anyone?

Anyway – most things of the BTC I have plotted have a similarly shaped frequency distribution: the plain predicate frequencies and the subject/object frequencies all look the same. The literals are more interesting; if I have the time I’ll write them up tomorrow. Still, it’s all pretty boring – I hope to detect duplicate triples from different sources once I’m done with this. I expect to find at least 10 copies of the FOAF schema.