Posts categorized “Visualisation”.

Voting in the Eurovision Song Contest

Yesterday I was made to watch the Eurovision song contest. I went to bed before the voting ended, so I woke up to find that Azerbaijan had won. Which was curious, since they were awful (or perhaps not, since this is Eurovision).

At the official webpage you can get the voting breakdown, where we can see that all the countries I have lived in (Norway, the UK and Germany) gave Azerbaijan 0 points. Clearly Eurovision has been ruined by all these new Eastern-bloc countries, who in a giant conspiracy only vote for each other, leaving us western countries with real musical talent without a chance. To confirm my suspicion I grabbed the result table, python, scipy and matplotlib: compute the correlation matrix for the columns, run PCA on this and plot the first two components (if all that meant nothing to you, the result is that countries who tend to distribute their votes similarly end up close to each other in the diagram):
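In case you want to play along, the whole recipe is only a few lines. Here is a rough sketch of it – not the exact script I used, and countries.txt and votes.csv are made-up names for the exported breakdown:

import numpy as np
import matplotlib.pyplot as plt

# assumed inputs: one country name per line, and a matrix of the points
# each column-country gave to each row-country
countries = [l.strip() for l in open("countries.txt")]
votes = np.loadtxt("votes.csv", delimiter=",")

# correlation between columns: how similarly do two countries hand out points?
corr = np.corrcoef(votes, rowvar=False)

# plain PCA on the correlation matrix, keeping the first two components
centred = corr - corr.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
xy = centred @ Vt[:2].T

plt.scatter(xy[:, 0], xy[:, 1])
for (x, y), name in zip(xy, countries):
    plt.annotate(name, (x, y), fontsize=8)
plt.show()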

Here the truths are clear as day – there is a Scandinavian conspiracy, Norway and Sweden are really the same country, and Denmark is almost the same. Greece and Cyprus are one country (ahem – sorry Turkey, you are not far away). The Eastern-bloc cabal is in the lower right. Malta is nothing like anyone else, it's almost like it's an island…

I think the only solution is to go back to the 1960 version of Eurovision, the first year that all countries that matter took part.

Seriously though – it's fun to see how close this is to the actual geography of Europe: rotate the map a bit and Scandinavia, the UK and Italy are all in the right place.

PS: this is also pretty funny, but it seems someone takes this more seriously than perhaps they should: Eurovision Voting Fraud

Creating animations with graphviz

Here is a really pointless hack – Joern Hees asked me if I knew of any tools to force-layout and visualise RDF graphs. We wondered about graphviz, but he wanted an interactive tool. When he left I wondered if it wouldn't be quite easy to at least make an animation with graphviz. Of course it took longer than the 10 minutes I expected, but it sort of worked. Based on the graphviz siblings example graph:

# get example
wget http://www.graphviz.org/Gallery/directed/siblings.gv.txt

# create the initial random layout
neato -Gstart=rand -Gmaxiter=1 -o i.dot siblings.gv.txt

# create 200 pngs for each iteration 
for x in $(seq 200) ; do neato -Gmaxiter=$x -Tpng -o $(printf "%03d" $x).png i.dot ; done

# resize so they are all the same size - graphviz sizing (-Gsize=4,4) is specified in inches and does not always produce PNGs of the same size.
for f in *.png ; do convert $f -resize 500x500! out.png ; mv out.png $f ; done

# make movie
mencoder mf://*.png -mf w=450:h=500:fps=10:type=png -ovc lavc -lavcopts vcodec=mpeg4:mbd=2:trell -oac copy -o output.avi

# upload to youtube
# profit!

(Of course there are many other tools that are much better than this – this is really a “because I can” case.)

Schema usage in the BTC2010 data

A little while back I spent about 1 CPU week computing which hosts use which namespaces in the BTC2010 data, i.e. I computed a matrix with hosts as rows, schemas as columns and in each cell the number of triples using that namespace published by that host. My plan was to use this to create a co-occurrence matrix for schemas, and then use that to compute similarities for hierarchical clustering. And I did. And it was not very amazing. Like Ed Summers' neat LOD graph I wanted to use Protovis to make it pretty. Then, after making each version uglier than the last, I realised that just looking at the clustering tree as a javascript data structure was just as useful, and gave up on the whole clustering thing.
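For the record, the not-very-amazing clustering step was roughly the following – a sketch, not the original code; the file names are made up and the host × schema counts are assumed to be saved from the counting job:

import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree
from scipy.spatial.distance import pdist

# assumed inputs: the host x schema count matrix and the schema labels
counts = np.load("host_schema_counts.npy")
schemas = [l.strip() for l in open("schemas.txt")]

# schema co-occurrence: in how many hosts do schema i and schema j appear together?
used = (counts > 0).astype(int)
cooc = used.T @ used

# cosine distance between co-occurrence profiles, then average-linkage clustering
dist = pdist(cooc, metric="cosine")
tree = to_tree(linkage(dist, method="average"))

def to_nested_lists(node):
    # the "clustering tree as a javascript datastructure" view, more or less
    if node.is_leaf():
        return schemas[node.id]
    return [to_nested_lists(node.left), to_nested_lists(node.right)]

print(to_nested_lists(tree))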

Not wanting the spent CPU hours to go to waste, I instead coded up a direct view of the original matrix, and getting a bit carried away I made a crappy non-animated, non-smooth version of Moritz Stefaner's elastic lists using jquery-ui's tablesorter plugin.

At http://gromgull.net/2010/10/btc/explore.html you can see the result. Clicking on a namespace will show only hosts publishing triples using this schema, and only schemas that co-occur with the one you picked. Conversely, clicking on a host will show the namespaces published by that host, and only hosts that use the same schemas (this makes less intuitive sense for hosts than for namespaces). You even get a little protovis histogram of the distribution of hosts/namespaces!

The usual caveats for the BTC data apply, i.e. this is a random sampling of parts of the semantic web, it doesn't really mean anything :)

Illustrating the kernel trick

For a one-paragraph intro to SVMs and the kernel trick I wanted a graphic that I've seen in a book (although I've forgotten where, perhaps in Pattern Classification?):

Simple idea — show some 2D data points that are not linearly separable, then transform them to 3D somehow, and show that there they are. I found nothing on google (at least nothing with high enough resolution to reuse), so I wrote some lines of python with pylab and matplotlib:

import math
import pylab
import scipy

def vlen(v):
    # euclidean length of a vector
    return math.sqrt(scipy.vdot(v, v))

# 100 random 2D points
p = scipy.randn(100, 2)

# a ring of points (blue) around a central blob (red) - not linearly separable in 2D
a = scipy.array([x for x in p if vlen(x) > 1.3 and vlen(x) < 2])
b = scipy.array([x for x in p if vlen(x) < 0.8])

pylab.scatter(a[:,0], a[:,1], s=30, c="blue")
pylab.scatter(b[:,0], b[:,1], s=50, c="red", marker='s')

pylab.savefig("linear.png")

fig = pylab.figure()
from mpl_toolkits.mplot3d import Axes3D
ax = Axes3D(fig)
ax.view_init(30, -110)

# add the vector length as a third dimension - now a plane separates the two classes
ax.scatter3D([vlen(x) for x in a], a[:,0], a[:,1], s=30, c="blue")
ax.scatter3D([vlen(x) for x in b], b[:,0], b[:,1], s=50, marker="s", c="red")

pylab.savefig("transformed.png")

pylab.show()

Take — adapt — use for anything you like. You can rotate the 3D plot in the window that is shown, and you can save the figures as PDF etc. Unfortunately, the sizing of markers in the 3D plot is not yet implemented in the latest matplotlib release (0.99.1.2-3), so this only looks good with the latest SVN build.

Semantic Web Clusterball

From the I-will-never-actually-finish-this department I bring you the Semantic Web Cluster-ball:

Semantic Web Clusterball

I started this as part of the Billion Triple Challenge work; it shows how different sites on the Semantic Web are linked together. The whole thing is an interactive SVG. I could not get it to embed here, so click on that image, mouse over things and be amazed. Clicking on the different predicates in the SVG will toggle showing that predicate, and mousing over any link will show how many links are currently being shown. (NOTE: Only really tested in Firefox 3.5.X, although it looked roughly ok in Chrome.)

The data is extracted from the BTC triples by computing the Pay-Level-Domain (PLD – essentially the registered domain, e.g. dbpedia.org rather than just .org, with special rules for .co.uk domains and similar) for the subjects and objects and, if they differ, counting the predicates that link them. I.e. a triple:

dbpedia:Albert_Einstein rdf:type foaf:Person.

would count as a link between http://dbpedia.org and http://xmlns.com for the rdf:type predicate. Counting all links like this gives us the top cross-domain linking predicates:

predicate links
http://www.w3.org/1999/02/22-rdf-syntax-ns#type 60,813,659
http://www.w3.org/2000/01/rdf-schema#seeAlso 16,698,110
http://www.w3.org/2002/07/owl#sameAs 4,872,501
http://xmlns.com/foaf/0.1/weblog 4,627,271
http://www.aktors.org/ontology/portal#has-date 3,873,224
http://xmlns.com/foaf/0.1/page 3,273,613
http://dbpedia.org/property/hasPhotoCollection 2,556,532
http://xmlns.com/foaf/0.1/img 2,012,761
http://xmlns.com/foaf/0.1/depiction 1,556,066
http://www.geonames.org/ontology#wikipediaArticle 735,145

Most frequent is of course rdf:type, since most schemas live on different domains from the data, and most things have a type. The ball linked above excludes rdf:type, since it's not really a link. You can also see a version including rdf:type. The rest of the properties are more link-like; I am not sure what is going on with akt:has-date though – anyone?
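The counting behind the table is nothing fancy; roughly something like this (a sketch only – it assumes tab-separated quads and fakes the PLD rules by just keeping the last two labels of the hostname):

from collections import Counter
from urllib.parse import urlparse

def pld(uri):
    # stand-in for real Pay-Level-Domain rules: just keep the last two
    # labels of the hostname, so no special-casing of .co.uk and friends
    host = urlparse(uri.strip("<>")).netloc
    return ".".join(host.split(".")[-2:])

links = Counter()
with open("btc.nq") as quads:                 # assumed: tab-separated N-Quads
    for line in quads:
        s, p, o = line.split("\t")[:3]
        if not o.startswith("<"):             # only URI objects can be links
            continue
        if pld(s) != pld(o):
            links[p] += 1

for predicate, count in links.most_common(10):
    print(count, predicate)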

The visualisation idea is of course not mine, mainly I stole it from Chris Harrison: Wikipedia Clusterball. His is nicer since he has core nodes inside the ball. He points out that the "clustering" of nodes along the edge is important, as it brings out the structure of whatever is being mapped. My "clustering" method was very simple: I swap each node with the one giving me the largest decrease in edge distance, then repeat until the solution no longer improves. I couple this with a handful of random restarts and take the best solution. It's essentially greedy hill-climbing, and I am sure it's far from optimal, but it does at least do something. For comparison, here is the ball above without the clustering applied.
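In code, the greedy swapping is roughly the following (a sketch: it accepts any improving swap rather than always the single best one, and the edge list format is made up, but the idea is the same):

import random

def circle_dist(i, j, n):
    # shortest way between two slots on a circle with n positions
    d = abs(i - j)
    return min(d, n - d)

def cost(order, edges):
    # total weighted edge length for a given ordering around the circle
    pos = {node: i for i, node in enumerate(order)}
    n = len(order)
    return sum(w * circle_dist(pos[a], pos[b], n) for a, b, w in edges)

def greedy_layout(nodes, edges, restarts=5):
    best_order, best_cost = None, float("inf")
    for _ in range(restarts):                     # a handful of random restarts
        order = list(nodes)
        random.shuffle(order)
        current = cost(order, edges)
        improved = True
        while improved:                           # keep swapping while it helps
            improved = False
            for i in range(len(order)):
                for j in range(i + 1, len(order)):
                    order[i], order[j] = order[j], order[i]
                    new = cost(order, edges)
                    if new < current:
                        current, improved = new, True
                    else:
                        order[i], order[j] = order[j], order[i]   # undo the swap
        if current < best_cost:
            best_order, best_cost = list(order), current
    return best_order

# nodes: the host names; edges: (host_a, host_b, link_count) triples from the counts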

The whole thing was of course hacked up in python; the javascript for the mouse-overs etc. in the SVG uses prototype. I wanted to share the code, but it's a horrible mess and I'd rather not spend the time to clean it up. If you want it, drop me a line – or see the update below. The data used to generate this is available either as a download: data.txt.gz (19MB, 10,000 host-pairs and the top 500 predicates), or as a subset on Many Eyes (2,500 host-pairs and the top 100 predicates – uploading 19MB of data to Many Eyes crashed my Firefox :)

UPDATE: Richard Stirling asked for the code, so I spent 30 minutes cleaning it up a bit; grab it here: swball_code.tar.gz. It includes the data and code needed to recreate the example above.

Heat-maps of Semantic Web Predicate usage

It's all Cygri's fault — he encouraged me to add schema namespaces to the general areas on the semantic web cluster-tree. Once again I misjudged horribly how long this was going to take. I thought the general idea was simple enough, and I already had the data. One hour should do it. Now, one full day later, I have:

FOAF Predicates on the Semantic Web

It's the same map as last time, laid out using graphviz's neato as before. The heat-map of the properties was computed from the feature vectors of predicate counts: first I mapped all predicates to their "namespace", by the slightly-dodgy-but-good-enough heuristic of taking the part of the URI before the last # or / character. Then I split the map into a grid of NxN points (I think I used N=30 in the end), and computed a new feature vector for each point. This vector is the sum of the mapped vectors of all the domains, each divided by the distance to that domain. I.e. (if you prefer math) each point's vector becomes:

\displaystyle V_{x,y} = \sum_d \frac{V_d}{\sqrt{D((x,y), \mathrm{pos}_d)}}

Where D is the distance (here simple 2D euclidean), d ranges over the domains, pos_d is that domain's position in the figure and V_d is that domain's feature vector. Normally it would be more natural to decrease the effect by the squared distance, but this gave less attractive results, so I ended up square-rooting it instead. The colour is then simply one column of the resulting matrix, normalised and mapped to a nice pylab colormap.
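In code, the grid step looks roughly like this (a sketch; the domain positions and namespace vectors are assumed to be loaded already, and the [0, 1] coordinate range is an assumption):

import numpy as np
import matplotlib.pyplot as plt

def heat_grid(positions, vectors, column, n=30):
    # positions: domain -> (x, y) from the neato layout, assumed scaled to [0, 1]
    # vectors:   domain -> namespace count vector (all the same length)
    # returns an n x n layer for one namespace column, normalised to [0, 1]
    dims = len(next(iter(vectors.values())))
    grid = np.zeros((n, n, dims))
    xs = np.linspace(0, 1, n)
    ys = np.linspace(0, 1, n)
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            for d, (dx, dy) in positions.items():
                dist = np.hypot(x - dx, y - dy) + 1e-6     # avoid dividing by zero
                # decay by the square root of the distance, as in the formula above
                grid[j, i] += np.asarray(vectors[d]) / np.sqrt(dist)
    layer = grid[:, :, column]
    return (layer - layer.min()) / (layer.max() - layer.min())

# plt.imshow(heat_grid(positions, vectors, column=0), origin="lower", cmap="hot")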

Now this was the fun and interesting part, and it took maybe 1 hour. As predicted. NOW, getting this plotted along with the nodes from the graph turned out to be a nightmare. Neato gave me the coordinates for the nodes, but would change them slightly when rendering to PNGs. Many hours of frustration later I ended up drawing all of it again with pylab, which worked really well. I would publish the code for this, but it’s so messy it makes grown men cry.

NOW I am off to analyse the results of the top-level-domain interlinking on the billion triple data. The data collection just finished running while I did this. … As he said.

Visualising predicate usage on the Semantic Web

So, not quite a billion triple challenge post, but the data is the same. I had the idea of comparing the Pay-Level-Domains (PLDs) of the contexts of the triples based on which predicates are used within each one. Then, once I had the distance metric, I could use FastMap to visualise it. It would be a quick hack, it would look smooth and great and it would be fun. In the end, many hours later, it wasn't quick, the visual is not smooth (i.e. it doesn't move) and I don't know if it looks so great. It was fun though. Just go there and look at it:

PayLevelDomains cluster-tree

As you can see it's a large PNG with the new-and-exciting ImageMap technology used to position the info-popup, or rather used to activate the JavaScript used for the popups. I tried at first with SVG, but I couldn't get SVG, XHTML and Javascript to play along – I guess in Firefox 5 it will work. The graph is laid out and generated by Graphviz's neato, which also generated the imagemap.

So what do we actually see here? In short, a tree where domains that publish similar Semantic Web data are close to each other in the tree and have similar colours. In detail: I took all the PLDs that contained over 1,000 triples – around 7,500 of them – and counted the number of triples for each of the 500 most frequent predicates in the dataset. (These 500 predicates cover ≈94% of the data.) This gave me a vector space with 500 features for each of the PLDs, i.e. something like this:

                 geonames:nearbyFeature  dbprop:redirect  foaf:knows
dbpedia.org      0.01                    0.8              0.1
livejournal.org  0                       0                0.9
geonames.org     0.75                    0                0.1

Each value is the fraction of triples from that PLD that use that predicate. In this vector space I used cosine similarity to compute a distance matrix for all PLDs. With this distance matrix I thought I could apply FastMap, but it worked really badly and looked like this:

Fastmapping the PLDs

So instead of FastMap I used maketree from the complearn tools, which generates trees from a distance matrix. It gives very good results, but it is an iterative optimisation and takes forever for large instances. Around this time I realised I wasn't going to be able to visualise all 7,500 PLDs, and cut it down to the 2000, 1000, 500, 100, 50 largest PLDs. Now this worked fine, but the result looked like a bog-standard graphviz graph, and it wasn't very exciting (i.e. not at all like this colourful thing). Then I realised that since I actually had numeric feature vectors in the first place, I wasn't restricted to using FastMap to make up coordinates, so I used PCA to map the input vector space to a 3-dimensional space, normalised the values to [0;255] and used these as RGB values for colour. Ah – lovely pastel.
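The PCA-to-pastel trick is only a few lines; roughly this (a sketch, assuming the PLD × predicate matrix is already in memory as pld_vectors):

import numpy as np

def pca_colours(X):
    # map each row of X (one PLD's predicate usage vector) to an RGB colour
    # by projecting onto the first three principal components
    Xc = X - X.mean(axis=0)                       # centre the features
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:3].T                          # first three components
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    rgb = (proj - lo) / (hi - lo) * 255           # normalise each component to [0, 255]
    return rgb.astype(int)

# colours = pca_colours(pld_vectors)   # pld_vectors assumed: n_plds x 500 matrix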

I think I underestimated the time this would take by at least a factor of 20. Oh well. Time for lunch.

FastMap in Python

This shows some strings positioned on a graph according to their Levenshtein string distance. (And let me note how appropriate it is that a man named Levenshtein should make a string distance algorithm.)

The distance matrix given by the string distance is then mapped to two dimensions using the FastMap algorithm by Christos Faloutsos and King-Ip Lin. This mapping can also be done with Multidimensional Scaling (and I did so in my PhD work – I even claimed it was novel, but actually I was 40 years too late, oh well), but that algorithm is nasty and iterative. FastMap, as the name implies, is much faster; it doesn't even need a full distance matrix (I think). It doesn't always find the best solution, and it has a slight random element to it, so the solution might vary each time it's run, but it's good enough for almost all cases. Again, I've implemented it in python – grab it here: fastmap.py

To get a feel for how it works, download the code, remove George Leary and see how it reverts to only one dimension for a sensible mapping.

The algorithm is straight-forward enough, for each dimension you want, repeat:

  1. use a heuristic to find the two most distant points
  2. the line between these two points becomes the first dimension, project all points to this line
  3. recurse :)

The distance measure used keeps the projection "in mind", so the second iteration will be different from the first. The whole thing is a bit like Principal Component Analysis, but without requiring an original matrix of feature vectors.
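If you would rather read code than prose, the recursion looks roughly like this (a sketch, not the fastmap.py linked above; dist can be any symmetric distance function, e.g. Levenshtein):

import random

def fastmap(objects, dist, k=2):
    # objects: a list of items, dist: a symmetric distance function on them
    coords = {o: [] for o in objects}

    def residual_dist(a, b, depth):
        # the original distance minus what the already-assigned coordinates explain
        d2 = dist(a, b) ** 2
        for i in range(depth):
            d2 -= (coords[a][i] - coords[b][i]) ** 2
        return max(d2, 0.0) ** 0.5

    for depth in range(k):
        # pivot heuristic: start anywhere, jump to the farthest point, and again
        a = random.choice(objects)
        b = max(objects, key=lambda o: residual_dist(a, o, depth))
        a = max(objects, key=lambda o: residual_dist(b, o, depth))
        d_ab = residual_dist(a, b, depth)
        for o in objects:
            if d_ab == 0:
                x = 0.0          # everything already explained; pad with zeros
            else:
                x = (residual_dist(a, o, depth) ** 2 + d_ab ** 2
                     - residual_dist(b, o, depth) ** 2) / (2 * d_ab)
            coords[o].append(x)
    return coords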

This is already quite old, from 1995, and I am sure something better exists now, but it’s a nice little thing to have in the toolbox. I wonder if it can be used to estimate numerical feature values for nominal attributes, in cases where all possible values are known?

Billions and billions and billions (on a map)

Time for a few more BTC statistics, this time looking at the contexts. The BTC data comes from 50,207,171 different URLs, out of these:

  • 35,423,929 yielded more than a single triple
  • 10,278,663 yielded more than 10 triples, covering 85% of the full data
  • 1,574,458 yielded more than 100 triples, covering 63%
  • 133,369 yielded more than 1,000 triples, covering 30%
  • 3,759 yielded more than 10,000 triples, covering 7%

The biggest contexts were as follows:

triples context
7,186,445 http://sw.deri.org/svn/sw/2008/03/MetaX/vocab.rdf#aperture-1.2.0
410,659 http://lsdis.cs.uga.edu/projects/semdis/swetodblp/march2007/swetodblp_march_2007_part_22.rdf
273,644 http://www.ling.helsinki.fi/kit/2004k/ctl310semw/WordNet/wordnet_nouns-20010201.rdf
237,685 http://dbpedia.org/resource/リレット
196,239 http://dbpedia.org/resource/スケルツォ
194,730 http://dbpedia.org/resource/オークション
178,842 http://www.reveredata.com/reports/store/index-companies.rss
165,948 http://lsdis.cs.uga.edu/~satya/Satya/jan24.owl
165,506 http://www.cs.man.ac.uk/~dturi/ontologies/go-assocdb/go-termdb.owl
160,592 http://dbpedia.org/resource/をがわいちろを

It's pretty cool that someone crawled 7 million triples with aperture and put them online :) – the link is 404 now though, so you can't easily check what it was. Also, none of the huge dbpedia pages seem to give any info; I am not quite sure what is going on there. Perhaps some encoding trouble somewhere?

As the official BTC statistics page already shows, it gets more interesting when you group the contexts by host. Computing the same Pay-Level-Domains as they did, I get the hosts contributing the most triples as:

triples context
278,566,771 dbpedia.org
133,266,773 livejournal.com
94,748,441 rkbexplorer.com
84,896,760 geonames.org
61,339,034 mybloglog.com
53,492,284 sioc-project.org
23,970,898 qdos.com
23,745,914 hi5.com
23,459,199 kanzaki.com
17,691,303 rdfabout.com
15,784,386 plode.us
15,208,914 dbtune.org
13,548,946 craigslist.org
10,155,861 l3s.de
10,028,115 opencyc.org

Again, this is computed from the whole dataset, not just a subset, but interestingly it differs quite a lot from the "official" statistics – in fact, I've "lost" over 100M triples from dbpedia. I am not sure why this happens; a handful of context URLs were so strange that python's urlparse module did not produce a hostname, but they only account for about 100,000 triples. Summing over the hosts I did find, I get the right number of triples (i.e. one billion :). So unless there is something fundamentally wrong about the way I find the PLD, I am almost forced to conclude that the official stats are WRONG!

UPDATE: The official numbers must be wrong, because if you sum them all you get 1,504,548,700 – i.e. over 1.5 billion triples for just the top 50 domains alone. This cannot be true, since the actual number of triples is "just" 1,151,383,508.
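The per-host counting behind the table above is as dumb as it sounds; roughly this (a sketch with a made-up input file, and using the plain hostname rather than the full PLD rules):

from collections import Counter
from urllib.parse import urlparse

triples_per_host = Counter()
with open("contexts_with_counts.txt") as f:     # assumed: "count<TAB>context URL" lines
    for line in f:
        count, context = line.rstrip("\n").split("\t")
        host = urlparse(context).netloc
        if host:                                # a few strange URLs give no hostname at all
            triples_per_host[host] += int(count)

for host, n in triples_per_host.most_common(15):
    print(n, host)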

More fun than the table above is using hostip.info to geocode the IPs of these servers and put them on a map. Now, the hostip database is not perfect – in fact it's pretty poor, and some hosts with A LOT of triples are missing (such as livejournal.com). I could perhaps have used the country codes of the URLs as a fall-back solution, but I was too lazy.

Now, for drawing the map I thought I could use Many Eyes, but it turned out not to be as easy as I imagined. After uploading the dataset I found that although Many Eyes has a map visualisation, it does not use lat/lon coordinates but relies instead on country names. Here is what it would have looked like if done by lat/lon – you have to imagine the world map though:

Trying again, I went back to the hostip.info database, got the country of each host, added up the numbers for each country (Many Eyes does not do any aggregation) and uploaded a triples-by-country dataset. This I could visualise on a map, shading each country according to the number of triples, but it's kinda boring:

Giving up on Many Eyes, I tried the Google Visualisation API instead. Surely they would have a smooth, zoomable map visualisation? Not quite. They have a map, but it's flash-based, only supports "zoom" into pre-defined regions and does a complete reload when changing region. Also, it only supports 400 data points. All the data is embedded in the Javascript though. I couldn't get it to embed here, so click:

mapscreenshot

Now I am sure I could hack something together that would use proper Google maps and actually let you zoom nicely, etc., BUT I think I've really spent enough time on this now.

Keep your eyes peeled for the next episode where we find out why the semantic web has more triples of length 19 than any other.

BTC Statistics I

As I said, I wanted to try looking into the billion triple challenge data using unix command-line tools. The ISWC deadline set me back a bit, but now I've got it going.

The first step was to get rid of those pesky literals, as they contain all sorts of crazy characters that make my lazy parsing tricky. A bit of python later and I had converted:


<http://www.w3.org/People/Berners-Lee/card#edd> <http://xmlns.com/foaf/0.1/name> "Edd Dumbill" <http://www.w3.org/People/Berners-Lee/card> .
<http://www.w3.org/People/Berners-Lee/card#edd> <http://xmlns.com/foaf/0.1/nick> "edd" <http://www.w3.org/People/Berners-Lee/card> .
<http://bblfish.net/people/henry/card#me> <http://xmlns.com/foaf/0.1/name> "Henry Story" <http://www.w3.org/People/Berners-Lee/card> .

to

<http://www.w3.org/People/Berners-Lee/card#edd> <http://xmlns.com/foaf/0.1/name> "000_1" <http://www.w3.org/People/Berners-Lee/card> .
<http://www.w3.org/People/Berners-Lee/card#edd> <http://xmlns.com/foaf/0.1/nick> "000_2" <http://www.w3.org/People/Berners-Lee/card> .
<http://bblfish.net/people/henry/card#me> <http://xmlns.com/foaf/0.1/name> "000_3" <http://www.w3.org/People/Berners-Lee/card> .

i.e. each literal was replaced with chunknumber_literalnumber, and the actual literals were stored in another file. Now it was open for simply splitting the files by space and using cut, awk, sed, sort, uniq, etc. to do everything I wanted. (At least, that's what I thought; as it turned out the initial data contained URIs with spaces, and my "parsing" broke … then I fixed it by replacing "> <" with ">\t<", used tab as the field delimiter, and I was laughing. The data has since been fixed, but I kept my original since I was too lazy to download 17GB again.)
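The bit of python doing the literal-stripping was essentially this (a sketch of the idea, not the original script; the quote-matching regex is an assumption and the real data will find ways to break it):

import re

literal = re.compile(r'"(?:[^"\\]|\\.)*"')       # a double-quoted literal, escapes allowed

def strip_literals(infile, outfile, litfile, chunk):
    counter = 0
    with open(infile) as src, open(outfile, "w") as out, open(litfile, "w") as lits:
        for line in src:
            def replace(match):
                nonlocal counter
                counter += 1
                key = "%03d_%d" % (chunk, counter)
                lits.write(key + "\t" + match.group(0) + "\n")   # remember the real literal
                return '"%s"' % key                              # leave a short placeholder
            out.write(literal.sub(replace, line))

strip_literals("chunk-000.nq", "chunk-000.stripped.nq", "chunk-000.literals.tsv", 0)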

So, now I've computed a few random statistics, nothing amazingly interesting yet. I'll post a bit here at a time; today: THE PREDICATES!

The full data set contains 136,188 unique predicates; of these:

  • 112,966 occur more than once
  • 62,937 more than 10 times
  • 24,125 more than 100
  • 8,178 more than 1,000
  • 2,045 more than 10,000

623 of them have URIs starting with <file://> – they will certainly be very useful for the semantic web.

Note that although 136k different predicates seems like a great deal, many of them are hardly used at all; in fact, if you only look at the 10,000 most used predicates, you still cover 92% of the triples.

As also mentioned on the official BTC stats page, the most used predicates are:

triples predicate
156,448,093 http://dbpedia.org/property/wikilink
143,293,758 rdf:type
53,869,968 rdfs:seeAlso
35,811,115 foaf:knows
32,895,374 foaf:nick
23,266,469 foaf:weblog
22,326,441 dc:title
19,565,730 akt:has-author
19,157,120 sioc:links_to
18,257,337 skos:subject

Note that these are computed from the whole corpus, not just a sample, and for instance for the top property there is a difference of a massive 13,139. That means the official stats are off by almost 0.01%! I don’t know how we can work under these conditions…

Moving on, I assigned each predicate to a namespace. I did this by matching them against the list at prefix.cc; if the URI didn't start with any of those, I made the namespace the URI up to the last # or /, whichever appeared later (sketched in code below the table). The most used namespaces were:

triples namespace
244,854,345 foaf
224,325,132 dbpprop
167,911,029 rdf
80,721,580 rdfs
64,313,022 akt
63,850,346 geonames
58,675,733 dc
44,572,003 rss
31,502,395 sioc
21,156,972 skos
14,801,992 geo
10,691,295 http://dbpedia.org/ontology
10,239,809 http://www.kisti.re.kr/isrl/ResearchRefOntology
9,812,367 content
9,661,682 http://www.rdfabout.com/rdf/schema/vote
8,623,124 owl
6,837,606 http://rdf.freebase.com/ns
6,813,536 xhtml
5,443,549 nie
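
The namespace assignment is essentially this heuristic (a sketch; the prefix.cc list is assumed to be loaded as a namespace-URI → prefix dict, only two entries shown):

def namespace(predicate, prefixes):
    # prefer a known prefix.cc prefix, otherwise take the URI up to the
    # last '#' or '/', whichever comes later
    for ns_uri, prefix in prefixes.items():
        if predicate.startswith(ns_uri):
            return prefix
    cut = max(predicate.rfind("#"), predicate.rfind("/"))
    return predicate[:cut] if cut > 0 else predicate

prefixes = {"http://xmlns.com/foaf/0.1/": "foaf",
            "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf"}   # ... and so on
print(namespace("http://xmlns.com/foaf/0.1/knows", prefixes))                   # -> foaf
print(namespace("http://www.kisti.re.kr/isrl/ResearchRefOntology#cites", prefixes))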

I included the top 19, since number 19 (nie) is the NEPOMUK Information Element ontology, and I found it funny that it was used so widely. Another thing that is funny is that RDFS is used more than 10x as much as OWL (even ignoring the RDF namespace, which defines things like rdf:Property and is also used by schemas). I tried to plot this data as well, since Knud pointed out that you need a nice long-tail graph these days. However, for both predicates and namespaces there are a (relatively) huge number of things that only occur once or twice; if you plot a histogram these dominate the whole graph, even with a logarithmic Y axis. In the end I plotted the run-length encoding of the data, i.e. how many namespaces occur once, twice, three times, etc.:

Here the X axis shows the number of occurrences and the Y axis shows how many things occur that often. I.e. the top-left point is all the random noise that occurs once, such as file:/cygdrive/c/WINDOWS/Desktop/rdf.n3, file:/tmp/filem8INvE and other useful URLs. The bottom two right points are foaf and dbprop.

I don’t know about the graph – I have a feeling it lies somehow, in a way a histogram doesn’t. But I don’t know. Anyone?
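For what it's worth, the "run length encoding" is just a frequency-of-frequencies count; something like this (a sketch, with the namespace counts assumed to live in a tab-separated file):

from collections import Counter
import matplotlib.pyplot as plt

# assumed input: "namespace<TAB>triple count" lines, as computed above
namespace_counts = {}
for line in open("namespace_counts.tsv"):
    ns, n = line.rstrip("\n").split("\t")
    namespace_counts[ns] = int(n)

# how many namespaces occur exactly k times, for each k seen in the data
freq_of_freq = Counter(namespace_counts.values())
xs = sorted(freq_of_freq)
ys = [freq_of_freq[x] for x in xs]

plt.loglog(xs, ys, "o")
plt.xlabel("number of occurrences")
plt.ylabel("namespaces occurring that often")
plt.show()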

Anyway – most of the things from the BTC data I have plotted have a similarly shaped frequency distribution, i.e. the plain predicate frequencies and the subject/object frequencies all look the same. The literals are more interesting; if I have the time I'll write them up tomorrow. Still, it's all pretty boring – I hope to detect duplicate triples from different sources once I'm done with this. I expect to find at least 10 copies of the FOAF schema.