Posts categorized “Python”.

Multithreaded SciPy/NumPy with OpenBLAS on Debian

Some months ago, just after I got an 8-core CPU, I wasted a weekend trying to get SciPy/NumPy to build against OpenBLAS. OpenBLAS is neat, as it has built-in and automatic support for multi-threading; for things like computing the dot product of large matrices this can be a real time saver.

I was following roughly these instructions, but it got too complicated, I got nowhere, and gave up.

Then I got a new MacBook, and set it up using homebrew rather than macports, and I noticed that NumPy was built against OpenBLAS by default. Now, it would be a shame to let Debian be any worse off…

Luckily, it was much easier than I thought:

1. Install the openblas and lapack libraries and dev-headers

sudo apt-get install libopenblas-dev liblapack-dev

2. Setup a virtualenv

To make sure we do not mess up the whole system, setup a virtualenv (if you ever install more than 3 python packages, and do not yet know about virtualenv, you really should get to know it, it’s a little piece of magic!):

virtualenv env
source env/bin/activate

3. Install NumPy

In Debian, openblas/lapack fit into the alternatives system, and the implementation you choose gets symlinked into /usr/lib. However, this confuses numpy, so you must point it to the real location, i.e. /usr/lib/openblas-base.
Download and unpack NumPy:

mkdir env/download
pip install -d env/download numpy
mkdir env/build
cd env/build
tar xf ../download/numpy-1.7.1.tar.gz

Now create a site.cfg file with the following content:

library_dirs = /usr/lib/openblas-base
atlas_libs = openblas

Build/install NumPy (make sure site.cfg is in the numpy-1.7.1 directory):

cd numpy-1.7.1
python setup.py install

You can now check the generated build configuration under env/lib/python2.7/site-packages/numpy/ to make sure it found the right libs; mine looks like this:

lapack_info={'libraries': ['lapack'], 
    'library_dirs': ['/usr/lib'], 'language': 'f77'}
atlas_threads_info={'libraries': ['openblas'], 
    'library_dirs': ['/usr/local/lib'], 
    'language': 'c', 
    'define_macros': [('ATLAS_WITHOUT_LAPACK', None)], 
    'include_dirs': ['/usr/local/include']}
blas_opt_info={'libraries': ['openblas'], 
    'library_dirs': ['/usr/local/lib'], 
    'language': 'c', 
    'define_macros': [('ATLAS_INFO', '"\\"None\\""')], 
    'include_dirs': ['/usr/local/include']}
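You can also confirm the linked BLAS at runtime, without digging through site-packages:

```python
import numpy

# prints the BLAS/LAPACK configuration numpy was built with,
# e.g. the openblas library_dirs from site.cfg
numpy.show_config()
```
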

4. Install SciPy

If NumPy installs cleanly, SciPy can simply be installed with pip:

pip install scipy

5. Test!

Using these scripts you can test your NumPy and SciPy installation. By activating/deactivating the virtualenv, you can test with/without OpenBLAS. If you have several CPU cores, you can see that with OpenBLAS up to 4 of them are used.
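The linked scripts boil down to timing a few linear-algebra calls; a minimal sketch of the same idea (the matrix size here is picked arbitrarily, the real scripts may differ):

```python
import time
import numpy

def bench(name, f, *args):
    start = time.time()
    f(*args)
    print("%s: %s sec" % (name, time.time() - start))

n = 500  # arbitrary; bigger matrices show the difference more clearly
a = numpy.random.random((n, n))
s = numpy.dot(a, a.T) + n * numpy.eye(n)  # symmetric positive definite, for cholesky

bench("dot", numpy.dot, a, a)
bench("cholesky", numpy.linalg.cholesky, s)
bench("svd", numpy.linalg.svd, a)
```

With OpenBLAS active, watch top while the dot call runs to see the extra cores light up.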

Without OpenBLAS I get:

dot: 0.901498508453 sec
cholesky: 0.11981959343 sec
svd: 3.64697360992 sec

with OpenBLAS:

dot: 0.0569217920303 sec
cholesky: 0.0204758167267 sec
svd: 0.81153883934 sec

On finding duplicate images

I got a new shiny MacBook in my new job at Bakken & Baeck, and figured it was time for a new start, so I am de-commissioning my old MacBook and with it, the profile and files that are so old they date back to a PowerBook. Most things were easy, until I got to the photos. Over the years I have imported photos to the laptop while travelling, but always tried to import them again to my real backed-up photo archive at home when I got there, unless my SD card was full while travelling, or I forgot, or something else went wrong. That means I am fairly sure MOST of the photos on the laptop are also in my archive, but also fairly sure some are not.
And of course, each photo, even an out-of-focus, under-exposed test-shot, is a little piece of personal memory, a beautiful little diamond of DATA, and must at all cost NOT BE LOST.

The photos are mostly in iPhoto (but not all), in a mix of old-style mixed up in iPhotos own folder structures, and in folders I have named.

“Easy” I thought, I trust the computer, I normally use Picasa, it will detect duplicates when importing! Using

find . -iname \*.jpg -print0 | xargs -0 -I{} cp -v --backup=t {} /disks/1tb/tmp/photos/

I can copy all JPGs from the Pictures folder into one big folder without overwriting files with duplicate names (I ❤ coreutils!), then let Picasa sort it out for me.

Easily done: 7500 photos in one folder. Picasa thinks a bit and detects some duplicates, but not nearly enough. Several photos I KNOW are archived are not flagged as dupes. I give up trusting Picasa. I know who I can trust:


(In retrospect, I should have trusted Picasa a bit, and at least removed the ones it DID claim were duplicates)

So, next step, compute the md5sum of all 7500 new photos, and of all 45,000 already archived photos. Write the shell-script, go to work, return, write the python to find all duplicates, delete the ones from the laptop.
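The python half of that is short; a sketch of the approach (the directory paths here are invented):

```python
import hashlib
import os

def md5sum(path, blocksize=1 << 20):
    """Compute the md5 hex digest of a file, reading it in blocks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(blocksize), b""):
            h.update(block)
    return h.hexdigest()

def index_by_md5(root):
    """Map md5 digest -> list of files under root with that digest."""
    index = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            index.setdefault(md5sum(path), []).append(path)
    return index

# every laptop photo whose digest already occurs in the archive is a duplicate
archive = index_by_md5("/disks/1tb/archive")          # invented path
laptop = index_by_md5("/disks/1tb/tmp/photos")        # invented path
dupes = [p for digest, paths in laptop.items()
         for p in paths if digest in archive]
```
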

Success! 3700 duplicates gone! But wait! There are still many photos I know for a fact are duplicates. I pick one at random and inspect it. It IS the same photo, but one JPG is 3008×2008 and the other is 3040×2024, and the white-balance is very slightly different. Now I understand: back when I had more time, I shot exclusively in RAW; these are two JPGs produced from the same RAW file, one by iPhoto, one by UFRaw, and the iPhoto one is slightly smaller and has worse color. Bah.

Now it’s getting later, my Friday night is slipping away between my fingers, but I am damned if I give up now. Next step: EXIF data! Both files have EXIF intact, and both are (surprise!) taken at the same time. Now, I don’t want to go and look up the EXIF tag on all 45,000 archived photos just now, but I can filter by filename, if two files have the same basename (IMGP1234) AND are taken at the same time, I am willing to risk deleting one of them.

So with the help of an EXIF parsing library and a bit of python:
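The original script is gone, but the idea can be sketched like this; the exifread package and the helper names are my assumptions, and the EXIF reader is pluggable so any parser works:

```python
import os

def exif_time(path):
    # assumes the third-party exifread package; any EXIF parser will do
    import exifread
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    return str(tags.get("EXIF DateTimeOriginal"))

def photo_key(path, get_time=exif_time):
    # two files match if they share a basename (IMGP1234) and a timestamp
    name = os.path.splitext(os.path.basename(path))[0].lower()
    return (name, get_time(path))

def find_dupes(archive_paths, laptop_paths, get_time=exif_time):
    """Return laptop files whose (basename, timestamp) occurs in the archive."""
    archived = set(photo_key(p, get_time) for p in archive_paths)
    return [p for p in laptop_paths if photo_key(p, get_time) in archived]
```
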

it is done! Some ~3500 more duplicates removed!

I am left with 202 photos that may actually be SAVED FROM ETERNAL OBLIVION! (Looking more carefully, about half are actually out-of-focus or test shots, or nonsense I probably DID copy to the archive, but then deleted.) It was certainly worth spending the entire evening 3 days in a row on this!

Now I can go back to hating hotmail for having deleted all the emails I received pre 2001…

python, regex, unicode and brokenness

(This post included a complaint about the handling of unicode codepoints >0xffff in python, including a literal such character, and it broke WordPress, which ate the remainder of the post after that character… and I am too lazy to retype it, so for now, no unicode)

I love python, I really do, but some things are … slightly irregular.

One of those things is the handling of unmatched regular expression groups when replacing. When matching, such a group returns None; this is fine. But when replacing, an unmatched group produces an error, rather than simply inserting the empty string. For example:

>>> re.sub('(ab)|(a)', r'\1\2', 'abc')
Traceback (most recent call last):
  File "", line 1, in 
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/", line 151, in sub
    return _compile(pattern, flags).sub(repl, string, count)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/", line 275, in filter
    return sre_parse.expand_template(template, match)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/", line 787, in expand_template
    raise error, "unmatched group"
sre_constants.error: unmatched group

There are plenty of people with this problem on the interwebs, and even a python bug report – most “solutions” involve re-writing your expression to make the unmatched group match the empty string. Unfortunately, my input expression comes from the SPARQL 1.1 compliance tests, and as much as I’d like to, I’m not really free to change it. So, it gets ugly:
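The ugly workaround amounts to expanding the template yourself with a replacement function; a sketch with a hypothetical safe_sub helper:

```python
import re

def safe_sub(pattern, template, string):
    """Like re.sub, but unmatched groups expand to '' instead of raising."""
    def expand(match):
        # replace \N backreferences with the group value, or '' if unmatched
        return re.sub(r'\\(\d+)',
                      lambda ref: match.group(int(ref.group(1))) or '',
                      template)
    return re.sub(pattern, expand, string)

safe_sub('(ab)|(a)', r'\1\2', 'abc')   # no more "unmatched group" error
```
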

And it works, at least in my python 2.7.3 …

RDFLib & Linked Open Data on the Appengine

Recently I’ve had the chance to use RDFLib a fair bit at work, and I’ve fixed lots of bugs and also written a few new bits. The new bits generally started as write-once and forget things, which I then needed again and again and I kept making them more general. The end result (for now) is two scripts that let you go from this CSV file to this webapp (via this N3 file). Actually – it’ll let you go from any CSV file to a Linked Open Data webapp, the app does content-negotiation and SPARQL as well as the HTML you just saw when you clicked on the link.
In the court

The dataset in this case is a small collection of King Crimson albums – I spent a long time looking for some CSV data in the wild that had the features I wanted to show off, but failed, and copy/pasted this together from the completely broken CSV dump of the Freebase page.

To convert the CSV file you need a config file giving URI prefixes and some details on how to handle the different columns. The config file for the King Crimson albums looks like:




col1=date("%d %B %Y")
col2=split(";", uri("", ""))


With this config file and the current HEAD of rdfextras you can run:

python -m -f kingcrimson.config kingcrimson.csv

and get your RDF.

This tool is of course not the first or only of its kind – but it’s mine! You may also want to try Google Refine, which has much more powerful (and interactive!) editing possibilities than my hack. With the RDF extension, you can even export RDF directly.
One benefit of this script is that it’s stream-based and can be used on very large CSV files. Google Refine can, I believe, also export the actions taken as some form of batch script, but I never tried it.
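To illustrate the streaming shape (this is not the real tool: column handling is hard-coded and the base URI is made up, unlike the config-driven converter above):

```python
import csv

def row_to_triples(base, header, row):
    """Yield one N-Triples line per non-key column of a CSV row."""
    subject = "<%s%s>" % (base, row[0].replace(" ", "_"))
    for name, value in zip(header[1:], row[1:]):
        pred = "<%s%s>" % (base, name.replace(" ", "_"))
        yield '%s %s "%s" .' % (subject, pred, value.replace('"', '\\"'))

def convert(infile, out, base="http://example.org/"):  # base URI invented
    reader = csv.reader(infile)
    header = next(reader)
    # one row in, a few triples out - nothing is held in memory
    for row in reader:
        for triple in row_to_triples(base, header, row):
            out.write(triple + "\n")
```
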

With lots of shiny new RDF in my hand I wanted to make it accessible to people who do not enjoy looking at N3 in a text-editor and built the LOD application.
It’s built on the excellent Flask micro-web-framework and it’s now also part of rdfextras. If you have the newest version you can run it locally in Flask’s debug server like this:

python -m rdfextras.web.lod kingcrimson.n3

This runs great locally – and I’ve also deployed it within Apache, but not everyone has a mod_python ready Apache at hand, so I thought it would be nice to run it inside the Google Appengine.

Running the Flask app inside of appengine turned out to be amazingly easy, thanks to Francisco Souza for the pointers:

from google.appengine.ext.webapp.util import run_wsgi_app
from rdfextras.web import lod

import rdflib
g = rdflib.Graph()
g.load("kingcrimson.n3", format='n3')

# ...then wrap the graph in the lod Flask app and pass the resulting
# WSGI app to run_wsgi_app (the exact call is elided in this post)


Write your app.yaml and make this your handler for /* and you’re nearly good to go. To deploy this app to the appengine you also need all required libraries (rdflib, flask, etc.) inside your app directory, a shell script for this is here:

Now, I am not really clear on the details of how the appengine works. Is this code run for every request? Or is the wsgi app persistent? When I deployed the LOD app inside apache using mod_python, it seems the app is created once and serves many requests over its lifetime.
In any case, RDFLib has no appengine-compatible persistent store (who wants to write an rdflib store on top of the appengine datastore?), so the graph is kept in memory; perhaps it is re-parsed for each request, perhaps not – this limits the scalability of this approach in any case. I also do not know the memory limitations of the appengine – or how efficient the rdflib in-memory store really is – but I assume there is a fairly low limit on the number of triples you can serve this way. Inside apache I’ve deployed it on some hundred thousand triples in a BerkeleyDB store.

There are several things that could be improved everywhere here – the LOD app in particular has some rough edges and bugs, but it’s being used internally in our project, so we might fix some of them given time. The CSV converter really needs a way to merge two columns, not just split them.

All the files you need to run this example yourself are under: – let me know if you try it and if it works or breaks!

A quick and dirty guide to YOUR first time with RDF

(This is written for the challenge from

(To save you copy/pasting/typing you can download the examples from here:

10 steps to make sense of RDF data:

  1. Install a debian or Ubuntu based system — I used Debian testing.
  2. Install rdflib and Berkeley/Sleepycat DB by doing:
    sudo apt-get install python-rdflib python-bsddb3

    (I got rdflib version 2.4.2 – if you get version 3.X.X the code may look slightly different, let me know if you cannot work out the changes on your own)

  3. Find some data — I randomly picked the data behind the BIS Research Funding Explorer. You can find the raw RDF data on the server. We will use the schema file from:

    and the education data from:

    We use the education data because it is smaller than the research data, only 500K vs 11M, and because there is a syntax error in the corresponding file for research :). In the same folders there are files called blahblah-void. These are statistics about the datasets, and we do not need them for this (see for details).

  4. Load the data, type this into a python shell, or create a python file and run it:
    import rdflib
    g = rdflib.Graph('Sleepycat')
    g.open("db", create=True)
    g.load("")              # the schema file – RDF/XML, the default format
    g.load("", format='nt') # the education data – N-Triples

    Note that the two files are in different RDF formats; both contain triples, but one is serialized as XML, the other in an ASCII line-based format called N-Triples. You do not have to care about this, just tell rdflib to use the right parser with the format=X parameter; RDF/XML is the default.

  5. After the script has run there will be a new folder called db in the current directory; it contains the Berkeley DB files and indexes for the data. For the above example it’s about 1.5M
  6. Explore the data a bit, again type this into a python shell:
    • First open the DB again and check how many triples it holds:
      import rdflib
      g = rdflib.Graph('Sleepycat')
      g.open("db")
      print len(g)
      -- Outputs: 3690 --

      The graph object is quite pythonic, and you can treat it like a collection of triples. Here len tells us we have loaded 3690 triples.

    • Find out what sorts of things this data describes. In RDF, things are typed by a triple with rdf:type as the predicate.
      for x in set(g.objects(None, rdflib.RDF.RDFNS["type"])): print x
      -- Outputs:

      rdflib gives you several handy functions that return python generators for doing simple triple-based queries. Here we used graph.objects, which takes two parameters, the subject and predicate to filter on, and returns a generator over all matching objects. rdflib also provides constants for the well-known RDF and RDFSchema vocabularies; we used this here to get the correct URI for the rdf:type predicate.

    • Now we know the data contains some Institutions; get a list using another rdflib triple-based query:
      for x in set(g.subjects(rdflib.RDF.RDFNS["type"], rdflib.URIRef(''))): print x
      -- Outputs:
      ... (and many more) ...

      This gives us a long list of all institutions. The set call here just iterates through the generator and removes duplicates.

    • Let’s look at the triples about one institution in more detail:
      for t in g.triples((rdflib.URIRef(''), None, None)): print map(str,t)
      -- Outputs:
      ['', '', '']
      ['', '', 'University College London']
      ... (and many more) ...

      This gives us a list of triples asserted about UCL. Here we used the triples method of rdflib; it takes a single argument, a tuple representing the triple filters. The returned triples are also tuples; the map(str,t) just makes the output prettier.

  7. rdflib makes it very easy to work with triple-based queries, but for more complex queries you quickly need SPARQL. This is also straightforward – first collect the prefixes in a python string:
    PREFIX = """
    PREFIX owl: <>
    PREFIX foaf: <>
    PREFIX p: <>
    PREFIX aiiso: <>
    PREFIX geo: <>
    PREFIX skos: <>
    PREFIX rdf: <>
    PREFIX rdfs: <>
    """
    list(g.query(PREFIX+"SELECT ?x ?label WHERE { ?x rdfs:label ?label ; a aiiso:Institution . } "))[:10]

    The prefixes defined at the start let us use short names instead of full URIs in the queries. The graph.query method returns a generator over tuples of variable bindings. This lists the first 10 results – more or less the same as we did before (list all institutions), but this time we also get the human-readable label.

  8. Now a slightly more complicated example. Ask the knowledge base to find all institutions classified as public sector that took part in some project together:

    r=list(g.query(PREFIX+"""SELECT DISTINCT ?x ?xlabel ?y ?ylabel WHERE { 
       ?x rdfs:label ?xlabel ; 
          a aiiso:Institution ; 
          p:organisationSize 'Public Sector' ; 
          p:project ?p . 
       ?y rdfs:label ?ylabel ; 
          a aiiso:Institution ; 
          p:organisationSize 'Public Sector' ; 
          p:project ?p .
       FILTER (?x != ?y) } LIMIT 10 """))
    for x in r[:3]: print map(str,x)
    -- Outputs:
    ['', 'Nottingham University', '', 'The University of Sheffield']
    ['', 'University of Nottingham', '', 'University of Sheffield']
    ['', 'Sheffield University', '', 'University of Nottingham']

    All fairly straightforward; the FILTER is there to make sure the two institutions we find are not the same.
    (Disclaimer: there is a bug in rdflib ( that makes this query take very long :( – it should be near instantaneous, but takes maybe 10 seconds for me. )

  9. The data we loaded so far does not have any details on the projects that actually got funded, only their URIs, for example: You can go there with your browser and find out that this is a project called “Nuclear transfer enhancement technology for bio processing and tissue engineering” – luckily, so can rdflib: just call graph.load on the URI. Content-negotiation on the server will make sure that rdflib gets machine-readable RDF when it asks. A for-loop over an rdflib triple query, loading all the project descriptions, is left as an exercise to the reader :)
  10. That’s it! There are many places to go from here, just keep trying things out – if you get stuck try asking questions on or in the IRC chatroom at irc:// Have fun!

Illustrating the kernel trick

For a one-paragraph intro to SVMs and the kernel-trick I wanted a graphic that I’ve seen in a book (although I’ve forgotten where, perhaps in Pattern Classification?):

Simple idea — show some 2D data points that are not linearly separable, then transform to 3D somehow, and show that they are. I found nothing on google (at least nothing with high enough resolution to reuse), so I wrote some lines of python with pylab and matplotlib:

import math
import pylab
import scipy

def vlen(v):
    return math.sqrt(scipy.vdot(v, v))

# the original point generation is elided; random 2D points work just as well
p = scipy.randn(500, 2)

a = scipy.array([x for x in p if vlen(x) > 1.3 and vlen(x) < 2])
b = scipy.array([x for x in p if vlen(x) < 0.8])

pylab.scatter(a[:,0], a[:,1], s=30, c="blue")
pylab.scatter(b[:,0], b[:,1], s=50, c="red", marker='s')

fig = pylab.figure()
from mpl_toolkits.mplot3d import Axes3D
ax = Axes3D(fig)

ax.scatter3D(map(vlen, a), a[:,0], a[:,1], s=30, c="blue")
ax.scatter3D(map(vlen, b), b[:,0], b[:,1], s=50, marker="s", c="red")

pylab.show()

Take — adapt — use it for anything you like; you can rotate the 3D plot in the window that is shown, and you can save the figures as PDF etc. Unfortunately, the sizing of markers in the 3D plot is not yet implemented in the latest matplotlib release (, so this only looks good with the latest SVN build.

I’ll trie in python

In Koble the auto-completion of thing-names used for wiki-editing, instant-search and adding relations is getting slower and slower, mainly because I do:

for t in things:
    if t.startswith(key): res.append(t)
for t in things:
    if key in t and not t.startswith(key): res.append(t)

Going through the list twice makes sure I get the well-matching things first (those that start with the string I am completing), and the less-well-matching ones later (those that merely contain it).

Of course the world has long since come up with a far better data-structure for indexing string prefixes, namely the trie, or prefix tree. James Tauber had already implemented one in python, and kindly made it available. His version didn’t do everything I needed, so I added a few methods. Here is my updated version:
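For illustration, the core of such a trie fits in a few lines (a sketch of mine, not James Tauber’s code):

```python
class Trie(object):
    def __init__(self):
        self.root = {}

    def add(self, word):
        node = self.root
        for c in word:
            node = node.setdefault(c, {})
        node[""] = word          # empty key marks a complete word

    def with_prefix(self, prefix):
        """Return all added words starting with prefix."""
        node = self.root
        for c in prefix:
            if c not in node:
                return []
            node = node[c]
        # collect every complete word below this node
        words, stack = [], [node]
        while stack:
            n = stack.pop()
            for key, val in n.items():
                if key == "":
                    words.append(val)
                else:
                    stack.append(val)
        return words
```

With this, the well-matching completions become a single with_prefix call instead of a scan over all things.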


Heat-maps of Semantic Web Predicate usage

It’s all Cygri‘s fault — he encouraged me to add schema namespaces to the general areas on the semantic web cluster-tree. Once again I horribly misjudged how long this was going to take. I thought the general idea was simple enough, and I already had the data; one hour should do it. Now, one full day later, I have:

FOAF Predicates on the Semantic Web

It’s the same map as last time, laid out using graphviz’s neato as before. The heat-map of the properties was computed from the feature-vector of predicate counts. First I mapped each predicate to its “namespace”, by the slightly-dodgy-but-good-enough heuristic of taking the part of the URI before the last # or / character. Then I split the map into a grid of NxN points (I think I used N=30 in the end), and computed a new feature vector for each point. This vector is the sum of the mapped vectors of all the domains, each divided by the distance to that domain. I.e. (if you prefer math) each point’s vector becomes:

\displaystyle V_{x,y} = \sum_d \frac{V_d}{\sqrt{D((x,y), \mathrm{pos}_d)}}

Where D is the distance (here simple 2D euclidean), d ranges over the domains, pos_d is that domain’s position in the figure and V_d is that domain’s feature vector. Normally it would be more natural to decrease the effect by the squared distance, but this gave less attractive results, and I ended up square-rooting it instead. The color is now simply one column of the resulting matrix, normalised and mapped to a nice pylab colormap.
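In code the grid computation might look like this (names and layout assumptions are mine; domain_pos would come from the neato layout, domain_vecs from the namespace-mapped counts):

```python
import numpy

def grid_vectors(domain_pos, domain_vecs, n=30):
    """domain_pos: (d, 2) positions; domain_vecs: (d, k) feature vectors.
    Returns an (n, n, k) array: each grid point's distance-weighted sum."""
    xs = numpy.linspace(0, 1, n)
    ys = numpy.linspace(0, 1, n)
    out = numpy.zeros((n, n, domain_vecs.shape[1]))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            d = numpy.sqrt(numpy.sum((domain_pos - (x, y)) ** 2, axis=1))
            w = 1.0 / numpy.sqrt(d + 1e-9)   # sqrt of distance, not square
            out[i, j] = numpy.dot(w, domain_vecs)
    return out
```
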

Now this was the fun and interesting part, and it took maybe 1 hour. As predicted. NOW, getting this plotted along with the nodes from the graph turned out to be a nightmare. Neato gave me the coordinates for the nodes, but would change them slightly when rendering to PNGs. Many hours of frustration later I ended up drawing all of it again with pylab, which worked really well. I would publish the code for this, but it’s so messy it makes grown men cry.

NOW I am off to analyse the result of the top-level domain interlinking on the billion triple data. The data-collection just finished running while I did this. … As he said.

Visualising predicate usage on the Semantic Web

So, not quite a billion triple challenge post, but the data is the same. I had the idea that I could compare the Pay-Level-Domains (PLDs) of the contexts of the triples based on which predicates are used within each one. Then, once I had a distance metric, I could use FastMap to visualise it. It would be a quick hack, it would look smooth and great, and it would be fun. In the end, many hours later, it wasn’t quick, the visual is not smooth (i.e. it doesn’t move) and I don’t know if it looks so great. It was fun though. Just go there and look at it:

PayLevelDomains cluster-tree

As you can see it’s a large PNG with the new-and-exciting ImageMap technology used to position the info-popups, or rather to activate the JavaScript used for the popups. I tried at first with SVG, but I couldn’t get SVG, XHTML and Javascript to play along; I guess in Firefox 5 it will work. The graph is laid out and rendered by Graphviz‘s neato, which also generated the imagemap.

So what do we actually see here? In short, a tree where domains that publish similar Semantic Web data are close to each other in the tree and have similar colours. In detail: I took all the PLDs that contain over 1,000 triples (around 7500 of them), and for each counted the number of triples for each of the 500 most frequent predicates in the dataset. (These 500 predicates cover ≈94% of the data.) This gave me a vector-space with 500 features for each of the PLDs, i.e. something like this:

                 geonames:nearbyFeature   dbprop:redirect   foaf:knows
(some PLD)       0.01                     0.8               0.1
(another PLD)    0                        0                 0.9
(a third PLD)    0.75                     0                 0.1

Each value is the fraction of triples from that PLD that use that predicate. In this vector space I used the cosine-similarity to compute a distance matrix for all PLDs. With this distance matrix I thought I could apply FastMap, but it worked really badly and looked like this:

Fastmapping the PLDs

So instead of FastMap I used maketree from the complearn tools. It generates trees from a distance matrix, and the results are very good, but it is an iterative optimisation and takes forever on large instances. Around this time I realised I wasn’t going to be able to visualise all 7500 PLDs, and cut it down to the 2000, 1000, 500, 100, 50 largest. Now this worked fine, but the result looked like a bog-standard graphviz graph, and it wasn’t very exciting (i.e. not at all like this colourful thing). Then I realised that since I actually had numeric feature vectors in the first place, I wasn’t restricted to FastMap for making up coordinates, so I used PCA to map the input vector-space to a 3-dimensional space, normalised the values to [0;255] and used these as RGB values for colour. Ah – lovely pastel.
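The PCA-to-RGB step can be sketched with numpy’s SVD (the function name is invented):

```python
import numpy

def pca_colours(features):
    """Project rows of `features` onto their 3 main principal components
    and normalise each component to [0, 255] for use as RGB."""
    centred = features - features.mean(axis=0)
    # rows of vt are the principal axes; project onto the first three
    u, s, vt = numpy.linalg.svd(centred, full_matrices=False)
    proj = numpy.dot(centred, vt[:3].T)
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return ((proj - lo) / (hi - lo + 1e-9) * 255).astype(int)
```
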

I think I underestimated the time this would take by at least a factor of 20. Oh well. Time for lunch.

FastMap in Python

This shows some strings positioned on a graph according to their Levenshtein string distance. (And let me note how appropriate it is that a man named Levenshtein should make a string distance algorithm)

The distance matrix given by the string distance is then mapped to two dimensions using the FastMap algorithm by Christos Faloutsos and King-Ip Lin. This mapping could also be done with Multidimensional Scaling (I did so in my PhD work, I even claimed it was novel, but actually I was 40 years too late, oh well), but that algorithm is nasty and iterative. FastMap, as the name implies, is much faster; it doesn’t actually even need a full distance matrix (I think). It doesn’t always find the best solution, and it has a slight random element to it, so the solution might vary each time it’s run, but it’s good enough for almost all cases. Again, I’ve implemented it in python – grab it here:

To get a feel for how it works, download the code, remove George Leary and see how it reverts to only one dimension for a sensible mapping.

The algorithm is straightforward enough; for each dimension you want, repeat:

  1. use a heuristic to find the two most distant points
  2. the line between these two points becomes the next dimension; project all points onto this line
  3. recurse :)

The distance measure used keeps the projection “in mind”, so the second iteration will be different from the first. The whole thing is a bit like Principal Component Analysis, but without requiring an original matrix of feature vectors.
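For reference, the steps above can be condensed like this (my own sketch, not the linked implementation):

```python
import random

def fastmap(objects, dist, k=2):
    """Map objects into k dimensions, given a distance function dist(a, b)."""
    coords = {o: [] for o in objects}

    def d2(a, b, dim):
        # squared distance, minus what the projections so far already explain
        r = dist(a, b) ** 2
        for i in range(dim):
            r -= (coords[a][i] - coords[b][i]) ** 2
        return r

    for dim in range(k):
        # heuristic: start anywhere, repeatedly jump to the farthest point
        a = random.choice(objects)
        for _ in range(5):
            b = max(objects, key=lambda o: d2(a, o, dim))
            a, b = b, a
        dab = d2(a, b, dim)
        for o in objects:
            if dab <= 0:
                coords[o].append(0.0)
            else:
                # project o onto the line through the two pivots a and b
                x = (d2(a, o, dim) + dab - d2(b, o, dim)) / (2 * dab ** 0.5)
                coords[o].append(x)
    return coords
```

The residual distance in d2 is what makes each new axis account for the structure the earlier axes missed.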

This is already quite old, from 1995, and I am sure something better exists now, but it’s a nice little thing to have in the toolbox. I wonder if it can be used to estimate numerical feature values for nominal attributes, in cases where all possible values are known?