QuickGraph#4 Explore your browser history in Neo4j

The dataset

For this example I am going to use my browser history data. Most browsers store this data in SQLite, which means relational data, easy to access from Neo4j using the apoc.load.jdbc stored procedure. I'm a Chrome user, and on my Mac, Chrome stores the history DB at

~/Library/Application Support/Google/Chrome/Default/History

There are two main tables in the History DB: urls and visits. I'm going to explore them directly from Neo4j's browser using the same apoc.load.jdbc procedure. In order to do that, you'll have to first download a JDBC driver for SQLite and copy it into the plugins directory of your Neo4j instance. Also keep in mind that Chrome locks the History DB while the browser is open, so if you want to play with it (even read-only access) you will have to either close the browser or, as I did, copy the DB (a single file) somewhere else and work from that snapshot.
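
One more note on setup: depending on your APOC version, you may also need to register the driver explicitly before the first apoc.load.jdbc call. APOC's apoc.load.driver procedure does that; the class name below assumes the Xerial sqlite-jdbc driver:

CALL apoc.load.driver("org.sqlite.JDBC") // driver class in the Xerial sqlite-jdbc jar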

This Cypher fragment returns the first few records of the urls table. In them we find a unique identifier for each page, its URL, the title of the page, and some counters: the number of visits, and the number of times the URL was typed directly as opposed to reached by following a hyperlink.

CALL apoc.load.jdbc("jdbc:sqlite:/Users/jbarrasa/Documents/Data/History",
                    "urls") yield row 
WITH row LIMIT 10
RETURN *

The results look like this on my browser history.

[screenshot: first few rows of the urls table]

The visits table contains information about each page visit event: a timestamp (visit_time), a unique identifier (id) for each visit and, most interestingly, whether the visit follows a previous one (from_visit). This would mean that there was a click on a hyperlink that led from page A to page B.

[screenshot: sample rows from the visits table]

A bit of SQL manipulation using the date and time functions on the SQLite side will keep only the columns from the visits table that we care about for this experiment, and also format the timestamp as a user-friendly date and time. Chrome stores timestamps as microseconds since 1601-01-01 (the Windows epoch), hence the division by a million and the subtraction of the 11,644,473,600 seconds that separate the Windows and Unix epochs.

SELECT id, url, time(((visit_time/1000000)-11644473600), 'unixepoch') as visit_time, 
date(((visit_time/1000000)-11644473600), 'unixepoch') as visit_date,
visit_time as visit_time_raw 
FROM visits

Here's what the records look like using this query. Nice and ready to be imported into Neo4j.

[screenshot: visits with formatted visit_time and visit_date columns]

Loading the data into Neo4j

The model I'm planning to build is quite simple: I'll use a node to represent a web page and a separate one to represent each individual visit to a page. Each visit event is linked to its page through the :VISIT_TO_PAGE relationship, and chained page visits (hyperlink navigation) are linked through the :NAVIGATION_TO relationship. Here is what that looks like visually, on an example navigation from a post on the Neo4j blog to a page with some code on GitHub:

[screenshot: example navigation from a Neo4j blog post to a GitHub page]
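
In Cypher pattern terms, one navigation step in this model matches the following shape (a quick sketch, runnable once the data is loaded):

MATCH (v1:PageVisit)-[:NAVIGATION_TO]->(v2:PageVisit),
      (v1)-[:VISIT_TO_PAGE]->(p1:Page),
      (v2)-[:VISIT_TO_PAGE]->(p2:Page)
RETURN p1.page_url, p2.page_url LIMIT 5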

OK, so let's go with the import scripts. First, the creation of Page nodes out of every record in the urls table:

CALL apoc.load.jdbc("jdbc:sqlite:/Users/jbarrasa/Documents/Data/History",
                    "urls") yield row 
WITH row 
CREATE (p:Page {page_id: row.id, 
                page_url: row.url, 
                page_title: row.title, 
                page_visit_count: row.visit_count, 
                page_typed_count: row.typed_count})

And I'll do the same with the visits, but linking them to the pages we've just loaded. Actually, to speed up the page lookup I'll first create an index on page ids.

CREATE INDEX ON :Page(page_id)

And here’s the Cypher running the visit data load.

WITH "SELECT id, url, visit_time as visit_time_raw, 
 time(((visit_time/1000000)-11644473600), 'unixepoch') as visit_time, 
 date(((visit_time/1000000)-11644473600), 'unixepoch') as visit_date 
 FROM visits" AS sqlstring

CALL apoc.load.jdbc("jdbc:sqlite:/Users/jbarrasa/Documents/Data/History",
                    sqlstring ) yield row
WITH row 
// in Chrome's schema, visits.url is a foreign key referencing urls.id
MATCH (p:Page {page_id: row.url}) 
CREATE (v:PageVisit { visit_id: row.id, 
                      visit_time: row.visit_time, 
                      visit_date: row.visit_date, 
                      visit_timestamp: row.visit_time_raw}) 
CREATE (v)-[:VISIT_TO_PAGE]->(p)

And finally, I'll load the transitions between visits. But as we did before with the pages, let's first create an index, this time on visit ids:

CREATE INDEX ON :PageVisit(visit_id)

And then, as a separate statement (schema updates like CREATE INDEX cannot be combined with data-loading clauses in a single query), run the load itself:

WITH "SELECT id, from_visit, transition, segment_id, visit_duration 
      FROM visits" AS sqlstring
CALL apoc.load.jdbc("jdbc:sqlite:/Users/jbarrasa/Documents/Data/History",
                    sqlstring) yield row 
WITH row 
MATCH (v1:PageVisit {visit_id: row.from_visit}),
      (v2:PageVisit {visit_id: row.id}) 
CREATE (v1)-[:NAVIGATION_TO]->(v2)
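
Before querying, a quick sanity check on what was loaded never hurts (a minimal sketch counting nodes of each type):

MATCH (p:Page) WITH count(p) AS pages
MATCH (v:PageVisit)
RETURN pages, count(v) AS visits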

So we are ready to start querying our graph!

Querying the graph

Let's look for a direct navigation in the graph that goes, for instance, from a page on the Neo4j website to Twitter.

MATCH (v1)-[:VISIT_TO_PAGE]->(p1),
      (v2)-[:VISIT_TO_PAGE]->(p2),
      (v1)-[:NAVIGATION_TO]->(v2) 
WHERE p1.page_url CONTAINS 'neo4j.com' 
      AND p2.page_url CONTAINS 'twitter.com'
RETURN * LIMIT 10

In my browser history data, this produces the following output. Notice that I've extended it to include an extra navigation step; I did that just by clicking on the graph visualisation in the Neo4j browser, to make the example easier to follow:

[screenshot: navigation path from neo4j.com to twitter.com]

It actually corresponds to a visit to the Neo4j blog, followed by me tweeting about how cool what I had just read was. The proof that I'm working with real data is the actual tweet (!)

OK, so while this basic model is good for analysing individual journeys, I think extracting a Site node by aggregating all the pages in the same site can give us interesting insights. Let's go for it.

Extending the model

This could be done in different ways; for example, we could write a stored procedure and call it from a Cypher script. With the full power of Java available, we could properly parse the URL string to extract the domain.
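
For instance, without even leaving Cypher, something close can be sketched with plain string functions, producing the same Site nodes and :PAGE_IN_SITE links we are about to create. A rough sketch that only handles http(s) URLs, assuming the Page nodes already carry their page_url property:

MATCH (p:Page)
WITH p, split(p.page_url, '/') AS parts // e.g. ['https:', '', 'neo4j.com', 'blog', ...]
WHERE size(parts) > 2 AND parts[2] <> ''
MERGE (s:Site {site_root: parts[2]})
CREATE (p)-[:PAGE_IN_SITE]->(s)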

I will do it differently though: I'll run a SQL query on the History SQLite DB, using string transformations to substring the URLs and extract the domain name (sort of). The SQL that extracts the root of the URL could be the following:

SELECT id, substr(url,9,instr(substr(url,9),'/')-1) as site_root 
FROM urls 
WHERE instr(url, 'https://')=1 
UNION
SELECT id, substr(url,8,instr(substr(url,8),'/')-1) as site_root 
FROM urls
WHERE instr(url, 'http://')=1

Quite horrible, I know. But my intention is to show how the graph can be extended with new data without having to recreate it. This is quite a common scenario when you work with graphs, but relax: graphs are good at accommodating change, nothing like the RDBMS migrations you face when you have to change your schema.

So this new query produces rows containing just the domain (the root of the URL) and the page id, which I will use to match the previously loaded pages. Something like this:

[screenshot: page ids with their extracted site roots]

And the Cypher that loads it and adds the extra information in our graph would be this:

WITH "select substr(url,9,instr(substr(url,9),'/')-1) as site_root, id 
      from urls where instr(url, 'https://')=1 
      UNION
      select substr(url,8,instr(substr(url,8),'/')-1) as site_root, id 
      from urls where instr(url, 'http://')=1"  AS query
CALL apoc.load.jdbc("jdbc:sqlite:/Users/jbarrasa/Documents/Data/History",
                     query) yield row 
WITH row 
MATCH (p:Page {page_id: row.id})
MERGE (s:Site {site_root: row.site_root})
CREATE (p)-[:PAGE_IN_SITE]->(s)

And once we have the sites, we can add weighted site-level navigation. The weight is simply calculated by counting the transitions between pages belonging to each pair of sites. Here is the Cypher that does the job:

MATCH (s:Site)<-[:PAGE_IN_SITE]-()<-[:VISIT_TO_PAGE]-()<-[inbound:NAVIGATION_TO]-()-[:VISIT_TO_PAGE]->()-[:PAGE_IN_SITE]->(otherSite) 
WHERE otherSite <> s 
WITH otherSite, s, count(inbound) as weight 
CREATE (otherSite)-[sn:SITE_DIRECT_NAVIGATION{weight:weight}]->(s)
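
With these in place, the heaviest site-to-site transitions can be listed with something along these lines (a quick sketch):

MATCH (a:Site)-[n:SITE_DIRECT_NAVIGATION]->(b:Site)
RETURN a.site_root AS fromSite, b.site_root AS toSite, n.weight AS weight
ORDER BY weight DESC LIMIT 10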

This is a much richer graph, where we can traverse not only individual journeys but also site-level connections. In the following visualisation we can see that there are transitions between the http://www.theguardian.co.uk and http://www.bbc.co.uk sites (indicated in green), and also to other sites like en.wikipedia.org. In the same capture we can see one of the individual navigations that explain the existence of a :SITE_DIRECT_NAVIGATION relationship between the Guardian node and the BBC one. It actually represents a hyperlink I clicked on a Guardian article that connected it to a BBC one. The purple sequence of events (page visits) details my journey, and the yellow nodes represent the pages, pretty much as we saw in the previous example from neo4j.com to twitter.com.

[screenshot: site-level navigation between theguardian.co.uk, bbc.co.uk and en.wikipedia.org, with one underlying page-visit journey]

We can also take a bird's-eye view of a few thousand of the nodes in the graph and notice some interesting facts:

[screenshot: bird's-eye view of a few thousand nodes in the graph]

I've highlighted some interesting Site nodes. We can see that the most highly connected (more central in the visualisation) are the Google sites and the URL-shortening services (t.co, bit.ly, etc.). It makes sense because you typically navigate in and out of them; they act as bridge nodes in your navigation. This is confirmed if we run the betweenness centrality algorithm on the sites and their connections. Briefly, betweenness centrality is an indicator of a node's centrality in a graph, equal to the number of shortest paths from all nodes to all others that pass through that node.

Here is the Cypher script, again invoking the graph algorithm implementation as a stored procedure, which you can find in the amazing APOC library:

MATCH (s:Site)
WITH collect(s) AS nodes
CALL apoc.algo.betweenness(['SITE_DIRECT_NAVIGATION'],nodes,'BOTH') 
  YIELD node, score
RETURN node.site_root, score
ORDER BY score DESC LIMIT 5

And these are the top five results of the computation on my browser history.

[screenshot: top five sites by betweenness centrality score]

I'm sure you can think of many other interesting queries on your own navigation: what's the average length of a journey? How many different sites does it traverse? Is it mostly intra-site? Are there any isolated clusters? An example of the latter in my browser history are the Amazon sites (amazon.co.uk and music.amazon.co.uk): there are loads of transitions (navigation) between them, but none in or out to/from other sites. You can see this visually in the bottom-left part of the previous bird's-eye view. I'm sure you will come up with many more, but I'll finish this QuickGraph with a query involving some serious path exploration.
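
Before that, here is a quick sketch for the intra-site question above: for each site, it compares how many hyperlink transitions stay within the site and how many leave it:

MATCH (s1:Site)<-[:PAGE_IN_SITE]-()<-[:VISIT_TO_PAGE]-()
      -[:NAVIGATION_TO]->()-[:VISIT_TO_PAGE]->()-[:PAGE_IN_SITE]->(s2:Site)
RETURN s1.site_root AS site,
       sum(CASE WHEN s1 = s2 THEN 1 ELSE 0 END) AS intraSite,
       sum(CASE WHEN s1 <> s2 THEN 1 ELSE 0 END) AS interSite
ORDER BY intraSite DESC LIMIT 10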

The question is: which sites have I navigated to from LinkedIn pages, how many times have I reached them, and how long (as in how many hyperlink clicks) did it take me to get to them? You may be asking yourself how on earth you would even express that in SQL (?!?!). Well, not to worry; you'll be pleased to see that it takes less writing to express the query in Cypher than it does in English. Here it is:

MATCH (v1)-[:VISIT_TO_PAGE]->(p1)-[:PAGE_IN_SITE]-(s1:Site {site_root: "www.linkedin.com"}) 
MATCH p = (v1)-[:NAVIGATION_TO*]->(v2)-[:VISIT_TO_PAGE]->(p2)-[:PAGE_IN_SITE]-(s2)
WHERE s2 <> s1
WITH length(p) AS pathlen, s2.site_root AS site 
RETURN AVG(pathlen) AS avglen, count(*) AS count, site ORDER BY avglen

Note, by the way, that length(p) counts not only the :NAVIGATION_TO hops but also the final :VISIT_TO_PAGE and :PAGE_IN_SITE ones, so every journey length carries a constant offset of two. And my results, 21 milliseconds later…

[screenshot: average path length and visit counts per site reached from LinkedIn]

What’s interesting about this QuickGraph?

This experiment shows several interesting things, the first being how straightforward it can be to load relational data into Neo4j using the apoc.load.jdbc stored procedure. As a matter of fact, the same applies to other types of data sources, for example web services, as I described in previous posts.

The second takeaway is how modelling and storing data that is naturally a graph (sequences of page visits) as a graph, as opposed to shoehorning it into relational tables, opens up a lot of opportunities for querying and exploration that would be unthinkable in SQL.

Finally, I've also shown how some graph algorithms (betweenness centrality) can be applied easily to your graph using stored procedures in Cypher. It's worth mentioning that you can extend the list of available algorithms by writing your own and easily deploying them on your Neo4j instance.


The ‘hidden’ connections in Google’s Knowledge Graph

As far as I know, the only way to query Google's Knowledge Graph currently is the search API. Let's run a query on it; search, for instance, for Miles Davis' album “Sketches of Spain”.

https://kgsearch.googleapis.com/v1/entities:search?query=sketches%20of%20spain&key=<your_key_here>&limit=1

The API returns this JSON-LD fragment back (thanks, Jos de Jong for the great JSON Editor Online):

[screenshot: JSON-LD fragment returned by the Knowledge Graph search API]
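
In JSON terms, the response has roughly this shape. The fragment below is a hand-written sketch based on the API's documented structure; the entity id and the score are placeholders, not the values actually returned:

{
  "@context": {
    "@vocab": "http://schema.org/",
    "goog": "http://schema.googleapis.com/",
    "kg": "http://g.co/kg"
  },
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "EntitySearchResult",
      "resultScore": 100.0,
      "result": {
        "@id": "kg:/m/...",
        "name": "Sketches of Spain",
        "@type": ["MusicAlbum", "Thing"],
        "description": "Studio album by Miles Davis",
        "detailedDescription": {
          "url": "https://en.wikipedia.org/wiki/Sketches_of_Spain",
          "articleBody": "Sketches of Spain is a studio album by Miles Davis..."
        }
      }
    }
  ]
}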

Strip out the wrapping entities, and each search result returned is just a node from the Knowledge Graph, for which we get the id, type (category), name and description. Additionally, you may get your node linked to a Wikipedia page that provides a detailed description of the entity. That's what the red box highlights in the previous fragment. Visually, what we get is something like this:

[screenshot: graph view of the single entity returned, linked to its Wikipedia page]

This is nice: your text search returns an entity in Google's Knowledge Graph, and it's structured data… yes, but there's something missing. I don't think I'd be exaggerating if I said the most important bit is missing: the context, the connections, the other bits of the graph that this entity relates to. Let me explain what I mean: if I run the same search in a browser, I get a much richer result from the Knowledge Graph:

[screenshot: Knowledge Graph panel for “Sketches of Spain” in a browser search]

The dashed red box shows what the search API currently returns, and the bits connected with the arrows are the context that I'm talking about: the author of the album, the producers, the awards received, the genre… The data is obviously in the graph, and JSON-LD's capabilities for expressing rich linked data are crying out to be used. If that were not enough, the relationships are already defined in schema.org, so it looks like we have all we need. Actually, Google, you have all you need 🙂

Right, so based on this, what would a (WAY) richer result look like? Look at the little blue box that I added to the original query output:

[screenshot: the original query output extended with linked entities (the blue box)]

Or probably for a more intuitive representation, look at the graph that this new JSON-LD fragment represents:

[screenshot: graph represented by the extended JSON-LD fragment]

Wouldn’t it be cool? And not only cool but also extremely useful? Let me know your thoughts.

And yes, for those of you who may be wondering where I got the IRIs of the extra nodes and whether they are real or made up: I ran separate queries on the search API for each of the related entities and stitched it all together manually, so they are valid IRIs, just retrieved separately.

One final comment: If you’re interested in publishing/sharing connected data (graph data) as JSON-LD straight from your Neo4j Graph Database, have a look at this repo.


QuickGraph#3 A step-by-step example of RDF to Property Graph transformation

The dataset

For this example I am going to use a sample movie dataset from the Cayley project. It’s a set of half a million triples about actors, directors and movies that can be downloaded here. Here is what the dataset looks like:

</en/meet_the_parents> <name> "Meet the Parents" .
</en/meet_the_parents> <type> </film/film> .
</en/meet_the_parents> </film/film/directed_by> </en/jay_roach> .
</en/meet_the_parents> </film/film/starring> _:28754 . 
_:28754 </film/performance/actor> </en/ben_stiller> .
_:28754 </film/performance/character> "Gaylord Focker" .
</en/meet_the_parents> </film/film/starring> _:28755 .
...

One could argue whether this dataset is actual RDF or just a triple-based graph, since it does not use valid URIs or even the RDF vocabulary (note, for example, that instead of http://www.w3.org/1999/02/22-rdf-syntax-ns#type we find just type). But this would be a rather pointless discussion in my opinion. For what it's worth, the graph is parseable with standard RDF parsers, which is enough, and as we'll see, the problems derived from this can be fixed, which is the point of this post.


Loading the data into Neo4j

I'll use the RDF Importer described here for the data load. Now, there is something to take into account: even though the data set is called '30kmoviedata.nq', it does not contain quads but triples, so I tried the parser with the serialization format set to 'N-Triples'. The parser threw an error complaining about the structure of the URIs:

Not a valid (absolute) IRI: /film/performance/actor [line 1]

However, funnily enough, the file parses as Turtle. So if you want to give it a try, remember to set the second parameter of the importRDF stored procedure to 'Turtle' and run the import in the usual way. It took only 39 seconds to load the 471K triples on my laptop.
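
The call looks roughly like this, with the file location first and the serialization format second; the procedure's namespace and any extra parameters are assumptions about the importer version, so check its documentation for the exact signature:

CALL semantics.importRDF('file:///path/to/30kmoviedata.nq', 'Turtle') // namespace and signature assumed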

[screenshot: import summary, 471K triples loaded in 39 seconds]

Fixing the model

Fixing dense nodes representing categories

The first thing we notice is that, because the data set does not use the RDF vocabulary, the a <type> b statements are not transformed into labeled nodes, as would have happened if rdf:type had been used instead. So there are a couple of unusually dense nodes representing the categories (person and movie), because most of the nodes in the dataset are either actors or movies and are therefore linked to one or the other category node. The two dense nodes are immediately visible in a small sample of 1000 nodes:

[screenshot: sample of 1000 nodes showing the two dense category nodes]

We can get counts on the number of nodes connected to each of them by running this query:

MATCH (x)-[:ns1_type]->(t) RETURN t.uri, count(x)

[screenshot: node counts per category]

The natural way of representing categories in the Labeled Property Graph model is by using labels, so let's fix this! Here is the Cypher fragment that does the job:

MATCH (x)-[:ns1_type]->({uri : 'file:/film/film'}) 
SET x:Film

And once we have the nodes labeled with their categories we can get rid of the dense nodes and the links that connect the rest of the nodes to them.

MATCH (f {uri : 'file:/film/film'}) DETACH DELETE f

Exactly the same applies to the other category, ‘file:/people/person’:

MATCH (x)-[:ns1_type]->({uri : 'file:/people/person'}) 
SET x:Person 

MATCH (p {uri : 'file:/people/person'}) DETACH DELETE p
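
To double-check the relabelling, a quick count of nodes by label does the trick (a minimal sketch):

MATCH (n) RETURN labels(n) AS labels, count(*) AS nodes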

Fixing unneeded intermediate nodes holding relationship properties

In the tiny fragment that I copied at the beginning of the post, we can already see that the data set suffers from one of the known limitations of triple-based graph models: the impossibility of adding attributes to relationships. To do that, intermediate nodes need to be created. Let's have a look at the example in the previous data fragment graphically.

Ben Stiller plays the role of Gaylord Focker in the movie Meet the Parents, and when modelling this (think how you would draw it on a whiteboard), our intuition says something like this:


[screenshot: whiteboard-style model with a single relationship holding the character]

But in a triple-based model you need to introduce an intermediate node to hold the role played by an actor in a movie. Something like this:

[screenshot: triple-based model with an intermediate node holding the role]

This obviously creates a gap between what you conceive when modelling a domain and what is stored on disk and ultimately queried. You have to map what's in your head, what you drew on the whiteboard when sketching the model, to what the triple-based formalism forces you to actually create. Does this ring a bell? Join tables in the relational model, maybe? In your head it's a many-to-many relationship, but in the relational model it has to be modelled in a separate join table, an artificial construct imposed by the modelling paradigm that inevitably builds a gap between the conceptual model and the physical one. This ultimately makes your model harder to understand and maintain, and your SQL queries looooooonger and less performant. But not to worry, we'll fix this by using the property graph model, the one that is closer to the way we as humans understand and model domains.

But before we do that, let's look at another problem derived from this. This complex model introduces the possibility of data quality problems in the form of broken links. What if we have the first leg connecting our intermediate node with the movie, but no connection with the actor? It would be a totally meaningless piece of information. The pattern I'm describing would be expressed like this:

()-[r:ns2_starring]->(x) WHERE NOT (x)-[:ns0_actor]->()

And a query producing a ‘Data Quality’ report on this particular issue could look something like this:

MATCH ()-[r:ns2_starring]->(x) WHERE NOT (x)-[:ns0_actor]->() 
WITH COUNT(r) as brokenLinks
MATCH ()-[r:ns2_starring]->(x)-[:ns0_actor]->() 
WITH COUNT(r) as linked, brokenLinks
RETURN linked + brokenLinks AS total, linked, brokenLinks,
       toFloat(brokenLinks) * 100 / (linked + brokenLinks) AS percentageBroken

[screenshot: data quality report showing 0.03% broken links]

So 0.03% does not seem to be significant; probably the dataset was truncated in a careless way, which would explain the missing bits. Anyway, we can get rid of these broken links that don't add any value to our graph. Here's how:

MATCH ()-[r:ns2_starring]->(x) WHERE NOT (x)-[:ns0_actor]->() 
DETACH DELETE x

OK, so now we are in a position to get rid of the ugly and unintuitive intermediate nodes that I described before and replace them with relationships carrying the attributes.

MATCH (film)-[r:ns2_starring]->(x)-[:ns0_actor]->(actor)
CREATE (actor)-[:ACTS_IN { character: x.ns0_character}]->(film)
DETACH DELETE x

Deleted 136694 nodes, set 15043 properties, created 136694 relationships, statement executed in 7029 ms.

And voilà! Here is the final model zooming on the ‘Gaylord Focker’ area:

MATCH (actor)-[:ACTS_IN { character : 'Gaylord Focker' }]->(movie) 
RETURN * LIMIT 25


[screenshot: final model around the ‘Gaylord Focker’ area]

And to finish, one of our favourites at Neo4j: a recommendation engine, in this case for Hollywood actors. Who should Ben Stiller work with? We'll base this on the concept of friend-of-a-friend: if Ben has worked several times with actor X, and actor X has worked several times with actor Y, then there is a good chance that Ben might be interested in working with actor Y.

Here is the Cypher query that returns our best recommendations for Ben Stiller:

MATCH (ben:Person {ns1_name: 'Ben Stiller'})-[:ACTS_IN]->(movie)<-[:ACTS_IN]-(friend)
WITH ben, friend, count(movie) AS timesWorkedWithBen
ORDER BY timesWorkedWithBen DESC LIMIT 3 // limit to top 3 friends
MATCH (friend)-[:ACTS_IN]->(movie)<-[:ACTS_IN]-(friendOfFriend)
WHERE NOT (ben)-[:ACTS_IN]->(movie)<-[:ACTS_IN]-(friendOfFriend)
      AND friendOfFriend <> ben
RETURN friend.ns1_name AS friendOfBen, timesWorkedWithBen,
       friendOfFriend.ns1_name AS recommendationForBen,
       count(movie) AS timesWorkedWithFriend
ORDER BY timesWorkedWithFriend DESC LIMIT 50

Easy, right? And here are the recommendations:

[screenshot: top recommendations for Ben Stiller]

The following two visualisations give an idea of the portion of the graph explored with our recommendation query. This first one shows Ben’s friends and the movies where they worked together (~400 nodes in total):

[screenshot: Ben's friends and the movies where they worked together (~400 nodes)]

And the next shows Ben’s friends’ friends, again with the movies that connect them (~1800 nodes):

[screenshot: Ben's friends' friends with the movies that connect them (~1800 nodes)]

You can try to write something similar on the original triple-based graph using SPARQL, Gremlin or any other language, but I bet you it will be less compact, less intuitive and certainly less performant than the Cypher I wrote. Prove me wrong if you can 😉

What’s interesting about this QuickGraph?

The example highlights some of the modelling limitations of triple-based graph models like RDF, and shows how a model originally created as RDF can be transformed into one that is more intuitive and easier to query and explore using the Labeled Property Graph in Neo4j.