W3C

- DRAFT -

W3C HCLS Hackathon at MIT

30 Aug 2012

See also: IRC log

Attendees

Present
Bob_Powers, MITKiva
Regrets
Chair
EricP
Scribe
Lena, dbooth

Contents


<dbooth> zakim this is #hack

<bobP> yes I can hear

<bobP> sort of

<bobP> I can hear good

<Justin> well, well

<bobP> can you hear me now?

<Justin> no, because you're muted

<Justin> and Eric is scared of feedback

<bobP> hold on for coffee

<Justin> YES!

<Lena> any scribe volunteers?

<Lena> or should we "volunterize" someone ;)

Lightning talk by Maryam Panahiazar: Using Semantic Technology and Phyloinformatics in Translational Bioinformatics

<Lena> scribenick: Lena

Lightning talk by David Booth - RDF Pipeline Framework

automating data production pipelines

open source project

hosted on google code

<bobP> http://code.google.com/p/rdf-pipeline/

<dbooth> http://dbooth.org/2012/pipeline/

idea: when doing apps using sem web techs, we get data from somewhere and we massage it using some data production pipeline

slide 7 - modelled after the Cleveland Clinic pipeline

each node represents processing and data storage

e.g. patients represents some sort of patient data

when we have large amounts of data, it is used by many different applications

e.g. cardiology vs immunology

why do we want data production pipelines? -

from applications to RDF - from RDF to applications

dbooth: app developers tend to be myopic - they only see their application
... what should go on in the semantic data integration cloud to make it work?

the pipeline can have as many outputs as you want

pipeline itself is described as rdf

each node is an http node that "knows" rest

data and programming language agnostic

although it was designed around an rdf use case

the framework transforms code (e.g. shell scripts) into REST services

output can be RDF but not necessarily

lazy update policy

it won't update unless the data changes

data manipulation using SPARQL

because it is a rules language

use inserts to keep the data in the server

different from a workflow because there is no central controller!

each node sees and uses the same description of pipeline

somewhat event-driven


Elsevier - Semantic Web in HCLS for Commercial Applications, Iker Huerga

<iker> http://www.topquadrant.com/products/TB_install.php

<ericP> http://www.w3.org/People/Eric/ericP-foaf#ericP

Tutorial - SPARQL Rules, Iker Huerga

<dbooth> Download of TopBraid Composer Free edition: http://www.topquadrant.com/products/TB_install.php

<iker> http://topbraid.org/examples/purchases

<iker> SELECT ?x WHERE { LET (?x := smf:parseDate("12/3/09","MM/dd/yy")). }

drag and drop topbraid-spin-spin.ttl into imports tab

right click on spin function

create subclass

name it ISO8601

<iker> http://www.topquadrant.com/spin/tutorial/SPARQLRulesTutorial.pdf

click spin constraint and select "create from spin template"

go to predicate, click on + sign

click rdf:property and then arg1

<iker> This transforms a date into mmddyy ISO8601

click on spin body - add empty row

<iker> SELECT ?x WHERE { LET (?x := smf:parseDate("12/3/09","MM/dd/yy")). }

<dbooth> Then modify it to be: SELECT ?x WHERE { LET (?x := smf:parseDate(?arg1,"MM/dd/yy")). }

<iker> SELECT ?x WHERE { LET (?x := :ISO8601("3/6/09")). }

(paste the above query in sparql editor)

now go to right hand side and click add property

select datatypeproperty

call it invoiceDate

add range

xsd:date

<iker> CONSTRUCT {?s :invoiceDate ?idate } WHERE { ?s purchases:date ?date . LET (?idate := :ISO8601(?date)) . }

double click purchase

spin rule - add empty row

paste the construct that works

replace ?s with ?this

click ok (tiny little ok on the right hand side of the textbox)

<dbooth> so the spin:rule becomes:

<dbooth> [[

<dbooth> CONSTRUCT {

<dbooth> ?this :invoiceDate ?idate .

<dbooth> }

<dbooth> WHERE {

<dbooth> ?this purchases:date ?date .

<dbooth> BIND (:ISO8601(?date) AS ?idate) .

<dbooth> }

<dbooth> ]]

go to instances tab, double click on one of them

go to "Inference" menu, click "Run Inferences"

<iker> http://topbraid.org/spin/api/1.2.0/index.html

Lightning talk: Justin Lancaster, Integrated Monitoring, Modeling and Management

<iker> sorry guys but I need to leave to catch the train, very nice hackathon, congratulations Eric and thx everyone

<inserted> scribenick: dbooth

Using Jena, by Ian Jacobi

Luke: Jena also has a Dataset interface that has a default graph and a set of named graphs, but it's read only.

<luke> Dataset.getDefaultModel(), Dataset.getNamedModel(String graphURI) will get you the Model objects out of the Dataset object. You could probably modify those.
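A minimal sketch of that Dataset access pattern, assuming an in-memory dataset from DatasetFactory.createMem() and a hypothetical named-graph URI (illustrative only, not code shown at the hackathon):

[[

package org.example.simplejenaexample;

import com.hp.hpl.jena.query.Dataset;
import com.hp.hpl.jena.query.DatasetFactory;
import com.hp.hpl.jena.rdf.model.Model;

public class SimpleDatasetExample {

    public static void main(String[] args) {
        // In-memory dataset: one default graph plus named graphs.
        Dataset dataset = DatasetFactory.createMem();

        // Pull the default graph out as a Model and modify it through the Model API.
        Model defaultModel = dataset.getDefaultModel();
        defaultModel.add(
                defaultModel.createResource("http://example.org/s"),
                defaultModel.createProperty("http://example.org/p"),
                "hello");

        // Named graphs come out the same way, keyed by a graph URI (hypothetical here).
        Model named = dataset.getNamedModel("http://example.org/graphs/g1");

        System.out.println("default graph size: " + defaultModel.size());
        System.out.println("named graph size: " + named.size());
    }
}

]]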

<ericP> sample data for jena hacking demos

<ericP> slides

<ericP> also linked from the agenda

Latest release of the m2e (Maven) Eclipse extension: http://download.eclipse.org/technology/m2e/releases

<Justin> RE: Lightning Talk -- JUSTIN LANCASTER -- kwiKBio(tm) Project -- slides accessible at http://biomedserver.com/kwiKBio%20Project_AUG%202012.pdf

linked from http://www.eclipse.org/m2e/download/

Eclipse IDE for Java Developers: http://www.eclipse.org/downloads/packages/eclipse-ide-java-developers/junor

After Eclipse startup, File -> New -> Project -> Maven -> Maven Project

scribe: select "Create a simple project"
... Group id: org.example
... Artifact Id: simplejena.example
... Finish

Then double-click on: Package Explorer -> simplejena.example -> src -> pom.xml

scribe: go to Dependencies tab
... Group Id: org.apache.jena
... Artifact Id: jena-core
... Version: 2.7.3
... Okay
... SAVE
... and that will cause it to load jena from maven, and put it into the classpath.

Then note in Package Explorer->simplejena.example->Maven Dependencies, that jena stuff is there.

Then File->New->Class

scribe: Package: org.example.simplejenaexample
... Name: SimpleJenaExample
... Select "public static voic main(...)

http://purl.org/hcls/2007/kb-sources/addgene.ttl

<ericP> prefix sc: <http://purl.org/science/owl/sciencecommons/>

http://purl.org/science/articles/pmid/15169870

Complete code in SimpleJenaExample.java tab:

[[

package org.example.simplejenaexample;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;
import com.hp.hpl.jena.rdf.model.StmtIterator;
import com.hp.hpl.jena.vocabulary.RDFS;

public class SimpleJenaExample {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // Read the addgene data into an in-memory model.
        Model m = ModelFactory.createDefaultModel();
        m.read("http://purl.org/hcls/2007/kb-sources/addgene.ttl", "N3");

        // The sc:is_described_in property, using the sc: prefix defined in the data.
        Property is_described_by = m.createProperty(m.getNsPrefixURI("sc"), "is_described_in");

        // Find everything described in PMID 15169870 and print its rdfs:label values.
        StmtIterator itr = m.listStatements(null, is_described_by,
                m.createResource("http://purl.org/science/articles/pmid/15169870"));
        while (itr.hasNext()) {
            Resource subject = itr.nextStatement().getSubject();
            StmtIterator name_itr = subject.listProperties(RDFS.label);
            while (name_itr.hasNext()) {
                System.out.println(name_itr.nextStatement().getString());
            }
        }
    }
}

]]

<ericP> slide4 task

Ian: To use SPARQL, you need to add ARQ as a dependency.

To add the ARQ dependency: Package Explorer -> simplejena.example -> src -> pom.xml -> (right click) -> Maven -> Add dependency

scribe: Group Id: org.apache.jena
... Artifact Id: jena-arq
... Version: 2.9.3
... OK

<ericP> java -cp $(echo apache-jena-2.7.3/lib/*.jar | tr ' ' ':'):. Slide4

Luke: Warning "log4j:WARN No appenders could be found for logger (com.hp.hpl.jena.sparql.mgt.ARQMgt)." can be ignored. Jena uses an abstract logger that will bind at runtime to your preferred logger, so this warning is saying that it failed to do so. You can correct it by adding a logger to your maven dependencies.

<ericP> query demo

Working code is in the above pastebin uri.
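For reference, a minimal sketch of running a SPARQL SELECT with ARQ over the same addgene data; the class name and query here are illustrative, not the code from the pastebin or slides:

[[

package org.example.simplejenaexample;

import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class SimpleArqExample {

    public static void main(String[] args) {
        // Load the same addgene data used in the listStatements example.
        Model m = ModelFactory.createDefaultModel();
        m.read("http://purl.org/hcls/2007/kb-sources/addgene.ttl", "N3");

        // Same question as before, expressed as a SPARQL query instead of Model API calls.
        String queryString =
            "PREFIX sc: <http://purl.org/science/owl/sciencecommons/>\n" +
            "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>\n" +
            "SELECT ?label WHERE {\n" +
            "  ?s sc:is_described_in <http://purl.org/science/articles/pmid/15169870> .\n" +
            "  ?s rdfs:label ?label .\n" +
            "}";

        Query query = QueryFactory.create(queryString);
        QueryExecution qe = QueryExecutionFactory.create(query, m);
        try {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution soln = results.nextSolution();
                System.out.println(soln.getLiteral("label").getString());
            }
        } finally {
            qe.close();
        }
    }
}

]]

This needs the jena-arq dependency added above in addition to jena-core.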

Luke: For using SPARQL 1.1 Update, you can use a Jena Dataset.


<luke> For Jena SPARQL update syntax on a local Model, the syntax looks like:

<luke> Model model;

<luke> String query;

<luke> UpdateRequest request = UpdateFactory.create(query);

<luke> UpdateAction.execute(request, model);
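A self-contained sketch of that update pattern (the prefix and INSERT DATA payload are placeholders; see Luke's link below for his complete, working example):

[[

package org.example.simplejenaexample;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.update.UpdateAction;
import com.hp.hpl.jena.update.UpdateFactory;
import com.hp.hpl.jena.update.UpdateRequest;

public class SimpleUpdateExample {

    public static void main(String[] args) {
        // A local, in-memory model to run the update against.
        Model model = ModelFactory.createDefaultModel();

        // Placeholder SPARQL 1.1 Update string; any INSERT/DELETE request works here.
        String query =
            "PREFIX ex: <http://example.org/>\n" +
            "INSERT DATA { ex:subject ex:predicate \"hello\" }";

        UpdateRequest request = UpdateFactory.create(query);
        UpdateAction.execute(request, model);

        // The model now contains the inserted triple.
        model.write(System.out, "N3");
    }
}

]]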

<mary> thanks for SPARQL editor

SPARQL Editor: SparqlEd, From Sindice

Interactive SPARQL editor SparqlEd from Sindice: http://www.sindicetech.com/blog/?p=14&preview=true

http://hcls.sindicetech.com/sparql-editor/

<luke> complete, working Jena update example: http://pastebin.com/faWHGfY2

<Justin> THANKS TO ERIC, LENA and all other organizers and contributors for a really terrific session!! Ciao. Justin

YES, THANK YOU ERIC AND HELENA FOR A FANTASTIC HACKATHON!

AND THANK YOU ENTAGEN, W3C and DERI FOR SPONSORING IT!

ADJOURNED

Summary of Action Items

[End of minutes]

Minutes formatted by David Booth's scribe.perl version 1.136 (CVS log)
$Date: 2012/08/30 21:38:07 $
