See also: IRC log
<scribe> scribe: jeffw
agenda+ Open Linked Data Use Case: Progress and Next Steps
Over our initial meetings we've actually accomplished quite a bit: we've educated ourselves on the eXtreme Design (XD) approach to knowledge engineering and documented it on our wiki, we've downloaded and explored the NeOn Toolkit with the XD and SPARQL plugins, and we've begun to expand on some initial use cases following the XD approach.
We've outlined on the wiki some key decision components and we have a sample underway where we used the tools to specialize the Transition pattern to address one of the use cases (information flow). We've also prepared wiki pages providing basic introductions to some of the semantic standards and related domain standards for emergency management like EDXL.
So where we are now is that we're ready to really begin to put some meat on the bones. We're ready to start a significant modeling effort for the 8 or 10 decision components we've identified. So this brings us to today's Agenda. I suggested the following topics:
eva: we will discover what is missing from the use cases when we start modeling; we'll discover the missing points as we go
don: we have some basic use cases, so let's just dig into the details
Shouldn't there be a connection between the use cases and the components?
eva: yes, we'll discover missing use cases as we go, for example recording decisions or the core decision component; the idea is that each use case produces one or a couple of small modules that tend to solve that particular problem
jeff summarized use cases
eva: I believe we are covering
core decision representation, and interoperability implies a core
representation of decisions; we were also going to talk about
"event" as a notion important for decisions, so maybe that is
already covered in the use cases
... I suggest 4 or 5 domain-centered use cases, which result in
3 or 4 modules (interconnected ontologies); we don't need one
use case for each individual small thing
... of course, we want to ensure we cover what we want to do
with the decision format, because from those we derive the
competency questions, and we can do this iteratively
don: I agree with eva that we should not necessarily add a lot of stuff, but just break down the ones we have
What do you recommend, eva, for how to do the modeling? I'm not sure we all have time, but why not divide these between us?
eva: it's important that on the next
telecon we discuss what patterns you want to reuse;
this could be one way to synchronize between us, that we are
talking about the same things, that we have the same model in
our heads
... I propose we start modeling, and then we all present what
patterns to reuse and if no patterns, then what are the
modeling issues, and then we can post our draft solutions on
the wiki
... a good idea is to post a picture of the solutions, so I
need to check for a visualization plug-in, because a nice pictorial
demo of the model is more intuitive than the OWL file; we
should also pass the OWL file around, but for a meeting it is
better to have a graphical illustration
don: I was thinking I could do situational awareness for emergency services
eva: I would prefer the open-linked data use case
jeff summarized his transition pattern work (see info at end of these notes)
jeff summarized his perspective on events (see info at end of these notes)
eva: I'm familiar with the
concept, and I could look at what is done in the ontology domain;
heavier ontologies do try to characterize events, so I
could review what is there. On the lightweight issue, one
of the common misconceptions of ontologies
... and vocabularies on the web is that you need to use the
complete thing, say you model events with time, place, etc.;
the whole idea with OWL ontologies is that when you add data,
you add only what is applicable
... even if a restriction says an event always has a time, it
doesn't matter if you don't add that time; you already have the
possibility of an arbitrarily sparse graph of data
... You can have optional things, so that you don't have to use
the whole thing; in total it is still small and clear enough
to be easily understandable
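Eva's point about sparse graphs can be sketched with a toy triple store in plain Python (the `ex:` identifiers and event names below are invented for illustration; a real model would use OWL/RDF tooling, not tuples):

```python
# Toy illustration: a graph of triples where each event carries only
# the properties that are applicable, not everything the ontology allows.

triples = {
    # event1 has a label AND a time
    ("ex:event1", "rdf:type", "ex:Event"),
    ("ex:event1", "rdfs:label", "Flood warning issued"),
    ("ex:event1", "ex:hasTime", "2010-05-27T14:00"),
    # event2 is still a valid Event even though no time was recorded
    ("ex:event2", "rdf:type", "ex:Event"),
    ("ex:event2", "rdfs:label", "Shelter opened"),
}

def objects(subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Both events are found by type...
events = sorted(s for s, p, o in triples if p == "rdf:type" and o == "ex:Event")
# ...but only event1 yields a time; the sparse graph is still queryable.
times = objects("ex:event1", "ex:hasTime")
missing = objects("ex:event2", "ex:hasTime")
```

Querying for `ex:hasTime` on `ex:event2` simply returns nothing, rather than making the data invalid, which is the open-world behavior Eva describes.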
don: I think that for "event" we
need to think more in terms of how managed programming languages
handle the representation of events, versus Cursor on Target and
UCore, because the Cursor on Target model (what, where, when,
with an option for details)
... I don't think it would be sufficient for this, and if you
add in more stuff, then you may not need it, so we should look at
what is there and tie in event-based representation from
computing
What would be an example?
don: Say you have a socket
listener; one way is a thread that polls whether you have any
data, another is that I have a general thing called an event, and
when you get data in this thing, come poke me
... So looking at the architecture for a generic construct that
has specific info passed into it would be good
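Don's contrast between polling and "come poke me" can be sketched as a minimal observer pattern in Python; the `Event` class and the payload fields here are invented for illustration, standing in for the generic construct he describes:

```python
class Event:
    """A generic event construct: handlers register once, and are
    'poked' with specific data when the event fires (no polling loop)."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        """Register a callback to be invoked when the event fires."""
        self._handlers.append(handler)

    def fire(self, payload):
        """Push specific info into the generic construct."""
        for handler in self._handlers:
            handler(payload)

received = []
data_arrived = Event()                    # the generic construct...
data_arrived.subscribe(received.append)   # ...with a registered callback
data_arrived.fire({"what": "report", "where": "sector 4"})
```

The construct itself stays generic; only the payload (the what/where/when details) is specific, which matches the architecture Don suggests looking at.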
I always like to have URIs so you can include the info or just link to it
eva: if you have different
components, so that you modularize your data as well, it's like
linked data: you can relate the datasets without always
using the complete cloud of data; a similar structure here
would be a good solution
... for the next meeting, Don had some ideas of what we could look
at, so why don't we do some collection of background material,
and we can model events in the meantime. I'd like to
look at existing event models, and Don could put something on the
wiki about the event-driven software approach.
taxonomies and understanding them
eva: I will post something on the wiki about the different vocabularies
I also think we should continue to reach out to RPI and other folks working in this open linked data area, as well as other W3C members
Where I think our work is important for the open linked data initiative is twofold: we can help expand the data sources that are RDF-enabled by applying our work to domains like emergency management,
and we can provide an important use case for open linked data, namely making decisions.
That's all the time we have for today. Thanks to everyone for calling in.
(Here are some notes on topics above, since I couldn't scribe well while talking)
We have a set of use cases and the question is whether these are sufficient to begin and also whether they are structured at the right level. First, we have the Information Flow use case
which utilizes the "decision state" component of our format. The idea is that one moves through different states in the decision process and one should be able to represent that and query when a certain state
began or ended and determine how much time was spent in given states. Since our organizations want to improve information flow, it's important to track the amount of time needed to get and pass the information
and potentially develop a metric to measure whether our efforts to improve information flow are working. The Open Linked Data use case is intended to ensure and demonstrate that our decision format
can operate on and utilize linked data in its components, including as the subject matter of decisions, as options, and as criteria, and that it supports making decisions across distributed sets of data utilizing the semantic standards,
including SPARQL as a technique to apply the criteria to the options to produce an assessment (an ordered list of the options based on the criteria).
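The assessment idea, applying criteria to options to produce an ordered list, can be sketched in plain Python. In practice the ranking would come from a SPARQL query over linked data; the option names, criteria, and weights below are invented for illustration:

```python
# Toy assessment: score each option against weighted criteria and return
# an ordered list (best first), standing in for what a SPARQL ORDER BY
# over the decision data would produce.

options = {
    "open_shelter_A": {"capacity": 0.9, "travel_time": 0.4},
    "open_shelter_B": {"capacity": 0.6, "travel_time": 0.8},
}
criteria_weights = {"capacity": 0.7, "travel_time": 0.3}

def assess(options, weights):
    """Rank option names by their weighted criteria score, best first."""
    def score(name):
        return sum(weights[c] * options[name][c] for c in weights)
    return sorted(options, key=score, reverse=True)

ranking = assess(options, criteria_weights)
```

Here `open_shelter_A` scores 0.75 against `open_shelter_B`'s 0.66, so the assessment (the ordered list of options based on the criteria) puts A first.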
The Interoperability use case is intended to show that decisions and their subject matter can be shared across tool sets utilizing these standards to enable interoperability.
The use case for Capturing Expertise is to suggest that if decisions are captured and managed in a standard format, then the decision components of anyone who is a good decision-maker, such as the criteria for making
a certain type of decision, can be linked to and reused by others. In this way, expertise can be captured and reused.
The use case for Templates for Efficiency is to show that the same reuse of components generated by others can improve efficiency of the decision-making process as a whole.
Finally, the Situational Awareness use case is one where we recognize that decisions are being made constantly in our organizations; they are not well documented and managed, but they could be,
and if they were, including the state of the decision-making and aspects such as which options are currently being contemplated, then situational awareness would be vastly improved and enabled.
(After some discussion that these use cases are sufficient to begin modeling, the participants picked their favorite to begin.)
<scribe> ACTION: Jeff will move forward to model the Information Flow use case. [recorded in http://www.w3.org/2010/05/27-decision-xg-minutes.html#action01]
<scribe> ACTION: Don will move forward to model the Situational Awareness use case and will report next time on what he learns about how managed program languages handle events in terms of processing/updating and the impact on the model. [recorded in http://www.w3.org/2010/05/27-decision-xg-minutes.html#action02]
<scribe> ACTION: Eva will move forward with the Open Linked Data use case and report next time on what she learns about taxonomies in use on this data. [recorded in http://www.w3.org/2010/05/27-decision-xg-minutes.html#action03]
I used the NeOn Toolkit with the eXtreme Design (XD) plugin and the SPARQL plugin. I specialized the Transition pattern by importing it and subclassing a few of the classes to give them more specific names
for this application, e.g. DecisionState. I then did further subclassing to suggest that there might be sets of states from different perspectives, one of which is "Information Flow". Under that
subclass, I then added subclasses for types of states, such as InformationGathering or InformationAnalysis. I then moved forward to create individuals/instances of those classes. So I created specific
time intervals, which I then linked in to specific instances of states for a given sample decision, and then I linked the states into some transitions. I then moved forward to use a SPARQL
query to show that I had represented the information that I needed in the form I needed it. This was a good first step for me and I learned much about the use of the tool. I will pair-design with
Eva as I move forward with this use case, since she has been kind to help me with several questions I've had regarding use of the toolkit, the plugin and the ontology patterns.
Two items worth mentioning: first, we've begun an outreach effort to some of the folks at RPI, and we should continue to ensure that we proceed with this use case as well informed as possible and
that our thinking aligns with the ongoing work in this area. If we can get someone to speak at one of our meetings on this topic, someone who can be a resource for us to check
with as we proceed, or better yet, someone from that work to participate in our incubator, that would be a great benefit to all participants.
Second, the taxonomies are key to utilizing the open data sets, in other words, what terms are used in the data sets. Our education on this topic will be key for us successfully tying into the data sets.
Another point is that there are tools and documentation on the RPI site about how to RDF-enable data sets, and this may be something we should do as part of our emergency management domain work.
Meeting was adjourned on time and the final notes were added to flesh out the scribe's comments which were not captured at the time.
[scribe.perl output] Scribe: jeffw. Possibly present: Eva, dmcgarry, don, eblomqvi. Agenda: http://www.w3.org/2005/Incubator/decision/wiki/Decision_Mtg_5_Agenda Date: 27 May 2010. Minutes: http://www.w3.org/2010/05/27-decision-xg-minutes.html