The purpose of the Open Annotation Community Group is to work towards a common, RDF-based specification for annotating digital resources. The effort will start by working towards a reconciliation of two proposals that have emerged over the past two years: the Annotation Ontology and the Open Annotation Model. Initially, editors of these proposals will closely collaborate to devise a common draft specification that addresses requirements and use cases identified in the course of their respective efforts. The goal is to make this draft available for public feedback and experimentation in the second quarter of 2012. The final deliverable of the Open Annotation Community Group will be a specification, published under an appropriate open license, that is informed by the existing proposals, the common draft specification, and the community feedback.
Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.
In the early 1990s, the creators of Netscape apparently built a feature that enabled each web page to be annotated by those visiting it, as a way for viewers to discuss the page's content. But according to an account produced in 2013 by a nonprofit called [Hypothesis], the feature was turned off.
Now that same nonprofit is working to bring that functionality back by offering an app that allows for “open annotation.” As it’s described, this is a “layer over the web,” based on open standards, that allows people to comment even when and where comments aren’t allowed. The project is based on the annotation standards for digital documents developed by the [W3C Web Annotation Working Group].
The organization is especially interested in wooing [educational users] (both K-12 and higher ed) to serve as test pilots, and it has signed on its first two institutions. The pilot for [Hypothesis] currently includes [California State University Channel Islands (CSUCI)] and North Carolina's [Davidson College].
The purpose of the pilot is to see whether students and teachers engage more deeply with course content and with each other when they use the app. People can add comments and annotate documents that are being used in courses. It’s available as an external tool in whatever learning management system the school is using.
The program uses the [Learning Tools Interoperability (LTI) standard] to integrate with LTI-compliant LMSes, including [Instructure Canvas], [Blackboard Learn], [D2L Brightspace], [Moodle], [Sakai] and [Schoology]. Eventually, [Coursera] and [edX] are expected to be added to that list.
The [pilot at CSUCI], which uses Canvas, will span the spring 2019 semester and take input from 12 instructors, who have committed to using Hypothesis at least once in their courses.
“We’re very excited to have our faculty as part of the Hypothesis pilot at CSUCI,” said Instructional Technology Lead Michael McGarry, in a statement. “After just running [an] initial kick-off webinar, there’s already a lot of buzz about the possibility of putting this level of engagement in the hands of students. Ideas have begun flowing. Light bulbs are turning on. I’m excited to see our faculty excited.”
Davidson Lead Instructional Designer Sundi Richard said more Hypothesis users are surfacing in classes and using it for their own independent or social reading. “We see this pilot as a way to expand that usage to groups who might not want to just jump into web or social annotation,” she suggested. “We hope this leads to varied uses of the tool for meaningful reading and engagement in digital spaces.”
Hypothesis is open to more schools joining the pilot effort. But even if they don’t, they can try out the software. Implementation guides for the various LMS integrations are available [on the Hypothesis website].
Eventually, schools that make “substantial use” of the utility will be expected to join the organization’s “sustaining partnership program” and pay an annual fee.
Following the 24 June OA Data Model Rollout at the University of Manchester in the United Kingdom, a face-to-face meeting of the OA Community Group will be held. The meeting will take place Tuesday, 25 June, 8:30 AM – 4:00 PM in Manchester, and will be a working meeting focused on next steps for the Group now that Community Draft 2 of the Open Annotation Specification has been released.
The meeting agenda will include the following topics:
• Should rdf:List be a mix-in rather than a referenced object for oa:List?
• Should we just use Alt, Bag, and List directly, making oa-specific classes (e.g., oa:List) unnecessary?
• oa:SemanticTag as a class is still awkward; is there a better solution?
• Can we offer a mapping to PROV for specific resources (e.g., for time)?
• Rather than stick with our current armchair list, should we provide more clarifications and validation for Motivations?
• Should we spin off Specific Resources and Selectors into their own independent group?
• When would/should we move to a Working Group?
• What will we do about cnt if we move to a Working Group, given that it is not going to be stable?
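Several of these agenda items concern how an annotation points into part of a resource (Specific Resources and Selectors) and how bodies are carried (the cnt vocabulary). As a rough, non-normative sketch of that pattern, the following builds JSON-LD for an annotation whose target is an oa:SpecificResource located by an oa:TextQuoteSelector. All URIs and quoted strings are placeholders, and the context URL should be checked against the current draft.

```python
import json

# Illustrative only: an Open Annotation whose target is a quote inside a
# larger page, located by an oa:TextQuoteSelector. The body is inline text
# expressed with the cnt (Content in RDF) vocabulary. URIs are placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/oa-context-20130208.json",  # check against current draft
    "@id": "http://example.org/anno1",
    "@type": "oa:Annotation",
    "motivatedBy": "oa:commenting",
    "hasBody": {
        "@type": "cnt:ContentAsText",
        "chars": "This claim needs a citation."
    },
    "hasTarget": {
        "@type": "oa:SpecificResource",
        "hasSource": "http://example.org/page.html",
        "hasSelector": {
            "@type": "oa:TextQuoteSelector",
            "exact": "the disputed sentence",   # the quoted text itself
            "prefix": "evidence against ",      # disambiguating context before
            "suffix": " was presented."         # disambiguating context after
        }
    }
}

print(json.dumps(annotation, indent=2))
```

The selector-based approach lets the annotation survive small edits to the source page, since the quote plus its prefix/suffix context can be re-located rather than relying on a fixed character offset.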
The goal is not to make major changes to the specification, but rather to develop best practices and provide clarifications that will facilitate adoption and proliferation of the data model. The meeting will be led by Community Group Co-Chairs Paolo Ciccarese and Rob Sanderson. It is open to all registered Participants in the Open Annotation Community Group, ensuring that those attending have agreed to the W3C Community Contributor License Agreement. Please RSVP by email to me (email@example.com) no later than 20 June.
When you RSVP, I'll forward you additional logistics details, including a recommended hotel. The full final agenda will be posted to the Community Group listserv a week in advance of the meeting.
Visiting Project Coordinator
Center for Informatics Research in Science and Scholarship
The Graduate School of Library and Information Science
We are pleased to announce three public meetings introducing the Open Annotation Data Model Community Specification. These day-long public rollouts, carried out in concert with the Annotation Ontology and the Open Annotation Collaboration, and made possible by generous funding from the Andrew W. Mellon Foundation, will inform digital humanities and sciences computing developers, curators of digital collections and scholars using digital content about the W3C Open Annotation Community Group’s work.
Participants will learn about the data model’s core features and advanced modules through tutorials, a showcase of existing implementations, Q&A sessions with community implementers and live demonstrations. Topics will include:
• The Open Annotation Data Model,
• The W3C Open Annotation Community Group,
• Existing implementations,
• Developer tools & resources.
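As a taste of what the tutorials cover, the core pattern of the data model is a small RDF graph: an oa:Annotation resource linked to a Body (the comment) and a Target (the resource being annotated). The sketch below is illustrative only, standing in a plain Python set of triples for a real RDF store; class and property names follow the Community Specification, while the example.org URIs are placeholders.

```python
# Core Open Annotation pattern: Annotation -> hasBody / hasTarget.
# A set of (subject, predicate, object) tuples stands in for an RDF graph.
OA = "http://www.w3.org/ns/oa#"
EX = "http://example.org/"

triples = {
    (EX + "anno1", "rdf:type",         OA + "Annotation"),
    (EX + "anno1", OA + "hasBody",     EX + "comment1"),
    (EX + "anno1", OA + "hasTarget",   EX + "page.html"),
    (EX + "anno1", OA + "motivatedBy", OA + "commenting"),
}

def targets_of(annotation_uri):
    """Return every target resource of the given annotation."""
    return {o for (s, p, o) in triples
            if s == annotation_uri and p == OA + "hasTarget"}

print(sorted(targets_of(EX + "anno1")))
```

Because bodies and targets are full resources rather than plain strings, the same pattern extends naturally to multiple targets, non-textual bodies, and the selector-based targeting covered in the advanced modules.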
Rollout times and places:
• U.S. West Coast Rollout – 09 April 2013 at Stanford University (RSVP)
• U.S. East Coast Rollout – 06 May 2013 at the University of Maryland (RSVP)
• U.K. Rollout – 24 June 2013 at the University of Manchester (RSVP)
In addition to regular teleconferences to conduct general and specific Community Group business, we are also planning occasional, special-purpose face-to-face meetings as needed to move forward on Group priorities. The first of these will take place September 18-19 in Chicago, IL (USA) and will be a working meeting to advance development of the Open Annotation Core Data Model and Extension Specification. The meeting will begin at 9 AM on Sept. 18 and conclude by 2 PM on Sept. 19.
This meeting will be co-chaired by the specification editors (Paolo Ciccarese, Rob Sanderson, and Herbert Van de Sompel), all of whom have confirmed their plans to attend in person. The meeting is open to all registered Participants in the Open Annotation Community Group, ensuring that those attending have agreed to the W3C Community Contributor License Agreement, but please keep in mind that the Group has been growing in recent days, and the venue we have been able to secure for this meeting has limited capacity. If you are interested in attending, please RSVP by email to Jacob Jett (firstname.lastname@example.org) and me (email@example.com) no later than August 21st, sooner if possible, since available seats will be allocated on a first-come, first-served basis.
For those who RSVP, we'll forward additional logistics details, including a recommended hotel. The full final agenda will be posted to the Community Group listserv a week in advance of the meeting. We are hoping that interested Community Group members will help us summarize in advance a few of the recent discussions about features and aspects of the data model, including proposals for specific modifications to the specifications. These "issue briefs" (e.g., 2 or 3 pages each) will be circulated to the entire Group prior to the meeting. More about this in a subsequent post.
We wish to acknowledge our appreciation to the Open Annotation Collaboration and the Andrew W. Mellon Foundation for underwriting logistics and venue expenses for this meeting; however, please note that in general we do not have budget to provide travel support for attendees.
Tim Cole, co-coordinator for outreach
University of Illinois at Urbana-Champaign
There’s a spectacular set of annotation challenges lurking in the documents underlying this N.Y. Times piece about standardized reading tests for U.S. schoolchildren. Such tests are very controversial among U.S. educators, and this piece is amusing for the silliness of a particular question. The setup, quoted in the piece, is an excerpt from a story by a well-liked children’s author. Also quoted are the questions, along with multiple-choice answers, about various aspects of the narrative, ostensibly to test the children’s ability to make deductions about the story and to draw inferences from it.
The Times piece is a fun read, but—and I am serious about this—a good set of competency questions for an annotation ontology could at least focus on something, informally, like this:
Annotate the test so that, for each of the test questions and each of the possible answers, the annotation identifies what evidence the excerpted narrative presents for or against that answer.
There are probably other opportunities for competency questions, for example about annotation of the Times piece itself. One of the most interesting challenges will arise when you get to the part that seems to be an explanation from the test designers of how the correct answers are in fact determined.