Community & Business Groups

Argumentation Community Group

The Argumentation Community Group will facilitate and promote the use of the Web for all forms of argumentation. The group will discuss and design both argumentation representation formats and systems.

Note: Community Groups are proposed and run by the community. Although W3C hosts these conversations, the groups do not necessarily represent the views of the W3C Membership or staff.


Moral Reasoning Systems


Automated reasoning is a branch of artificial intelligence dedicated to understanding different aspects of reasoning; moral reasoning is reasoning concerned with morality. Automated moral reasoning is a research topic concerned with the understanding, modeling and simulation of moral reasoning.

Moral Reasoning Systems and Education

Five varieties of moral reasoning systems with educational applications are considered.

Firstly, there is a variety of moral reasoning systems with console-based or text-based user interfaces, a variety which possibly makes use of custom programming languages. This variety requires some specialized expertise to use, resembling, perhaps, computer algebra systems, automated theorem provers and proof assistants.

Secondly, there is a variety of moral reasoning systems which interoperates with software applications requiring less specialized expertise to use, software whose users needn’t be computer programmers. Examples include decision support systems, software which supports individual or organizational decision-making activities.

Thirdly, there is a variety of moral reasoning systems with natural language and multimodal user interfaces. This variety includes dialog systems, virtual humans, intelligent personal assistants and intelligent tutoring systems. It can conveniently answer, discuss and advise large numbers of users with regard to questions that they might ask, including in educational contexts.

Fourthly, there is a variety of moral reasoning systems which interoperates with the processing and generation of stories, fables, parables or exempla. This variety can be of use in processing the moral messages of literary texts and in generating literary texts which teach moral messages.

Fifthly, there is a variety of moral reasoning systems which interoperates with interactive digital entertainment, serious games, simulations and learning environments. This variety works alongside virtual interactive storytellers, virtual directors, drama managers, experience managers and other educational narrative technologies.

Comparative Moral Reasoning

Moral reasoning systems could load “configuration and data” before providing outputs for inputs or questions. Such configuration and data include: axiomatic systems, philosophies, schools of thought, principles, beliefs, values, models of characters, self-models or role models, and generic models of cultural stereotypes. How system outputs vary with variations of the loaded configuration and data is of interest.

We can envision systems which can simulate moral reasoning per the stages of moral development from models, for instance Kohlberg’s. We can envision systems which can simulate moral reasoning per multiple belief systems, philosophies or schools of thought. We can envision systems which can compare reasoning across various configurations or loaded data, across various philosophies or schools of thought, and can provide explanation and argumentation as components of system output.
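As a minimal sketch of comparative moral reasoning, assuming a toy representation in which a “configuration” is a named set of weighted principles, the same action can be scored under several loaded configurations and the verdicts compared. All names, features and weights below are illustrative assumptions, not an established model.

```python
# Hypothetical sketch: a moral reasoning system loads a "configuration"
# (a named set of weighted principles) and scores candidate actions, so
# that outputs can be compared across philosophies or schools of thought.

def evaluate(action_features, configuration):
    """Score an action under one loaded configuration of weighted principles."""
    return sum(configuration.get(feature, 0.0) for feature in action_features)

def compare(action_features, configurations):
    """Compare verdicts on the same action across several configurations."""
    return {name: evaluate(action_features, config)
            for name, config in configurations.items()}

# Two toy "schools of thought", expressed as principle weights (assumptions).
configurations = {
    "consequentialist": {"reduces_harm": 2.0, "breaks_promise": -0.5},
    "deontological":    {"reduces_harm": 0.5, "breaks_promise": -2.0},
}

# An action that reduces harm but breaks a promise.
verdicts = compare({"reduces_harm", "breaks_promise"}, configurations)
# The two configurations disagree: positive under one, negative under the other.
```

Explanation and argumentation components could then be layered on top, reporting which loaded principles drove each verdict.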

Automated Moral Reasoning and Planning

Automated planning and scheduling is a branch of artificial intelligence concerned with the realization of strategies or action sequences. Planning algorithms are often instrumental to generating the behavior of intelligent systems and robotics.

Machine ethics, or computational ethics, is a part of the ethics of artificial intelligence concerned with the moral behavior of artificially intelligent systems. Moral reasoning components should be interoperable with planning and scheduling components.

Uses of planning are much broader than robotics, extending into every sector: industry, academia, science, the military, government and public policy. Combinations of planners and moral reasoning can provide societal benefits transcending robotics and machine ethics.
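One hedged sketch of such interoperation: a naive planner enumerates candidate action sequences, and a moral reasoning component vetoes plans containing prohibited actions. The action names and the prohibition list are invented for illustration.

```python
# Sketch of planner / moral-reasoner interoperation: the planner proposes
# action sequences; the moral reasoning component filters them.
from itertools import permutations

ACTIONS = ["gather_data", "share_data", "deceive_user", "notify_user"]
PROHIBITED = {"deceive_user"}  # illustrative moral constraint

def candidate_plans(actions, length):
    """A naive planner: every ordering of `length` distinct actions."""
    return [list(p) for p in permutations(actions, length)]

def morally_permissible(plan, prohibited=PROHIBITED):
    """The moral reasoning component: veto plans with prohibited actions."""
    return not any(step in prohibited for step in plan)

plans = [p for p in candidate_plans(ACTIONS, 2) if morally_permissible(p)]
# Every surviving plan avoids the prohibited action entirely.
```

In a realistic system the filter would be a reasoner over loaded principles rather than a fixed set, but the interface shape, plans in, moral verdicts out, is the point of interoperability.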


Moral reasoning systems can provide broad societal benefits, including computer-aided moral reasoning, computer-aided authoring of literature, new tools for philosophy, law, the social sciences and the digital humanities, new decision support and public policy technologies, and new tools for education.



Barber, Heather, and Daniel Kudenko. “Generation of Adaptive Dilemma-based Interactive Narratives.” IEEE Transactions on Computational Intelligence and AI in Games 1, no. 4 (2009): 309-326.

Colyvan, Mark, Damian Cox, and Katie Steele. “Modelling the Moral Dimension of Decisions.” Noûs 44, no. 3 (2010): 503-529.

French, Simon. Decision Theory: An Introduction to the Mathematics of Rationality. Halsted Press, 1986.

Goldin, Ilya M., Kevin D. Ashley, and Rosa L. Pinkus. “Introducing PETE: Computer Support for Teaching Ethics.” In Proceedings of the 8th international conference on Artificial intelligence and law, pp. 94-98. ACM, 2001.

Greco, Salvatore, J. Figueira, and M. Ehrgott, eds. Multiple Criteria Decision Analysis. Springer International Series, 2005.

Harmon, Sarah. “An Expressive Dilemma Generation Model for Players and Artificial Agents.” In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference. 2016.

Hodhod, Rania. “Interactive Narrative and Intelligent Tutoring for Ill-Defined Domains.” (2008).

Hodhod, Rania, and Daniel Kudenko. “Interactive Narrative and Intelligent Tutoring for Ethics Domain.” Intelligent Tutoring Systems for Ill-Defined Domains: Assessment and Feedback in Ill-Defined Domains. (2008): 13.

Hodhod, Rania, Daniel Kudenko, and Paul Cairns. “AEINS: Adaptive Educational Interactive Narrative System to Teach Ethics.” In AIED 2009: 14th International Conference on Artificial Intelligence in Education Workshops Proceedings, p. 79. 2009.

Hodhod, Rania, Daniel Kudenko, and Paul Cairns. “Serious Games to Teach Ethics.” AISB’09: Artificial and Ambient Intelligence (2009).

Lapsley, Daniel K. Moral Psychology. Westview Press, 1996.

Mancherjee, Kevin, and Angela C. Sodan. “Can Computer Tools Support Ethical Decision Making?.” ACM SIGCAS Computers and Society 34, no. 2 (2004): 1.

McLaren, Bruce M. “Extensionally Defining Principles and Cases in Ethics: An AI Model.” Artificial Intelligence 150, no. 1 (2003): 145-181.

McLaren, Bruce M. “Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions.” IEEE intelligent systems 21, no. 4 (2006): 29-37.

Prakken, Henry, and Giovanni Sartor. “Law and Logic: A Review from an Argumentation Perspective.” Artificial intelligence 227 (2015): 214-245.

Rahwan, Iyad, Simon D. Parsons, and Nicolas Maudet. Argumentation in Multi-agent Systems. Springer-Verlag Berlin Heidelberg, 2010.

Robbins, Russell W., William A. Wallace, and Bill Puka. “Supporting Ethical Problem Solving: An Exploratory Investigation.” In Proceedings of the 2004 SIGMIS conference on Computer personnel research: Careers, culture, and ethics in a networked environment, pp. 134-143. ACM, 2004.

Saptawijaya, Ari, and Luís Moniz Pereira. “Towards Modeling Morality Computationally with Logic Programming.” In International Symposium on Practical Aspects of Declarative Languages, pp. 104-119. Springer International Publishing, 2014.

Schrier, Karen. “EPIC: A Framework for Using Video Games in Ethics Education.” Journal of Moral Education 44, no. 4 (2015): 393-424.

Sharipova, Mayya, and Gordon McCalla. “Supporting Students’ Interactions over Case Studies.” In International Conference on Artificial Intelligence in Education, pp. 772-775. Springer International Publishing, 2015.

Tappan, Mark B., and Lyn Mikel Brown. “Stories Told and Lessons Learned: Toward a Narrative Approach to Moral Development and Moral Education.” Harvard Educational Review 59, no. 2 (1989): 182-206.

Tappan, Mark B. “Hermeneutics and Moral Development: Interpreting Narrative Representations of Moral Experience.” Developmental Review 10, no. 3 (1990): 239-265.

Tappan, Mark B., and M. Packer, eds. Narrative and Storytelling: Implications for Understanding Moral Development. New Directions for Child Development, no. 54. San Francisco: Jossey-Bass, 1991.

Vitz, Paul C. “The Use of Stories in Moral Development: New Psychological Reasons for an Old Education Method.” American Psychologist 45, no. 6 (1990): 709.

Generating and Detecting Persuasive Rhetoric

How can software detect persuasion in rhetoric and dialog occurring between people or between people and dialog systems?

In Opinion Polling Systems and Virtual Opinion Pollsters, I broached dialog systems which interact with users to collect their opinions. I presented that virtual pollsters should adhere to the best practices of survey methodology and questionnaire construction, cognizant of questionnaire construction issues, question sequence issues, question wording issues and other issues with dialogs.

A broader matter, broached in E-Participation, Decision Support Systems, Multi-document Natural Language Processing and Cognitive Bias Mitigation, is that of detecting persuasion: persuasion occurring in documents, dialogs and transcripts, from humans and from natural language generation and dialog systems. For those interested, some publications about persuasion are indicated.
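As an illustrative baseline only, persuasion detection can be sketched as cue matching over transcript utterances; the systems in the publications below use far richer dialogic, lexical and acoustic features. The cue list here is an assumption for demonstration.

```python
# Toy cue-based baseline for flagging possibly persuasive utterances.
# The cue phrases are invented assumptions, not a validated feature set.
PERSUASION_CUES = {"should", "must", "clearly", "obviously", "everyone knows"}

def persuasion_score(utterance):
    """Count cue phrases appearing in a lowercased utterance."""
    text = utterance.lower()
    return sum(1 for cue in PERSUASION_CUES if cue in text)

def flag_persuasive(transcript, threshold=1):
    """Return utterances whose cue count meets the threshold."""
    return [u for u in transcript if persuasion_score(u) >= threshold]

transcript = [
    "The meeting starts at noon.",
    "Clearly, everyone knows we must act now.",
]
flagged = flag_persuasive(transcript)
```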


Cialdini, Robert B. “The science of persuasion.” (2004).

Kacprzak, Magdalena, Anna Sawicka, Andrzej Zbrzezny, and Katarzyna Zukowska. “A formal model of an argumentative dialogue in the management of emotions.” Poznań Reasoning Week: 59.

Majone, Giandomenico. Evidence, argument, and persuasion in the policy process. Yale University Press, 1989.

Prakken, Henry. “Models of persuasion dialogue.” In Argumentation in artificial intelligence, pp. 281-300. Springer US, 2009.

van Benthem, Johan. Argumentation in artificial intelligence. Edited by Iyad Rahwan, and Guillermo R. Simari. Vol. 47. Heidelberg: Springer, 2009.

Walton, Douglas. Media argumentation: dialectic, persuasion and rhetoric. Cambridge University Press, 2007.

Generating Persuasive Rhetoric

Devereux, Joseph, and Chris Reed. “Strategic argumentation in rigorous persuasion dialogue.” In International Workshop on Argumentation in Multi-Agent Systems, pp. 94-113. Springer Berlin Heidelberg, 2009.

Marcu, Daniel. “The conceptual and linguistic facets of persuasive arguments.” In Proceedings of the ECAI ’96 Workshop, Gaps and Bridges: New Directions in Planning and Natural Language Generation, pp. 43-46. Budapest, Hungary, 1996.

Moulin, Bernard, Hengameh Irandoust, Micheline Bélanger, and Gaëlle Desbordes. “Explanation and argumentation capabilities: Towards the creation of more persuasive agents.” Artificial Intelligence Review 17, no. 3 (2002): 169-222.

Rosenfeld, Ariel, and Sarit Kraus. “Strategical Argumentative Agent for Human Persuasion.” In ECAI 2016: 22nd European Conference on Artificial Intelligence, 29 August-2 September 2016, The Hague, The Netherlands-Including Prestigious Applications of Artificial Intelligence (PAIS 2016), vol. 285, p. 320. IOS Press, 2016.

Detecting Persuasive Rhetoric

Walker, Marilyn A., Pranav Anand, Robert Abbott, and Ricky Grant. “Stance Classification Using Dialogic Properties of Persuasion.”

Allen, James F., and C. Raymond Perrault. “Analyzing intention in utterances.” Artificial intelligence 15, no. 3 (1980): 143-178.

Gilbert, Henry T. “Persuasion detection in conversation.” PhD diss., Monterey, California. Naval Postgraduate School, 2010.

Ortiz, Pedro. “Machine learning techniques for persuasion detection in conversation.” PhD diss., Monterey, California. Naval Postgraduate School, 2010.

Young, Joel, and Pedro Ortiz. “Automated Persuasion Detection in Conversation.” GSTF Journal on Computing 1, no. 3 (2011).

Analysis of Visual Persuasion

Joo, Jungseock, Weixin Li, Francis F. Steen, and Song-Chun Zhu. “Visual persuasion: Inferring communicative intents of images.” In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 216-223. IEEE, 2014.

Joo, Jungseock. “Visual Persuasion in Mass Media: A Computational Framework for Understanding Visual Communication.” PhD diss., University of California, Los Angeles, 2015.

Narrative Persuasion

Hamby, Anne, David Brinberg, and James Jaccard. “A Conceptual Framework of Narrative Persuasion.” Journal of Media Psychology (2016).

Hamby, Anne, David Brinberg, and Kim Daniloski. “Reflecting on the journey: Mechanisms in narrative persuasion.” Journal of Consumer Psychology (2016).

Query Analysis, Planning and Optimization utilizing Database Schema Metadata and Ontology

Advancements to database schema metadata and ontology advance query analysis, query planning and query optimization. Advancements to database schema metadata advance the analysis of query plans, for example those output by the SQL keywords DESCRIBE, EXPLAIN and EXPLAIN EXTENDED.

1. New versions of database software should provide the capability to annotate database schemas, tables, columns and relations with (a) URI-based identifiers, (b) URI-based classes, and (c) entire RDF graphs.

2. New SQL syntax should provide access to the URI-based identifiers, classes and graph-based metadata of database schemas, tables, columns and relations.

3. New ontologies should be authored and standardized to provide new features for databases and data usage scenarios.

4. New versions of database software and logic programming environments should utilize standard APIs for interoperation.

New usage scenarios include: (a) the measurement, calculation or estimation of data about specific queries or query plans upon one or more data resources, pertaining to various privacy topics, including preserving privacy in big data; (b) the processing of representations, i.e. expression trees, of arbitrarily large queries or query plans to determine whether the queries or query plans meet various criteria for accessing the data resources indicated in them; and (c) the alignment of data resources and of data from multiple data resources.
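A sketch, under assumptions, of what such URI-based annotations might look like as plain data, the kind of metadata the proposed SQL extensions would expose; the URIs, keys and the toy privacy check are invented for illustration.

```python
# Hypothetical schema metadata: tables and columns annotated with
# URI-based identifiers and classes, plus a toy privacy annotation.
schema_metadata = {
    "table:person": {
        "id": "http://example.org/schema/Person",       # URI-based identifier
        "class": "http://xmlns.com/foaf/0.1/Person",    # URI-based class
    },
    "column:person.email": {
        "id": "http://example.org/schema/Person/email",
        "class": "http://xmlns.com/foaf/0.1/mbox",
        "privacy": "personally-identifiable",           # toy privacy annotation
    },
}

def columns_with_privacy(metadata, level):
    """A toy query-plan check: which annotated columns carry a privacy level?"""
    return [key for key, ann in metadata.items()
            if ann.get("privacy") == level and key.startswith("column:")]

sensitive = columns_with_privacy(schema_metadata, "personally-identifiable")
```

A query analyzer could walk a query plan’s expression tree and consult such annotations to estimate whether the plan touches privacy-sensitive columns before the query runs.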


13th IEEE International Conference on Advanced and Trusted Computing. Track 3: Privacy Preservation in Big Data.

Software Analysis, Automated Theorem Proving, Plan and Argument Analysis

The technology of static program analysis, automated theorem proving, computer algebra systems, formula editors, automated planning and scheduling, plan rationale, argumentation software, argument analysis software, related document authoring and editing software as well as the features and ergonomics of such software are of interest to our group.

Towards software or software plugins that can provide argumentation-related features, broadly construed, some links are provided, including to web-based mathematics and planning domain authoring and editing software.

See Also

Static Program Analysis, Automated Theorem Proving, Computer Algebra System,
Formula Editor, Automated Planning and Scheduling, Planning Domain Definition Language, Argument Map


Static Program Analysis
List of Tools for Static Code Analysis

Automated Theorem Proving
List of Theorem Provers and Proof Assistants

Computer Algebra Systems
List of Computer Algebra Systems

Mathematics Document Editing Software
WebLurch, Lurch (Video)

Automated Planning and Scheduling Software
Planning and Scheduling

Planning Domain Document Editing Software
PDDL Studio, myPDDL, Planning.Domains (Editor.Planning.Domains)

Argument Analysis Software
List of Argument Mapping Software, Web-based Collaboration Software

Natural Language Technology and Public Opinion Polling

Web-based opinion polls can be enhanced by natural language processing technology. Uses of natural language technology include processing text-based responses to the questions of opinion polls, surveys or questionnaires, including responses in which people explain, in natural language, why they answered one or more previous questions as they did. Forms enhanced with natural language user input capabilities are of use in team scenarios and collaborative software, i.e. business software, as well as in public opinion polling.

Websites or apps could make use of forms enhanced with text-based user input elements, forms enhanced by natural language technology. En route to client-side natural language technology, cloud-based technologies could provide such services.

In addition to processing bulk quantities of completed opinion polls, surveys or questionnaires, where multi-document processing could enhance the results, possible services include determining whether a natural language processing service can parse a text-based user input element’s text content, in the element’s context, while the user is typing, while the user is on a page, or before the user concludes a multipage form.

Dialogue systems technology can provide users with, beyond text-based forms, the convenience of spoken language opinion polls, surveys or questionnaires. Natural language technology can also enhance the design of opinion polls, surveys or questionnaires, processing the text of sequences of or flowcharts of questions.
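A hedged sketch of the parseability check described above: a form field whose free-text answer is validated while the user types. A real deployment would call a cloud NLP service; the `can_parse` stub below is an assumption that merely checks for a minimally complete answer.

```python
# Stand-in for an NLP service call validating free-text survey answers.
def can_parse(text):
    """Stub for an NLP parser: accept non-empty, multi-word answers."""
    words = text.strip().split()
    return len(words) >= 3

def validate_response(question, answer):
    """Return (ok, message) that a form UI could show while the user types."""
    if can_parse(answer):
        return True, "Answer accepted."
    return False, f"Please elaborate on: {question}"

ok, message = validate_response("Why did you choose that option?", "Because")
# A one-word answer is rejected; the UI can prompt for elaboration
# before the respondent moves to the next page of a multipage form.
```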


Siri, Google Now, Cortana

Project Oxford (LUIS), IBM Watson, SkyPhrase, Semantria, Wolfram Alpha

SIGdial Bibliography

Lists of dialogue systems by Staffan Larsson
Lists of dialogue systems by Dan Bohus

Workshop on Argument Mining 2014
Workshop on Argument Mining 2015

Frontiers and Connections between Argumentation Theory and Natural Language Processing

The Technology of Meetings, Lectures, Discussion Panels, Dialogues, Argumentation and Debates

The technology of meetings, lectures, discussion panels, dialogues, argumentation and debates is of interest to our group. Some topics in the overlap of artificial intelligence with meetings support technology are discussed; meetings occur in all organizations and in all sectors: academia, science, industry and government.

Individuals also meet to do civics, to participate in townhall discussions, to participate in the democracies of their neighborhoods or cities. Accordingly, meetings support technology can enhance Web-based civic engagement. Meetings support technology can empower individuals, organizations and communities, pertaining to the operation and the transparency of governments, city, state and federal.

The topics presented include the recording of meetings with modern sensors, multiparty speech recognition, obtaining transcripts from meetings, and the processing of data from arrays of sensors, such as pointclouds and 3D audio, into photographs, video, 3D video, and binaural, surround sound or ambisonic audio. Software technology topics include providing meeting participants as well as production teams with advanced features.

Ten topics are presented:

  1. Obtaining 3D data, pointclouds, from multiple sensors. Obtaining 3D audio from multiple sensors. Obtaining photographs, video, 3D video, binaural audio, surround sound, ambisonics from sensor data.
  2. Natural language understanding, sound source localization, multiperson speech recognition, multiperson nonverbal gesture recognition.
  3. Transcription, topic modeling, keyword generation, enhancing the indexing of video, video segments, video clips.
  4. Modeling meetings, lectures, discussion panels, dialogues, argumentation and debates; detecting events, categorizing events.
  5. Interpreting meetings, interpreting narratives or storyboards from meetings, summarizing meetings, motions of attention during meetings.
  6. Virtual cinematography or videography utilizing virtual cameras: the capability to position virtual cameras in space, to adjust virtual camera settings, and to move virtual cameras around to obtain photographs or videos.
  7. The capability to, beyond outputting one video stream, output multiple simultaneous multimedia streams, multiple simultaneous virtual cameras, as per multiview video.
  8. The processing of photographs or video cinematography from meetings, utilizing photographs, videos as well as pointcloud data; machine learning from human photographers and videographers.
  9. The storage of pointcloud video, archiving of the raw preprocessed 3D data; the indexing, search, retrieval of 3D multimedia content.
  10. The summarization of sets of meetings, dashboard summarizations of sets of meetings.


Abdollahian, Golnaz, Cüneyt M. Taskiran, Zygmunt Pizlo, and Edward J. Delp. “Camera motion-based analysis of user generated video.” Multimedia, IEEE Transactions on 12, no. 1 (2010): 28-41.

Jajoo, Amita, Suman Kumari, and Sapana Borole. “Character-Based Scene Extraction and Movie Summarization Using Character Interactions.”

Ang, Jeremy, Yang Liu, and Elizabeth Shriberg. “Automatic Dialog Act Segmentation and Classification in Multiparty Meetings.” In ICASSP (1), pp. 1061-1064. 2005.

Bares, William H., Joël P. Grégoire, and James C. Lester. “Realtime constraint-based cinematography for complex interactive 3d worlds.” In AAAI/IAAI, pp. 1101-1106. 1998.

Bhatt, Mehul, Jakob Suchan, and Christian Freksa. “ROTUNDE-A Smart Meeting Cinematography Initiative: Tools, Datasets, and Benchmarks for Cognitive Interpretation and Control.” arXiv preprint arXiv:1306.1034 (2013).

Bhatt, Mehul, Jakob Suchan, and Carl Schultz. “Cognitive Interpretation of Everyday Activities: Toward Perceptual Narrative Based Visuo-Spatial Scene Interpretation.” arXiv preprint arXiv:1306.5308 (2013).

Bianchi, Michael. “Automatic video production of lectures using an intelligent and aware environment.” In Proceedings of the 3rd international conference on Mobile and ubiquitous multimedia, pp. 117-123. ACM, 2004.

Buchsbaum, Daphna, Thomas L. Griffiths, Dillon Plunkett, Alison Gopnik, and Dare Baldwin. “Inferring Action Structure and Causal Relationships in Continuous Sequences of Human Action.” Cognitive psychology 76 (2015): 30-77.

Buist, Anne Hendrik, Wessel Kraaij, and Stephan Raaijmakers. “Automatic Summarization of Meeting Data: A Feasibility Study.” In CLIN. 2004.

Cutler, Ross, Yong Rui, Anoop Gupta, Jonathan J. Cadiz, Ivan Tashev, Li-wei He, Alex Colburn, Zhengyou Zhang, Zicheng Liu, and Steve Silverberg. “Distributed meetings: A meeting capture and broadcasting system.” In Proceedings of the tenth ACM international conference on Multimedia, pp. 503-512. ACM, 2002.

de Lima, Edirlei ES, Cesar T. Pozzer, Marcos C. d’Ornellas, Angelo EM Ciarlini, Bruno Feijó, and Antonio L. Furtado. “Virtual cinematography director for interactive storytelling.” In Proceedings of the International Conference on Advances in Computer Enterntainment Technology, pp. 263-270. ACM, 2009.

de Mántaras, R. López, and L. Saitta. “Knowledge-based cinematography and its applications.” In ECAI 2004: Proceedings of the 16th European Conference on Artificial Intelligence, vol. 110, p. 256. IOS Press, 2004.

DiMicco, Joan Morris, Katherine J. Hollenbach, and Walter Bender. “Using visualizations to review a group’s interaction dynamics.” In CHI’06 Extended Abstracts on Human Factors in Computing Systems, pp. 706-711. ACM, 2006.

Dubba, Krishna, Mehul Bhatt, Frank Dylla, David C. Hogg, and Anthony G. Cohn. “Interleaved inductive-abductive reasoning for learning complex event models.” In Inductive Logic Programming, pp. 113-129. Springer Berlin Heidelberg, 2012.

Erol, Berna, Jonathan J. Hull, and Dar-Shyang Lee. “Linking multimedia presentations with their symbolic source documents: algorithm and applications.” In Proceedings of the eleventh ACM international conference on Multimedia, pp. 498-507. ACM, 2003.

Erol, Berna, and Ying Li. “An overview of technologies for e-meeting and e-lecture.” In Multimedia and Expo, 2005. ICME 2005. IEEE International Conference on, pp. 6-pp. IEEE, 2005.

Fan, Quanfu, Arnon Amir, Kobus Barnard, Ranjini Swaminathan, and Alon Efrat. “Temporal modeling of slide change in presentation videos.” In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, vol. 1, pp. I-989. IEEE, 2007.

Feng, Vanessa Wei. “RST-Style Discourse Parsing and Its Applications in Discourse Analysis.” PhD diss., University of Toronto, 2015.

Gardner, William G. “3D audio and acoustic environment modeling.” Wave Arts, Inc 99 (1999).

Gardner, William G. “Spatial audio reproduction: Towards individualized binaural sound.” In Frontiers of Engineering:: Reports on Leading-Edge Engineering from the 2004 NAE Symposium on Frontiers of Engineering, p. 113. National Academies Press, 2005.

Ghosh, Sucheta. “End-to-End Discourse Parsing with Cascaded Structured Prediction.” PhD diss., University of Trento, 2012.

Gigonzac, G., Francois Pitie, and A. Kokaram. “Electronic slide matching and enhancement of a lecture video.” (2007): 9-9.

Goldstein, Michael H., Heidi R. Waterfall, Arnon Lotem, Joseph Y. Halpern, Jennifer A. Schwade, Luca Onnis, and Shimon Edelman. “General cognitive principles for learning structure in time and space.” Trends in cognitive sciences 14, no. 6 (2010): 249-258.

Gross, Ralph, Michael Bett, Hua Yu, Xiaojin Zhu, Yue Pan, Jie Yang, and Alex Waibel. “Towards a multimodal meeting record.” In Multimedia and Expo, 2000. ICME 2000. 2000 IEEE International Conference on, vol. 3, pp. 1593-1596. IEEE, 2000.

Haller, Michael, Daniel Dobler, and Philipp Stampfl. “Augmenting the reality with 3D sound sources.” In ACM SIGGRAPH 2002 conference abstracts and applications, pp. 65-65. ACM, 2002.

Hendrix, Claudia, and Woodrow Barfield. “Presence in virtual environments as a function of visual and auditory cues.” In Virtual Reality Annual International Symposium, 1995. Proceedings., pp. 74-82. IEEE, 1995.

Hendrix, Claudia, and Woodrow Barfield. “The sense of presence within auditory virtual environments.” Presence: Teleoperators and Virtual Environments 5, no. 3 (1996): 290-301.

Hosseinmardi, Homa, Akshay Mysore, Nicholas Farrow, Nikolaus Correll, and Richard Han. “Distributed Spatio-Temporal Gesture Recognition in Sensor Arrays.”

Israel, Quinsulon L. “Semantic Analysis for Improved Multi-document Summarization of Text.” PhD diss., Drexel University, 2014.

Ivanov, Alexei V., Giuseppe Riccardi, Sucheta Ghosh, Sara Tonelli, and Evgeny A. Stepanov. “Acoustic correlates of meaning structure in conversational speech.” In INTERSPEECH, pp. 1129-1132. 2010.

Kao, J. L., S. Y. Chen, and D. J. Duh. “Detecting Handwritten Annotation by Synchronization of Lecture Slides and Videos.” (2013).

Kennedy, Kevin, and Robert E. Mercer. “Planning animation cinematography and shot structure to communicate theme and mood.” In Proceedings of the 2nd international symposium on Smart graphics, pp. 1-8. ACM, 2002.

Lino, Christophe, Mathieu Chollet, Marc Christie, and Remi Ronfard. “Computational model of film editing for interactive storytelling.” In Interactive Storytelling, pp. 305-308. Springer Berlin Heidelberg, 2011.

Lino, Christophe, Marc Christie, Roberto Ranon, and William Bares. “The director’s lens: an intelligent assistant for virtual cinematography.” In Proceedings of the 19th ACM international conference on Multimedia, pp. 323-332. ACM, 2011.

Ma, Yu-Fei, Lie Lu, Hong-Jiang Zhang, and Mingjing Li. “A user attention model for video summarization.” In Proceedings of the tenth ACM international conference on Multimedia, pp. 533-542. ACM, 2002.

Matsuyama, Takashi, Xiaojun Wu, Takeshi Takai, and Shohei Nobuhara. “Real-time 3D shape reconstruction, dynamic 3D mesh deformation, and high fidelity visualization for 3D video.” Computer Vision and Image Understanding 96, no. 3 (2004): 393-434.

McCowan, Iain, Samy Bengio, Daniel Gatica-Perez, Guillaume Lathoud, Florent Monay, Darren Moore, Pierre Wellner, and Hervé Bourlard. “Modeling human interaction in meetings.” In Acoustics, Speech, and Signal Processing, 2003. Proceedings.(ICASSP’03). 2003 IEEE International Conference on, vol. 4, pp. IV-748. IEEE, 2003.

Merabti, Billal, Marc Christie, and Kadi Bouatouch. “A Virtual Director Inspired by Real Directors.” In Workshops at the Twenty-Eighth AAAI Conference on Artificial Intelligence. 2014.

Meyer, Meredith, Philip DeCamp, Bridgette Hard, Dare Baldwin, and Deb Roy. “Assessing behavioral and computational approaches to naturalistic action segmentation.” In Proc. of the 33rd Annual Conference of the Cognitive Science Society. 2010.

Minnen, David, Irfan Essa, and Thad Starner. “Expectation grammars: Leveraging high-level expectations for activity recognition.” In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, vol. 2, pp. II-626. IEEE, 2003.

Mitra, Sushmita, and Tinku Acharya. “Gesture recognition: A survey.” Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on 37, no. 3 (2007): 311-324.

Moezzi, Saied, Li-Cheng Tai, and Philippe Gerard. “Virtual view generation for 3d digital video.” IEEE multimedia 4, no. 1 (1997): 18-26.

Murray, Gabriel, and Giuseppe Carenini. “Summarizing spoken and written conversations.” In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 773-782. Association for Computational Linguistics, 2008.

Nevatia, Ram, Tao Zhao, and Somboon Hongeng. “Hierarchical Language-based Representation of Events in Video Streams.” In Computer Vision and Pattern Recognition Workshop, 2003. CVPRW’03. Conference on, vol. 4, pp. 39-39. IEEE, 2003.

Ngo, Chong-Wah, Ting-Chuen Pong, and Thomas S. Huang. “Detection of slide transition for topic indexing.” In Multimedia and Expo, 2002. ICME’02. Proceedings. 2002 IEEE International Conference on, vol. 2, pp. 533-536. IEEE, 2002.

Nijholt, Anton, H. J. A. Akker, and Dirk Heylen. “Meetings and meeting modeling in smart surroundings.” (2004): 145-158.

Nijholt, Anton, Rieks op den Akker, and Dirk Heylen. “Meetings and meeting modeling in smart environments.” AI & SOCIETY 20, no. 2 (2006): 202-220.


Nijholt, Anton, Rutger Rienks, Job Zwiers, and Dennis Reidsma. “Online and off-line visualization of meeting information and meeting support.” The Visual Computer 22, no. 12 (2006): 965-976.

Pang, Derek, Sameer Madan, Serene Kosaraju, and Tarun Vir Singh. Automatic virtual camera view generation for lecture videos. Tech. Rep., Stanford University, 2010.

Poel, Mannes, Ronald Poppe, and Anton Nijholt. “Meeting behavior detection in smart environments: Nonverbal cues that help to obtain natural interaction.” In Automatic Face & Gesture Recognition, 2008. FG’08. 8th IEEE International Conference on, pp. 1-6. IEEE, 2008.

Purver, Matthew, John Dowding, John Niekrasz, Patrick Ehlen, Sharareh Noorbaloochi, and Stanley Peters. “Detecting and summarizing action items in multi-party dialogue.” In Proceedings of the 8th SIGdial Workshop on Discourse and Dialogue, pp. 18-25. 2007.

Rivera, Ernesto, and Akinori Nishihara. “Enhancing Lecture Video Viewing: A Smart Visual Timeline.” In Global Learn Asia Pacific, vol. 2010, no. 1, pp. 268-271. 2010.

Ronzhin, A. L., and V. Yu Budkov. “Multimodal Interaction with Intelligent Meeting Room Facilities from Inside and Outside.” In Smart Spaces and Next Generation Wired/Wireless Networking, pp. 77-88. Springer Berlin Heidelberg, 2009.

Ryoo, Michael S., and Jake K. Aggarwal. “Semantic representation and recognition of continued and recursive human activities.” International journal of computer vision 82, no. 1 (2009): 1-24.

Sharma, Prerna, and Naman Sharma. “Hand & Upper Body Based Hybrid Gesture Recognition.”

Smolic, Aljoscha, Karsten Mueller, Philipp Merkle, Christoph Fehn, Peter Kauff, Peter Eisert, and Thomas Wiegand. “3D video and free viewpoint video-technologies, applications and MPEG standards.” In Multimedia and Expo, 2006 IEEE International Conference on, pp. 2161-2164. IEEE, 2006.

Stiefelhagen, Rainer. “Tracking focus of attention in meetings.” In Proceedings of the 4th IEEE International Conference on Multimodal Interfaces, p. 273. IEEE Computer Society, 2002.

Subramanian, Ramanathan, Jacopo Staiano, Kyriaki Kalimeri, Nicu Sebe, and Fabio Pianesi. “Putting the pieces together: multimodal analysis of social attention in meetings.” In Proceedings of the international conference on Multimedia, pp. 659-662. ACM, 2010.

Suchan, Jakob, and Mehul Bhatt. “Toward High-Level Dynamic Camera Control.”

Sundareswaran, Venkataraman, Kenneth Wang, Steven Chen, Reinhold Behringer, Joshua McGee, Clement Tam, and Pavel Zahorik. “3D audio augmented reality: implementation and experiments.” In Proceedings of the 2nd IEEE/ACM International Symposium on Mixed and Augmented Reality, p. 296. IEEE Computer Society, 2003.

Taghizadeh, Mohammad J., Reza Parhizkar, Philip N. Garner, Hervé Bourlard, and Afsaneh Asaei. “Ad hoc microphone array calibration: Euclidean distance matrix completion algorithm and theoretical guarantees.” Signal Processing 107 (2015): 123-140.

Tapaswi, Makarand, Martin Bauml, and Rainer Stiefelhagen. “StoryGraphs: visualizing character interactions as a timeline.” In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 827-834. IEEE, 2014.

Taskiran, Cüneyt M., Zygmunt Pizlo, Arnon Amir, Dulce Ponceleon, and Edward J. Delp. “Automated video program summarization using speech transcripts.” Multimedia, IEEE Transactions on 8, no. 4 (2006): 775-791.

Tian, Ying-li, Lisa Brown, Arun Hampapur, Sharat Pankanti, Andrew Senior, and Ruud Bolle. “Real world real-time automatic recognition of facial expressions.” In In Proceedings of IEEE workshop on. 2003.

Vadlapudi, Ravikiran, and Rahul Katragadda. “On automated evaluation of readability of summaries: capturing grammaticality, focus, structure and coherence.” In Proceedings of the NAACL HLT 2010 Student Research Workshop, pp. 7-12. Association for Computational Linguistics, 2010.

Waibel, Alex, Michael Bett, Michael Finke, and Rainer Stiefelhagen. “Meeting browser: Tracking and summarizing meetings.” In Proceedings of the DARPA broadcast news workshop, pp. 281-286. 1998.

Wang, Feng, Chong-Wah Ngo, and Ting-Chuen Pong. “Gesture tracking and recognition for lecture video editing.” In Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, vol. 3, pp. 934-937. IEEE, 2004.

Wang, Feng, and Bernard Merialdo. “Multi-document video summarization.” In Multimedia and Expo, 2009. ICME 2009. IEEE International Conference on, pp. 1326-1329. IEEE, 2009.

Wang, Feng, Chong-wah Ngo, and Ting-chuen Pong. “Lecture Video Enhancement and Editing by Integrating Posture, Gesture and Text.” IEEE Transactions on Multimedia

Wang, Lu, and Claire Cardie. “Summarizing decisions in spoken meetings.” In Proceedings of the Workshop on Automatic Summarization for Different Genres, Media, and Languages, pp. 16-24. Association for Computational Linguistics, 2011.

Winston, Brian. Technologies of seeing: photography, cinematography and television. BFI, 1996.

Yu, Zhiwen, and Yuichi Nakamura. “Smart meeting systems: A survey of state-of-the-art and open issues.” ACM Computing Surveys (CSUR) 42, no. 2 (2010): 8.

The Semantics of Multimedia Tracks

Argumentation formats of interest to our group include multimedia and multimedia-based technologies. The semantics of multimedia tracks can enhance numerous use cases. For example, for an MPEG file containing a presenter track and a presentation track, e.g. a slideshow, video software, including Web browsers, can provide user interfaces to utilize the multiple tracks of audio, video or data content. Multimedia tracks’ semantics can enhance the portability of features with multimedia files, without requiring the multimedia files to be in the contexts of HTML documents for such features, though such documents could interoperate with such features through JavaScript.

Metadata standards, together with an extensible ontology, vocabulary and API for multimedia track metadata, such as XMP, MPEG, Matroska and WebM, can provide enhanced viewing experiences and features. Multimedia tracks described semantically can also be interrelated semantically.

An extensible semantic metadata ontology and vocabularies, with the expressiveness of XMP, MPEG, Matroska and WebM, alongside a JavaScript API, can facilitate enhanced features, uses of tracks and track-based data, and the portability of multimedia objects. XML- or RDF-based multimedia data tracks are also possible.
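As a minimal sketch of the idea, the following describes a multimedia file’s tracks as RDF-style triples which software could then query. The track objects, the `describeTracks` function and the vocabulary IRIs are illustrative assumptions, not part of any published standard.

```javascript
// Hypothetical example vocabulary for describing multimedia tracks.
const EX = "http://example.org/vocab#";

// A multimedia file with multiple tracks, as plain data.
const presentation = {
  uri: "http://example.org/media/lecture.mp4",
  tracks: [
    { id: "t1", kind: "video", label: "Presenter camera" },
    { id: "t2", kind: "video", label: "Slides" },
    { id: "t3", kind: "audio", label: "Presenter audio" }
  ]
};

// Serialize the file and its tracks into subject–predicate–object triples.
function describeTracks(media) {
  const triples = [];
  for (const track of media.tracks) {
    const trackUri = `${media.uri}#${track.id}`;
    triples.push([media.uri, EX + "hasTrack", trackUri]);
    triples.push([trackUri, EX + "kind", track.kind]);
    triples.push([trackUri, EX + "label", track.label]);
  }
  return triples;
}

const graph = describeTracks(presentation);

// Software could then query the graph, e.g. to find all video tracks.
const videoTracks = graph
  .filter(([s, p, o]) => p === EX + "kind" && o === "video")
  .map(([s]) => s);
```

With such descriptions, a browser or other player could, for instance, offer to toggle or compose the presenter and slideshow tracks.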

The Styling of Content and Mathematical Notations by Semantics-based CSS Selectors

Semantics enhances the selection and styling of content; varieties of semantic selection include: (1) selecting upon URI items in white-space-separated lists of TERMorCURIEorAbsIRI values, (2) selecting upon parallel markup structure and reference combinators, and (3) graph-based selection with SPARQL expressiveness.

Selecting upon URI items in white space separated lists of TERMorCURIEorAbsIRI values, such as @xhtml:role, @rdf:type, @rdfa:typeof or @epub:type, could be expressed with a syntax resembling:
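One such sketch follows; the `@prefix` rule and the expansion-aware matching of attribute tokens are illustrative and not existing CSS features, though the `~=` list-token matcher itself is standard.

```css
/* Hypothetical: declare a prefix so CURIEs in selectors can expand to IRIs */
@prefix dbo url("http://dbpedia.org/ontology/");

/* Select elements whose @typeof list contains a term for dbo:Person */
*[typeof~="dbo:Person"] {
  color: #005a9c;
}

/* Hypothetical: match upon the absolute IRI, regardless of the
   prefix used in the markup */
*[typeof~=url("http://dbpedia.org/ontology/Person")] {
  font-weight: bold;
}
```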

An example of selecting upon parallel markup structure, e.g. MathML content markup and parallel markup, and reference combinators:
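A sketch of this variety follows; the `/xref/` reference combinator, modeled on the attribute-reference combinator once drafted for CSS Selectors Level 4, and the `definitionURL` value are illustrative assumptions.

```css
/* Hypothetical: style the presentation markup cross-referenced, via @xref,
   by a MathML content-markup <csymbol> denoting integration */
csymbol[definitionURL="http://www.openmath.org/cd/calculus1#int"] /xref/ * {
  color: green;
}
```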

Ontology, description logic and semantic reasoning can enhance the functionality of selection based upon URI items in TERMorCURIEorAbsIRI attribute values, of selection based upon parallel markup structure and reference combinators, and of graph-based selection. Semantics-based selectors could be as expressive as SPARQL.


XML/RDF Hybrid Documents

Documents can align tree-based XML document content with graph-based RDF semantics; such documents can interface as both trees and graphs. One solution for document and modular document component semantics is to extend document object model interfaces.

Document object model elements such as Element, HTMLElement and HTMLObjectElement, as well as custom elements, can be extended with a semantics function which serializes the element into a graph. Utilizing the RDFJS API, the signature could resemble:

Term Element.semantics(Sink sink);

The semantics function produces triples or quads into a provided sink and returns a Term, either a BlankNode or a NamedNode, which maps to the document object model element. The default implementation can recur through child elements, add semantics (document markup semantics, structural semantics, attributes such as @xhtml:role, @rdf:type, @rdfa:typeof or @epub:type, microformats and RDFa) and then return the mapped Term.
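A minimal sketch of such a default implementation follows, run over a mock element tree with plain-object RDFJS-style terms; the element structure, the `hasPart` predicate and the collecting sink are illustrative assumptions rather than a specified behavior.

```javascript
const RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type";

// Plain-object stand-ins for RDFJS terms.
let blankCounter = 0;
const blankNode = () => ({ termType: "BlankNode", value: `b${++blankCounter}` });
const namedNode = (iri) => ({ termType: "NamedNode", value: iri });

// A sink which collects emitted triples.
class CollectingSink {
  constructor() { this.triples = []; }
  write(subject, predicate, object) {
    this.triples.push([subject, predicate, object]);
  }
}

// Default semantics(): mint a term for the element, emit rdf:type triples
// for its @typeof values, recur through children and link the child terms.
function semantics(element, sink) {
  const term = blankNode();
  for (const type of (element.typeof ?? "").split(/\s+/).filter(Boolean)) {
    sink.write(term, namedNode(RDF_TYPE), namedNode(type));
  }
  for (const child of element.children ?? []) {
    const childTerm = semantics(child, sink);
    sink.write(term, namedNode("http://example.org/vocab#hasPart"), childTerm);
  }
  return term;
}

// Usage with a mock document fragment standing in for document.body:
const body = {
  typeof: "http://schema.org/Article",
  children: [{ typeof: "http://schema.org/Person", children: [] }]
};
const sink = new CollectingSink();
const rootTerm = semantics(body, sink);
```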

Such a function would provide a convenience: document.body.semantics(sink);

Web components and custom elements could include a means of specifying such semantics, in addition to structure, styling and scripting, by overriding the semantics function.

Also possible, for the aforementioned mapping between Terms and Elements, are convenience functions:

Element document.getElementByTerm(Term term);
Term document.getTermByElement(Element element);
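Such a bidirectional mapping could be maintained, for instance, with a pair of maps; keying terms by their termType and value, and the placeholder objects below, are assumptions for illustration.

```javascript
// Sketch of a bidirectional Term <-> Element mapping.
class TermElementMap {
  constructor() {
    this.byTermKey = new Map();
    this.byElement = new Map();
  }
  // Structurally equal terms (same termType and value) share a key.
  key(term) { return `${term.termType}:${term.value}`; }
  set(term, element) {
    this.byTermKey.set(this.key(term), element);
    this.byElement.set(element, term);
  }
  getElementByTerm(term) { return this.byTermKey.get(this.key(term)); }
  getTermByElement(element) { return this.byElement.get(element); }
}

// Usage with placeholder objects standing in for a Term and an Element:
const map = new TermElementMap();
const term = { termType: "BlankNode", value: "b1" };
const element = { nodeName: "SECTION" };
map.set(term, element);
```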

Uses include enhancing the Web-based and desktop-based indexing, search and retrieval of documents and document metadata. Multimedia documents, including those with custom elements, can map to graph-based representations utilizing ontologies such as document structural ontologies or forthcoming digital textbook ontologies.


Argumentation Formats

Our group discusses all argumentation formats, use cases and standardization topics to enhance each existing format as well as potential new formats. Kinds of argumentation of interest to our group include: conversational, mathematical, scientific, interpretive, legal and political.

A list of existing argumentation formats:

Akoma Ntoso
Argument Interchange Format (AIF)
Argument Markup Language (AML)
LegalDocumentXML (LegalDocML)
Legal Knowledge Interchange Format (LKIF)
Open Mathematical Documents (OMDoc)
Proof Markup Language (PML)
SALT Rhetorical Ontology (SRO)
Thousands of Problems for Theorem Provers (TPTP)
Thousands of Solutions for Theorem Provers (TSTP)