Call for Papers

Description

Multimedia content and video consumption are expected to play a central role in the post-pandemic world. Providing new, advanced interfaces and services that exploit multimedia content while mitigating its weaker aspects is therefore of paramount importance. Video consumption suffers, for instance, from the well-known problems of linear-sequential viewing, poor content retrieval, lack of structure, and lack of highlighting of relevant features. Online video services and the research community are working to provide tools that enhance the user experience of video consumption. These issues particularly affect the use of videos for learning and training purposes. The provision of augmentation services such as visual feedback, knowledge graphs, and visual summaries of video fragments, to give just a few examples, has been shown to limit the problems mentioned above. However, automating the development of such services is still a challenge, recently addressed by exploiting deep learning models that use multimodal input data, and further research is needed to turn these results into effective services. Personalized visual annotation of videos and adaptive interfaces for data visualization and mobile interaction are further examples of video augmentation that can serve users with different needs, abilities, and usage contexts.

The main goal of the workshop is to bring together researchers and practitioners interested in video augmentation for different purposes, including education, with experts and researchers in HCI, AI, and data visualization, including those working on learning disabilities, a research area where multimedia and video-based learning have long been studied. The workshop offers researchers a useful opportunity to present original and innovative ideas that are not yet mature or fully evaluated enough for publication in the main conference.

Topics

We invite submissions that address AVI topics and focus on visual interfaces for video augmentation. Topics include, but are not limited to, the following:

  • Visual augmentation of videos
  • Visual summaries and indexing
  • Video augmentation for mobile users
  • Search interfaces for video exploration
  • Visual analytics 
  • Interactive video
  • 3D video
  • 360-degree video
  • Hypervideo
  • Adaptive and personalized user interfaces
  • Knowledge graph visualization and exploration
  • Visual tips and recommendations
  • Knowledge extraction and visualization
  • Intelligent multimodal interfaces
  • Usability and accessibility
  • Video augmentation for inclusiveness
  • Video augmentation for training
  • Video-based learning

Submission

We encourage the submission of original contributions investigating advanced visual interfaces for augmented video.

  • Short papers (3-4 pages, including references and appendices)
  • Position papers and demo papers (max 2 pages, including references and appendices)

The workshop will be organized in a half-day format.  

It will consist of an introduction and two main sessions: the first for brief presentations of the papers, and the second for an interactive discussion on two or three main themes emerging from the submissions. The organizers will guide the discussion on controversial topics, open challenges, and best practices, involving the authors of the papers that address these issues.

Papers should be submitted as PDF files via EasyChair.

Abstracts should be sent to: ilaria.torre@unige.it

Submission guidelines. Authors should submit their papers in single-column format. Papers must be formatted according to the new workflow for ACM publications. The templates and instructions are available at the following link: https://www.acm.org/publications/taps/word-template-workflow.

Using LaTeX is highly recommended to minimize the reformatting needed for the camera-ready version.

All accepted papers will be published in the open-access CEUR Workshop Proceedings, indexed by Scopus.

Authors are required to present their paper at the workshop, either in person or remotely.