Thursday, March 19, 2015

MMSys 2016 - Preliminary Call for Papers

ACM Multimedia Systems 2016 (MMSys'16) [PDF]
co-located with NOSSDAV, MoVid, and MMVE

May 10-13, 2016
Klagenfurt am Wörthersee, Austria

The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research on specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating systems, real-time systems, and database communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to view the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types. MMSys is a venue for researchers who explore:
  • Complete multimedia systems that provide a new kind of multimedia experience, or systems whose overall performance improves the state of the art through new research results in one or more components, or
  • Enhancements to one or more system components that provide a documented improvement over the state of the art for handling continuous media or time-dependent services.
Such individual system components include:
  • Operating systems
  • Distributed architectures and protocol enhancements
  • Domain languages, development tools and abstraction layers
  • Using new architectures or computing resources for multimedia
  • New or improved I/O architectures or I/O devices, innovative uses and algorithms for their operation
  • Representation of continuous or time-dependent media
  • Metrics, measures and measurement tools to assess performance and quality of service/experience
This touches aspects of many hot topics: adaptive streaming, games, virtual environments, augmented reality, 3D video, immersive systems, telepresence, multi- and many-core, GPGPUs, mobile streaming, P2P, Clouds, cyber-physical systems. All submissions will be peer-reviewed by at least 3 members of the technical program committee. Full papers will be evaluated for their scientific quality. Accepted papers must reach a high scientific standard and document unpublished research.

Committee ACM MMSys
  • General chair: Christian Timmerer, AAU
  • TPC chair: Ali C. Begen, CISCO
  • Dataset chair: Karel Fliegel, CTU
  • Demo chairs: Omar Niamut, TNO & Michael Zink, UMass
  • Proceedings chair: Benjamin Rainer, AAU
  • Publicity chairs
    • America: Baochun Li, University of Toronto
    • Asia: Sheng-Wei Chen (a.k.a. Kuan-Ta Chen), Academia Sinica
    • Middle East: Mohamed Hefeeda, Qatar Computing Research Institute (QCRI)
    • Europe: Vincent Charvillat, IRIT-ENSEEIHT-Toulouse Univ.
  • Local chair: Laszlo Böszörmenyi, AAU
Important dates ACM MMSys
  • Submission deadline: November 27, 2015
  • Reviews available to authors: January 15, 2016
  • Rebuttal deadline: January 22, 2016
  • Acceptance notification: January 29, 2016
  • Camera ready deadline: March 11, 2016
Committee ACM NOSSDAV (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Hermann Hellwagner, AAU
  • TPC chair: Eckehard Steinbach, TUM
Important dates ACM NOSSDAV
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MMVE (co-located with MMSys) [Prelim. CfP: PDF]
  • General chair: Jean Botev, Univ. of Luxembourg
Important dates ACM MMVE
  • Submission deadline: February 5, 2016
  • Acceptance notification: March 23, 2016
  • Camera ready deadline: April 8, 2016
Committees ACM MoVid (co-located with MMSys)
  • TPC chairs: Pål Halvorsen, Simula/Univ. Oslo
Important dates ACM MoVid
  • Submission deadline: tbd
  • Acceptance notification: tbd
  • Camera ready deadline: tbd
Local organisation
  • Chair: Laszlo Böszörmenyi
  • Alpen-Adria-Universität Klagenfurt (AAU)
  • Institute of Information Technology (ITEC)
  • Universitätsstraße 65-67, A-9020 Klagenfurt
  • Email:

Wednesday, March 18, 2015

MPEG news: a report from the 111th meeting, Geneva, Switzerland

MPEG111 opening plenary.
This blog post is also available at SIGMM records.

The 111th MPEG meeting (note: the link includes the press release and all publicly available output documents) was held in Geneva, Switzerland, and offered some interesting aspects which I’d like to highlight here. Undoubtedly, it was the shortest meeting I’ve ever attended (and my first meeting was #61), as the final plenary concluded at 2015-02-20T18:18!

In terms of the requirements subgroup, it’s worth mentioning the call for evidence (CfE) for high-dynamic range (HDR) and wide color gamut (WCG) video coding, which comprises a first milestone towards a new video coding format. The purpose of this CfE is to explore whether (a) the coding efficiency and/or (b) the functionality of the HEVC Main 10 and Scalable Main 10 profiles can be significantly improved for HDR and WCG content. In addition, the requirements subgroup issued a draft call for evidence on free viewpoint TV. Both documents are publicly available here.

The video subgroup continued discussions on the future of video coding standardisation and issued a public document requesting contributions on “future video compression technology”. Interesting application requirements come from over-the-top (OTT) streaming use cases, which call for HDR and WCG support as well as video over cellular networks; at least the former is covered by the CfE mentioned above. Furthermore, features like scalability and perceptual quality should be considered from the ground up and not (only) as extensions. Scalability really helps a lot in OTT streaming: it eases content management, enables cache-efficient delivery, and allows for more aggressive buffer modelling (and, thus, adaptation logic) within the client, enabling better Quality of Experience (QoE) for the end user. Encoder complexity seems to be less of a concern as long as it scales with cloud deployments (e.g., the bitdash demo area shows some neat 4K/8K/HFR DASH demos encoded with bitcodin). Closely related to 8K, a new AVC amendment is coming up covering 8K; one can do this already today (see above), but it’s good to have standards support for it. For HEVC, the JCT-3V/JCT-VC issued the FDAM4 for 3D Video Extensions and started PDAM5 for Screen Content Coding Extensions (both documents become publicly available after an editing period of about a month).
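To make the buffer argument concrete, here is a minimal sketch of a buffer-based adaptation rule of the kind that well-segmented or scalable content enables: the fuller the client's buffer, the more aggressively it can request higher-bitrate representations. The bitrate ladder and the linear buffer-to-bitrate mapping are my own illustrative assumptions, not taken from any MPEG document.

```python
# Illustrative sketch: buffer-based bitrate adaptation.
# The ladder and the linear mapping are hypothetical.

BITRATES_KBPS = [500, 1500, 3000, 6000]  # hypothetical representation ladder

def pick_bitrate(buffer_s, buffer_max_s=30.0, bitrates=BITRATES_KBPS):
    """Map the current buffer fill level linearly onto the bitrate ladder:
    an empty buffer selects the lowest representation, a full one the highest."""
    fill = max(0.0, min(1.0, buffer_s / buffer_max_s))
    index = min(int(fill * len(bitrates)), len(bitrates) - 1)
    return bitrates[index]
```

With a 30-second buffer target, `pick_bitrate(2.0)` stays on the lowest rung while `pick_bitrate(28.0)` requests the top representation; real clients combine such a rule with throughput estimation.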

And what about audio? The audio subgroup has decided that ISO/IEC DIS 23008-3 3D Audio shall be promoted directly to IS, which means the DIS was already in such good shape that only editorial comments need to be applied, actually saving a balloting cycle. We have to congratulate the audio subgroup on this remarkable milestone.

Finally, I’d like to discuss a few topics related to DASH which is progressing towards its 3rd edition which will incorporate amendment 2 (Spatial Relationship Description, Generalized URL parameters and other extensions), amendment 3 (Authentication, Access Control and multiple MPDs), and everything else that will be incorporated within this year, like some aspects documented in the technologies under consideration or currently being discussed within the core experiments (CE).
Currently, MPEG-DASH conducts 5 core experiments:
  • Server and Network Assisted DASH (SAND)
  • DASH over Full Duplex HTTP-based Protocols (FDH)
  • URI Signing for DASH (CE-USD)
  • SAP-Independent Segment Signaling (SISSI)
  • Content aggregation and playback control (CAPCO)
The description of the core experiments is publicly available and, compared to the previous meeting, there is a new CE on content aggregation and playback control (CAPCO), which “explores solutions for aggregation of DASH content from multiple live and on-demand origin servers, addressing applications such as creating customized on-demand and live programs/channels from multiple origin servers per client, targeted preroll ad insertion in live programs and also limiting playback by client such as no-skip or no fast forward.” The process is quite open and anybody can join by subscribing to the email reflector.

The CE for DASH over Full Duplex HTTP-based Protocols (FDH) is gaining traction; it basically defines the usage of DASH with the push features of WebSockets and HTTP/2, and at this meeting MPEG issued a working draft. Also, the CE on Server and Network Assisted DASH (SAND) got its own part 5, which goes to CD, but the documents are not publicly available. However, I'm pretty sure I can report more on this next time, so stay tuned or feel free to comment here.

Friday, February 27, 2015

IEEE JSAC Special Issue: Video Distribution over Future Internet

Special issue on Video Distribution over Future Internet 

Extended Submission Deadline: May 29, 2015

The current Internet is under tremendous pressure due to the exponential growth in bandwidth demand, fueled by the shift of video consumption to online distribution, IPTV, streaming services such as Netflix, and from phone networks to videoconferencing and Skype-like video communications. The Internet has also democratized the creation, distribution and sharing of user-generated video content through services such as YouTube, Vimeo or Hulu. The situation is further aggravated by the emerging trend of adopting higher-definition video streams, which demand more and more bandwidth. Indeed, the Cisco Visual Networking Index (VNI) projects that video consumption will amount to 90% of the global consumer traffic by 2017. Another shift predicted by Cisco VNI is that most data communications will be wireless by 2018.

To cope with the bandwidth growth, the shift to wireless, and to solve other related issues (e.g., naming, security, etc) with the current Internet, new architectures for the future Internet have been proposed and prototyped. Examples include Content-Centric Networks (CCN) or Named Data Networking (NDN), or some content-based extensions to Software-Defined Networking (SDN), among others. None of these emerging architectures deals specifically with video distribution, as they need to support a wider range of services, but all would have to support videos in an efficient manner. Therefore, the study of video distribution over the future Internet is of primary importance: how well does future Internet architecture facilitate video delivery? What kind of video distribution mechanisms need to be created to run on the future Internet? How will video be supported in the wireless portion of the future Internet? Can the current video distribution mechanisms (such as end-to-end dynamic rate adaptation schemes) be used or even enhanced for the future Internet? What are subjective/objective metrics for performance measurement? How to provide real-time guarantees for live and interactive video streams?

While the topic is quite wide, we will narrow the focus of this special issue on the fundamental problems of video distribution and delivery in the future Internet. We invite submissions of high-quality original technical and survey papers, which have not been published previously, on video distribution in the future Internet, including the following non-exhaustive list of topics. Please note that all topics must be understood in the context of the future Internet as outlined above.
  • Network-assisted video distribution, network support for multimedia, specifically supporting wireless environments
  • New information-centric and software-defined architectures to support wired and wireless video streaming
  • Resource allocation for wired and wireless video distribution
  • Media streaming, distribution, and storage support in the future Internet
  • In-network caching/storage, named data retrieval, publish/subscribe for video distribution in wired and wireless networks
  • Next generation Content Delivery Networks (CDN)
  • Adaptive streaming and rate adaptation for video streaming in the future Internet for wired and wireless networks
  • Peer-to-peer aspects of video multimedia distribution, including scaling and capacity
  • QoS/QoE measurement and support for video distribution in the future Internet
  • User-generated content and social networks for multi-media
  • Video compression techniques explicitly supporting the future Internet
  • Big-Data mechanisms (say referral engines or content placement algorithms) for video content over future Internet
  • Social-aware video content distribution over future Internet
  • Integration of video distribution and multimedia computing over future Internet
  • Testbeds and measurements of video distribution over future Internet
  • Cost and economic models for video distribution over future Internet
  • Theoretical foundations for video distribution over future Internet, e.g., network coding, information theory, machine learning, etc
Special Issue Editors
  • Prof. Cedric Westphal, Huawei Innovations & UCSC, USA 
  • Prof. Tommaso Melodia, Northeastern University, Boston, MA, USA 
  • Prof. Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria
  • Prof. Wenwu Zhu, Tsinghua University, Beijing, China
Important Dates
  • Paper Submission due: 05/29/2015
  • First review complete: 09/15/2015
  • Acceptance Notification: 11/15/2015
  • Camera-ready version: 12/15/2015
  • Publication date: Second Quarter 2016 
Manuscript submissions and reviewing process: All submissions must be original work that has not been published or submitted elsewhere. For the submission format, please follow the IEEE JSAC guidelines. Each paper will go through a rigorous two-round reviewing process by at least three leading experts in related areas. Papers should be submitted through EDAS.

ICME 2015: Over-the-Top Content Delivery: State of the Art and Challenges Ahead

Tutorial at ICME 2015
June 29 - July 3, 2015
Torino, Italy

Abstract: Over-the-top content delivery is becoming increasingly attractive for both live and on-demand content thanks to the popularity of platforms like YouTube, Vimeo, Netflix, Hulu, Maxdome, etc. In this tutorial, we present the state of the art and the challenges ahead in over-the-top content delivery. In particular, the goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS), including the recently ratified MPEG-DASH standard. The main focus of the tutorial will be on the common problems in HAS deployments such as client design, QoE optimization, multi-screen and hybrid delivery scenarios, and synchronization issues. For each problem, we will examine proposed solutions along with their pros and cons. In the last part of the tutorial, we will look into the open issues and review the work in progress and future research directions.

The tutorial will be held on June 29, 2015 in the afternoon.

Slides will be provided in due time; a preliminary version (from previous presentations) can be found here and here.

Biography of Presenters

Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constrained environments), both from the Alpen-Adria-Universität Klagenfurt. He is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communication, streaming, adaptation, Quality of Experience, and Sensory Experience.

He has published more than 150 papers in these areas and he has organized a number of special sessions and issues in this domain, e.g., “Special Session on MMT/DASH” (MMSys 2011, followed by a special issue in Signal Processing: Image Communication, 2012) and “Special Issue on Adaptive Media Streaming” (IEEE JSAC, published 2014). Furthermore, he was the general chair of WIAMIS 2008, QoMEX 2013, and QCMan 2014, and will be general chair of ACM Multimedia Systems 2016. He is an editorial board member of IEEE Computer, associate editor for IEEE Transactions on Multimedia, area editor for the Elsevier journal Signal Processing: Image Communication, a key member of the Interest Groups (IG) on Image and Video Coding as well as Quality of Experience, and Director of the Review Board of the IEEE Multimedia Communication Technical Committee. Finally, he writes a regular column for ACM SIGMM Records, where he serves as an editor, and he is a member of the ACM SIGMM Open Source Software Committee. Dr. Timmerer participated in the work of ISO/MPEG for more than 10 years, notably as the head of the Austrian delegation, coordinator of several core experiments, co-chair of several ad-hoc groups, and as an editor for various standards, notably the MPEG-21 Multimedia Framework and the MPEG Extensible Middleware (MXM, which became MPEG-M). His current contributions are in the area of MPEG-V (Media Context and Control) and Dynamic Adaptive Streaming over HTTP (DASH), for which he also serves as an editor. He received various ISO/IEC certificates of appreciation.

Ali C. Begen is with the Video and Content Platforms Research and Advanced Development Group at Cisco. His interests include networked entertainment, Internet multimedia, transport protocols and content delivery. Ali is currently working on architectures and protocols for next-generation video transport and distribution over IP networks, and he is an active contributor in the IETF and MPEG in these areas. Ali holds a Ph.D. degree in electrical and computer engineering from Georgia Tech. He received the Best Student-paper Award at IEEE ICIP 2003, the Most-cited Paper Award from Elsevier Signal Processing: Image Communication in 2008, and the Best-paper Award at Packet Video Workshop 2012. Ali has been an editor for the Consumer Communications and Networking series in the IEEE Communications Magazine since 2011 and an associate editor for the IEEE Transactions on Multimedia since 2013. He is a senior member of the IEEE and a senior member of the ACM. Further information on Ali’s projects, publications and presentations can be found at

Tuesday, February 10, 2015

Multimedia Streaming in Information-Centric Networks (MuSIC)

Call for Papers

2015 IEEE ICME Workshop
Multimedia Streaming in Information-Centric Networks (MuSIC)
Friday, July 3, 2015, Torino, Italy

Motivation and Goals

According to the Cisco Visual Networking Index and the Sandvine Global Internet Phenomena Reports, multimedia, in particular video for real-time entertainment, is the predominant source of traffic on the current Internet and continues to grow. However, the Internet protocols and mechanisms were not designed for challenging real-time media like video and voice streaming and conferencing, such that “the Internet only just works,” as Mark Handley put it. Intense research on Quality of Service (QoS) schemes and frameworks has been conducted over the past decades, without resulting in practical and widely accepted mechanisms in the IP networking world. Currently, Content Delivery Networks (CDNs) are the primary means to deliver massive amounts of real-time content, e.g., video streams, to clients in a satisfying manner.

Countering these problems and challenges, many Future Internet initiatives and projects have been and are being undertaken around the globe. Among them, Information-Centric Networking (ICN) is a promising approach, bringing content and efficient content distribution into focus. Several basic ICN concepts are quite similar to application-layer protocols in the IP world, e.g., a publish-subscribe approach in PSIRP/PURSUIT, pull-based data transport in CCN/NDN (interest/data packets) and in Adaptive HTTP Streaming approaches (request/response behavior).
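The similarity between these pull-based models can be sketched in a few lines. The toy below is purely illustrative (the class, names, and in-memory "network" are my own; real CCN/NDN nodes also keep pending-interest and forwarding tables): a consumer names the data it wants, and any node holding a copy answers, caching on the return path, much like an HTTP streaming client requesting named segments via caches.

```python
# Toy illustration of pull-based, name-oriented retrieval with on-path
# caching. All names and the topology are hypothetical.

class Node:
    def __init__(self, store=None, upstream=None):
        self.store = dict(store or {})   # content store (cache)
        self.upstream = upstream         # next hop toward the producer

    def interest(self, name):
        """Return the data for `name`, caching it on the way back."""
        if name in self.store:
            return self.store[name]
        if self.upstream is None:
            return None                  # no route to the content
        data = self.upstream.interest(name)
        if data is not None:
            self.store[name] = data      # on-path caching
        return data

producer = Node(store={"/video/seg1": b"segment-1-bytes"})
edge = Node(upstream=producer)
```

After the first `edge.interest("/video/seg1")`, the edge node's cache satisfies subsequent requests without contacting the producer, which is exactly the effect CDNs approximate at the application layer.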

Interestingly, though, the two communities, on Multimedia Systems/Communications and on Information-Centric Networking, have barely interacted. Multimedia communications researchers still mostly think and operate in the context of IP networks, while ICN researchers mainly discuss key networking aspects, not focusing on the requirements, challenges and opportunities of real-time multimedia data delivery/streaming (even though there are notable exceptions). Yet, recent intense discussions on the IRTF mailing list on video delivery and QoS/QoE and several publications (among them, an Internet Draft) indicate increased interest of ICN experts in multimedia communication.
The most important goal of this workshop is therefore to provide a forum that brings those two communities together, to spawn vivid discussions, intense exchange, and learning at the intersection of the two areas, and to help establish common terminology, work, and projects. The committees of the workshop are composed of leading members of both communities, in an attempt to attract broad interest and good submissions to the workshop.

The workshop will emphasize video-on-demand (VoD) and voice/video conferencing (live) applications on ICNs, but other distributed multimedia applications are welcome, such as gaming. All aspects of media streaming in ICN will be addressed, including: basic principles and insights; protocols, mechanisms and policies (strategies) in ICN nodes; routing; measures and metrics for real-time behavior, QoS and QoE; evaluation methodology; prototype implementations, testbeds, and demos; and comparisons with IP-based systems. The workshop is open to discuss media streaming in all ICN approaches; comparisons of different ICN architectures are encouraged. Demos are welcome.

Topics of Interest (including, but not limited to)

  • Video-on-demand applications, prototypes, and demos over ICN
  • Voice/video conferencing applications, prototypes, and demos over ICN
  • Novel multimedia applications, prototypes, demos over ICN
  • Error and loss control and mitigation
  • Congestion detection and control
  • Naming and routing of media streams
  • Forwarding, aggregation, replication strategies (interests and content)
  • Caching strategies
  • Caching effects (probably unexpected and/or undesired)
  • DRM and its impact on or interplay with caching
  • Content adaptation in ICN
  • Media stream adaptation, bandwidth estimation, etc. on clients
  • Use of scalable media content
  • Fairness issues and metrics in ICN
  • Security and privacy issues for MM streaming over ICN
  • QoS and QoE mechanisms and metrics: impact on and interplay with ICN
  • Evaluation methodologies, in particular ICN simulation and experimental testbeds
  • Deployment and scalability issues

Submissions to the Workshop

  • Paper length: Prospective authors are invited to submit full-length papers, up to 6 pages long, by March 30, 2015.
  • Paper format: For author guidelines and paper templates please see:
  • Paper submission: All submissions are to be made via the CMT web site at: Please select "Workshop on Multimedia Streaming in Information-Centric Networks (MuSIC)".
  • Review process: Each submission will be peer-reviewed by at least three members of the TPC.
  • Accepted papers: Papers accepted for the workshop must be presented by one of the authors. Papers will be published in the Proceedings of ICME Workshops and also on-line in the IEEE Xplore digital library.

Important Dates

  • Paper submission:   March 30, 2015
  • Paper acceptance:   April 30, 2015
  • Camera-ready paper: May 15, 2015
  • Workshop:           July 3, 2015


Organizers and Technical Program Committee Chairs
- Hermann Hellwagner, Klagenfurt University, Austria
- George C. Polyzos, AUEB, Greece

Steering Committee
- Klara Nahrstedt, UIUC, USA
- George Pavlou, University College London, UK
- Cedric Westphal, Huawei, USA
- Chang Wen Chen, SUNY at Buffalo, USA

Technical Program Committee
- Alexander Afanasyev, UCLA, USA
- Ali Begen, Cisco, Canada
- Laszlo Böszörmenyi, Klagenfurt University, Austria
- Jeff Burke, UCLA, USA
- Giovanna Carofiglio, Cisco Systems, France
- Wei Koong Chai, University College London, UK
- Wolfgang Effelsberg, Univ. Mannheim & TU Darmstadt, Germany
- Abdulmotaleb El Saddik, University of Ottawa, Canada
- Pascal Frossard, EPFL, Switzerland
- Carsten Griwodz, Simula Research Lab & Univ.of Oslo, Norway
- Mohamed Hefeeda, Simon Fraser University, Canada
- Dirk Kutscher, NEC Labs Europe, Germany
- Giannis Marias, AUEB, Greece
- Luca Muscariello, Orange Labs, France
- Klara Nahrstedt, UIUC, USA
- Börje Ohlman, Ericsson Research, Sweden
- Wei Tsang Ooi, National University of Singapore
- Dave Oran, Cisco, USA
- Jörg Ott, Aalto University, Finland
- Christos Papadopoulos, Colorado State University, USA
- Benjamin Rainer, Klagenfurt University, Austria
- Damien Saucez, INRIA, France
- Gwendal Simon, Telecom Bretagne, France
- Vasilios Siris, AUEB, Greece
- Ignacio Solis, PARC, USA
- Ralf Steinmetz, TU Darmstadt, Germany
- Christian Timmerer, Klagenfurt University, Austria
- Dirk Trossen, InterDigital, UK
- Laura Toni, EPFL, Switzerland
- Christian Tschudin, Universität Basel, Switzerland
- George Xylomenos, AUEB, Greece
- Yonggang Wen, Nanyang Technological University, Singapore
- Roger Zimmermann, National University of Singapore

Monday, January 12, 2015

MPEG news: a report from the 110th meeting, Strasbourg, France

This blog post is also available at SIGMM records.

The 110th MPEG meeting was held at the Strasbourg Convention and Conference Centre featuring the following highlights:

  • The future of video coding standardization
  • Workshop on media synchronization
  • Standards at FDIS: Green Metadata and CDVS
  • What's happening in MPEG-DASH?
Additional details about MPEG's 110th meeting can also be found here, including the official press release and all publicly available documents.

The Future of Video Coding Standardization

MPEG110 hosted a panel discussion about the future of video coding standardization. The panel was organized jointly by MPEG and ITU-T SG 16's VCEG featuring Roger Bolton (Ericsson), Harald Alvestrand (Google), Zhong Luo (Huawei), Anne Aaron (Netflix), Stéphane Pateux (Orange), Paul Torres (Qualcomm), and JeongHoon Park (Samsung).

As expected, "maximizing compression efficiency remains a fundamental need" and as usual, MPEG will study "future application requirements, and the availability of technology developments to fulfill these requirements". Therefore, two Ad-hoc Groups (AhGs) have been established which are open to the public:
The presentations of the brainstorming session on the future of video coding standardization can be found here.

Workshop on Media Synchronization

MPEG110 also hosted a workshop on media synchronization for hybrid delivery (broadband-broadcast) featuring six presentations "to better understand the current state-of-the-art for media synchronization and identify further needs of the industry".
  • An overview of MPEG systems technologies providing advanced media synchronization, Youngkwon Lim, Samsung
  • Hybrid Broadcast - Overview of DVB TM-Companion Screens and Streams specification, Oskar van Deventer, TNO
  • Hybrid Broadcast-Broadband distribution for new video services: a use cases perspective, Raoul Monnier, Thomson Video Networks
  • HEVC and Layered HEVC for UHD deployments, Ye Kui Wang, Qualcomm
  • A fingerprinting-based audio synchronization technology, Masayuki Nishiguchi, Sony Corporation
  • Media Orchestration from Capture to Consumption, Rob Koenen, TNO
The presentation material is available here. Additionally, MPEG established an AhG on timeline alignment (that's how the project is internally called) to study use cases and solicit contributions on gap analysis and also technical contributions [email][subscription].

Standards at FDIS: Green Metadata and CDVS

My first report on MPEG Compact Descriptors for Visual Search (CDVS) dates back to July 2011 and provides details about the call for proposals. Now, finally, the FDIS has been approved during the 110th MPEG meeting. CDVS defines a compact image description that facilitates the comparison and search of pictures showing similar content, e.g., the same objects in different scenes from different viewpoints. The compression of key point descriptors not only increases compactness, but also significantly speeds up the search and classification of images within large image databases compared to a raw representation of the same underlying features. The application of CDVS for real-time object identification, e.g., in computer vision and other applications, is envisaged as well.

Another standard reached FDIS status entitled Green Metadata (first reported in August 2012). This standard specifies the format of metadata that can be used to reduce energy consumption from the encoding, decoding, and presentation of media content, while simultaneously controlling or avoiding degradation in the Quality of Experience (QoE). Moreover, the metadata specified in this standard can facilitate a trade-off between energy consumption and QoE. MPEG is also working on amendments to the ubiquitous MPEG-2 TS ISO/IEC 13818-1 and ISOBMFF ISO/IEC 14496-12 so that green metadata can be delivered by these formats.

What's happening in MPEG-DASH?

MPEG-DASH is in a kind of maintenance mode but is still receiving new proposals in the area of SAND parameters, and some core experiments are going on. Also, the DASH-IF is working towards new interoperability points and test vectors in preparation for actual deployments. Speaking of deployments, they are happening, e.g., a 40h live stream right before Christmas (by bitmovin, a top-100 company that matters most in online video). Additionally, VideoNext was co-located with CoNEXT'14, targeting scientific presentations about the design, quality and deployment of adaptive video streaming. Webex recordings of the talks are available here. In terms of standardization, MPEG-DASH is progressing towards the 2nd amendment, including spatial relationship description (SRD), generalized URL parameters and other extensions. In particular, SRD will enable new use cases which can only be addressed using MPEG-DASH, and the FDIS is scheduled for the next meeting in Geneva, Feb 16-20, 2015. I'll report on this within my next blog post, stay tuned.
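As a side note on SRD: it is carried as a comma-separated property value describing a tile's position within the full frame. The little parser below is a hedged sketch; the field order (source_id, x, y, w, h, total_w, total_h) follows my reading of the amendment, and the viewport intersection test is purely illustrative of how a client could decide which spatial tiles to fetch.

```python
# Hedged sketch: parse an SRD-style value such as "1,0,0,960,540,1920,1080"
# and test whether the described tile overlaps a viewport rectangle.
# Field order is my assumption; check the amendment text for edge cases.

from collections import namedtuple

SRD = namedtuple("SRD", "source_id x y w h total_w total_h")

def parse_srd(value):
    """Split the comma-separated SRD value into named integer fields."""
    return SRD(*(int(v) for v in value.split(",")))

def intersects(srd, vx, vy, vw, vh):
    """Does the tile described by `srd` overlap the viewport rectangle?"""
    return not (srd.x + srd.w <= vx or vx + vw <= srd.x or
                srd.y + srd.h <= vy or vy + vh <= srd.y)
```

A tiled-streaming client would run such a test per tile and request only the representations whose tiles overlap the user's current viewport.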

Tuesday, November 18, 2014

ACM International Conference on Interactive Experiences for Television & Online Video

*** TVX 2015 ***
ACM International Conference on Interactive Experiences for Television & Online Video
3rd – 5th June 2015

Hosted by iMinds Digital Society Department at the Crowne Plaza Hotel, Brussels, Belgium

Jointly organised with the International Symposium on Media Innovations (ISMI) and the Private Television Conference

  •  November 15, 2014: Course and Workshop proposals
  • January 12, 2015: Full and Short Paper submissions
  • March 2, 2015: WiP, TVX in Industry, Demo, Doctoral Consortium
TVX is the ACM International Conference on Interactive Experiences for Television and Online Video, the leading international conference for the presentation and discussion of research into online video and TV interaction and user experience. The conference brings together international researchers and practitioners from a wide range of disciplines, ranging from human-computer interaction, multimedia engineering and design to media studies, media psychology and sociology. In addition to standard research paper presentations, the conference includes a wide range of formats for the presentation and discussion of research, including Industry Papers, Demos, and Works-in-Progress, and also provides the opportunity to participate in the Doctoral Consortium and to run and attend courses and workshops on specialist topics in TV and online video interaction and user experience.

Topics of interest include (but are not limited to):
  • Content Production: traditional & novel content production for the new media landscape, including cross-platform services and interactive storytelling, and personalisation.
  • Systems & Infrastructures: system designs and architectures and their evaluation, including delivery, transmission, and synchronization of media.
  • Interaction Technologies & techniques: including gestural and multi-sensory, multi-display systems, and interaction for device ecosystems.
  • Experience Design & Evaluation: TV and online video design and evaluation research, including social and shared experiences.
  • Media Studies: including consumption practices and theoretical and practical ethical, regulatory, and policy issues.
  • Empirical Methods: novel methods for evaluating TV and online video experience, and audience measurement.
  • Data Science for TV & Online Video: advances in techniques for collaborative filtering, interactive/synchronous environments, collective intelligence and crowd-sourcing, and location-based and context-aware applications and services.
  • Business Models & Marketing: research and practice around novel business models and marketing strategies for the new media landscape of television and online video. Studies around novel ways of advertisement models and strategies.
  • Innovation & Visions of Future TV & Online Video: research on innovative design strategies, new concepts, and prototype experiences for television and online video, including case studies and media artworks and performances.

Contributions must describe unpublished original work, emphasizing completed or advanced research, and a parallel submission to other venues should be clearly indicated to the program committee. Research paper submissions are double-blind and will be reviewed by at least three program committee members.

For detailed submission guidelines, including the use of the ACM SIGCHI PCS submission system see:


In line with the inclusion and accessibility strategy at TVX 2015, we also provide a mentoring opportunity. During the TVX submission process, the mentorship programme brings the experience of established researchers to new researchers: we put you in contact with a specific member of the community who will provide feedback and support for your submission.

Please find more information on how to ask for mentoring, becoming a mentor, and our existing mentors:


For up-to-date information and further details visit:

For questions, please contact us on:



David Geerts, iMinds / KU Leuven, Belgium
Lieven De Marez, iMinds / UGent, Belgium
Caroline Pauwels, iMinds / VUB, Belgium

Frank Bentley, Yahoo Labs, USA
Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria

Hokyoung Blake Ryu, Hanyang University, South Korea
Jeroen Vanattenhoven, iMinds / KU Leuven, Belgium

Rene Kaiser, Joanneum Research, Austria
Noor Ali-Hasan, Google, USA

Pedro Almeida, University of Aveiro, Portugal
Santosh Basapur, Illinois Institute of Technology, USA

Marian Ursu, University of York, UK
Teresa Chambel, University of Lisbon, Portugal

Katia Aerts, Mike Matton, VRT, Belgium

Tom Bartindale, Culture Lab, Newcastle University, UK
Rinze Leenheer, iMinds / KU Leuven, Belgium

Reuben Kirkham, Culture Lab, Newcastle University, UK
Tom Evens, iMinds / UGent, Belgium

Jonathan Huyghe, iMinds / KU Leuven, Belgium