Sunday, October 19, 2014

A bit of MPEG-DASH or the story behind bitdash!

Back around 2008 (or so), some people within MPEG were wondering whether it was worth looking beyond the MPEG-2 Transport Stream for new means of delivering media content encoded with MPEG codecs, specifically in the context of new codecs and use cases (e.g., ultra-high definition). There was even a sentiment along the lines of "hey, MPEG-2 is quite old, we need something new" and, as some of you probably know, "new is always better" ;)

In any case, MPEG started - as it always does when something new pops up and there's a critical mass of supporters - an exploration phase resulting - if there's enough evidence and interest (there was enough of both) - in drafting use cases, context/objectives, and, finally, requirements followed by a call for proposals. In July 2010 I chaired the evaluation of the responses to the call for proposals on HTTP Streaming of MPEG Media, and two years later the first edition of MPEG Dynamic Adaptive Streaming over HTTP (DASH) was published as ISO/IEC 23009-1. Another two years later, in 2014, the second edition was published, and MPEG is now working on further extensions, possibly leading to a third edition sooner or later.

Soon after the evaluation of the responses to the call for proposals, the first open source implementations became available and dash.itec.aau.at became a major portal for tools and datasets around MPEG-DASH. After setting up this Web site, we thought about the next step, which was founding a company called bitmovin that provides online (OTT) streaming solutions, enabling a high-quality media experience for the end user.

In early October 2014 we launched the official product Web site www.dash-player.com for our MPEG-DASH player bitdash. bitdash enables MPEG-DASH playback in any Web browser, including on mobile platforms, using either HTML5 or Flash depending on the browser and its version. bitdash places strong emphasis on stability and performance, featuring fast startup and state-of-the-art adaptation algorithms and, thus, delivering a high media quality of experience to the end user.

Feel free to download a free version on www.dash-player.com and give it a try!

Friday, September 26, 2014

Call for Poster/Demo: ACM VideoNext 2014

Call for Posters/Demo: ACM VideoNext 2014
2 December 2014, Sydney, Australia
In Conjunction with ACM CoNEXT 2014

VideoNext 2014 is inviting submissions for a special poster and demo session that will foster lively, informal, and in-depth discussions on emerging topics in video streaming and multimedia communications. Topics of interest include, but are not limited to:
  • Media streaming, distribution, and storage
  • Cloud-assisted video streaming including encoding, transcoding, and adaptation
  • Energy efficient multimedia streaming
  • Peer-to-peer and cooperative video streaming
  • Multimedia communications and system security
  • Networked games and real-time immersive systems
  • Web 2.0 systems and social networks
  • Streaming next generation video like multi-view, panorama and 3D
  • Wireless networks and embedded systems for multimedia applications
  • Compressive sensing for efficient video capture, processing, and streaming

What and How to Submit

VideoNext posters and demos will be selected on the basis of two-page PDF abstracts, with fonts no smaller than 10 point, using the same format as for regular papers. Abstracts must be submitted via the submission system linked from the call for poster/demo site.


Posters will be reviewed and selected on the following basis:
  • Submissions must describe new, interesting work not previously presented. Posters may be accompanied by demos (subject to limited space). Preference will be given to posters accompanied by demos.
  • Student submissions meeting the above criteria will be given preference; however, non-students may also submit abstracts.
Please provide the following information in your PDF file in addition to presenting your research:
  • Poster title
  • Author names, affiliations and email addresses
  • Mark which authors, if any, are students
  • Indicate if you plan to set up a demo with your poster. If so, the submission must include the requirements for the demo setup and presentation. Note that the authors are responsible for bringing and setting up any equipment they need.

All submissions will be reviewed by the TPC of VideoNext 2014.

Accepted posters will be published online on the workshop Web site. At least one author should register and present the poster throughout the entire session. Authors of the best posters will be given award certificates and three minutes each to present their work before the poster session.

Important Dates
  • Submission: 17 October 2014
  • Notifications: 24 October 2014

Monday, August 18, 2014

Over-the-Top Content Delivery: State of the Art and Challenges Ahead

Tutorial at ACM Multimedia 2014
November 3-7, 2014
Orlando, Florida, USA

Abstract: In this tutorial we present the state of the art and the challenges ahead in over-the-top content delivery. In particular, the goal of this tutorial is to provide an overview of adaptive media delivery, specifically in the context of HTTP adaptive streaming (HAS), including the recently ratified MPEG-DASH standard. The main focus of the tutorial will be on common problems in HAS deployments such as client design, QoE optimization, multi-screen and hybrid delivery scenarios, and synchronization issues. For each problem, we will examine proposed solutions along with their pros and cons. In the last part of the tutorial, we will look into open issues and review work-in-progress and future research directions.

Biography of Presenters
Christian Timmerer is an Associate Professor at the Institute of Information Technology (ITEC), Multimedia Communication Group (MMC), Alpen-Adria-Universität Klagenfurt, Austria. His research interests include immersive multimedia communication, streaming, adaptation, and Quality of Experience (QoE). He was the general chair of QoMEX’13, WIAMIS’08, AVSTP2P’10 (co-located with ACMMM’10), WoMAN’11 (co-located with ICME’11), and TPC co-chair of QoMEX’12. He has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, and ICoSOLE. He is an Associate Editor for IEEE Computer Science Computing Now, Area Editor for Elsevier Signal Processing: Image Communication, Review Board Member of IEEE MMTC, editor of ACM SIGMM Records, and member of ACM SIGMM Open Source Software Committee. He also participated in ISO/MPEG work for several years, notably in the area of MPEG-21, MPEG-M, MPEG-V, and DASH (incl. DASH Industry Forum). He received his PhD in 2006 from the Klagenfurt University. Follow him on http://www.twitter.com/timse7 and subscribe to his blog http://blog.timmerer.com

Ali C. Begen is with the Video and Content Platforms Research and Advanced Development Group at Cisco. His interests include networked entertainment, Internet multimedia, transport protocols and content delivery. Ali is currently working on architectures and protocols for next-generation video transport and distribution over IP networks. He is an active contributor in the IETF and MPEG, and has given a number of keynotes, tutorials and guest lectures in these areas. Ali holds a Ph.D. degree in electrical and computer engineering from Georgia Tech. He received the Best Student-paper Award at IEEE ICIP 2003, the Most-cited Paper Award from Elsevier Signal Processing: Image Communication in 2008, and the Best-paper Award at Packet Video Workshop 2012. Ali has been an editor for the Consumer Communications and Networking series in the IEEE Communications Magazine since 2011 and an associate editor for the IEEE Transactions on Multimedia since 2013. He served as a general co-chair for ACM Multimedia Systems 2011 and Packet Video Workshop 2013. He is a senior member of the IEEE and a senior member of the ACM. Further information on Ali’s projects, publications, presentations and professional activities can be found at http://ali.begen.net.

Monday, July 7, 2014

bitmovin White Papers on bitcodin and bitdash for MPEG-DASH

bitmovin recently released two white papers related to MPEG-DASH, describing two major components of any DASH-based ecosystem. The first white paper covers preparing content compliant with MPEG-DASH utilizing cloud infrastructure and is referred to as bitcodin™ (PDF), offering transcoding & streaming as a service (T&SaaS). As such, it provides benefits across multiple dimensions:

  • Remove capacity bottlenecks in the streaming media workflows. 
  • Flexibility to scale resources and associated operational costs with the demand. 
  • Right-size encoding and streaming infrastructure. 
  • Eliminate the necessity for capital investments in dedicated encoding systems. 
  • Full flexibility to choose quality and speed of encoding. 
  • Reduce reliance on specific technical encoding/streaming expertise. 

The second white paper is about the client adaptation framework, which is essential for every playback device. bitdash™ (PDF) is a suite of highly optimized MPEG-DASH clients for the broadest range of platforms and devices, delivering the best streaming performance and user experience, in particular under adverse (mobile) network conditions.

bitdash™ is the result of continued R&D investments and incorporates patent-pending technology, resulting in MPEG-DASH compliant client solutions that deliver up to 101% higher effective media throughput as well as significantly higher Quality of Experience (QoE) compared to existing adaptive bitrate streaming technologies and clients.

Further information can be found at http://www.bitmovin.net/.


Thursday, June 26, 2014

VideoNext: Design, Quality and Deployment of Adaptive Video Streaming


The workshop co-located with CoNEXT 2014
December 2, 2014
Sydney, Australia

Submission deadline changed: August 29, 2014 (no further extensions)

Call for Papers

As we continue to develop our ability to generate, process, and display video at increasingly higher quality, we confront the challenge of streaming that video to the end user. Device heterogeneity in terms of size and processing capabilities, combined with the lack of timing guarantees in packet-switched networks, is forcing the industry to adopt streaming solutions capable of dynamically adapting the video quality in response to resource variability in the end-to-end transport chain. For example, many vendors and providers are already trialing their own proprietary adaptive video streaming platforms, while MPEG has recently ratified a standard, called Dynamic Adaptive Streaming over HTTP (DASH), to facilitate widespread deployment of such technology. However, how best to adapt the video to ensure the highest user quality of experience while consuming minimum network resources poses many fundamental challenges, which are attracting the attention of researchers from both academia and industry. The goal of this workshop is to bring together researchers and developers working on all aspects of adaptive video streaming, with special emphasis on innovative concepts backed up by experimental evidence.
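The dynamic quality adaptation described above can be illustrated with a minimal throughput-based bitrate selection heuristic. This is a simplified sketch, not the algorithm of any particular client or of the DASH standard (which deliberately leaves adaptation logic unspecified); the function names, the safety margin, and the smoothing factor are all illustrative assumptions.

```python
# Sketch of throughput-based bitrate adaptation as used conceptually in
# HTTP adaptive streaming (HAS). All names and constants are illustrative;
# real clients combine smoothed throughput estimates with buffer-aware policies.

def select_representation(bitrates_bps, throughput_bps, safety=0.8):
    """Pick the highest bitrate that fits within a fraction of the
    estimated throughput; fall back to the lowest representation."""
    candidates = [b for b in sorted(bitrates_bps) if b <= throughput_bps * safety]
    return candidates[-1] if candidates else min(bitrates_bps)

def update_throughput(prev_estimate_bps, segment_bits, download_seconds, alpha=0.8):
    """Exponentially weighted moving average over per-segment throughput samples."""
    sample = segment_bits / download_seconds
    return alpha * prev_estimate_bps + (1 - alpha) * sample

if __name__ == "__main__":
    ladder = [500_000, 1_000_000, 2_500_000, 5_000_000]  # bitrate ladder (bps)
    estimate = 3_200_000  # current throughput estimate (bps)
    print(select_representation(ladder, estimate))  # 2_500_000 fits within the margin
```

The safety margin below 1.0 reflects a common trade-off: requesting slightly below the measured throughput reduces the risk of buffer underruns at the cost of some quality headroom.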

Specific areas of interest include, but are not limited to:
  • New metrics for measuring user quality of experience (QoE) for adaptive video streaming
  • Solutions for improving streaming QoE for high-speed user mobility
  • Analysis, modelling, and experimentation of DASH
  • Exploitation of user contexts for improving efficiency of adaptive streaming
  • Big data analytics to assess viewer experience of adaptive video
  • Efficient and fair bandwidth sharing techniques for bottleneck links supporting multiple adaptive video streams
  • Network functions to assist and improve adaptive video streaming
  • Synchronization issues in adaptive video streaming (inter-media, inter-device/destination)
  • Methods for effective simulation or emulation of large scale adaptive video streaming platforms
  • Cloud-assisted adaptive video streaming including encoding, transcoding, and adaptation in general
  • Attack scenarios and solutions for adaptive video streaming
  • Energy-efficient adaptive streaming for resource-constrained mobile devices
  • Reproducible research in adaptive video streaming: datasets, evaluation methods, benchmarking, standardization efforts, open source tools
  • Novel use cases and applications in the area of adaptive video streaming

The workshop is considered an integral part of the CoNEXT 2014 conference. All workshop papers will be published in the same set of proceedings as the main conference and will be available in the ACM Digital Library. Publication at this workshop is not intended to preclude later publication of an extended version of the paper. At least one author of each accepted paper is expected to present the paper at the workshop.

Instructions for Authors

A submission must be no greater than 6 pages in length including all figures, tables, references, appendices, etc., and must be a PDF file of less than 10MB. The review process is single-blind.

Follow the same formatting guidelines as the CoNEXT conference, except that VideoNext has a 6-page limit and a 10MB file-size limit. See the “Formatting Guidelines” section. Submissions that deviate from these guidelines will be rejected without consideration.

Then use the paper submission site to submit your paper by 8:59 pm Pacific Daylight Time (PDT), August 29, 2014.
Important dates
  • Paper Submission: August 29, 2014, 20:59 PDT
  • Notification of Acceptance: September 30, 2014
  • Camera-ready Papers Due: October 24, 2014
  • Workshop: December 2, 2014

TPC co-chairs
  • Mahbub Hassan, University of New South Wales, Australia
  • Ali C. Begen, Cisco Canada
  • Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria

Technical Program Committee
  • Alexander Raake, Deutsche Telekom Labs, Germany
  • Carsten Griwodz, University of Oslo/Simula, Norway
  • Chao Chen, Qualcomm, USA
  • Colin Perkins, University of Glasgow, Scotland
  • Constantine Dovrolis, Georgia Tech, USA
  • Grenville Armitage, Swinburne University of Technology, Australia
  • Imed Bouazizi, Samsung
  • Kuan-Ta Chen, Academia Sinica
  • Magda El Zarki, University of California Irvine, USA
  • Manzur Murshed, Federation University Australia, Australia
  • Pal Halvorsen, University of Oslo/Simula, Norway
  • Polychronis Koutsakis, Technical University of Crete, Greece
  • Roger Zimmermann, National University of Singapore, Singapore
  • Saverio Mascolo, University of Bari, Italy
  • Shervin Shirmohammadi, University of Ottawa, Canada
  • Victor Leung, University of British Columbia, Canada

Friday, April 25, 2014

MPEG news: a report from the 108th meeting, Valencia, Spain

This blog post is also available at bitmovin tech blog and SIGMM records.

The 108th MPEG meeting was held at the Palacio de Congresos de Valencia in Spain featuring the following highlights (no worries about the acronyms, this is on purpose and they will be further explained below):
  • Requirements: PSAF, SCC, CDVA
  • Systems: M2TS, MPAF, Green Metadata
  • Video: CDVS, WVC, VCB
  • JCT-VC: SHVC, SCC
  • JCT-3D: MV/3D-HEVC, 3D-AVC
  • Audio: 3D audio 
Opening Plenary of the 108th MPEG meeting in Valencia, Spain.
The official MPEG press release can be downloaded from the MPEG Web site. Some of the above highlighted topics will be detailed in the following and, of course, there’s an update on DASH-related matters at the end.

As indicated above, MPEG is full of (new) acronyms; I've deliberately kept them in the overview and will explain them further below.

PSAF – Publish/Subscribe Application Format

Publish/subscribe corresponds to a new network paradigm related to content-centric networking (or information-centric networking) where the content is addressed by its name rather than location. An application format within MPEG typically defines a combination of existing MPEG tools jointly addressing the needs for a given application domain, in this case, the publish/subscribe paradigm. The current requirements and a preliminary working draft are publicly available.
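The publish/subscribe paradigm can be sketched as a minimal name-based broker. This is purely illustrative of the paradigm, not of PSAF itself (which defines a combination of MPEG formats and metadata, not an API); the class and method names below are assumptions for illustration.

```python
# Toy name-based publish/subscribe broker: content is addressed by its
# name rather than its location, and subscribers receive whatever is
# published under that name, regardless of where it originates.
# Illustrative only; not derived from the PSAF working draft.

from collections import defaultdict


class Broker:
    def __init__(self):
        self._subscribers = defaultdict(list)  # content name -> callbacks

    def subscribe(self, name, callback):
        """Register interest in a content name."""
        self._subscribers[name].append(callback)

    def publish(self, name, content):
        """Deliver content to every subscriber of the given name."""
        for callback in self._subscribers[name]:
            callback(content)


if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("video/news", lambda c: print("received:", c))
    broker.publish("video/news", "segment-001.m4s")
```

The key decoupling is visible even in this toy: the publisher never learns who the subscribers are, and subscribers never learn where content came from, only its name.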

SCC – Screen Content Coding

I’ve introduced this topic in my previous report and this meeting the responses to the CfP have been evaluated. In total, seven responses have been received which meet all requirements and, thus, the actual standardization work is transferred to JCT-VC. Interestingly, the results of the CfP are publicly available. Within JCT-VC, a first test model has been defined and core experiments have been established. I will report more on this as an output of the next meetings…

CDVA – Compact Descriptors for Video Analysis

This project has been renamed from compact descriptors for video search to compact descriptors for video analysis and comes with a publicly available vision statement. Interested parties are welcome to join this new activity within MPEG.

M2TS – MPEG-2 Transport Stream

At this meeting, various extensions to M2TS have been defined such as transport of multi-view video coding depth information and extensions to HEVC, delivery of timeline for external data as well as carriage of layered HEVC, green metadata, and 3D audio. Hence, M2TS is still very active and multiple amendments are developed in parallel.

MPAF – Multimedia Preservation Application Format

The committee draft for MPAF has been approved and, in this context, MPEG-7 is extended with additional description schemes.

Green Metadata

Well, this standard does not have its own acronym; it's simply referred to as MPEG-GREEN. The draft international standard has been approved and national bodies will vote on it at the JTC 1 level. It basically defines metadata that allows clients to operate in an energy-efficient way. It comes along with amendments to M2TS and ISOBMFF that enable the carriage and storage of this metadata.

CDVS – Compact Descriptors for Visual Search

CDVS is at DIS stage and provides improvements on global descriptors as well as non-normative improvements of key-point detection and matching in terms of speedup and memory consumption. As with all standards at DIS stage, national bodies will vote on it at the JTC 1 level.

What’s new in the video/audio-coding domain?
  • WVC – Web Video Coding: This project reached final draft international standard with the goal of providing a video-coding standard for Web applications. It basically defines a profile of the MPEG-AVC standard including only those tools not encumbered by patents.
  • VCB – Video Coding for Browsers: The committee draft for part 31 of MPEG-4 defines video coding for browsers and basically establishes VP8 as an international standard. This also explains the difference from WVC.
  • SHVC – Scalable HEVC extensions: Like SVC for AVC, SHVC will be defined as an amendment to HEVC, providing scalable video coding functionality.
  • MV/3D-HEVC, 3D-AVC: These are multi-view and 3D extensions for the HEVC and AVC standards, respectively.
  • 3D Audio: Again, no acronym for this standard, although I would prefer 3DA. The CD was approved at this meeting and the plan is to reach DIS at the next meeting. At the same time, the carriage and storage of 3D audio are being defined in M2TS and ISOBMFF, respectively.
Finally, what’s new in the media transport area, specifically DASH and MMT?

As interested readers know from my previous reports, the DASH 2nd edition was approved some time ago. In the meantime, a first amendment to the 2nd edition is at draft amendment state, including additional profiles (mainly adding xlink support) and time synchronization. A second amendment goes to the first ballot stage, referred to as proposed draft amendment, and defines spatial relationship description, generalized URL parameters, and other extensions. Eventually, these two amendments will be integrated into the 2nd edition, which will become the MPEG-DASH 3rd edition. A corrigendum on the 2nd edition is also currently under ballot, and new contributions are still coming in, i.e., there is still a lot of interest in DASH. For your information – there will be two DASH-related sessions at Streaming Forum 2014.

On the other hand, MMT's amendment 1 is currently under ballot, and amendment 2 defines header compression and a cross-layer interface. The latter has been progressed to a study document which will be further discussed at the next meeting. Interestingly, there will be an MMT developer's day at the 109th MPEG meeting, as in Japan 4K/8K UHDTV services will be launched based on MMT specifications, and in Korea and China implementations of MMT are now under way. The developer's day will be on July 5th (Saturday), 2014, 10:00 – 17:00 at the Sapporo Convention Center. Therefore, if you don't know anything about MMT, the developer's day is certainly the place to be.

Contact:

Dr. Christian Timmerer
CIO bitmovin GmbH | christian.timmerer@bitmovin.net
Alpen-Adria-Universität Klagenfurt | christian.timmerer@aau.at

What else? That is, some publicly available MPEG output documents… (Dates indicate availability and end of editing period, if applicable, using the following format YY/MM/DD):
  • Text of ISO/IEC 13818-1:2013 PDAM 7 Carriage of Layered HEVC (14/05/02) 
  • WD of ISO/IEC 13818-1:2013 AMD Carriage of Green Metadata (14/04/04) 
  • WD of ISO/IEC 13818-1:2013 AMD Carriage of 3D Audio (14/04/04) 
  • WD of ISO/IEC 13818-1:2013 AMD Carriage of additional audio profiles & levels (14/04/04) 
  • Text of ISO/IEC 14496-12:2012 PDAM 4 Enhanced audio support (14/04/04) 
  • TuC on sample variants, signatures and other improvements for the ISOBMFF (14/04/04) 
  • Text of ISO/IEC CD 14496-22 3rd edition (14/04/04) 
  • Text of ISO/IEC CD 14496-31 Video Coding for Browsers (14/04/11) 
  • Text of ISO/IEC 15938-5:2005 PDAM 5 Multiple text encodings, extended classification metadata (14/04/04) 
  • WD 2 of ISO/IEC 15938-6:201X (2nd edition) (14/05/09) 
  • Text of ISO/IEC DIS 15938-13 Compact Descriptors for Visual Search (14/04/18) 
  • Test Model 10: Compact Descriptors for Visual Search (14/05/02) 
  • WD of ARAF 2nd Edition (14/04/18) 
  • Use cases for ARAF 2nd Edition (14/04/18) 
  • WD 5.0 MAR Reference Model (14/04/18) 
  • Logistic information for the 5th JAhG MAR meeting (14/04/04) 
  • Text of ISO/IEC CD 23000-15 Multimedia Preservation Application Format (14/04/18) 
  • WD of Implementation Guideline of MP-AF (14/04/04) 
  • Requirements for Publish/Subscribe Application Format (PSAF) (14/04/04) 
  • Preliminary WD of Publish/Subscribe Application Format (14/04/04) 
  • WD2 of ISO/IEC 23001-4:201X/Amd.1 Parser Instantiation from BSD (14/04/11) 
  • Text of ISO/IEC 23001-8:2013/DCOR1 (14/04/18) 
  • Text of ISO/IEC DIS 23001-11 Green Metadata (14/04/25) 
  • Study Text of ISO/IEC 23002-4:201x/DAM2 FU and FN descriptions for HEVC (14/04/04) 
  • Text of ISO/IEC 23003-4 CD, Dynamic Range Control (14/04/11) 
  • MMT Developers’ Day in 109th MPEG meeting (14/04/04) 
  • Results of CfP on Screen Content Coding Tools for HEVC (14/04/30) 
  • Study Text of ISO/IEC 23008-2:2013/DAM3 HEVC Scalable Extensions (14/06/06) 
  • HEVC RExt Test Model 7 (14/06/06) 
  • Scalable HEVC (SHVC) Test Model 6 (SHM 6) (14/06/06) 
  • Report on HEVC compression performance verification testing (14/04/25) 
  • HEVC Screen Content Coding Test Model 1 (SCM 1) (14/04/25) 
  • Study Text of ISO/IEC 23008-2:2013/PDAM4 3D Video Extensions (14/05/15) 
  • Test Model 8 of 3D-HEVC and MV-HEVC (14/05/15) 
  • Text of ISO/IEC 23008-3/CD, 3D audio (14/04/11) 
  • Listening Test Logistics for 3D Audio Phase 2 (14/04/04) 
  • Active Downmix Control (14/04/04) 
  • Text of ISO/IEC PDTR 23008-13 Implementation Guidelines for MPEG Media Transport (14/05/02) 
  • Text of ISO/IEC 23009-1 2nd edition DAM 1 Extended Profiles and availability time synchronization (14/04/18) 
  • Text of ISO/IEC 23009-1 2nd edition PDAM 2 Spatial Relationship Description, Generalized URL parameters and other extensions (14/04/18) 
  • Text of ISO/IEC PDTR 23009-3 2nd edition DASH Implementation Guidelines (14/04/18) 
  • MPEG vision for Compact Descriptors for Video Analysis (CDVA) (14/04/04) 
  • Plan of FTV Seminar at 109th MPEG Meeting (14/04/04) 
  • Draft Requirements and Explorations for HDR /WCG Content Distribution and Storage (14/04/04) 
  • Working Draft 2 of Internet Video Coding (IVC) (14/04/18) 
  • Internet Video Coding Test Model (ITM) v 9.0 (14/04/18) 
  • Uniform Timeline Alignment (14/04/18) 
  • Plan of Seminar on Hybrid Delivery at the 110th MPEG Meeting (14/04/04) 
  • WD 2 of MPEG User Description (14/04/04)

Thursday, April 24, 2014

Quality of Experience: Advanced Concepts, Applications, and Methods

Quality of Experience
Advanced Concepts, Applications, and Methods

Sebastian Möller and Alexander Raake (Eds.)

  • Develops and applies the definition of Quality of Experience in telecommunication services
  • Includes examples and guidelines for many fields of applications
  • Written and edited by well-known experts in the field
This pioneering book develops definitions and concepts related to Quality of Experience in the context of multimedia- and telecommunications-related applications, systems, and services, and applies these to various fields of communication and media technologies. The editors bring together numerous key protagonists of the new discipline “Quality of Experience” and combine the state-of-the-art knowledge in a single volume.

The Sensory Experience Lab (SELab) also has a chapter in this book, entitled Sensory Experience: Quality of Experience Beyond Audio-Visual, with the following abstract:

Abstract: This chapter introduces the concept of Sensory Experience, which aims to define Quality of Experience (QoE) going beyond audio-visual content. In particular, we show how to utilize sensory effects such as ambient light, scent, wind, or vibration as additional dimensions contributing to the quality of the user experience. To this end, we utilize a standardized representation format for sensory effects that are attached to traditional multimedia resources such as audio, video, and image content. Sensory effects are rendered on special devices (e.g., fans, lights, motion chairs, scent emitters) in synchronization with the traditional multimedia resources and stimulate senses other than hearing and seeing, with the intention of increasing the Quality of Experience (QoE), in this context referred to as Sensory Experience.
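The idea of attaching sensory effects to a media timeline and rendering them in sync can be sketched as follows. This is an illustrative data-model sketch only; the actual standardized representation (MPEG-V's XML-based sensory effect description) is not reproduced here, and all field names are assumptions.

```python
# Illustrative sketch: sensory effects anchored on a media timeline and
# looked up at playback time so a renderer (fan, light, vibration chair)
# can be driven in sync with the audio-visual content.
# Field names and the fixed effect duration are assumptions for illustration.

from dataclasses import dataclass


@dataclass
class SensoryEffect:
    start_s: float    # media time (seconds) at which the effect activates
    kind: str         # e.g. "light", "wind", "vibration", "scent"
    intensity: float  # normalized intensity, 0.0 .. 1.0


def effects_at(effects, media_time_s, duration_s=1.0):
    """Return the effects active at a given media time, assuming each
    effect lasts `duration_s` seconds from its start."""
    return [e for e in effects
            if e.start_s <= media_time_s < e.start_s + duration_s]


if __name__ == "__main__":
    timeline = [SensoryEffect(0.0, "light", 0.5),
                SensoryEffect(2.0, "wind", 0.8)]
    print([e.kind for e in effects_at(timeline, 2.3)])  # ['wind']
```

A real renderer would additionally compensate for device activation latency (a fan needs time to spin up), which is one of the synchronization issues such systems must handle.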