Monday, July 7, 2014

bitmovin White Papers on bitcodin and bitdash for MPEG-DASH

bitmovin recently released two white papers related to MPEG-DASH, describing two major components of any DASH-based ecosystem. The first white paper covers preparing content compliant with MPEG-DASH utilizing cloud infrastructure and is referred to as bitcodin™ (PDF), offering transcoding & streaming as a service (T&SaaS). As such it provides benefits across multiple dimensions:

  • Removes capacity bottlenecks in streaming media workflows. 
  • Scales resources and associated operational costs with demand. 
  • Right-sizes encoding and streaming infrastructure. 
  • Eliminates the need for capital investments in dedicated encoding systems. 
  • Offers full flexibility to choose encoding quality and speed. 
  • Reduces reliance on specific technical encoding/streaming expertise. 

The second white paper is about the client adaptation framework which is essential for every playback device. Therefore, bitdash™ (PDF) is a suite of highly optimized MPEG-DASH clients for the broadest range of platforms and devices, delivering the best streaming performance and user experience, in particular in adverse (mobile) network conditions.

bitdash™ is the result of continued R&D investments and incorporates patent-pending technology, resulting in MPEG-DASH-compliant client solutions that deliver up to 101% higher effective media throughput as well as significantly higher Quality of Experience (QoE) compared to existing adaptive bitrate streaming technologies and clients.

Further information can be found at http://www.bitmovin.net/.


Thursday, June 26, 2014

VideoNext: Design, Quality and Deployment of Adaptive Video Streaming


The workshop co-located with CoNEXT 2014
December 2, 2014
Sydney, Australia

Call for Papers

As we continue to develop our ability to generate, process, and display video at increasingly higher quality, we confront the challenge of streaming that video to the end user. Device heterogeneity in terms of size and processing capabilities, combined with the lack of timing guarantees of packet-switched networks, is forcing the industry to adopt streaming solutions capable of dynamically adapting the video quality in response to resource variability in the end-to-end transport chain. For example, many vendors and providers are already trialing their own proprietary adaptive video streaming platforms, while MPEG has recently ratified a standard, called Dynamic Adaptive Streaming over HTTP (DASH), to facilitate widespread deployment of such technology. However, how to best adapt the video to ensure the highest user quality of experience while consuming minimal network resources poses many fundamental challenges, which are attracting the attention of researchers from both academia and industry. The goal of this workshop is to bring together researchers and developers working on all aspects of adaptive video streaming, with special emphasis on innovative concepts backed up by experimental evidence.
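To make the core adaptation problem concrete, the following Python sketch shows a throughput-based bitrate selection loop of the kind adaptive clients typically run. It is illustrative only: the function names, the EWMA weight, and the safety margin are assumptions, not any standard's or vendor's algorithm.

```python
# Minimal sketch of throughput-based bitrate adaptation: estimate available
# bandwidth from recent segment downloads, then pick the highest sustainable
# representation. All parameter values here are illustrative assumptions.

def smoothed_throughput(samples, alpha=0.3):
    """Exponentially weighted moving average over per-segment throughput samples (kbps)."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = alpha * s + (1 - alpha) * estimate
    return estimate

def select_bitrate(available_bitrates, throughput_estimate, safety=0.8):
    """Pick the highest bitrate affordable at the estimated throughput,
    leaving a safety margin; fall back to the lowest representation."""
    affordable = [b for b in available_bitrates if b <= throughput_estimate * safety]
    return max(affordable) if affordable else min(available_bitrates)

# Example: representations from 500 kbps to 6 Mbps, three recent measurements
bitrates = [500, 1200, 2500, 4000, 6000]
estimate = smoothed_throughput([3200, 2800, 3500])
print(select_bitrate(bitrates, estimate))  # → 2500
```

Real clients additionally consider buffer occupancy, switching stability, and per-device constraints, which is exactly where the open research questions listed below arise.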

Specific areas of interest include, but are not limited to:
  • New metrics for measuring user quality of experience (QoE) for adaptive video streaming
  • Solutions for improving streaming QoE for high-speed user mobility
  • Analysis, modelling, and experimentation of DASH
  • Exploitation of user contexts for improving efficiency of adaptive streaming
  • Big data analytics to assess viewer experience of adaptive video
  • Efficient and fair bandwidth sharing techniques for bottleneck links supporting multiple adaptive video streams
  • Network functions to assist and improve adaptive video streaming
  • Synchronization issues in adaptive video streaming (inter-media, inter-device/destination)
  • Methods for effective simulation or emulation of large scale adaptive video streaming platforms
  • Cloud-assisted adaptive video streaming including encoding, transcoding, and adaptation in general
  • Attack scenarios and solutions for adaptive video streaming
  • Energy-efficient adaptive streaming for resource-constrained mobile devices
  • Reproducible research in adaptive video streaming: datasets, evaluation methods, benchmarking, standardization efforts, open source tools
  • Novel use cases and applications in the area of adaptive video streaming

The workshop is considered an integral part of the CoNEXT 2014 conference. All workshop papers will be published in the same set of proceedings as the main conference and will be available in the ACM Digital Library. Publication at this workshop is not intended to preclude later publication of an extended version of the paper. At least one author of each accepted paper is expected to present the paper at the workshop.

Instructions for Authors

A submission must be no greater than 6 pages in length including all figures, tables, references, appendices, etc., and must be a PDF file of less than 10MB. The review process is single-blind.

Follow the same formatting guidelines as the CoNEXT conference, except that VideoNext has a 6-page limit and a 10 MB file-size limit. See the “Formatting Guidelines” section. Submissions that deviate from these guidelines will be rejected without consideration.

Important dates
  • Paper Submission: August 22, 2014 20:59 PDT
  • Notification of Acceptance: September 30, 2014
  • Camera-ready Papers Due: October 24, 2014
  • Workshop: December 2, 2014

TPC co-chairs
  • Mahbub Hassan, University of New South Wales, Australia
  • Ali C. Begen, Cisco Canada
  • Christian Timmerer, Alpen-Adria-Universität Klagenfurt, Austria

Technical Program Committee
  • Alexander Raake, Deutsche Telekom Labs, Germany
  • Carsten Griwodz, University of Oslo/Simula, Norway
  • Chao Chen, Qualcomm, USA
  • Colin Perkins, University of Glasgow, Scotland
  • Constantine Dovrolis, Georgia Tech, USA
  • Grenville Armitage, Swinburne University of Technology, Australia
  • Imed Bouazizi, Samsung
  • Kuan-Ta Chen, Academia Sinica
  • Magda El Zarki, University of California Irvine, USA
  • Manzur Murshed, Federation University Australia, Australia
  • Pal Halvorsen, University of Oslo/Simula
  • Polychronis Koutsakis, Technical University of Crete, Greece
  • Roger Zimmermann, National University of Singapore, Singapore
  • Saverio Mascolo, University of Bari, Italy
  • Shervin Shirmohammadi, University of Ottawa, Canada
  • Victor Leung, University of British Columbia, Canada

Friday, April 25, 2014

MPEG news: a report from the 108th meeting, Valencia, Spain

This blog post is also available at bitmovin tech blog and SIGMM records.

The 108th MPEG meeting was held at the Palacio de Congresos de Valencia in Spain featuring the following highlights (no worries about the acronyms, this is on purpose and they will be further explained below):
  • Requirements: PSAF, SCC, CDVA
  • Systems: M2TS, MPAF, Green Metadata
  • Video: CDVS, WVC, VCB
  • JCT-VC: SHVC, SCC
  • JCT-3D: MV/3D-HEVC, 3D-AVC
  • Audio: 3D audio 
Opening Plenary of the 108th MPEG meeting in Valencia, Spain.
The official MPEG press release can be downloaded from the MPEG Web site. Some of the above highlighted topics will be detailed in the following and, of course, there’s an update on DASH-related matters at the end.

As indicated above, MPEG is full of (new) acronyms; to help you become familiar with them, I’ve deliberately used them in the overview and will explain each of them below.

PSAF – Publish/Subscribe Application Format

Publish/subscribe corresponds to a new network paradigm related to content-centric networking (or information-centric networking) where the content is addressed by its name rather than location. An application format within MPEG typically defines a combination of existing MPEG tools jointly addressing the needs for a given application domain, in this case, the publish/subscribe paradigm. The current requirements and a preliminary working draft are publicly available.
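To make the paradigm concrete, here is a deliberately minimal, hypothetical Python sketch of name-based publish/subscribe, where delivery is keyed by the content's name rather than a host location. It illustrates the networking idea only; it is not taken from the PSAF requirements or working draft.

```python
# Hypothetical illustration of the publish/subscribe idea behind PSAF:
# consumers subscribe to content names, publishers push data under a name,
# and a broker matches the two. Nothing here is an MPEG-defined interface.

from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # content name -> list of callbacks

    def subscribe(self, name, callback):
        """Register interest in a content name, independent of who publishes it."""
        self.subscribers[name].append(callback)

    def publish(self, name, data):
        """Deliver data to all subscribers of this name; location plays no role."""
        for callback in self.subscribers[name]:
            callback(data)

broker = Broker()
received = []
broker.subscribe("/videos/news/today", received.append)
broker.publish("/videos/news/today", b"segment-0")
print(received)  # → [b'segment-0']
```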

SCC – Screen Content Coding

I introduced this topic in my previous report, and at this meeting the responses to the CfP have been evaluated. In total, seven responses have been received which meet all requirements and, thus, the actual standardization work is transferred to JCT-VC. Interestingly, the results of the CfP are publicly available. Within JCT-VC, a first test model has been defined and core experiments have been established. I will report more on this as an output of the next meetings…

CDVA – Compact Descriptors for Video Analysis

This project has been renamed from compact descriptors for video search to compact descriptors for video analysis and comprises a publicly available vision statement. Interested parties are welcome to join this new activity within MPEG.

M2TS – MPEG-2 Transport Stream

At this meeting, various extensions to M2TS have been defined such as transport of multi-view video coding depth information and extensions to HEVC, delivery of timeline for external data as well as carriage of layered HEVC, green metadata, and 3D audio. Hence, M2TS is still very active and multiple amendments are developed in parallel.

MPAF – Multimedia Preservation Application Format

The committee draft for MPAF has been approved and, in this context, MPEG-7 is extended with additional description schemes.

Green Metadata

Well, this standard does not have its own acronym; it’s simply referred to as MPEG-GREEN. The draft international standard has been approved and national bodies will vote on it at the JTC 1 level. It basically defines metadata that allows clients to operate in an energy-efficient way. It comes along with amendments to M2TS and ISOBMFF that enable the carriage and storage of this metadata.

CDVS – Compact Descriptors for Visual Search

CDVS is at DIS stage and provides improvements on global descriptors as well as non-normative improvements of key-point detection and matching in terms of speedup and memory consumption. As with all standards at DIS stage, national bodies will vote on it at the JTC 1 level. 

What’s new in the video/audio-coding domain?
  • WVC – Web Video Coding: This project reached final draft international standard status with the goal of providing a video-coding standard for Web applications. It basically defines a profile of the MPEG-AVC standard including those tools not encumbered by patents.
  • VCB – Video Coding for Browsers: The committee draft for part 31 of MPEG-4 defines video coding for browsers and basically defines VP8 as an international standard. This also explains the difference from WVC.
  • SHVC – Scalable HEVC extensions: As with SVC, SHVC will be defined as an amendment to HEVC, providing scalable video coding functionality.
  • MV/3D-HEVC, 3D-AVC: These are multi-view and 3D extensions for the HEVC and AVC standards respectively.
  • 3D Audio: Again, no acronym for this standard, although I would prefer 3DA. However, the CD has been approved at this meeting and the plan is to reach DIS at the next meeting. At the same time, the carriage and storage of 3DA are being defined in M2TS and ISOBMFF, respectively. 
Finally, what’s new in the media transport area, specifically DASH and MMT?

As interested readers know from my previous reports, the DASH 2nd edition was approved some time ago. In the meantime, a first amendment to the 2nd edition is at draft amendment state, including additional profiles (mainly adding xlink support) and time synchronization. A second amendment goes to the first ballot stage, referred to as proposed draft amendment, and defines the spatial relationship description, generalized URL parameters, and other extensions. Eventually, these two amendments will be integrated into the 2nd edition, which will become the MPEG-DASH 3rd edition. Also, a corrigendum on the 2nd edition is currently under ballot and new contributions are still coming in, i.e., there is still a lot of interest in DASH. For your information – there will be two DASH-related sessions at Streaming Forum 2014.
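For readers less familiar with DASH internals, segment addressing via URL templates is one of the MPD mechanisms these amendments extend. The sketch below expands a SegmentTemplate media pattern the way a client would after parsing the MPD. The `$RepresentationID$` and `$Number$` identifiers follow ISO/IEC 23009-1, but this is a simplified illustration: it ignores `$Time$`, width formatting such as `$Number%05d$`, and SegmentTimeline handling.

```python
# Illustrative expansion of a (simplified) DASH SegmentTemplate media pattern
# into per-segment request URLs. Not a full ISO/IEC 23009-1 implementation.

def expand_template(media, representation_id, start_number, count):
    """Substitute $RepresentationID$ and $Number$ for `count` segments
    starting at `start_number`, returning the resulting URL list."""
    urls = []
    for n in range(start_number, start_number + count):
        url = media.replace("$RepresentationID$", representation_id)
        url = url.replace("$Number$", str(n))
        urls.append(url)
    return urls

print(expand_template("video/$RepresentationID$/seg-$Number$.m4s", "720p", 1, 3))
# → ['video/720p/seg-1.m4s', 'video/720p/seg-2.m4s', 'video/720p/seg-3.m4s']
```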

On the other hand, MMT’s amendment 1 is currently under ballot, and amendment 2 defines header compression and a cross-layer interface. The latter has been progressed to a study document which will be further discussed at the next meeting. Interestingly, there will be an MMT developers’ day at the 109th MPEG meeting, as 4K/8K UHDTV services based on MMT specifications will be launched in Japan, and implementations of MMT are now under way in Korea and China. The developers’ day will be on July 5 (Saturday), 2014, 10:00–17:00 at the Sapporo Convention Center. Therefore, if you don’t know anything about MMT, the developers’ day is certainly the place to be.

Contact:

Dr. Christian Timmerer
CIO bitmovin GmbH | christian.timmerer@bitmovin.net
Alpen-Adria-Universität Klagenfurt | christian.timmerer@aau.at

What else? That is, some publicly available MPEG output documents… (Dates indicate availability and end of editing period, if applicable, using the following format YY/MM/DD):
  • Text of ISO/IEC 13818-1:2013 PDAM 7 Carriage of Layered HEVC (14/05/02) 
  • WD of ISO/IEC 13818-1:2013 AMD Carriage of Green Metadata (14/04/04) 
  • WD of ISO/IEC 13818-1:2013 AMD Carriage of 3D Audio (14/04/04) 
  • WD of ISO/IEC 13818-1:2013 AMD Carriage of additional audio profiles & levels (14/04/04) 
  • Text of ISO/IEC 14496-12:2012 PDAM 4 Enhanced audio support (14/04/04) 
  • TuC on sample variants, signatures and other improvements for the ISOBMFF (14/04/04) 
  • Text of ISO/IEC CD 14496-22 3rd edition (14/04/04) 
  • Text of ISO/IEC CD 14496-31 Video Coding for Browsers (14/04/11) 
  • Text of ISO/IEC 15938-5:2005 PDAM 5 Multiple text encodings, extended classification metadata (14/04/04) 
  • WD 2 of ISO/IEC 15938-6:201X (2nd edition) (14/05/09) 
  • Text of ISO/IEC DIS 15938-13 Compact Descriptors for Visual Search (14/04/18) 
  • Test Model 10: Compact Descriptors for Visual Search (14/05/02) 
  • WD of ARAF 2nd Edition (14/04/18) 
  • Use cases for ARAF 2nd Edition (14/04/18) 
  • WD 5.0 MAR Reference Model (14/04/18) 
  • Logistic information for the 5th JAhG MAR meeting (14/04/04) 
  • Text of ISO/IEC CD 23000-15 Multimedia Preservation Application Format (14/04/18) 
  • WD of Implementation Guideline of MP-AF (14/04/04) 
  • Requirements for Publish/Subscribe Application Format (PSAF) (14/04/04) 
  • Preliminary WD of Publish/Subscribe Application Format (14/04/04) 
  • WD2 of ISO/IEC 23001-4:201X/Amd.1 Parser Instantiation from BSD (14/04/11) 
  • Text of ISO/IEC 23001-8:2013/DCOR1 (14/04/18) 
  • Text of ISO/IEC DIS 23001-11 Green Metadata (14/04/25) 
  • Study Text of ISO/IEC 23002-4:201x/DAM2 FU and FN descriptions for HEVC (14/04/04) 
  • Text of ISO/IEC 23003-4 CD, Dynamic Range Control (14/04/11) 
  • MMT Developers’ Day in 109th MPEG meeting (14/04/04) 
  • Results of CfP on Screen Content Coding Tools for HEVC (14/04/30) 
  • Study Text of ISO/IEC 23008-2:2013/DAM3 HEVC Scalable Extensions (14/06/06) 
  • HEVC RExt Test Model 7 (14/06/06) 
  • Scalable HEVC (SHVC) Test Model 6 (SHM 6) (14/06/06) 
  • Report on HEVC compression performance verification testing (14/04/25) 
  • HEVC Screen Content Coding Test Model 1 (SCM 1) (14/04/25) 
  • Study Text of ISO/IEC 23008-2:2013/PDAM4 3D Video Extensions (14/05/15) 
  • Test Model 8 of 3D-HEVC and MV-HEVC (14/05/15) 
  • Text of ISO/IEC 23008-3/CD, 3D audio (14/04/11) 
  • Listening Test Logistics for 3D Audio Phase 2 (14/04/04) 
  • Active Downmix Control (14/04/04) 
  • Text of ISO/IEC PDTR 23008-13 Implementation Guidelines for MPEG Media Transport (14/05/02) 
  • Text of ISO/IEC 23009-1 2nd edition DAM 1 Extended Profiles and availability time synchronization (14/04/18) 
  • Text of ISO/IEC 23009-1 2nd edition PDAM 2 Spatial Relationship Description, Generalized URL parameters and other extensions (14/04/18) 
  • Text of ISO/IEC PDTR 23009-3 2nd edition DASH Implementation Guidelines (14/04/18) 
  • MPEG vision for Compact Descriptors for Video Analysis (CDVA) (14/04/04) 
  • Plan of FTV Seminar at 109th MPEG Meeting (14/04/04) 
  • Draft Requirements and Explorations for HDR /WCG Content Distribution and Storage (14/04/04) 
  • Working Draft 2 of Internet Video Coding (IVC) (14/04/18) 
  • Internet Video Coding Test Model (ITM) v 9.0 (14/04/18) 
  • Uniform Timeline Alignment (14/04/18) 
  • Plan of Seminar on Hybrid Delivery at the 110th MPEG Meeting (14/04/04) 
  • WD 2 of MPEG User Description (14/04/04)

Thursday, April 24, 2014

Quality of Experience: Advanced Concepts, Applications, and Methods

Quality of Experience
Advanced Concepts, Applications, and Methods

Sebastian Möller and Alexander Raake (Eds.)

  • Develops and applies the definition of Quality of Experience in telecommunication services
  • Includes examples and guidelines for many fields of applications
  • Written and edited by known experts in the field
This pioneering book develops definitions and concepts related to Quality of Experience in the context of multimedia- and telecommunications-related applications, systems, and services, and applies these to various fields of communication and media technologies. The editors bring together numerous key protagonists of the new discipline “Quality of Experience” and combine state-of-the-art knowledge in a single volume.

The Sensory Experience Lab (SELab) also has a chapter in this book, entitled Sensory Experience: Quality of Experience Beyond Audio-Visual, with the abstract as follows:

Abstract: This chapter introduces the concept of Sensory Experience, which aims to define the Quality of Experience (QoE) going beyond audio-visual content. In particular, we show how to utilize sensory effects such as ambient light, scent, wind, or vibration as additional dimensions contributing to the quality of the user experience. To this end, we utilize a standardized representation format for sensory effects that are attached to traditional multimedia resources such as audio, video, and image content. Sensory effects are rendered on special devices (e.g., fans, lights, motion chairs, scent emitters) in synchronization with the traditional multimedia resources and stimulate senses other than hearing and seeing, with the intention of increasing the Quality of Experience (QoE), in this context referred to as Sensory Experience.
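As a rough illustration of the synchronization idea in the abstract, the following Python sketch fires invented sensory-effect entries against a simulated playback clock. The data layout, device names, and tick-based scheduler are all hypothetical; the standardized MPEG-V representation format the chapter builds on is an XML-based description, not this structure.

```python
# Hypothetical sketch: sensory effects attached to a media timeline are
# triggered in sync with playback. Effect entries and devices are invented
# for illustration; real renderers consume standardized effect metadata.

effects = [
    {"at": 0.0,  "device": "light", "action": "ambient", "value": "warm"},
    {"at": 2.5,  "device": "fan",   "action": "wind",    "value": 0.6},
    {"at": 10.0, "device": "chair", "action": "vibrate", "value": 0.3},
]

def effects_due(effects, last_time, now):
    """Return effects whose timestamp falls in the half-open window (last_time, now]."""
    return [e for e in effects if last_time < e["at"] <= now]

# Simulated playback clock advancing in one-second ticks
fired = []
last = -1.0
for now in range(0, 12):
    for effect in effects_due(effects, last, float(now)):
        fired.append((now, effect["device"]))   # a real system would drive hardware here
    last = float(now)
print(fired)  # → [(0, 'light'), (3, 'fan'), (10, 'chair')]
```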

Tuesday, April 15, 2014

bitmovin: Junior / Senior C++ Software Engineer (m/f)

bitmovin GmbH is an Austrian start-up and research spin-off providing industry-leading over-the-top streaming solutions and cloud-based encoding systems that enable a best-quality media experience for the user. Our team in Klagenfurt (Austria) is searching for a

Junior / Senior C++ Software Engineer (m/f)

to design and develop innovative software systems for multimedia and streaming purposes in
cloud-, Web- and mobile environments.

Our Offering:
  • Working in an innovative international team
  • Organisation with flat hierarchy
  • Opportunity to make an impact on the whole multimedia industry
  • Worldwide customer base
  • Modern office in the Lakeside Science and Technology Park in Klagenfurt
  • Salary above average & collective agreement, depending on qualification
Our Requirements:
  • University degree in Computer Science, Information Technology, etc. or equivalent education (HTL, College, etc.).
  • Junior: Basic knowledge of C++ software development
  • Senior: Several years of experience in C++ software development
  • Knowledge of design patterns, frameworks, development and test environments, Linux, etc.
Further details here. All jobs at bitmovin here.

Monday, April 14, 2014

Special Sessions at QoMEX 2014

QoMEX 2014 is inviting submissions to its special sessions. The purpose of these special sessions is to complement the regular program with new or emerging topics of particular interest to the community.

The submission deadlines and review process for papers in the special sessions are the same as for regular papers. To submit your contribution to a special session, follow the submission process for regular papers and select the session title as one of the paper’s topics.

Important dates:
  • Submission deadline: May 4, 2014
  • Notification of acceptance: June 15, 2014
  • Camera-ready papers: July 13, 2014
  • Workshop dates: September 18-20, 2014

Monday, March 24, 2014

PostDoc position (3 years) at Alpen-Adria-Universität Klagenfurt, Austria



Institute of Information Technology, Multimedia Communication (MMC) Group
(Prof. Hermann Hellwagner) 

The MMC group at Klagenfurt University, Austria, is offering a full, three-year PostDoc position (available now) in a basic research project called CONCERT (http://www.concert-project.org/). Important facts about the project are given in the following. We seek candidates with strong expertise in one or several of the following areas: Multimedia Communication, Machine Learning, Multi-Agent Systems, Uncertainty in Artificial Intelligence (Probabilistic Models, Bayesian Networks, Game Theory). Applications should be sent to Prof. Hermann Hellwagner <hermann.hellwagner@aau.at>.

Title: A Context-Adaptive Content Ecosystem Under Uncertainty (CONCERT)
Duration: 3 years
Website: http://www.concert-project.org/

Partners
  • University College London (UK): Prof. George Pavlou, Dr. Wei Chai (Coordinator)
  • University of Surrey (UK): Dr. Ning Wang
  • Ecole Polytechnique Fédérale de Lausanne (CH): Prof. Pascal Frossard
  • Alpen-Adria-Universität (AAU) Klagenfurt (AT): Prof. Hermann Hellwagner 
Project character and funding

CONCERT is an international basic research project accepted as a CHIST-ERA project under the Call for Proposals “Context- and Content-Adaptive Communication Networks”. CHIST-ERA is an ERA-NET consortium, part of the EC FP7 Programme “Future and Emerging Technologies (FET)” (http://www.chistera.eu/). Funding is provided by the national basic research funding agencies, in Austria the FWF (Austrian Science Fund).

Abstract

The objective of CONCERT is to develop a content ecosystem encompassing all relevant players which will be able to perform intelligent content and network adaptation in highly dynamic conditions under uncertainty. This ecosystem will have as basis emerging information-/content-centric networking technologies which support intrinsic in-network content manipulation. The project will consider uncertainty aspects in the following two application domains: (1) social media networks based on user generated content and (2) CDN-like professional content distribution (Content Distribution Networks). Three dimensions of uncertainties will be addressed: (1) heterogeneous and changing service requirements by end users, (2) threats that may have adverse impacts on the content ecosystem, as well as (3) opportunities that can be exploited by specific players in order to have their costs reduced.

In order to manage and exploit the uncertainty aspects, CONCERT defines a two-dimensional content and network adaptation framework that operates both cross-layer and cross-player. First, the decision on any single adaptation action needs to take into account context information from both the content application layer and the underlying network. Second, we consider joint content and network adaptation in order to simultaneously achieve optimised service performance and network resource utilisation. Finally, some complex uncertainty scenarios require coordinated content and network adaptation across different ecosystem players. In this case, inconsistent or even conflicting adaptation objectives and different levels of context knowledge need to be reconciled and are key research issues.

In order to achieve adaptation solutions capable of coping with different uncertainties, the project will develop advanced learning, decision-making and negotiation techniques. Learning is required for deriving accurate system behavioural patterns according to the acquired context knowledge. This will then drive decision-making functions for taking the most appropriate adaptation actions to address these uncertainties. Negotiation techniques are required for resolving potential tussles between specific content/network adaptation objectives by different players in the content ecosystem. The project will consider both centralised and distributed approaches in which learning and decision-making processes on adaptation actions can be performed either at the central adaptation domain controller or in a decentralised manner across multiple network elements. In the latter case, emerging information-/content-centric networks will become much more intelligent, with content-aware devices performing self-adaptation according to their own context knowledge but through coordination in order to achieve global near-optimality and stability.