Thursday, May 10, 2018

MPEG news: a report from the 122nd meeting, San Diego, CA, USA

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.


The MPEG press release comprises the following topics:
  • Versatile Video Coding (VVC) project starts strongly in the Joint Video Experts Team
  • MPEG issues Call for Proposals on Network-based Media Processing
  • MPEG finalizes 7th edition of MPEG-2 Systems Standard
  • MPEG enhances ISO Base Media File Format (ISOBMFF) with two new features
  • MPEG-G standards reach Draft International Standard for transport and compression technologies

Versatile Video Coding (VVC) – MPEG’s & VCEG’s new video coding project starts strong

The Joint Video Experts Team (JVET), a collaborative team formed by MPEG and ITU-T Study Group 16’s VCEG, commenced work on a new video coding standard referred to as Versatile Video Coding (VVC). The goal of VVC is to provide significant improvements in compression performance over the existing HEVC standard (i.e., typically twice as much as before), and it is expected to be completed in 2020. The main target applications and services include, but are not limited to, 360-degree and high-dynamic-range (HDR) videos. In total, JVET evaluated responses from 32 organizations using formal subjective tests conducted by independent test labs. Interestingly, some proposals demonstrated compression efficiency gains of typically 40% or more when compared to HEVC, and particular effectiveness was shown on ultra-high-definition (UHD) video test material. Thus, we may expect compression efficiency gains well beyond the targeted 50% for the final standard.

Research aspects: Compression tools and everything around them, including their objective and subjective assessment. The main application areas are clearly 360-degree and HDR video. Watch out for conferences like PCS and ICIP (later this year), which will be full of papers referencing VVC. Interestingly, VVC comes with a first draft, a test model for simulation experiments, and a technology benchmark set, which is useful and important for any developments both inside and outside MPEG as it allows for reproducibility.

MPEG issues Call for Proposals on Network-based Media Processing

This Call for Proposals (CfP) addresses advanced media processing technologies such as network stitching for VR services, super-resolution for enhanced visual quality, transcoding, and viewport extraction for 360-degree video within the network environment, allowing service providers and end users to describe media processing operations that are to be performed by the network. The aim of network-based media processing (NBMP) is thus to allow end-user devices to offload certain kinds of processing to the network. To this end, NBMP describes the composition of network-based media processing services based on a set of media processing functions and makes them accessible through Application Programming Interfaces (APIs). Responses to the NBMP CfP will be evaluated on the weekend prior to the 123rd MPEG meeting in July 2018.
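
To make the offloading idea more concrete, here is a minimal sketch of how an end-user device might hand a processing workflow to such a network API. Everything in it (the endpoint, the workflow schema, the function names) is a hypothetical illustration on my part; the actual NBMP APIs are precisely what the CfP is soliciting.

```python
import requests  # pip install requests

# Hypothetical NBMP-style endpoint and workflow description; the actual
# APIs are still to be defined via the CfP at the time of writing.
NBMP_ENDPOINT = "https://media-processing.example.com/v1/workflows"

workflow = {
    "name": "360-viewport-extraction",
    "input": {"uri": "https://origin.example.com/360video.mp4"},
    "tasks": [
        # Each task names a media processing function the network exposes.
        {"function": "stitching", "params": {"projection": "equirectangular"}},
        {"function": "viewport-extraction", "params": {"yaw": 30, "pitch": 0, "fov": 90}},
        {"function": "transcoding", "params": {"codec": "hevc", "bitrate_kbps": 4000}},
    ],
    "output": {"uri": "https://cdn.example.com/viewport.mp4"},
}

# The end-user device offloads the processing by posting the workflow
# description; the network then executes the composed functions.
response = requests.post(NBMP_ENDPOINT, json=workflow, timeout=10)
response.raise_for_status()
print("Workflow accepted:", response.json().get("id"))
```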

Research aspects: This project reminds me a lot of what has been done in the past in MPEG-21, specifically Digital Item Adaptation (DIA) and Digital Item Processing (DIP). The main difference is that MPEG targets APIs rather than pure metadata formats, which is a step in the right direction as APIs can be implemented and used right away. NBMP will be particularly interesting in the context of new networking approaches including, but not limited to, software-defined networking (SDN), information-centric networking (ICN), mobile edge computing (MEC), fog computing, and related aspects in the context of 5G.

7th edition of MPEG-2 Systems Standard and ISO Base Media File Format (ISOBMFF) with two new features

More than 20 years after its inception, the development of MPEG-2 systems technology (i.e., transport/program streams) continues. New features include support for: (i) JPEG 2000 video with 4K resolution and ultra-low latency, (ii) media-orchestration-related metadata, (iii) sample variants, and (iv) HEVC tiles.

The partial file format enables the description of an ISOBMFF file partially received over lossy communication channels. It provides tools to describe the received data and to document transmission information, such as received or lost byte ranges and whether the corrupted/lost bytes are present in the file, as well as repair information, such as the location of the source file, possible byte offsets in that source, and the byte stream position at which a parser can try processing a corrupted file. Depending on the communication channel, this information may be set up by the receiver or through out-of-band means.
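
As a rough illustration of the kind of bookkeeping this format standardizes, consider the following sketch, which records received byte ranges and derives the lost ones a repair step could re-fetch. The class and field names are my own illustrative choices, not the normative boxes defined by the standard.

```python
from dataclasses import dataclass, field

@dataclass
class PartialFileInfo:
    file_size: int
    received: list = field(default_factory=list)  # list of (start, end) byte ranges
    source_uri: str = ""                          # repair info: where to re-fetch

    def add_received(self, start: int, end: int) -> None:
        self.received.append((start, end))
        self.received.sort()

    def lost_ranges(self):
        """Derive the lost byte ranges from the received ones."""
        lost, cursor = [], 0
        for start, end in self.received:
            if start > cursor:
                lost.append((cursor, start - 1))
            cursor = max(cursor, end + 1)
        if cursor < self.file_size:
            lost.append((cursor, self.file_size - 1))
        return lost

info = PartialFileInfo(file_size=1000, source_uri="https://origin.example.com/a.mp4")
info.add_received(0, 499)
info.add_received(700, 999)
print(info.lost_ranges())  # [(500, 699)] -> a range a repair step could re-fetch
```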

ISOBMFF's sample variants (2nd edition) are typically used to provide forensic information in the rendered sample data that can, for example, identify the specific Digital Rights Management (DRM) client which has decrypted the content. This variant framework is intended to be fully compatible with MPEG’s Common Encryption (CENC) and agnostic to the particular forensic marking system used.

Research aspects: MPEG systems standards are mainly relevant for multimedia systems research in all its facets. The partial file format is specifically interesting as it targets scenarios with lossy communication channels.

MPEG-G standards reach Draft International Standard for transport and compression technologies

MPEG-G provides a set of standards enabling interoperability for applications and services dealing with high-throughput deoxyribonucleic acid (DNA) sequencing. At its 122nd meeting, MPEG promoted its core set of MPEG-G specifications, i.e., transport and compression technologies, to Draft International Standard (DIS) stage. These parts of the standard provide new transport technologies (ISO/IEC 23092-1) and compression technologies (ISO/IEC 23092-2) supporting rich functionality for the access and transport, including streaming, of genomic data by interoperable applications. Reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5) will reach this stage within the next 12 months.

Research aspects: The main focus of this work item is compression, and transport is still in its infancy. Therefore, research on the actual delivery of compressed DNA information, as well as its processing, is solicited.

What else happened at MPEG122?

  • The Requirements group is exploring new video coding tools dealing with low complexity and process enhancements.
  • The activity around the coded representation of neural networks has defined a set of vital use cases and is now soliciting test data until the next meeting.
  • The MP4 registration authority (MP4RA) has a new awesome web site http://mp4ra.org/.
  • MPEG-DASH is finally moving forward with the 3rd edition, comprising a consolidated version of recent amendments and corrigenda.
  • CMAF started an exploration on multi-stream support, which could be relevant for tiled streaming and multi-channel audio.
  • OMAF kicked off its activity towards a 2nd edition enabling support for 3DoF+ and social VR, with the plan of going to Committee Draft (CD) in Oct’18. Additionally, a test framework has been proposed, which allows assessing the performance of various CMAF tools. Its main focus is on video, but MPEG’s audio subgroup has a similar framework to enable subjective testing. It could be interesting to see these two frameworks combined in one way or the other.
  • The MPEG-I architectures (yes, plural) are becoming mature, and I think this technical report will become available very soon. In terms of video, MPEG-I looks more closely at 3DoF+, defining common test conditions and planning a call for proposals (CfP) for MPEG123 in Ljubljana, Slovenia. Additionally, explorations for 6DoF and for the compression of dense representations of light fields are ongoing and have just been started, respectively.
  • Finally, point cloud compression (PCC) is in its hot phase of core experiments for various coding tools, resulting in updated versions of the test model and working draft.
Research aspects: In this section I would like to focus on DASH, CMAF, and OMAF. Multi-stream support, as mentioned above, is relevant for tiled streaming and multi-channel audio, which have been recently studied in the literature and are also highly relevant for industry. The efficient storage and streaming of such content within the file format is an important aspect that is often underrepresented in both research and standardization. The goal here is to keep the overhead low while maximizing the utility of the format to enable certain functionalities. OMAF now targets the social VR use case, which has been discussed in the research literature for a while and, finally, makes its way into standardization. An important aspect here is both user experience and quality of experience, which requires intensive subjective testing.

Finally, on May 10 MPEG will celebrate 30 years, as its first meeting dates back to 1988 in Ottawa, Canada, with around 30 attendees. The 122nd meeting had more than 500 attendees, and MPEG has around 20 active work items. A total of more than 170 standards have been produced (that's approx. six standards per year), where some standards have up to nine editions, like the HEVC standard. Overall, MPEG is responsible for more than 23% of all JTC 1 standards, and some of them show extraordinary longevity regarding extensions, e.g., MPEG-2 systems (24 years), MPEG-4 file format (19 years), and AVC (15 years). MPEG standards serve billions of users (e.g., MPEG-1 video, MP2, MP3, AAC, MPEG-2, AVC, ISOBMFF, DASH). Five standards have received Emmy Awards in the past (MPEG-1, MPEG-2, AVC (2x), and HEVC).
[Figure: Tag cloud generated from all existing MPEG press releases.]
Thus, happy birthday, MPEG! In today's society, turning 30 marks the start of the high-performance era, basically the time of "compression", i.e., we apply everything we have learnt and live it out fully. A truly optimistic perspective for our generation X (millennials) standards body!

Wednesday, April 11, 2018

Guest Speaker at Florida Atlantic University: A Framework for Adaptive Delivery of Omnidirectional Video

A Framework for Adaptive Delivery of Omnidirectional Video

When: April 13, 2018
Where: Florida Atlantic University, FL, USA

Abstract: Omnidirectional or 360-degree videos are considered a next step towards a truly immersive media experience. Such videos allow the user to change her/his viewing direction while consuming the video. The download-and-play paradigm is replaced by streaming, and the content is hosted solely within the cloud. This talk addresses the need for a scientific framework enabling the adaptive delivery of omnidirectional video within heterogeneous environments. We consider the state-of-the-art techniques for adaptive streaming over HTTP and extend them towards omnidirectional/360-degree videos. In particular, we review the encoding and adaptive streaming options, and present preliminary results reported in the literature. Finally, we provide an overview of the ongoing standardization efforts and highlight the major open issues.
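
To give a flavor of the viewport-dependent streaming options discussed in the talk, here is a minimal sketch of tile-based adaptation: tiles within the current viewport get a high-quality representation and the remaining tiles a low-quality one, subject to an overall bandwidth budget. All tile names, bitrates, and the fallback policy are illustrative assumptions, not material from the talk.

```python
# Illustrative per-representation bitrates (kbps).
HIGH_KBPS, LOW_KBPS = 2000, 400

def select_tile_bitrates(tiles, viewport_tiles, budget_kbps):
    """Assign per-tile bitrates, downgrading viewport tiles if the budget is exceeded."""
    selection = {t: (HIGH_KBPS if t in viewport_tiles else LOW_KBPS) for t in tiles}
    # If the sum exceeds the budget, fall back to low quality tile by tile.
    for t in sorted(viewport_tiles):
        if sum(selection.values()) <= budget_kbps:
            break
        selection[t] = LOW_KBPS
    return selection

tiles = [f"tile_{i}" for i in range(8)]    # 8 tiles covering the sphere
viewport = {"tile_2", "tile_3"}            # tiles the user currently sees
print(select_tile_bitrates(tiles, viewport, budget_kbps=6000))
```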

The PDF of the paper providing further details, as well as the slides of an earlier talk this year, are available here.

Sunday, March 25, 2018

IEEE MIPR'18: Automated Objective and Subjective Evaluation of HTTP Adaptive Streaming Systems

Automated Objective and Subjective Evaluation of HTTP Adaptive Streaming Systems

Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin), Anatoliy Zabrovskiy (Petrozavodsk State University / Alpen-Adria-Universität Klagenfurt), and Ali C. Begen (Ozyegin University / Networked Media)

Invited paper at IEEE MIPR 2018

PDF available here (coming soon)

Abstract: Streaming audio and video content currently accounts for the majority of the internet traffic and is typically deployed over the top of the existing infrastructure. We are facing the challenge of a plethora of media players and adaptation algorithms showing different behavior, but we lack a common framework for both objective and subjective evaluation of such systems. This paper aims to close this gap by (i) proposing such a framework, (ii) describing its architecture, (iii) providing an example evaluation, and (iv) discussing open issues.
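
To illustrate the objective side of such an evaluation, the sketch below derives common streaming metrics (average bitrate, quality switches, stalling) from a player event log. The log format is a hypothetical stand-in; the paper's framework defines its own architecture and metrics around this basic idea.

```python
# A hypothetical player event log: (timestamp_s, event_type, value).
events = [
    (0.0, "bitrate", 1500), (4.0, "bitrate", 3000),
    (9.5, "stall_start", None), (11.0, "stall_end", None),
    (12.0, "bitrate", 1500),
]

bitrates = [v for _, e, v in events if e == "bitrate"]
switches = sum(1 for a, b in zip(bitrates, bitrates[1:]) if a != b)
stall_starts = [t for t, e, _ in events if e == "stall_start"]
stall_ends = [t for t, e, _ in events if e == "stall_end"]
stall_time = sum(end - start for start, end in zip(stall_starts, stall_ends))

print(f"avg bitrate: {sum(bitrates)/len(bitrates):.0f} kbps")
print(f"quality switches: {switches}, stalls: {len(stall_starts)}, "
      f"total stall time: {stall_time:.1f} s")
```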


Sunday, March 18, 2018

HVEI'18: A Framework for Adaptive Delivery of Omnidirectional Video

A Framework for Adaptive Delivery of Omnidirectional Video

Christian Timmerer (Alpen-Adria-Universität Klagenfurt / Bitmovin) and Ali C. Begen (Ozyegin University / Networked Media)

Abstract: Omnidirectional or 360-degree videos are considered a next step towards a truly immersive media experience. Such videos allow the user to change her/his viewing direction while consuming the video. The download-and-play paradigm (including DVD and Blu-ray) is replaced by streaming, and the content is hosted solely within the cloud. This paper addresses the need for a scientific framework enabling the adaptive delivery of omnidirectional video within heterogeneous environments. We consider the state-of-the-art techniques for adaptive streaming over HTTP and extend them towards omnidirectional/360-degree videos. In particular, we review the encoding and adaptive streaming options, and present preliminary results reported in the literature. Finally, we provide an overview of the ongoing standardization efforts and highlight the major open issues.


Wednesday, March 7, 2018

PostDoc Assistant (tenure track) at Alpen-Adria-Universität Klagenfurt, Faculty of Technical Sciences

Alpen-Adria-Universität Klagenfurt announces the following open position in compliance with § 107 para. 1 Universities Act 2002:

PostDoc Assistant (tenure track)
[URL]

at the Faculty of Technical Sciences. This is a full-time position (initial employment limited to 6 years) with the option of concluding a qualification agreement (promotion to Assistant Professor). Upon fulfilling the qualification agreement, the position progresses from Assistant to Associate Professor (permanent employment). The starting date is the earliest possible one.

This opening is aimed exclusively at women and is part of a package of measures to increase the proportion of women in professorships and tenure track positions at the Faculty of Technical Sciences. The successful applicant is expected to be assigned to one of the working groups of the 9 departments of the Faculty of Technical Sciences in order to ensure synergies in research and teaching. The departments cover the following fields:
  • Computer Science
  • Didactics of Computer Science
  • Didactics of Mathematics
  • Information Technology
  • Mathematics
  • Statistics 
Further information about the faculty, its departments and their working groups is available at www.aau.at/tewi.

Tasks and Responsibilities
Participation in the department's research and teaching tasks, including
  • independent research and further development of the candidate's scientific qualification to the level required for Associate Professorship,
  • graduate and undergraduate teaching, examination activities, and supervision of students, 
  • preparation of grant applications and management of research projects, 
  • publications and active participation in international conferences, 
  • establishing international scientific contacts, and 
  • participation in administration, in university committees, and in public relations activities. 
Required Qualifications
  • PhD in one of the above enumerated research fields 
  • Outstanding research achievements and scientific publications 
  • Potential for future scientific work 
  • Teaching experience (at university level) and didactic competence 
  • Excellent English language skills 
Candidates must meet the required qualifications by April 4th 2018 at the latest.

Additional Desired Qualifications
  • Pertinent international experience or practical experience 
  • Embedding in the international research community 
  • Experience in grant applications and project management 
  • Scientific compatibility with at least one of the faculty's research groups 
  • Experience with and interest in interdisciplinary projects 
  • Communication and presentation skills 
  • Leadership, organisational competence, and ability to cooperate in a team 
  • Experience in administration of university departments and in committee work 
  • German language skills 
German language skills are not a formal prerequisite, but proficiency at level B2 is expected within two years.

People with disabilities or chronic diseases, who fulfill the requirements, are particularly encouraged to apply.

Salary and Application

Minimum gross salary for this position is € 51,955.40 per annum (§ 27 Uni-KV B1 lit b), € 61,441.80 after promotion to Assistant Professor (§ 27 Uni-KV A2) and € 66,619.00 after promotion to Associate Professor.

General information for applicants is available on www.aau.at/jobs/information.

We welcome applications in English with the usual documents including three references (addresses of persons who can be contacted by the Alpen-Adria-Universität for information) by April 4th 2018. Applications must be submitted online via www.aau.at/obf (please indicate reference code 711/17). In the cover letter, the research area should be mentioned.

For further information, please contact Assoc. Prof. Dr. Angelika Wiegele, e-mail: frauenplus.tewi@aau.at.

Travel and accommodation costs incurred during the application process will not be refunded.

Tuesday, February 13, 2018

MPEG news: a report from the 121st meeting, Gwangju, Korea

The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.

The MPEG press release comprises the following topics:
  • Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level
  • MPEG-G standards reach Committee Draft for metadata and APIs
  • MPEG issues Calls for Visual Test Material for Immersive Applications
  • Internet of Media Things (IoMT) reaches Committee Draft level
  • MPEG finalizes its Media Orchestration (MORE) standard
At the end I will also briefly summarize what else happened with respect to DASH, CMAF, OMAF as well as discuss future aspects of MPEG.

Compact Descriptors for Video Analysis (CDVA) reaches Committee Draft level

The Committee Draft (CD) for CDVA has been approved at the 121st MPEG meeting, which is the first formal step of the ISO/IEC approval process for a new standard. This will become a new part of MPEG-7 to support video search and retrieval applications (ISO/IEC 15938-15).

Managing and organizing the quickly increasing volume of video content is a challenge for many industry sectors, such as media and entertainment or surveillance. One example task is scalable instance search, i.e., finding content containing a specific object instance or location in a very large video database. This requires video descriptors which can be efficiently extracted, stored, and matched. Standardization enables extracting interoperable descriptors on different devices and using software from different providers, so that only the compact descriptors instead of the much larger source videos can be exchanged for matching or querying. The CDVA standard specifies descriptors that fulfil these needs and includes (i) the components of the CDVA descriptor, (ii) its bitstream representation and (iii) the extraction process. The final standard is expected to be finished in early 2019.

CDVA introduces a new descriptor based on features which are output from a Deep Neural Network (DNN). CDVA is robust against viewpoint changes and moderate transformations of the video (e.g., re-encoding, overlays), and it supports partial matching and temporal localization of the matching content. The CDVA descriptor has a typical size of 2–4 KBytes per second of video. For typical test cases, it has been demonstrated to reach a correct matching rate of 88% (at a 1% false matching rate).
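
The matching idea behind such descriptors can be sketched as follows: compare compact per-segment feature vectors with a similarity measure and report temporally localized matches above a threshold. The vectors below are random stand-ins for DNN-derived CDVA descriptors; the dimensionality and threshold are illustrative, not taken from the standard.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = rng.normal(size=(5, 128))       # 5 one-second segments, 128-dim features
database = rng.normal(size=(60, 128))   # descriptors for a 60-second video
database[20:25] = query + 0.1 * rng.normal(size=(5, 128))  # embed a true match

THRESHOLD = 0.8  # illustrative; tuning it trades correct vs. false matches
for qi, qvec in enumerate(query):
    for di, dvec in enumerate(database):
        if cosine(qvec, dvec) > THRESHOLD:
            # Reporting the index provides the temporal localization.
            print(f"query segment {qi} matches database second {di}")
```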

Research aspects: There are probably endless research aspects in the visual descriptor space, ranging from validation of the results achieved so far to further improving informative aspects with the goal of increasing the correct matching rate (and consequently decreasing the false matching rate). In general, however, the question is whether there's a need for such descriptors in an era of bandwidth/storage/computing over-provisioning and the rising usage of artificial intelligence techniques such as machine learning and deep learning.

MPEG-G standards reach Committee Draft for metadata and APIs

In my previous report I introduced the MPEG-G standard for compression and transport technologies of genomic data. At the 121st MPEG meeting, metadata and APIs reached CD level. The former (metadata) provides relevant information associated with genomic data, and the latter (APIs) allows for building interoperable applications capable of manipulating MPEG-G files. Additional standardization plans for MPEG-G include the CDs for reference software (ISO/IEC 23092-4) and conformance (ISO/IEC 23092-5), which are planned to be issued at the next (122nd) MPEG meeting with the objective of producing Draft International Standards (DIS) at the end of 2018.

Research aspects: Metadata typically enables certain functionality, which can be tested and evaluated against requirements. APIs allow building applications and services on top of the underlying functions, which could be a driver for research projects to make use of such APIs.

MPEG issues Calls for Visual Test Material for Immersive Applications

I have reported about the Omnidirectional Media Format (OMAF) in my previous report. At the 121st MPEG meeting, MPEG was working on extending OMAF functionalities to allow the modification of viewing positions, e.g., in case of head movements when using a head-mounted display, or for use with other forms of interactive navigation. Unlike OMAF which only provides 3 degrees of freedom (3DoF) for the user to view the content from a perspective looking outwards from the original camera position, the anticipated extension will also support motion parallax within some limited range which is referred to as 3DoF+. In the future with further enhanced technologies, a full 6 degrees of freedom (6DoF) will be achieved with changes of viewing position over a much larger range. To develop technology in these domains, MPEG has issued two Calls for Test Material in the areas of 3DoF+ and 6DoF, asking owners of image and video material to provide such content for use in developing and testing candidate technologies for standardization. Details about these calls can be found at https://mpeg.chiariglione.org/.

Research aspects: The good thing about test material is that it allows for reproducibility, which is an important aspect within the research community. Thus, it is more than appreciated that MPEG issues such calls, and let's hope that this material will become publicly available. Typically, this kind of visual test material targets coding, but it would also be interesting to have such test content for storage and delivery.

Internet of Media Things (IoMT) reaches Committee Draft level

The goal of IoMT is to facilitate the large-scale deployment of distributed media systems with interoperable audio/visual data and metadata exchange. This standard specifies APIs providing media things (i.e., cameras/displays and microphones/loudspeakers, possibly capable of significant processing power) with the capability of being discovered, setting up ad-hoc communication protocols, exposing usage conditions, and providing media and metadata as well as services processing them. IoMT APIs encompass a large variety of devices, not just connected cameras and displays but also sophisticated devices such as smart glasses, image/speech analyzers, and gesture recognizers. IoMT enables the expression of the economic value of resources (media and metadata) and of associated processing in terms of digital tokens leveraged by the use of blockchain technologies.
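
As an illustration of the discovery aspect, the sketch below shows a media thing exposing a machine-readable self-description that other things could query before setting up communication. The schema is purely hypothetical and not the normative IoMT API.

```python
import json

class MediaThing:
    """A media thing that can answer discovery requests about itself."""

    def __init__(self, thing_id, kind, capabilities, usage_conditions):
        self.description = {
            "id": thing_id,
            "kind": kind,                  # e.g., camera, display, analyzer
            "capabilities": capabilities,  # media formats, processing functions
            "usageConditions": usage_conditions,
        }

    def describe(self) -> str:
        """Answer a discovery request with a JSON self-description."""
        return json.dumps(self.description)

camera = MediaThing(
    thing_id="cam-42",
    kind="camera",
    capabilities={"video": ["h264", "hevc"], "analysis": ["object-detection"]},
    usage_conditions={"maxConcurrentClients": 4},
)
print(camera.describe())
```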

Research aspects: The main focus of IoMT is APIs, which provide easy and flexible access to the underlying devices' functionality and, thus, are an important factor to enable research within this interesting domain. For example, using these APIs to enable communication among these various media things could bring up new forms of interaction with these technologies.

MPEG finalizes its Media Orchestration (MORE) standard

MPEG "Media Orchestration" (MORE) standard reached Final Draft International Standard (FDIS), the final stage of development before being published by ISO/IEC. The scope of the Media Orchestration standard is as follows:
  • It supports the automated combination of multiple media sources (i.e., cameras, microphones) into a coherent multimedia experience.
  • It supports rendering multimedia experiences on multiple devices simultaneously, again giving a consistent and coherent experience.
  • It contains tools for orchestration in time (synchronization) and space.
MPEG expects the Media Orchestration standard to be especially useful in immersive media settings. This applies notably to social virtual reality (VR) applications, where people share a VR experience and are able to communicate about it. Media Orchestration is expected to allow synchronizing the media experience for all users and to give them a spatially consistent experience, as it is important for a social VR user to be able to understand when other users are looking at them.
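
To illustrate orchestration in time, the sketch below shows how devices with skewed local clocks can agree on a common media timeline once each has estimated its offset to a shared reference clock (e.g., via an NTP-like exchange). This is only the basic idea; the standard defines its own metadata and protocols for it.

```python
import time

REFERENCE_EPOCH = time.time()  # stand-in for the shared session start announced by an orchestrator

class SyncedPlayer:
    def __init__(self, local_skew_s: float):
        self.local_skew_s = local_skew_s      # how far this device's clock is off
        self.clock_offset_s = -local_skew_s   # correction, estimated via an NTP-like exchange

    def local_now(self) -> float:
        return time.time() + self.local_skew_s  # simulated skewed local clock

    def media_time(self) -> float:
        """Position on the common media timeline, corrected for clock skew."""
        return self.local_now() + self.clock_offset_s - REFERENCE_EPOCH

device_a = SyncedPlayer(local_skew_s=+0.250)  # clock runs 250 ms fast
device_b = SyncedPlayer(local_skew_s=-0.100)  # clock runs 100 ms slow

time.sleep(0.1)
# Despite very different local clocks, both devices agree on the media time.
print(f"A: {device_a.media_time():.3f} s, B: {device_b.media_time():.3f} s")
```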

Research aspects: This standard enables the social multimedia experience proposed in the literature. Interestingly, the W3C is working on something similar, referred to as the timing object, and it would be interesting to see whether these approaches have some commonalities.

What else happened at the MPEG meeting?

DASH is fully in maintenance mode, and we are still waiting for the 3rd edition, which is supposed to be a consolidation of existing corrigenda and amendments. Currently, only minor extensions are proposed, and conformance/reference software is being updated. Similar things can be said about CMAF, where we have one amendment and one corrigendum under development. Additionally, MPEG is working on CMAF conformance. OMAF reached FDIS at the last meeting, and MPEG is now also working on reference software and conformance. It is expected that in the future we will see additional standards and/or technical reports defining/describing how to use CMAF and OMAF in DASH.

Regarding the future video codec, the call for proposals has been out since the last meeting, as announced in my previous report, and responses are due for the next meeting. Thus, it is expected that the 122nd MPEG meeting will be the place to be in terms of MPEG’s future video codec. Speaking about the future, shortly after the 121st MPEG meeting, Leonardo Chiariglione published a blog post entitled “a crisis, the causes and a solution”, which is related to HEVC licensing, the Alliance for Open Media (AOM), and possible future options. The blog post certainly caused some reactions within the video community at large, and I think this was also intended. Let’s hope it will galvanize the video industry -- not to push the button -- but to start addressing and resolving the issues. As pointed out in one of my other blog posts about what to care about in 2018, the upcoming MPEG meeting in April 2018 is certainly a place to be. Additionally, that post highlights some conferences related to various aspects also discussed in MPEG, which I'd like to republish here:
  • QoMEX -- Int'l Conf. on Quality of Multimedia Experience -- will be hosted in Sardinia, Italy from May 29-31; it is THE conference to be at for QoE of multimedia applications and services. Submission deadline is January 15/22, 2018.
  • MMSys -- Multimedia Systems Conf. -- and specifically Packet Video, which will be on June 12 in Amsterdam, The Netherlands. Packet Video is THE adaptive streaming scientific event 2018. Submission deadline is March 1, 2018.
  • Additionally, you might be interested in ICME (July 23-27, 2018, San Diego, USA), ICIP (October 7-10, 2018, Athens, Greece; specifically in the context of video coding), and PCS (June 24-27, 2018, San Francisco, CA, USA; also in the context of video coding).
  • The DASH-IF academic track hosts special events at MMSys (Excellence in DASH Award) and ICME (DASH Grand Challenge).
  • MIPR -- 1st Int'l Conf. on Multimedia Information Processing and Retrieval -- will be in Miami, Florida, USA from April 10-12, 2018. It has a broad range of topics including networking for multimedia systems as well as systems and infrastructures.

Sunday, January 14, 2018

Delivering Traditional and Omnidirectional Media

This tutorial will be given at the following events:



Abstract

Universal media access, as proposed in the late 90s, is now closer to reality. Users can generate, distribute, and consume almost any media content, anywhere, anytime, and with/on any device. A major technical breakthrough was adaptive streaming over HTTP, resulting in the standardization of MPEG-DASH, which is now successfully deployed on most platforms. The next challenge in adaptive media streaming is virtual reality applications and, specifically, omnidirectional (360°) media streaming.
This tutorial first presents a detailed overview of adaptive streaming of both traditional and omnidirectional media, and focuses on the basic principles and paradigms for adaptive streaming. New ways to deliver such media are explored and industry practices are presented. The tutorial then continues with an introduction to the fundamentals of communications over 5G and looks into mobile multimedia applications that are newly enabled or dramatically enhanced by 5G.
A dedicated section in the tutorial covers the much-debated issues related to quality of experience. Additionally, the tutorial provides insights into the standards, open research problems and various efforts that are underway in the streaming industry.

Learning Objectives

Upon attending this tutorial, the participants will have an overview and understanding of the following topics:
  • Principles of HTTP adaptive streaming for the Web/HTML5
  • Principles of omnidirectional (360) media delivery
  • Content generation, distribution and consumption workflows
  • Standards and emerging technologies, new delivery schemes in the adaptive streaming space
  • Measuring, quantifying and improving quality of experience
  • Fundamental technologies of 5G
  • Features and services enabled or enhanced by 5G
  • Current and future research on delivering traditional and omnidirectional media

Table of Contents

Part I: Streaming (Presented by Dr. Begen and Dr. Timmerer)
  • Survey of well-established streaming solutions (DASH, CMAF and Apple HLS)
  • HTML5 video and media extensions
  • Multi-bitrate encoding, and encapsulation and encryption workflows
  • Common issues in scaling and improving quality, multi-screen/hybrid delivery
  • Acquisition, projection, coding and packaging of 360 video
  • Delivery, decoding and rendering methods
  • The developing MPEG-OMAF and MPEG-I standards
Part II: Communications over 5G (Presented by Dr. Ma and Dr. Begen)
  • 5G fundamentals: radio access and core network
  • Multimedia signal processing and communications
  • Emerging mobile multimedia use cases
  • Detailed analysis for selected use cases
  • Improving QoE

Speakers


Ali C. Begen recently joined the computer science department at Ozyegin University. Previously, he was a research and development engineer at Cisco, where he architected, designed, and developed algorithms, protocols, products, and solutions in the service provider and enterprise video domains. Currently, in addition to teaching and research, he provides consulting services to industrial, legal, and academic institutions through Networked Media, a company he co-founded. Begen holds a Ph.D. degree in electrical and computer engineering from Georgia Tech. He has received a number of scholarly and industry awards, and he holds editorial positions in prestigious magazines and journals in the field. He is a senior member of the IEEE and a senior member of the ACM. In January 2016, he was elected as a distinguished lecturer by the IEEE Communications Society. Further information on his projects, publications, talks, teaching, standards, and professional activities can be found at http://ali.begen.net.

Liangping Ma is with InterDigital, Inc., San Diego, CA. He is an IEEE Communications Society Distinguished Lecturer focusing on 5G technologies and standards, video communication, and cognitive radios. He is an InterDigital delegate to the 3GPP New Radio standards. His current research interests include various aspects of ultra-reliable and low-latency communication, such as channel coding, multiple access, and resource allocation. Previously, he led research on Quality of Experience (QoE) driven system optimization for video streaming and interactive video communication. Prior to joining InterDigital in 2009, he was with San Diego Research Center and Argon ST (acquired by Boeing), where he led research on cognitive radios and wireless sensor networks and served as the principal investigator of two projects supported by the Department of Defense and the National Science Foundation, respectively. He is the co-inventor of more than 40 patents and the author/co-author of more than 50 journal and conference papers. He has been the Chair of the San Diego Chapter of the IEEE Communications Society since 2014. He received his Ph.D. from the University of Delaware in 2004 and his B.S. from Wuhan University, China, in 1998.

Christian Timmerer received his M.Sc. (Dipl.-Ing.) in January 2003 and his Ph.D. (Dr.techn.) in June 2006 (for research on the adaptation of scalable multimedia content in streaming and constrained environments), both from the Alpen-Adria-Universität (AAU) Klagenfurt. He joined the AAU in 1999 (as a system administrator) and is currently an Associate Professor at the Institute of Information Technology (ITEC) within the Multimedia Communication Group. His research interests include immersive multimedia communications, streaming, adaptation, quality of experience, and sensory experience. He was the general chair of WIAMIS 2008, QoMEX 2013, and MMSys 2016, and has participated in several EC-funded projects, notably DANAE, ENTHRONE, P2P-Next, ALICANTE, SocialSensor, COST IC1003 QUALINET, and ICoSOLE. He also participated in ISO/MPEG work for several years, notably in the area of MPEG-21, MPEG-M, MPEG-V, and MPEG-DASH, where he also served as a standard editor. In 2012, he co-founded Bitmovin to provide professional services around MPEG-DASH, where he currently holds the position of Chief Innovation Officer (CIO).