Tuesday, September 13, 2016

Open Positions at Bitmovin

About working with Bitmovin

Bitmovin, a YCombinator company, is a fast-growing, privately owned technology leader located in Klagenfurt am Wörthersee, Austria, and in California. Our company leads the research and development of cutting-edge multimedia services such as Dynamic Adaptive Streaming over HTTP (DASH) and HTTP Live Streaming (HLS), multimedia systems, and multimedia cloud services.
We believe our employees are the most valuable assets of our company. They drive Bitmovin’s success with their knowledge, experience, and passion. Therefore, we give our employees a high degree of freedom to initiate projects and take on responsibility, while paying a very competitive, above-average salary.
Working at Bitmovin is international, fast-paced, fun and challenging. We’re looking for talented, passionate and inspired people who want to change the way media is consumed online. Join us to develop, sell and market the world’s fastest video encoding service.

Sales and Marketing

Software and Development

Admin and Finance

Wednesday, August 31, 2016

IEEE Computer "Social Computing" Column: Call for Papers

I’m looking for forward-looking and thought-provoking articles for the Social Computing column within the IEEE Computer magazine. As you know, IEEE Computer is the flagship publication of the IEEE Computer Society (CS), which is distributed to all members (CS is the biggest society within IEEE).

The topics are related to the Special Technical Community on Social Networking (STCSN); please submit column articles directly to me! The guidelines are provided below, and no specific template is required (plain text in an editable Word file is fine).

An overview of previous columns can be found here. If you have any questions or comments, don’t hesitate to contact me.


Guidelines for Computer Column Contributions

We encourage column editors to include contributions solicited from their colleagues to provide the six installments for their bimonthly Computer columns.

The target length for each column is 2.0-2.5 magazine pages, or about 1,500-1,900 words; each figure or table counts as 300 words, and we prefer to include appropriate graphic elements when they are available. The maximum length is 2,200 words if no art is included.

Editors are asked to remind contributors that columns do not include a bibliography or an acknowledgments section. References or URLs can be inserted inline in the text if needed.

Submitted columns should include the article title, the author(s)’ name(s) and affiliation(s), and a brief bio that also provides email contact information:

//First name/last name// is a //academic title, institution, or business title, company//. Contact him at //email address.//

Image guidelines

To ensure the quality needed for print publication, we need an editable vector art file (for example, an Illustrator or Visio file) for each line drawing. For each photo, we need a 4-color electronic image at 300 dpi resolution, preferably in .tif, .png, or .jpeg format. We cannot use derivative images or images embedded in a document.

In our article layouts, the figures are usually at least 4 inches (24 picas) wide. If you prefer to send screenshots, they should be approximately 12 inches wide. Our production artist can reduce these low-resolution images to 4 inches in Photoshop and process them to achieve the required resolution (reducing a 12-inch-wide screenshot captured at a typical screen resolution of roughly 100 dpi to 4 inches yields approximately 300 dpi). If your original images are smaller than 12 inches, using a large monitor set at its highest resolution will help achieve a better screenshot. No compression is necessary.


Tuesday, August 16, 2016

DASH-IF Academic Track


The MPEG-DASH standard has gained huge momentum within both industry and academia. The DASH-IF provides, among other things, interoperability guidelines and test vectors, closing the gap towards interoperable deployments. In recent years, we have seen a tremendous number of research papers addressing various issues in and around DASH and, thus, the DASH-IF has established an academic track to:
  • identify research communities working in the area of DASH
  • create awareness of DASH-IF material and promote it within the academic community, and
  • solicit research within and collect results from the academic community
As a first step, the DASH-IF created the “Excellence in DASH Award” at ACM MMSys 2016 and is proud to announce the results. The award winners were selected by members of the DASH-IF and, instead of ranking a first, second, and third place, the DASH-IF decided to give the first prize to all three papers, which are as follows: “ABMA+: lightweight and efficient algorithm for HTTP adaptive streaming” by Andrzej Beben, Piotr Wiśniewski, Jordi Mongay Batalla, Piotr Krawiec (Warsaw University of Technology, Poland); “Delivering Stable High-Quality Video: An SDN Architecture with DASH Assisting Network Elements” by Jan Willem Martin Kleinrouweler, Sergio Cabrero, Pablo Cesar (Centrum Wiskunde & Informatica, Netherlands); and “SQUAD: A Spectrum-based Quality Adaptation for Dynamic Adaptive Streaming over HTTP” by Cong Wang, Amr Rizk, Michael Zink (University of Massachusetts Amherst, USA). (See pictures here.)

Academics who want to join the DASH-IF Academic Track are invited to subscribe to the public email reflector dashifat@lists.aau.at via https://lists.aau.at/mailman/listinfo/dashifat.

Everyone is welcome - let's do something! For any comments or questions, please let me know.

Another related activity was the IEEE ICME 2016 Bitmovin Grand Challenge on DASH, which is summarized below. We'd like to thank all authors who submitted their work to the grand challenge, and we'd like to congratulate the winning team!


Tuesday, August 2, 2016

Review of ACM MMSys 2016 & NOSSDAV, MoVid, and MMVE

The 7th ACM International Conference on Multimedia Systems (MMSys 2016) was successfully held in Klagenfurt am Wörthersee, Austria from May 10-13, 2016 (http://mmsys2016.itec.aau.at) with the co-located workshops NOSSDAV, MoVid, and MMVE.
We'd like to thank our Gold Sponsors: Adobe and YouTube.
The ACM Multimedia Systems Conference (MMSys) provides a forum for researchers to present and share their latest research findings in multimedia systems. While research on specific aspects of multimedia systems is regularly published in the various proceedings and transactions of the networking, operating system, real-time system, and database communities, MMSys aims to cut across these domains in the context of multimedia data types. This provides a unique opportunity to view the intersections and the interplay of the various approaches and solutions developed across these domains to deal with multimedia data types.
This year’s MMSys introduced a new format referred to as overview talks, which were held on May 10, starting in the afternoon and concluding in the evening with a get-together event at the conference venue. The following overview talks were given at MMSys: “Using Games to solve Challenging Multimedia Problems” by Oge Marques, ACM Distinguished Speaker, FAU, USA; “More Juice Less Bits: MediaMelon Content Aware Streaming” by Ali C. Begen, MediaMelon Inc., USA, Ozyegin University, Turkey, IEEE ComSoc Distinguished Lecturer; “MPEG-DASH Spatial Relationship Description” by Omar Aziz Niamut, TNO, The Netherlands; “Mulsemedia: Novelty or Reinvention?” by Gheorghita Ghinea, Brunel University, UK; and “Smart Camera Systems” by Bernhard Rinner, Alpen-Adria-Universität Klagenfurt, Austria.
ACM MMSys typically comes with keynotes from experts and leaders in industry and academia. The first keynote, “Ten Thousand Channels to Ten Million Viewers: Technologies for Scaling Video Delivery over IP” by Neill A. Kipp, Comcast VIPER, USA, addressed issues with video delivery at scale, whereas the second keynote, entitled “Advances and Trends in Augmented Reality Systems” by Dieter Schmalstieg, Graz University of Technology, Austria, was related to one of the special sessions. The third keynote, “5G enabling the Tactile Internet” by Frank Fitzek, Technische Universität Dresden, Germany, provided insights into next-generation mobile networks.
In general, ACM MMSys 2016 attracted 71 full paper submissions, of which 20 were finally accepted into the program after careful selection by the experienced members of the technical program committee. In addition to the full papers, MMSys 2016 hosted two special sessions, one on augmented reality and another on media synchronization. A demo session gave researchers, engineers, and scientists the opportunity to showcase their research prototypes, systems, and applications to MMSys attendees. An important aspect of MMSys is the dataset track, which enables reproducible research thanks to the availability of common datasets across different application areas. In particular, the dataset track is an opportunity for researchers and practitioners to make their work available and citable.
ACM MMSys hosted three workshops: the 26th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV), the 8th ACM Workshop on Mobile Video (MoVid), and the 8th ACM Workshop on Massively Multiuser Virtual Environments (MMVE). They operate with their own committees and review processes but benefit from a single registration fee for all events co-located with MMSys.
The 7th ACM MMSys issued the following awards:
  • a best paper award,
  • a best student paper award, and
  • for the first time the excellence in DASH award sponsored by the DASH-IF.
The best paper award went to “Distributed Rate Allocation in Switch-Based Multiparty Videoconference” by Stefano D’Aronco (EPFL), Sergio Mena (Cisco Systems), and Pascal Frossard (EPFL), and the best student paper award went to “Network-assisted Control for HTTP Adaptive Video Streaming” by Giuseppe Cofano (Politecnico di Bari, Italy), Luca De Cicco (Telecom SudParis, France), Thomas Zinner (University of Würzburg, Germany), Anh Nguyen-Ngoc (University of Würzburg, Germany), Phuoc Tran-Gia (University of Würzburg, Germany), and Saverio Mascolo (Politecnico di Bari, Italy).
The Excellence in DASH Award winners were selected by members of the DASH-IF and, instead of ranking a first, second, and third place, the DASH-IF decided to give the first prize to all three papers, which are as follows: “ABMA+: lightweight and efficient algorithm for HTTP adaptive streaming” by Andrzej Beben, Piotr Wiśniewski, Jordi Mongay Batalla, Piotr Krawiec (Warsaw University of Technology, Poland); “Delivering Stable High-Quality Video: An SDN Architecture with DASH Assisting Network Elements” by Jan Willem Martin Kleinrouweler, Sergio Cabrero, Pablo Cesar (Centrum Wiskunde & Informatica, Netherlands); and “SQUAD: A Spectrum-based Quality Adaptation for Dynamic Adaptive Streaming over HTTP” by Cong Wang, Amr Rizk, Michael Zink (University of Massachusetts Amherst, USA).
We would like to congratulate all award winners of ACM MMSys 2016.
Once more, we would like to thank our gold sponsors Adobe and YouTube for their excellent support. In this context, it is worth mentioning the social events, including coffee breaks, lunches, the get-together on the first evening, the welcome BBQ on the second evening, and the gala dinner on the third evening. These side events are just as important as the technical papers, demos, and datasets, as they allow for networking, discussions, and possible future collaborations among conference attendees.
Finally, we’re happy to announce next year’s ACM MMSys (and NOSSDAV, MoVid, and MMVE) in Taiwan with Sheng-Wei (Kuan-Ta) Chen from Academia Sinica.

Wednesday, July 27, 2016

MPEG Survey on Virtual Reality

(c) Bitmovin
As mentioned in my previous blog post, virtual reality is becoming a hot topic across industry (and also academia) and has now reached standards developing organizations like MPEG. MPEG established an Ad-hoc Group on MPEG-VR (open to everyone), which published a survey on virtual reality. The survey is still open until August 18, 2016 and is available here...


In particular, MPEG is seeking feedback from content and service providers as well as device manufacturers. Please help to shape the future!

(c) Bitmovin

Within Bitmovin, we're working on this topic in a Web context and a demo is available here. Additionally, here's a live 4K VR/360 demo using HEVC: http://bitmovin.com/public-demos/360andVR/ (note: requires HEVC hardware support on your device).

Tuesday, July 19, 2016

MPEG news: a report from the 115th meeting, Geneva, Switzerland


The original blog post can be found at the Bitmovin Techblog and has been updated here to focus on and highlight research aspects. Additionally, this version of the blog post will also be posted at ACM SIGMM Records.
MPEG News Archive
The 115th MPEG meeting was held in Geneva, Switzerland and its press release highlights the following aspects:
  • MPEG issues Genomic Information Compression and Storage joint Call for Proposals in conjunction with ISO/TC 276/WG 5
  • Plug-in free decoding of 3D objects within Web browsers
  • MPEG-H 3D Audio AMD 3 reaches FDAM status
  • Common Media Application Format for Dynamic Adaptive Streaming Applications
  • 4th edition of AVC/HEVC file format
In this blog post, however, I will cover topics specifically relevant for adaptive media streaming, namely:
  • Recent developments in MPEG-DASH
  • Common media application format (CMAF)
  • MPEG-VR (virtual reality)
  • The MPEG roadmap/vision for the future.

MPEG-DASH Server and Network assisted DASH (SAND): ISO/IEC 23009-5

Part 5 of MPEG-DASH, referred to as SAND (server and network-assisted DASH), has reached FDIS stage. This work item started some time ago at a public MPEG workshop during the 105th MPEG meeting in Vienna. The goal of this part of MPEG-DASH is to enhance the delivery of DASH content by introducing messages between DASH clients and network elements, or between various network elements, for the purpose of improving the efficiency of streaming sessions by providing information about real-time operational characteristics of networks, servers, proxies, caches, and CDNs, as well as DASH clients' performance and status. In particular, it defines the following:
  1. The SAND architecture which identifies the SAND network elements and the nature of SAND messages exchanged among them.
  2. The semantics of SAND messages exchanged between the network elements present in the SAND architecture.
  3. An encoding scheme for the SAND messages.
  4. The minimum requirements to implement a SAND message delivery protocol.
The way that this information is to be utilized is deliberately not defined within the standard and left open for (industry) competition (or other standards developing organizations). In any case, there’s plenty of room for research activities around the topic of SAND, specifically:
  • A main issue is the evaluation of MPEG-DASH SAND in terms of qualitative and quantitative improvements with respect to QoS/QoE. Some papers are available already and have been published within ACM MMSys 2016.
  • Another topic of interest includes an analysis regarding scalability and possible overhead; in other words, I'm wondering whether it's worth using SAND to improve DASH.
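To make the kind of client-to-network information exchange described above more tangible, here is a minimal TypeScript sketch of a DASH client reporting playback status to a DASH-aware network element (DANE). The endpoint URL, the JSON message shape, and the field names are illustrative assumptions for this sketch and not the normative SAND message syntax or encoding.

```typescript
// Illustrative sketch only: the endpoint, message shape, and JSON encoding are
// assumptions and do not reflect the normative SAND (ISO/IEC 23009-5) syntax.

interface PlaybackStatus {
  bufferLevelMs: number;          // current buffer occupancy of the client
  measuredThroughputKbps: number; // throughput observed for the last segment
  representationId: string;       // currently selected representation
}

// Hypothetical reporting endpoint exposed by a DANE (DASH-aware network element).
const DANE_STATUS_ENDPOINT = "https://dane.example.com/sand/status";

async function reportStatus(status: PlaybackStatus): Promise<void> {
  // Status messages travel "uplink" from the client to the DANE; the DANE may
  // respond with "downlink" messages, e.g., hints about available resources.
  await fetch(DANE_STATUS_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(status),
  });
}

// Example: report once after each downloaded segment.
reportStatus({ bufferLevelMs: 12000, measuredThroughputKbps: 4300, representationId: "video-1080p" });
```

How a DANE acts on such reports is exactly the part the standard leaves open, which is where the research questions listed above come in.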

MPEG-DASH with Server Push and WebSockets: ISO/IEC 23009-6

Part 6 of MPEG-DASH has reached DIS stage and deals with server push and WebSockets, i.e., it specifies the carriage of MPEG-DASH media presentations over full-duplex HTTP-compatible protocols, particularly HTTP/2 and WebSocket. The specification comes with a set of generic definitions for which bindings are defined, allowing its usage with various protocols; currently, the specification supports HTTP/2 and WebSocket.

For the former, the push policy needs to be defined as an HTTP header extension, whereas the latter requires the definition of a DASH subprotocol. Luckily, these are the preferred extension mechanisms for both HTTP/2 and WebSocket and, thus, interoperability is provided. Whether or not the industry will adopt these extensions cannot be answered right now, but I would recommend keeping an eye on this, and there are certainly multiple research topics worth exploring in the future.
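As a rough illustration of how the WebSocket binding could look from a client's perspective, consider the sketch below. The subprotocol token, the message format, and the push policy value are placeholders invented for this example; the normative definitions are given in ISO/IEC 23009-6.

```typescript
// Illustrative sketch only: subprotocol token, message format, and push policy
// are placeholders, not the normative bindings of ISO/IEC 23009-6.

const ws = new WebSocket("wss://streaming.example.com/dash", ["dash-placeholder-subprotocol"]);
ws.binaryType = "arraybuffer"; // receive pushed segments as ArrayBuffers

ws.onopen = () => {
  // Over the full-duplex channel, the client can request the MPD and ask the
  // server to proactively push the following segments in a single round trip.
  ws.send(JSON.stringify({
    type: "request",
    url: "manifest.mpd",
    pushPolicy: "push-next-3-segments", // placeholder policy value
  }));
};

ws.onmessage = (event: MessageEvent) => {
  if (event.data instanceof ArrayBuffer) {
    // Pushed segments arrive as binary frames; a real client would append them
    // to a Media Source Extensions SourceBuffer.
    console.log(`received pushed segment of ${event.data.byteLength} bytes`);
  } else {
    console.log("received control message:", event.data);
  }
};
```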

An interesting aspect for the research community would be to quantify the utility of using push methods within dynamic adaptive environments in terms of QoE and start-up delay. Some papers provide preliminary answers but a comprehensive evaluation is missing.

To conclude the recent MPEG-DASH developments, the DASH-IF recently established the Excellence in DASH Award at ACM MMSys’16 and the winners are presented here (including some of the recent developments described in this blog post).

Common Media Application Format (CMAF): ISO/IEC 23000-19

The goal of CMAF is to enable application consortia to reference a single MPEG specification (i.e., a “common media format”) that would allow a single media encoding to be used across many applications and devices. Therefore, CMAF defines the encoding and packaging of segmented media objects for delivery and decoding on end-user devices in adaptive multimedia presentations. This sounds very familiar and reminds us a bit of what the DASH-IF is doing with its interoperability points. One of the goals of CMAF is to integrate HLS into MPEG-DASH, which is backed up by this WWDC video where Apple announced support for fragmented MP4 in HLS. The streaming of this announcement is only available in Safari and through the WWDC app, but Bitmovin has shown that it also works on macOS and iOS 10 and above and, for PC users, in all recent browser versions including Edge, Firefox, Chrome, and (of course) Safari.
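To illustrate the “single encoding, many applications” idea, the sketch below references one and the same hypothetical set of CMAF-style fMP4 files (init.mp4 plus numbered .m4s segments) from both an abridged DASH MPD and an abridged HLS media playlist. Both manifests are heavily simplified for readability and are not guaranteed to validate against the respective specifications.

```typescript
// One hypothetical CMAF-style encoding: init.mp4 + seg_1.m4s, seg_2.m4s, ...
const segmentCount = 3;
const segmentDurationSec = 4;

// Abridged DASH MPD referencing the fMP4 segments via a SegmentTemplate.
const mpd = `<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT${segmentCount * segmentDurationSec}S">
  <Period>
    <AdaptationSet mimeType="video/mp4" codecs="avc1.64001f">
      <SegmentTemplate initialization="init.mp4" media="seg_$Number$.m4s"
                       duration="${segmentDurationSec}" startNumber="1"/>
      <Representation id="720p" bandwidth="3000000" width="1280" height="720"/>
    </AdaptationSet>
  </Period>
</MPD>`;

// Abridged HLS media playlist referencing the very same files
// (EXT-X-MAP points at the shared initialization segment).
const segmentLines = Array.from({ length: segmentCount }, (_, i) =>
  `#EXTINF:${segmentDurationSec}.0,\nseg_${i + 1}.m4s`).join("\n");

const hlsPlaylist = `#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:${segmentDurationSec}
#EXT-X-MAP:URI="init.mp4"
${segmentLines}
#EXT-X-ENDLIST`;

console.log(mpd);
console.log(hlsPlaylist);
```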

MPEG Virtual Reality

Virtual reality is becoming a hot topic across industry (and also academia) and has reached standards developing organizations like MPEG. Therefore, MPEG established an ad-hoc group (with an email reflector) to develop the roadmap required for MPEG-VR. Others, such as DVB, DASH-IF, and QUALINET, have also started working on this (and maybe many others: W3C, 3GPP). In any case, it shows that there’s massive interest in this topic, and Bitmovin has already shown what can be done in this area within today’s Web environments. Obviously, adaptive streaming is an important aspect of VR applications, with many research questions to be addressed in the (near) future. A first step towards a concrete solution is the Omnidirectional Media Application Format (OMAF), which is currently at working draft stage (details to be provided in a future blog post).

The research aspects cover a wide range of activities including, but not limited to, content capturing, content representation, streaming/network optimization, consumption, and QoE.

MPEG roadmap/vision

At its 115th meeting, MPEG published a document that lays out its medium-term strategic standardization roadmap. The goal of this document is to collect feedback from anyone in professional and B2B industries dealing with media, specifically but not limited to broadcasting, content and service provision, media equipment manufacturing, and the telecommunications industry. The roadmap is depicted below and further described in the document available here. Please note that “360 AV” in the figure below also covers VR, although this is unfortunately not (yet) made explicit in the figure. In any case, the roadmap points out the aspects to be addressed by MPEG in the future, which are relevant for both industry and academia.


The next MPEG meeting will be held in Chengdu, October 17-21, 2016.

Wednesday, December 9, 2015

Real-Time Entertainment now accounts for >70% of the Internet Traffic

Sandvine's Global Internet Phenomena Report (December 2015 edition) reveals that real-time entertainment (i.e., streaming video and audio) traffic now accounts for more than 70% of North American downstream traffic in the peak evening hours on fixed access networks (see Figure 1). Interestingly, five years ago it accounted for less than 35%.

Netflix is mainly responsible for this with a share of >37% (i.e., more than the total five years ago), although it already had a big share in 2011 (~32%) and hasn't "improved" that much since. The second-biggest share comes from YouTube with roughly 18%.

I'm using these figures within my slides to motivate that streaming video and audio is a huge market, opening up a lot of opportunities for research and innovation, and it's interesting to see how the Internet is being used. In most of these cases, the Internet is used as is, without any bandwidth guarantees, and clients adapt themselves to the bandwidth that is available. Service providers offer the content in multiple versions (e.g., different bitrates, resolutions, etc.), each version is segmented, and clients can adapt across these versions both at the beginning of and during the session. This principle is known as over-the-top adaptive video streaming, and a standardized representation format is available, known as Dynamic Adaptive Streaming over HTTP (DASH), under ISO/IEC 23009. Note that the adaptation logic is not part of the standard, which opens up a bunch of possibilities in terms of research and engineering.
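Since the adaptation logic is left to the client, a very simple (and deliberately naive) throughput-based strategy could look like the following sketch; the bitrate ladder and safety margin are made-up values, and real players combine throughput estimates with buffer levels and other signals.

```typescript
// Naive throughput-based adaptation sketch; the DASH standard itself does not
// prescribe any adaptation logic. Values below are illustrative only.

const bitrateLadderKbps = [400, 1000, 2500, 5000, 8000]; // available representations

function selectRepresentation(measuredThroughputKbps: number, safetyMargin = 0.8): number {
  // Spend only a fraction of the measured throughput to absorb short-term
  // bandwidth fluctuations, then pick the highest representation that fits.
  const budget = measuredThroughputKbps * safetyMargin;
  let selected = bitrateLadderKbps[0]; // always fall back to the lowest rendition
  for (const bitrate of bitrateLadderKbps) {
    if (bitrate <= budget) selected = bitrate;
  }
  return selected;
}

// Example: ~3.2 Mbit/s measured throughput selects the 2500 kbit/s representation.
console.log(selectRepresentation(3200));
```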

Both Netflix and YouTube adopted the DASH format, which is now natively supported by modern Web browsers thanks to the HTML5 Media Source Extensions (MSE); even digital rights management is possible thanks to the Encrypted Media Extensions (EME). All one needs is a client implementation that is compliant with the standard, which is the easy part since the standard is freely available, and that adapts to the dynamically changing usage context while maximizing the Quality of Experience (QoE), which is the difficult part. That's why we at Bitmovin decided to set up a grand challenge at IEEE ICME 2016 in Seattle, USA with the aim of soliciting contributions addressing end-to-end delivery aspects that improve the QoE while optimally utilizing the available network infrastructure and its associated costs. This includes the content preparation for DASH, the content delivery within existing networks, and the client implementations. Please feel free to contribute to this exciting problem, and if you have further questions or comments, please contact us here.
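As a minimal sketch of what “natively supported thanks to MSE” means in practice, the TypeScript snippet below appends a few DASH/fMP4 segments to an HTML5 video element; the segment URLs and the codec string are placeholders, and a real player would of course select segments via its adaptation logic and handle errors.

```typescript
// Minimal MSE playback sketch; segment URLs and codec string are placeholders.

const video = document.querySelector("video") as HTMLVideoElement;
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", async () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001f"');

  // Append the initialization segment followed by the media segments in order;
  // after each append, wait until the SourceBuffer has finished updating.
  for (const url of ["init.mp4", "seg_1.m4s", "seg_2.m4s"]) {
    const segment = await (await fetch(url)).arrayBuffer();
    sourceBuffer.appendBuffer(segment);
    await new Promise<void>((resolve) =>
      sourceBuffer.addEventListener("updateend", () => resolve(), { once: true })
    );
  }
  mediaSource.endOfStream();
});
```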