Challenges:
- The video quality experienced by a client is determined by many conflicting performance metrics, so it is difficult to optimize all of them at once.
- Existing rate adaptation algorithms are derived from high-level intuition; rather than taking all relevant factors into consideration, they use only one or a few factors as input.
Specific Method:
- To address these problems, the author proposes Machine Learning-based Adaptive Streaming over HTTP (MLASH).
- The method trains a classification model on top of any existing rate adaptation algorithm to improve it (the existing algorithm is used to label the best rate rather than to predict it directly).
- Unlike previous methods, which rely on estimated information, this approach trains on the true (observed) information.
- The author trains on the latest k segments: the first k-1 segments provide the features, and the true rate of the k-th segment serves as the label. The features are 1) estimated bandwidth (LSB, SAB, WAB, and their variation), 2) buffer size (current and maximal), 3) RTT, and 4) current video rate (see the sketch after this list).
- Two things are worth noting: 1) the model can be trained offline or online; 2) to keep the framework flexible, the server can use its preferred labeling algorithm (its own definition of the expected best rate).
- Labeling algorithms: 1) bandwidth-based, 2) buffer-considered, 3) smooth rate adaptation (a purely buffer-based labeling is absent).
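The sketch below illustrates the training pipeline described above, under my own assumptions: the exact feature layout, the bandwidth-based labeling rule, the window size used for WAB, the bitrate ladder, and the choice of a random-forest classifier are all illustrative and not necessarily the paper's exact design.

```python
# Illustrative MLASH-style training sketch (not the paper's exact code).
from dataclasses import dataclass
from typing import List
import numpy as np
from sklearn.ensemble import RandomForestClassifier

RATES_KBPS = [350, 600, 1000, 2000, 3000]   # hypothetical bitrate ladder

@dataclass
class SegmentStats:
    bandwidth_kbps: float   # true throughput observed while downloading the segment
    buffer_sec: float       # buffer occupancy after the download
    max_buffer_sec: float
    rtt_ms: float
    rate_kbps: float        # bitrate requested for this segment

def features(history: List[SegmentStats]) -> List[float]:
    """Features from the last k-1 segments: LSB, SAB, WAB, bandwidth
    variation, current/max buffer, RTT, and current video rate."""
    bw = np.array([s.bandwidth_kbps for s in history])
    lsb = bw[-1]          # last-segment bandwidth
    sab = bw.mean()       # segment-averaged bandwidth
    wab = bw[-3:].mean()  # window-averaged bandwidth (window of 3, an assumption)
    var = bw.std()
    last = history[-1]
    return [lsb, sab, wab, var,
            last.buffer_sec, last.max_buffer_sec, last.rtt_ms, last.rate_kbps]

def label_bandwidth_based(true_bw_kbps: float) -> int:
    """Bandwidth-based labeling: index of the highest rate not exceeding
    the k-th segment's true bandwidth (one of the labeling options)."""
    feasible = [i for i, r in enumerate(RATES_KBPS) if r <= true_bw_kbps]
    return feasible[-1] if feasible else 0

def build_dataset(trace: List[SegmentStats], k: int):
    """Slide a window of k segments over a trace: the first k-1 give the
    features, the true bandwidth of the k-th gives the label."""
    X, y = [], []
    for i in range(len(trace) - k + 1):
        window = trace[i:i + k]
        X.append(features(window[:-1]))
        y.append(label_bandwidth_based(window[-1].bandwidth_kbps))
    return np.array(X), np.array(y)

# Offline training on logged traces; the same model could be refit online:
# X, y = build_dataset(logged_trace, k=5)
# clf = RandomForestClassifier(n_estimators=100).fit(X, y)
# next_rate = RATES_KBPS[clf.predict([features(recent_segments)])[0]]
```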
Evaluation:
- The traces come from the MMSys paper "Measuring DASH streaming performance from the end users perspective using neubot". The buffer size is set to 10 seconds and each segment is about 2 seconds long.
- The author compares MLASH with 4 methods: 1) bandwidth-based adaptation, 2) buffer-based adaptation, 3) rate smoothness-based adaptation, 4) buffer-based adaptation (d sec). The first 3 methods use the three measures LSB, SAB, and WAB as the bandwidth estimate.
- Because the buffer-based approach does not consider bandwidth, MLASH is combined in turn with each of the first 3 methods and compared against the 4 baselines above (3 experiments).
- The metrics are average rate, average prediction error, rebuffering rate, and overestimation rate (a rough sketch of these metrics follows this list).
- The last experiment plots prediction error against the amount of training data for the 3 methods, to explore the convergence of the training process.
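For concreteness, here is one way the four evaluation metrics could be computed from a playback log. These definitions (e.g., counting an empty buffer per segment, relative prediction error) are my own assumptions; the paper's exact formulas may differ.

```python
# Illustrative metric computation for the evaluation (assumed definitions).
from typing import List
import numpy as np

def average_rate(chosen_kbps: List[float]) -> float:
    """Mean bitrate actually selected over the session."""
    return float(np.mean(chosen_kbps))

def average_prediction_error(predicted_bw: List[float],
                             true_bw: List[float]) -> float:
    """Mean relative error between estimated and observed bandwidth."""
    p, t = np.array(predicted_bw), np.array(true_bw)
    return float(np.mean(np.abs(p - t) / t))

def rebuffer_rate(buffer_sec: List[float]) -> float:
    """Fraction of segments downloaded with an empty playout buffer."""
    return float(np.mean(np.array(buffer_sec) <= 0.0))

def overestimation_rate(chosen_kbps: List[float],
                        true_bw: List[float]) -> float:
    """Fraction of segments whose chosen rate exceeded the true bandwidth."""
    c, t = np.array(chosen_kbps), np.array(true_bw)
    return float(np.mean(c > t))
```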
Conclusion:
The paper is clearly organized, and the challenges and methods are presented in an orderly way. However, in my view the innovation is limited.