Source(s) Of Funding (Name of the Call, Registration Number)
National Science Centre (GRIEG, 2019/34/H/ST6/00599)
Amount of Funding
4 253 150 PLN
Start-End Year
2020-2024
The project “Towards Better Understanding of Factors Influencing the QoE by More Ecologically-Valid Evaluation Standards” aimed to better understand which factors play a role when people use video services: why are some experiences positive while others are negative? Why is the quality sometimes considered “really bad” and in other cases very good? Which factors influence this perception, and how do they interplay? How do users experience the quality of video services?
Everyone uses video services, and these services are continuously evolving. A movie from the 80s or 90s played on today’s TV channel looks much worse than the advertisement shown during the break, even if the old movie has undergone an enhancement procedure. Many different evolutions and revolutions have driven this development, and one of the critical technological improvements has been better compression algorithms.
Research on video quality has a long history, and its main focus has been pixel quality improvement. This approach is reasonable, since pixel quality strongly influences our opinion about the quality of a service. However, it is not the only reason people use a particular service, so we did not focus solely on pixel quality and content quality. Other factors can make us complain about the quality of an almost perfect movie watched in a home theatre designed to display the highest quality possible, or nearly ignore quality problems when we are on vacation with poor internet access but our favourite team is playing!

Contact Data
- AGH: dr hab. inż. Lucjan Janowski, ljanowsk@agh.edu.pl
- NTNU: prof. dr Katrien De Moor, katriend@ntnu.no
Traditionally, the way we obtain information from users has been strongly tied to pixel quality, excluding other factors, even when people are explicitly asked to ignore them! A typical subjective experiment involves showing short sequences, often with repeating content. The goal of this project was to change that. We included factors such as interest in the content by designing experiments in which users selected the content they watched. Another critical dimension relates to the content creator; therefore, another planned experiment focused on the influence of the user’s relationship with the content creator. How different is the perception of quality if the content comes from a family member rather than a stranger? We also studied the environment in which people use video services, running experiments on users’ mobile phones to allow for the most natural viewing experience possible.
The proposed experiments were new, and within the project we developed a clear method description so that other laboratories could replicate the same or similar experiments. Involving two laboratories in this process was especially crucial, since comparing results between them helped identify issues with the procedure description or with the experiment itself.
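The inter-laboratory comparison mentioned above can be illustrated as a simple consistency check: if both labs obtain similar mean opinion scores (MOS) for the same test conditions, the written procedure is likely reproducible. The following is a minimal sketch with entirely hypothetical MOS values; a real comparison would work on full rating distributions and use formal statistical tests.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical mean opinion scores for the same six test
# conditions, measured independently at two laboratories.
mos_lab_a = [4.2, 3.8, 2.9, 2.1, 3.5, 4.6]
mos_lab_b = [4.0, 3.9, 3.1, 2.0, 3.3, 4.5]

# A high correlation suggests both labs rank the conditions
# consistently; a nonzero mean difference reveals a systematic
# bias between the labs (e.g. stricter raters in one lab).
r = pearson(mos_lab_a, mos_lab_b)
bias = mean(mos_lab_a) - mean(mos_lab_b)

print(f"inter-lab correlation r = {r:.3f}, bias = {bias:+.3f}")
```

A low correlation or a large bias would point at an ambiguity in the procedure description that replication is meant to expose.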
In the project proposal, we described seven different experiments, whose analysis allowed us to develop a model of the key influencing factors. The next phase of the project focused on “stressing” this model by designing new subjective experiments. For example, if the model predicted that interest in the content was a primary influencing factor, we planned a subjective experiment where users ranked their interest in the content before the study. Collecting results for sequences with different levels of user involvement allowed us to confirm or reject the content involvement hypothesis. The data obtained from the final experiments were used to refine our model, which was the main outcome of the project.
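The involvement check described above can be sketched as follows: group quality ratings by the interest rank a participant assigned before the study, and compare the group means. All numbers below are hypothetical, and a real analysis would apply a proper significance test rather than just comparing means.

```python
from statistics import mean

# Hypothetical per-sequence records for one impairment level:
# (pre-study interest group, quality rating on a 1-5 scale).
ratings = [
    ("high", 4.1), ("high", 4.4), ("high", 3.9), ("high", 4.2),
    ("low", 3.2), ("low", 3.6), ("low", 3.0), ("low", 3.4),
]

high = [r for g, r in ratings if g == "high"]
low = [r for g, r in ratings if g == "low"]

# If involvement matters, identically impaired content should be
# judged differently depending on the viewer's interest in it.
delta = mean(high) - mean(low)
print(f"mean rating difference (high - low interest): {delta:.2f}")
```

A difference close to zero across impairment levels would count as evidence against the content involvement hypothesis.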
In parallel, we conducted long-term studies, aiming for user cooperation lasting more than two years. From these observations, we concluded which factors matter the most in the long run. Again, classical experiments ignore long-term effects, and we sought to determine how much insight is lost when users are asked for feedback only once, compared to long and stable cooperation.
All the experiments we conducted have detailed descriptions, and we discussed both the procedures and the data analysis with the scientific community. Our goal was to make these experimental procedures more popular and to encourage other researchers to use them. The ultimate objective was to raise awareness of the newly discovered factors and, ultimately, to contribute to the development of better video and other digital services for everyone.