Many metrics now exist for overall Quality of Experience (QoE), both Full-Reference (FR) ones, such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), and No-Reference (NR) ones, such as Video Quality Indicators (VQI), and they are successfully used in video processing systems for video quality evaluation. However, they are not appropriate for analyzing recognition tasks performed on Target Recognition Video (TRV).
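To make the distinction concrete, a minimal sketch of what an FR metric such as PSNR computes is shown below. This is an illustrative implementation on toy grayscale frames, not the evaluation code used in this project; the frame sizes and noise model are assumptions for the example.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio (dB) between a reference and a distorted frame."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy 8-bit grayscale frames: a reference and a noisy copy of it.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noise = rng.integers(-5, 6, size=ref.shape)
noisy = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, noisy):.1f} dB")
```

Note that PSNR needs the pristine reference frame, which is why FR metrics measure perceptual fidelity rather than how well a detector or classifier will perform on the distorted video; that gap is what motivates a recognition-oriented assessment method.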
Therefore, correctly estimating the performance of a video processing pipeline remains a significant research challenge in Computer Vision (CV) tasks. There is a need for an objective video quality assessment method suited to recognition tasks.
In response to this need, this project shows that it is possible to deliver a new objective video quality assessment method for recognition tasks, implemented as prototype software serving as a proof of concept. The method was trained and tested on a representative set of video sequences.
This paper describes the novel approach used by the software.
Supported by the Huawei Innovation Research Program (HIRP).