Great News!

As part of the Innovation Incubator 4.0 project, the results of the call for pre-implementation work have been announced.
The Investment Committee decided to grant support and co-financing to project No. WPP/4.0/2/2021, entitled “Advanced Indicators of Visual Quality”, led by Professor Mikołaj Leszczuk.

What is it actually about?
One of the challenges in transmitting a video signal is measuring the quality perceived by the recipients of the system; it is therefore very important for a content provider to detect distortions quickly. The first automatic image distortion detection systems are already on the market. Algorithms detecting such distortions have also been developed as part of the work carried out at the AGH Institute of Telecommunications (https://qoe.agh.edu.pl/indicators/).
However, current attempts to license these algorithms in business settings run into problems with the instability of the algorithms' operation when image resolution changes. We therefore plan to prepare visual quality distortion indicators for implementation: indicators ready for production conditions and adapted to changes in image resolution.
Our current video quality metrics are available and licensed as standalone download-and-run software. We would like to demonstrate the final version of the technology, taking resolution changes into account, in the form of Software as a Service (SaaS). The user will be able to stream or upload video materials to our server, and we will respond (via a browser or a dedicated API) with the results of the analysis in the form of indicator values.
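The planned SaaS interface is not yet specified, but the interaction described above (upload a video, receive indicator values) could look roughly like the following client-side sketch. All names here, including the endpoint path, the JSON fields and the indicator names, are illustrative assumptions, not the final API:

```python
import json

# In the real service, the client would first POST the video file to an
# analysis endpoint (hypothetical URL, for illustration only):
#   https://qoe.agh.edu.pl/api/analyze
# and then receive a JSON reply with per-indicator values, e.g.:
SAMPLE_RESPONSE = json.dumps({
    "video_id": "clip-001",
    "indicators": {"Blockiness": 0.12, "Blur": 0.35, "Exposure": 0.88},
})

def parse_indicators(raw: str) -> dict:
    """Extract the per-indicator values from a (hypothetical) API reply."""
    reply = json.loads(raw)
    return reply["indicators"]

values = parse_indicators(SAMPLE_RESPONSE)
for name, value in sorted(values.items()):
    print(f"{name}: {value:.2f}")
```

A browser front end would render the same JSON as a table or chart; the API path simply exposes the raw indicator values for automation.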

Interested in video services use and interaction experience? Join our event!

Invitation

For the kick-off meeting of the TUFIQoE project

Interested in learning more about video services use and interaction experience? Considering a research or science-oriented career path? Want to learn more about the status of multimedia experiences research? If you answered yes to any of these questions, then this event is for you. 😊

We are a team of researchers from NTNU Norwegian University of Science and Technology and AGH University of Science and Technology. We focus on multimedia experiences research and related topics. Together, we work in a publicly funded project dedicated to just that. *

We would be happy if you joined us during our first public project kick-off meeting. 🚀

To join the meeting please sign up using this short form.

Please find the event details below.

Where: MS Teams

Date & time: Monday, Dec 6, 2021, 9 am to 1 pm.

Agenda

  • 9:00 am–9:30 am — Opening session.
  • 9:30 am–10:00 am — Open discussion.
  • 10:00 am–11:30 am — Project overview and an open discussion.
  • 11:30 am–12:00 pm — Break.
  • 12:00 pm–1:00 pm — Career in science: our reflections and recommendations.

*The project is entitled TUFIQoE, which stands for Towards Better Understanding of Factors Influencing the Quality of Experience by More Ecologically Valid Evaluation Standards. The project has received funding from the Norwegian Financial Mechanism 2014–2021 under project 2019/34/H/ST6/00599.

Technologies Supporting the Summarization of Video Sequences

The AGH Department of Telecommunications obtained funding for the project “Technologies Supporting the Summarization of Video Sequences” under the Joint Venture of the National Center for Research and Development and the National Science Center to support the practical use of basic research results – TANGO.

The R&D (Research and Development) manager of the project is Mikołaj Leszczuk, DSc; the other people involved are Michał Grega, PhD, Lucjan Janowski, DSc, and Jakub Nawała, MSc.

The main objective of the proposed project is to raise the technological readiness of the results of the “Access to Multilingual Information and Opinions” (AMIS) baseline project to Technology Readiness Level (TRL) VII, at which it is possible to demonstrate the technology prototype under operational conditions, and to conduct pre-implementation conceptual work.

More precisely, the goal of the R&D is to create and develop techniques (algorithms and their implementations) supporting the creation of video sequence summarization systems. The technologies created will be based primarily on so-called Quality Indicators (QIs), which analyze audio-video content (video sequences) and assign numerical values (“weights”) to its individual fragments. Such QIs make it possible to determine to what extent a given fragment (e.g., a shot) may be interesting to the user, as well as to assess its audiovisual quality. The project is expected to deliver summarization technologies that take QI values into account during the process of summarizing (abstracting) the original content. The project assumes the creation and development both of the initial solutions for individual QIs and of the complete tools for creating visual summaries previously developed by the AGH Department of Telecommunications. The result of the AMIS project was a news summary system. The implementation of the project will allow for the creation of a near-commercialization technology adapted to the current requirements of audiovisual content – both as a whole and as individual selected components.
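As a rough illustration of how QI weights can drive summarization, the sketch below greedily selects the highest-weight shots that fit a target summary duration and then restores their chronological order. This is a minimal toy heuristic, not the project's actual algorithm; the shot data and weights are invented:

```python
from typing import List, NamedTuple

class Shot(NamedTuple):
    start: float     # seconds from the beginning of the source video
    duration: float  # shot length in seconds
    weight: float    # aggregated Quality Indicator (QI) score for the shot

def summarize(shots: List[Shot], budget: float) -> List[Shot]:
    """Greedily pick the highest-weight shots that fit the time budget,
    then restore chronological order for playback."""
    chosen, used = [], 0.0
    for shot in sorted(shots, key=lambda s: s.weight, reverse=True):
        if used + shot.duration <= budget:
            chosen.append(shot)
            used += shot.duration
    return sorted(chosen, key=lambda s: s.start)

# Invented example: four shots, 15-second summary budget.
shots = [Shot(0, 10, 0.2), Shot(10, 8, 0.9), Shot(18, 12, 0.6), Shot(30, 5, 0.8)]
summary = summarize(shots, budget=15)
print([s.start for s in summary])
```

Real systems would combine several QIs (interest, audiovisual quality) into each weight and may use smarter selection than this greedy pass, but the QI-to-summary flow is the same.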

At the same time, the project aims to carry out conceptual work to determine the economic use of the research results, conduct market analyses, acquire partners interested in research and development cooperation and in implementing the project results, and develop a strategy for securing intellectual property protection of the research results.

Towards Better Understanding of Factors Influencing the QoE by More Ecologically-Valid Evaluation Standards

The project “Towards Better Understanding of Factors Influencing the QoE by More Ecologically-Valid Evaluation Standards” aims to better understand which factors play a role when people use video services: why are some experiences positive, while others are negative? Why is the quality sometimes considered “really bad” and in other cases very good? Which factors play a role in this respect, and how do these factors interplay? How do users experience the quality of video services? We all use video services, and they are developed continuously. A movie from the 1980s or 1990s shown on today's TV channels looks much worse than the advertisements played in the break, even if the old movie has gone through an enhancement procedure. Many different evolutions and revolutions drive this development. One of the critical technological improvements is better compression algorithms.

Research on video quality has a long history, and its main focus is pixel quality improvement. This is reasonable, since pixel quality has the primary influence on our opinion about the quality of a service. Nevertheless, it is not the only reason we use a particular service, so we are not focusing on pixel quality and content quality only. Other factors make us complain about the quality of a nearly perfect movie watched in a home theatre designed for the best quality you can get, or almost ignore quality problems when we are on vacation with poor internet access but our favourite team is playing!

Currently, the way we receive information from users is strongly related to pixel quality. It does not include any other factors; we even explicitly ask people to ignore them! A typical subjective experiment today shows short sequences whose content is often repeated. The goal of this research is to change that. We are going to include factors like interest in the content through experiments in which users choose the content they watch. Another critical dimension is related to the content creator; therefore, another planned experiment targets the influence of the relation to the content creator: how different is the perceived quality if the content comes from our family rather than from a stranger? We also target the place where people use the service by running some experiments on users' mobile phones, which allows for as natural a watching experience as possible. The proposed experiments are new, and within the project we will work on a clear method description so that another laboratory can repeat the same or a similar experiment. Having two laboratories involved in this process is especially crucial, since comparing results between the two laboratories can reveal problems with the procedure description or with the experiment itself.

In the proposal, we described seven different experiments whose analysis will allow us to propose a model of the critical factors. The next part of the project will focus on “stressing” this model by proposing new subjective experiments. For example, if the model predicts that interest in the content is a primary influencing factor, we are going to plan a subjective experiment with a clear scale of content interest, rated by users before the study. Collecting results for sequences with different levels of user involvement will allow us to confirm or reject the content-involvement hypothesis. The data obtained from the final experiments will be used to finalize our model, which is the main result of the project.

In parallel, we run long-term studies in which we target cooperation with users for more than two years. From these observations, we will conclude which factors matter most in the long term. Again, classical experiments ignore long-term effects, and we would like to know how much we lose by asking users only once compared with long and stable cooperation.

All the experiments we conduct have detailed descriptions, and we are going to discuss both the procedures and the data analysis with the scientific community. Our goal is to make these experimental procedures more popular and used by other researchers. The ultimate outcome should be increased awareness of the newly discovered factors and, finally, better video and other services for all of us.

AGH UST Scientists Among the Beneficiaries of the GRIEG Competition for Polish-Norwegian Research Projects

The National Science Center has concluded the GRIEG competition, co-financed from Norwegian funds. Among the projects qualified for financing in the field of science, there are two led by scientists from the AGH University of Science and Technology and one in which AGH is a partner.

Projects qualified for financing, led by scientists from AGH:

  • Kinetics of Salt/Particulate Precipitation During CO2 Injection Into the Reservoir
    principal investigator: prof. dr hab. eng. Stanisław Nagy, Faculty of Drilling, Oil and Gas
    partner: University of Oslo
    co-financing grant amount: PLN 5,936,798 (EUR 1,393,647)
  • Towards Better Understanding of Factors Influencing the QoE by More Ecologically-Valid Evaluation Standards
    principal investigator: dr hab. eng. Lucjan Janowski, Faculty of Computer Science, Electronics and Telecommunications
    partner: Norwegian University of Science and Technology, Department of Information Security and Communication Technology
    co-financing grant amount: PLN 4,253,150 (EUR 998,415)

Project qualified for financing, in which AGH is a partner:

  • Study of Charm Production in Heavy Ion Collisions
    principal investigator: dr hab. Seweryn Cezary Kowalski, University of Silesia in Katowice
    partners: University of Bergen; Western Norway University of Applied Sciences; Jagiellonian University, Faculty of Physics, Astronomy and Applied Computer Science; J. Kochanowski University in Kielce; University of Wrocław, Faculty of Physics and Astronomy; Warsaw University of Technology, Faculty of Physics; University of Warsaw, Faculty of Physics; National Center for Nuclear Research; Institute of Nuclear Physics H. Niewodniczański PAN; AGH University of Science and Technology, Faculty of Computer Science, Electronics and Telecommunications; University of Oslo
    co-financing grant amount: PLN 6,212,680 (EUR 1,458,410)

In the competition, scientists submitted 305 applications for a total amount of over PLN 1.6 billion, of which experts recommended 28 projects for funding: 12 in exact and technical sciences (ST), 8 in life sciences (NZ), and 8 in humanities, social sciences and arts (HS). In total, all research teams will receive PLN 156,015,144, of which 85% is financed by the Norwegian Financial Mechanism for 2014–2021 and 15% is national co-financing. GRIEG covers all fields of science according to the NCN discipline panels, with particular emphasis on polar research and research in the field of social sciences.

The submitted applications were assessed by international panels of experts. Three panels were established, one for each group of sciences (HS, NZ, ST), and scientists from Poland and Norway could not evaluate the applications. Each project was assessed by three experts. The final list of winning projects was approved by the Fundamental Research Program Committee, and the funding decisions were made by the NCN director.

GRIEG is a competition for research projects carried out jointly by research teams from Poland and Norway. Teams must consist of at least one Polish partner acting as the partnership leader and at least one Norwegian partner. The manager of a project implemented in the GRIEG call may be a scientist holding at least a doctoral degree, employed in a Polish research institution, and the leader of the Norwegian part of the research team must be a research organization. The partnership may include research institutions, entrepreneurs and non-governmental organizations. Projects that will be implemented in the GRIEG call may last 24 or 36 months.

GRIEG is one of the three competitions financed under the 3rd edition of the EEA and Norway Grants for 2014–2021 under the “Research” program, in which the National Science Center acts as the operator responsible for basic research. The “Research” program, with an allocation of over EUR 129 million, is aimed at supporting Polish science and intensifying cooperation between science, business and society. 40% of the funds are earmarked for supporting basic research.

Objective Video Quality Assessment Method for Recognition Tasks

Nowadays, we have many metrics for overall Quality of Experience (QoE), both Full-Reference (FR) ones, like Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and No-Reference (NR) ones, like Video Quality Indicators (VQI), successfully used in video processing systems for video quality evaluation. However, they are not appropriate for recognition-task analytics in Target Recognition Video (TRV).
Therefore, the correct estimation of video processing pipeline performance remains a significant research challenge in Computer Vision (CV) tasks. There is a need for an objective video quality assessment method for recognition tasks.
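For context, the FR metrics mentioned above compare a distorted signal against its pristine reference. PSNR, the simplest of them, is just a logarithmic transform of the mean squared error. The minimal sketch below computes it over flattened pixel lists; real implementations operate on full frames, typically via NumPy or OpenCV:

```python
import math

def psnr(reference, distorted, max_value=255):
    """Peak Signal-to-Noise Ratio (in dB) between two equal-length
    pixel sequences; higher means closer to the reference."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no noise at all
    return 10 * math.log10(max_value ** 2 / mse)

# Invented 8-pixel example (8-bit values).
ref = [52, 55, 61, 59, 79, 61, 76, 61]
dist = [50, 57, 60, 60, 80, 60, 77, 60]
print(f"PSNR = {psnr(ref, dist):.2f} dB")
```

The point the text makes is that a high PSNR does not guarantee that a detector or classifier will still work on the video, which is exactly why a recognition-task-oriented metric is needed.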

In response to this need, in this project we show that it is possible to deliver a new concept of an objective video quality assessment method for recognition tasks (implemented as prototype software serving as a proof of concept). The method was trained and tested on a representative set of video sequences.

The new approach used by the software is described in this paper:

Kawa, Kamil; Leszczuk, Mikołaj; Boev, Atanas: “Survey on the State-of-the-Art Methods for Objective Video Quality Assessment in Recognition Tasks”. In: International Conference on Multimedia Communications, Services and Security, pp. 332–350, Springer, Cham, 2020.

Supported by the Huawei Innovation Research Program (HIRP).

New Lip Sync Indicator

We are currently developing a new video quality indicator (VQI) called Lip Sync VQI.

Its main purpose is to detect a lack of synchronisation between audio and video streams. This de-synchronisation usually manifests itself as the speech signal leading (or lagging) the lip movement (hence the name Lip Sync).
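One common way to estimate such an offset (not necessarily the method used by the Lip Sync VQI itself) is to cross-correlate an audio activity signal with a per-frame mouth-opening signal and pick the lag that maximises the correlation. The sketch below demonstrates this on synthetic signals in which the audio runs two frames ahead of the video:

```python
def best_lag(audio, mouth, max_lag):
    """Return the lag (in frames) that maximises the cross-correlation
    between the audio activity and mouth-opening signals.
    A negative result means the audio leads the video."""
    def corr(lag):
        pairs = [(audio[i + lag], mouth[i])
                 for i in range(len(mouth))
                 if 0 <= i + lag < len(audio)]
        return sum(a * m for a, m in pairs)
    return max(range(-max_lag, max_lag + 1), key=corr)

# Synthetic mouth-opening signal (one value per video frame) and an
# audio activity signal that is the same pattern, two frames earlier.
mouth = [0, 0, 1, 3, 1, 0, 0, 2, 4, 2, 0, 0]
audio = mouth[2:] + [0, 0]
print(best_lag(audio, mouth, max_lag=4))
```

In a real detector, the mouth-opening signal would come from face and lip tracking and the audio signal from a speech activity envelope, but the lag-search principle is the same.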

The development process is realised in close cooperation with our business partner. This ensures that the final product meets industry-grade requirements. At every stage we test the performance of the solution in a realistic environment. By doing so, we make sure to keep in line with the initial assumptions and performance requirements.

For more information you are encouraged to read:

Fernández, Ignacio Blanco; Leszczuk, Mikołaj: “Monitoring of audio visual quality by key indicators”. In: Multimedia Tools and Applications, vol. 77, no. 2, pp. 2823–2848, 2018.

Or contact us by sending an e-mail to qoe@agh.edu.pl.

Security in Trusted Scada and Smart-Grids

HORIZON 2020 (H2020)

2015 – 2017

SCISSOR designs a new generation SCADA (supervisory control and data acquisition) security monitoring framework.

In traditional industrial control systems and critical infrastructures, security was implicitly assumed through reliance on proprietary technologies (security by obscurity), physical access protection and disconnection from the Internet. The massive move, in the last decade, towards open standards and IP connectivity, the growing integration of Internet of Things technologies, and the disruptiveness of targeted cyber-attacks call for novel, designed-in cyber security means. Taking a holistic approach, SCISSOR designs a new-generation SCADA security monitoring framework comprising four layers:

  1. A monitoring layer supporting traffic probes providing programmable traffic analyses up to layer 7, new ultra low cost/energy pervasive sensing technologies, system and software integrity verification, and smart camera surveillance solutions for automatic detection and object classification;
  2. A control and coordination layer adaptively orchestrating remote probes/sensors, providing a uniform representation of monitoring data gathered from heterogeneous sources, and enforcing cryptographic data protection, including certificate-less identity/attribute-based encryption schemes;
  3. A decision and analysis layer in the form of an innovative SIEM (Security Information and Event Management) fed by both highly heterogeneous monitoring events as well as the native control processes’ signals, and supporting advanced correlation and detection methodologies;
  4. A human-machine layer devised to present the system behavior to the human end user in real time, in a simple and usable manner. SCISSOR’s framework will leverage easy-to-deploy cloud-based development and integration, and will be designed with resilience and reliability in mind (no single point of failure).

SCISSOR will be assessed via:

  • An off-field SCADA platform, to highlight its ability to detect and thwart targeted threats, and
  • An on-field, real-world deployment within a running operational smart grid, to showcase usability, viability and deployability.

The ‘Data and Multimedia Processing’ team, headed by Professor Andrzej Dziech, is leading the Work Package dedicated to monitoring layer technologies and solutions. The team will design and develop the following components in this Work Package:

  • Mechanisms for acquisition and transmission of video data from multiple cameras of different types (CCTV, infrared)
  • Automatic recognition of objects and their characteristics and positions
  • Event Analysis System – the development of requirements and methods of data analysis and decision-making with regard to the integration of information from multiple sources
  • Privacy protection of processed and transmitted digital data using digital watermarks
  • Event Analysis System – integration of algorithms, implementation and evaluation of a module to generate data for visualization and decision-making;
  • An integrated system for processing and storing data.

The AGH team will also be engaged in other activities, including:

  • System control and monitoring
  • Validation and demonstration performed on an advanced SCADA platform in a real-world deployment

The AGH team's work on video surveillance is of significant practical interest and will permit intensified cooperation with industry.

Partners:

  • ASSYSTEM ENGINEERING AND OPERATION SERVICES
  • UNIVERSITE PIERRE ET MARIE CURIE
  • SIXSQ SARL
  • CONSORZIO NAZIONALE INTERUNIVERSITARIO PER LE TELECOMUNICAZIONI
  • RADIO6ENSE SRL
  • SALZBURG RESEARCH FORSCHUNGSGESELLSCHAFT M.B.H
  • KATHOLIEKE UNIVERSITEIT LEUVEN
  • SEA SOCIETÀ ELETTRICA DI FAVIGNANA SPA

Intelligent Multimedia System for Web and IPTV Archiving. Digital Analysis and Documentation of Multimedia Content

EUREKA

2013-2015

IMCOP improves current approaches in the field of digital preservation of IPTV and Internet Web-based content. It provides a comprehensive and extensive system for analysing, documenting and presenting dynamic Web-based content and complex, multimedia objects.

Partners:

  • DGT Sp. z o.o.
  • Viaccess – Orca
  • AGH University of Science and Technology, Department of Telecommunications
  • The University of Computer Engineering and Telecommunication

Next Generation Multimedia Efficient, Scalable and Robust Delivery

EUREKA

2013-2016

The objective of MITSU (next generation MultImedia efficienT, Scalable and robUst Delivery) is to study and develop the next generation of multimedia streaming systems to be used over wireless networks.

Considering state-of-the-art technologies in the field, MITSU intends to study and implement video interoperability while minimising complexity and power consumption.

Partners:

  • Institute of Bioorganic Chemistry, Polish Academy of Sciences, Poznań Supercomputing and Networking Center (PSNC)
  • AGH University of Science and Technology
  • Adam Mickiewicz University
  • Instituto Tecnológico de Aragón (ITA)
  • Ebesis, S.L.
  • Arantia 2010, SL
  • ADTEL Sistemas de Telecomunicación SL
  • Embou Nuevas Tecnologías, S.L.
  • ARGELA Yazılım ve Bilişim Teknolojileri San. ve Tic. A.Ş (ARGELA)
  • C Tech Bilişim Teknolojileri A.Ş.
  • TeamNet World Professional Services
  • AUTONOMOUS SYSTEMS SRL