Great News!

As part of the Innovation Incubator 4.0 project, the results of the call for the implementation of pre-implementation work have been announced.
The Investment Committee decided to grant support and co-finance the project led by Professor Mikołaj Leszczuk, no. WPP/4.0/2/2021, entitled “Advanced Indicators of Visual Quality”.

What is it actually about?
One of the problems in transmitting a video signal is measuring the quality perceived by the recipients of the system; it is therefore very important for a content provider to detect distortions quickly. The first automatic image distortion detection systems are already on the market. Algorithms detecting such distortions have also been developed as part of the work carried out at the AGH Institute of Telecommunications (https://qoe.agh.edu.pl/indicators/).
However, current attempts to license these algorithms in business settings run into problems caused by the unstable operation of the algorithms when the image resolution changes. The project will therefore prepare visual quality distortion indicators for implementation: indicators ready for operation in production conditions and adapted to changes in image resolution.
Our current video quality metrics are available and licensed as standalone, downloadable runtime software. We would like to be able to demonstrate the final version of the technology, taking resolution changes into account, in the form of Software as a Service (SaaS). The user will be able to stream or upload video material to our server, which will respond (via a browser or a dedicated API) with the results of the analysis in the form of indicator values.
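To illustrate why adaptation to resolution changes matters, the toy indicator below (a hypothetical sketch, not one of the licensed AGH indicators; the names `resize_nearest`, `sharpness_indicator`, `stable_indicator`, and the reference height are illustrative assumptions) rescales every frame to a fixed reference resolution before measuring, so the reported value does not drift simply because the input resolution changed:

```python
import numpy as np

REFERENCE_HEIGHT = 1080  # hypothetical fixed working resolution

def resize_nearest(frame: np.ndarray, height: int) -> np.ndarray:
    """Nearest-neighbour resize of a 2-D grayscale frame, keeping aspect ratio."""
    h, w = frame.shape
    width = max(1, round(w * height / h))
    rows = (np.arange(height) * h / height).astype(int)
    cols = (np.arange(width) * w / width).astype(int)
    return frame[rows][:, cols]

def sharpness_indicator(frame: np.ndarray) -> float:
    """Mean absolute horizontal gradient -- a toy no-reference sharpness proxy."""
    return float(np.abs(np.diff(frame.astype(float), axis=1)).mean())

def stable_indicator(frame: np.ndarray) -> float:
    """Rescale to the reference resolution before measuring, so the indicator
    value stays comparable when the input resolution changes."""
    return sharpness_indicator(resize_nearest(frame, REFERENCE_HEIGHT))
```

In a SaaS deployment, a normalization step of this kind would sit between ingestion of the uploaded stream and the indicator computation itself.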

Technologies Supporting the Summarization of Video Sequences

The AGH Department of Telecommunications obtained funding for the project “Technologies Supporting the Summarization of Video Sequences” under the Joint Venture of the National Center for Research and Development and the National Science Center to support the practical use of basic research results – TANGO.

The R&D (Research and Development) manager of the project is Mikołaj Leszczuk, DSc; the other people involved are Michał Grega, PhD, Lucjan Janowski, DSc, and Jakub Nawała, MSc.

The main objective of the proposed project is to raise the technological readiness of the results of the “Access to Multilingual Information and Opinions” (AMIS) baseline project to Technology Readiness Level (TRL) VII, at which it is possible to demonstrate the technology prototype under operational conditions, and to conduct pre-implementation conceptual work.

More precisely, the goal of the R&D work is to create and develop techniques (algorithms and their implementations) supporting the creation of video sequence summarization systems. The technologies will be based primarily on so-called Quality Indicators (QIs), which analyze audio-video content (video sequences) and assign numerical values (“weights”) to its individual fragments. Such QIs make it possible to determine to what extent a given fragment (e.g. a shot) may be interesting to the user, as well as to assess its audiovisual quality. The project is expected to deliver summarization technologies that take QI values into account when summarizing (abstracting) the original content. The project assumes the creation and development of initial solutions both for individual QIs and for the complete tools for creating visual summaries previously created by the AGH Department of Telecommunications. The result of the AMIS project was a news summary system. The project will allow a near-commercialization technology to be created, adapted to the current requirements of audiovisual content – both as a whole and as individual selected components.
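The QI-weighted selection described above can be sketched as a simple greedy procedure (a minimal illustration only, assuming each shot carries a single aggregated QI weight; the data layout and function name are hypothetical, not the project's actual algorithm):

```python
from typing import List, Tuple

def summarize(shots: List[Tuple[str, float, float]], budget: float) -> List[str]:
    """Greedy summary: pick the highest-weight shots until the time budget
    (in seconds) is exhausted. Each shot is (shot_id, duration, qi_weight)."""
    chosen, used = [], 0.0
    for shot_id, duration, weight in sorted(shots, key=lambda s: s[2], reverse=True):
        if used + duration <= budget:
            chosen.append(shot_id)
            used += duration
    # Restore the original temporal order for playback.
    order = {shot_id: i for i, (shot_id, _, _) in enumerate(shots)}
    return sorted(chosen, key=order.__getitem__)
```

A production summarizer would combine several QIs (interest and audiovisual quality) into the per-shot weight rather than using a single value.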

At the same time, the project aims to carry out conceptual work to determine the economic use of the research results: conducting market analyses, acquiring partners interested in R&D cooperation and in implementing the project results, and developing a strategy for securing intellectual property protection of the research results.

Objective Video Quality Assessment Method for Recognition Tasks

Nowadays, we have many metrics for overall Quality of Experience (QoE), both Full-Reference (FR) ones, like Peak Signal-to-Noise Ratio (PSNR) or Structural Similarity (SSIM), and No-Reference (NR) ones, like Video Quality Indicators (VQI), successfully used in video processing systems for video quality evaluation. However, they are not appropriate for recognition task analytics in Target Recognition Video (TRV).
Therefore, the correct estimation of video processing pipeline performance is still a significant research challenge in Computer Vision (CV) tasks. There is a need for an objective video quality assessment method for recognition tasks.
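For reference, the FR metric PSNR mentioned above is defined as 10·log10(MAX²/MSE); a minimal NumPy implementation (a standard textbook formulation, not the project's prototype software):

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_value: float = 255.0) -> float:
    """Full-reference PSNR in dB between two frames of identical shape."""
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames: no distortion
    return float(10.0 * np.log10(max_value ** 2 / mse))
```

Metrics of this kind correlate with perceived quality, but, as argued above, not necessarily with recognition performance on the same footage.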

In response to this need, this project shows that it is possible to deliver a new concept for an objective video quality assessment method for recognition tasks (implemented as prototype software serving as a proof of concept). The method was trained and tested on a representative set of video sequences.

The new, innovative approach used by the software is described in:

Kawa, Kamil; Leszczuk, Mikołaj; Boev, Atanas: “Survey on the State-Of-The-Art Methods for Objective Video Quality Assessment in Recognition Tasks”. In: International Conference on Multimedia Communications, Services and Security, pp. 332–350, Springer, Cham, 2020.

Supported by the Huawei Innovation Research Program (HIRP).

New Lip Sync Indicator

We are currently developing a new video quality indicator (VQI) called Lip Sync VQI.

Its main purpose is to detect a lack of synchronisation between the audio and video streams. This de-synchronization usually manifests itself as the speech signal leading (or lagging behind) the lip movement (hence the name Lip Sync).
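One classical way to estimate such an offset (a simplified sketch only, not the actual Lip Sync VQI algorithm; the signal names and function are hypothetical) is to cross-correlate a per-frame audio activity signal with a per-frame lip-motion signal and report the lag that maximises the correlation:

```python
import numpy as np

def av_offset(audio_activity: np.ndarray, lip_activity: np.ndarray) -> int:
    """Estimate the audio-video offset in frames as the lag maximising the
    cross-correlation of the two activity signals. A positive value means the
    audio lags behind the lip movement."""
    a = audio_activity - audio_activity.mean()  # remove DC so flat regions
    v = lip_activity - lip_activity.mean()      # do not dominate the score
    corr = np.correlate(a, v, mode="full")
    return int(np.argmax(corr) - (len(v) - 1))
```

In practice the two activity signals would come from, e.g., the audio envelope and a mouth-region motion detector, sampled at the video frame rate.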

The development process is carried out in close cooperation with our business partner. This ensures the final product meets industry-class requirements. At every stage we test the performance of the solution in a realistic environment, making sure to stay in line with the initial assumptions and performance requirements.

For more information you are encouraged to read:

Fernández, Ignacio Blanco; Leszczuk, Mikołaj: “Monitoring of audio visual quality by key indicators”. In: Multimedia Tools and Applications, vol. 77, no. 2, pp. 2823–2848, 2018.

Or contact us by sending an e-mail to qoe@agh.edu.pl.

Security in Trusted Scada and Smart-Grids

HORIZON 2020 (H2020)

2015 – 2017

SCISSOR designs a new generation SCADA (supervisory control and data acquisition) security monitoring framework.

In traditional industrial control systems and critical infrastructures, security was implicitly assumed through reliance on proprietary technologies (security by obscurity), physical access protection, and disconnection from the Internet. The massive move towards open standards and IP connectivity in the last decade, the growing integration of Internet of Things technologies, and the disruptiveness of targeted cyber-attacks call for novel, designed-in cyber security means. Taking a holistic approach, SCISSOR designs a new-generation SCADA security monitoring framework comprising four layers:

  1. A monitoring layer supporting traffic probes providing programmable traffic analyses up to layer 7, new ultra low cost/energy pervasive sensing technologies, system and software integrity verification, and smart camera surveillance solutions for automatic detection and object classification;
  2. A control and coordination layer adaptively orchestrating remote probes/sensors, providing a uniform representation of monitoring data gathered from heterogeneous sources, and enforcing cryptographic data protection, including certificate-less identity/attribute-based encryption schemes;
  3. A decision and analysis layer in the form of an innovative SIEM (Security Information and Event Management) fed by both highly heterogeneous monitoring events as well as the native control processes’ signals, and supporting advanced correlation and detection methodologies;
  4. A human-machine layer devised to present the system behavior to the human end user in real time, in a simple and usable manner. SCISSOR’s framework will leverage easy-to-deploy cloud-based development and integration, and will be designed with resilience and reliability in mind (no single point of failure).

SCISSOR will be assessed via:

  • An off-field SCADA platform, to highlight its ability to detect and thwart targeted threats, and
  • An on-field, real-world deployment within a running operational smart grid, to showcase usability, viability and deployability.

The ‘Data and Multimedia Processing’ team, headed by Professor Andrzej Dziech, is leading the Work Package dedicated to monitoring-layer technologies and solutions. The following components will be designed and developed by the team in this Work Package:

  • Mechanisms for acquisition and transmission of video data from multiple cameras of different types (CCTV, infrared)
  • Automatic recognition of objects and their characteristics and positions
  • Event Analysis System – the development of requirements and methods of data analysis and decision-making with regard to the integration of information from multiple sources
  • Privacy protection of processed and transmitted digital data using digital watermarks
  • Event Analysis System – integration of algorithms, implementation and evaluation of a module to generate data for visualization and decision-making;
  • An integrated system for processing and storing data.

The AGH team will also be engaged in other activities, including:

  • System control and monitoring
  • Validation and demonstration performed on an advanced SCADA platform in a real-world deployment

The AGH team’s work on video surveillance is of significant practical interest and will make it possible to intensify cooperation with industry.

Partners:

  • ASSYSTEM ENGINEERING AND OPERATION SERVICES
  • UNIVERSITE PIERRE ET MARIE CURIE
  • SIXSQ SARL
  • CONSORZIO NAZIONALE INTERUNIVERSITARIO PER LE TELECOMUNICAZIONI
  • RADIO6ENSE SRL
  • SALZBURG RESEARCH FORSCHUNGSGESELLSCHAFT M.B.H
  • KATHOLIEKE UNIVERSITEIT LEUVEN
  • SEA SOCIETÀ ELETTRICA DI FAVIGNANA SPA

Intelligent Multimedia System for Web and IPTV Archiving. Digital Analysis and Documentation of Multimedia Content

EUREKA

2013-2015

IMCOP improves current approaches in the field of digital preservation of IPTV and Internet Web-based content. It provides a comprehensive and extensive system for analysing, documenting and presenting dynamic Web-based content and complex, multimedia objects.

Partners:

  • DGT Sp. z o.o.
  • Viaccess – Orca
  • AGH University of Science and Technology, Department of Telecommunications
  • The University of Computer Engineering and Telecommunication

Next Generation Multimedia Efficient, Scalable and Robust Delivery

EUREKA

2013-2016

The objective of MITSU (next generation MultImedia efficienT, Scalable and robUst Delivery) is to study and develop the next generation of multimedia streaming systems to be used over wireless networks.

While considering state-of-the-art technologies in this field, MITSU intends to study and implement video interoperability while minimising complexity and power consumption.

Partners:

  • Institute of Bioorganic Chemistry, Polish Academy of Sciences, Poznań Supercomputing and Networking Center (PSNC)
  • AGH University of Science and Technology
  • Adam Mickiewicz University
  • Instituto Tecnológico de Aragón (ITA)
  • Ebesis, S.L.
  • Arantia 2010, SL
  • ADTEL Sistemas de Telecomunicación SL
  • Embou Nuevas Tecnologías, S.L.
  • ARGELA Yazılım ve Bilişim Teknolojileri San. ve Tic. A.Ş (ARGELA)
  • C Tech Bilişim Teknolojileri A.Ş.
  • TeamNet World Professional Services
  • AUTONOMOUS SYSTEMS SRL