SURVANT: An Innovative Semantics-Based Surveillance Video Archives Investigation Assistant

Toti, Daniele
2021

Abstract

SURVANT is an innovative video archive investigation system that aims to drastically reduce the time required to examine large amounts of video content. It can seamlessly collect the videos relevant to a specific case from heterogeneous repositories. SURVANT employs Deep Learning technologies to extract inter/intra-camera video analytics, including object recognition, inter/intra-camera tracking, and activity detection. The identified entities are semantically indexed, enabling search and retrieval based on visual characteristics. Semantic reasoning and inference mechanisms based on visual concepts and spatio-temporal metadata allow users to identify hidden correlations and discard outliers. SURVANT offers the user a unified GIS-based search interface to unearth the required information using natural-language query expressions and a wide range of filtering options. An intuitive interface with a gentle learning curve helps the user create specific queries and receive accurate results using advanced visual analytics tools. GDPR-compliant management of personal data collected from surveillance videos is integrated into the system design.
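The abstract outlines a pipeline of per-camera analytics, semantic indexing of the detected entities, and spatio-temporal retrieval. As a purely illustrative sketch (not taken from the paper), the following minimal Python example shows how detections carrying concept labels and spatio-temporal metadata could be indexed and queried by concept and time window; all names (`Detection`, `SemanticIndex`, `query`) are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record for an entity detected in a video stream:
# a concept label plus camera, time, and GIS coordinates.
@dataclass
class Detection:
    concept: str        # e.g. "person", "vehicle"
    camera_id: str
    timestamp: datetime
    lat: float
    lon: float

# Minimal in-memory semantic index mapping concept labels to detections.
# A real system like the one described would use an ontology-backed
# store with reasoning; this only illustrates the retrieval pattern.
class SemanticIndex:
    def __init__(self) -> None:
        self._by_concept: dict[str, list[Detection]] = {}

    def add(self, det: Detection) -> None:
        self._by_concept.setdefault(det.concept, []).append(det)

    def query(self, concept: str, start: datetime, end: datetime) -> list[Detection]:
        # Filter detections of a given concept to a time window,
        # mirroring the spatio-temporal filtering the abstract describes.
        return [d for d in self._by_concept.get(concept, [])
                if start <= d.timestamp <= end]

index = SemanticIndex()
index.add(Detection("person", "cam-01", datetime(2021, 1, 10, 14, 5), 41.90, 12.50))
index.add(Detection("vehicle", "cam-02", datetime(2021, 1, 10, 14, 7), 41.91, 12.49))

hits = index.query("person", datetime(2021, 1, 10, 14, 0), datetime(2021, 1, 10, 15, 0))
print([d.camera_id for d in hits])  # ['cam-01']
```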
Year: 2021
Language: English
Series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Conference: 25th International Conference on Pattern Recognition Workshops, ICPR 2020
Venue: Virtual
Conference start: 10 January 2021
Conference end: 11 January 2021
ISBN: 978-3-030-68786-1
Publisher: Springer Science and Business Media Deutschland GmbH
Vella, G., Dimou, A., Gutierrez-Perez, D., Toti, D., Nicoletti, T., La Mattina, E., Grassi, F., Ciapetti, A., Mcelligott, M., Shahid, N., Daras, P.: SURVANT: An Innovative Semantics-Based Surveillance Video Archives Investigation Assistant. In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 12667, pp. 611-626 (Virtual, 10-11 January 2021). Springer Science and Business Media Deutschland GmbH, Berlin (2021). DOI: 10.1007/978-3-030-68787-8_44. Handle: http://hdl.handle.net/10807/178515
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10807/178515
Citations
  • Scopus: 0