Author

Vincenzo Dentamaro

Other affiliations: Georgia Institute of Technology, IBM
Bio: Vincenzo Dentamaro is an academic researcher from the University of Bari. The author has contributed to research in the topics of computer science and deep learning, has an h-index of 5, and has co-authored 19 publications receiving 76 citations. Previous affiliations of Vincenzo Dentamaro include the Georgia Institute of Technology and IBM.

Papers
Journal Article
28 Nov 2019 - Sensors
TL;DR: This research provides a comparative analysis of state-of-the-art object detectors, visual features, and classification models useful for implementing traffic state estimation, and demonstrates that the deep learning method performs most accurately, reaching 99.9% accuracy for binary traffic state classification and 98.6% for multiclass classification.
Abstract: Automatic traffic flow classification is useful for revealing road congestion and accidents. Nowadays, roads and highways are equipped with a huge number of surveillance cameras, which can be used for real-time vehicle identification and thus for traffic flow estimation. This research provides a comparative analysis of state-of-the-art object detectors, visual features, and classification models useful for implementing traffic state estimation. More specifically, three different object detectors are compared to identify vehicles. Four machine learning techniques are then employed to explore five visual features for classification purposes. These classic machine learning approaches are compared with deep learning techniques. This research demonstrates that, when methods and resources are properly implemented and tested, results are very encouraging for both families of methods, but the deep learning method performs most accurately, reaching 99.9% accuracy for binary traffic state classification and 98.6% for multiclass classification.

29 citations
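To make the pipeline described in the abstract above concrete, the sketch below shows only the final classification stage: per-frame visual features (assumed here to be quantities such as vehicle count, mean bounding-box area, and mean pixel speed produced by an upstream object detector) are fed to a classic machine-learning classifier for binary traffic-state prediction. The feature names, the synthetic data, and the choice of a random forest are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: binary traffic-state classification from detector-derived features.
# The features and labels are synthetic stand-ins for real per-frame measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# hypothetical per-frame features: [vehicle_count, mean_box_area, mean_speed_px]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # toy rule: 1 = congested, 0 = free flow

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("binary traffic-state accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The same feature vectors could be swapped for learned CNN representations to reproduce the paper's classic-versus-deep comparison.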

Journal Article
TL;DR: The hypothesis is that the kinematic theory of rapid human movements, originally developed to describe handwriting patterns and used here in conjunction with other spatio-temporal features, can discriminate neurodegenerative disease patterns, especially in early stages, when analyzing human gait with 2D cameras.
Abstract: Neurodegenerative diseases are conditions whose progression can partially or completely compromise the normal course of a person's life. A timely diagnosis plays a major role in preserving the patient's quality of life. The analysis of neurodegenerative diseases and their stage is also carried out by means of gait analysis, yet early-stage neurodegenerative disease assessment is still an open problem. In this paper, the focus is on modeling the human gait movement pattern using the kinematic theory of rapid human movements and its Sigma-Lognormal model. The hypothesis is that the kinematic theory of rapid human movements, originally developed to describe handwriting patterns and used here in conjunction with other spatio-temporal features, can discriminate neurodegenerative disease patterns, especially in early stages, when analyzing human gait with 2D cameras. The paper empirically demonstrates its effectiveness in describing neurodegenerative patterns when used in conjunction with state-of-the-art pose estimation and feature extraction techniques. The developed solution achieved 99.1% accuracy using velocity-based, angle-based, and Sigma-Lognormal features with the left walk orientation.

27 citations
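For background, the velocity profile at the heart of the Sigma-Lognormal model referenced above takes the standard form used in the kinematic theory of rapid human movements; the notation below follows common usage in that literature rather than the paper's own symbols.

```latex
% Speed profile as a sum of N lognormal strokes: D_i is the stroke amplitude,
% t_{0i} its onset time, \mu_i its log-time delay, \sigma_i its log-response time.
\lvert \vec{v}(t) \rvert = \sum_{i=1}^{N} D_i \, \Lambda\!\left(t; t_{0i}, \mu_i, \sigma_i^{2}\right),
\qquad
\Lambda\!\left(t; t_{0i}, \mu_i, \sigma_i^{2}\right) =
\frac{1}{\sigma_i \sqrt{2\pi}\,\left(t - t_{0i}\right)}
\exp\!\left( -\frac{\left( \ln\left(t - t_{0i}\right) - \mu_i \right)^{2}}{2\sigma_i^{2}} \right)
```

The per-stroke parameters fitted to the gait velocity signal are the kind of Sigma-Lognormal features the abstract combines with velocity-based and angle-based ones.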

Journal Article
TL;DR: The AUCO ResNet is a biologically inspired deep neural network designed for sound classification, and more specifically for Covid-19 recognition from audio tracks of coughs and breaths. It can be trained end to end, optimizing with gradient descent all the modules of the learning algorithm: mel-like filter design, feature extraction, feature selection, dimensionality reduction, and prediction.

20 citations
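The distinctive point above is that even the mel-like filter design is optimized with gradient descent. The sketch below illustrates that one idea in isolation, a learnable filterbank applied to a magnitude spectrogram and trained jointly with a classifier head; it is not the AUCO ResNet itself, and the layer sizes, random initialization, and plain linear head are assumptions.

```python
# Minimal sketch of a trainable mel-like filterbank (assumed design, not AUCO ResNet).
import torch
import torch.nn as nn

class TrainableFilterbankClassifier(nn.Module):
    def __init__(self, n_fft_bins=257, n_bands=64, n_classes=2):
        super().__init__()
        # learnable filterbank (n_bands x n_fft_bins); softplus keeps weights positive
        self.filterbank = nn.Parameter(torch.randn(n_bands, n_fft_bins) * 0.01)
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(n_classes))

    def forward(self, mag_spec):                    # mag_spec: (batch, n_fft_bins, frames)
        fb = nn.functional.softplus(self.filterbank)
        band_energies = fb @ mag_spec               # (batch, n_bands, frames)
        return self.head(torch.log1p(band_energies))

# usage: gradients flow into `filterbank`, so filter design is learned end to end
model = TrainableFilterbankClassifier()
audio = torch.randn(4, 16000)                       # 4 one-second clips at 16 kHz
spec = torch.stft(audio, n_fft=512, window=torch.hann_window(512), return_complex=True).abs()
logits = model(spec)                                # shape: (4, 2)
```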

Patent
02 Oct 2014
TL;DR: In this article, the authors proposed a method for indoor localization of a user equipped with a localization device having electromagnetic signal receiver means and means for detecting the orientation in a predetermined spatial reference system.
Abstract: The present invention relates to a method for indoor localization of a user equipped with a localization device having electromagnetic signal receiver means and means for detecting orientation in a predetermined spatial reference system. The indoor space is divided into a plurality of rooms, each of which includes a plurality of spatial volumes or areas (nodes) connected together in a directed-graph arrangement, and a plurality of radio transmitters, each designed to emit a respective localization signal, are arranged inside this space. The method is based on the synergistic use of three localization techniques: fingerprinting, inertial navigation with intelligent step recognition, and proximity localization.

20 citations
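Of the three techniques combined in the patent, fingerprinting is the simplest to illustrate: the live received-signal-strength scan is matched against a stored radio map of graph nodes. The node names, RSSI values, and plain nearest-neighbour rule below are illustrative assumptions, not the patented procedure.

```python
# Minimal fingerprinting sketch: pick the graph node whose stored RSSI fingerprint
# is closest to the live scan. Values and node names are hypothetical.
import numpy as np

# offline radio map: node -> mean RSSI (dBm) from three fixed transmitters
radio_map = {
    "room_A/door":   np.array([-45.0, -70.0, -80.0]),
    "room_A/window": np.array([-55.0, -62.0, -75.0]),
    "corridor/n1":   np.array([-72.0, -48.0, -66.0]),
}

def locate(live_rssi):
    """Return the node whose fingerprint has the smallest Euclidean distance to the scan."""
    return min(radio_map, key=lambda node: np.linalg.norm(radio_map[node] - live_rssi))

print(locate(np.array([-50.0, -68.0, -79.0])))   # -> room_A/door
```

In the patented method this estimate would then be combined with inertial step recognition and proximity detections along the directed graph of nodes.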

Journal Article
TL;DR: A novel generative deep learning architecture for time series analysis, inspired by Google DeepMind's WaveNet network and called TrafficWave, is proposed and applied to the traffic prediction problem; results show that the proposed system achieves a valuable reduction in MAPE when compared with other state-of-the-art techniques.
Abstract: Vehicular traffic flow prediction for a specific day of the week in a specific time span is valuable information. Local police can use this information to preventively control the traffic in more critical areas and improve road viability, also decreasing the number of accidents. In this paper, a novel generative deep learning architecture for time series analysis, inspired by Google DeepMind's WaveNet network and called TrafficWave, is proposed and applied to the traffic prediction problem. The technique is compared with the best-performing state-of-the-art approaches: stacked autoencoders, long short-term memory networks, and gated recurrent units. Results show that the proposed system achieves a valuable reduction in MAPE when compared with other state-of-the-art techniques.

19 citations
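WaveNet-style forecasters of this kind are built from stacks of dilated causal convolutions, so each prediction sees only past traffic readings. The sketch below illustrates that building block; the layer count, channel width, and residual wiring are assumptions, not the TrafficWave architecture itself.

```python
# Minimal sketch of a dilated-causal-convolution forecaster (assumed layout).
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution padded on the left so outputs never see future time steps."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                           # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class TinyWaveNetLikeForecaster(nn.Module):
    def __init__(self, channels=16, n_layers=4, kernel_size=2):
        super().__init__()
        self.inp = nn.Conv1d(1, channels, 1)
        self.layers = nn.ModuleList(
            CausalConv1d(channels, kernel_size, dilation=2 ** i) for i in range(n_layers)
        )
        self.out = nn.Conv1d(channels, 1, 1)

    def forward(self, x):                           # x: (batch, 1, time) past traffic counts
        h = self.inp(x)
        for layer in self.layers:                   # doubling dilations widen the receptive field
            h = torch.relu(layer(h)) + h            # residual connection
        return self.out(h)[..., -1]                 # one-step-ahead forecast from the last step

model = TinyWaveNetLikeForecaster()
forecast = model(torch.randn(8, 1, 64))             # shape: (8, 1)
```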


Cited by
01 Jan 1979
TL;DR: This special issue aims to gather recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis, with particular encouragement for papers addressing interesting real-world computer vision and multimedia applications.
Abstract: In the real world, a realistic setting for computer vision or multimedia recognition problems is that some classes contain lots of training data while many classes contain only a small amount. How to use frequent classes to help learn rare classes, for which it is harder to collect training data, is therefore an open question. Learning with shared information is an emerging topic in machine learning, computer vision, and multimedia analysis. Different levels of components can be shared during the concept modeling and machine learning stages, such as generic object parts, attributes, transformations, regularization parameters, and training examples. Regarding specific methods, multi-task learning, transfer learning, and deep learning can be seen as different strategies for sharing information. These learning-with-shared-information methods are very effective in solving real-world large-scale problems. This special issue aims at gathering recent advances in learning-with-shared-information methods and their applications in computer vision and multimedia analysis. Both state-of-the-art works and literature reviews are welcome for submission. Papers addressing interesting real-world computer vision and multimedia applications are especially encouraged. Topics of interest include, but are not limited to:
• Multi-task learning or transfer learning for large-scale computer vision and multimedia analysis
• Deep learning for large-scale computer vision and multimedia analysis
• Multi-modal approaches for large-scale computer vision and multimedia analysis
• Different sharing strategies, e.g., sharing generic object parts, attributes, transformations, regularization parameters, and training examples
• Real-world computer vision and multimedia applications based on learning with shared information, e.g., event detection, object recognition, object detection, action recognition, human head pose estimation, object tracking, location-based services, and semantic indexing
• New datasets and metrics to evaluate the benefit of the proposed sharing ability for a specific computer vision or multimedia problem
• Survey papers on the topic of learning with shared information
Authors who are unsure whether their planned submission is in scope may contact the guest editors prior to the submission deadline with an abstract in order to receive feedback.

1,758 citations

01 Jan 2016
The Radon Transform and Some of Its Applications (abstract not available).

212 citations

Patent
30 Jun 2016
TL;DR: In this paper, the authors use a mobile device to detect the peaks of beacon signals as the device travels through traffic choke points, and thus accurately determine the position and speed of the mobile device in the transport corridor between the choke points.
Abstract: Systems and methods to position beacons at traffic choke points, use a mobile device to detect the peaks of beacon signals corresponding to the mobile device traveling through the choke points, and thus accurately determine the position and speed of the mobile device in the transport corridor between the choke points. The determined position and speed of the mobile device can be used to improve the performance of other location determination technologies, such as radio-frequency fingerprint-based location estimates and/or inertial guidance location estimates.

44 citations
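The core of the idea is that the RSSI peak marks the instant the device passes a choke point, so two peak times at beacons a known distance apart give both position (at the beacon) and average speed in between. The synthetic signals, peak-picking rule, and numbers below are illustrative assumptions.

```python
# Minimal sketch: average speed between two choke points from beacon-signal peaks.
import numpy as np

def peak_time(timestamps, rssi):
    """Time at which the beacon signal is strongest, i.e. the device is closest to it."""
    return timestamps[int(np.argmax(rssi))]

t = np.linspace(0.0, 60.0, 601)                    # seconds
rssi_a = -60.0 - 0.5 * np.abs(t - 12.0)            # synthetic peak passing beacon A at t = 12 s
rssi_b = -60.0 - 0.5 * np.abs(t - 47.0)            # synthetic peak passing beacon B at t = 47 s

distance_m = 350.0                                  # assumed spacing between the choke points
dt = peak_time(t, rssi_b) - peak_time(t, rssi_a)
print(f"average speed: {distance_m / dt:.1f} m/s")  # 350 m / 35 s = 10.0 m/s
```

As the abstract notes, this position/speed estimate can then be used to correct fingerprint-based or inertial estimates between the choke points.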

Journal Article
TL;DR: A novel feature selection and extraction approach for anomaly-based IDS is proposed and compared with other state-of-the-art studies, achieving a very high classification accuracy of 99.98%.
Abstract: The Internet of Things (IoT) ecosystem has experienced significant growth in data traffic and consequently high dimensionality. Intrusion Detection Systems (IDSs) are essential self-protective tools against various cyber-attacks. However, IoT IDSs face significant challenges due to functional and physical diversity. These IoT characteristics make exploiting all features and attributes for IDS self-protection difficult and unrealistic. This paper proposes and implements a novel feature selection and extraction approach for anomaly-based IDS. The approach begins with two entropy-based criteria, information gain (IG) and gain ratio (GR), used to select and extract relevant features in various ratios. Then, mathematical set theory (union and intersection) is used to extract the best features. The model framework is trained and tested on the IoT intrusion dataset 2020 (IoTID20) and the NSL-KDD dataset using four machine learning algorithms: Bagging, Multilayer Perceptron, J48, and IBk. Our approach results in 11 and 28 relevant features (out of 86) using the intersection and union, respectively, on IoTID20, and in 15 and 25 relevant features (out of 41) using the intersection and union, respectively, on NSL-KDD. We further compare our approach with other state-of-the-art studies. The comparison reveals that our model is superior and competent, scoring a very high classification accuracy of 99.98%.

42 citations
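A minimal sketch of the selection scheme described above: score each feature with an information-gain-style criterion, re-score with a gain-ratio-style normalisation, keep the top-k under each criterion, and combine the two selections by set intersection and union. Here mutual_info_classif stands in for information gain, dividing by a discretised feature entropy is a rough proxy for gain ratio, and k and the synthetic data are assumptions rather than the paper's exact procedure.

```python
# Minimal sketch: entropy-based scoring plus set intersection/union of selected features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=400, n_features=20, n_informative=5, random_state=0)

def feature_entropy(col, bins=10):
    """Entropy of a discretised feature, used to normalise IG into a gain-ratio-like score."""
    counts, _ = np.histogram(col, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

ig = mutual_info_classif(X, y, random_state=0)       # information-gain-like scores
gr = ig / np.array([feature_entropy(X[:, j]) + 1e-9 for j in range(X.shape[1])])

k = 8
top_ig = set(np.argsort(ig)[-k:])
top_gr = set(np.argsort(gr)[-k:])

print("intersection:", sorted(top_ig & top_gr))      # features both criteria agree on
print("union:       ", sorted(top_ig | top_gr))      # features selected by either criterion
```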

Journal Article
TL;DR: In this paper, the authors explored how artificial intelligence (AI) is being used in the smart city concept, examining 133 articles (97% of Scopus and 73% of WoS) in healthcare, education, environment and waste management, agriculture, mobility and smart transportation, risk management, and security.
Abstract: Recently, the population density in cities has increased at a higher pace. According to the United Nations Population Fund, cities accommodated 3.3 billion people (54% of the global population) in 2014; by 2050, around 5 billion people (68%) will be residing in cities. To make life in cities more comfortable and cost-effective, cities must become smart and intelligent, which is mainly accomplished through intelligent decision-making processes based on computational intelligence technologies. This paper explored how artificial intelligence (AI) is being used in the smart city concept. From 2014 to 2021, we examined 133 articles (97% of Scopus and 73% of WoS) in healthcare, education, environment and waste management, agriculture, mobility and smart transportation, risk management, and security. Moreover, we observed that the healthcare (23% impact), mobility (19% impact), privacy and security (11% impact), and energy (10% impact) sectors have a more significant influence on AI adoption in smart cities. Since the epidemic hit cities in 2019, the healthcare industry has intensified its AI-based advances by 60%. According to the analysis, AI algorithms such as ANN, RNN/LSTM, CNN/R-CNN, DNN, and SVM/LS-SVM have a higher impact on the various smart city domains.

38 citations