Author

Nazli Farajidavar

Other affiliations: King's College London
Bio: Nazli Farajidavar is an academic researcher from the University of Surrey. The author has contributed to research in the topics of smart cities and cluster analysis, has an h-index of 7, and has co-authored 14 publications receiving 269 citations. Previous affiliations of Nazli Farajidavar include King's College London.

Papers
Journal ArticleDOI
TL;DR: The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams to break away from silo applications and enable cross-domain data integration.
Abstract: Our world and our lives are changing in many ways. Communication, networking, and computing technologies are among the most influential enablers that shape our lives today. Digital data and connected worlds of physical objects, people, and devices are rapidly changing the way we work, travel, socialize, and interact with our surroundings, and they have a profound impact on different domains, such as healthcare, environmental monitoring, urban systems, and control and management applications, among several other areas. Cities currently face an increasing demand for providing services that can have an impact on people’s everyday lives. The CityPulse framework supports smart city service creation by means of a distributed system for semantic discovery, data analytics, and interpretation of large-scale (near-)real-time Internet of Things data and social media data streams. The goal is to break away from silo applications and enable cross-domain data integration. The CityPulse framework integrates multimodal, mixed-quality, uncertain, and incomplete data to create reliable, dependable information, and it continuously adapts its data processing techniques to meet the quality-of-information requirements of end users. Unlike existing solutions that mainly offer unified views of the data, the CityPulse framework is also equipped with powerful data analytics modules that perform intelligent data aggregation, event detection, quality assessment, contextual filtering, and decision support. This paper presents the framework, describes its components, and demonstrates how they interact to support the easy development of custom-made applications for citizens. The benefits and effectiveness of the framework are demonstrated in a use-case scenario implementation.
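
As a rough illustration of the quality assessment, contextual filtering, and aggregation steps listed above, the following Python sketch processes a toy sensor stream. It is a minimal sketch in the spirit of the description, not CityPulse's actual API; all names (Observation, assess_quality, aggregate) are hypothetical.

```python
# Minimal sketch (not CityPulse's actual API) of quality-annotated stream
# processing: each incoming observation is scored for quality, filtered by
# a quality threshold, and aggregated per sensor.
from dataclasses import dataclass
from statistics import mean

@dataclass
class Observation:
    sensor_id: str
    value: float
    quality: float = 1.0  # 1.0 = fully trusted

def assess_quality(obs: Observation, plausible=(0.0, 200.0)) -> Observation:
    # Toy quality check: out-of-range readings lose trust.
    lo, hi = plausible
    obs.quality = 1.0 if lo <= obs.value <= hi else 0.1
    return obs

def aggregate(stream, min_quality=0.5):
    # Contextual filtering + aggregation: average trusted readings per sensor.
    buckets = {}
    for obs in map(assess_quality, stream):
        if obs.quality >= min_quality:
            buckets.setdefault(obs.sensor_id, []).append(obs.value)
    return {sid: mean(vals) for sid, vals in buckets.items()}

readings = [Observation("traffic-42", v) for v in (55.0, 61.0, 999.0, 58.0)]
print(aggregate(readings))  # {'traffic-42': 58.0}; the 999.0 outlier is dropped
```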

199 citations

Journal ArticleDOI
TL;DR: This work proposes a novel framework with an efficient semantic data processing pipeline, allowing for real-time observation of the pulse of a city and investigates the optimization of the semantic data discovery and integration based on the proposed stream quality analysis and data aggregation techniques.
Abstract: An increasing number of cities are confronted with challenges resulting from rapid urbanization and the new demands that a rapidly growing digital economy imposes on current applications and information systems. Smart city applications enable city authorities to monitor, manage, and provide plans for public resources and infrastructures in city environments, while enabling citizens and businesses to develop and use intelligent services in cities. However, providing such smart city applications gives rise to several issues, such as semantic heterogeneity and the trustworthiness of data sources, and extracting up-to-date information in real time from large-scale dynamic data streams. To address these issues, we propose a novel framework with an efficient semantic data processing pipeline, allowing for real-time observation of the pulse of a city. The proposed framework enables efficient semantic integration of data streams and complex event processing on top of real-time data aggregation and quality analysis in a semantic Web environment. To evaluate our system, we use real-time sensor observations that have been published via an open platform called Open Data Aarhus by the City of Aarhus. We examine the framework utilizing symbolic aggregate approximation to reduce the size of data streams, and we perform quality analysis taking into account both single and multiple data streams. We also investigate the optimization of semantic data discovery and integration based on the proposed stream quality analysis and data aggregation techniques.
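
The stream-reduction step mentioned above, symbolic aggregate approximation (SAX), is a standard technique: z-normalise the series, average it over fixed-length segments, and map each segment mean to a letter using Gaussian breakpoints. A minimal sketch with a 4-symbol alphabet follows; it illustrates the technique, not the paper's implementation.

```python
# Minimal SAX sketch: the breakpoints are the standard Gaussian quantiles
# (0.25, 0.5, 0.75) for a 4-symbol alphabet.
import numpy as np

def sax(series, n_segments=8, breakpoints=(-0.6745, 0.0, 0.6745)):
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalise
    paa = x.reshape(n_segments, -1).mean(axis=1)    # piecewise aggregate means
    symbols = np.searchsorted(breakpoints, paa)     # bin each mean into a symbol
    return "".join("abcd"[s] for s in symbols)

t = np.linspace(0, 2 * np.pi, 64)
print(sax(np.sin(t)))  # 64 samples compressed to an 8-letter word, e.g. 'cddcbaab'
```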

55 citations

Proceedings ArticleDOI
01 Nov 2011
TL;DR: It is shown that if a classification system can analyze the unlabeled test data in order to adapt its models, a significant performance improvement can be achieved, and transductive transfer learning methods are applied to action classification.
Abstract: This paper investigates the application of transductive transfer learning methods to action classification. The application scenario is that of off-line video annotation for retrieval. We show that if a classification system can analyze the unlabeled test data in order to adapt its models, a significant performance improvement can be achieved. We apply this approach to action classification in tennis games, where the training and test videos are of a different nature. Actions are described using HOG3D features, and for transfer we use a method based on feature re-weighting and a novel method based on feature translation and scaling.
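
The feature translation and scaling idea can be illustrated with a simple per-dimension moment-matching transform that shifts and rescales source features toward statistics estimated from unlabeled target data. This is a hedged simplification, not necessarily the paper's exact formulation.

```python
# Simplified per-dimension feature translation and scaling: source features
# are shifted and rescaled so their mean and standard deviation match the
# statistics of the unlabeled target domain. Illustration only.
import numpy as np

def translate_and_scale(X_src, X_tgt, eps=1e-8):
    mu_s, sd_s = X_src.mean(0), X_src.std(0) + eps
    mu_t, sd_t = X_tgt.mean(0), X_tgt.std(0) + eps
    return (X_src - mu_s) / sd_s * sd_t + mu_t  # match target statistics

rng = np.random.default_rng(0)
X_src = rng.normal(0.0, 1.0, size=(100, 16))   # e.g. HOG3D-like descriptors
X_tgt = rng.normal(2.0, 0.5, size=(80, 16))    # unlabeled test-domain features
X_adapted = translate_and_scale(X_src, X_tgt)
print(X_adapted.mean(), X_adapted.std())       # ≈ 2.0 and ≈ 0.5
```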

31 citations

Proceedings ArticleDOI
01 Sep 2014
TL;DR: This paper exploits the fact that it is often possible to gather unlabeled samples from a test/target domain in order to improve the model built from the training source set, and proposes Adaptive Transductive Transfer Machines, which approach this problem by combining four types of adaptation.
Abstract: Classification methods traditionally work under the assumption that the training and test sets are sampled from similar distributions (domains). However, when such methods are deployed in practice, the conditions under which test data are acquired do not exactly match those of the training set. In this paper, we exploit the fact that it is often possible to gather unlabeled samples from a test/target domain in order to improve the model built from the training source set. We propose Adaptive Transductive Transfer Machines, which approach this problem by combining four types of adaptation: a lower-dimensional space that is shared between the two domains; a set of local transformations to further increase the domain similarity; a classifier parameter adaptation method which modifies the learner for the new domain; and a set of class-conditional transformations aiming to increase the similarity between the posterior probabilities of samples in the source and target sets. We show that our pipeline leads to an improvement over the state of the art on cross-domain image classification datasets, using raw images or basic features.
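
As a hedged illustration of the first of the four adaptations (a shared lower-dimensional space), the sketch below fits PCA on the pooled source and target data and classifies target samples with a simple 1-NN learner. It is not the authors' full ATTM pipeline, and all names are hypothetical.

```python
# Shared-subspace sketch: fit PCA on both domains pooled together so the
# learned subspace is shared, then classify target samples with 1-NN.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

def shared_subspace_1nn(X_src, y_src, X_tgt, n_components=10):
    pca = PCA(n_components=n_components).fit(np.vstack([X_src, X_tgt]))
    Z_src, Z_tgt = pca.transform(X_src), pca.transform(X_tgt)
    clf = KNeighborsClassifier(n_neighbors=1).fit(Z_src, y_src)
    return clf.predict(Z_tgt)  # pseudo-labels for the unlabeled target set

rng = np.random.default_rng(1)
X_src = rng.normal(size=(120, 64)); y_src = rng.integers(0, 3, 120)
X_tgt = rng.normal(0.5, 1.0, size=(90, 64))  # shifted target domain
print(shared_subspace_1nn(X_src, y_src, X_tgt)[:10])
```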

17 citations

Book ChapterDOI
01 Nov 2014
TL;DR: A pipeline for transductive transfer learning is proposed and it is shown that this combination leads to an improvement over the state-of-the-art in cross-domain image classification datasets, using raw images or basic features and a simple one-nearest-neighbour classifier.
Abstract: We propose a pipeline for transductive transfer learning and demonstrate it in computer vision tasks. In pattern classification, methods for transductive transfer learning (also known as unsupervised domain adaptation) are designed to cope with cases in which one cannot assume that training and test sets are sampled from the same distribution, i.e., they are from different domains. In this setting, some unlabelled samples that belong to the same domain as the test set (i.e., the target domain) are available, enabling the learner to adapt its parameters. We approach this problem by combining three methods that transform the feature space. The first finds a lower-dimensional space that is shared between source and target domains. The second uses local transformations applied to each source sample to further increase the similarity between the marginal distributions of the datasets. The third applies one transformation per class label, aiming to increase the similarity between the posterior probabilities of samples in the source and target sets. We show that this combination leads to an improvement over the state of the art on cross-domain image classification datasets, using raw images or basic features and a simple one-nearest-neighbour classifier.
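
The third transformation (one shift per class label, driven by target pseudo-labels) can be sketched as follows; this is a simplified illustration, not the paper's exact method. The pseudo-labels could come, for instance, from a one-nearest-neighbour classifier applied after the first two steps.

```python
# Simplified class-conditional transformation: shift each source class toward
# the mean of the target samples pseudo-labelled as that class.
import numpy as np

def class_conditional_shift(X_src, y_src, X_tgt, y_tgt_pseudo):
    X_out = X_src.copy()
    for c in np.unique(y_src):
        tgt_c = X_tgt[y_tgt_pseudo == c]
        if len(tgt_c):  # one translation per class label
            X_out[y_src == c] += tgt_c.mean(0) - X_src[y_src == c].mean(0)
    return X_out

rng = np.random.default_rng(2)
X_src = rng.normal(size=(60, 8)); y_src = rng.integers(0, 2, 60)
X_tgt = rng.normal(1.0, 1.0, size=(40, 8))
y_pseudo = rng.integers(0, 2, 40)  # e.g. from a one-nearest-neighbour pass
print(class_conditional_shift(X_src, y_src, X_tgt, y_pseudo).shape)  # (60, 8)
```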

15 citations


Cited by
Proceedings ArticleDOI
01 Dec 2013
TL;DR: JDA aims to jointly adapt both the marginal and conditional distributions in a principled dimensionality reduction procedure and to construct a new feature representation that is effective and robust to substantial distribution differences.
Abstract: Transfer learning is established as an effective technology in computer vision for leveraging rich labeled data in the source domain to build an accurate classifier for the target domain. However, most prior methods have not simultaneously reduced the difference in both the marginal distribution and the conditional distribution between domains. In this paper, we put forward a novel transfer learning approach, referred to as Joint Distribution Adaptation (JDA). Specifically, JDA aims to jointly adapt both the marginal distribution and the conditional distribution in a principled dimensionality reduction procedure and to construct a new feature representation that is effective and robust to substantial distribution differences. Extensive experiments verify that JDA can significantly outperform several state-of-the-art methods on four types of cross-domain image classification problems.
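
A condensed, single-pass sketch of the JDA idea follows: build MMD matrices for the marginal distribution (M0) and, via target pseudo-labels, for each class-conditional distribution (Mc), then solve a generalized eigenproblem for a projection that minimizes both while preserving variance. The paper additionally iterates pseudo-label refinement; this simplified version is for illustration only.

```python
# Single-pass JDA-style projection (illustration only).
import numpy as np
import scipy.linalg

def jda_projection(X_s, y_s, X_t, y_t_pseudo, dim=10, lam=1.0):
    X = np.vstack([X_s, X_t]).T                      # features as columns, d x n
    n_s, n_t = len(X_s), len(X_t)
    n = n_s + n_t
    e = np.concatenate([np.full(n_s, 1 / n_s), np.full(n_t, -1 / n_t)])
    M = np.outer(e, e)                               # marginal MMD matrix M0
    for c in np.unique(y_s):                         # add conditional terms Mc
        src_c, tgt_c = (y_s == c), (y_t_pseudo == c)
        if src_c.any() and tgt_c.any():
            e_c = np.zeros(n)
            e_c[:n_s][src_c] = 1 / src_c.sum()
            e_c[n_s:][tgt_c] = -1 / tgt_c.sum()
            M += np.outer(e_c, e_c)
    H = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    A = X @ M @ X.T + lam * np.eye(X.shape[0])       # distance term + ridge
    B = X @ H @ X.T + 1e-6 * np.eye(X.shape[0])      # variance constraint
    _, vecs = scipy.linalg.eigh(A, B)                # generalized eigenproblem
    W = vecs[:, :dim]                                # smallest eigenvalues
    return (W.T @ X[:, :n_s]).T, (W.T @ X[:, n_s:]).T

rng = np.random.default_rng(3)
X_s = rng.normal(size=(50, 20)); y_s = rng.integers(0, 2, 50)
X_t = rng.normal(0.8, 1.0, size=(40, 20))
Z_s, Z_t = jda_projection(X_s, y_s, X_t, y_t_pseudo=rng.integers(0, 2, 40))
print(Z_s.shape, Z_t.shape)  # (50, 10) (40, 10)
```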

1,542 citations

Journal ArticleDOI
TL;DR: This article assesses the different machine learning methods that deal with the challenges in IoT data by considering smart cities as the main use case and presents a taxonomy of machine learning algorithms explaining how different techniques are applied to the data in order to extract higher level information.

690 citations

Posted Content
TL;DR: An overview of domain adaptation and transfer learning with a specific view on visual applications and the methods that go beyond image categorization, such as object detection or image segmentation, video analyses or learning visual attributes are overviewed.
Abstract: The aim of this paper is to give an overview of domain adaptation and transfer learning with a specific view on visual applications. After a general motivation, we first position domain adaptation within the larger transfer learning problem. Second, we briefly address and analyze the state-of-the-art methods for different types of scenarios, first describing the historical shallow methods and covering both homogeneous and heterogeneous domain adaptation methods. Third, we discuss the effect of the success of deep convolutional architectures, which led to a new type of domain adaptation method that integrates the adaptation within the deep architecture. Fourth, we overview the methods that go beyond image categorization, such as object detection, image segmentation, video analysis, and learning visual attributes. Finally, we conclude the paper with a section relating domain adaptation to other machine learning solutions.

454 citations

Journal ArticleDOI
TL;DR: In this article, the authors present a taxonomy of machine learning algorithms that can be applied to the data in order to extract higher level information, and a use case of applying Support Vector Machine (SVM) on Aarhus Smart City traffic data is presented for more detailed exploration.
Abstract: Rapid developments in hardware, software, and communication technologies have allowed the emergence of Internet-connected sensory devices that provide observation and data measurement from the physical world. By 2020, it is estimated that the total number of Internet-connected devices being used will be between 25 and 50 billion. As the numbers grow and technologies become more mature, the volume of data published will increase. Internet-connected device technology, referred to as the Internet of Things (IoT), continues to extend the current Internet by providing connectivity and interaction between the physical and cyber worlds. In addition to increased volume, the IoT generates Big Data characterized by velocity in terms of time and location dependency, with a variety of multiple modalities and varying data quality. Intelligent processing and analysis of this Big Data are the key to developing smart IoT applications. This article assesses the different machine learning methods that deal with the challenges presented by IoT data by considering smart cities as the main use case. The key contribution of this study is the presentation of a taxonomy of machine learning algorithms explaining how different techniques are applied to the data in order to extract higher-level information. The potential and challenges of machine learning for IoT data analytics are also discussed. A use case of applying a Support Vector Machine (SVM) to Aarhus Smart City traffic data is presented for a more detailed exploration.
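
The SVM use case might look roughly like the sketch below, with a standardize-then-classify pipeline. The feature names and the congestion label are hypothetical stand-ins for the Aarhus dataset's actual fields, and the data here are synthetic.

```python
# Hedged sketch of training an SVM classifier on traffic-style features.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Hypothetical features: [avg_speed_kmh, vehicle_count, avg_measured_time_s]
X = rng.normal([50, 30, 60], [15, 12, 20], size=(500, 3))
y = (X[:, 1] / np.maximum(X[:, 0], 1) > 0.7).astype(int)  # toy congestion label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.2f}")
```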

375 citations

Journal ArticleDOI
01 Nov 2018 - Cities
TL;DR: In this paper, a systematic review of the literature on smart cities is presented, focusing on studies aimed at conceptual development and those providing an empirical evidence base, and the authors identify three types of drivers of smart cities: community, technology, and policy.

296 citations