
Showing papers in "IEEE Computational Intelligence Magazine in 2020"


Journal ArticleDOI
TL;DR: A stacked ensemble method for predicting the degree of intensity for emotion and sentiment by combining the outputs obtained from several deep learning and classical feature-based models using a multi-layer perceptron network is proposed.
Abstract: Emotions and sentiments are subjective in nature. They differ on a case-to-case basis. However, predicting only the emotion and sentiment does not always convey complete information. The degree or level of emotions and sentiments often plays a crucial role in understanding the exact feeling within a single class (e.g., `good' versus `awesome'). In this paper, we propose a stacked ensemble method for predicting the degree of intensity for emotion and sentiment by combining the outputs obtained from several deep learning and classical feature-based models using a multi-layer perceptron network. We develop three deep learning models based on convolutional neural network, long short-term memory and gated recurrent unit and one classical supervised model based on support vector regression. We evaluate our proposed technique for two problems, i.e., emotion analysis in the generic domain and sentiment analysis in the financial domain. The proposed model shows impressive results for both problems. Comparisons show that our proposed model achieves improved performance over the existing state-of-the-art systems.

184 citations
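The stacking idea in this paper, feeding the outputs of several base regressors into a multi-layer perceptron meta-learner, can be sketched as follows. The data, base-model predictions, and network sizes below are simulated stand-ins of our own, not the paper's models:

```python
import numpy as np

# Simulated stand-in: in the paper the base predictions come from CNN,
# LSTM, GRU and SVR models; here they are noisy views of a true
# intensity score in [0, 1]. All sizes and values are illustrative.
rng = np.random.default_rng(0)
y_true = rng.uniform(0, 1, 200)
base_preds = np.stack([y_true + rng.normal(0, s, 200)
                       for s in (0.10, 0.15, 0.20, 0.25)], axis=1)

# Meta-learner: a one-hidden-layer MLP trained on the stacked outputs.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).ravel(), h

lr = 0.1
for _ in range(3000):                      # plain gradient descent on MSE
    pred, h = forward(base_preds)
    err = pred - y_true
    gW2 = h.T @ err[:, None] / len(err)
    gb2 = err.mean()
    dh = err[:, None] @ W2.T * (1 - h ** 2)
    gW1 = base_preds.T @ dh / len(err)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mlp_mse = np.mean((forward(base_preds)[0] - y_true) ** 2)
best_single = min(np.mean((base_preds[:, j] - y_true) ** 2) for j in range(4))
print(mlp_mse < best_single)
```

The point of the stack is visible in the final comparison: the trained meta-learner combines complementary base predictions and beats the best individual one.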


Journal ArticleDOI
TL;DR: Deep reinforcement learning (DRL)-based strategies are introduced and neural networks-based approaches are utilized to efficiently realize the DRL strategies for system procedures such as spectrum access and spectrum sensing.
Abstract: The explosive growth of wireless devices motivates the development of the internet-of-things (IoT), which is capable of interconnecting massive and diverse "things" via wireless communications. This is also called massive machine type communications (mMTC) as a part of the ongoing fifth generation (5G) mobile networks. It is envisioned that more sophisticated devices would be connected to form a hyperconnected world with the aid of the sixth generation (6G) mobile networks. To enable wireless access for such IoT networks, artificial intelligence (AI) can play an important role. In this article, the frameworks of centralized and distributed AI-enabled IoT networks are introduced. Key technical challenges, including random access and spectrum sharing (spectrum access and spectrum sensing), are analyzed for different network architectures. Deep reinforcement learning (DRL)-based strategies are introduced and neural networks-based approaches are utilized to efficiently realize the DRL strategies for system procedures such as spectrum access and spectrum sensing. Different types of neural networks that could be used in IoT networks to conduct DRL are also discussed.

100 citations
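As a stand-in for the DRL strategies the abstract describes, the following sketch uses tabular Q-learning on a toy two-channel spectrum-access environment of our own construction (in the DRL setting a neural network would replace the Q-table):

```python
import numpy as np

# Toy environment (an assumption, not from the article): two channels;
# the primary user occupies channel (t % 2) at step t, so the other
# channel is always free. State = the channel observed occupied last step.
rng = np.random.default_rng(1)
n_channels, eps, alpha, gamma = 2, 0.1, 0.2, 0.9
Q = np.zeros((n_channels, n_channels))   # Q[state, action]

state = 0
for t in range(5000):
    # epsilon-greedy channel selection
    if rng.random() < eps:
        action = int(rng.integers(n_channels))
    else:
        action = int(Q[state].argmax())
    occupied = t % 2
    reward = 1.0 if action != occupied else 0.0   # success if channel free
    next_state = occupied
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max()
                                 - Q[state, action])
    state = next_state

# The occupied channel alternates, so the greedy policy should pick the
# channel that was occupied last step (it will be free next step).
policy = Q.argmax(axis=1)
print(policy)
```

The learned policy illustrates the point of DRL-based spectrum access: the agent discovers the occupancy pattern purely from rewards, without a channel model.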


Journal ArticleDOI
TL;DR: This work provides a comprehensive survey on the existing works that incorporate differential privacy with machine learning, so-called differentially private machine learning, and categorizes them into two broad categories as per the differential privacy mechanism used: the Laplace/Gaussian/exponential mechanism and the output/objective perturbation mechanism.
Abstract: Hitherto, most existing machine learning models are known to implicitly memorize many details of their training datasets during training and to inadvertently reveal privacy during model prediction. It is paramount to improve non-private machine learning methods for non-experts in privacy, especially those working in information-critical domains. Throughout this paper, we give a comprehensive review of privacy preservation in machine learning under the unified framework of differential privacy. We provide an intuitive handle for the operator to gracefully balance utility and privacy, through which more users can benefit from machine learning models built on their sensitive data. Finally, we discuss major challenges and promising research directions in the field of differentially private machine learning.

57 citations
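The Laplace mechanism named in the summary can be illustrated on a counting query; the dataset, predicate, and epsilon below are illustrative choices of ours:

```python
import numpy as np

# Laplace mechanism for a counting query (sensitivity 1): adding noise
# with scale sensitivity/epsilon yields epsilon-differential privacy.
def laplace_count(data, predicate, epsilon, rng):
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

rng = np.random.default_rng(0)
ages = [23, 45, 31, 62, 18, 57, 40]
noisy = laplace_count(ages, lambda age: age > 30, epsilon=0.5, rng=rng)
# The true count is 5; the noisy answer fluctuates around it with scale 2.
print(round(noisy, 2))
```

Smaller epsilon means stronger privacy but larger noise, which is exactly the utility/privacy trade-off the review discusses.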


Journal ArticleDOI
TL;DR: A DNN architecture is proposed by incorporating an embedding layer to project different types of raw data to a latent space and utilize a regression or classification function to predict the mobile access pattern, which outperforms the best traditional machine learning algorithm significantly.
Abstract: Wireless big data contain valuable information on users' behaviors and preferences, which can drive the design and optimization for wireless systems. The fundamental issue is how to mine mobile intelligence and further incorporate it into wireless systems. To this end, this article discusses two challenges on big data based wireless system design and optimization, and proposes a unified framework to tackle them with the help of Deep Neural Networks (DNNs) and online learning techniques. In particular, we propose a DNN architecture by incorporating an embedding layer to project different types of raw data to a latent space and utilize a regression or classification function to predict the mobile access pattern. It significantly outperforms the best traditional machine learning algorithm (76% vs. 63%). Moreover, combining the proposed DNN architecture with online learning techniques, we show two cases on how to apply the mobile intelligence for wireless video applications, including video adaption and video pre-fetching. In the former case, we utilize the proposed DNN method to predict the dynamics of user count within the coverage of base stations, and adaptively adjust the bitrate for video streaming to improve the video watching experience. In the latter one, we utilize the proposed method to predict the user trajectory, i.e., the associated base stations, and conduct content prefetching to reduce the access latency. Evaluating the performance with a real wireless dataset, we show that the perceived video QoE and cache hit ratio are greatly improved (by 0.7 dB and 25%, respectively).

38 citations
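The embedding-layer idea, projecting categorical raw fields into a latent space before a regression or classification head, can be sketched as below. The field names, dimensions, and (untrained) weights are our assumptions, not the paper's architecture:

```python
import numpy as np

# Hypothetical raw fields: base-station id and hour of day. Each gets a
# learnable lookup table mapping it into a shared latent space; the
# concatenated vectors feed a prediction head (untrained random weights).
rng = np.random.default_rng(0)
n_stations, n_hours, dim = 50, 24, 8
E_station = rng.normal(0, 0.1, (n_stations, dim))   # embedding tables
E_hour = rng.normal(0, 0.1, (n_hours, dim))
W = rng.normal(0, 0.1, (2 * dim, 1))                # regression head
b = 0.0

def predict_access(station_id, hour):
    z = np.concatenate([E_station[station_id], E_hour[hour]])  # latent vector
    return float(z @ W + b)   # e.g. a predicted access-pattern score

y = predict_access(station_id=7, hour=18)
print(type(y).__name__)
```

In training, the embedding tables are updated by backpropagation alongside the head, so similar stations and hours end up with nearby latent vectors.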


Journal ArticleDOI
TL;DR: A new genetic programming-based feature learning approach is developed to automatically select and combine five existing well-developed descriptors to extract high-level features for image classification.
Abstract: Being able to extract effective features from different images is very important for image classification, but it is challenging due to high variations across images. By integrating existing well-developed feature descriptors into learning algorithms, it is possible to automatically extract informative high-level features for image classification. As a learning algorithm with a flexible representation and good global search ability, genetic programming can achieve this. In this paper, a new genetic programming-based feature learning approach is developed to automatically select and combine five existing well-developed descriptors to extract high-level features for image classification. The new approach can automatically learn various numbers of global and/or local features from different types of images. The results show that the new approach achieves significantly better classification performance in almost all the comparisons on eight data sets of varying difficulty. Further analysis reveals the effectiveness of the new approach to finding the most effective feature descriptors or combinations of them to extract discriminative features for different classification tasks.

34 citations


Journal ArticleDOI
TL;DR: This survey aims to assemble and summarize the latest developments and insights in transforming computational intelligence approaches, such as machine learning, evolutionary computation, soft computing, and big data analytics, into practical applications for fighting COVID-19.
Abstract: Computational intelligence has been used in many applications in the fields of health sciences and epidemiology. In particular, owing to the sudden and massive spread of COVID-19, many researchers around the globe have devoted intensive efforts into the development of computational intelligence methods and systems for combating the pandemic. Although there have been more than 200,000 scholarly articles on COVID-19, SARS-CoV-2, and other related coronaviruses, these articles did not specifically address in-depth the key issues for applying computational intelligence to combat COVID-19. Hence, it would be exhausting to filter and summarize those studies conducted in the field of computational intelligence from such a large number of articles. Such inconvenience has hindered the development of effective computational intelligence technologies for fighting COVID-19. To fill this gap, this survey focuses on categorizing and reviewing the current progress of computational intelligence for fighting this serious disease. In this survey, we aim to assemble and summarize the latest developments and insights in transforming computational intelligence approaches, such as machine learning, evolutionary computation, soft computing, and big data analytics, into practical applications for fighting COVID-19. We also explore some potential research issues on computational intelligence for defeating the pandemic.

30 citations


Journal ArticleDOI
TL;DR: GCOP is established as a new standard to define different search algorithms within one unified model and a taxonomy is defined to distinguish several widely used terminologies in automated algorithm design, namely automated algorithm composition, configuration and selection.
Abstract: This paper defines a new combinatorial optimization problem, namely the General Combinatorial Optimization Problem (GCOP), whose decision variables are a set of parametric algorithmic components, i.e. algorithm design decisions. The solutions of GCOP, i.e. compositions of algorithmic components, thus represent different generic search algorithms. The objective of GCOP is to find the optimal algorithmic compositions for solving the given optimization problems. Solving the GCOP is thus equivalent to automatically designing the best algorithms for optimization problems. Despite recent advances, the evolutionary computation and optimization research communities are yet to embrace formal standards that underpin automated algorithm design. In this position paper, we establish GCOP as a new standard to define different search algorithms within one unified model. We demonstrate the new GCOP model to standardize various search algorithms as well as selection hyper-heuristics. A taxonomy is defined to distinguish several widely used terminologies in automated algorithm design, namely automated algorithm composition, configuration and selection. We would like to encourage a new line of exciting research directions addressing several challenging research issues including algorithm generality, algorithm reusability, and automated algorithm design.

30 citations
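A minimal illustration of the GCOP view, treating a search algorithm as a composition of algorithmic components, might look like this. The components, the OneMax test problem, and the acceptance rule are our illustrative choices, not the paper's formal model:

```python
import random

# Two simple perturbation components (algorithm design decisions).
def flip_one(s, rng):
    i = rng.randrange(len(s))
    s = s[:]
    s[i] ^= 1
    return s

def flip_two(s, rng):
    return flip_one(flip_one(s, rng), rng)

# A "solution" of GCOP is a composition of components; running it yields
# a concrete search algorithm. Here: cycle through the components inside
# a local search on OneMax (maximise the number of 1-bits).
def run_composition(components, n=30, steps=500, seed=0):
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(n)]
    for step in range(steps):
        op = components[step % len(components)]
        cand = op(best, rng)
        if sum(cand) >= sum(best):     # accept non-worsening moves
            best = cand
    return sum(best)

# Two candidate algorithm designs; GCOP searches over such compositions.
score_a = run_composition([flip_one])
score_b = run_composition([flip_one, flip_two])
print(score_a, score_b)
```

Optimizing over the space of such compositions, rather than over problem solutions directly, is what the paper frames as automated algorithm design.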


Journal ArticleDOI
TL;DR: This study is the first to use the green protocols of LoRa and ZigBee to establish an ad hoc network and solve the problem of energy efficiency, and proposes a unique initialization mechanism that automatically schedules node clustering and throughput optimization.
Abstract: With the exponential expansion of the number of Internet of Things (IoT) devices, many state-of-the-art communication technologies are being developed to use the lower-power but extensively deployed devices. Due to the limits of pure channel characteristics, most protocols cannot allow an IoT network to be simultaneously large-scale and energy-efficient, especially in hybrid architectures. However, different from the original intention to pursue faster and broader connectivity, the daily operation of IoT devices only requires stable and low-cost links. Thus, our design goal is to develop a comprehensive solution for intelligent green IoT networking to satisfy the modern requirements through a data-driven mechanism, so that the IoT networks use computational intelligence to realize self-regulation of composition, size minimization, and throughput optimization. To the best of our knowledge, this study is the first to use the green protocols of LoRa and ZigBee to establish an ad hoc network and solve the problem of energy efficiency. First, we propose a unique initialization mechanism that automatically schedules node clustering and throughput optimization. Then, each device executes a procedure to manage its own energy consumption to optimize switching in and out of sleep mode, which relies on AI-controlled service usage habit prediction to learn the future usage trend. Finally, our new theory is corroborated through real-world deployment and numerical comparisons. We believe that our new type of network organization and control system could improve the performance of all green-oriented IoT services and even change human lifestyle habits.

29 citations


Journal ArticleDOI
TL;DR: This method analyzes information on the bibliographic details of published journal papers, which includes title, authors, author address, journals and citations, extracted from the Science and Social Science Citation Indices in the Web of Science (WoS) database for the last 20 years.
Abstract: Fuzzy Sets and Systems is an area of computational intelligence, pioneered by Lotfi Zadeh over 50 years ago in a seminal paper in Information and Control. Fuzzy Sets (FSs) deal with uncertainty in our knowledge of a particular situation. Research and applications in FSs have grown steadily over 50 years. More recently, we have seen a growth in Type-2 Fuzzy Set (T2 FS) related papers, where T2 FSs are utilized to handle uncertainty in real-world problems. In this paper, we have used bibliometric methods to obtain a broad overview of the area of T2 FSs. This method analyzes information on the bibliographic details of published journal papers, which includes title, authors, author address, journals and citations, extracted from the Science and Social Science Citation Indices in the Web of Science (WoS) database for the last 20 years (1997-2017). We have compared the growth of publications in the field of FSs, and its subset T2 FSs, identified highly cited papers in T2 FSs, highly cited authors, key institutions, and main countries with researchers involved in T2 FS related research.

26 citations


Journal ArticleDOI
TL;DR: This work designs different attention-based multi-task architectures that concurrently regress/classify both depression level and emotion intensity using text data, and shows that substantial performance improvements can be achieved when compared to emotion-unaware single-task and multitask approaches.
Abstract: Depression is considered a serious medical condition and a large number of people around the world are suffering from it. Within this context, many methods have been proposed to estimate the degree of depression based on different features and modalities specific to depression. Supported by medical studies that show how depression is a disorder of impaired emotion regulation, we propose a different approach, which relies on the rationale that the estimation of depression level can benefit from the concurrent learning of emotion intensity. To test this hypothesis, we design different attention-based multi-task architectures that concurrently regress/classify both depression level and emotion intensity using text data. Experiments based on two benchmark datasets, namely, the Distress Analysis Interview Corpus - a Wizard of Oz (DAIC-WOZ), and the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) show that substantial performance improvements can be achieved when compared to emotion-unaware single-task and multi-task approaches.

25 citations


Journal ArticleDOI
TL;DR: This article investigates existing ant colony optimization algorithms specifically designed for combinatorial optimization problems with a dynamic environment, classified into two frameworks: evaporation-based and population-based.
Abstract: Ant colony optimization is a swarm intelligence metaheuristic inspired by the foraging behavior of some ant species. Ant colony optimization has been successfully applied to challenging optimization problems. This article investigates existing ant colony optimization algorithms specifically designed for combinatorial optimization problems with a dynamic environment. The investigated algorithms are classified into two frameworks: evaporation-based and population-based. A case study of using these algorithms to solve the dynamic traveling salesperson problem is described. Experiments are systematically conducted using a proposed dynamic benchmark framework to analyze the effect of important ant colony optimization features on numerous test cases. Different performance measures are used to evaluate the adaptation capabilities of the investigated algorithms, indicating which features are the most important when designing ant colony optimization algorithms in dynamic environments.
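The evaporation-based framework the article classifies can be sketched with a minimal pheromone update. The matrix size, rates, and tour are illustrative values of ours; the key point is that evaporation lets stale information fade after an environment change:

```python
import numpy as np

# Evaporation-based update: trails decay every iteration (rate rho), so
# pheromone deposited before a change in the environment gradually fades
# and the colony can re-adapt to the new optimum.
def evaporate_and_deposit(tau, best_tour, rho=0.2, q=1.0):
    tau *= (1.0 - rho)                      # global evaporation
    for i, j in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[i, j] += q                      # reinforce best-tour edges
        tau[j, i] += q
    return tau

n = 5
tau = np.ones((n, n))                       # initial pheromone matrix
tour = [0, 1, 2, 3, 4]                      # current best tour
for _ in range(10):
    tau = evaporate_and_deposit(tau, tour)

# Edges on the reinforced tour carry much more pheromone than the rest.
print(tau[0, 1] > tau[0, 2])   # → True
```

In the population-based framework, by contrast, pheromone from stored solutions is removed explicitly rather than decayed, which is the distinction the survey's experiments probe.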

Journal ArticleDOI
TL;DR: A novel systematic platform for prediction of the future number of confirmed cases of COVID-19 is proposed, based on several factors such as transmission rate, temperature, and humidity, and derives systematically a set of appropriate features for training Recurrent Neural Networks (RNN).
Abstract: The number of confirmed cases of COVID-19 has been ever increasing worldwide since its outbreak in Wuhan, China. As such, many researchers have sought to predict the dynamics of the virus spread in different parts of the globe. In this paper, a novel systematic platform for prediction of the future number of confirmed cases of COVID-19 is proposed, based on several factors such as transmission rate, temperature, and humidity. The proposed strategy systematically derives a set of appropriate features for training Recurrent Neural Networks (RNN). To that end, the number of confirmed cases (CC) of COVID-19 in three states of India (Maharashtra, Tamil Nadu and Gujarat) is taken as a case study. It has been noted that the stationary and non-stationary parts of the features improved the prediction of the stationary and non-stationary trends of the number of confirmed cases, respectively. The new platform has general application and can be used for pandemic time series forecasting.
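One simple way to obtain the stationary and non-stationary feature parts the abstract mentions is a trend/residual decomposition; this construction (moving-average trend, synthetic series) is our illustration, not the paper's feature-derivation procedure:

```python
import numpy as np

# Split a case-count series into a slow non-stationary trend (moving
# average) and a roughly stationary residual; both can then serve as
# RNN input features.
def trend_and_residual(cc, window=7):
    kernel = np.ones(window) / window
    trend = np.convolve(cc, kernel, mode="same")   # non-stationary part
    residual = cc - trend                          # stationary part
    return trend, residual

t = np.arange(60, dtype=float)
cc = np.exp(0.05 * t) + 3 * np.sin(t)              # synthetic case counts
trend, residual = trend_and_residual(cc)
features = np.stack([trend, residual], axis=1)     # (T, 2) feature matrix
print(features.shape)   # → (60, 2)
```

By construction the two parts sum back to the original series, so no information is lost; the model simply sees the two trends separately.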

Journal ArticleDOI
TL;DR: A novel way to map a high-dimensional Pareto-optimal front (points or data-set) into two-and-half dimensions by revealing functional features of points that may be of great interest to DMs is proposed.
Abstract: To represent a many-objective Pareto-optimal front having four or more dimensions of the objective space, a large number of points are necessary. However, choosing a single preferred point from a large set is problematic and time-consuming, as it places a large cognitive burden on the decision-makers (DMs). Hence, many-objective optimization and decision-making researchers and practitioners have been interested in effective visualization methods to filter down a few critical points for further analysis. While some ideas are borrowed from the data analytics and visualization literature, they are generic and do not exploit the functionalities in which DMs are usually interested. In this paper, we outline some such functionalities: a point's trade-off among conflicting objectives in its neighborhood, closeness of a point to the boundary or core of the high-dimensional Pareto set, specific desired geometric properties of points, spatial distance of one point to another, closeness of a point to constraint boundary, and others, in developing a new visualization technique. We propose a novel way to map a high-dimensional Pareto-optimal front (points or data-set) into two-and-half dimensions by revealing functional features of points that may be of great interest to DMs. As a proof-of-principle demonstration, we apply our proposed palette visualization (PaletteViz) technique to a number of different structures of Pareto-optimal data-sets and discuss how the proposed technique is different from a few popularly used visualization techniques.

Journal ArticleDOI
TL;DR: In this paper, a Bayesian updating approach for estimating epidemiological parameters using observable information to assess the impacts of different intervention strategies is proposed, and a data assimilation framework is developed to estimate these parameters including constructing an observation function and developing a bayesian updating scheme.
Abstract: Epidemic models play a key role in understanding and responding to the emerging COVID-19 pandemic. Widely used compartmental models are static and are of limited use to evaluate intervention strategies for combating the pandemic. Applying the technology of data assimilation, we propose a Bayesian updating approach for estimating epidemiological parameters using observable information to assess the impacts of different intervention strategies. We adopt a concise renewal model and propose new parameters by disentangling the reduction of the instantaneous reproduction number Rt into mitigation and suppression factors to quantify intervention impacts at a finer granularity. A data assimilation framework is developed to estimate these parameters, including constructing an observation function and developing a Bayesian updating scheme. A statistical analysis framework is built to quantify the impacts of intervention strategies by monitoring the evolution of the estimated parameters. We reveal the intervention impacts in European countries and Wuhan and the resurgence risk in the United States.
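The renewal model underlying the approach links incidence I_t to past incidence through the generation-interval distribution w, as I_t = R_t · Σ_s w_s I_{t−s}. The sketch below (with an illustrative w and synthetic incidence of our own) shows how R_t can be read off by inverting this relation; the paper's Bayesian updating refines such estimates from noisy observations:

```python
import numpy as np

w = np.array([0.2, 0.5, 0.2, 0.1])        # generation interval (sums to 1)

def estimate_rt(incidence):
    rt = []
    for t in range(len(w), len(incidence)):
        pressure = sum(w[s] * incidence[t - 1 - s] for s in range(len(w)))
        rt.append(incidence[t] / pressure)  # invert the renewal equation
    return np.array(rt)

# Simulate an epidemic with a known constant R = 1.5, then recover it.
I = [10.0, 12.0, 15.0, 18.0]
for t in range(4, 40):
    I.append(1.5 * sum(w[s] * I[t - 1 - s] for s in range(len(w))))
rt_hat = estimate_rt(np.array(I))
print(float(rt_hat.round(2)[-1]))   # → 1.5
```

Monitoring how such R_t estimates evolve over time is what lets the framework quantify mitigation and suppression effects of interventions.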

Journal ArticleDOI
TL;DR: The model of governance and ethical review, incorporated and defined within MIDAS, also addresses the complex privacy and ethical issues that the developing pandemic has highlighted, allowing oversight and scrutiny of more and richer data sources by users of the system.
Abstract: With the rapid spread of the COVID-19 pandemic, the novel Meaningful Integration of Data Analytics and Services (MIDAS) platform quickly demonstrates its value, relevance and transferability to this new global crisis. The MIDAS platform enables the connection of a large number of isolated heterogeneous data sources, and combines rich datasets including open and social data, ingesting and preparing these for the application of analytics, monitoring and research tools. The platform will assist public health authorities in: (i) better understanding the disease and its impact; (ii) monitoring the different aspects of the evolution of the pandemic across a diverse range of groups; (iii) contributing to improved resilience against the impacts of this global crisis; and (iv) enhancing preparedness for future public health emergencies. The model of governance and ethical review, incorporated and defined within MIDAS, also addresses the complex privacy and ethical issues that the developing pandemic has highlighted, allowing oversight and scrutiny of more and richer data sources by users of the system.

Journal ArticleDOI
TL;DR: An intelligent optimization method to develop diversified TCM prevention programs for community residents and demonstrates the computational efficiency of the proposed method, and reports the application results in TCM-based prevention of COVID-19 in 12 communities in Zhejiang province, China, during the peak of the pandemic.
Abstract: Traditional Chinese medicine (TCM) has played an important role in the prevention and control of the novel coronavirus pneumonia (COVID-19), and community prevention has become the most essential part in reducing the risk of spread and protecting public health. However, most communities use a unified TCM prevention program for all residents, which violates the "treatment based on syndrome differentiation" principle of TCM and limits the effectiveness of prevention. In this paper, we propose an intelligent optimization method to develop diversified TCM prevention programs for community residents. First, we use a fuzzy clustering method to divide the population based on both modern medicine and TCM health characteristics; we then use an interactive optimization method, in which TCM experts develop different TCM prevention programs for different clusters, and a heuristic algorithm is used to optimize the programs under the resource constraints. We demonstrate the computational efficiency of the proposed method, and report the application results of the method in TCM-based prevention of COVID-19 in 12 communities in Zhejiang province, China, during the peak of the pandemic.
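The fuzzy clustering step used to divide the population can be illustrated with a minimal fuzzy c-means implementation. The synthetic "resident" features (age, symptom score) and parameter values below are stand-ins of ours, not the paper's data or exact method:

```python
import numpy as np

# Minimal fuzzy c-means: soft membership matrix U, weighted centers,
# alternating updates of centers and memberships.
def fuzzy_cmeans(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))      # soft memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inv = (d + 1e-12) ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)    # membership update
    return U, centers

# Two well-separated synthetic groups of "residents".
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([30, 1], 0.5, (20, 2)),
               rng.normal([70, 5], 0.5, (20, 2))])
U, centers = fuzzy_cmeans(X)
labels = U.argmax(axis=1)
print(len(set(labels[:20].tolist())), len(set(labels[20:].tolist())))
```

Unlike hard clustering, each resident receives a degree of membership in every cluster, which fits the graded "syndrome differentiation" view of TCM.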

Journal ArticleDOI
TL;DR: Two measures are designed to numerically evaluate the robustness of structurally balanced networks and a multiobjective evolutionary algorithm, MOEA/D-RSB, is developed to successfully solve this problem.
Abstract: The aim of network structural balance is to find proper partitions of nodes that guarantee equilibrium in the system, which has attracted considerable attention in recent decades. Most existing studies focus on reducing imbalanced components in complex networks without considering the tolerance of these balanced networks against attacks and failures. However, as indicated by some recent studies, the robustness of structurally balanced networks is also important in real applications, which should be emphasized in balancing processes. Currently, it remains challenging to define suitable robustness measures for signed networks, and few performance enhancement strategies have been designed. In this paper, two measures are designed to numerically evaluate the robustness of structurally balanced networks. Furthermore, the simultaneous enhancement of these two measures is modeled as a multiobjective optimization problem, and a multiobjective evolutionary algorithm, MOEA/D-RSB, is developed to successfully solve this problem. Experiments on synthetic and real-world networks demonstrate the good performance of MOEA/D-RSB in finding robust balanced candidates. In addition, the features of partitions with different robustness performances are analyzed to show the impact of different balancing strategies on network robustness. The obtained results are valuable in dealing with some problems arising in social and natural dynamics.

Journal ArticleDOI
TL;DR: This work presents a parameter setting mechanism for a rule-based evolutionary machine learning system that is capable of finding the adequate parameter value for a wide variety of synthetic classification problems with binary attributes and with/without added noise.
Abstract: The success of any machine learning technique depends on the correct setting of its parameters and, when it comes to large-scale datasets, hand-tuning these parameters becomes impractical. However, very large datasets can be pre-processed in order to distil information that could help in appropriately setting various system parameters. In turn, this makes sophisticated machine learning methods easier to use for end-users. Thus, by modelling the performance of machine learning algorithms as a function of the structure inherent in very large datasets one could, in principle, detect "hotspots" in the parameter space and thus auto-tune machine learning algorithms for better dataset-specific performance. In this work we present a parameter setting mechanism for a rule-based evolutionary machine learning system that is capable of finding the adequate parameter value for a wide variety of synthetic classification problems with binary attributes and with/without added noise. Moreover, in the final validation stage our automated mechanism is able to reduce the computational time of preliminary experiments by up to 71% for a challenging real-world bioinformatics dataset.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a system named EvoMSA, which is a classifier based on genetic programming that works by combining the output of different text classifiers to produce the final prediction.
Abstract: Sentiment analysis (SA) is a task related to understanding people's feelings in written text; the starting point would be to identify the polarity level (positive, neutral or negative) of a given text, moving on to identify emotions or whether a text is humorous or not. This task has been the subject of several research competitions in a number of languages, e.g., English, Spanish, and Arabic, among others. In this contribution, we propose an SA system, namely EvoMSA, that unifies our participating systems in various SA competitions, making it domain-independent and multilingual by processing text using only language-independent techniques. EvoMSA is a classifier, based on Genetic Programming, that works by combining the output of different text classifiers to produce the final prediction. We analyzed EvoMSA on different SA competitions to provide a global overview of its performance. The results indicated that EvoMSA is competitive, obtaining top rankings in several SA competitions. Furthermore, we performed an analysis of EvoMSA's components to measure their contribution to the performance; the aim was to facilitate a practitioner or newcomer to implement a competitive SA classifier. Finally, it is worth mentioning that EvoMSA is available as open-source software.

Journal ArticleDOI
TL;DR: This paper provides a consistent survey of recent popular machine learning methods that address off-line mode dataset shift problems, focusing on the main characteristics of unlabeled data shifts.
Abstract: Dataset shifts are present in many real-world applications, since data generation is not always fully controlled and is subject to noise, degradation, and other natural variations. In machine learning, the lack of regularity in data can degrade performance by breaching error constraints. Different methods have been proposed to solve shifting problems; however, shifts in off-line learning mode are not as well examined. Off-line shifts consist of problems where drifts occur only with unlabeled data. Most methods aimed at dataset shifts consider that new labeled data can be received after training, which is not always the case. Here, a review on dataset shift characteristics and causes is presented as a tool for the analysis and implementation of machine learning methods targeting off-line mode dataset shift problems. In this context, a relationship between statistical learning risk functions and error degradation due to variation in data distribution was straightforwardly derived. Moreover, this paper provides a consistent survey of recent popular machine learning methods that address off-line mode dataset shift problems, focusing on the main characteristics of unlabeled data shifts.
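A concrete off-line shift check consistent with the survey's setting, where only unlabeled production data is available, is to compare the incoming feature distribution to the training one with a two-sample Kolmogorov-Smirnov statistic. The detector choice and threshold values below are our illustration, not a method from the survey:

```python
import numpy as np

# Two-sample KS statistic: maximum gap between the empirical CDFs,
# evaluated over all pooled sample points.
def ks_statistic(a, b):
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    ecdf_a = np.searchsorted(a, grid, side="right") / len(a)
    ecdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(ecdf_a - ecdf_b).max())

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 1000)      # labeled training feature
same = rng.normal(0.0, 1.0, 1000)       # unlabeled data, no shift
shifted = rng.normal(0.8, 1.0, 1000)    # unlabeled data, mean-shifted

print(ks_statistic(train, same) < 0.1,
      ks_statistic(train, shifted) > 0.2)   # → True True
```

Because the test needs no labels, it fits the off-line mode the survey emphasizes: drift is flagged on unlabeled data alone, before error degradation is observable.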

Journal ArticleDOI
An-Jen Liu1, Ti-Rong Wu1, I-Chen Wu1, Hung Guei1, Ting-Han Wei1 
TL;DR: To the best of the authors' knowledge, this result is state-of-the-art in terms of the range of strengths in Elo rating while maintaining a controllable relationship between the strength and a strength index.
Abstract: This paper proposes an approach to strength adjustment and assessment for Monte-Carlo tree search based game-playing programs. We modify an existing softmax policy with a strength index to choose moves. The most important modification is a mechanism which filters low-quality moves by excluding those that have a lower simulation count than a pre-defined threshold ratio of the maximum simulation count. Through theoretical analysis, we show that the adjusted policy is guaranteed to choose moves exceeding a lower bound in strength by using a threshold ratio. Experimental results show that the strength index is highly correlated to the empirical strength. With an index value within ±2, we can cover a strength range of about 800 Elo rating points. The strength adjustment and assessment methods were also tested in real-world scenarios with human players, ranging from professionals (strongest) to kyu rank amateurs (weakest). For amateur levels, we tested our mechanism on two popular Go online platforms - Fox Weiqi and Tygem. The result shows that our method can adjust program strength to different ranks stably. In terms of strength assessment, we proposed a new dynamic strength adjustment method, then used it to evaluate human professionals, reliably predicting their playing strengths within 15 games. Lastly, we collected survey responses asking players about strength perception, entertainment, and general comments for different aspects of analysis. To the best of our knowledge, this result is state-of-the-art in terms of the range of strengths in Elo rating while maintaining a controllable relationship between the strength and a strength index.
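The mechanism described, a softmax-style move policy over MCTS simulation counts, steered by a strength index and filtered by a threshold ratio, can be sketched as follows. The exact parameterisation in the paper may differ; the counts and parameter values here are hypothetical:

```python
import numpy as np

# Moves are drawn with probability proportional to a power of their
# simulation counts (the strength index z steers the sharpness), after
# excluding moves whose count is below threshold_ratio * max count.
def adjusted_policy(counts, z, threshold_ratio=0.1):
    counts = np.asarray(counts, dtype=float)
    mask = counts >= threshold_ratio * counts.max()   # filter weak moves
    logits = np.where(mask, z * np.log(counts + 1e-12), -np.inf)
    probs = np.exp(logits - logits[mask].max())
    probs[~mask] = 0.0
    return probs / probs.sum()

counts = [500, 300, 120, 30]               # hypothetical simulation counts
strong = adjusted_policy(counts, z=4.0)    # large z: near-greedy, strong
weak = adjusted_policy(counts, z=0.5)      # small z: flatter, weaker

print(strong.argmax() == 0, weak[1] > strong[1])   # → True True
```

The filter is what provides the strength lower bound: however flat the distribution gets at small z, moves with too few simulations (here the count of 30) can never be selected.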

Journal ArticleDOI
TL;DR: This research proposes xTML, a novel unified heterogeneous transfer metric learning framework, to improve the distance estimation of the domains of interest when limited label information, complementary with extensive unlabeled data, is provisioned for model training.
Abstract: Owing to the continual growth of multimodal data (or feature spaces), we have seen a rising interest in multimedia applications (e.g., object classification and searching) over these heterogeneous data. However, the accuracy of classification and searching tasks is highly dependent on the distance estimation between data samples, and simple Euclidean (EU) distance has been proven to be inadequate. Previous research has focused on learning a robust distance metric to quantify the relationships among data samples. In this context, existing distance metric learning (DML) algorithms mainly leverage label information in the target domain for model training and may fail when the label information is scarce. As an improvement, transfer metric learning (TML) approaches are proposed to leverage information from other related domains. However, current TML algorithms assume that different domains share the same representation; thus, they are not applicable in heterogeneous settings where the data representations of different domains vary. In this research, we propose xTML, a novel unified heterogeneous transfer metric learning framework, to improve the distance estimation of the domains of interest (i.e., the target domains in classification and searching tasks) when limited label information, complementary with extensive unlabeled data, is provisioned for model training. We further illustrate how our proposed framework can be applied to a selected list of multimedia applications, including opinion mining, deception detection and online product searching.