
Showing papers by "The Chinese University of Hong Kong" published in 2014


Book ChapterDOI
06 Sep 2014
TL;DR: This work proposes a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between the low/high-resolution images and shows that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network.
Abstract: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.
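
The abstract specifies the network only at a high level; the sketch below is a minimal PyTorch rendering of the described three-layer mapping (patch extraction, non-linear mapping, reconstruction). The 9-1-5 kernel sizes and 64/32 filter counts follow the commonly cited SRCNN configuration and should be read as assumptions, not quotations from the paper.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer end-to-end LR->HR mapping, per the abstract above."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),  # patch extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):      # x: bicubic-upscaled low-resolution image
        return self.body(x)

# All three layers are optimized jointly with a pixel-wise MSE loss,
# unlike sparse-coding pipelines that tune each stage separately.
model, loss_fn = SRCNN(), nn.MSELoss()
```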

4,445 citations


Journal ArticleDOI
09 Oct 2014-Nature
TL;DR: The results suggest that, in addition to mitigating primary particulate emissions, reducing the emissions of secondary aerosol precursors from fossil fuel combustion and biomass burning is likely to be important for controlling China’s PM2.5 levels and for reducing the environmental, economic and health impacts resulting from particulate pollution.
Abstract: Rapid industrialization and urbanization in developing countries has led to an increase in air pollution, along a similar trajectory to that previously experienced by the developed nations. In China, particulate pollution is a serious environmental problem that is influencing air quality, regional and global climates, and human health. In response to the extremely severe and persistent haze pollution experienced by about 800 million people during the first quarter of 2013 (refs 4, 5), the Chinese State Council announced its aim to reduce concentrations of PM2.5 (particulate matter with an aerodynamic diameter less than 2.5 micrometres) by up to 25 per cent relative to 2012 levels by 2017 (ref. 6). Such efforts however require elucidation of the factors governing the abundance and composition of PM2.5, which remain poorly constrained in China. Here we combine a comprehensive set of novel and state-of-the-art offline analytical approaches and statistical techniques to investigate the chemical nature and sources of particulate matter at urban locations in Beijing, Shanghai, Guangzhou and Xi'an during January 2013. We find that the severe haze pollution event was driven to a large extent by secondary aerosol formation, which contributed 30-77 per cent and 44-71 per cent (average for all four cities) of PM2.5 and of organic aerosol, respectively. On average, the contribution of secondary organic aerosol (SOA) and secondary inorganic aerosol (SIA) are found to be of similar importance (SOA/SIA ratios range from 0.6 to 1.4). Our results suggest that, in addition to mitigating primary particulate emissions, reducing the emissions of secondary aerosol precursors from, for example, fossil fuel combustion and biomass burning is likely to be important for controlling China's PM2.5 levels and for reducing the environmental, economic and health impacts resulting from particulate pollution.

3,372 citations


Journal ArticleDOI
TL;DR: The AWGS consensus report is believed to promote more Asian sarcopenia research, and most important of all, to focus on sarcopenia intervention studies and the implementation of sarcopenia in clinical practice to improve health care outcomes of older people in the communities and the healthcare settings in Asia.

2,976 citations


Posted Content
TL;DR: A novel deep learning framework for attribute prediction in the wild is proposed, which cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently.
Abstract: Predicting face attributes in the wild is challenging due to complex face variations. We propose a novel deep learning framework for attribute prediction in the wild. It cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags, but pre-trained differently. LNet is pre-trained by massive general object categories for face localization, while ANet is pre-trained by massive face identities for attribute prediction. This framework not only outperforms the state-of-the-art with a large margin, but also reveals valuable facts on learning face representation. (1) It shows how the performances of face localization (LNet) and attribute prediction (ANet) can be improved by different pre-training strategies. (2) It reveals that although the filters of LNet are fine-tuned only with image-level attribute tags, their response maps over entire images have strong indication of face locations. This fact enables training LNet for face localization with only image-level annotations, but without face bounding boxes or landmarks, which are required by all attribute recognition works. (3) It also demonstrates that the high-level hidden neurons of ANet automatically discover semantic concepts after pre-training with massive face identities, and such concepts are significantly enriched after fine-tuning with attribute tags. Each attribute can be well explained with a sparse linear combination of these concepts.
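
Point (2) above is the most actionable observation: image-level attribute supervision alone yields response maps that localize faces. Below is a hedged sketch of that idea, with an off-the-shelf ResNet standing in for LNet; the backbone, threshold, and input are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Truncate a classification CNN before its pooling/classifier layers so it
# emits spatial response maps (an illustrative stand-in for LNet).
backbone = nn.Sequential(*list(models.resnet18(weights=None).children())[:-2])
backbone.eval()

img = torch.randn(1, 3, 224, 224)            # placeholder input image
with torch.no_grad():
    feats = backbone(img)                    # (1, 512, 7, 7) response maps
heat = feats.mean(dim=1)[0]                  # channel-averaged heat map
mask = heat > heat.mean() + heat.std()       # crude threshold, no box labels
ys, xs = torch.where(mask)
box = (xs.min().item(), ys.min().item(), xs.max().item(), ys.max().item())
```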

2,822 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: A novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter is proposed and significantly outperforms state-of-the-art methods on this dataset.
Abstract: Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13,164 images of 1,360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.

2,417 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: It is argued that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set.
Abstract: This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10,000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97.45% verification accuracy on LFW is achieved with only weakly aligned faces.
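
A rough sketch of the recipe the abstract describes: train a ConvNet to classify many identities, then discard the classifier head and keep the last hidden layer (160-dimensional in the paper) as the face feature. The convolutional layer shapes below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DeepIDLike(nn.Module):
    def __init__(self, n_identities=10000, feat_dim=160):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 20, 4), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(20, 40, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(40, 60, 3), nn.ReLU(),
        )
        self.fc = nn.LazyLinear(feat_dim)          # the identity-feature layer
        self.classifier = nn.Linear(feat_dim, n_identities)

    def forward(self, x):
        feat = torch.relu(self.fc(self.conv(x).flatten(1)))
        return feat, self.classifier(feat)         # feature + identity logits

# At test time only `feat` is kept (concatenated across many face regions);
# any downstream classifier can then be trained on it for verification.
```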

2,026 citations


Posted Content
TL;DR: This work proposes a deep learning method for single image super-resolution (SR) that directly learns an end-to-end mapping between the low/high-resolution images, represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one.
Abstract: We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.

1,593 citations


Proceedings Article
08 Dec 2014
TL;DR: This paper shows that the face identification-verification task can be well solved with deep learning and using both face identification and verification signals as supervision, and the error rate has been significantly reduced.
Abstract: The key challenge of face recognition is to develop effective feature representations for reducing intra-personal variations while enlarging inter-personal differences. In this paper, we show that it can be well solved with deep learning and using both face identification and verification signals as supervision. The Deep IDentification-verification features (DeepID2) are learned with carefully designed deep convolutional networks. The face identification task increases the inter-personal variations by drawing DeepID2 features extracted from different identities apart, while the face verification task reduces the intra-personal variations by pulling DeepID2 features extracted from the same identity together, both of which are essential to face recognition. The learned DeepID2 features can be well generalized to new identities unseen in the training data. On the challenging LFW dataset [11], 99.15% face verification accuracy is achieved. Compared with the best previous deep learning result [20] on LFW, the error rate has been significantly reduced by 67%.
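
The two supervisory signals have a compact form: a softmax identification loss plus a contrastive verification loss over feature pairs. A hedged sketch follows; the margin and loss weight are assumed values, not the paper's.

```python
import torch
import torch.nn.functional as F

def deepid2_loss(f1, f2, logits1, y1, logits2, y2, same, margin=1.0, lam=0.05):
    # Identification: classify each face into its identity (inter-personal).
    ident = F.cross_entropy(logits1, y1) + F.cross_entropy(logits2, y2)
    # Verification: pull same-identity features together, push different
    # identities apart until they exceed the margin (intra-personal).
    d2 = (f1 - f2).pow(2).sum(dim=1)
    verif = torch.where(same,
                        0.5 * d2,
                        0.5 * F.relu(margin - d2.sqrt()).pow(2))
    return ident + lam * verif.mean()
```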

1,590 citations


Book ChapterDOI
06 Sep 2014
TL;DR: A novel tasks-constrained deep model is formulated, with task-wise early stopping to facilitate learning convergence and reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model.
Abstract: Facial landmark detection has long been impeded by the problems of occlusion and pose variation. Instead of treating the detection task as a single and independent problem, we investigate the possibility of improving detection robustness through multi-task learning. Specifically, we wish to optimize facial landmark detection together with heterogeneous but subtly correlated tasks, e.g. head pose estimation and facial attribute inference. This is non-trivial since different tasks have different learning difficulties and convergence rates. To address this problem, we formulate a novel tasks-constrained deep model, with task-wise early stopping to facilitate learning convergence. Extensive evaluations show that the proposed task-constrained learning (i) outperforms existing methods, especially in dealing with faces with severe occlusion and pose variation, and (ii) reduces model complexity drastically compared to the state-of-the-art method based on cascaded deep model [21].
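
The mechanism described above reduces to a shared trunk with one head per task, where an auxiliary loss is switched off once its validation error plateaus. A small sketch under assumed tasks and layer sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

trunk = nn.Sequential(nn.Conv2d(1, 16, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                      nn.Flatten(), nn.Linear(16 * 16, 100), nn.ReLU())
heads = nn.ModuleDict({
    "landmarks": nn.Linear(100, 10),   # 5 (x, y) points (main task)
    "pose":      nn.Linear(100, 5),    # auxiliary: head-pose bins (assumed)
    "glasses":   nn.Linear(100, 2),    # auxiliary: binary attribute (assumed)
})
active = {"pose": True, "glasses": True}     # auxiliary tasks still learning

def step(x, targets):
    h = trunk(x)
    loss = F.mse_loss(heads["landmarks"](h), targets["landmarks"])
    for t in ("pose", "glasses"):
        if active[t]:                        # stopped tasks no longer backprop
            loss = loss + F.cross_entropy(heads[t](h), targets[t])
    return loss

# Task-wise early stopping: after each epoch, if task t's validation loss
# has not improved for k epochs, set active[t] = False.
```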

1,457 citations


Journal ArticleDOI
TL;DR: Prevalence of sarcopenia is substantial in most geriatric settings, and well-designed, standardised studies evaluating exercise or nutrition interventions are needed before treatment guidelines can be developed.
Abstract: OBJECTIVE: to examine the clinical evidence reporting the prevalence of sarcopenia and the effect of nutrition and exercise interventions from studies using the consensus definition of sarcopenia proposed by the European Working Group on Sarcopenia in Older People (EWGSOP).METHODS: PubMed and Dialog databases were searched (January 2000-October 2013) using pre-defined search terms. Prevalence studies and intervention studies investigating muscle mass plus strength or function outcome measures using the EWGSOP definition of sarcopenia, in well-defined populations of adults aged ≥50 years were selected.RESULTS: prevalence of sarcopenia was, with regional and age-related variations, 1-29% in community-dwelling populations, 14-33% in long-term care populations and 10% in the only acute hospital-care population examined. Moderate quality evidence suggests that exercise interventions improve muscle strength and physical performance. The results of nutrition interventions are equivocal due to the low number of studies and heterogeneous study design. Essential amino acid (EAA) supplements, including ∼2.5 g of leucine, and β-hydroxy β-methylbutyric acid (HMB) supplements, show some effects in improving muscle mass and function parameters. Protein supplements have not shown consistent benefits on muscle mass and function.CONCLUSION: prevalence of sarcopenia is substantial in most geriatric settings. Well-designed, standardised studies evaluating exercise or nutrition interventions are needed before treatment guidelines can be developed. Physicians should screen for sarcopenia in both community and geriatric settings, with diagnosis based on muscle mass and function. Supervised resistance exercise is recommended for individuals with sarcopenia. EAA (with leucine) and HMB may improve muscle outcomes.

1,415 citations


Journal ArticleDOI
TL;DR: The present guidelines are the most recent data on postoperative nausea and vomiting (PONV) and an update on the 2 previous sets of guidelines published in 2003 and 2007.
Abstract: The present guidelines are the most recent data on postoperative nausea and vomiting (PONV) and an update on the 2 previous sets of guidelines published in 2003 and 2007. These guidelines were compiled by a multidisciplinary international panel of individuals with interest and expertise in PONV under the auspices of the Society for Ambulatory Anesthesia. The panel members critically and systematically evaluated the current medical literature on PONV to provide an evidence-based reference tool for the management of adults and children who are undergoing surgery and are at increased risk for PONV. These guidelines identify patients at risk for PONV in adults and children; recommend approaches for reducing baseline risks for PONV; identify the most effective antiemetic single therapy and combination therapy regimens for PONV prophylaxis, including nonpharmacologic approaches; recommend strategies for treatment of PONV when it occurs; provide an algorithm for the management of individuals at increased risk for PONV as well as steps to ensure PONV prevention and treatment are implemented in the clinical setting.

Journal ArticleDOI
TL;DR: Developing new therapies that can improve HBsAg clearance and virological cure is warranted because long-term antiviral treatment can reverse cirrhosis and reduce hepatocellular carcinoma.

Journal ArticleDOI
TL;DR: The latest trend and challenges in engineering and applications of nanomaterials-enhanced surface plasmon resonance sensors for detecting "hard-to-identify" biological and chemical analytes are reviewed and discussed.
Abstract: The main challenge for all electrical, mechanical and optical sensors is to detect low molecular weight (less than 400 Da) chemical and biological analytes under extremely dilute conditions. Surface plasmon resonance sensors are the most commonly used optical sensors due to their unique ability for real-time monitoring the molecular binding events. However, their sensitivities are insufficient to detect trace amounts of small molecular weight molecules such as cancer biomarkers, hormones, antibiotics, insecticides, and explosive materials which are respectively important for early-stage disease diagnosis, food quality control, environmental monitoring, and homeland security protection. With the rapid development of nanotechnology in the past few years, nanomaterials-enhanced surface plasmon resonance sensors have been developed and used as effective tools to sense hard-to-detect molecules within the concentration range between pmol and amol. In this review article, we reviewed and discussed the latest trend and challenges in engineering and applications of nanomaterials-enhanced surface plasmon resonance sensors (e.g., metallic nanoparticles, magnetic nanoparticles, carbon-based nanomaterials, latex nanoparticles and liposome nanoparticles) for detecting “hard-to-identify” biological and chemical analytes. Such information will be viable in terms of providing a useful platform for designing future ultrasensitive plasmonic nanosensors.

Journal ArticleDOI
Anubha Mahajan, Min Jin Go, Weihua Zhang, Jennifer E. Below, +392 more authors (104 institutions)
TL;DR: In this paper, the authors aggregated published meta-analyses of genome-wide association studies (GWAS), including 26,488 cases and 83,964 controls of European, east Asian, south Asian and Mexican and Mexican American ancestry.
Abstract: To further understanding of the genetic basis of type 2 diabetes (T2D) susceptibility, we aggregated published meta-analyses of genome-wide association studies (GWAS), including 26,488 cases and 83,964 controls of European, east Asian, south Asian and Mexican and Mexican American ancestry. We observed a significant excess in the directional consistency of T2D risk alleles across ancestry groups, even at SNPs demonstrating only weak evidence of association. By following up the strongest signals of association from the trans-ethnic meta-analysis in an additional 21,491 cases and 55,647 controls of European ancestry, we identified seven new T2D susceptibility loci. Furthermore, we observed considerable improvements in the fine-mapping resolution of common variant association signals at several T2D susceptibility loci. These observations highlight the benefits of trans-ethnic GWAS for the discovery and characterization of complex trait loci and emphasize an exciting opportunity to extend insight into the genetic architecture and pathogenesis of human diseases across populations of diverse ancestry.
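
For orientation, the fixed-effects inverse-variance scheme below is one standard building block for combining per-study GWAS effect estimates; the paper's trans-ethnic analysis may use a more elaborate model, so treat this as background rather than the study's exact method.

```latex
% Combine per-study effect estimates \hat{\beta}_k (log odds ratios)
% with standard errors s_k into one pooled estimate:
\hat{\beta} \;=\; \frac{\sum_{k} \hat{\beta}_k / s_k^{2}}{\sum_{k} 1 / s_k^{2}},
\qquad
\operatorname{se}\bigl(\hat{\beta}\bigr) \;=\; \Bigl(\sum_{k} 1 / s_k^{2}\Bigr)^{-1/2}.
```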

Proceedings Article
08 Dec 2014
TL;DR: This work develops a deep convolutional neural network to capture the characteristics of degradation, establishing the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts.
Abstract: Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.
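
The "separable structure" mentioned above can be pictured as pairs of long one-dimensional kernels approximating a large two-dimensional deconvolution kernel, followed by a small artifact-rejection sub-network. A hedged sketch, where channel counts and kernel lengths are assumptions:

```python
import torch.nn as nn

deconv = nn.Sequential(                                        # separable deconvolution
    nn.Conv2d(1, 38, kernel_size=(1, 121), padding=(0, 60)),   # horizontal 1-D
    nn.Conv2d(38, 38, kernel_size=(121, 1), padding=(60, 0)),  # vertical 1-D
    nn.Conv2d(38, 1, kernel_size=1),
)
denoise = nn.Sequential(                  # rejects ringing/saturation artifacts
    nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),
    nn.Conv2d(64, 1, 5, padding=2),
)
model = nn.Sequential(deconv, denoise)
# Both submodules are trained with supervision; the deconvolution part can be
# initialized from a (separable approximation of a) pseudo-inverse kernel.
```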

Journal ArticleDOI
TL;DR: In this review, the developments in the field of (plasmonic metal)/semiconductor hybrid nanostructures are comprehensively described and possible future research in this burgeoning field is discussed.
Abstract: Hybrid nanostructures composed of semiconductor and plasmonic metal components are receiving extensive attention. They display extraordinary optical characteristics that are derived from the simultaneous existence and close conjunction of localized surface plasmon resonance and semiconduction, as well as the synergistic interactions between the two components. They have been widely studied for photocatalysis, plasmon-enhanced spectroscopy, biotechnology, and solar cells. In this review, the developments in the field of (plasmonic metal)/semiconductor hybrid nanostructures are comprehensively described. The preparation of the hybrid nanostructures is first presented according to the semiconductor type, as well as the nanostructure morphology. The plasmonic properties and the enabled applications of the hybrid nanostructures are then elucidated. Lastly, possible future research in this burgeoning field is discussed.

Journal ArticleDOI
03 Jun 2014-ACS Nano
TL;DR: In this article, it was shown that the energy levels of lead sulfide QDs, measured by ultraviolet photoelectron spectroscopy, shift by up to 0.9 eV between different chemical ligand treatments.
Abstract: The electronic properties of colloidal quantum dots (QDs) are critically dependent on both QD size and surface chemistry. Modification of quantum confinement provides control of the QD bandgap, while ligand-induced surface dipoles present a hitherto underutilized means of control over the absolute energy levels of QDs within electronic devices. Here, we show that the energy levels of lead sulfide QDs, measured by ultraviolet photoelectron spectroscopy, shift by up to 0.9 eV between different chemical ligand treatments. The directions of these energy shifts match the results of atomistic density functional theory simulations and scale with the ligand dipole moment. Trends in the performance of photovoltaic devices employing ligand-modified QD films are consistent with the measured energy level shifts. These results identify surface-chemistry-mediated energy level shifts as a means of predictably controlling the electronic properties of colloidal QD films and as a versatile adjustable parameter in the perfo...

Journal ArticleDOI
TL;DR: The miRTarBase database (http://mirtarbase.mbc.nctu.edu.tw/) provides the most current and comprehensive information of experimentally validated miRNA-target interactions, with a 14-fold increase in miRNA-target interaction entries and recent improvements.
Abstract: MicroRNAs (miRNAs) are small non-coding RNA molecules capable of negatively regulating gene expression to control many cellular mechanisms. The miRTarBase database (http://mirtarbase.mbc.nctu.edu.tw/) provides the most current and comprehensive information of experimentally validated miRNA-target interactions. The database was launched in 2010 with data sources for >100 published studies in the identification of miRNA targets, molecular networks of miRNA targets and systems biology, and the current release (2013, version 4) includes significant expansions and enhancements over the initial release (2010, version 1). This article reports the current status of and recent improvements to the database, including (i) a 14-fold increase to miRNA-target interaction entries, (ii) a miRNA-target network, (iii) expression profile of miRNA and its target gene, (iv) miRNA target-associated diseases and (v) additional utilities including an upgrade reminder and an error reporting/user feedback system.

Journal ArticleDOI
TL;DR: Among adults undergoing noncardiac surgery, MINS was an independent predictor of 30-day mortality and had the highest population-attributable risk of the perioperative complications.
Abstract: Background Myocardial injury after noncardiac surgery (MINS) was defined as prognostically relevant myocardial injury due to ischemia that occurs during or within 30 days after noncardiac surgery. The study's four objectives were to determine the diagnostic criteria, characteristics, predictors, and 30-day outcomes of MINS. Methods In this international, prospective cohort study of 15,065 patients aged 45 yr or older who underwent in-patient noncardiac surgery, troponin T was measured during the first 3 postoperative days. Patients with a troponin T level of 0.04 ng/ml or greater (elevated "abnormal" laboratory threshold) were assessed for ischemic features (i.e., ischemic symptoms and electrocardiography findings). Patients adjudicated as having a nonischemic troponin elevation (e.g., sepsis) were excluded. To establish diagnostic criteria for MINS, the authors used Cox regression analyses in which the dependent variable was 30-day mortality (260 deaths) and independent variables included preoperative variables, perioperative complications, and potential MINS diagnostic criteria. Results An elevated troponin after noncardiac surgery, irrespective of the presence of an ischemic feature, independently predicted 30-day mortality. Therefore, the authors' diagnostic criterion for MINS was a peak troponin T level of 0.03 ng/ml or greater judged due to myocardial ischemia. MINS was an independent predictor of 30-day mortality (adjusted hazard ratio, 3.87; 95% CI, 2.96-5.08) and had the highest population-attributable risk (34.0%, 95% CI, 26.6-41.5) of the perioperative complications. Twelve hundred patients (8.0%) suffered MINS, and 58.2% of these patients would not have fulfilled the universal definition of myocardial infarction. Only 15.8% of patients with MINS experienced an ischemic symptom. Conclusion Among adults undergoing noncardiac surgery, MINS is common and associated with substantial mortality.
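
The modeling step described in Methods is a Cox proportional-hazards regression with 30-day mortality as the outcome. Purely as an illustration of that kind of analysis (toy data, hypothetical column names, and far fewer covariates than the study used):

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({                       # toy cohort, not study data
    "days_followup":   [30, 12, 30, 30, 5, 30, 22, 30],
    "died_within_30d": [0, 1, 0, 0, 1, 0, 1, 0],
    "mins":            [0, 1, 0, 1, 1, 0, 1, 0],  # peak TnT >= 0.03 ng/ml, ischemic
    "age":             [55, 71, 63, 68, 80, 59, 77, 66],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_followup", event_col="died_within_30d")
cph.print_summary()   # exp(coef) for "mins" is the adjusted hazard ratio
```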

Journal ArticleDOI
TL;DR: This paper aims to provide an overview of four emerging unobtrusive and wearable technologies, which are essential to the realization of pervasive health information acquisition, including: 1) unobTrusive sensing methods, 2) smart textile technology, 3) flexible-stretchable-printable electronics, and 4) sensor fusion.
Abstract: The aging population, prevalence of chronic diseases, and outbreaks of infectious diseases are some of the major challenges of our present-day society. To address these unmet healthcare needs, especially for the early prediction and treatment of major diseases, health informatics, which deals with the acquisition, transmission, processing, storage, retrieval, and use of health information, has emerged as an active area of interdisciplinary research. In particular, acquisition of health-related information by unobtrusive sensing and wearable technologies is considered as a cornerstone in health informatics. Sensors can be weaved or integrated into clothing, accessories, and the living environment, such that health information can be acquired seamlessly and pervasively in daily living. Sensors can even be designed as stick-on electronic tattoos or directly printed onto human skin to enable long-term health monitoring. This paper aims to provide an overview of four emerging unobtrusive and wearable technologies, which are essential to the realization of pervasive health information acquisition, including: 1) unobtrusive sensing methods, 2) smart textile technology, 3) flexible-stretchable-printable electronics, and 4) sensor fusion, and then to identify some future directions of research.

Journal ArticleDOI
TL;DR: Two datasets of postero-anterior chest radiographs available to foster research in computer-aided diagnosis of pulmonary diseases with a special focus on pulmonary tuberculosis are made.
Abstract: The U.S. National Library of Medicine has made two datasets of postero-anterior (PA) chest radiographs available to foster research in computer-aided diagnosis of pulmonary diseases with a special focus on pulmonary tuberculosis (TB). The radiographs were acquired from the Department of Health and Human Services, Montgomery County, Maryland, USA and Shenzhen No.3 People’s Hospital in China. Both datasets contain normal and abnormal chest X-rays with manifestations of TB and include associated radiologist readings.

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This paper proposes a novel approach of learning mid-level filters from automatically discovered patch clusters for person re-identification that is complementary to existing handcrafted low-level features, and improves the best Rank-1 matching rate on the VIPeR dataset by 14%.
Abstract: In this paper, we propose a novel approach of learning mid-level filters from automatically discovered patch clusters for person re-identification. It is well motivated by our study on what are good filters for person re-identification. Our mid-level filters are discriminatively learned for identifying specific visual patterns and distinguishing persons, and have good cross-view invariance. First, local patches are qualitatively measured and classified with their discriminative power. Discriminative and representative patches are collected for filter learning. Second, patch clusters with coherent appearance are obtained by pruning hierarchical clustering trees, and a simple but effective cross-view training strategy is proposed to learn filters that are view-invariant and discriminative. Third, filter responses are integrated with patch matching scores in RankSVM training. The effectiveness of our approach is validated on the VIPeR dataset and the CUHK01 dataset. The learned mid-level features are complementary to existing handcrafted low-level features, and improve the best Rank-1 matching rate on the VIPeR dataset by 14%.

Journal ArticleDOI
TL;DR: This paper studies a probabilistically robust transmit optimization problem under imperfect channel state information at the transmitter and under the multiuser multiple-input single-output (MISO) downlink scenario, and develops two novel approximation methods using probabilistic techniques.
Abstract: In this paper, we study a probabilistically robust transmit optimization problem under imperfect channel state information (CSI) at the transmitter and under the multiuser multiple-input single-output (MISO) downlink scenario. The main issue is to keep the probability of each user's achievable rate outage as caused by CSI uncertainties below a given threshold. As is well known, such rate outage constraints present a significant analytical and computational challenge. Indeed, they do not admit simple closed-form expressions and are unlikely to be efficiently computable in general. Assuming Gaussian CSI uncertainties, we first review a traditional robust optimization-based method for approximating the rate outage constraints, and then develop two novel approximation methods using probabilistic techniques. Interestingly, these three methods can be viewed as implementing different tractable analytic upper bounds on the tail probability of a complex Gaussian quadratic form, and they provide convex restrictions, or safe tractable approximations, of the original rate outage constraints. In particular, a feasible solution from any one of these methods will automatically satisfy the rate outage constraints, and all three methods involve convex conic programs that can be solved efficiently using off-the-shelf solvers. We then proceed to study the performance-complexity tradeoffs of these methods through computational complexity and comparative approximation performance analyses. Finally, simulation results are provided to benchmark the three convex restriction methods against the state of the art in the literature. The results show that all three methods offer significantly improved solution quality and much lower complexity.
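
In symbols, the rate outage constraints discussed above take the following form; the notation is assumed here for illustration, not copied from the paper.

```latex
% Per-user rate outage constraint under Gaussian CSI error
% h_i = \bar{h}_i + e_i with e_i \sim \mathcal{CN}(0, C_i); w_k are the
% transmit beamformers, r_i the target rate, \rho_i the outage tolerance:
\Pr\Bigl\{ \log_2\bigl(1 + \mathrm{SINR}_i\bigr) \ge r_i \Bigr\} \;\ge\; 1 - \rho_i,
\qquad
\mathrm{SINR}_i = \frac{\lvert h_i^{H} w_i \rvert^{2}}
                       {\sum_{k \ne i} \lvert h_i^{H} w_k \rvert^{2} + \sigma_i^{2}} .
```

Because each h_i is Gaussian, every such constraint is a tail probability of a complex Gaussian quadratic form, which is exactly what the three convex restrictions upper-bound.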

Proceedings ArticleDOI
01 Jan 2014
TL;DR: A customized Convolutional Neural Networks with shallow convolution layer to classify lung image patches with interstitial lung disease and the same architecture can be generalized to perform other medical image or texture classification tasks.
Abstract: Image patch classification is an important task in many different medical imaging applications. In this work, we have designed a customized Convolutional Neural Networks (CNN) with shallow convolution layer to classify lung image patches with interstitial lung disease (ILD). While many feature descriptors have been proposed over the past years, they can be quite complicated and domain-specific. Our customized CNN framework can, on the other hand, automatically and efficiently learn the intrinsic image features from lung image patches that are most suitable for the classification purpose. The same architecture can be generalized to perform other medical image or texture classification tasks.
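
A minimal sketch of the kind of shallow patch classifier described above; the patch size, channel count, and number of ILD classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

ild_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=7),   # the single shallow convolution layer
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),           # pool responses over the whole patch
    nn.Flatten(),
    nn.Linear(16, 5),                  # e.g. 5 lung tissue classes (assumed)
)

logits = ild_cnn(torch.randn(8, 1, 32, 32))   # batch of 32x32 CT patches
```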

Book ChapterDOI
06 Sep 2014
TL;DR: A new framework to filter images with the complete control of detail smoothing under a scale measure is proposed, based on a rolling guidance implemented in an iterative manner that converges quickly and achieves realtime performance and produces artifact-free results.
Abstract: Images contain many levels of important structures and edges. Compared to masses of research to make filters edge preserving, finding scale-aware local operations was seldom addressed in a practical way, albeit similarly vital in image processing and computer vision. We propose a new framework to filter images with the complete control of detail smoothing under a scale measure. It is based on a rolling guidance implemented in an iterative manner that converges quickly. Our method is simple in implementation, easy to understand, fully extensible to accommodate various data operations, and fast to produce results. Our implementation achieves realtime performance and produces artifact-free results in separating different scale structures. This filter also introduces several inspiring properties different from previous edge-preserving ones.
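
The procedure is short enough to sketch directly: a Gaussian blur removes structures below the chosen scale, then a few joint-bilateral iterations restore the larger-scale edges. This sketch assumes opencv-contrib-python for cv2.ximgproc; the parameter values are illustrative.

```python
import cv2
import numpy as np

def rolling_guidance(img, sigma_s=4.0, sigma_r=25.0, n_iter=4):
    img = img.astype(np.float32)
    # Step 1: small-structure removal at scale sigma_s.
    guide = cv2.GaussianBlur(img, (0, 0), sigma_s)
    # Step 2: iterative edge recovery; the guidance "rolls" toward the
    # large-scale structures of the input and converges in a few passes.
    for _ in range(n_iter):
        guide = cv2.ximgproc.jointBilateralFilter(
            guide, img, d=-1, sigmaColor=sigma_r, sigmaSpace=sigma_s)
    return guide
```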

Proceedings ArticleDOI
23 Jun 2014
TL;DR: A hybrid method that combines gradient based and stochastic optimization methods to achieve fast convergence and good accuracy is proposed and presented, making it the first system that achieves such robustness, accuracy, and speed simultaneously.
Abstract: We present a realtime hand tracking system using a depth sensor. It tracks a fully articulated hand under large viewpoints in realtime (25 FPS on a desktop without using a GPU) and with high accuracy (error below 10 mm). To our knowledge, it is the first system that achieves such robustness, accuracy, and speed simultaneously, as verified on challenging real data. Our system is made of several novel techniques. We model a hand simply using a number of spheres and define a fast cost function. Those are critical for realtime performance. We propose a hybrid method that combines gradient based and stochastic optimization methods to achieve fast convergence and good accuracy. We present new finger detection and hand initialization methods that greatly enhance the robustness of tracking.

Journal ArticleDOI
TL;DR: This paper proposes Dekey, a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers and demonstrates that Dekey incurs limited overhead in realistic environments.
Abstract: Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. Promising as it is, an arising challenge is to perform secure deduplication in cloud storage. Although convergent encryption has been extensively adopted for secure deduplication, a critical issue of making convergent encryption practical is to efficiently and reliably manage a huge number of convergent keys. This paper makes the first attempt to formally address the problem of achieving efficient and reliable key management in secure deduplication. We first introduce a baseline approach in which each user holds an independent master key for encrypting the convergent keys and outsourcing them to the cloud. However, such a baseline key management scheme generates an enormous number of keys with the increasing number of users and requires users to dedicatedly protect the master keys. To this end, we propose Dekey , a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers. Security analysis demonstrates that Dekey is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement Dekey using the Ramp secret sharing scheme and demonstrate that Dekey incurs limited overhead in realistic environments.
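
For context, the convergent encryption primitive whose keys Dekey manages is simple to sketch: the key is a hash of the content, so identical files produce identical ciphertexts and can be deduplicated. This illustrates the baseline primitive only, not Dekey's share distribution.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(data: bytes):
    key = hashlib.sha256(data).digest()      # convergent key K = H(M)
    nonce = b"\x00" * 12                     # fixed nonce is tolerable here only
                                             # because each key encrypts one message
    ct = AESGCM(key).encrypt(nonce, data, None)
    tag = hashlib.sha256(ct).hexdigest()     # deduplication index over ciphertext
    return key, tag, ct

# Dekey's contribution is then to split each convergent key K into shares
# (via Ramp secret sharing) held by multiple key servers, instead of each
# user protecting a per-user master key.
```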

Journal ArticleDOI
TL;DR: Understanding the mechanisms by which inflammation drives renal fibrosis is necessary to facilitate the development of therapeutics to halt the progression of chronic kidney disease.
Abstract: Many types of kidney injury induce inflammation as a protective response. However, unresolved inflammation promotes progressive renal fibrosis, which can culminate in end-stage renal disease. Kidney inflammation involves cells of the immune system as well as activation of intrinsic renal cells, with the consequent production and release of profibrotic cytokines and growth factors that drive the fibrotic process. In glomerular diseases, the development of glomerular inflammation precedes interstitial fibrosis; although the mechanisms linking these events are poorly understood, an important role for tubular epithelial cells in mediating this link is gaining support. Data have implicated macrophages in promoting both glomerular and interstitial fibrosis, whereas limited evidence suggests that CD4(+) T cells and mast cells are involved in interstitial fibrosis. However, macrophages can also promote renal repair when the cause of renal injury can be resolved, highlighting their plasticity. Understanding the mechanisms by which inflammation drives renal fibrosis is necessary to facilitate the development of therapeutics to halt the progression of chronic kidney disease.

Journal ArticleDOI
TL;DR: The present "white paper" catalogs the recommendations of the meeting, at which a consensus was reached that incorporation of molecular information into the next WHO classification of central nervous system tumors should follow a set of provided "ISN-Haarlem" guidelines.
Abstract: Major discoveries in the biology of nervous system tumors have raised the question of how non-histological data such as molecular information can be incorporated into the next World Health Organization (WHO) classification of central nervous system tumors. To address this question, a meeting of neuropathologists with expertise in molecular diagnosis was held in Haarlem, the Netherlands, under the sponsorship of the International Society of Neuropathology (ISN). Prior to the meeting, participants solicited input from clinical colleagues in diverse neuro-oncological specialties. The present "white paper" catalogs the recommendations of the meeting, at which a consensus was reached that incorporation of molecular information into the next WHO classification should follow a set of provided "ISN-Haarlem" guidelines. Salient recommendations include that (i) diagnostic entities should be defined as narrowly as possible to optimize interobserver reproducibility, clinicopathological predictions and therapeutic planning; (ii) diagnoses should be "layered" with histologic classification, WHO grade and molecular information listed below an "integrated diagnosis"; (iii) determinations should be made for each tumor entity as to whether molecular information is required, suggested or not needed for its definition; (iv) some pediatric entities should be separated from their adult counterparts; (v) input for guiding decisions regarding tumor classification should be solicited from experts in complementary disciplines of neuro-oncology; and (vi) entity-specific molecular testing and reporting formats should be followed in diagnostic reports. It is hoped that these guidelines will facilitate the forthcoming update of the fourth edition of the WHO classification of central nervous system tumors.

Journal ArticleDOI
TL;DR: Analyses of the computational QM/MM model reveal that the novel mechanism behind the AIE of THBDBA and BDBA is the restriction of intramolecular vibration (RIV).
Abstract: Aggregation-induced emission (AIE) has been harnessed in many systems through the principle of restriction of intramolecular rotations (RIR) based on mechanistic understanding from archetypal AIE molecules such as tetraphenylethene (TPE). However, as the family of AIE-active molecules grows, the RIR model cannot fully explain some AIE phenomena. Here, we report a broadening of the AIE mechanism through analysis of 10,10',11,11'-tetrahydro-5,5'-bidibenzo[a,d][7]annulenylidene (THBDBA), and 5,5'-bidibenzo[a,d][7]annulenylidene (BDBA). Analyses of the computational QM/MM model reveal that the novel mechanism behind the AIE of THBDBA and BDBA is the restriction of intramolecular vibration (RIV). A more generalized mechanistic understanding of AIE results by combining RIR and RIV into the principle of restriction of intramolecular motions (RIM).