
Showing papers by "Hong Kong Polytechnic University" published in 2016


Journal ArticleDOI
Daniel J. Klionsky, Kotb Abdelmohsen, Akihisa Abe, Joynal Abedin, and 2,519 more authors (695 institutions)
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. 
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.

5,187 citations


Journal ArticleDOI
TL;DR: In this paper, Zhang et al. propose a denoising convolutional neural network (DnCNN) to handle Gaussian denoising with unknown noise level, which implicitly removes the latent clean image in the hidden layers.
Abstract: Discriminative model learning for image denoising has recently been attracting considerable attention due to its favorable denoising performance. In this paper, we take one step forward by investigating the construction of feed-forward denoising convolutional neural networks (DnCNNs) to embrace the progress in very deep architecture, learning algorithm, and regularization method into image denoising. Specifically, residual learning and batch normalization are utilized to speed up the training process as well as boost the denoising performance. Different from the existing discriminative denoising models which usually train a specific model for additive white Gaussian noise (AWGN) at a certain noise level, our DnCNN model is able to handle Gaussian denoising with unknown noise level (i.e., blind Gaussian denoising). With the residual learning strategy, DnCNN implicitly removes the latent clean image in the hidden layers. This property motivates us to train a single DnCNN model to tackle several general image denoising tasks such as Gaussian denoising, single image super-resolution and JPEG image deblocking. Our extensive experiments demonstrate that our DnCNN model can not only exhibit high effectiveness in several general image denoising tasks, but also be efficiently implemented by benefiting from GPU computing.
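The residual-learning formulation described above can be sketched in a few lines: the model is trained to predict the noise residual v ≈ y − x, so the clean estimate is recovered as x̂ = y − R(y). In this toy illustration a simple moving average stands in for the trained CNN; the helper names and the 1-D signal are hypothetical, for illustration only, not from the paper.

```python
# Toy sketch of the residual-learning formulation used by DnCNN-style
# denoisers: predict the noise residual R(y) ~= y - x, then recover the
# clean estimate as x_hat = y - R(y).  A moving average stands in for
# the trained network (illustrative assumption, not the paper's model).

def moving_average(signal, k=3):
    """Crude smoother standing in for a learned mapping."""
    half = k // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def predict_residual(noisy):
    """Residual predictor R(y): an estimate of the noise component."""
    smooth = moving_average(noisy)
    return [y - s for y, s in zip(noisy, smooth)]

def residual_denoise(noisy):
    """x_hat = y - R(y): subtract the predicted residual from the input."""
    residual = predict_residual(noisy)
    return [y - r for y, r in zip(noisy, residual)]

noisy = [1.0, 1.2, 0.9, 1.1, 1.0, 5.0, 1.0]  # noise spike at index 5
clean_est = residual_denoise(noisy)
```

The point of the formulation is that the network only has to learn the (statistically simpler) residual; subtracting it from the input removes the need to reproduce the full clean image directly.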

1,446 citations


Journal ArticleDOI
TL;DR: In this article, the authors classify the literature on the application of big data business analytics (BDBA) on logistics and supply chain management (LSCM) based on the nature of analytics (descriptive, predictive, prescriptive) and the focus of the LSCM (strategy and operations).

938 citations


Journal ArticleDOI
TL;DR: Modification to produce engineered/designer biochar is likely to enhance the sorption capacity of biochar and its potential applications for environmental remediation.

905 citations


Journal ArticleDOI
TL;DR: The formation of abundant new bone at peripheral cortical sites after intramedullary implantation of a pin containing ultrapure magnesium into the intact distal femur in rats suggests the therapeutic potential of this ion in orthopedics.
Abstract: Orthopedic implants containing biodegradable magnesium have been used for fracture repair with considerable efficacy; however, the underlying mechanisms by which these implants improve fracture healing remain elusive. Here we show the formation of abundant new bone at peripheral cortical sites after intramedullary implantation of a pin containing ultrapure magnesium into the intact distal femur in rats. This response was accompanied by substantial increases of neuronal calcitonin gene-related polypeptide-α (CGRP) in both the peripheral cortex of the femur and the ipsilateral dorsal root ganglia (DRG). Surgical removal of the periosteum, capsaicin denervation of sensory nerves or knockdown in vivo of the CGRP-receptor-encoding genes Calcrl or Ramp1 substantially reversed the magnesium-induced osteogenesis that we observed in this model. Overexpression of these genes, however, enhanced magnesium-induced osteogenesis. We further found that an elevation of extracellular magnesium induces magnesium transporter 1 (MAGT1)-dependent and transient receptor potential cation channel, subfamily M, member 7 (TRPM7)-dependent magnesium entry, as well as an increase in intracellular adenosine triphosphate (ATP) and the accumulation of terminal synaptic vesicles in isolated rat DRG neurons. In isolated rat periosteum-derived stem cells, CGRP induces CALCRL- and RAMP1-dependent activation of cAMP-responsive element binding protein 1 (CREB1) and SP7 (also known as osterix), and thus enhances osteogenic differentiation of these stem cells. Furthermore, we have developed an innovative, magnesium-containing intramedullary nail that facilitates femur fracture repair in rats with ovariectomy-induced osteoporosis. Taken together, these findings reveal a previously undefined role of magnesium in promoting CGRP-mediated osteogenic differentiation, which suggests the therapeutic potential of this ion in orthopedics.

593 citations


Journal ArticleDOI
TL;DR: In this paper, a brief overview on the architecture and functional modules of smart HEMS is presented, and various home appliance scheduling strategies to reduce the residential electricity cost and improve the energy efficiency from power generation utilities are also investigated.
Abstract: With the arrival of smart grid era and the advent of advanced communication and information infrastructures, bidirectional communication, advanced metering infrastructure, energy storage systems and home area networks would revolutionize the patterns of electricity usage and energy conservation at the consumption premises. Coupled with the emergence of vehicle-to-grid technologies and massive distributed renewable energy, there is a profound transition for the energy management pattern from the conventional centralized infrastructure towards the autonomous responsive demand and cyber-physical energy systems with renewable and stored energy sources. Under the sustainable smart grid paradigm, the smart house with its home energy management system (HEMS) plays an important role to improve the efficiency, economics, reliability, and energy conservation for distribution systems. In this paper, a brief overview on the architecture and functional modules of smart HEMS is presented. Then, the advanced HEMS infrastructures and home appliances in smart houses are thoroughly analyzed and reviewed. Furthermore, the utilization of various building renewable energy resources in HEMS, including solar, wind, biomass and geothermal energies, is surveyed. Lastly, various home appliance scheduling strategies to reduce the residential electricity cost and improve the energy efficiency from power generation utilities are also investigated.
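One of the appliance-scheduling strategies the survey covers can be sketched concretely: shift a deferrable load to the cheapest contiguous window of a time-of-use tariff. The tariff values and the two-hour appliance cycle below are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of time-of-use appliance scheduling: run a deferrable
# appliance (e.g. a washing machine) in the cheapest contiguous window
# of the day.  Tariff and cycle length are hypothetical.

def cheapest_window(prices, duration):
    """Return (start_hour, total_cost) of the cheapest contiguous run."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - duration + 1):
        cost = sum(prices[start:start + duration])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Hourly tariff (cents/kWh) over 24 hours, with an evening peak.
tariff = [5] * 7 + [12] * 4 + [9] * 6 + [20] * 4 + [7] * 3
start, cost = cheapest_window(tariff, duration=2)  # 2-hour wash cycle
```

Real HEMS schedulers add comfort constraints, appliance precedence, and forecasts of local renewable generation on top of this basic cost minimization.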

565 citations


Posted Content
TL;DR: A multi-level framework that integrates the notion of research context and cross-context theorizing with the theory evaluation framework to synthesize the existing UTAUT extensions across both the dimensions and the levels of the research context is proposed.
Abstract: The unified theory of acceptance and use of technology (UTAUT) is a little over a decade old and has been used extensively in information systems (IS) and other fields, as evidenced by the large number of citations to the original paper that introduced the theory. In this paper, we review and synthesize the IS literature on UTAUT from September 2003 until December 2014, perform a theoretical analysis of UTAUT and its extensions, and chart an agenda for research going forward. Based on Weber’s (2012) framework of theory evaluation, we examined UTAUT and its extensions along two sets of quality dimensions; namely, the parts of a theory and the theory as a whole. While our review identifies many merits to UTAUT, we also found that the progress related to this theory has hampered further theoretical development in research into technology acceptance and use. To chart an agenda for research that will enable significant future work, we analyze the theoretical contributions of UTAUT using Whetten’s (2009) notion of cross-context theorizing. Our analysis reveals several limitations that lead us to propose a multi-level framework that can serve as the theoretical foundation for future research. Specifically, this framework integrates the notion of research context and cross-context theorizing with the theory evaluation framework to: (1) synthesize the existing UTAUT extensions across both the dimensions and the levels of the research context and (2) highlight promising research directions. We conclude with recommendations for future UTAUT-related research using the proposed framework.

525 citations


Journal ArticleDOI
TL;DR: An efficient strategy for achieving large-area single-crystalline graphene by letting a single nucleus evolve into a monolayer at a fast rate is demonstrated by locally feeding carbon precursors to a desired position of a substrate composed of an optimized Cu-Ni alloy.
Abstract: Wafer-scale single-crystalline graphene monolayers are highly sought after as an ideal platform for electronic and other applications. At present, state-of-the-art growth methods based on chemical vapour deposition allow the synthesis of one-centimetre-sized single-crystalline graphene domains in ∼12 h, by suppressing nucleation events on the growth substrate. Here we demonstrate an efficient strategy for achieving large-area single-crystalline graphene by letting a single nucleus evolve into a monolayer at a fast rate. By locally feeding carbon precursors to a desired position of a substrate composed of an optimized Cu-Ni alloy, we synthesized an ∼1.5-inch-large graphene monolayer in 2.5 h. Localized feeding induces the formation of a single nucleus on the entire substrate, and the optimized alloy activates an isothermal segregation mechanism that greatly expedites the growth rate. This approach may also prove effective for the synthesis of wafer-scale single-crystalline monolayers of other two-dimensional materials.

510 citations


Journal ArticleDOI
13 Oct 2016-Chem
TL;DR: In this paper, a rational design principle based on intrinsic molecular-structure engineering was proposed to tune the aromatic subunits in arylphenones to achieve a balanced lifetime and efficiency.

502 citations


Journal ArticleDOI
TL;DR: In comparison with typical CH3NH3PbI3-based devices, these solar cells without encapsulation show greatly improved stability in humid air, which is attributed to the incorporation of thiocyanate ions in the crystal lattice.
Abstract: Poor stability of organic-inorganic halide perovskite materials in humid condition has hindered the success of perovskite solar cells in real applications since controlled atmosphere is required for device fabrication and operation, and there is a lack of effective solutions to this problem until now. Here we report the use of lead (II) thiocyanate (Pb(SCN)2) precursor in preparing perovskite solar cells in ambient air. High-quality CH3NH3PbI3-x(SCN)x perovskite films can be readily prepared even when the relative humidity exceeds 70%. Under optimized processing conditions, we obtain devices with an average power conversion efficiency of 13.49% and the maximum efficiency over 15%. In comparison with typical CH3NH3PbI3-based devices, these solar cells without encapsulation show greatly improved stability in humid air, which is attributed to the incorporation of thiocyanate ions in the crystal lattice. The findings pave a way for realizing efficient and stable perovskite solar cells in ambient atmosphere.

465 citations


Posted Content
TL;DR: This work introduces several ways of regularizing the objective, which can dramatically stabilize the training of GAN models and shows that these regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem.
Abstract: Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.

Journal ArticleDOI
TL;DR: In this article, a two-level empirical analysis was conducted to explore factors that affect the value of reviews on TripAdvisor, and the results indicated that both text readability and reviewer characteristics affect the perceived value of the reviews.

Journal ArticleDOI
TL;DR: In this paper, the authors used adaptive structuration theory as a lens to identify a number of spillover effects from smartphone use in everyday life into travel, and the results of this study offer several important implications for both research and practice as well as future directions for mobile technology in tourism.
Abstract: The smartphone penetrates many facets of everyday life, including travel. As such, this article argues that since travel can be considered a special stage of technology use, understanding how the smartphone shapes the tourist experience cannot be separated from the way it is used in one’s everyday life. On the basis of a study of American travelers, this study uses adaptive structuration theory as a lens to identify a number of spillover effects from smartphone use in everyday life into travel. The results of this study offer several important implications for both research and practice as well as future directions for the study of mobile technology in tourism.

Journal ArticleDOI
TL;DR: Platinum disulfide (PtS2), a new member of the group-10 transition-metal dichalcogenides, is studied experimentally and theoretically; its behavior can be explained by strong interlayer interaction arising from the pz-orbital hybridization of S atoms.
Abstract: Platinum disulfide (PtS2), a new member of the group-10 transition-metal dichalcogenides, is studied experimentally and theoretically. The indirect bandgap of PtS2 can be drastically tuned from 1.6 eV (monolayer) to 0.25 eV (bulk counterpart), and the interlayer mechanical coupling is almost isotropic. This can be explained by strong interlayer interaction arising from the pz-orbital hybridization of S atoms.

Proceedings ArticleDOI
27 Jun 2016
TL;DR: This work proposes a joint learning framework to unify SIR and CIR using a convolutional neural network (CNN), and finds that the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance.
Abstract: Person re-identification has usually been solved as either the matching of single-image representation (SIR) or the classification of cross-image representation (CIR). In this work, we exploit the connection between these two categories of methods, and propose a joint learning framework to unify SIR and CIR using a convolutional neural network (CNN). Specifically, our deep architecture contains one shared sub-network together with two sub-networks that extract the SIRs of given images and the CIRs of given image pairs, respectively. The SIR sub-network is required to be computed once for each image (in both the probe and gallery sets), and the depth of the CIR sub-network is required to be minimal to reduce computational burden. Therefore, the two types of representation can be jointly optimized for pursuing better matching accuracy with moderate computational cost. Furthermore, the representations learned with pairwise comparison and triplet comparison objectives can be combined to improve matching performance. Experiments on the CUHK03, CUHK01 and VIPeR datasets show that the proposed method can achieve favorable accuracy compared with state-of-the-art methods.

Journal ArticleDOI
01 Jan 2016-Small
TL;DR: This review summarizes the state-of-the-art of the production of 2D nanomaterials using liquid-based direct exfoliation (LBE), a very promising and highly scalable wet approach for synthesizing high quality 2D nanomaterials in mild conditions.
Abstract: Tremendous efforts have been devoted to the synthesis and application of two-dimensional (2D) nanomaterials due to their extraordinary and unique properties in electronics, photonics, catalysis, etc., upon exfoliation from their bulk counterparts. One of the greatest challenges that scientists are confronted with is how to produce large quantities of 2D nanomaterials of high quality in a commercially viable way. This review summarizes the state-of-the-art of the production of 2D nanomaterials using liquid-based direct exfoliation (LBE), a very promising and highly scalable wet approach for synthesizing high quality 2D nanomaterials in mild conditions. LBE is a collection of methods that directly exfoliates bulk layered materials into thin flakes of 2D nanomaterials in liquid media without any, or with a minimum degree of, chemical reactions, so as to maintain the high crystallinity of 2D nanomaterials. Different synthetic methods are categorized in the following, in which material characteristics including dispersion concentration, flake thickness, flake size and some applications are discussed in detail. At the end, we provide an overview of the advantages and disadvantages of such synthetic methods of LBE and propose future perspectives.


Journal ArticleDOI
TL;DR: In this paper, an average figure of merit ZT of more than 1.17 is measured from 300 K to 800 K along the crystallographic b-axis of 3 at% Na-doped SnSe, with the maximum ZT reaching a value of 2 at 800 K.
Abstract: Excellent thermoelectric performance is obtained over a broad temperature range from 300 K to 800 K by doping single crystals of SnSe. The average value of the figure of merit ZT, of more than 1.17, is measured from 300 K to 800 K along the crystallographic b-axis of 3 at% Na-doped SnSe, with the maximum ZT reaching a value of 2 at 800 K. The room temperature value of the power factor for the same sample and in the same direction is 2.8 mW m−1 K−2, which is an order of magnitude higher than that of the undoped crystal. Calculations show that Na doping lowers the Fermi level and increases the number of carrier pockets in SnSe, leading to a collaborative optimization of the Seebeck coefficient and the electrical conductivity. The resultant optimized carrier concentration and the increased number of carrier pockets near the Fermi level in Na-doped samples are believed to be the key factors behind the spectacular enhancement of the average ZT.
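The figure of merit ZT quoted above, and its relation to the power factor, follow the standard thermoelectric definitions (not restated in the abstract itself), with S the Seebeck coefficient, σ the electrical conductivity, κ the thermal conductivity and T the absolute temperature:

```latex
ZT = \frac{S^{2}\sigma}{\kappa}\,T,
\qquad
\text{power factor}\;\; \mathrm{PF} = S^{2}\sigma .
```

This is why simultaneously optimizing S and σ, as the Na doping is reported to do, raises both the power factor and ZT, provided κ is not increased in the process.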

Journal ArticleDOI
TL;DR: This article adopts an energy-efficient architecture for Industrial IoT, which consists of a sense entities domain, RESTful service hosted networks, a cloud server, and user applications, and a sleep scheduling and wake-up protocol, supporting the prediction of sleep intervals.
Abstract: The Internet of Things (IoT) can support collaboration and communication between objects automatically. However, with the increasing number of involved devices, IoT systems may consume substantial amounts of energy. Thus, the relevant energy efficiency issues have recently been attracting much attention from both academia and industry. In this article we adopt an energy-efficient architecture for Industrial IoT (IIoT), which consists of a sense entities domain, RESTful service hosted networks, a cloud server, and user applications. Under this architecture, we focus on the sense entities domain where huge amounts of energy are consumed by a tremendous number of nodes. The proposed framework includes three layers: the sense layer, the gateway layer, and the control layer. This hierarchical framework balances the traffic load and enables a longer lifetime of the whole system. Based on this deployment, a sleep scheduling and wake-up protocol is designed, supporting the prediction of sleep intervals. The shifts of states support the use of the entire system resources in an energy-efficient way. Simulation results demonstrate the significant advantages of our proposed architecture in resource utilization and energy consumption.

Journal ArticleDOI
TL;DR: The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3T and 7T magnetic resonance images, and it is shown to outperform other representative algorithms.
Abstract: It is often difficult to accurately segment images with intensity inhomogeneity, because most representative algorithms are region-based and depend on intensity homogeneity of the object of interest. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances, in which a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3T and 7T magnetic resonance images. Extensive evaluations on synthetic and real images demonstrate the superiority of the proposed method over other representative algorithms.
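The windowed-statistics idea underlying this model can be illustrated on a 1-D profile: within each sliding window the object's intensities are treated as Gaussian, so a local mean and variance are estimated per position even when a slowly varying bias field corrupts the signal. The window radius, the linear bias field, and the helper names are illustrative assumptions, not the paper's actual formulation.

```python
# Sketch of per-window Gaussian statistics: estimate a local mean and
# variance in a sliding window over a signal corrupted by a smooth bias
# field.  The 1-D profile stands in for an MR image (illustrative only).

def window_stats(signal, radius):
    """Per-position (mean, variance) over a sliding window."""
    stats = []
    for i in range(len(signal)):
        w = signal[max(0, i - radius):i + radius + 1]
        mean = sum(w) / len(w)
        var = sum((x - mean) ** 2 for x in w) / len(w)
        stats.append((mean, var))
    return stats

# A constant-intensity object (value 10) under a linear bias field:
# globally inhomogeneous, but locally well described by one Gaussian.
bias = [1.0 + 0.02 * i for i in range(50)]
signal = [10.0 * b for b in bias]
stats = window_stats(signal, radius=3)
```

The local mean tracks the bias-modulated intensity closely, which is why the paper's model can separate objects locally even though their global intensity histograms overlap.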

Proceedings ArticleDOI
10 Apr 2016
TL;DR: This paper provides an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time and demonstrates that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing.
Abstract: Mobile cloud computing (MCC), as an emerging and prospective computing paradigm, can significantly enhance computation capability and save energy of smart mobile devices (SMDs) by offloading computation-intensive tasks from resource-constrained SMDs onto the resource-rich cloud. However, how to achieve energy-efficient computation offloading under the hard constraint for application completion time remains a challenging issue. To address such a challenge, in this paper, we provide an energy-efficient dynamic offloading and resource scheduling (eDors) policy to reduce energy consumption and shorten application completion time. We first formulate the eDors problem into the energy-efficiency cost (EEC) minimization problem while satisfying the task-dependency requirements and the completion time deadline constraint. To solve the optimization problem, we then propose a distributed eDors algorithm consisting of three subalgorithms of computation offloading selection, clock frequency control and transmission power allocation. More importantly, we find that the computation offloading selection depends on not only the computing workload of a task, but also the maximum completion time of its immediate predecessors and the clock frequency and transmission power of the mobile device. Finally, our experimental results in a real testbed demonstrate that the eDors algorithm can effectively reduce the EEC by optimally adjusting the CPU clock frequency of SMDs based on the dynamic voltage and frequency scaling (DVFS) technique in local computing, and adapting the transmission power for the wireless channel conditions in cloud computing.
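The core energy comparison behind this kind of offloading decision can be sketched with textbook models: local CPU energy scales as k·f² per cycle (the DVFS model the abstract mentions), while offloading costs transmit power times transmission time for the task's input data. All constants and function names below are hypothetical and greatly simplified relative to the eDors formulation, which also handles task dependencies and deadlines.

```python
# Simplified local-vs-offload energy comparison behind eDors-style
# offloading decisions.  k * f^2 per cycle is the standard dynamic-power
# DVFS model; all numeric constants are illustrative assumptions.

def local_energy(cycles, freq_hz, k=1e-27):
    """Dynamic CPU energy in joules: k * f^2 per cycle."""
    return k * freq_hz ** 2 * cycles

def offload_energy(data_bits, tx_power_w, rate_bps):
    """Radio energy in joules: transmit power times transmission time."""
    return tx_power_w * data_bits / rate_bps

def should_offload(cycles, freq_hz, data_bits, tx_power_w, rate_bps):
    """Offload iff the radio cost is lower than the local CPU cost."""
    return offload_energy(data_bits, tx_power_w, rate_bps) < \
        local_energy(cycles, freq_hz)

# A compute-heavy task with a small input favours offloading...
heavy = should_offload(cycles=5e9, freq_hz=1e9, data_bits=1e6,
                       tx_power_w=0.5, rate_bps=5e6)
# ...while a light task is cheaper to run locally.
light = should_offload(cycles=1e7, freq_hz=1e9, data_bits=1e6,
                       tx_power_w=0.5, rate_bps=5e6)
```

This also shows why, as the paper reports, the decision depends jointly on the workload, the CPU frequency, and the transmission power: each appears directly in one of the two energy terms.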

Journal ArticleDOI
TL;DR: It is shown that single-crystal graphene can be grown on copper foils at a growth rate of 60 μm s−1, and single-crystal graphene domains with a lateral size of 0.3 mm can be grown in just 5 s.
Abstract: Single-crystal graphene can be grown on a copper foil at a rate of 60 μm s−1 by using an adjacent oxide substrate that continuously supplies oxygen to the surface of the copper catalyst. Graphene has a range of unique physical properties [1,2] and could be of use in the development of a variety of electronic, photonic and photovoltaic devices [3-5]. For most applications, large-area high-quality graphene films are required, and chemical vapour deposition (CVD) synthesis of graphene on copper surfaces has been of particular interest due to its simplicity and cost effectiveness [6-15]. However, the rates of growth for graphene by CVD on copper are less than 0.4 μm s−1, and therefore the synthesis of large, single-crystal graphene domains takes at least a few hours. Here, we show that single-crystal graphene can be grown on copper foils with a growth rate of 60 μm s−1. Our high growth rate is achieved by placing the copper foil above an oxide substrate with a gap of ∼15 μm between them. The oxide substrate provides a continuous supply of oxygen to the surface of the copper catalyst during the CVD growth, which significantly lowers the energy barrier to the decomposition of the carbon feedstock and increases the growth rate. With this approach, we are able to grow single-crystal graphene domains with a lateral size of 0.3 mm in just 5 s.
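The figures quoted in the abstract are mutually consistent, as a quick arithmetic check confirms: growing at 60 micrometres per second for 5 seconds yields a 0.3 mm lateral size.

```python
# Consistency check of the growth figures quoted in the abstract:
# 60 um/s sustained for 5 s gives a 0.3 mm lateral domain size.
growth_rate_um_per_s = 60
time_s = 5
lateral_size_mm = growth_rate_um_per_s * time_s / 1000  # um -> mm
```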

Journal ArticleDOI
TL;DR: In this paper, it was shown that the presence of unreacted PbI2 results in an intrinsic instability of the perovskite film under illumination, leading to the film degradation under inert atmosphere and faster degradation upon exposure to illumination and humidity.
Abstract: Unreacted lead iodide is commonly believed to be beneficial to the efficiency of methylammonium lead iodide perovskite based solar cells, since it has been proposed to passivate the defects in perovskite grain boundaries. However, it is shown here that the presence of unreacted PbI2 results in an intrinsic instability of the film under illumination, leading to the film degradation under inert atmosphere and faster degradation upon exposure to illumination and humidity. The perovskite films without lead iodide have improved stability, but lower efficiency due to inferior film morphology (smaller grain size, the presence of pinholes). Optimization of the deposition process resulted in PbI2-free perovskite films giving comparable efficiency to those with excess PbI2 (14.2 ± 1.3% compared to 15.1 ± 0.9%). Thus, optimization of the deposition process for PbI2-free films leads to dense, pinhole-free, large grain size perovskite films which result in cells with high efficiency without detrimental effects on the film photostability caused by excess PbI2. However, it should be noted that for encapsulated devices illuminated through the substrate (fluorine-doped tin oxide glass, TiO2 film), film photostability is not a key factor in the device degradation.

Journal ArticleDOI
TL;DR: The authors explored the effect of sharing-economy penetration at a macro-economic level, collecting data on 657 Airbnb listings together with industry data for 44 counties in the state of Idaho, and revealed both positive and negative effects of Airbnb on tourism employment.

Journal ArticleDOI
TL;DR: An overview of the potential, recent advances, and challenges of optical security and encryption using free space optics is presented, highlighting the need for more specialized hardware and image processing algorithms.
Abstract: Information security and authentication are important challenges facing society. Recent attacks by hackers on the databases of large commercial and financial companies have demonstrated that more research and development of advanced approaches are necessary to deny unauthorized access to critical data. Free space optical technology has been investigated by many researchers in information security, encryption, and authentication. The main motivation for using optics and photonics for information security is that optical waveforms possess many complex degrees of freedom, such as amplitude, phase, polarization, large bandwidth, nonlinear transformations, quantum properties of photons, and multiplexing, that can be combined in many ways to make information encryption more secure and more difficult to attack. This roadmap article presents an overview of the potential, recent advances, and challenges of optical security and encryption using free space optics. The roadmap on optical security comprises six categories that together include 16 short sections written by authors who have made relevant contributions in this field. The first category describes novel encryption approaches, including secure optical sensing, which summarizes double random phase encryption applications and flaws [Yamaguchi]; digital holographic encryption in free space, which describes encryption using multidimensional digital holography [Nomura]; simultaneous encryption of multiple signals [Perez-Cabre]; asymmetric methods based on information truncation [Nishchal]; and dynamic encryption of video sequences [Torroba]. Asymmetric and one-way cryptosystems are analyzed by Peng. The second category is on compression for encryption; in their respective contributions, Alfalou and Stern propose similar goals involving compressed data and compressive sensing encryption. The very important area of cryptanalysis is the topic of the third category, with two sections: Sheridan reviews phase retrieval algorithms used to perform different attacks, whereas Situ discusses nonlinear optical encryption techniques and the development of a rigorous optical information security theory. The fourth category, with two contributions, reports how encryption could be implemented at the nano- or micro-scale: Naruse discusses the use of nanostructures in security applications, and Carnicer proposes encoding information in a tightly focused beam. In the fifth category, encryption based on ghost imaging using single-pixel detectors is considered; in particular, the authors [Chen, Tajahuerce] emphasize the need for more specialized hardware and image processing algorithms. Finally, in the sixth category, Mosk and Javidi analyze in their respective papers how quantum imaging can benefit optical encryption systems: sources that use few photons make encryption systems much more difficult to attack, providing a secure method for authentication.
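Several of these sections build on double random phase encryption (DRPE), in which an image is multiplied by a random phase mask in the input plane and by a second mask in the Fourier plane. The following is a minimal numerical sketch of the baseline scheme only; the function names and mask sizes are illustrative and are not taken from any of the roadmap papers:

```python
import numpy as np

def drpe_encrypt(img, phase1, phase2):
    """Double random phase encoding: apply a random phase mask in the
    input plane, Fourier transform, apply a second random phase mask in
    the Fourier plane, then inverse transform."""
    field = img * np.exp(1j * 2 * np.pi * phase1)
    spectrum = np.fft.fft2(field) * np.exp(1j * 2 * np.pi * phase2)
    return np.fft.ifft2(spectrum)

def drpe_decrypt(cipher, phase1, phase2):
    """Undo the two phase masks in reverse order to recover the image."""
    spectrum = np.fft.fft2(cipher) * np.exp(-1j * 2 * np.pi * phase2)
    field = np.fft.ifft2(spectrum) * np.exp(-1j * 2 * np.pi * phase1)
    return np.abs(field)  # original image was real-valued

rng = np.random.default_rng(0)
img = rng.random((64, 64))                     # stand-in for a real image
p1, p2 = rng.random((64, 64)), rng.random((64, 64))
cipher = drpe_encrypt(img, p1, p2)             # stationary white-noise-like field
recovered = drpe_decrypt(cipher, p1, p2)
```

With the correct masks the image is recovered to numerical precision; the linearity of this scheme is exactly what the cryptanalysis sections exploit via phase retrieval attacks.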

Journal ArticleDOI
TL;DR: A new 7-dimensional model of self-reported ways of being independent or interdependent is developed and validated across cultures and will allow future researchers to test more accurately the implications of cultural models of selfhood for psychological processes in diverse ecocultural contexts.
Abstract: Markus and Kitayama’s (1991) theory of independent and interdependent self-construals had a major influence on social, personality, and developmental psychology by highlighting the role of culture in psychological processes. However, research has relied excessively on contrasts between North American and East Asian samples, and commonly used self-report measures of independence and interdependence frequently fail to show predicted cultural differences. We revisited the conceptualization and measurement of independent and interdependent self-construals in 2 large-scale multinational surveys, using improved methods for cross-cultural research. We developed (Study 1: N = 2924 students in 16 nations) and validated across cultures (Study 2: N = 7279 adults from 55 cultural groups in 33 nations) a new 7-dimensional model of self-reported ways of being independent or interdependent. Patterns of global variation support some of Markus and Kitayama’s predictions, but a simple contrast between independence and interdependence does not adequately capture the diverse models of selfhood that prevail in different world regions. Cultural groups emphasize different ways of being both independent and interdependent, depending on individualism-collectivism, national socioeconomic development, and religious heritage. Our 7-dimensional model will allow future researchers to test more accurately the implications of cultural models of selfhood for psychological processes in diverse ecocultural contexts.

Journal ArticleDOI
TL;DR: Machine-washable textile triboelectric nanogenerators for human respiratory monitoring are developed through loom weaving of Cu-PET and PI-Cu-PET yarns; integrated into a chest strap, they monitor human respiratory rate and depth.
Abstract: Textile triboelectric nanogenerators for human respiratory monitoring with machine washability are developed through loom weaving of Cu-PET and PI-Cu-PET yarns. Triboelectric charges are generated at the yarn crisscross intersections to achieve a maximum short-circuit current density of 15.50 mA m⁻². When the textile is integrated into a chest strap, human respiratory rate and depth can be monitored.
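Extracting a breathing rate from such a chest strap amounts to finding the dominant frequency of a quasi-periodic current trace. A hedged sketch of that post-processing step, using a synthetic signal in place of real nanogenerator data (the sampling rate and the 0.1–1 Hz search band are assumptions, not values from the paper):

```python
import numpy as np

def respiratory_rate_bpm(signal, fs):
    """Estimate breathing rate (breaths/min) from the dominant spectral
    peak of a detrended sensor trace, searching a plausible 0.1-1 Hz band."""
    sig = signal - signal.mean()                 # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)       # 6-60 breaths per minute
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

fs = 50.0                                # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)             # one minute of samples
current = np.sin(2 * np.pi * 0.25 * t)   # synthetic trace, 15 breaths/min
rate = respiratory_rate_bpm(current, fs)  # ≈ 15 breaths/min
```

Respiratory depth could similarly be tracked from the amplitude envelope of the same trace, since the generated current scales with chest expansion.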

Journal ArticleDOI
TL;DR: This article evaluates bibliometric studies in tourism, depicts emerging themes, and offers critical discussion for theory development and future research, concluding that such studies remain scarce, particularly relational bibliometric analyses in tourism.

Journal ArticleDOI
TL;DR: In this article, an accelerated carbonation technique was employed to enhance the quality of recycled concrete aggregates (RCAs) and thereby improve the mechanical properties of new concrete, and the experimental results showed that the properties of the RCAs were improved after the carbonation treatment.
Abstract: An accelerated carbonation technique was employed in this study to enhance the quality of recycled concrete aggregates (RCAs). The properties of the carbonated RCAs and their influence on the mechanical properties of new concrete were then evaluated. Two types of RCAs were used: old RCAs sourced from demolished buildings and new RCAs derived from a designed concrete mixture. The chosen RCAs were first carbonated for 24 h in a carbonation chamber with a 100% CO2 concentration, at pressures of 0.1 bar and 5.0 bar, respectively. The experimental results showed that the properties of the RCAs were improved by the carbonation treatment. This resulted in enhanced performance of the new concrete prepared with the carbonated RCAs, most notably a marked increase in mechanical strength for the concrete prepared with 100% carbonated new RCAs. Moreover, the replacement percentage of natural aggregates by carbonated RCAs can be increased to 60% with an insignificant reduction in the mechanical properties of the new concrete.

Journal ArticleDOI
TL;DR: An alternating direction method (ADM)-based nonnegative latent factor (ANLF) model is proposed, which ensures fast convergence and high prediction accuracy, as well as the maintenance of nonnegativity constraints.
Abstract: Nonnegative matrix factorization (NMF)-based models possess fine representativeness of a target matrix, which is critically important in collaborative filtering (CF)-based recommender systems. However, current NMF-based CF recommenders suffer from high computational and storage complexity, as well as a slow convergence rate, which prevents them from industrial usage in the context of big data. To address these issues, this paper proposes an alternating direction method (ADM)-based nonnegative latent factor (ANLF) model. The main idea is to implement the ADM-based optimization with respect to each single latent feature, to obtain a high convergence rate as well as low complexity. Both the computational and storage costs of ANLF are linear in the size of the given data in the target matrix, which ensures high efficiency when dealing with the extremely sparse matrices usually seen in CF problems. As demonstrated by experiments on large, real data sets, ANLF also ensures fast convergence and high prediction accuracy, as well as the maintenance of nonnegativity constraints. Moreover, it is simple and easy to implement for real applications of learning systems.
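The general setting, factorizing only the observed entries of a sparse rating matrix under nonnegativity constraints, can be illustrated with a much-simplified sketch. This uses plain projected gradient descent rather than the paper's per-feature ADM optimizer, and every name and hyperparameter below is illustrative:

```python
import numpy as np

def nlf_train(R, mask, k=2, iters=500, lr=0.01, lam=0.05, seed=0):
    """Simplified nonnegative latent factor model: alternate gradient steps
    on user factors P and item factors Q over the observed entries only,
    projecting onto the nonnegative orthant after each step."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.random((m, k))
    Q = rng.random((n, k))
    for _ in range(iters):
        E = mask * (P @ Q.T - R)                     # residual on observed entries
        P = np.maximum(P - lr * (E @ Q + lam * P), 0.0)
        E = mask * (P @ Q.T - R)
        Q = np.maximum(Q - lr * (E.T @ P + lam * Q), 0.0)
    return P, Q

# Toy 3x3 rating matrix; 0 marks a missing (unobserved) rating.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [1, 1, 5]], dtype=float)
mask = (R > 0).astype(float)
P, Q = nlf_train(R, mask)
pred = P @ Q.T   # nonnegative predictions, including for the missing entries
```

The nonnegativity projection plays the role that the paper's maintained constraints do; the ADM formulation replaces these plain gradient steps with per-feature subproblems to gain its linear cost and faster convergence.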