
Showing papers by "Stevens Institute of Technology published in 2016"


Journal ArticleDOI
TL;DR: This paper reviews recent research articles on defining and quantifying resilience in various disciplines, with a focus on engineering systems, and provides a classification scheme for the approaches, distinguishing qualitative and quantitative approaches and their subcategories.

1,072 citations


Book ChapterDOI
01 Jan 2016
TL;DR: A review of the contributions to LFW for which the authors have provided results to the curators, together with the cross-cutting topic of alignment and how it is used in various methods.
Abstract: In 2007, Labeled Faces in the Wild was released in an effort to spur research in face recognition, specifically for the problem of face verification with unconstrained images. Since that time, more than 50 papers have been published that improve upon this benchmark in some respect. A remarkably wide variety of innovative methods have been developed to overcome the challenges presented in this database. As performance on some aspects of the benchmark approaches 100% accuracy, it seems appropriate to review this progress, derive what general principles we can from these works, and identify key future challenges in face recognition. In this survey, we review the contributions to LFW for which the authors have provided results to the curators (results found on the LFW results web page). We also review the cross-cutting topic of alignment and how it is used in various methods. We end with a brief discussion of recent databases designed to challenge the next generation of face recognition algorithms.

464 citations


Journal ArticleDOI
15 Apr 2016-Science
TL;DR: This architectural map explains the vast majority of the electron density of the scaffold and shows that, despite obvious differences in morphology and composition, the higher-order structure of the inner and outer rings is unexpectedly similar.
Abstract: Nuclear pore complexes (NPCs) are 110-megadalton assemblies that mediate nucleocytoplasmic transport. NPCs are built from multiple copies of ~30 different nucleoporins, and understanding how these nucleoporins assemble into the NPC scaffold imposes a formidable challenge. Recently, it has been shown how the Y complex, a prominent NPC module, forms the outer rings of the nuclear pore. However, the organization of the inner ring has remained unknown until now. We used molecular modeling combined with cross-linking mass spectrometry and cryo-electron tomography to obtain a composite structure of the inner ring. This architectural map explains the vast majority of the electron density of the scaffold. We conclude that despite obvious differences in morphology and composition, the higher-order structure of the inner and outer rings is unexpectedly similar.

261 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a global picture of liquid slip on structured surfaces to assist in the rational design of superhydrophobic surfaces for drag reduction, and discuss recent efforts to prevent the loss of the trapped gas layer (plastron) and the slippage it provides.
Abstract: A gas in between micro- or nanostructures on a submerged superhydrophobic (SHPo) surface allows the liquid on the structures to flow with an effective slip. If large enough, this slippage may entail a drag reduction appreciable for many flow systems. However, the large discrepancies among the slippage levels reported in the literature have led to a widespread misunderstanding on the drag-reducing ability of SHPo surfaces. Today we know that the amount of slip, generally quantified with a slip length, is mainly determined by the structural features of SHPo surfaces, such as the pitch, solid fraction, and pattern type, and further affected by secondary factors, such as the state of the liquid–gas interface. Reviewing the experimental data of laminar flows in the literature comprehensively and comparing them with the theoretical predictions, we provide a global picture of the liquid slip on structured surfaces to assist in rational design of SHPo surfaces for drag reduction. Because the trapped gas, called plastron, vanishes along with its slippage effect in most application conditions, lastly we discuss the recent efforts to prevent its loss. This review is limited to laminar flows, for which the SHPo drag reduction is reasonably well understood.

214 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate alternative measures of the tone of financial narrative and find that word-frequency tone measures based on domain-specific wordlists better predict the market reaction to earnings announcements, have greater statistical power in short-window event studies, and exhibit more economically consistent post-announcement drift.
Abstract: This study evaluates alternative measures of the tone of financial narrative. We present evidence that word-frequency tone measures based on domain-specific wordlists—compared to general wordlists—better predict the market reaction to earnings announcements, have greater statistical power in short-window event studies, and exhibit more economically consistent post-announcement drift. Further, inverse document frequency weighting, advocated in Loughran and McDonald (2011), provides little improvement to the alternative approach of equal weighting. We also provide evidence that word-frequency tone measures are as powerful as the Naive Bayesian machine-learning tone measure from Li (2010) in a regression of future earnings on MD&A tone. Overall, although more complex techniques are potentially advantageous in certain contexts, equal-weighted, domain-specific, word-frequency tone measures are generally just as powerful in the context of financial disclosure and capital markets. Such measures are als...
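The equal-weighted, word-frequency approach the study favors reduces to counting wordlist hits. A minimal sketch in Python, using tiny hypothetical wordlists in place of the domain-specific lists (such as Loughran-McDonald) that the paper evaluates:

```python
# Hypothetical mini-wordlists for illustration; real studies use
# domain-specific lists containing thousands of terms.
POSITIVE = {"gain", "growth", "improve", "strong", "exceed"}
NEGATIVE = {"loss", "decline", "impair", "weak", "litigation"}

def tone(text: str) -> float:
    """Equal-weighted word-frequency tone: (positive hits - negative hits)
    divided by total word count."""
    words = text.lower().split()
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)
```

Per the study's findings, this kind of equal weighting is generally about as powerful in the financial-disclosure setting as inverse document frequency weighting or Naive Bayesian scoring.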

196 citations


Proceedings ArticleDOI
24 Oct 2016
TL;DR: VoiceLive performs liveness detection by exploiting TDoA dynamics that are unique to a live speaker and do not exist under replay attacks; results show that it is robust to different phone placements and compatible with different sampling rates and phone models.
Abstract: Voice authentication is drawing increasing attention and becoming an attractive alternative to passwords for mobile authentication. Recent advances in mobile technology further accelerate the adoption of voice biometrics in an array of diverse mobile applications. However, recent studies show that voice authentication is vulnerable to replay attacks, where an adversary can spoof a voice authentication system using a pre-recorded voice sample collected from the victim. In this paper, we propose VoiceLive, a practical liveness detection system for voice authentication on smartphones. VoiceLive detects a live user by leveraging the user's unique vocal system and the stereo recording of smartphones. In particular, with the phone closely placed to a user's mouth, it captures time-difference-of-arrival (TDoA) changes in a sequence of phoneme sounds to the two microphones of the phone, and uses this unique TDoA dynamic, which does not exist under replay attacks, for liveness detection. VoiceLive is practical as it requires no additional hardware, only the two-channel stereo recording supported by virtually all smartphones. Our experimental evaluation with 12 participants and different types of phones shows that VoiceLive achieves over 99% detection accuracy at around 1% Equal Error Rate (EER). Results also show that VoiceLive is robust to different phone placements and compatible with different sampling rates and phone models.

166 citations


Journal ArticleDOI
TL;DR: This paper considers the line spectral estimation problem and proposes an iterative reweighted method which jointly estimates the sparse signals and the unknown parameters associated with the true dictionary, and achieves super resolution and outperforms other state-of-the-art methods in many cases of practical interest.
Abstract: Conventional compressed sensing theory assumes signals have sparse representations in a known dictionary. Nevertheless, in many practical applications such as line spectral estimation, the sparsifying dictionary is usually characterized by a set of unknown parameters in a continuous domain. To apply the conventional compressed sensing technique to such applications, the continuous parameter space has to be discretized to a finite set of grid points, based on which a “nominal dictionary” is constructed for sparse signal recovery. Discretization, however, inevitably incurs errors since the true parameters do not necessarily lie on the discretized grid. This error, also referred to as grid mismatch, leads to deteriorated recovery performance. In this paper, we consider the line spectral estimation problem and propose an iterative reweighted method which jointly estimates the sparse signals and the unknown parameters associated with the true dictionary. The proposed algorithm is developed by iteratively decreasing a surrogate function majorizing a given log-sum objective function, leading to a gradual and interweaved iterative process to refine the unknown parameters and the sparse signal. A simple yet effective scheme is developed for adaptively updating the regularization parameter that controls the tradeoff between the sparsity of the solution and the data fitting error. Theoretical analysis is conducted to justify the proposed method. Simulation results show that the proposed algorithm achieves super resolution and outperforms other state-of-the-art methods in many cases of practical interest.
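The core majorization-minimization step, minus the dictionary-parameter refinement that gives the method its super-resolution, can be sketched as an iteratively reweighted ridge regression on a fixed dictionary. The function name and parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np

def reweighted_logsum(A, y, lam=0.1, eps=1e-3, iters=50):
    """Minimize ||y - A x||^2 + lam * sum(log(x_i^2 + eps)) by
    majorization-minimization: each iteration solves the quadratic
    majorizer, a ridge problem with weights 1 / (x_i^2 + eps) taken
    from the previous iterate, so large coefficients are penalized less."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        w = 1.0 / (x**2 + eps)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x
```

On noiseless data with a sparse ground truth, the iteration drives off-support coefficients toward zero while leaving the support nearly unbiased, which is the behavior the log-sum surrogate is chosen for.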

150 citations


Journal ArticleDOI
TL;DR: In this paper, an approach to exploit TV white space (TVWS) for device-to-device (D2D) communications with the aid of the existing cellular infrastructure is presented.
Abstract: This paper presents a systematic approach to exploiting TV white space (TVWS) for device-to-device (D2D) communications with the aid of the existing cellular infrastructure. The goal is to build a location-specific TVWS database, which provides a lookup table service for any D2D link to determine its maximum permitted emission power (MPEP) in an unlicensed digital TV (DTV) band. To achieve this goal, the idea of mobile crowd sensing is first introduced to collect active spectrum measurements from massive personal mobile devices. Considering the incompleteness of crowd measurements, we formulate the problem of unknown measurements recovery as a matrix completion problem and apply a powerful fixed point continuation algorithm to reconstruct the unknown elements from the known elements. By jointly exploiting the big spectrum data in its vicinity, each cellular base station further implements a nonlinear support vector machine algorithm to perform irregular coverage boundary detection of a licensed DTV transmitter. With the knowledge of the detected coverage boundary, an opportunistic spatial reuse algorithm is developed for each D2D link to determine its MPEP. Simulation results show that the proposed approach can successfully enable D2D communications in TVWS while satisfying the interference constraint from the licensed DTV services. In addition, to the best of our knowledge, this is the first attempt to explore and exploit TVWS inside the DTV protection region resulting from the shadowing effect. Potential application scenarios include Internet-of-Vehicles communications in underground parking garages and D2D communications in hotspots such as subways, stadiums, and airports.
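The matrix-completion step can be illustrated with singular-value thresholding, which uses the same singular-value shrinkage operator at the heart of the fixed point continuation algorithm the paper applies. This is a generic sketch under that reading, not the paper's implementation:

```python
import numpy as np

def complete_matrix(M, mask, tau=50.0, step=1.2, iters=500):
    """Recover a low-rank matrix from the entries where mask is True,
    by iterating singular-value shrinkage (SVT-style): shrink the
    singular values of the running iterate, then take a dual ascent
    step on the residual of the observed entries."""
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold singular values
        Y += step * mask * (M - X)                # enforce observed entries
    return X
```

On an incoherent low-rank matrix with a majority of entries observed, the unobserved entries are filled in by the low-rank structure alone.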

150 citations


Journal ArticleDOI
TL;DR: It is found that even limited injections of fake reviews can have a significant effect on a business's visibility, and the factors that contribute to this effect are explored.
Abstract: Extant research has focused on the detection of fake reviews on online review platforms, motivated by the well-documented impact of customer reviews on the users’ purchase decisions. The problem is typically approached from the perspective of protecting the credibility of review platforms, as well as the reputation and revenue of the reviewed firms. However, there is little examination of the vulnerability of individual businesses to fake review attacks. This study focuses on formalizing the visibility of a business to the customer base and on evaluating its vulnerability to fake review attacks. We operationalize visibility as a function of the features that a business can cover and its position in the platform’s review-based ranking. Using data from over 2.3 million reviews of 4,709 hotels from 17 cities, we study how visibility can be impacted by different attack strategies. We find that even limited injections of fake reviews can have a significant effect and explore the factors that contribute to this...

140 citations


Proceedings ArticleDOI
30 May 2016
TL;DR: The Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence; this is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.
Abstract: The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential for monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people access key-based security systems. Existing methods of obtaining such secret information rely on installation of dedicated hardware (e.g., a video camera or fake keypad), or training with labeled data from body sensors, which restricts use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enables attackers to reproduce the trajectories of the user's hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments were conducted with over 5,000 key entry traces collected from 20 adults for key-based security systems (i.e., ATM keypads and regular keyboards) through testing on different kinds of wearables. Results demonstrate that such a technique can achieve 80% accuracy with only one try and more than 90% accuracy with three tries, which, to our knowledge, makes it the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.

134 citations


Journal ArticleDOI
TL;DR: From the experiments, it can be argued that touchstroke dynamics can be quite competitive, at least when compared to similar results obtained from keystroke evaluation studies.
Abstract: Keystroke dynamics is a well-investigated behavioural biometric based on the way and rhythm in which someone interacts with a keyboard or keypad when typing characters. This paper explores the potential of this modality but for touchscreen-equipped smartphones. The main research question posed is whether 'touchstroking' can be effective in building the biometric profile of a user, in terms of typing pattern, for future authentication. To reach this goal, we implemented a touchstroke system in the Android platform and executed different scenarios under disparate methodologies to estimate its effectiveness in authenticating the end-user. Apart from typical classification features used in legacy keystroke systems, we introduce two novel ones, namely, speed and distance. From the experiments, it can be argued that touchstroke dynamics can be quite competitive, at least when compared to similar results obtained from keystroke evaluation studies. As far as we are aware, this is the first time this newly arisen behavioural trait is put into focus. Copyright © 2014 John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: In this paper, the optimal antenna selection (OAS) and suboptimal antenna selection (SAS) schemes are proposed to improve the security of source-destination transmissions in a multiple-input-multiple-output (MIMO) system consisting of one source, one destination, and one eavesdropper.
Abstract: In this paper, we consider a multiple-input-multiple-output (MIMO) system consisting of one source, one destination, and one eavesdropper, where each node is equipped with an arbitrary number of antennas. To improve the security of source-destination transmissions, we investigate the antenna selection at the source and propose the optimal antenna selection (OAS) and suboptimal antenna selection (SAS) schemes, depending on whether the source node has the global channel state information (CSI) of both the main link (from source to destination) and the wiretap link (from source to eavesdropper). Moreover, the traditional space-time transmission (STT) is studied as a benchmark. We evaluate the secrecy performance of STT, SAS, and OAS schemes in terms of the probability of zero secrecy capacity. Furthermore, we examine the generalized secrecy diversity of the STT, SAS, and OAS schemes through an asymptotic analysis of the probability of zero secrecy capacity as the ratio between the average gains of the main and wiretap channels tends to infinity. This is different from the conventional secrecy diversity that assumes an infinite signal-to-noise ratio (SNR) received at the destination under the condition that the eavesdropper has a finite received SNR. It is shown that the generalized secrecy diversity orders of the STT, SAS, and OAS schemes are the product of the number of antennas at source and destination. Additionally, numerical results show that the proposed OAS scheme strictly outperforms both the STT and the SAS schemes in terms of the probability of zero secrecy capacity.
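The two selection rules can be sketched as follows. This is an illustrative reading (channel-gain maximization for SAS, main-to-wiretap gain ratio as a secrecy proxy for OAS), not the paper's exact criteria:

```python
import numpy as np

def suboptimal_as(h_main):
    """SAS: pick the transmit antenna with the strongest main-link gain.
    Uses only the source-destination CSI. h_main: (n_tx, n_rx_dest)."""
    return int(np.argmax(np.sum(np.abs(h_main) ** 2, axis=1)))

def optimal_as(h_main, h_wiretap):
    """OAS: pick the antenna maximizing the main-to-wiretap gain ratio,
    a proxy for secrecy capacity. Needs global CSI of both links."""
    gm = np.sum(np.abs(h_main) ** 2, axis=1)
    ge = np.sum(np.abs(h_wiretap) ** 2, axis=1)
    return int(np.argmax(gm / ge))
```

The sketch mirrors the paper's distinction: SAS needs only the main-link CSI, while OAS additionally requires the wiretap-link CSI, and the two rules can disagree when the strongest main-link antenna is also strongly heard by the eavesdropper.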

Posted Content
TL;DR: A CANDECOMP/PARAFAC (CP) decomposition-based method is proposed for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients); the analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small.
Abstract: We consider the problem of downlink channel estimation for millimeter wave (mmWave) MIMO-OFDM systems, where both the base station (BS) and the mobile station (MS) employ large antenna arrays for directional precoding/beamforming. Hybrid analog and digital beamforming structures are employed in order to offer a compromise between hardware complexity and system performance. Different from most existing studies that are concerned with narrowband channels, we consider estimation of wideband mmWave channels with frequency selectivity, which is more appropriate for mmWave MIMO-OFDM systems. By exploiting the sparse scattering nature of mmWave channels, we propose a CANDECOMP/PARAFAC (CP) decomposition-based method for channel parameter estimation (including angles of arrival/departure, time delays, and fading coefficients). In our proposed method, the received signal at the BS is expressed as a third-order tensor. We show that the tensor has the form of a low-rank CP decomposition, and the channel parameters can be estimated from the associated factor matrices. Our analysis reveals that the uniqueness of the CP decomposition can be guaranteed even when the size of the tensor is small. Hence the proposed method has the potential to achieve substantial training overhead reduction. We also develop Cramer-Rao bound (CRB) results for channel parameters, and compare our proposed method with a compressed sensing-based method. Simulation results show that the proposed method attains mean square errors that are very close to their associated CRBs, and presents a clear advantage over the compressed sensing-based method in terms of both estimation accuracy and computational complexity.
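A CP decomposition of a third-order tensor can be computed with plain alternating least squares; below is a minimal numpy sketch of generic CP-ALS, not the paper's channel-specific estimator:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Khatri-Rao product: row (i, j) is X[i, :] * Y[j, :]."""
    r = X.shape[1]
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, r)

def cp_als(T, rank, iters=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor by alternating
    least squares. Returns factors A, B, C with
    T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(iters):
        # solve for each factor in turn against the matching unfolding
        A = np.linalg.lstsq(khatri_rao(B, C), T.reshape(I, -1).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T.transpose(1, 0, 2).reshape(J, -1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T.transpose(2, 0, 1).reshape(K, -1).T, rcond=None)[0].T
    return A, B, C
```

The factors are only unique up to scaling and permutation of the rank-one components, which is exactly the essential-uniqueness notion the paper's analysis addresses.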

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This work proposes a novel approach to predict mutual information (MI) using Bayesian optimization, and demonstrates that the proposed method provides not only computational efficiency and rapid map entropy reduction, but also robustness in comparison with competing approaches.
Abstract: We consider an autonomous exploration problem in which a mobile robot is guided by an information-based controller through an a priori unknown environment, choosing to collect its next measurement at the location estimated to be most informative within its current field of view. We propose a novel approach to predict mutual information (MI) using Bayesian optimization. Over several iterations, candidate sensing actions are suggested by Bayesian optimization and added to a committee that repeatedly trains a Gaussian process (GP). The GP estimates MI throughout the robot's action space, serving as the basis for an acquisition function used to select the next candidate. The best sensing action in the committee is executed by the robot. This approach is compared over several environments with two batch methods, one which chooses the most informative action from a set of pseudo-random samples whose MI is explicitly evaluated, and one that applies GP regression to this sample set. Our computational results demonstrate that the proposed method provides not only computational efficiency and rapid map entropy reduction, but also robustness in comparison with competing approaches.

Journal ArticleDOI
TL;DR: The proposed work introduces a new model that, for the first time, allows comparison of resilience across systems by adding a new dimension, called stress, to system resilience models, mimicking the definition of resilience in materials science.

Proceedings ArticleDOI
12 Sep 2016
TL;DR: It is proposed that multi-modal sensing (in-ear audio plus head and wrist motion) can be used to more accurately classify food type, as audio and motion features provide complementary information, and that knowing food type is critical for estimating the amount consumed in combination with sensor data.
Abstract: Determining when an individual is eating can be useful for tracking behavior and identifying patterns, but to create nutrition logs automatically or provide real-time feedback to people with chronic disease, we need to identify both what they are consuming and in what quantity. However, food type and amount have mainly been estimated using image data (requiring user involvement) or acoustic sensors (tested with a restricted set of foods rather than representative meals). As a result, there is not yet a highly accurate automated nutrition monitoring method that can be used with a variety of foods. We propose that multi-modal sensing (in-ear audio plus head and wrist motion) can be used to more accurately classify food type, as audio and motion features provide complementary information. Further, we propose that knowing food type is critical for estimating amount consumed in combination with sensor data. To test this we use data from people wearing audio and motion sensors, with ground truth annotated from video and continuous scale data. With data from 40 unique foods we achieve a classification accuracy of 82.7% with a combination of sensors (versus 67.8% for audio alone and 76.2% for head and wrist motion). Weight estimation error was reduced from a baseline of 127.3% to 35.4% absolute relative error. Ultimately, our estimates of food type and amount can be linked to food databases to provide automated calorie estimates from continuously-collected data.

Journal ArticleDOI
TL;DR: In this article, the authors proposed a layered pilot transmission scheme and a CANDECOMP/PARAFAC decomposition-based method for joint estimation of the channels from multiple users (i.e., MSs) to the BS.
Abstract: We consider the problem of uplink channel estimation for millimeter wave (mmWave) systems, where the base station (BS) and mobile stations (MSs) are equipped with large antenna arrays to provide sufficient beamforming gain for outdoor wireless communications. Hybrid analog and digital beamforming structures are employed by both the BS and the MS due to hardware constraints. We propose a layered pilot transmission scheme and a CANDECOMP/PARAFAC (CP) decomposition-based method for joint estimation of the channels from multiple users (i.e., MSs) to the BS. The proposed method exploits the intrinsic low-rank structure of the multiway data collected from multiple modes, where the low-rank structure is a result of the sparse scattering nature of the mmWave channel. The uniqueness of the CP decomposition is studied, and the sufficient conditions for essential uniqueness are obtained. The conditions shed light on the design of the beamforming matrix, the combining matrix, and the pilot sequences, and meanwhile provide general guidelines for choosing system parameters. Our analysis reveals that our proposed method can achieve a substantial training overhead reduction by leveraging the low-rank structure of the received signal. Simulation results show that the proposed method presents a clear advantage over a compressed sensing-based method in terms of both estimation accuracy and computational complexity.

Journal ArticleDOI
TL;DR: Results suggest that a weighted flow capacity rate, which accounts for both the contribution of an edge to maximum network flow and the extent to which the edge is a bottleneck in the network, shows most promise across four instances of varying network sizes and densities.

Journal ArticleDOI
TL;DR: An expectation-maximization based estimator is proposed, along with a modified cross-correlation (MCC) estimator, a computationally simpler solution resulting from an approximation of the former; the Cramér-Rao lower bound for the estimation problem is also derived.
Abstract: We consider the problem of joint delay-Doppler estimation of a moving target in a passive radar that employs a non-cooperative illuminator of opportunity (IO) for target illumination, a reference channel (RC) steered to the IO to obtain a reference signal, and a surveillance channel (SC) for target monitoring. We consider a practically motivated scenario, where the RC receives a noise-contaminated copy of the IO signal and the SC observation is polluted by a direct-path interference that is usually neglected by prior studies. We develop a data model without discretizing the parameter space, which may lead to a straddle loss, by treating both the delay and Doppler as continuous parameters. We propose an expectation-maximization based estimator, as well as a modified cross-correlation (MCC) estimator that is a computationally simpler solution resulting from an approximation of the former. In addition, we derive the Cramér-Rao lower bound for the estimation problem. Simulation results are presented to illustrate the performance of the proposed estimators and the widely used CC estimator.
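For delay-only estimation on sampled data, the widely used CC estimator mentioned above reduces to locating the peak of the cross-correlation between the reference and surveillance channels. A minimal sketch (the paper's estimators treat delay and Doppler as continuous parameters to avoid straddle loss; that subtlety is omitted here):

```python
import numpy as np

def cc_delay(ref, surv):
    """Classic cross-correlation (CC) delay estimate: the integer lag at
    which the surveillance channel best matches the reference channel."""
    c = np.correlate(surv, ref, mode='full')
    # index len(ref)-1 of the 'full' output corresponds to zero lag
    return int(np.argmax(c) - (len(ref) - 1))
```

With a noise-like reference waveform, the correlation peak is sharp and the estimate is exact to the nearest sample.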

Journal ArticleDOI
TL;DR: In this article, the authors investigate how fairness concerns influence supply-chain decision-making, while examining the effect of private production-cost information and touching on issues related to bounded rationality.

Journal ArticleDOI
TL;DR: An accurate vehicle speed estimation system, SenSpeed, is proposed, which senses natural driving conditions in urban environments, including making turns, stopping, and passing through uneven road surfaces, to derive reference points and eliminate the speed estimation deviations caused by acceleration errors.
Abstract: Acquiring instant vehicle speed is desirable and a cornerstone to many important vehicular applications. This paper utilizes smartphone sensors to estimate the vehicle speed, especially when GPS is unavailable or inaccurate in urban environments. In particular, we estimate the vehicle speed by integrating the accelerometer's readings over time and find that acceleration errors can lead to large deviations between the estimated speed and the real one. Further analysis shows that the changes of acceleration errors are very small over time and can be corrected at some points, called reference points, where the true vehicle speed can be estimated. Recognizing this observation, we propose an accurate vehicle speed estimation system, SenSpeed, which senses natural driving conditions in urban environments, including making turns, stopping, and passing through uneven road surfaces, to derive reference points and further eliminate the speed estimation deviations caused by acceleration errors. Extensive experiments demonstrate that SenSpeed is accurate and robust in real driving environments. On average, the real-time speed estimation error on local roads is 2.1 km/h, and the offline speed estimation error is as low as 1.21 km/h, whereas the average error of GPS is 5.0 and 4.5 km/h, respectively.
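The core idea, integrating acceleration and cancelling the accumulated error at points where the true speed is known, can be sketched as follows. The piecewise-linear drift correction and the `ref_points` interface are illustrative assumptions, not SenSpeed's actual algorithm:

```python
def estimate_speed(accel, dt, ref_points):
    """Integrate acceleration samples (m/s^2, spacing dt seconds) into a
    speed trace, then remove the slowly-growing integration drift using
    reference points where the true speed is known, e.g. zero at a full
    stop. ref_points maps sample index -> true speed (m/s)."""
    # raw dead-reckoning speed by integration
    speed = [0.0]
    for a in accel:
        speed.append(speed[-1] + a * dt)
    # cancel drift linearly between consecutive reference points
    idxs = sorted(ref_points)
    for i0, i1 in zip(idxs, idxs[1:]):
        e0 = speed[i0] - ref_points[i0]   # drift at the earlier reference
        e1 = speed[i1] - ref_points[i1]   # drift at the later reference
        for n in range(i0, i1 + 1):
            frac = (n - i0) / (i1 - i0)
            speed[n] -= e0 + frac * (e1 - e0)
    return speed
```

With a constant accelerometer bias and two stops bracketing the trace, the linear correction removes the drift entirely, which is the effect the paper attributes to its reference points.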

Proceedings Article
01 Jan 2016
TL;DR: It is shown that attackers can find hidden information, such as CPI's SafeStacks, in seconds by means of thread spraying; it is also found that it is hard to remove all sensitive information from a program, and that residual sensitive information allows attackers to bypass defenses completely.
Abstract: In the absence of hardware-supported segmentation, many state-of-the-art defenses resort to “hiding” sensitive information at a random location in a very large address space. This paper argues that information hiding is a weak isolation model and shows that attackers can find hidden information, such as CPI’s SafeStacks, in seconds—by means of thread spraying. Thread spraying is a novel attack technique which forces the victim program to allocate many hidden areas. As a result, the attacker has a much better chance to locate these areas and compromise the defense. We demonstrate the technique by means of attacks on Firefox, Chrome, and MySQL. In addition, we found that it is hard to remove all sensitive information (such as pointers to the hidden region) from a program and show how residual sensitive information allows attackers to bypass defenses completely. We also show how we can harden information hiding techniques by means of an Authenticating Page Mapper (APM) which builds on a user-level page-fault handler to authenticate arbitrary memory reads/writes in the virtual address space. APM bootstraps protected applications with a minimum-sized safe area. Every time the program accesses this area, APM authenticates the access operation, and, if legitimate, expands the area on demand. We demonstrate that APM hardens information hiding significantly while increasing the overhead, on average, 0.3% on baseline SPEC CPU 2006, 0.0% on SPEC with SafeStack and 1.4% on SPEC with CPI.

Proceedings ArticleDOI
16 May 2016
TL;DR: A novel algorithm to produce descriptive online 3D occupancy maps using Gaussian processes, which may serve both as an improved-accuracy classifier, and as a predictive tool to support autonomous navigation.
Abstract: We present a novel algorithm to produce descriptive online 3D occupancy maps using Gaussian processes (GPs). GP regression and classification have met with recent success in their application to robot mapping, as GPs are capable of expressing rich correlation among map cells and sensor data. However, the cubic computational complexity has limited its application to large-scale mapping and online use. In this paper we address this issue first by proposing test-data octrees, octrees within blocks of the map that prune away nodes of the same state, condensing the number of test data used in a regression, in addition to allowing fast data retrieval. We also propose a nested Bayesian committee machine which, after new sensor data is partitioned among several GP regressions, fuses the result and updates the map with greatly reduced complexity. Finally, by adjusting the range of influence of the training data and tuning a variance threshold implemented in our method's binary classification step, we are able to control the richness of inference achieved by GPs - and its tradeoff with classification accuracy. The performance of the proposed approach is evaluated with both simulated and real data, demonstrating that the method may serve both as an improved-accuracy classifier, and as a predictive tool to support autonomous navigation.

Journal ArticleDOI
TL;DR: A flood hazard assessment is presented that improves confidence in the understanding of the region's present-day potential for flooding, by separately including the contributions of tropical cyclones (TCs) and extratropical cyclones (ETCs) and validating the modeling study at multiple stages against historical observations.
Abstract: Recent studies of flood risk at New York Harbor (NYH) have shown disparate results for the 100-year storm tide, providing an uncertain foundation for the flood mitigation response after Hurricane Sandy. Here, we present a flood hazard assessment that improves confidence in our understanding of the region's present-day potential for flooding, by separately including the contribution of tropical cyclones (TCs) and extratropical cyclones (ETCs), and validating our modeling study at multiple stages against historical observations. The TC assessment is based on a climatology of 606 synthetic storms developed from a statistical-stochastic model of North Atlantic TCs. The ETC assessment is based on simulations of historical storms with many random tide scenarios. Synthetic TC landfall rates and the final TC and ETC flood exceedance curves are all shown to be consistent with curves computed using historical data, within 95% confidence ranges. Combining the ETC and TC results together, the 100-year return period storm tide at NYH is 2.70 m (2.51-2.92 at 95% confidence), and Hurricane Sandy's storm tide of 3.38 m was a 260-year (170-420) storm tide. Deeper analyses of historical flood reports from estimated Category-3 hurricanes in 1788 and 1821 lead to new estimates and reduced uncertainties for their floods, and show that Sandy's storm tide was the largest at NYH back to at least 1700. The flood exceedance curves for ETCs and TCs have sharply different slopes due to their differing meteorology and frequency, warranting separate treatment in hazard assessments.
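The return periods quoted above come from exceedance curves: an N-year storm tide is the level whose annual exceedance probability is 1/N. A small sketch of reading a return level off an empirical exceedance curve (synthetic annual maxima, not the paper's data):

```python
import numpy as np

def return_level(annual_maxima, return_period_yr):
    """Empirical water level exceeded on average once per
    `return_period_yr` years, from a series of annual maxima."""
    x = np.sort(np.asarray(annual_maxima, dtype=float))[::-1]  # descending
    n = len(x)
    # Weibull plotting position: exceedance prob of k-th largest = k/(n+1)
    exceed_prob = (np.arange(n) + 1) / (n + 1)
    target = 1.0 / return_period_yr
    return float(np.interp(target, exceed_prob, x))

rng = np.random.default_rng(0)
tides = rng.gumbel(loc=1.5, scale=0.4, size=500)  # synthetic annual maxima (m)
print(return_level(tides, 10), return_level(tides, 100))
```

The sharply different slopes of the TC and ETC curves mean the two storm types dominate different parts of this curve, which is why the paper treats them separately before combining.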

Posted Content
TL;DR: In this paper, the authors propose a new framework that makes it possible to re-write or compress the content of any number of blocks in decentralized services exploiting blockchain technology, for reasons ranging from the necessity to remove inappropriate content and the possibility to support applications requiring re-writable storage, to "the right to be forgotten."
Abstract: We put forward a new framework that makes it possible to re-write or compress the content of any number of blocks in decentralized services exploiting the blockchain technology. As we argue, there are several reasons to prefer an editable blockchain, spanning from the necessity to remove inappropriate content and the possibility to support applications requiring re-writable storage, to "the right to be forgotten." Our approach generically leverages so-called chameleon hash functions (Krawczyk and Rabin, NDSS '00), which allow determining hash collisions efficiently, given a secret trapdoor information. We detail how to integrate a chameleon hash function in virtually any blockchain-based technology, for both cases where the power of redacting the blockchain content is in the hands of a single trusted entity and where such a capability is distributed among several distrustful parties (as is the case with Bitcoin). We also report on a proof-of-concept implementation of a redactable blockchain, building on top of Nakamoto's Bitcoin core. The prototype only requires minimal changes to the way current client software interprets the information stored in the blockchain and to the current blockchain, block, or transaction structures. Moreover, our experiments show that the overhead imposed by a redactable blockchain is small compared to the case of an immutable one.
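The chameleon hash functions the framework leverages can be illustrated with the classic discrete-log instantiation: H(m, r) = g^m · h^r mod p, with trapdoor x = log_g h. Whoever holds x can find a collision for any new message, which is exactly what lets the authorized party rewrite a block's content without breaking the hash links. A toy sketch with deliberately tiny parameters (real deployments need cryptographic-size primes):

```python
# Toy discrete-log chameleon hash (Krawczyk-Rabin style), for illustration.
# p = 2q + 1 is a safe prime; g generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4
x = 777                    # trapdoor (secret key)
h = pow(g, x, p)           # public key

def chash(m: int, r: int) -> int:
    """H(m, r) = g^m * h^r mod p; exponents reduced mod q."""
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def collide(m: int, r: int, m_new: int) -> int:
    """Use trapdoor x to find r_new with chash(m_new, r_new) == chash(m, r):
    solve m + x*r = m_new + x*r_new (mod q) for r_new."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

old = chash(m=123, r=45)
r_new = collide(123, 45, m_new=999)
assert chash(999, r_new) == old   # block content changed, hash unchanged
```

Without the trapdoor, finding such a collision would require solving the discrete logarithm, so the chain remains immutable for everyone else.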

Journal ArticleDOI
TL;DR: Top-down approaches to determine the structure of the intact NPC in situ have converged, thereby bridging the resolution gap from the higher-order scaffold structure to near-atomic resolution and opening the door for structure-guided experimental interrogations of NPC function.
Abstract: Elucidating the structure of the nuclear pore complex (NPC) is a prerequisite for understanding the molecular mechanism of nucleocytoplasmic transport. However, owing to its sheer size and flexibility, the NPC is unapproachable by classical structure determination techniques and requires a joint effort of complementary methods. Whereas bottom-up approaches rely on biochemical interaction studies and crystal-structure determination of NPC components, top-down approaches attempt to determine the structure of the intact NPC in situ. Recently, both approaches have converged, thereby bridging the resolution gap from the higher-order scaffold structure to near-atomic resolution and opening the door for structure-guided experimental interrogations of NPC function.

Journal ArticleDOI
20 Dec 2016
TL;DR: In this article, the authors demonstrate the first QFC temporal mode sorting system in a four-dimensional Hilbert space, achieving a conversion efficiency and mode separability as high as 92% and 0.84, respectively.
Abstract: Quantum frequency conversion (QFC) of photonic signals preserves quantum information while simultaneously changing the signal wavelength. A common application of QFC is to translate the wavelength of a signal compatible with the current fiber-optic infrastructure to a shorter wavelength more compatible with high-quality single-photon detectors and optical memories. Recent work has investigated the use of QFC to manipulate and measure specific temporal modes (TMs) through tailoring the pump pulses. Such a scheme holds promise for multidimensional quantum state manipulation that is both low loss and re-programmable on a fast time scale. We demonstrate the first QFC temporal mode sorting system in a four-dimensional Hilbert space, achieving a conversion efficiency and mode separability as high as 92% and 0.84, respectively. A 20-GHz pulse train is projected onto 6 different TMs, including superposition states, and mode separability with weak coherent signals is verified via photon counting. Such ultrafast high-dimensional photonic signals could enable long-distance quantum communication at high rates.
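Mode separability reflects how orthogonal the sorted temporal modes remain. Assuming a Hermite-Gauss TM basis (a common choice for such experiments; the paper's actual pulse shapes are not reproduced here), the overlap matrix an ideal sorter would realize can be computed numerically:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

t = np.linspace(-10.0, 10.0, 4001)   # dimensionless time grid
dt = t[1] - t[0]

def hg_mode(n: int) -> np.ndarray:
    """n-th Hermite-Gauss temporal mode, numerically normalized."""
    coef = np.zeros(n + 1)
    coef[n] = 1.0
    psi = hermval(t, coef) * np.exp(-t ** 2 / 2)
    return psi / np.sqrt(np.sum(psi ** 2) * dt)

modes = np.stack([hg_mode(n) for n in range(4)])  # 4-dim TM Hilbert space
overlap = modes @ modes.T * dt                    # Gram matrix of the basis
print(np.round(overlap, 3))
```

For an ideal sorter the Gram matrix is the identity, so projections onto distinct TMs carry no cross-talk; imperfect conversion mixes modes and pulls the measured separability below 1.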

Journal ArticleDOI
TL;DR: An adaptive memory programming (AMP) metaheuristic to address the robust capacitated vehicle routing problem under demand uncertainty and presents two classes of uncertainty sets for which route feasibility can be established much more efficiently.
Abstract: We present an adaptive memory programming (AMP) metaheuristic to address the robust capacitated vehicle routing problem under demand uncertainty. Contrary to its deterministic counterpart, the robust formulation allows for uncertain customer demands, and the objective is to determine a minimum cost delivery plan that is feasible for all demand realizations within a prespecified uncertainty set. A crucial step in our heuristic is to verify the robust feasibility of a candidate route. For generic uncertainty sets, this step requires the solution of a convex optimization problem, which becomes computationally prohibitive for large instances. We present two classes of uncertainty sets for which route feasibility can be established much more efficiently. Although we discuss our implementation in the context of the AMP framework, our techniques readily extend to other metaheuristics. Computational studies on standard literature benchmarks with up to 483 customers and 38 vehicles demonstrate that the proposed approach...
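For the widely used budgeted (cardinality-constrained) uncertainty set, robust feasibility of a route reduces to a sort: the worst case realizes the Γ largest demand deviations among the route's customers. A hedged sketch of that check (the budgeted set is one standard choice; the paper's two specific set classes are not reproduced here):

```python
def route_is_robust_feasible(nominal, deviation, gamma, capacity):
    """Check feasibility for every demand realization in the budgeted set
    { d : d_i = nominal_i + z_i * deviation_i, 0 <= z_i <= 1,
          sum(z_i) <= gamma }.
    For integer gamma, the worst case is the nominal route load plus the
    `gamma` largest deviations, so no convex solver is needed."""
    worst = sum(nominal) + sum(sorted(deviation, reverse=True)[:gamma])
    return worst <= capacity

# Route serving 4 customers with a vehicle of capacity 100:
print(route_is_robust_feasible([20, 25, 15, 30], [5, 10, 3, 8],
                               gamma=2, capacity=100))   # → False (90 + 18 > 100)
```

This O(n log n) check is the kind of shortcut that makes robust feasibility testing cheap enough to run inside the inner loop of a metaheuristic.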

Journal ArticleDOI
TL;DR: A new derivation of the Rao test based on the subspace model is presented, and a modified Rao test (MRT) is proposed by introducing a tunable parameter to demonstrate that the MRT can offer the flexibility of being adjustable in the mismatched case where the target signal deviates from the presumed signal subspace.
Abstract: The problem of detecting a subspace signal is studied in colored Gaussian noise with an unknown covariance matrix. In the subspace model, the target signal belongs to a known subspace, but with unknown coordinates. We first present a new derivation of the Rao test based on the subspace model, and then propose a modified Rao test (MRT) by introducing a tunable parameter. The MRT is more general, which includes the Rao test and the generalized likelihood ratio test as special cases. Moreover, closed-form expressions for the probabilities of false alarm and detection of the MRT are derived, which show that the MRT bears a constant false alarm rate property against the noise covariance matrix. Numerical results demonstrate that the MRT can offer the flexibility of being adjustable in the mismatched case where the target signal deviates from the presumed signal subspace. In particular, the MRT provides better mismatch rejection capacities as the tunable parameter increases.
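The quantity underlying subspace detectors like the Rao test and GLRT is the energy of the whitened observation inside the (whitened) signal subspace. A generic sketch with a known noise covariance (illustrative only; the paper's MRT handles unknown covariance via training data and adds the tunable parameter):

```python
import numpy as np

def subspace_statistic(y, H, R):
    """Energy of observation y inside span(H) after whitening by the
    noise covariance R (assumed known here for simplicity)."""
    L = np.linalg.cholesky(R)
    y_w = np.linalg.solve(L, y)      # whitened observation
    H_w = np.linalg.solve(L, H)      # whitened subspace basis
    P = H_w @ np.linalg.pinv(H_w)    # orthogonal projector onto span(H_w)
    return float(y_w @ P @ y_w)

rng = np.random.default_rng(1)
N, p = 16, 3
H = rng.standard_normal((N, p))          # known signal subspace basis
R = np.eye(N)                            # white noise for the demo
signal = H @ np.array([1.0, -0.5, 2.0])  # target with unknown coordinates
noise = rng.standard_normal(N)
print(subspace_statistic(noise, H, R), subspace_statistic(signal + noise, H, R))
```

When the target truly lies in span(H) the statistic captures its full energy; a mismatched target leaks energy outside the subspace, which is the regime where the MRT's tunable parameter trades detection power for rejection capability.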

Journal ArticleDOI
TL;DR: Current knowledge of the innate and adaptive immune responses induced by PDT against tumors is summarized, providing evidence for PDT-facilitated antitumor immunity.
Abstract: Photodynamic therapy (PDT) is a minimally invasive therapeutic strategy for cancer treatment that can destroy local tumor cells and induce a systemic antitumor immune response. Beyond improving the direct cytotoxicity of PDT to tumor cells, there is growing interest in developing approaches that further exploit the immune-stimulatory properties of PDT. In this review we summarize current knowledge of the innate and adaptive immune responses induced by PDT against tumors, providing evidence for PDT-facilitated antitumor immunity. Various immunotherapeutic approaches targeting different cell types are reviewed for their effectiveness in improving treatment efficiency in concert with PDT. Future perspectives are discussed for further enhancing PDT efficiency via intracellularly targeted drug delivery as well as the development of optimized experimental models for studying the antitumor immune response.