
Showing papers by "Stevens Institute of Technology published in 2018"


Proceedings ArticleDOI
01 Oct 2018
TL;DR: A lightweight and ground-optimized lidar odometry and mapping method, LeGO-LOAM, for real-time six degree-of-freedom pose estimation with ground vehicles, which is also integrated into a SLAM framework to eliminate the pose estimation error caused by drift.
Abstract: We propose a lightweight and ground-optimized lidar odometry and mapping method, LeGO-LOAM, for realtime six degree-of-freedom pose estimation with ground vehicles. LeGO-LOAM is lightweight, as it can achieve realtime pose estimation on a low-power embedded system. LeGO-LOAM is ground-optimized, as it leverages the presence of a ground plane in its segmentation and optimization steps. We first apply point cloud segmentation to filter out noise, and feature extraction to obtain distinctive planar and edge features. A two-step Levenberg-Marquardt optimization method then uses the planar and edge features to solve different components of the six degree-of-freedom transformation across consecutive scans. We compare the performance of LeGO-LOAM with a state-of-the-art method, LOAM, using datasets gathered from variable-terrain environments with ground vehicles, and show that LeGO-LOAM achieves similar or better accuracy with reduced computational expense. We also integrate LeGO-LOAM into a SLAM framework to eliminate the pose estimation error caused by drift, which is tested using the KITTI dataset.
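The two-step optimization above splits naturally: planar (ground) features constrain the vertical offset, roll, and pitch, after which edge features resolve the remaining horizontal translation and yaw. Below is a minimal stdlib Python sketch of the first step only, under a small-angle plane-fit assumption; the function names and sign conventions are illustrative, not LeGO-LOAM's actual implementation.

```python
import math

def solve3(A, b):
    """Gauss-Jordan elimination for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(3):
            if r != i:
                f = M[r][i] / M[i][i]
                M[r] = [x - f * y for x, y in zip(M[r], M[i])]
    return [M[i][3] / M[i][i] for i in range(3)]

def ground_step(points):
    """Stage 1: least-squares fit of the plane z = a*x + b*y + c to the
    segmented ground points, then read off roll, pitch, and height."""
    Sxx = sum(x * x for x, y, z in points); Sxy = sum(x * y for x, y, z in points)
    Syy = sum(y * y for x, y, z in points); Sx = sum(x for x, y, z in points)
    Sy = sum(y for x, y, z in points); Sz = sum(z for x, y, z in points)
    Sxz = sum(x * z for x, y, z in points); Syz = sum(y * z for x, y, z in points)
    a, b, c = solve3([[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, len(points)]],
                     [Sxz, Syz, Sz])
    # Small-angle reading: slope along x ~ pitch, slope along y ~ roll.
    return math.atan(b), math.atan(a), c  # roll, pitch, ground height
```

A second, analogous step would then minimize edge-feature distances over the remaining horizontal translation and yaw with these three values held fixed.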

960 citations


Journal ArticleDOI
TL;DR: Blockchain tokens may democratize entrepreneurship by giving entrepreneurs new ways to raise funds and engage stakeholders, and innovation by giving innovators a new way to develop, deploy, and diffuse decentralized applications.

294 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a vision, named Dragnet, tailoring the recently emerging Cognitive Internet of Things framework for amateur drone surveillance, and provide an exemplary case study on the detection and classification of authorized and unauthorized amateur drones, where an important event is being held and only authorized drones are allowed to fly over.
Abstract: Drones, also known as mini-unmanned aerial vehicles, have attracted increasing attention due to their boundless applications in communications, photography, agriculture, surveillance, and numerous public services. However, the deployment of amateur drones poses various safety, security, and privacy threats. To cope with these challenges, amateur drone surveillance has become a very important but largely unexplored topic. In this article, we first present a brief survey to show the state-of-the-art studies on amateur drone surveillance. Then we propose a vision, named Dragnet, tailoring the recently emerging Cognitive Internet of Things framework for amateur drone surveillance. Next, we discuss the key enabling techniques for Dragnet in detail, accompanied by the technical challenges and open issues. Furthermore, we provide an exemplary case study on the detection and classification of authorized and unauthorized amateur drones, where, for example, an important event is being held and only authorized drones are allowed to fly over.

279 citations


Proceedings ArticleDOI
15 Oct 2018
TL;DR: LEMNA is proposed, a high-fidelity explanation method dedicated to security applications. It approximates a local area of the complex deep learning decision boundary with a simple interpretable model and achieves a much higher fidelity level than existing methods.
Abstract: While deep learning has shown a great potential in various domains, the lack of transparency has limited its application in security or safety-critical areas. Existing research has attempted to develop explanation techniques to provide interpretable explanations for each classification decision. Unfortunately, current methods are optimized for non-security tasks ( e.g., image analysis). Their key assumptions are often violated in security applications, leading to a poor explanation fidelity. In this paper, we propose LEMNA, a high-fidelity explanation method dedicated for security applications. Given an input data sample, LEMNA generates a small set of interpretable features to explain how the input sample is classified. The core idea is to approximate a local area of the complex deep learning decision boundary using a simple interpretable model. The local interpretable model is specially designed to (1) handle feature dependency to better work with security applications ( e.g., binary code analysis); and (2) handle nonlinear local boundaries to boost explanation fidelity. We evaluate our system using two popular deep learning applications in security (a malware classifier, and a function start detector for binary reverse-engineering). Extensive evaluations show that LEMNA's explanation has a much higher fidelity level compared to existing methods. In addition, we demonstrate practical use cases of LEMNA to help machine learning developers to validate model behavior, troubleshoot classification errors, and automatically patch the errors of the target models.

229 citations


Journal ArticleDOI
TL;DR: In this paper, the sharp corners of a metal nanocube are used to deform a 2D material to create an induced strain that can then create excitons at defined locations.
Abstract: Solid-state single-quantum emitters are crucial resources for on-chip photonic quantum technologies and require efficient cavity–emitter coupling to realize quantum networks beyond the single-node level. Monolayer WSe2, a transition metal dichalcogenide semiconductor, can host randomly located quantum emitters, while nanobubbles as well as lithographically defined arrays of pillars in contact with the transition metal dichalcogenide act as spatially controlled stressors. The induced strain can then create excitons at defined locations. This ability to create zero-dimensional (0D) excitons anywhere within a 2D material is promising for the development of scalable quantum technologies, but so far lacks mature cavity integration and suffers from low emitter quantum yields. Here we demonstrate a deterministic approach to achieve Purcell enhancement at lithographically defined locations using the sharp corners of a metal nanocube for both electric field enhancement and to deform a 2D material. This nanoplasmonic platform allows the study of the same quantum emitter before and after coupling. For a 3 × 4 array of quantum emitters we show Purcell factors of up to 551 (average of 181), single-photon emission rates of up to 42 MHz and a narrow exciton linewidth as low as 55 μeV. Furthermore, the use of flux-grown WSe2 increases the 0D exciton lifetimes to up to 14 ns and the cavity-enhanced quantum yields from an initial value of 1% to up to 65% (average 44%). An array of Au nanocubes combined with a planar mirror simultaneously induces single-photon emitters at lithographically defined locations in WSe2 and enables directional outcoupling of Purcell-enhanced single photons with high quantum yield.
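For context, the Purcell factor reported above is conventionally defined by the standard cavity-QED expression (stated here as background; the notation is not taken from the paper):

```latex
F_P = \frac{3}{4\pi^2}\left(\frac{\lambda}{n}\right)^{3}\frac{Q}{V_m}
```

where Q is the cavity quality factor, V_m the optical mode volume, λ the free-space emission wavelength, and n the refractive index. Plasmonic gap modes such as nanocube corners have modest Q, so Purcell factors of several hundred imply deeply subwavelength mode volumes.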

193 citations


Journal ArticleDOI
TL;DR: A deep reinforcement learning-based method is developed, which the secondary user can use to intelligently adjust its transmit power such that after a few rounds of interaction with the primary user, both users can transmit their own data successfully with required qualities of service.
Abstract: We consider the problem of spectrum sharing in a cognitive radio system consisting of a primary user and a secondary user. The primary user and the secondary user work in a non-cooperative manner. Specifically, the primary user is assumed to update its transmitted power based on a pre-defined power control policy. The secondary user does not have any knowledge about the primary user’s transmit power, or its power control strategy. The objective of this paper is to develop a learning-based power control method for the secondary user in order to share the common spectrum with the primary user. To assist the secondary user, a set of sensor nodes are spatially deployed to collect the received signal strength information at different locations in the wireless environment. We develop a deep reinforcement learning-based method, which the secondary user can use to intelligently adjust its transmit power such that after a few rounds of interaction with the primary user, both users can transmit their own data successfully with required qualities of service. Our experimental results show that the secondary user can interact with the primary user efficiently to reach a goal state (defined as a state in which both users can successfully transmit their data) from any initial state within a few steps.
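The interaction loop described above can be miniaturized for illustration. The paper trains a deep reinforcement learning agent on sensor measurements; the sketch below substitutes tabular Q-learning over discrete power levels and a toy interference model (all gains, thresholds, power levels, and the primary's policy are invented for this example, not taken from the paper):

```python
import random

LEVELS = [0.2, 0.4, 0.6, 0.8, 1.0]      # discrete transmit powers (invented)
NOISE, G_CROSS, SINR_MIN = 0.1, 0.1, 2.0

def sinr(p_own, p_other):
    """Toy SINR with unit direct gain and a weak cross-interference gain."""
    return p_own / (NOISE + G_CROSS * p_other)

def primary_response(p_s):
    """Pre-defined primary policy: lowest power level meeting its own QoS."""
    for p in LEVELS:
        if sinr(p, p_s) >= SINR_MIN:
            return p
    return LEVELS[-1]

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    n = len(LEVELS)
    Q = {(s, a): 0.0 for s in range(n) for a in range(n)}
    rng = random.Random(seed)
    for _ in range(episodes):
        s = rng.randrange(n)                 # index of current primary power
        for _ in range(10):
            a = (rng.randrange(n) if rng.random() < eps
                 else max(range(n), key=lambda x: Q[(s, x)]))
            p_s = LEVELS[a]
            p_p = primary_response(p_s)      # primary reacts to our choice
            s2 = LEVELS.index(p_p)
            goal = (sinr(p_p, p_s) >= SINR_MIN and sinr(p_s, p_p) >= SINR_MIN)
            r = 1.0 if goal else -0.1
            best_next = max(Q[(s2, x)] for x in range(n))
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
            if goal:
                break
    return Q
```

After training, the greedy action from any initial primary power reaches a goal state, in which both links meet the SINR threshold, within a step or two.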

163 citations


Journal ArticleDOI
TL;DR: This paper presents the preliminaries of spectrum inference, including the sources of spectrum occupancy statistics, the models of spectrum usage, and the predictability of spectrum state evolution, and offers an in-depth tutorial on the existing algorithms.
Abstract: Spectrum inference, also known as spectrum prediction in the literature, is a promising technique of inferring the occupied/free state of radio spectrum from already known/measured spectrum occupancy statistics by effectively exploiting the inherent correlations among them. In the past few years, spectrum inference has gained increasing attention owing to its wide applications in cognitive radio networks (CRNs), ranging from adaptive spectrum sensing, and predictive spectrum mobility, to dynamic spectrum access and smart topology control, to name just a few. In this paper, we provide a comprehensive survey and tutorial on the recent advances in spectrum inference. Specifically, we first present the preliminaries of spectrum inference, including the sources of spectrum occupancy statistics, the models of spectrum usage, and characterize the predictability of spectrum state evolution. By introducing the taxonomy of spectrum inference from a time-frequency-space perspective, we offer an in-depth tutorial on the existing algorithms. Furthermore, we provide a comparative analysis of various spectrum inference algorithms and discuss the metrics of evaluating the efficiency of spectrum inference. We also portray the various potential applications of spectrum inference in CRNs and beyond, with an outlook to the fifth-generation mobile communications and next generation high frequency communications systems. Last but not least, we highlight the critical research challenges and open issues ahead.
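A minimal instance of the time-domain inference surveyed here is a two-state (idle = 0, busy = 1) Markov model of channel occupancy, with transition probabilities estimated from measured statistics. The stdlib sketch below is illustrative only, not an algorithm from the survey:

```python
def fit_markov(history):
    """Estimate 2-state transition probabilities from a 0/1 occupancy record."""
    counts = {(i, j): 0 for i in (0, 1) for j in (0, 1)}
    for prev, cur in zip(history, history[1:]):
        counts[(prev, cur)] += 1
    def prob(i, j):
        row = counts[(i, 0)] + counts[(i, 1)]
        return counts[(i, j)] / row if row else 0.5   # uninformed prior
    return {(i, j): prob(i, j) for i in (0, 1) for j in (0, 1)}

def predict_next(history):
    """Most likely next channel state given the last observation."""
    T = fit_markov(history)
    last = history[-1]
    return 1 if T[(last, 1)] >= T[(last, 0)] else 0
```

Richer predictors in the survey replace this first-order chain with higher-order models, neural networks, or spatio-temporal estimators, but the fit-then-predict structure is the same.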

139 citations


Journal ArticleDOI
TL;DR: A method is reported for endowing human amputees with a kinesthetic perception of dexterous robotic hands by vibrating the muscles used for prosthetic control via a neural-machine interface, which instilled participants with a sense of agency over the robotic movements.
Abstract: To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement's progress. This largely nonconscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. We report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines.

127 citations


Journal ArticleDOI
TL;DR: In this paper, a two-stage compressed sensing method for mmWave channel estimation is proposed, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage.
Abstract: We consider the problem of channel estimation for millimeter wave (mmWave) systems, where, to minimize the hardware complexity and power consumption, an analog transmit beamforming and receive combining structure with only one radio frequency chain at the base station and mobile station is employed. Most existing works for mmWave channel estimation exploit sparse scattering characteristics of the channel. In addition to sparsity, mmWave channels may exhibit angular spreads over the angle of arrival, angle of departure, and elevation domains. In this paper, we show that angular spreads give rise to a useful low-rank structure that, along with the sparsity, can be simultaneously utilized to reduce the sample complexity, i.e., the number of samples needed to successfully recover the mmWave channel. Specifically, to effectively leverage the joint sparse and low-rank structure, we develop a two-stage compressed sensing method for mmWave channel estimation, where the sparse and low-rank properties are respectively utilized in two consecutive stages, namely, a matrix completion stage and a sparse recovery stage. Our theoretical analysis reveals that the proposed two-stage scheme can achieve a lower sample complexity than a conventional compressed sensing method that exploits only the sparse structure of the mmWave channel. Simulation results are provided to corroborate our theoretical results and to show the superiority of the proposed two-stage method.

125 citations


Journal ArticleDOI
TL;DR: This review discusses the recent advancements in the use of oligoaniline-based conductive biomaterials for tissue engineering and regenerative medicine applications and introduces the salient features, the hurdles that must be overcome, the hopes, and practical constraints for further development.

111 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: In this paper, a novel Instance Transformation Network is presented to learn the geometry-aware representation encoding the unique geometric configurations of scene text instances with in-network transformation embedding, resulting in a robust and elegant framework to detect words or text lines at one pass.
Abstract: Localizing text in the wild is challenging in the situations of complicated geometric layout of the targets like random orientation and large aspect ratio. In this paper, we propose a geometry-aware modeling approach tailored for scene text representation with an end-to-end learning scheme. In our approach, a novel Instance Transformation Network (ITN) is presented to learn the geometry-aware representation encoding the unique geometric configurations of scene text instances with in-network transformation embedding, resulting in a robust and elegant framework to detect words or text lines at one pass. An end-to-end multi-task learning strategy with transformation regression, text/non-text classification and coordinates regression is adopted in the ITN. Experiments on the benchmark datasets demonstrate the effectiveness of the proposed approach in detecting scene text in various geometric configurations.

Journal ArticleDOI
TL;DR: In this paper, a review and analysis of the literature regarding the application of social media to emergency management is conducted and identified research gaps are mapped into social and technological challenges, which are then analyzed to set research directions for practitioners and researchers.
Abstract: Social media applications have proven to be a dependable communication channel even when traditional methods fail. Their application to emergency management offers new benefits to the domain. For instance, analysis of information as the event unfolds may increase situational awareness, news and alerts may reach larger audiences in less time and decision makers may monitor public activities as well as coordinate with stakeholders. With such benefits, it seems the adoption of social media applications to emergency management should be automatic. However, their implementation introduces risks as well. To better understand the benefits and challenges, a review and analysis of the literature regarding the application of social media to emergency management was conducted. Identified research gaps were mapped into social and technological challenges. These challenges were then analyzed to set research directions for practitioners and researchers.

Journal ArticleDOI
01 Sep 2018
TL;DR: An overview of pesticide pollution in agricultural soils and of remediation techniques for pesticide-contaminated soils, highlighting microbial functions in the rhizosphere and gene analysis tools as remediation research areas that have generated considerable recent interest.
Abstract: An increasing number of pesticides have been used in agriculture to protect crops from pests, weeds, and diseases, but as much as 80 to 90% of applied pesticides hit non-target vegetation and remain as residue in the environment, posing a grave potential risk to the agricultural ecosystem. This review gives an overview of the pollution in agricultural soils by pesticides, and the remediation techniques for pesticide-contaminated soils. Currently, the remediation techniques involve physical, chemical, and biological remediation as well as combined approaches for the removal of contaminants. Microbial functions in the rhizosphere, studied with gene analysis tools, are an area of pesticide-contaminated soil remediation that has generated a lot of interest lately. However, most of those studies were done in greenhouses; more research should be done under field conditions for proper evaluation of the efficiency of the proposed techniques. Long-term monitoring and evaluation of in situ remediation techniques should also be done in order to assess their long-term sustainability and practical applications in the field.

Journal ArticleDOI
TL;DR: The proposed novel noncontact heart-beat signal modeling and estimation algorithm using a compact 2.4-GHz Doppler radar is accurate, robust, and simple, and demonstrates an average heart-beat detection accuracy of more than 90% at a distance of 1.5 m away from the subjects.
Abstract: This paper presents the theoretical and experimental study of a novel noncontact heart-beat signal modeling and estimation algorithm using a compact 2.4-GHz Doppler radar. The proposed technique is able to accurately reconstruct the heart-beat signal and generates heart rate variability indices at a distance of 1.5 m away from the human body. The feasibility of the proposed approach is validated by obtaining data from eight human subjects and comparing them with photoplethysmography (PPG) measurements. A Gaussian pulse train model is suggested for the heart-beat signal along with a modified-and-combined autocorrelation and frequency-time phase regression technique for high-accuracy detection of the human heart-beat rate. The proposed method is accurate, robust, and simple, and demonstrates an average heart-beat detection accuracy of more than 90% at a distance of 1.5 m away from the subjects. In addition, the average beat-to-beat time intervals extracted from the proposed model and signal reconstruction method show less than 2% error compared to PPG measurements. Bland–Altman analysis further validated the accuracy of the proposed approach in comparison with reference data.
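The modeling idea can be illustrated in a few lines: synthesize a Gaussian pulse train at the beat rate and recover the beat period from the dominant autocorrelation lag. This stdlib sketch omits the paper's frequency-time phase regression as well as respiration and body-motion clutter; the sample rate and pulse width below are assumed values:

```python
import math

FS = 100  # sample rate in Hz (assumed)

def gaussian_pulse_train(rate_hz, seconds, width=0.05):
    """Heart-beat model: periodic Gaussian pulses at the given beat rate."""
    period = 1.0 / rate_hz
    sig = []
    for i in range(int(seconds * FS)):
        t = i / FS
        d = t - round(t / period) * period   # offset to the nearest beat
        sig.append(math.exp(-0.5 * (d / width) ** 2))
    return sig

def estimate_rate(sig, lo_bpm=40, hi_bpm=180):
    """Beats per minute from the autocorrelation peak in the valid lag range."""
    mean = sum(sig) / len(sig)
    x = [v - mean for v in sig]
    def acf(lag):
        return sum(a * b for a, b in zip(x, x[lag:]))
    lo = int(FS * 60 / hi_bpm)               # shortest plausible beat period
    hi = int(FS * 60 / lo_bpm)               # longest plausible beat period
    best = max(range(lo, hi + 1), key=acf)
    return 60.0 * FS / best
```

For example, `estimate_rate(gaussian_pulse_train(1.2, 10))` recovers roughly 72 beats per minute, limited by the one-sample lag resolution.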

Journal ArticleDOI
TL;DR: New light is shed on the mechanism of iron-porphyrin and hemoprotein-catalyzed cyclopropanation reactions, which is expected to facilitate future efforts toward sustainable carbene transfer catalysis using these systems.
Abstract: Catalytic carbene transfer to olefins is a useful approach to synthesize cyclopropanes, which are key structural motifs in many drugs and biologically active natural products. While catalytic methods for olefin cyclopropanation have largely relied on rare transition-metal-based catalysts, recent studies have demonstrated the promise and synthetic value of iron-based heme-containing proteins for promoting these reactions with excellent catalytic activity and selectivity. Despite this progress, the mechanism of iron-porphyrin and hemoprotein-catalyzed olefin cyclopropanation has remained largely unknown. Using a combination of quantum chemical calculations and experimental mechanistic analyses, the present study shows for the first time that the increasingly useful C═C functionalizations mediated by heme carbenes feature an FeII-based, nonradical, concerted nonsynchronous mechanism, with early transition state character. This mechanism differs from the FeIV-based, radical, stepwise mechanism of heme-depende...

Journal ArticleDOI
TL;DR: It is shown that the same model with different initial seeding zones reproduces the characteristic evolution of different prionlike diseases, and the expected evolution of the total toxic protein load is recovered.
Abstract: Many neurodegenerative diseases are related to the propagation and accumulation of toxic proteins throughout the brain. The lesions created by aggregates of these toxic proteins further lead to cell death and accelerated tissue atrophy. A striking feature of some of these diseases is their characteristic pattern and evolution, leading to well-codified disease stages visible to neuropathology and associated with various cognitive deficits and pathologies. Here, we simulate the anisotropic propagation and accumulation of toxic proteins in full brain geometry. We show that the same model with different initial seeding zones reproduces the characteristic evolution of different prionlike diseases. We also recover the expected evolution of the total toxic protein load. Finally, we couple our transport model to a mechanical atrophy model to obtain the typical degeneration patterns found in neurodegenerative diseases.
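Propagation-and-accumulation models of this kind are commonly written as an anisotropic Fisher-KPP reaction-diffusion equation; the standard form is given here as background (the notation is assumed, not quoted from the paper):

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \left( \mathbf{D}\, \nabla c \right) + \alpha \, c \, (1 - c)
```

where c is the normalized toxic protein concentration, D is an anisotropic diffusion tensor favoring transport along axonal fiber directions, and α sets the local conversion rate. Different initial seeding zones then select which disease-specific spreading pattern the same equation reproduces.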

Journal ArticleDOI
TL;DR: In this article, a facile technique based on polymer encapsulation was used to apply several percent (>5%) controllable strains to monolayer and few-layer transition metal dichalcogenides (TMDs).
Abstract: We describe a facile technique based on polymer encapsulation to apply several percent (>5%) controllable strains to monolayer and few-layer transition metal dichalcogenides (TMDs). We use this technique to study the lattice response to strain via polarized Raman spectroscopy in monolayer WSe2 and WS2. The application of strain causes mode-dependent red shifts, with larger shift rates observed for in-plane modes. We observe a splitting of the degeneracy of the in-plane E′ modes in both materials and measure the Grüneisen parameters. At large strain, we observe that the reduction of crystal symmetry can lead to a change in the polarization response of the A′ mode in WS2. While both WSe2 and WS2 exhibit similar qualitative changes in the phonon structure with strain, we observe much larger changes in mode positions and intensities with strain in WS2. These differences can be explained simply by the degree of ionicity of the metal–chalcogen bond.

Journal ArticleDOI
TL;DR: Nanostructured surfaces are widely called “promising” for controlling bacterial adhesion and biofilm formation. They can be distinguished by the periodic or random occurrence of their nanofeatures, although merging of those nanofeatures often renders such surfaces effectively microstructured.
Abstract: Nanostructured surfaces are called “promising” to control bacterial adhesion and biofilm formation. Initial adhesion is followed by emergence of surface-programmed bacterial properties and biofilm growth. An easy distinction between nanostructured surfaces can be made on the basis of periodic or random occurrence of nanofeatures, although nanostructured surfaces are often microstructured due to merging of their nanofeatures. Characterization of nanostructured surfaces is not trivial due to the myriad of different nanoscaled morphologies. Both superhydrophobic and hydrophilic nanostructured surfaces generally yield low bacterial adhesion. On smooth surfaces, bacteria deform when adhering, causing membrane surface tension changes and accompanying responses yielding emergent properties. Adhesion to nanostructured surfaces causes multiple cell wall deformation sites when bacteria adhere in valleys, while in the case of hill-top adhesion, highly localized cell wall deformation occurs. Accordingly, bacterial adhesion to nanostructured surfaces yields emergent responses that range from pressure-induced EPS production to cell wall rupture and death, based upon which nanostructured surfaces are consistently called “promising” for bacterial adhesion and biofilm control. Other promising features of nanostructured surfaces are increased antibiotic housing, thermal effects, and photo-induced ROS production, but the latter two promises are largely based on properties of suspended nanoparticles and may not hold when particles are comprised in nanostructured coatings or materials. Moreover, in order to bring nanostructured coatings and materials to application, experiments are needed that go beyond the current limit of the laboratory bench.

Journal ArticleDOI
TL;DR: A head impact detection method that can be implemented on a wearable sensor to detect football head impacts on the field, using a support vector machine classifier that combines biomechanical features from the time domain and frequency domain with model predictions of head-neck motions.
Abstract: Accumulation of head impacts may contribute to acute and long-term brain trauma. Wearable sensors can measure impact exposure, yet current sensors do not have validated impact detection methods for accurate exposure monitoring. Here we demonstrate a head impact detection method that can be implemented on a wearable sensor for detecting field football head impacts. Our method incorporates a support vector machine classifier that uses biomechanical features from the time domain and frequency domain, as well as model predictions of head-neck motions. The classifier was trained and validated using instrumented mouthguard data from collegiate football games and practices, with ground truth data labels established from video review. We found that low frequency power spectral density and wavelet transform features (10–30 Hz) were the best performing features. From forward feature selection, fewer than ten features optimized classifier performance, achieving 87.2% sensitivity and 93.2% precision in cross-validation on the collegiate dataset (n = 387), and over 90% sensitivity and precision on an independent youth dataset (n = 32). Accurate head impact detection is essential for studying and monitoring head impact exposure on the field, and the approach in the current paper may help to improve impact detection performance on wearable sensors.
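One of the best-performing feature families above, low-frequency spectral power, is easy to sketch. The fragment below (stdlib Python, a simplified stand-in for the paper's PSD and wavelet features; the SVM itself is not shown) computes the fraction of signal power in the 10–30 Hz band with a direct DFT:

```python
import cmath
import math

def band_power(signal, fs, f_lo=10.0, f_hi=30.0):
    """Fraction of total spectral power inside [f_lo, f_hi] Hz (direct DFT)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]           # remove DC before transforming
    total = band = 0.0
    for k in range(1, n // 2):               # positive-frequency bins only
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        p = abs(X) ** 2
        total += p
        if f_lo <= k * fs / n <= f_hi:
            band += p
    return band / total if total else 0.0
```

A 20 Hz impact-like oscillation scores near 1, while a 60 Hz vibration artifact scores near 0, which is the kind of separation the classifier exploits.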

Book ChapterDOI
01 Jan 2018
TL;DR: In this article, the authors summarized the problems related to heavy metal pollution and various heavy metal remediation technologies, including phytoremediation, which is a green technology that is environment friendly and less expensive compared with other conventional methods.
Abstract: Although heavy metals are naturally occurring compounds, anthropogenic activities introduce them in excessive quantities in different environmental matrices, which impose severe threats on both human and ecosystem health. Heavy metals are nondegradable and can bioaccumulate in living organisms; hence they can contaminate the entire food chain. Remediation of heavy metals requires proper attention to protect soil quality, the ecosystem, and human health. Physical and chemical heavy metal remediation technologies are very expensive, often destructive to the local ecosystem, and require handling of a large amount of hazardous waste. On the other hand, emerging technologies such as phytoremediation have great potential. Phytoremediation is a “green” technology that is environment friendly and less expensive compared with other conventional methods. This chapter summarizes the problems related to heavy metal pollution and various heavy metal remediation technologies.

Journal ArticleDOI
TL;DR: This strategy represents a new avenue for guided tissue regeneration by designing the grafts to promote tissue remodeling via controlling structure, degradation and mechanical properties of the scaffolds.

Journal ArticleDOI
TL;DR: The results suggest that the stiffness of the brain, unlike that of any other organ, is a dynamic property that is highly sensitive to the metabolic environment.
Abstract: Alterations in brain rheology are increasingly recognized as a diagnostic marker for various neurological conditions. Magnetic resonance elastography now allows us to assess brain rheology repeatably, reproducibly, and non-invasively in vivo. Recent elastography studies suggest that brain stiffness decreases one percent per year during normal aging, and is significantly reduced in Alzheimer's disease and multiple sclerosis. While existing studies successfully compare brain stiffnesses across different populations, they fail to provide insight into changes within the same brain. Here we characterize rheological alterations in one and the same brain under extreme metabolic changes: alive and dead. Strikingly, the storage and loss moduli of the cerebrum increased by 26% and 60% within only three minutes post mortem and continued to increase by 40% and 103% within 45 minutes. Immediate post mortem stiffening displayed pronounced regional variations; it was largest in the corpus callosum and smallest in the brainstem. We postulate that post mortem stiffening is a manifestation of alterations in polarization, oxidation, perfusion, and metabolism immediately after death. Our results suggest that the stiffness of our brain, unlike that of any other organ, is a dynamic property that is highly sensitive to the metabolic environment. Our findings emphasize the importance of characterizing brain tissue in vivo and question the relevance of ex vivo brain tissue testing as a whole. Knowing the true stiffness of the living brain has important consequences in diagnosing neurological conditions, planning neurosurgical procedures, and modeling the brain's response to high impact loading.

Journal ArticleDOI
TL;DR: This paper analyzes environmental sustainability assessment methods to enable more accurate decisions earlier in design and identifies opportunities for aligning standard data representation to promote sustainability assessment during design.

Book ChapterDOI
16 Sep 2018
TL;DR: This paper decomposes brain tumor segmentation into three different but related tasks, proposes a multi-task deep model that trains them together to exploit their underlying correlation, and introduces a simple yet effective post-processing method that further improves segmentation performance significantly.
Abstract: The model cascade strategy that runs a series of deep models sequentially for coarse-to-fine medical image segmentation is becoming increasingly popular, as it effectively relieves the class imbalance problem. This strategy has achieved state-of-the-art performance in many segmentation applications but results in undesired system complexity and ignores correlation among deep models. In this paper, we propose a light and clean deep model that conducts brain tumor segmentation in a single-pass and solves the class imbalance problem better than model cascade. First, we decompose brain tumor segmentation into three different but related tasks and propose a multi-task deep model that trains them together to exploit their underlying correlation. Second, we design a curriculum learning-based training strategy that trains the above multi-task model more effectively. Third, we introduce a simple yet effective post-processing method that can further improve the segmentation performance significantly. The proposed methods are extensively evaluated on BRATS 2017 and BRATS 2015 datasets, ranking first on the BRATS 2015 test set and showing top performance among 60+ competing teams on the BRATS 2017 validation set.

Book ChapterDOI
08 Sep 2018
TL;DR: In this paper, an end-to-end multi-context collaborative deep network was proposed for removing distortions from single fisheye images, which learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters.
Abstract: Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions. Rectification of fisheye images is therefore a crucial preprocessing step for many computer vision applications. In this paper, we propose an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images. In contrast to conventional approaches, which focus on extracting hand-crafted features from input images, our method learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters. To facilitate training, we construct a synthesized dataset that covers various scenes and distortion parameter settings. Experiments on both synthesized and real-world datasets show that the proposed model significantly outperforms current state of the art methods. Our code and synthesized dataset will be made publicly available.
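The synthesized-dataset step described above can be illustrated with a minimal sketch. The abstract does not specify the distortion model, so this sketch assumes the common equidistant fisheye model r_d = f·θ; the function name and focal length are hypothetical, and a real pipeline would resample whole images rather than single points.

```python
import math

def fisheye_distort(x, y, f=300.0):
    """Map an undistorted (pinhole) image point to its fisheye position
    under the equidistant model r_d = f * theta, where theta is the
    incidence angle of the ray. Coordinates are relative to the
    principal point; f is the focal length in pixels."""
    r_u = math.hypot(x, y)
    if r_u == 0.0:
        return (0.0, 0.0)
    theta = math.atan(r_u / f)   # ray angle for a pinhole camera
    r_d = f * theta              # equidistant fisheye radius
    s = r_d / r_u
    return (x * s, y * s)

# Barrel distortion: points move toward the center, more so near the edges.
xd, yd = fisheye_distort(400.0, 0.0)
print(xd)  # smaller than the undistorted radius of 400
```

Sweeping the distortion parameter (here, f) over many values and scenes is what produces the paired distorted/undistorted training data the network learns from.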

Journal ArticleDOI
TL;DR: A comprehensive dielectric spectroscopy study is conducted for the first time to characterize the ultra-wideband dielectrics properties of freshly excised normal and malignant skin tissues obtained from skin cancer patients having undergone Mohs micrographic surgeries at Hackensack University Medical Center.
Abstract: Millimeter waves have recently gained attention for the evaluation of skin lesions and the detection of skin tumors. Such evaluations heavily rely on the dielectric contrasts existing between normal and malignant skin tissues at millimeter-wave frequencies. However, current studies on the dielectric properties of normal and diseased skin tissues at these frequencies are limited and inconsistent. In this study, a comprehensive dielectric spectroscopy study is conducted for the first time to characterize the ultra-wideband dielectric properties of freshly excised normal and malignant skin tissues obtained from skin cancer patients having undergone Mohs micrographic surgeries at Hackensack University Medical Center. Measurements are conducted using a precision slim-form open-ended coaxial probe in conjunction with a millimeter-wave vector network analyzer over the frequency range of 0.5–50 GHz. A one-pole Cole–Cole model is fitted to the complex permittivity dataset of each sample. Statistically considerable contrasts are observed between the dielectric properties of malignant and normal skin tissues over the ultra-wideband millimeter-wave frequency range considered.
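The one-pole Cole-Cole fit mentioned above has a standard closed form that can be evaluated directly. The parameter values below are hypothetical placeholders for skin-like tissue, not the paper's fitted values.

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def cole_cole(f_hz, eps_inf, d_eps, tau, alpha, sigma_s):
    """One-pole Cole-Cole complex relative permittivity:
    eps*(w) = eps_inf + d_eps / (1 + (j*w*tau)^(1-alpha)) + sigma_s / (j*w*EPS0)
    where w = 2*pi*f, tau is the relaxation time, alpha broadens the
    dispersion, and sigma_s is the static ionic conductivity (S/m)."""
    w = 2.0 * math.pi * f_hz
    jwt = 1j * w * tau
    return eps_inf + d_eps / (1.0 + jwt ** (1.0 - alpha)) + sigma_s / (1j * w * EPS0)

# Illustrative (hypothetical) skin-like parameters, not the paper's fits:
eps = cole_cole(30e9, eps_inf=4.0, d_eps=32.0, tau=7e-12, alpha=0.1, sigma_s=0.5)
print(eps.real, -eps.imag)  # real permittivity and loss factor at 30 GHz
```

Fitting this model to the measured 0.5–50 GHz complex permittivity of each sample reduces the dataset to five parameters per tissue specimen, which makes contrasts between normal and malignant tissue easy to tabulate.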

Journal ArticleDOI
TL;DR: The approach is modeled as a binary labeling problem and solved using efficient quadratic pseudo-Boolean optimization, yielding promising tracking performance on the challenging PETS09 and MOT16 datasets.
Abstract: In this paper, we propose to exploit the interactions between non-associable tracklets to facilitate multi-object tracking. We introduce two types of tracklet interactions, close interaction and distant interaction. The close interaction imposes physical constraints between two temporally overlapping tracklets, and more importantly, allows us to learn local classifiers to distinguish targets that are close to each other in the spatiotemporal domain. The distant interaction, on the other hand, accounts for the higher order motion and appearance consistency between two temporally isolated tracklets. Our approach is modeled as a binary labeling problem and solved using the efficient quadratic pseudo-Boolean optimization. It yields promising tracking performance on the challenging PETS09 and MOT16 datasets.
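The binary-labeling formulation can be sketched on a toy instance. Here brute-force enumeration stands in for QPBO (which the paper uses because it scales to realistic problem sizes), and all costs are hypothetical.

```python
from itertools import product

def solve_binary_labeling(unary, pairwise):
    """Minimize E(x) = sum_i unary[i][x_i] + sum_{(i,j)} pairwise[(i,j)][x_i][x_j]
    over binary labelings x by exhaustive search -- a stand-in for QPBO,
    which handles realistically sized tracking problems efficiently."""
    n = len(unary)
    best, best_cost = None, float("inf")
    for x in product((0, 1), repeat=n):
        cost = sum(unary[i][x[i]] for i in range(n))
        cost += sum(p[x[i]][x[j]] for (i, j), p in pairwise.items())
        if cost < best_cost:
            best, best_cost = x, cost
    return best, best_cost

# Toy instance: x_i = 1 means "link tracklet pair i"; pairwise terms encode
# (hypothetical) interaction costs between linking hypotheses.
unary = [(1.0, 0.2), (0.3, 1.0), (1.0, 0.1)]
pairwise = {(0, 1): [[0.0, 0.05], [0.05, 0.0]],
            (1, 2): [[0.0, 0.05], [0.05, 0.0]]}
labels, cost = solve_binary_labeling(unary, pairwise)
print(labels)  # (1, 0, 1)
```

In the paper's setting, the unary terms would come from appearance and motion affinities, while the pairwise terms would encode the close and distant tracklet interactions described above.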

Journal ArticleDOI
TL;DR: A new robust Kalman filter based on a detect-and-reject idea is developed that outperforms several recent robust solutions with higher computational efficiency and better accuracy.
Abstract: We consider the nonlinear robust filtering problem where the measurements are partially disturbed by outliers. A new robust Kalman filter based on a detect-and-reject idea is developed. To identify and exclude outliers automatically, each measurement is assigned an indicator variable, which is modeled by a beta-Bernoulli prior. The mean-field variational Bayesian method is then utilized to estimate the state of interest as well as the indicator in an iterative manner at each time instant. Simulation results reveal that the proposed algorithm outperforms several recent robust solutions with higher computational efficiency and better accuracy.
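The detect-and-reject update can be sketched for a scalar state. This is a simplified stand-in for the paper's method: the beta-Bernoulli prior is collapsed to a fixed inlier probability pi0, the outlier model is a flat density, and all names and constants are illustrative.

```python
import math

def robust_update(m_pred, P_pred, y, R, pi0=0.9, n_iter=10):
    """One measurement update of a detect-and-reject Kalman filter
    (scalar, simplified). An indicator z in {0,1} flags the measurement
    as inlier (z=1) or outlier (z=0). Mean-field iteration alternates
    between q(z) and the Gaussian state posterior; rejection (E[z] -> 0)
    inflates the effective noise R / E[z], so the update falls back to
    the prediction."""
    m, P = m_pred, P_pred
    for _ in range(n_iter):
        # q(z): expected inlier log-likelihood vs. a flat outlier density
        e2 = (y - m) ** 2 + P
        log_in = math.log(pi0) - 0.5 * math.log(2 * math.pi * R) - 0.5 * e2 / R
        log_out = math.log(1.0 - pi0) + math.log(1e-3)
        d = log_in - log_out
        if d >= 0:  # numerically stable sigmoid
            Ez = 1.0 / (1.0 + math.exp(-d))
        else:
            Ez = math.exp(d) / (1.0 + math.exp(d))
        Ez = max(Ez, 1e-12)
        # Gaussian state update with measurement noise inflated by 1/E[z]
        K = P_pred / (P_pred + R / Ez)
        m = m_pred + K * (y - m_pred)
        P = (1.0 - K) * P_pred
    return m, P, Ez

m_in, _, Ez_in = robust_update(0.0, 1.0, 0.5, R=1.0)     # plausible measurement
m_out, _, Ez_out = robust_update(0.0, 1.0, 50.0, R=1.0)  # gross outlier
print(Ez_in, Ez_out)
```

With the plausible measurement the indicator stays near one and the update behaves like a standard Kalman filter; with the gross outlier the indicator collapses toward zero and the estimate remains near the prediction, which is the automatic rejection the abstract describes.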

Journal ArticleDOI
TL;DR: The existence of localized modes and multimodal behavior in the brain as a hyperviscoelastic medium is shown, and this dynamical phenomenon leads to strain concentration patterns, particularly in deep brain regions, which is consistent with reported concussion pathology.
Abstract: Although concussion is one of the greatest health challenges today, our physical understanding of the cause of injury is limited. In this Letter, we simulated football head impacts in a finite element ...

Posted Content
TL;DR: This paper proposes an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images and shows that the proposed model significantly outperforms current state of the art methods.
Abstract: Images captured by fisheye lenses violate the pinhole camera assumption and suffer from distortions. Rectification of fisheye images is therefore a crucial preprocessing step for many computer vision applications. In this paper, we propose an end-to-end multi-context collaborative deep network for removing distortions from single fisheye images. In contrast to conventional approaches, which focus on extracting hand-crafted features from input images, our method learns high-level semantics and low-level appearance features simultaneously to estimate the distortion parameters. To facilitate training, we construct a synthesized dataset that covers various scenes and distortion parameter settings. Experiments on both synthesized and real-world datasets show that the proposed model significantly outperforms current state of the art methods. Our code and synthesized dataset will be made publicly available.