
Showing papers by "Stevens Institute of Technology" published in 2011


Journal ArticleDOI
TL;DR: Graphene oxide nanosheets were inkjet-printed onto Ti foils and thermally reduced at 200°C in N2 as a new method of fabricating inkjet-printed graphene electrodes (IPGEs) for supercapacitors, as discussed by the authors.

377 citations


Journal ArticleDOI
12 Jul 2011
TL;DR: In this paper, a graph-theoretic definition of connectivity is provided, as well as an equivalent definition based on algebraic graph theory, which employs the adjacency and Laplacian matrices of the graph and their spectral properties.
Abstract: In this paper, we provide a theoretical framework for controlling graph connectivity in mobile robot networks. We discuss proximity-based communication models composed of disk-based or uniformly-fading-signal-strength communication links. A graph-theoretic definition of connectivity is provided, as well as an equivalent definition based on algebraic graph theory, which employs the adjacency and Laplacian matrices of the graph and their spectral properties. Based on these results, we discuss centralized and distributed algorithms to maintain, increase, and control connectivity in mobile robot networks. The various approaches discussed in this paper range from convex optimization and subgradient-descent algorithms, for the maximization of the algebraic connectivity of the network, to potential fields and hybrid systems that maintain communication links or control the network topology in a least restrictive manner. Common to these approaches is the use of mobility to control the topology of the underlying communication network. We discuss applications of connectivity control to multirobot rendezvous, flocking, and formation control, where network connectivity has so far been treated as an assumption.
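A minimal numpy sketch (robot positions and communication range are made up) of the connectivity check the abstract describes: build the disk-graph Laplacian and test whether its second-smallest eigenvalue, the algebraic connectivity, is positive.

```python
import numpy as np

def algebraic_connectivity(positions, radius):
    """Fiedler value (second-smallest Laplacian eigenvalue) of a disk graph.
    positions: (n, 2) array of robot coordinates; radius: communication range.
    The graph is connected iff the returned value is strictly positive."""
    n = len(positions)
    # Adjacency: a link exists when two robots are within communication range.
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    A = ((dists <= radius) & ~np.eye(n, dtype=bool)).astype(float)
    L = np.diag(A.sum(axis=1)) - A      # Laplacian L = D - A
    return np.linalg.eigvalsh(L)[1]     # eigenvalues sorted ascending; lambda_2

# Hypothetical example: three robots in a line, range 1.5 -> connected chain.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(algebraic_connectivity(pos, 1.5) > 0)  # True
```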

345 citations


Proceedings ArticleDOI
19 Sep 2011
TL;DR: This work addresses the fundamental problem of distinguishing between a driver and passenger using a mobile phone, which is the critical input to enable numerous safety and interface enhancements, and leverages the existing car stereo infrastructure, in particular the speakers and Bluetooth network.
Abstract: This work addresses the fundamental problem of distinguishing between a driver and passenger using a mobile phone, which is the critical input to enable numerous safety and interface enhancements. Our detection system leverages the existing car stereo infrastructure, in particular the speakers and Bluetooth network. Our acoustic approach has the phone send a series of customized high frequency beeps via the car stereo. The beeps are spaced in time across the left, right, and if available, front and rear speakers. After sampling the beeps, we use a sequential change-point detection scheme to time their arrival, and then use a differential approach to estimate the phone's distance from the car's center. From these differences a passenger or driver classification can be made. To validate our approach, we experimented with two kinds of phones and in two different cars. We found that our customized beeps were imperceptible to most users, yet still playable and recordable in both cars. Our customized beeps were also robust to background sounds such as music and wind, and we found the signal processing did not require excessive computational resources. In spite of the cars' heavy multi-path environment, our approach had a classification accuracy of over 90%, and around 95% with some calibrations. We also found we have a low false positive rate, on the order of a few percent.
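As a rough illustration of the differential approach described above, the sketch below times beep onsets with a crude energy-based change-point rule and classifies by the left-right arrival difference. The detector, the alignment of recordings to emission times, and the left-hand-drive assumption are simplifications, not the paper's exact scheme.

```python
import numpy as np

def beep_onset(signal, fs, threshold_ratio=5.0, window=64):
    """Crude change-point rule standing in for the paper's sequential
    detector: first sample whose short-term energy exceeds a multiple
    of the noise floor estimated from the first 100 ms."""
    energy = np.convolve(signal ** 2, np.ones(window) / window, mode="valid")
    noise_floor = np.median(energy[: fs // 10])
    return np.argmax(energy > threshold_ratio * noise_floor) / fs

def classify_seat(left_beep_rec, right_beep_rec, fs):
    """Compare arrival times of the left- and right-speaker beeps
    (recordings assumed aligned to the emission times). A negative
    difference means the left beep arrived first, i.e. the phone sits
    on the left (driver) side of a left-hand-drive car."""
    delta = beep_onset(left_beep_rec, fs) - beep_onset(right_beep_rec, fs)
    return "driver" if delta < 0 else "passenger"
```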

247 citations


Journal ArticleDOI
TL;DR: In this article, the authors quantitatively studied, using X-ray photoelectron spectroscopy (XPS), oxidation of substrate-immobilized silver nanoparticles (Ag NPs) in a wide range of conditions, including exposure to ambient air and controlled ozone environment under UV irradiation, and correlated the degree of silver oxidation with surface-enhanced Raman scattering enhancement factors (EFs).
Abstract: We quantitatively studied, using X-ray photoelectron spectroscopy (XPS), oxidation of substrate-immobilized silver nanoparticles (Ag NPs) in a wide range of conditions, including exposure to ambient air and controlled ozone environment under UV irradiation, and we correlated the degree of silver oxidation with surface-enhanced Raman scattering (SERS) enhancement factors (EFs). The SERS activity of pristine and oxidized Ag NPs was assessed by use of trans-1,2-bis(4-pyridyl)ethylene (BPE) and sodium thiocyanate as model analytes at the excitation wavelength of 532 nm. Our study showed that the exposure of Ag NPs to parts per million (ppm) level concentrations of ozone led to the formation of Ag2O and orders of magnitude reduction in SERS EFs. Such an adverse effect was also notable upon exposure of Ag NPs under ambient conditions where ozone existed at parts per billion (ppb) level. The correlated XPS and SERS studies suggested that formation of just a submonolayer of Ag2O was sufficient to decrease markedly...

238 citations


Journal ArticleDOI
TL;DR: While the economic climate is improving at different rates around the globe – albeit at a slower pace than anticipated – IT's role continues to evolve as it provides organizations with a fundamental vehicle for reducing business expenses and new opportunities for increasing revenues.
Abstract: The importance of the impact of IT for organizations around the world, especially in light of the global financial crisis, has amplified the need to provide a better understanding of the specific geographic similarities and differences of IT managerial and technical trends. Going beyond identifying these influential factors is also the need to understand the considerations for addressing them, in light of recognizing the respective local characteristics, especially when operating in a globally linked environment. By comparing and contrasting different geographies, this paper presents important local and international factors (e.g., management concerns, influential technologies, budgets/spending, organizational considerations) necessary to prepare IT leaders for the challenges that await them. The research is based on data from four geographic regions (United States, Europe, Asia, and Latin America). The same questionnaire (although translated for the respective respondents), based on the lead authors' well-respected and long-running Society for Information Management survey, was applied across geographies. This paper presents the major findings based on survey responses from 472 organizations (172 US, 142 European, 103 Asian, and 55 Latin American) in mid-2010. The top five management concerns were: (1) business productivity and cost reduction; (2) IT and business alignment; (3) business agility and speed to market; (4) business process re-engineering; and (5) IT reliability and efficiency. The five most influential technologies were business intelligence, cloud computing, enterprise resource planning, Software as a Service/Platform as a Service, and collaborative tools.

235 citations


Journal ArticleDOI
TL;DR: It is found that desirable osteoblast-surface interactions are maximized on plasma-sprayed surfaces and minimized on satin-finished surfaces, indicating that both the vertical and lateral character of surface roughness can be modified to not only optimize implant-bone interactions but to simultaneously minimize implant-bacteria interactions.

221 citations


Posted Content
TL;DR: It is argued that neither CIO reporting structure is necessarily optimal, that the CIO reporting structure should not be used to gauge the strategic role of IT in the firm, and that firms that align their CIO reporting structure with their strategic positioning will have superior future performance.
Abstract: Almost 30 years after the introduction of the CIO position, the ideal CIO reporting structure (whether the CIO should report to the CEO or the CFO) is yet to be identified. There is an intuitive assumption among some proponents of IT that the CIO should always report to the CEO to promote the importance of IT and the CIO’s clout in the firm, while some adversaries of IT call for a CIO-CFO reporting structure to keep a tab on IT spending. However, we challenge these two ad hoc prescriptions by arguing that neither CIO reporting structure is necessarily optimal, and that the CIO reporting structure should not be used to gauge the strategic role of IT in the firm. First, extending the strategy-structure paradigm, we propose that a firm’s strategic positioning (differentiation or cost leadership) should be a primary determinant of its CIO reporting structure. We hypothesize that differentiators are more likely to have their CIO report to the CEO in order to pursue IT initiatives that help the firm’s differentiation strategy. We also hypothesize that cost leaders are more likely to have their CIO report to the CFO to lead IT initiatives to facilitate the firm’s cost leadership strategy. Second, extending the alignment-fit view, we propose that firms that align their CIO reporting structure with their strategic positioning (specifically, differentiation with a CIO-CEO reporting structure and cost leadership with a CIO-CFO reporting structure) will have superior future performance. Longitudinal data from two periods (1990-1993 and 2006) support the proposed hypotheses, validating the relationship between a firm’s strategic positioning and its CIO reporting structure, and also the positive impact of their alignment on firm performance. These results challenge the ad hoc prescriptions about the CIO reporting structure, demonstrating that a CIO-CEO reporting structure is only superior for differentiators and a CIO-CFO reporting structure is superior only for cost leaders. The CIO reporting structure must, therefore, be designed to align with the firm’s strategic positioning, independent of whether IT plays a key strategic role in the firm.

221 citations


Journal ArticleDOI
TL;DR: This paper investigates author gender identification for short length, multi-genre, content-free text, such as the ones found in many Internet applications, and proposes 545 psycho-linguistic and gender-preferential cues along with stylometric features to build the feature space for this identification problem.
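As a point of reference only, a minimal stylometric baseline for this task might look as follows; the character n-grams stand in for the paper's 545 psycho-linguistic and gender-preferential cues, which are not reproduced here, and all names are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Character n-grams stand in for stylometric features; the paper's 545
# psycho-linguistic and gender-preferential cues would be concatenated
# as additional feature columns.
gender_clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4), min_df=2),
    LogisticRegression(max_iter=1000),
)
# gender_clf.fit(train_texts, train_labels)     # labels: "male" / "female"
# gender_clf.predict(["a short, content-free message ..."])
```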

219 citations


Proceedings ArticleDOI
07 May 2011
TL;DR: Findings suggest that crowd-based design processes may be effective, and point the way toward computer-human interactions that might further encourage crowd creativity.
Abstract: A sketch combination system is introduced and tested: a crowd of 1047 participated in an iterative process of design, evaluation and combination. Specifically, participants in a crowdsourcing marketplace sketched chairs for children. One crowd created a first generation of chairs, and then successive crowds created new generations by combining the chairs made by previous crowds. Other participants evaluated the chairs. The crowd judged the chairs from the third generation more creative than those from the first generation. An analysis of the design evolution shows that participants inherited and modified presented features, and also added new features. These findings suggest that crowd-based design processes may be effective, and point the way toward computer-human interactions that might further encourage crowd creativity.

197 citations


Journal ArticleDOI
TL;DR: A review of recent developments in constructing layered matrices using linear polymers, polymer nanoparticles and block copolymer micelles capable of multi-stage delivery of multiple drugs, as well as challenges and opportunities associated with fabrication of stratified multilayer films.

197 citations


Journal ArticleDOI
TL;DR: The most recent advances in the context of decision making under uncertainty are surveyed, with an emphasis on the modeling of risk-averse preferences using the apparatus of axiomatically defined risk functionals and their connection to utility theory, stochastic dominance, and other more established methods.
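A canonical example of an axiomatically defined risk functional of the kind surveyed here is Average Value-at-Risk (also called CVaR), shown below in its standard Rockafellar-Uryasev variational form; this is a well-known illustration, not a formula taken from the survey itself.

```latex
% Average Value-at-Risk for a loss variable X at level \alpha \in (0, 1):
\operatorname{AV@R}_{\alpha}(X) \;=\; \min_{t \in \mathbb{R}}
  \left\{\, t + \tfrac{1}{\alpha}\,\mathbb{E}\!\left[(X - t)_{+}\right] \right\}
```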

Journal ArticleDOI
TL;DR: The focus of this study was to evaluate the potential use of the predatory bacteria Bdellovibrio bacteriovorus and Micavibrio aeruginosavorus to control the pathogens associated with human infection.
Abstract: Aims: The focus of this study was to evaluate the potential use of the predatory bacteria Bdellovibrio bacteriovorus and Micavibrio aeruginosavorus to control the pathogens associated with human infection. Methods and Results: By coculturing B. bacteriovorus 109J and M. aeruginosavorus ARL-13 with selected pathogens, we have demonstrated that predatory bacteria are able to attack bacteria from the genera Acinetobacter, Aeromonas, Bordetella, Burkholderia, Citrobacter, Enterobacter, Escherichia, Klebsiella, Listonella, Morganella, Proteus, Pseudomonas, Salmonella, Serratia, Shigella, Vibrio and Yersinia. Predation was measured in single and multispecies microbial cultures as well as on monolayer and multilayer preformed biofilms. Additional experiments aimed at assessing the optimal predation characteristics of M. aeruginosavorus demonstrated that the predator is able to prey at temperatures of 25–37°C but is unable to prey under oxygen-limiting conditions. In addition, an increase in the M. aeruginosavorus ARL-13 prey range was observed. Conclusions: Bdellovibrio bacteriovorus and M. aeruginosavorus have the ability to prey on and reduce many of the multidrug-resistant pathogens associated with human infection. Significance and Impact of the Study: Infectious complications caused by micro-organisms that have become resistant to drug therapy are an increasing problem in medicine, with more infections becoming difficult to treat using traditional antimicrobial agents. The work presented here highlights the potential use of predatory bacteria as a biological-based agent for eradicating multidrug-resistant bacteria, with the hope of paving the way for future studies in animal models.

Journal ArticleDOI
TL;DR: This paper proposes a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance.
Abstract: Recent years have witnessed a rapidly increasing interest in the topic of incremental learning. Unlike conventional machine learning situations, data flow targeted by incremental learning becomes available continuously over time. Accordingly, it is desirable to be able to abandon the traditional assumption of the availability of representative training data during the training period to develop decision boundaries. Under scenarios of continuous data flow, the challenge is how to transform the vast amount of stream raw data into information and knowledge representation, and accumulate experience over time to support the future decision-making process. In this paper, we propose a general adaptive incremental learning framework named ADAIN that is capable of learning from continuous raw data, accumulating experience over time, and using such knowledge to improve future learning and prediction performance. Detailed system-level architecture and design strategies are presented in this paper. Simulation results over several real-world data sets are used to validate the effectiveness of this method.

Journal ArticleDOI
TL;DR: A generalized-likelihood ratio test (GLRT) for moving target detection in distributed MIMO radar is developed and shown to be a constant false alarm rate (CFAR) detector and the test statistic is a central and noncentral Beta variable under the null and alternative hypotheses, respectively.
Abstract: In this paper, we consider moving target detection using a distributed multiple-input multiple-output (MIMO) radar on stationary platforms in nonhomogeneous clutter environments. Our study is motivated by the fact that the multistatic transmit-receive configuration in a distributed MIMO radar causes nonstationary clutter. Specifically, the clutter power for the same test cell may vary significantly from one transmit-receive pair to another, due to azimuth-selective backscattering of the clutter. To account for these issues, a new nonhomogeneous clutter model, where the clutter resides in a low-rank subspace with different subspace coefficients (and hence different clutter power) for different transmit-receive pairs, is introduced, and the relation to a general clutter model is discussed. Following the proposed clutter model, we develop a generalized-likelihood ratio test (GLRT) for moving target detection in distributed MIMO radar. The GLRT is shown to be a constant false alarm rate (CFAR) detector, and the test statistic is a central and noncentral Beta variable under the null and alternative hypotheses, respectively. Simulations are provided to demonstrate the performance of the proposed GLRT in comparison with several existing techniques.
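Since the abstract states that the GLRT statistic is central-Beta distributed under the null hypothesis, a CFAR threshold follows directly from the Beta inverse CDF. In the sketch below the shape parameters are illustrative placeholders, not the paper's values.

```python
from scipy.stats import beta

def cfar_threshold(p_fa, a, b):
    """Threshold for a target false-alarm probability when the test
    statistic is Beta(a, b) under the null hypothesis, as the abstract
    states. The threshold does not depend on the clutter power, hence CFAR."""
    return beta.ppf(1.0 - p_fa, a, b)

# Hypothetical shape parameters; in the paper they follow from the
# numbers of samples and the clutter-subspace dimension.
eta = cfar_threshold(p_fa=1e-3, a=4, b=12)
# Declare a moving target when the GLRT statistic exceeds eta.
```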

Journal ArticleDOI
TL;DR: A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course as discussed by the authors.
Abstract: A large-scale, multi-year, randomized study compared learning activities and outcomes for hands-on, remotely-operated, and simulation-based educational laboratories in an undergraduate engineering course. Students (N = 458) worked in small-group lab teams to perform two experiments involving stress on a cantilever beam. Each team conducted the experiments in one of three lab formats (hands-on, remotely-operated, or simulation-based), collecting data either individually or as a team. Lab format and data-collection mode showed an interaction, such that for the hands-on lab format learning outcomes were higher when the lab team collected data sets working as a group rather than individually collecting data sets to be combined later, while for remotely-operated labs individual data collection was best. The pattern of time spent on various lab-related activities suggests that working with real instead of simulated data may induce higher levels of motivation. The results also suggest that learning with computer-mediated technologies can be improved by careful design and coordination of group and individual activities.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that the CIO reporting structure should not be used to gauge the strategic role of IT in the firm, and propose that a firm's strategic positioning (differentiation or cost leadership) should be a primary determinant of its CIO report structure.
Abstract: Almost 30 years after the introduction of the CIO position, the ideal CIO reporting structure (whether the CIO should report to the CEO or the CFO) is yet to be identified. There is an intuitive assumption among some proponents of IT that the CIO should always report to the CEO to promote the importance of IT and the CIO's clout in the firm, while some adversaries of IT call for a CIO-CFO reporting structure to keep a tab on IT spending. However, we challenge these two ad hoc prescriptions by arguing that neither CIO reporting structure is necessarily optimal, and that the CIO reporting structure should not be used to gauge the strategic role of IT in the firm. First, extending the strategy-structure paradigm, we propose that a firm's strategic positioning (differentiation or cost leadership) should be a primary determinant of its CIO reporting structure. We hypothesize that differentiators are more likely to have their CIO report to the CEO in order to pursue IT initiatives that help the firm's differentiation strategy. We also hypothesize that cost leaders are more likely to have their CIO report to the CFO to lead IT initiatives to facilitate the firm's cost leadership strategy. Second, extending the alignment-fit view, we propose that firms that align their CIO reporting structure with their strategic positioning (specifically, differentiation with a CIO-CEO reporting structure and cost leadership with a CIO-CFO reporting structure) will have superior future performance. Longitudinal data from two periods (1990-1993 and 2006) support the proposed hypotheses, validating the relationship between a firm's strategic positioning and its CIO reporting structure, and also the positive impact of their alignment on firm performance. These results challenge the ad hoc prescriptions about the CIO reporting structure, demonstrating that a CIO-CEO reporting structure is only superior for differentiators and a CIO-CFO reporting structure is superior only for cost leaders. The CIO reporting structure must, therefore, be designed to align with the firm's strategic positioning, independent of whether IT plays a key strategic role in the firm.

Proceedings ArticleDOI
06 Dec 2011
TL;DR: The solution, icing, incorporates an optimized cryptographic construction that is compact and requires negligible configuration state and no PKI; its plausibility is demonstrated with a NetFPGA hardware implementation.
Abstract: We describe a new networking primitive, called a Path Verification Mechanism (pvm). There has been much recent work about how senders and receivers express policies about the paths that their packets take. For instance, a company might want fine-grained control over which providers carry which traffic between its branch offices, or a receiver may want traffic sent to it to travel through an intrusion detection service.While the ability to express policies has been well-studied, the ability to enforce policies has not. The core challenge is: if we assume an adversarial, decentralized, and high-speed environment, then when a packet arrives at a node, how can the node be sure that the packet followed an approved path? Our solution, icing, incorporates an optimized cryptographic construction that is compact, and requires negligible configuration state and no PKI. We demonstrate icing's plausibility with a NetFPGA hardware implementation. At 93% more costly than an IP router on the same platform, its cost is significant but affordable. Indeed, our evaluation suggests that icing can scale to backbone speeds.
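The sketch below conveys the flavor of path verification with chained per-hop MACs. It is a naive stand-in, not icing's optimized construction; the key setup (a symmetric key shared between the receiver and each approved hop) and the packet layout are assumptions made for illustration.

```python
import hashlib
import hmac

def hop_proof(prev_proof, hop_key, packet_digest):
    """One hop's proof of carriage: a MAC over the packet digest chained
    with the previous hop's proof."""
    return hmac.new(hop_key, prev_proof + packet_digest, hashlib.sha256).digest()

def verify_path(packet, path_keys):
    """Receiver-side check: recompute the proof chain along the approved
    path (keys assumed shared with every hop) and compare it with the
    proof carried in the packet header."""
    digest = hashlib.sha256(packet["payload"]).digest()
    proof = b""
    for key in path_keys:
        proof = hop_proof(proof, key, digest)
    return hmac.compare_digest(proof, packet["proof"])
```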

Journal ArticleDOI
TL;DR: In this article, the authors review recent theoretical and experimental progress on nanolasers with a focus on the emission properties of devices operating with a few or even an individual semiconductor quantum dot as a gain medium.
Abstract: Recent theoretical and experimental progress on nanolasers is reviewed with a focus on the emission properties of devices operating with a few or even an individual semiconductor quantum dot as a gain medium. Concepts underlying the design and operation of these devices, microscopic models describing light-matter interaction and semiconductor effects, as well as recent experimental results and lasing signatures are discussed. In particular, a critical review of the “self-tuned gain” mechanism which gives rise to quantum-dot mode coupling in the off-resonant case is provided. Furthermore, recent advances in the modeling of single quantum dot lasers beyond the artificial atom model are presented with a focus on the exploration of similarities and differences between the atomic and semiconductor systems.

Journal ArticleDOI
TL;DR: In the proposed scheme, the sensing information of different secondary users is combined at a fusion center and the combining weights are optimized with the objective of maximizing the detection probability of available channels under the constraint of a required false alarm probability.
Abstract: In recent years, the security issues of cognitive radio (CR) networks have drawn a lot of research attention. Primary user emulation attack (PUEA), one of the common attacks, compromises spectrum sensing: a malicious user forestalls vacant channels by impersonating the primary user to prevent other secondary users from accessing the idle frequency bands. In this paper, we propose a new cooperative spectrum sensing scheme that accounts for the presence of PUEA in CR networks. In the proposed scheme, the sensing information of different secondary users is combined at a fusion center, and the combining weights are optimized with the objective of maximizing the detection probability of available channels under the constraint of a required false alarm probability. We also investigate the impact of channel estimation errors on the detection probability. Simulation and numerical results illustrate the effectiveness of the proposed scheme in cooperative spectrum sensing in the presence of PUEA.
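A common closed-form surrogate for this kind of weight optimization maximizes the deflection coefficient of the fused statistic; the sketch below uses that surrogate (not the paper's exact program) together with a Gaussian approximation for the false-alarm threshold.

```python
import numpy as np
from scipy.stats import norm

def combining_weights(mu0, mu1, cov0):
    """Deflection-maximizing fusion weights for the secondary users'
    sensing statistics: w proportional to inv(Cov_H0) @ (mu_H1 - mu_H0).
    Users whose statistics are distorted (e.g. by a PUEA attacker)
    receive small weights through the covariance term."""
    w = np.linalg.solve(cov0, mu1 - mu0)
    return w / np.linalg.norm(w)

def fusion_threshold(mu0, cov0, w, p_fa):
    """Neyman-Pearson threshold on the fused statistic w^T x, assuming
    it is Gaussian under H0 (a large-sample energy-detection model)."""
    return w @ mu0 + np.sqrt(w @ cov0 @ w) * norm.ppf(1.0 - p_fa)
```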

Journal ArticleDOI
TL;DR: The method developed in this work may be suitable for preparing a wide range of insensitive explosive compositions, as the resulting material exhibited markedly lower shock sensitivity than previously reported.

Book
09 Nov 2011
TL;DR: This book explores how non-commutative (infinite) groups, which are typically studied in combinatorial group theory, can be used in public-key cryptography and describes new interesting developments in the algorithmic theory of solvable groups.
Abstract: This book is about relations between three different areas of mathematics and theoretical computer science: combinatorial group theory, cryptography, and complexity theory. It explores how non-commutative (infinite) groups, which are typically studied in combinatorial group theory, can be used in public-key cryptography. It also shows that there is remarkable feedback from cryptography to combinatorial group theory because some of the problems motivated by cryptography appear to be new to group theory, and they open many interesting research avenues within group theory. In particular, a lot of emphasis in the book is put on studying search problems, as compared to decision problems traditionally studied in combinatorial group theory. Then, complexity theory, notably generic-case complexity of algorithms, is employed for cryptanalysis of various cryptographic protocols based on infinite groups, and the ideas and machinery from the theory of generic-case complexity are used to study asymptotically dominant properties of some infinite groups that have been applied in public-key cryptography so far. This book also describes new interesting developments in the algorithmic theory of solvable groups and another spectacular new development related to complexity of group-theoretic problems, which is based on the ideas of compressed words and straight-line programs coming from computer science.

Journal ArticleDOI
TL;DR: Numerical results show that, under a target probability of false alarm of spectrum holes, the SFSS-BRDT scheme outperforms the FFSS-BRDT scheme in terms of the spectrum hole utilization efficiency.
Abstract: In cognitive radio networks, each cognitive transmission process typically requires two phases: the spectrum sensing phase and the data transmission phase. In this paper, we investigate cognitive transmissions with multiple relays by jointly considering the two phases over Rayleigh fading channels. We study a selective fusion spectrum sensing and best relay data transmission (SFSS-BRDT) scheme in multiple-relay cognitive radio networks. Specifically, in the spectrum sensing phase, only the initial spectrum sensing results, which are received from the cognitive relays and decoded correctly at a cognitive source, are selected and used for fusion. In the data transmission phase, only the best relay is utilized to assist the cognitive source for data transmissions. Under the constraint of satisfying a required probability of false alarm of spectrum holes (for the protection of the primary user), we derive an exact closed-form expression of the spectrum hole utilization efficiency for the SFSS-BRDT scheme, which is used as a measure to quantify the percentage of spectrum holes utilized by the cognitive source for its successful data transmissions. For comparison, we also examine the spectrum hole utilization efficiency for a fixed fusion spectrum sensing and best relay data transmission (FFSS-BRDT) scheme, where all the initial spectrum sensing results are used for fusion without any refined selection. Numerical results show that, under a target probability of false alarm of spectrum holes, the SFSS-BRDT scheme outperforms the FFSS-BRDT scheme in terms of the spectrum hole utilization efficiency. Moreover, the spectrum hole utilization efficiency of the SFSS-BRDT scheme always improves as the number of cognitive relays increases, whereas the FFSS-BRDT scheme's performance improves initially and degrades eventually after a critical number of cognitive relays. It is also shown that a maximum spectrum hole utilization efficiency can be achieved through an optimal allocation of the time durations between the spectrum sensing and data transmission phases for both the FFSS-BRDT and SFSS-BRDT schemes.
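The two selection steps of SFSS-BRDT can be sketched as follows; the decode-validity flags and the majority-vote fusion rule are illustrative simplifications of the paper's scheme.

```python
import numpy as np

def fuse_and_select(local_decisions, decoded_ok, relay_dest_snr):
    """Two decisions per transmission cycle: fuse only the sensing
    reports that the source decoded correctly (selective fusion), then
    pick the relay with the strongest relay-to-destination SNR to
    forward the data."""
    valid = [d for d, ok in zip(local_decisions, decoded_ok) if ok]
    # Majority vote over valid reports; 1 = primary user detected.
    spectrum_hole = bool(valid) and np.mean(valid) < 0.5
    best_relay = int(np.argmax(relay_dest_snr))
    return spectrum_hole, best_relay
```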

Journal ArticleDOI
TL;DR: In this article, the authors developed a business model innovation typology to better explain the complex set of factors that distinguish three types of business model innovations and their associated challenges and developed a set of features for each of them.
Abstract: OVERVIEW: Business model innovation represents a significant opportunity for established firms, as demonstrated by the considerable success of Apple's iPod/iTunes franchise. However, it also represents a challenge, as evidenced by Kodak's failed attempt to dominate the digital photography market and Microsoft's difficulty gaining share in the gaming market, despite both companies' huge financial investments. We developed a business model innovation typology to better explain the complex set of factors that distinguishes three types of business model innovations and their associated challenges.

Journal ArticleDOI
TL;DR: Theoretical analysis proves that REA can provide less erroneous prediction results than a comparative algorithm, and an empirical study on both synthetic benchmarks and a real-world data set validates the effectiveness of REA as compared with other algorithms in terms of evaluation metrics.
Abstract: Difficulties of learning from a nonstationary data stream are generally twofold. First, a dynamically structured learning framework is required to catch up with the evolution of unstable class concepts, i.e., concept drifts. Second, imbalanced class distribution over the data stream demands a mechanism to intensify the underrepresented class concepts for improved overall performance. To alleviate the challenges brought by these issues, we propose the recursive ensemble approach (REA) in this paper. To battle the imbalanced learning problem in the training data chunk S_t received at any timestamp t, REA adaptively pushes into S_t part of the minority-class examples received within [0, t − 1] to balance its skewed class distribution. Hypotheses are then progressively developed over time for all balanced training data chunks and combined together as an ensemble classifier in a dynamically weighted manner, which therefore addresses the concept-drift issue in time. Theoretical analysis proves that REA can provide less erroneous prediction results than a comparative algorithm. In addition, an empirical study on both synthetic benchmarks and a real-world data set validates the effectiveness of REA as compared with other algorithms in terms of evaluation metrics consisting of overall prediction accuracy and ROC curve.
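A bare-bones rendition of the chunk-balancing and dynamic-weighting ideas might look like the following; the plain accuracy-based weighting and the base learner are illustrative choices that differ from the paper's rules.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class REASketch:
    """Toy rendition of the recursive ensemble approach for binary labels
    {0, 1}: balance each chunk with stored minority examples, train one
    hypothesis per chunk, and weight hypotheses by accuracy on the newest
    chunk. The paper's weighting and selection rules are more refined."""

    def __init__(self, minority_label=1):
        self.minority_label = minority_label
        self.minority_X, self.minority_y = [], []   # minority examples from [0, t-1]
        self.models, self.weights = [], []

    def partial_fit(self, X, y):
        # Push past minority examples into the current chunk S_t.
        if self.minority_X:
            Xb = np.vstack([X, np.vstack(self.minority_X)])
            yb = np.concatenate([y, np.array(self.minority_y)])
        else:
            Xb, yb = X, y
        self.models.append(DecisionTreeClassifier().fit(Xb, yb))
        # Re-weight all hypotheses by accuracy on the newest raw chunk,
        # so hypotheses matching the current concept dominate.
        self.weights = [m.score(X, y) for m in self.models]
        mask = y == self.minority_label
        self.minority_X.extend(X[mask])
        self.minority_y.extend(y[mask])

    def predict(self, X):
        votes = np.zeros((len(X), 2))
        for w, m in zip(self.weights, self.models):
            votes[np.arange(len(X)), m.predict(X).astype(int)] += w
        return votes.argmax(axis=1)
```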

Journal ArticleDOI
TL;DR: The numerical results strongly support the conclusion that maximal ratio combining of channel diversity can enhance the security of the wireless communication system in normal operating scenarios.
Abstract: In this paper, we present a method of utilizing channel diversity to increase secrecy capacity in wireless communication. With the presence of channel diversity, an intended receiver can achieve a relatively high secrecy capacity even at low SNRs. We present a theoretical analysis on the outage probability at a normalized target secrecy capacity in Rayleigh fading environment. Our numerical results strongly support our conclusion that maximal ratio combining of channel diversity can enhance the security of the wireless communication system in normal operating scenarios.
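The claim is easy to sanity-check numerically: with L-branch maximal ratio combining, the combined Rayleigh-faded SNR is a sum of exponential gains, so the secrecy outage probability can be estimated by Monte Carlo. This is an assumption-laden sketch in the spirit of the paper, not its closed-form analysis.

```python
import numpy as np

def secrecy_outage(snr_main_db, snr_eve_db, branches, target_rate, trials=200_000):
    """Monte Carlo estimate of P[C_s < target_rate] with L-branch maximal
    ratio combining at the intended receiver and a single-antenna
    eavesdropper, both in Rayleigh fading."""
    rng = np.random.default_rng(0)
    # MRC output SNR in Rayleigh fading: sum of i.i.d. exponential gains.
    g_main = rng.exponential(1.0, (trials, branches)).sum(axis=1)
    g_eve = rng.exponential(1.0, trials)
    snr_m = 10 ** (snr_main_db / 10) * g_main
    snr_e = 10 ** (snr_eve_db / 10) * g_eve
    c_s = np.maximum(np.log2(1 + snr_m) - np.log2(1 + snr_e), 0.0)
    return float(np.mean(c_s < target_rate))

# More diversity branches -> lower secrecy outage at the same target rate.
print(secrecy_outage(5, 0, 1, 0.5), secrecy_outage(5, 0, 4, 0.5))
```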

Patent
26 Apr 2011
TL;DR: In this article, an apparatus and method for determining whether a text is deceptive has a computer programmed with software that automatically analyzes a text message in digital form for deceptiveness by at least one of statistical analysis of text content to ascertain and evaluate psycho-linguistic cues that are present in the text message.
Abstract: An apparatus and method for determining whether a text is deceptive has a computer programmed with software that automatically analyzes a text message in digital form for deceptiveness by at least one of statistical analysis of text content to ascertain and evaluate psycho-linguistic cues that are present in the text message, IP geo-location of the source of the message, gender analysis of the author of the message, authorship similarity analysis, and analysis to detect coded/camouflaged messages. The computer has access to truth data against which the veracity of the text message can be compared and a graphical user interface through which a user of the system can control the system and receive results concerning the deceptiveness of the text message analyzed by the system. The computer may be connectable to the Internet and may obtain the text to be analyzed either under the control of the user or automatically.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: A method for automatically extracting salient object from a single image, which is cast in an energy minimization framework, and employs an auto-context cue as a complementary data term to achieve a clear separation of the salient object.
Abstract: We present a method for automatically extracting salient object from a single image, which is cast in an energy minimization framework. Unlike most previous methods that only leverage appearance cues, we employ an auto-context cue as a complementary data term. Benefitting from a generic saliency model for bootstrapping, the segmentation of the salient object and the learning of the auto-context model are iteratively performed without any user intervention. Upon convergence, we obtain not only a clear separation of the salient object, but also an auto-context classifier which can be used to recognize the same type of object in other images. Our experiments on four benchmarks demonstrated the efficacy of the added contextual cue. It is shown that our method compares favorably with the state-of-the-art, some of which even embraced user interactions.
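In generic form, the energy minimized in this kind of formulation combines an appearance data term, the auto-context data term, and a pairwise smoothness term; the paper's exact potentials are not reproduced here.

```latex
% Binary labels x_p (salient object vs. background); N is the pixel
% neighborhood system; lambda_c and lambda_s are weighting parameters.
E(\mathbf{x}) \;=\; \sum_{p}\Big[D^{\text{app}}_{p}(x_p) + \lambda_{c}\,D^{\text{ctx}}_{p}(x_p)\Big]
  \;+\; \lambda_{s}\sum_{(p,q)\in\mathcal{N}} V_{pq}(x_p, x_q)
```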

Journal ArticleDOI
TL;DR: The novel aspects of this work reside in the fact that a TWIMS arrangement was used to obtain a high level of structural information, including the location of fatty acyl substituents and double bonds for PCs in plasma, and that the presence of alkali metal adduct ions such as [M + Li]+ was not required to obtain double-bond positions.

Journal ArticleDOI
TL;DR: This article found that black faculty respond to personal and institutional racism through a form of psychological departure and acts of critical agency, specifically forming external networks, aiming to disprove stereotypes, and engaging in service activities.
Abstract: A growing body of research demonstrates that many college environments present challenges for black professors, particularly as they face institutional and personal racism. While scholars have linked these experiences to their attrition, this qualitative study explores black professors’ larger range of responses to difficult professional environments. Twenty-eight black professors employed at two large public research universities participated in this study. Findings indicate that in addition to institutional departure, black faculty respond to personal and institutional racism through a form of psychological departure and acts of critical agency, specifically forming external networks, aiming to disprove stereotypes and engaging in service activities. Thus, institutions must be mindful of the full range of responses to the racism that black professors face, not assuming the climate is hospitable simply because faculty are not leaving the institution. Rather, campuses must improve their campus environments...

Proceedings ArticleDOI
09 May 2011
TL;DR: Numerical results on the generated map segments show that the 4D method resolves ambiguity in registration and converges faster than the 3D ICP.
Abstract: This paper presents methodologies to accelerate the registration of 3D point cloud segments by using hue data from the associated imagery. The proposed variant of the Iterative Closest Point (ICP) algorithm combines both normalized point range data and weighted hue values calculated from the RGB data of an image-registered 3D point cloud. A k-d tree based nearest neighbor search is used to associate common points in {x, y, z, hue} 4D space. The unknown rigid translation and rotation matrix required for registration is iteratively solved using the Singular Value Decomposition (SVD) method. A mobile-robot-mounted scanner was used to generate color point cloud segments over a large area. The 4D ICP registration has been compared with typical 3D ICP, and numerical results on the generated map segments show that the 4D method resolves ambiguity in registration and converges faster than the 3D ICP.
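One iteration of the described pipeline, sketched with scipy's k-d tree and the standard Kabsch/SVD solution; the per-cloud normalization and the hue weight are illustrative choices, not the paper's values.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_4d_step(source_xyz, source_hue, target_xyz, target_hue, hue_weight=0.3):
    """One iteration of hue-augmented ICP: match points in normalized
    {x, y, z, hue} space with a k-d tree, then solve the rigid transform
    on the xyz coordinates via SVD (Kabsch)."""
    def features(xyz, hue):
        return np.hstack([xyz / np.abs(xyz).max(), hue_weight * hue[:, None]])

    # Nearest-neighbor association in 4D feature space.
    _, idx = cKDTree(features(target_xyz, target_hue)).query(
        features(source_xyz, source_hue))
    matched = target_xyz[idx]

    # Kabsch: optimal rotation/translation between the matched xyz sets.
    mu_s, mu_t = source_xyz.mean(axis=0), matched.mean(axis=0)
    H = (source_xyz - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t   # apply as: source_xyz @ R.T + t
```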