
Showing papers by "Stevens Institute of Technology" published in 2012


Posted Content
TL;DR: A new approach for the assessment of both vertical and lateral collinearity in variance-based structural equation modeling is proposed and demonstrated in the context of the illustrative analysis, showing that standard validity and reliability tests do not properly capture lateral collinearity.
Abstract: Variance-based structural equation modeling is extensively used in information systems research, and many related findings may have been distorted by hidden collinearity. This is a problem that may extend to multivariate analyses in general, in the field of information systems as well as in many other fields. In multivariate analyses, collinearity is usually assessed as a predictor-predictor relationship phenomenon, where two or more predictors are checked for redundancy. This type of assessment addresses vertical, or “classic,” collinearity. However, another type of collinearity may also exist, called here “lateral” collinearity. It refers to predictor-criterion collinearity. Lateral collinearity problems are exemplified based on an illustrative variance-based structural equation modeling analysis. The analysis employs WarpPLS 2.0, with the results double-checked with other statistical analysis software tools. It is shown that standard validity and reliability tests do not properly capture lateral collinearity. A new approach for the assessment of both vertical and lateral collinearity in variance-based structural equation modeling is proposed and demonstrated in the context of the illustrative analysis.
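The paper's core idea, checking the criterion along with the predictors, can be sketched as a "full collinearity" variance inflation factor (VIF) test: regress every variable on all the others and flag large VIFs. The sketch below is illustrative (hypothetical function name, synthetic data), not WarpPLS's implementation:

```python
import numpy as np

def full_collinearity_vifs(data):
    """Full collinearity VIFs: one per column of data.

    data: (n, k) matrix whose columns are ALL model variables,
    predictors and criterion together, so that both vertical and
    lateral collinearity are picked up by the same check.
    """
    n, k = data.shape
    # Standardize so the regressions need no intercept.
    Z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    vifs = []
    for j in range(k):
        y = Z[:, j]
        X = np.delete(Z, j, axis=1)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - np.sum((y - X @ beta) ** 2) / np.sum(y ** 2)
        vifs.append(1.0 / (1.0 - r2))  # VIF_j = 1 / (1 - R_j^2)
    return np.array(vifs)
```

With a redundant pair of columns in the block, the corresponding VIFs blow up while unrelated variables stay near 1, which is exactly the signal the proposed assessment looks for.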

1,432 citations


Book ChapterDOI
Wil M. P. van der Aalst1, Wil M. P. van der Aalst2, A Arya Adriansyah2, Ana Karla Alves de Medeiros3, Franco Arcieri4, Thomas Baier5, Tobias Blickle6, Jagadeesh Chandra Bose2, Peter van den Brand, Ronald Brandtjen, Joos C. A. M. Buijs2, Andrea Burattin7, Josep Carmona8, Malu Castellanos9, Jan Claes10, Jonathan Cook11, Nicola Costantini, Francisco Curbera12, Ernesto Damiani13, Massimiliano de Leoni2, Pavlos Delias, Boudewijn F. van Dongen2, Marlon Dumas14, Schahram Dustdar15, Dirk Fahland2, Diogo R. Ferreira16, Walid Gaaloul17, Frank van Geffen18, Sukriti Goel19, CW Christian Günther, Antonella Guzzo20, Paul Harmon, Arthur H. M. ter Hofstede1, Arthur H. M. ter Hofstede2, John Hoogland, Jon Espen Ingvaldsen, Koki Kato21, Rudolf Kuhn, Akhil Kumar22, Marcello La Rosa1, Fabrizio Maria Maggi2, Donato Malerba23, RS Ronny Mans2, Alberto Manuel, Martin McCreesh, Paola Mello24, Jan Mendling25, Marco Montali26, Hamid Reza Motahari-Nezhad9, Michael zur Muehlen27, Jorge Munoz-Gama8, Luigi Pontieri28, Joel Ribeiro2, A Anne Rozinat, Hugo Seguel Pérez, Ricardo Seguel Pérez, Marcos Sepúlveda29, Jim Sinur, Pnina Soffer30, Minseok Song31, Alessandro Sperduti7, Giovanni Stilo4, Casper Stoel, Keith D. Swenson21, Maurizio Talamo4, Wei Tan12, Christopher Turner32, Jan Vanthienen33, George Varvaressos, Eric Verbeek2, Marc Verdonk34, Roberto Vigo, Jianmin Wang35, Barbara Weber36, Matthias Weidlich37, Ton Weijters2, Lijie Wen35, Michael Westergaard2, Moe Thandar Wynn1 
01 Jan 2012
TL;DR: This manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users to increase the maturity of process mining as a new tool to improve the design, control, and support of operational business processes.
Abstract: Process mining techniques are able to extract knowledge from event logs commonly available in today’s information systems. These techniques provide new means to discover, monitor, and improve processes in a variety of application domains. There are two main drivers for the growing interest in process mining. On the one hand, more and more events are being recorded, thus, providing detailed information about the history of processes. On the other hand, there is a need to improve and support business processes in competitive and rapidly changing environments. This manifesto is created by the IEEE Task Force on Process Mining and aims to promote the topic of process mining. Moreover, by defining a set of guiding principles and listing important challenges, this manifesto hopes to serve as a guide for software developers, scientists, consultants, business managers, and end-users. The goal is to increase the maturity of process mining as a new tool to improve the (re)design, control, and support of operational business processes.

1,135 citations


Posted Content
TL;DR: In this paper, the authors outline a framework that will enable crowd work that is complex, collaborative, and sustainable, and lay out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.
Abstract: Paid crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale. But it is also possible that crowd work will fail to achieve its potential, focusing on assembly-line piecework. Can we foresee a future crowd workplace in which we would want our children to participate? This paper frames the major challenges that stand in the way of this goal. Drawing on theory from organizational behavior and distributed computing, as well as direct feedback from workers, we outline a framework that will enable crowd work that is complex, collaborative, and sustainable. The framework lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.

803 citations


Journal ArticleDOI
TL;DR: Metrics and formulae for quantifying system resilience are proposed that are generic enough to be implemented in a variety of applications, as long as appropriate figures-of-merit and the necessary system parameters, system decomposition, and component parameters are defined.

650 citations


Proceedings ArticleDOI
22 Aug 2012
TL;DR: A peer assisted localization approach is proposed that can reduce the maximum and 80-percentile errors to as small as 2 m and 1 m, in time no longer than the original WiFi scanning, with negligible impact on battery lifetime.
Abstract: Highly accurate indoor localization of smartphones is critical to enable novel location-based features for users and businesses. In this paper, we first conduct an empirical investigation of the suitability of WiFi localization for this purpose. We find that although reasonable accuracy can be achieved, significant errors (e.g., 6-8 m) always exist. The root cause is the existence of distinct locations with similar signatures, which is a fundamental limit of pure WiFi-based methods. Inspired by the high density of smartphones in public spaces, we propose a peer assisted localization approach to eliminate such large errors. It obtains accurate acoustic ranging estimates among peer phones, then maps their locations jointly against the WiFi signature map subject to the ranging constraints. We devise techniques for fast acoustic ranging among multiple phones and build a prototype. Experiments show that it can reduce the maximum and 80-percentile errors to as small as 2 m and 1 m, in time no longer than the original WiFi scanning, with negligible impact on battery lifetime.

430 citations


Journal ArticleDOI
TL;DR: There has been a controversy over the TiO2 PCO mechanisms of arsenite for the past 10 years, but the adsorption mechanisms of inorganic and organic arsenic onto TiO2-based materials are relatively well established.

328 citations


Journal ArticleDOI
TL;DR: This paper reports on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks, finding that collective exploration improved average success over independent exploration because good solutions could diffuse through the network.
Abstract: Complex problems in science, business, and engineering typically require some tradeoff between exploitation of known solutions and exploration for novel ones, where, in many cases, information about known solutions can also disseminate among individual problem solvers through formal or informal networks. Prior research on complex problem solving by collectives has found the counterintuitive result that inefficient networks, meaning networks that disseminate information relatively slowly, can perform better than efficient networks for problems that require extended exploration. In this paper, we report on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks. As expected, we found that collective exploration improved average success over independent exploration because good solutions could diffuse through the network. In contrast to prior work, however, we found that efficient networks outperformed inefficient networks, even in a problem space with qualitative properties thought to favor inefficient networks. We explain this result in terms of individual-level explore-exploit decisions, which we find were influenced by the network structure as well as by strategic considerations and the relative payoff between maxima. We conclude by discussing implications for real-world problem solving and possible extensions.

316 citations


Journal ArticleDOI
TL;DR: It is shown that with the proposed games, global optimization is achieved with local information; specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level.
Abstract: We investigate the problem of achieving global optimization for distributed channel selections in cognitive radio networks (CRNs), using game theoretic solutions. To cope with the lack of centralized control and local influences, we propose two special cases of local interaction game to study this problem. The first is the local altruistic game, in which each user considers the payoffs of itself as well as its neighbors rather than considering itself only. The second is the local congestion game, in which each user minimizes the number of competing neighbors. It is shown that with the proposed games, global optimization is achieved with local information. Specifically, the local altruistic game maximizes the network throughput and the local congestion game minimizes the network collision level. Also, the concurrent spatial adaptive play (C-SAP), which is an extension of the existing spatial adaptive play (SAP), is proposed to achieve the global optimum both autonomously and rapidly.

300 citations


Journal ArticleDOI
TL;DR: An extensive evaluation of 17 confidence measures for stereo matching that compares the most widely used measures as well as several novel techniques proposed here, motivated by the observation that such an evaluation is missing from the rapidly maturing stereo literature.
Abstract: We present an extensive evaluation of 17 confidence measures for stereo matching that compares the most widely used measures as well as several novel techniques proposed here. We begin by categorizing these methods according to which aspects of stereo cost estimation they take into account and then assess their strengths and weaknesses. The evaluation is conducted using a winner-take-all framework on binocular and multibaseline datasets with ground truth. It measures the capability of each confidence method to rank depth estimates according to their likelihood of being correct, to detect occluded pixels, and to generate low-error depth maps by selecting among multiple hypotheses for each pixel. Our work was motivated by the observation that such an evaluation is missing from the rapidly maturing stereo literature and that our findings would be helpful to researchers in binocular and multiview stereo.
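A representative member of this family of confidence measures is the naive peak ratio, which compares the two smallest matching costs at each pixel: pixels whose best and second-best costs are nearly equal are the ambiguous ones that cause large errors. A minimal sketch (illustrative names and toy cost volume, not the paper's code):

```python
import numpy as np

def peak_ratio_confidence(cost):
    """Naive peak-ratio confidence over a matching cost volume.

    cost: (H, W, D) array of matching costs, lower = better match.
    Returns c2/c1 per pixel, where c1 and c2 are the smallest and
    second-smallest costs; ratios near 1 indicate ambiguous pixels.
    """
    c_sorted = np.sort(cost, axis=2)
    c1 = c_sorted[..., 0]
    c2 = c_sorted[..., 1]
    return c2 / np.maximum(c1, 1e-9)  # guard against zero cost
```

Ranking pixels by such a score is exactly what the evaluated framework measures: how well each confidence method orders depth estimates by their chance of being correct.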

278 citations


Journal ArticleDOI
TL;DR: This work proposes a stochastic learning automata (SLA) based channel selection algorithm, with which the secondary users learn from their individual action-reward history and adjust their behaviors towards an NE point, and investigates the achievable performance of the game in terms of system throughput and fairness.
Abstract: We investigate the problem of distributed channel selection using a game-theoretic stochastic learning solution in an opportunistic spectrum access (OSA) system where the channel availability statistics and the number of the secondary users are a priori unknown. We formulate the channel selection problem as a game which is proved to be an exact potential game. However, due to the lack of information about other users and the restriction that the spectrum is time-varying with unknown availability statistics, the task of achieving Nash equilibrium (NE) points of the game is challenging. Firstly, we propose a genie-aided algorithm to achieve the NE points under the assumption of perfect environment knowledge. Based on this, we investigate the achievable performance of the game in terms of system throughput and fairness. Then, we propose a stochastic learning automata (SLA) based channel selection algorithm, with which the secondary users learn from their individual action-reward history and adjust their behaviors towards an NE point. The proposed learning algorithm neither requires information exchange, nor needs prior information about the channel availability statistics and the number of secondary users. Simulation results show that the SLA based learning algorithm achieves high system throughput with good fairness.
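The linear reward-inaction update at the heart of SLA-based schemes is compact enough to sketch for a single user: keep a probability vector over channels, sample a channel, and reinforce it only when the transmission succeeds. This is an illustrative single-user reduction with hypothetical names and parameters, not the paper's multi-user game dynamics:

```python
import numpy as np

def sla_channel_selection(avail_prob, n_steps=20000, b=0.02, seed=1):
    """Single-user Linear Reward-Inaction (L_RI) channel selection.

    avail_prob: true idle probability of each channel (unknown to the
    user; only sampled through trial transmissions). b is the small
    learning step. Returns the final mixed strategy over channels.
    """
    rng = np.random.default_rng(seed)
    k = len(avail_prob)
    p = np.full(k, 1.0 / k)          # start uniform over channels
    for _ in range(n_steps):
        a = rng.choice(k, p=p)       # sample a channel to try
        r = 1.0 if rng.random() < avail_prob[a] else 0.0  # success?
        e = np.zeros(k)
        e[a] = 1.0
        p = (1.0 - b * r) * p + b * r * e  # reinforce only on reward
    return p
```

Because the update is a convex combination, p remains a valid probability distribution at every step, and with a small step size the automaton concentrates on the channel with the highest idle probability, which mirrors how each secondary user drifts toward an NE point from its own action-reward history.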

262 citations


Journal ArticleDOI
09 Mar 2012-ACS Nano
TL;DR: The convergence of the experimental and simulated results suggests that Au NP-enhanced and size-dependent ROS formation can be attributed directly to the localized electromagnetic field as a result of surface plasmonic resonance of Au NPs under light irradiation.
Abstract: The photosensitizer protoporphyrin IX (PpIX) was conjugated with Au nanoparticles (Au NPs) of 19, 66, and 106 nm diameter to study the size-dependent enhancement of reactive oxygen species (ROS) formation enabled by Au NPs. The ROS enhancement ratio is determined to be 1:2.56:4.72 in order of increasing Au NP size, in general agreement with the theoretically calculated field enhancement to the fourth power. The convergence of the experimental and simulated results suggests that Au NP-enhanced and size-dependent ROS formation can be attributed directly to the localized electromagnetic field as a result of surface plasmonic resonance of Au NPs under light irradiation. An in vitro study of the ROS formation enabled by PpIX-conjugated Au NPs in human breast cancer cells (MDA-MB-231) revealed a similar size-dependent enhancement of intracellular ROS formation, while the enhancement greatly depended on cellular uptake of Au NPs. Cellular photodynamic therapy revealed that cell destruction significantly increased in the presence of Au NPs. Compared to the untreated control (0% destruction), 22.6% cell destruction was seen in the PpIX alone group and more than 50% cell destruction was obtained for all PpIX-conjugated Au NPs. The 66 nm Au NPs yielded the highest cell destruction, consistent with the highest cellular uptake and highest ROS formation. Clearly, the complex cellular environment, size-dependent cellular uptake of Au NPs, and ROS generation are vital contributors to the overall cellular PDT efficacy.

Journal ArticleDOI
04 Sep 2012-Langmuir
TL;DR: The graphene electrode was found to be stable under mechanical flexing and to behave as a negative temperature coefficient (NTC) material, exhibiting a rapid electrical resistance decrease with increasing temperature, which suggests the potential use of the inkjet-printed graphene electrode as a writable, very thin, mechanically flexible, and transparent temperature sensor.
Abstract: A graphene electrode was fabricated by inkjet printing as a new means of directly writing and micropatterning the electrode onto flexible polymeric materials. Graphene oxide sheets were dispersed in water and subsequently reduced using an infrared heat lamp at a temperature of ~200 °C for 10 min. Spacing between adjacent ink droplets and the number of printing layers were used to tailor the electrode's electrical sheet resistance as low as 0.3 MΩ/□ and optical transparency as high as 86%. The graphene electrode was found to be stable under mechanical flexing and to behave as a negative temperature coefficient (NTC) material, exhibiting a rapid electrical resistance decrease with increasing temperature. The temperature sensitivity of the graphene electrode was similar to that of conventional NTC materials, but with a response time faster by an order of magnitude. This finding suggests the potential use of the inkjet-printed graphene electrode as a writable, very thin, mechanically flexible, and transparent temperature sensor.

Journal ArticleDOI
TL;DR: This article shows a trade-off design of cognitive transmissions with cooperative relays by jointly considering the spectrum sensing and secondary transmissions in cognitive radio networks.
Abstract: Cognitive radio is a promising technology that enables an unlicensed user (also known as a cognitive user) to identify the white space of a licensed spectrum band (called a spectrum hole) and utilize the detected spectrum hole for its data transmissions. To design a reliable and efficient cognitive radio system, there are two fundamental issues: to devise an accurate and robust spectrum sensing algorithm to detect spectrum holes as accurately as possible; and to design a secondary user transmission mechanism for the cognitive user to utilize the detected spectrum holes as efficiently as possible. This article investigates and shows that cooperative relay technology can significantly benefit the abovementioned two issues, spectrum sensing and secondary transmissions. We summarize existing research about the application of cooperative relays for spectrum sensing (referred to as the cooperative sensing) and address the related potential challenges. We discuss the use of cooperative relays for the secondary transmissions with a primary user's quality-of-service (QoS) constraint, for which a diversity-multiplexing trade-off is developed. In addition, this article shows a trade-off design of cognitive transmissions with cooperative relays by jointly considering the spectrum sensing and secondary transmissions in cognitive radio networks.

Journal ArticleDOI
TL;DR: Experimental results indicate that this CNT-graphene structure shows promise for three-dimensional (3D) graphene-CNT multi-stack structures for high-performance supercapacitor applications.
Abstract: This paper describes the fabrication and characterization of a hybrid nanostructure comprised of carbon nanotubes (CNTs) grown on graphene layers for supercapacitor applications. The entire nanostructure (CNTs and graphene) was fabricated via atmospheric pressure chemical vapor deposition (APCVD) and designed to minimize self-aggregation of the graphene and CNTs. Growth parameters of the CNTs were optimized by adjusting the gas flow rates of hydrogen and methane to control the simultaneous, competing reactions of carbon formation toward CNT growth and hydrogenation, which suppresses CNT growth via hydrogen etching of carbon. Characterization of the supercapacitor performance of the CNT–graphene hybrid nanostructure indicated that the average measured capacitance of a fabricated graphene–CNT structure was 653.7 μF cm⁻² at 10 mV s⁻¹ with a standard rectangular cyclic voltammetry curve. Rapid charging–discharging characteristics (mV s⁻¹) were exhibited with a capacitance of approximately 75% (490.3 μF cm⁻²). These experimental results indicate that this CNT–graphene structure shows promise for three-dimensional (3D) graphene–CNT multi-stack structures for high-performance supercapacitors.

Journal ArticleDOI
TL;DR: This study reveals that the proposed semantic model vectors representation outperforms, and is complementary to, other low-level visual descriptors for video event modeling, and validates it not only as the best individual descriptor, outperforming state-of-the-art global and local static features as well as spatio-temporal HOG and HOF descriptors, but also as the most compact.
Abstract: We propose semantic model vectors, an intermediate-level semantic representation, as a basis for modeling and detecting complex events in unconstrained real-world videos, such as those from YouTube. The semantic model vectors are extracted using a set of discriminative semantic classifiers, each being an ensemble of SVM models trained from thousands of labeled web images, for a total of 280 generic concepts. Our study reveals that the proposed semantic model vectors representation outperforms, and is complementary to, other low-level visual descriptors for video event modeling. We hence present an end-to-end video event detection system, which combines semantic model vectors with other static or dynamic visual descriptors, extracted at the frame, segment, or full clip level. We perform a comprehensive empirical study on the 2010 TRECVID Multimedia Event Detection task (http://www.nist.gov/itl/iad/mig/med10.cfm), which validates the semantic model vectors representation not only as the best individual descriptor, outperforming state-of-the-art global and local static features as well as spatio-temporal HOG and HOF descriptors, but also as the most compact. We also study early and late feature fusion across the various approaches, leading to a 15% performance boost and an overall system performance of 0.46 mean average precision. In order to promote further research in this direction, we made our semantic model vectors for the TRECVID MED 2010 set publicly available for the community to use (http://www1.cs.columbia.edu/~mmerler/SMV.html).

Journal ArticleDOI
01 Oct 2012
TL;DR: This essay contends that a new vision for the IS discipline should address the challenges facing IS departments, and discusses the role of IS curricula and program development in delivering BI&A education.
Abstract: “Big Data,” huge volumes of data in both structured and unstructured forms generated by the Internet, social media, and computerized transactions, is straining our technical capacity to manage it. More importantly, the new challenge is to develop the capability to understand and interpret the burgeoning volume of data to take advantage of the opportunities it provides in many human endeavors, ranging from science to business. Data Science, and in business schools, Business Intelligence and Analytics (BI&A) are emerging disciplines that seek to address the demands of this new era. Big Data and BI&A present unique challenges and opportunities not only for the research community, but also for Information Systems (IS) programs at business schools. In this essay, we provide a brief overview of BI&A, speculate on the role of BI&A education in business schools, present the challenges facing IS departments, and discuss the role of IS curricula and program development, in delivering BI&A education. We contend that a new vision for the IS discipline should address these challenges.

Journal ArticleDOI
TL;DR: The global stability of the proposed neural network and the optimality of the neural solution are proven in theory, and application-oriented simulations demonstrate the effectiveness of the proposed method.

Journal ArticleDOI
TL;DR: The maximal three-phase contact line attainable along the actual droplet boundary is found to be a direct and linear parameter that decides the depinning force on the superhydrophobic surface.
Abstract: This study explores how surface morphology affects the dynamics of contact line depinning of an evaporating sessile droplet on micropillared superhydrophobic surfaces. The result shows that neither a liquid-solid contact area nor an apparent contact line is a critical physical parameter to determine the depinning force. The configuration of a contact line on a superhydrophobic surface is multimodal, composed of both two phases (liquid and air) and three phases (liquid, solid, and air). The multimodal state is dynamically altered when a droplet recedes. The maximal three-phase contact line attainable along the actual droplet boundary is found to be a direct and linear parameter that decides the depinning force on the superhydrophobic surface.

Journal ArticleDOI
TL;DR: Strategic project management is gradually becoming a popular and growing trend within the discipline of project management and the general idea is that project management teams must learn how to deal with strategic project management as discussed by the authors.
Abstract: Strategic project management is gradually becoming a popular and growing trend within the discipline of project management. The general idea is that project management teams must learn how to deal ...

Journal ArticleDOI
05 Dec 2012-Langmuir
TL;DR: In this article, the fabrication of cotton fabrics with single-face superhydrophobicity using a simple foam finishing process was reported, which exhibited asymmetric wettability on their two faces: one face showing super-hydrophobic behavior (highly nonwetting or water-repellent characteristics) and the other face retaining the inherent hydrophilic nature of cotton.
Abstract: This article reports on the fabrication of cotton fabrics with single-faced superhydrophobicity using a simple foam finishing process. Unlike most commonly reported superhydrophobic fabrics, the fabrics developed in this study exhibit asymmetric wettability on their two faces: one face showing superhydrophobic behavior (highly nonwetting or water-repellent characteristics) and the other face retaining the inherent hydrophilic nature of cotton. The superhydrophobic face exhibits a low contact angle hysteresis of θ(a)/θ(r) = 151°/144° (θ(a), advancing contact angle; θ(r), receding contact angle), which enables water drops to roll off the surface easily so as to endow the surface with well-known self-cleaning properties. The untreated hydrophilic face preserves its water-absorbing capability, resulting in 44% of the water-absorbing capacity compared to that of the original cotton samples with both sides untreated (hydrophilic). The single-faced superhydrophobic fabrics also retain moisture transmissibility that is as good as that of the original untreated cotton fabrics. They also show robust washing fastness with the chemical cross-linking process of hydrophobic fluoropolymer to fabric fibers. Fabric materials with such asymmetric or gradient wettability will be of great use in many applications such as unidirectional liquid transporting, moisture management, microfluidic systems, desalination of seawater, flow management in fuel cells, and water/oil separation.

Proceedings ArticleDOI
25 Mar 2012
TL;DR: This work proposes a framework for collaborative key generation among a group of wireless devices leveraging RSS and develops a secret key extraction scheme exploiting the trend exhibited in RSS resulted from shadow fading, which is robust to outsider adversary performing stalking attacks.
Abstract: Securing communication in mobile wireless networks is challenging because the traditional cryptographic-based methods are not always applicable in dynamic mobile wireless environments. Using physical layer information of radio channel to generate keys secretly among wireless devices has been proposed as an alternative in wireless mobile networks. And the Received Signal Strength (RSS) based secret key extraction gains much attention due to the RSS readings are readily available in wireless infrastructure. However, the problem of using RSS to generate keys among multiple devices to ensure secure group communication remains open. In this work, we propose a framework for collaborative key generation among a group of wireless devices leveraging RSS. The proposed framework consists of a secret key extraction scheme exploiting the trend exhibited in RSS resulted from shadow fading, which is robust to outsider adversary performing stalking attacks. To deal with mobile devices not within each other's communication range, we employ relay nodes to achieve reliable key extraction. To enable secure group communication, two protocols, namely star-based and chain-based, are developed in our framework by exploiting RSS from multiple devices to perform group key generation collaboratively. Our experiments in both outdoor and indoor environments confirm the feasibility of using RSS for group key generation among multiple wireless devices under various mobile scenarios. The results also demonstrate that our collaborative key extraction scheme can achieve a lower bit mismatch rate compared to existing works when maintaining the comparable bit generation rate.
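The bit-extraction step that such schemes build on can be illustrated with a median threshold plus a guard band that drops the samples most likely to flip between the two devices. This is a simplified sketch with hypothetical names (index reconciliation between peers is assumed to happen over the public channel), not the paper's protocol:

```python
import numpy as np

def rss_to_bits(rss, guard=0.1):
    """Quantize an RSS trace to bits against its median.

    Samples within +/- guard * std of the median fall in a guard band
    and are dropped: they are the ones most likely to quantize
    differently at the two devices due to independent noise.
    Returns the bits and the kept indices (to share with the peer).
    """
    rss = np.asarray(rss, dtype=float)
    med, sd = np.median(rss), rss.std()
    keep = np.flatnonzero(np.abs(rss - med) > guard * sd)
    return (rss[keep] > med).astype(int), keep

def bit_mismatch_rate(bits_a, bits_b):
    """Fraction of positions where the two bit strings disagree."""
    n = min(len(bits_a), len(bits_b))
    return float(np.mean(bits_a[:n] != bits_b[:n]))
```

Because both devices observe the same slow shadow-fading trend plus independent small-scale noise, thresholding the trend yields mostly matching bits, and the guard band pushes the mismatch rate down further at the cost of some bit generation rate, which is exactly the trade-off the paper's evaluation reports.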

Journal ArticleDOI
TL;DR: The authors explored the antecedent factors that impact new product development team stability as well as its consequences and found that the most direct antecedents of team stability are goal stability and goal support.
Abstract: Group member change or team stability is a popular and important topic in the group and organizational behavior literature. Team member stability is viewed as a critical factor for an effectively functioning and performing group. Even though there is a plethora of studies on group member change and stability, research on member stability in cross-functional new product development teams is still lacking. This study explores the antecedent factors that impact new product development team stability as well as its consequences. By studying 211 new product teams, we found: (1) the most direct antecedents of team stability are goal stability and goal support; and (2) team stability has a significantly positive effect on outcome variables including team learning and cycle time. This study also shows that team stability may not be universally good; under some circumstances, such as when there is a high degree of market and technical turbulence, team instability can be advantageous.

Journal ArticleDOI
TL;DR: The important role of the integrin β1 signaling pathway in regulating the nanofiber-induced fibroblast phenotypic alteration is suggested, providing insight into the possible application of collagen-containing nanofibrous matrices for skin regeneration.

Journal ArticleDOI
TL;DR: It is demonstrated for the first time that the anodizing parameters can be engineered to design novel pillar-on-pore (POP) hybrid nanostructures directly in a simple one-step fabrication process so that superior surface superhydrophobicity can also be realized effectively from the electrochemical anodization process.
Abstract: Conventional electrochemical anodizing processes of metals such as aluminum typically produce planar and homogeneous nanopore structures. If hydrophobically treated, such 2D planar and interconnected pore structures typically result in lower contact angle and larger contact angle hysteresis than 3D disconnected pillar structures and, hence, exhibit inferior superhydrophobic efficiency. In this study, we demonstrate for the first time that the anodizing parameters can be engineered to design novel pillar-on-pore (POP) hybrid nanostructures directly in a simple one-step fabrication process so that superior surface superhydrophobicity can also be realized effectively from the electrochemical anodization process. On the basis of the characteristic of forming a self-ordered porous morphology in a hexagonal array, the modulation of anodizing voltage and duration enabled the formulation of the hybrid-type nanostructures having controlled pillar morphology on top of a porous layer in both mild and hard anodizatio...

Journal ArticleDOI
TL;DR: The study shows that, on laboratory scale, these energetic compounds are easily degraded in solution by suspensions of bimetallic particles (Fe/Ni and Fe/Cu) prepared by electro-less deposition.

Journal ArticleDOI
TL;DR: An approach that combines the cubic phase function (CPF) and the high-order ambiguity function (HAF) is proposed, referred to as the hybrid CPF-HAF method, which outperforms the HAF in terms of the accuracy and signal-to-noise-ratio threshold.
Abstract: In this paper, we consider parameter estimation of high-order polynomial-phase signals (PPSs). We propose an approach that combines the cubic phase function (CPF) and the high-order ambiguity function (HAF), and is referred to as the hybrid CPF-HAF method. In the proposed method, the phase differentiation is first applied on the observed PPS to produce a cubic phase signal, whose parameters are, in turn, estimated by the CPF. The performance analysis, carried out in the paper, considers up to the tenth-order PPSs, and is supported by numerical examples revealing that the proposed approach outperforms the HAF in terms of the accuracy and signal-to-noise-ratio threshold. Extensions to multicomponent and multidimensional PPSs are also considered, all supported by numerical examples. Specifically, when multicomponent PPSs are considered, the product version of the CPF-HAF outperforms the product HAF (PHAF) that fails to estimate parameters of components whose PPS order exceeds three.
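The CPF step at the heart of the method above can be sketched numerically. The following is a minimal, noiseless illustration (all signal parameters, lengths, and the search grid are invented for the example, not taken from the paper): for a cubic phase signal s(n) = exp(j(a0 + a1·n + a2·n² + a3·n³)), the CPF C(n0, Ω) = Σ_m s(n0+m)·s(n0−m)·exp(−jΩm²) peaks at the instantaneous frequency rate φ''(n0) = 2a2 + 6a3·n0, so locating the peak at two time instants recovers a2 and a3.

```python
import numpy as np

# Illustrative cubic phase signal with invented parameters (noiseless case).
N = 128
n = np.arange(-N, N + 1)
a0, a1, a2, a3 = 0.3, 0.01, 0.002, 1e-5
s = np.exp(1j * (a0 + a1 * n + a2 * n**2 + a3 * n**3))

def cpf_peak(signal, n0, omegas):
    """Return the frequency rate W maximizing |C(n0, W)| over a grid."""
    center = N + n0                      # array index corresponding to time n0
    m_max = min(center, len(signal) - 1 - center)
    m = np.arange(0, m_max + 1)
    prod = signal[center + m] * signal[center - m]
    mags = [np.abs(np.sum(prod * np.exp(-1j * w * m**2))) for w in omegas]
    return omegas[int(np.argmax(mags))]

omegas = np.arange(0.0, 0.02, 1e-5)      # search grid for phi''(n0)
w0 = cpf_peak(s, 0, omegas)              # peak near 2*a2
w1 = cpf_peak(s, 64, omegas)             # peak near 2*a2 + 6*a3*64
a2_hat = w0 / 2
a3_hat = (w1 - w0) / (6 * 64)
print(a2_hat, a3_hat)
```

In the full hybrid method, a phase-differentiation step first reduces the higher-order PPS to a cubic phase signal before this CPF estimation is applied; the sketch only covers the cubic stage.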

Proceedings ArticleDOI
16 Jun 2012
TL;DR: A discriminative latent topic model for scene recognition based on the modeling of two types of visual contexts, i.e., the category specific global spatial layout of different scene elements and the reinforcement of the visual coherence in uniform local regions is presented.
Abstract: We present a discriminative latent topic model for scene recognition. The capacity of our model originates from the modeling of two types of visual contexts, i.e., the category-specific global spatial layout of different scene elements and the reinforcement of visual coherence in uniform local regions. In contrast, most previous methods for scene recognition either modeled only one of these two visual contexts or ignored both entirely. We cast these two coupled visual contexts in a discriminative Latent Dirichlet Allocation framework, namely the context-aware topic model. Scene recognition is then achieved by Bayesian inference given a target image. Our experiments on several scene recognition benchmarks clearly demonstrate the advantages of the proposed model.

Journal ArticleDOI
TL;DR: In this paper, the role of various physical processes in controlling total water level was examined for the August 2011 tropical cyclone Irene and a March 2010 nor'easter that affected the New York City (NYC) metropolitan area.
Abstract: Detailed simulations, comparisons with observations, and model sensitivity experiments are presented for the August 2011 tropical cyclone Irene and a March 2010 nor'easter that affected the New York City (NYC) metropolitan area. These storms brought strong winds, heavy rainfall, and the fourth and seventh highest gauged storm tides (total water level), respectively, at the Battery, NYC. To dissect the storm tides and examine the role of various physical processes in controlling total water level, a series of model experiments was performed where one process was omitted for each experiment, and results were studied for eight different tide stations. Neglecting remote meteorological forcing (beyond ∼250 km) led to typical reductions of 7–17% in peak storm tide, neglecting water density variations led to typical reductions of 1–13%, neglecting a parameterization that accounts for enhanced wind drag due to wave steepness led to typical reductions of 3–12%, and neglecting atmospheric pressure gradient forcing led to typical reductions of 3–11%. Neglecting freshwater inputs to the model domain led to reductions of 2% at the Battery and 9% at Piermont, 14 km up the Hudson River from NYC. Few storm surge modeling studies or operational forecasting systems incorporate the "estuary effects" of freshwater flows and water density variations, yet joint omission of these processes for Irene leads to a low bias in storm tide for NYC sites like La Guardia and Newark Airports (9%) and the Battery (7%), as well as nearby vulnerable sites like the Indian Point nuclear plant (23%).
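The omit-one-process experimental design used in the study can be sketched generically: rerun the model with each forcing disabled and report the percent reduction in peak water level. The toy surrogate model and contribution values below are hypothetical placeholders for illustration only, not the study's hydrodynamic model or its results.

```python
def peak_storm_tide(processes):
    """Toy surrogate: each enabled process contributes additively (meters).
    All values here are invented for illustration."""
    contribution = {
        "remote_forcing": 0.35,
        "density_variation": 0.20,
        "wave_drag": 0.25,
        "pressure_gradient": 0.18,
        "freshwater_input": 0.07,
    }
    baseline = 2.0  # astronomical tide + local wind setup (invented)
    return baseline + sum(contribution[p] for p in processes)

all_processes = ["remote_forcing", "density_variation", "wave_drag",
                 "pressure_gradient", "freshwater_input"]
full = peak_storm_tide(all_processes)
for omitted in all_processes:
    reduced = peak_storm_tide([p for p in all_processes if p != omitted])
    pct = 100.0 * (full - reduced) / full
    print(f"omit {omitted}: -{pct:.1f}% peak storm tide")
```

In the actual study each "omission run" is a full hydrodynamic simulation, and the percent change is evaluated at each of the eight tide stations rather than at a single point.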

Proceedings Article
20 May 2012
TL;DR: This paper reports on the emergence of a particular social convention on Twitter—the way to indicate a tweet is being reposted and to attribute the content to its source.
Abstract: The way in which social conventions emerge in communities has been of interest to social scientists for decades. Here we report on the emergence of a particular social convention on Twitter—the way to indicate a tweet is being reposted and to attribute the content to its source. Initially, different variations were invented and spread through the Twitter network. The inventors and early adopters were well-connected, active, core members of the Twitter community. The diffusion networks of these conventions were dense and highly clustered, so no single user was critical to the adoption of the conventions. Despite being invented at different times and having different adoption rates, only two variations came to be widely adopted. In this paper we describe this process in detail, highlighting insights and raising questions about how social conventions emerge.

Journal ArticleDOI
TL;DR: It is suggested that the use of DspB-loaded multilayer coatings presents a promising method for creating biocompatible surfaces with high antibiofilm efficiency, especially when combined with conventional antimicrobial treatment of dispersed bacteria.
Abstract: We developed a highly efficient, biocompatible surface coating that disperses bacterial biofilms through enzymatic cleavage of the extracellular biofilm matrix. The coating was fabricated by binding the naturally existing enzyme dispersin B (DspB) to surface-attached polymer matrices constructed via a layer-by-layer (LbL) deposition technique. LbL matrices were assembled through electrostatic interactions of poly(allylamine hydrochloride) (PAH) and poly(methacrylic acid) (PMAA), followed by chemical cross-linking with glutaraldehyde and pH-triggered removal of PMAA, producing a stable PAH hydrogel matrix used for DspB loading. The amount of DspB loaded increased linearly with the number of PAH layers in surface hydrogels. DspB was retained within these coatings in the pH range from 4 to 7.5. DspB-loaded coatings inhibited biofilm formation by two clinical strains of Staphylococcus epidermidis. Biofilm inhibition was ≥98% compared to mock-loaded coatings as determined by CFU enumeration. In addition, DspB-loaded coatings did not inhibit attachment or growth of cultured human osteoblast cells. We suggest that the use of DspB-loaded multilayer coatings presents a promising method for creating biocompatible surfaces with high antibiofilm efficiency, especially when combined with conventional antimicrobial treatment of dispersed bacteria.