
Showing papers from Stevens Institute of Technology published in 2010


Posted Content
TL;DR: In this article, the authors demonstrate how to use Mechanical Turk for conducting behavioral research and lower the barrier to entry for researchers who could benefit from this platform, and illustrate the mechanics of putting a task on Mechanical Turk including recruiting subjects, executing the task, and reviewing the work submitted.
Abstract: Amazon’s Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this paper is to demonstrate how to use this website for conducting behavioral research and lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments and faster iteration between developing theory and executing experiments. We will discuss how the behavior of workers compares to experts and to laboratory subjects. Then, we illustrate the mechanics of putting a task on Mechanical Turk including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform including techniques for conducting synchronous experiments, methods to ensure high quality work, how to keep data private, and how to maintain code security.
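The paper walks through the mechanics of posting a task, recruiting workers, and reviewing submissions using the MTurk tooling available in 2010. As a rough illustration only, here is a minimal sketch of those same steps using the modern boto3 MTurk client (an assumption on our part, not the interface described in the paper; the question file and parameter values are placeholders):

# Minimal sketch: post a HIT, then review and approve submitted assignments.
# Illustrative only; the paper predates boto3 and describes the 2010-era MTurk tooling.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint so rehearsal HITs do not spend real money.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Task definition (an MTurk QuestionForm/ExternalQuestion XML document, omitted here).
question_xml = open("question.xml").read()

hit = mturk.create_hit(
    Title="Short decision-making survey",
    Description="Answer one question about a simple choice.",
    Keywords="survey, experiment, behavioral",
    Reward="0.25",                      # paid per assignment, in USD
    MaxAssignments=50,                  # number of distinct workers (subjects)
    LifetimeInSeconds=3 * 24 * 3600,    # how long the HIT stays visible
    AssignmentDurationInSeconds=600,    # time allotted per worker
    Question=question_xml,
)
hit_id = hit["HIT"]["HITId"]

# Later: review the submitted work and approve (or reject) each assignment.
result = mturk.list_assignments_for_hit(HITId=hit_id, AssignmentStatuses=["Submitted"])
for a in result["Assignments"]:
    mturk.approve_assignment(AssignmentId=a["AssignmentId"])

Using the sandbox endpoint lets a requester rehearse the full post-collect-approve cycle before paying any workers.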

2,755 citations


Journal ArticleDOI
TL;DR: A piezoelectric nanogenerator based on PZT nanofibers, with a diameter and length of approximately 60 nm and 500 μm, was reported, aligned on interdigitated electrodes of platinum fine wires and packaged using a soft polymer on a silicon substrate.
Abstract: Energy harvesting technologies that are engineered to miniature sizes, while still increasing the power delivered to wireless electronics,(1, 2) portable devices, stretchable electronics,(3) and implantable biosensors,(4, 5) are strongly desired. Piezoelectric nanowire- and nanofiber-based generators have potential uses for powering such devices through a conversion of mechanical energy into electrical energy.(6) However, the piezoelectric voltage constant of the semiconductor piezoelectric nanowires in the recently reported piezoelectric nanogenerators(7-12) is lower than that of lead zirconate titanate (PZT) nanomaterials. Here we report a piezoelectric nanogenerator based on PZT nanofibers. The PZT nanofibers, with a diameter and length of approximately 60 nm and 500 μm, were aligned on interdigitated electrodes of platinum fine wires and packaged using a soft polymer on a silicon substrate. The measured output voltage and power under periodic stress application to the soft polymer was 1.63 V and 0.03 ...

818 citations


Journal ArticleDOI
TL;DR: An overall framework for data governance is provided that can be used by researchers to focus on important data governance issues, and by practitioners to develop an effective data governance approach, strategy and design.
Abstract: Introduction: Organizations are becoming increasingly serious about the notion of "data as an asset" as they face increasing pressure for reporting a "single version of the truth." In a 2006 survey of 359 North American organizations that had deployed business intelligence and analytic systems, a program for the governance of data was reported to be one of the five success "practices" for deriving business value from data assets. In light of the opportunities to leverage data assets as well as ensure legislative compliance to mandates such as the Sarbanes-Oxley (SOX) Act and Basel II, data governance has also recently been given significant prominence in practitioners' conferences, such as TDWI (The Data Warehousing Institute) World Conference and DAMA (Data Management Association) International Symposium.
The objective of this article is to provide an overall framework for data governance that can be used by researchers to focus on important data governance issues, and by practitioners to develop an effective data governance approach, strategy and design. Designing data governance requires stepping back from day-to-day decision making and focusing on identifying the fundamental decisions that need to be made and who should be making them. Based on Weill and Ross, we also differentiate between governance and management as follows:
• Governance refers to what decisions must be made to ensure effective management and use of IT (decision domains) and who makes the decisions (locus of accountability for decision-making).
• Management involves making and implementing decisions.
For example, governance includes establishing who in the organization holds decision rights for determining standards for data quality. Management involves determining the actual metrics employed for data quality. Here, we focus on the former.
Corporate governance has been defined as a set of relationships between a company's management, its board, its shareholders and other stakeholders that provide a structure for determining organizational objectives and monitoring performance, thereby ensuring that corporate objectives are attained. Considering the synergy between macroeconomic and structural policies, corporate governance is a key element in not only improving economic efficiency and growth, but also enhancing corporate confidence. A framework for linking corporate and IT governance (see Figure 1) has been proposed by Weill and Ross.
Unlike these authors, however, we differentiate between IT assets and information assets: IT assets refer to technologies (computers, communication and databases) that help support the automation of well-defined tasks, while information assets (or data) are defined as facts having value or potential value that are documented. Note that in the context of this article, we do not differentiate between data and information.
Next, we use the Weill and Ross framework for IT governance as a starting point for our own framework for data governance. We then propose a set of five data decision domains, why they are important, and guidelines for what governance is needed for each decision domain. By operationalizing the locus of accountability of decision making (the "who") for each decision domain, we create a data governance matrix, which can be used by practitioners to design their data governance. The insights presented here have been informed by field research, and address an area that is of growing interest to the information systems (IS) research and practice community.

478 citations


Posted Content
TL;DR: There is no direct relationship between corporate responsibility and financial performance—merely an indirect relationship that relies on the mediating effect of a firm's intangible resources.
Abstract: This paper examines the effects of a firm’s intangible resources in mediating the relationship between corporate responsibility and financial performance. We hypothesize that previous empirical findings of a positive relationship between social and financial performance may be spurious because the researchers failed to account for the mediating effects of intangible resources. Our results indicate that there is no direct relationship between corporate responsibility and financial performance — merely an indirect relationship that relies on the mediating effect of a firm’s intangible resources. We demonstrate our theoretical contention with the use of a database comprising 599 companies from 28 countries.

403 citations


Journal ArticleDOI
TL;DR: The generalized diversity gain is derived and it is shown that, with a guaranteed primary outage probability, the full diversity order is achieved using the proposed adaptive cooperation scheme.
Abstract: In this correspondence, an adaptive cooperation diversity scheme with best-relay selection is proposed for multiple-relay cognitive radio networks to improve the performance of secondary transmissions while ensuring the quality of service (QoS) of primary transmissions. Exact closed-form expressions of the outage probability of secondary transmissions, referred to as secondary outage probability, are derived under the constraint of satisfying a required outage probability of primary transmissions (primary outage probability) for both the traditional non-cooperation and the proposed adaptive cooperation schemes over Rayleigh fading channels. Numerical and simulation results show that, with a guaranteed primary outage probability, a floor of the secondary outage probability occurs in high signal-to-noise ratio (SNR) regions. Moreover, the outage probability floor of the adaptive cooperation scheme is lower than that of the non-cooperation scenario, which illustrates the advantage of the proposed scheme. In addition, we generalize the traditional definition of the diversity gain, which cannot be applied directly in cognitive radio networks since mutual interference between the primary and secondary users should be considered. We derive the generalized diversity gain and show that, with a guaranteed primary outage probability, the full diversity order is achieved using the proposed adaptive cooperation scheme.
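The closed-form secondary outage expressions themselves are not reproduced in the abstract. For orientation only, the single-link Rayleigh building block that such derivations start from can be sketched as follows (our notation, not the paper's):

\[
P_{\text{out}} = \Pr\!\left[\log_2(1+\gamma) < R\right] = \Pr\!\left[\gamma < 2^{R}-1\right] = 1-\exp\!\left(-\frac{2^{R}-1}{\bar{\gamma}}\right),
\]

where γ is the exponentially distributed instantaneous SNR of a Rayleigh-faded link with mean \(\bar{\gamma}\) and R is the target rate; the paper's analysis additionally conditions on the primary outage constraint and the best-relay selection, which is what produces the secondary outage floor noted above.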

362 citations


Journal ArticleDOI
TL;DR: The results show that the proposed GLRT exhibits better performance than other existing techniques, particularly when the number of samples is small, which is particularly critical in vehicular applications.
Abstract: In this paper, we consider the problem of detecting a primary user in a cognitive radio network by employing multiple antennas at the cognitive receiver. In vehicular applications, cognitive radios typically transit regions with differing densities of primary users. Therefore, speed of detection is key, and so, detection based on a small number of samples is particularly advantageous for vehicular applications. Assuming no prior knowledge of the primary user's signaling scheme, the channels between the primary user and the cognitive user, and the variance of the noise seen at the cognitive user, a generalized likelihood ratio test (GLRT) is developed to detect the presence/absence of the primary user. Asymptotic performance analysis for the proposed GLRT is also presented. A performance comparison between the proposed GLRT and other existing methods, such as the energy detector (ED) and several eigenvalue-based methods under the condition of unknown or inaccurately known noise variance, is provided. Our results show that the proposed GLRT exhibits better performance than other existing techniques, particularly when the number of samples is small, which is particularly critical in vehicular applications.
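The abstract does not state the form of the test statistic. As an illustration only, a commonly used eigenvalue-based GLRT statistic for a rank-one signal in white noise of unknown variance compares the largest eigenvalue of the sample covariance with the average of the remaining ones; the sketch below is our own and not necessarily the exact statistic derived in the paper:

import numpy as np

def glrt_statistic(x):
    """Eigenvalue-based detection statistic for multi-antenna sensing.

    x: complex array of shape (M, N) -- M antennas, N samples.
    Returns the ratio of the largest eigenvalue of the sample covariance
    to the average of the remaining eigenvalues. Under noise only the
    eigenvalues are comparable and the ratio stays near 1; a primary
    signal inflates the largest eigenvalue.
    """
    M, N = x.shape
    R = (x @ x.conj().T) / N                    # sample covariance, M x M
    eig = np.sort(np.linalg.eigvalsh(R))[::-1]  # real eigenvalues, descending
    return eig[0] / eig[1:].mean()

# Presence/absence is decided against a threshold chosen for a target
# false-alarm probability (e.g., via Monte Carlo under noise only).
rng = np.random.default_rng(0)
noise = (rng.standard_normal((4, 50)) + 1j * rng.standard_normal((4, 50))) / np.sqrt(2)
print(glrt_statistic(noise))   # close to 1 under the noise-only hypothesis

Because the statistic is a ratio of eigenvalues, the unknown noise variance cancels, which is the property that makes such detectors attractive when the noise level is not accurately known.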

320 citations


Posted Content
TL;DR: A meta-analytic review of the generalizability of the relationships between NPD speed and 17 of its antecedents found that process and team characteristics are more generalizable and cross-situationally consistent determinants of N PD speed than strategy and project characteristics.
Abstract: New product development (NPD) speed is a key component of time-based strategy, which has become increasingly important for managing innovation in a fast-changing business environment. This meta-analytic review assesses the generalizability of the relationships between NPD speed and 17 of its antecedents to provide a better understanding of the salient and cross-situationally consistent factors that affect NPD speed. We grouped the antecedents into four categories of strategy, project, process, and team, and found that process and team characteristics are more generalizable and cross-situationally consistent determinants of NPD speed than strategy and project characteristics. We also conducted subgroup analyses and found that research method variables, such as level of analysis, source of data, and measurement of speed, moderate the relationships between NPD speed and its antecedents. We apply the study’s findings to assess several models of NPD speed, such as the balanced model of product development, the strategic orientation and organizational capability model, the compression versus the experiential model, the centrifugal and centripetal model, and the product development cycle time model. We also discuss the implications of our findings for research and practice.

271 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analytic review assesses the generalizability of the relationships between new product development speed and 17 antecedents to provide a better understanding of the salient and cross-situationally consistent factors that affect NPD speed.

243 citations


Journal ArticleDOI
TL;DR: The results show that it is possible to detect wireless identity-based attacks with both a high detection rate and a low false-positive rate, thereby providing strong evidence of the effectiveness of the attack detector utilizing the spatial correlation of RSS and the attack localizer.
Abstract: Wireless networks are vulnerable to identity-based attacks, including spoofing and Sybil attacks, which allows for many other forms of attacks on the networks. Although the identity of a node can be verified through cryptographic authentication, authentication is not always possible, because it requires key management and additional infrastructural overhead. In this paper, we propose a method for detecting both spoofing and Sybil attacks by using the same set of techniques. We first propose a generalized attack-detection model that utilizes the spatial correlation of received signal strength (RSS) inherited from wireless nodes. We further provide a theoretical analysis of our approach. We then derive the test statistics for detection of identity-based attacks by using the K-means algorithm. Our attack detector is robust when handling the situations of attackers that use different transmission power levels to attack the detection scheme. We further describe how we integrated our attack detector into a real-time indoor localization system, which can also localize the positions of the attackers. We show that the positions of the attackers can be localized using either area- or point-based localization algorithms with the same relative errors as in the normal case. We further evaluated our methods through experimentation in two real office buildings using both an IEEE 802.11 (WiFi) network and an IEEE 802.15.4 (ZigBee) network. Our results show that it is possible to detect wireless identity-based attacks with both a high detection rate and a low false-positive rate, thereby providing strong evidence of the effectiveness of the attack detector utilizing the spatial correlation of RSS and the attack localizer.
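The abstract states that the test statistic is derived via K-means clustering of RSS readings. A minimal sketch consistent with that description (simplified, with hypothetical landmark counts and values; not the paper's exact formulation) clusters the RSS vectors claimed by one identity into two groups and uses the distance between centroids as the statistic, since two physically separated transmitters sharing an identity produce two distinct RSS clusters:

import numpy as np
from sklearn.cluster import KMeans

def spoofing_statistic(rss):
    """rss: array of shape (n_readings, n_landmarks) -- RSS vectors (dBm)
    collected for a single claimed node identity at several access points
    or landmarks. Returns the distance between the two K-means centroids;
    a large value suggests two distinct transmitters (spoofing / Sybil),
    a small value a single legitimate node."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rss)
    c0, c1 = km.cluster_centers_
    return np.linalg.norm(c0 - c1)

# Example: readings from one genuine node vs. the same identity also used
# by a spoofing attacker at a different location.
rng = np.random.default_rng(1)
genuine = rng.normal(loc=[-50, -60, -70], scale=2.0, size=(100, 3))
attacker = rng.normal(loc=[-65, -48, -58], scale=2.0, size=(100, 3))
print(spoofing_statistic(genuine))                          # small
print(spoofing_statistic(np.vstack([genuine, attacker])))   # large -> flag attack

The detection threshold would be set from the statistic's distribution in the attack-free case, which is where the paper's theoretical analysis and robustness to varying transmit power come in.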

213 citations


Journal ArticleDOI
TL;DR: The proposed framework is based on the expanded application of two primary enablers of enterprise resilience: the capability of an enterprise to connect systems, people, processes and information in a way that allows the enterprise to become more connected and responsive to the dynamics of its environment, stakeholders and competitors.
Abstract: This article proposes a framework for investigation into 'extended enterprise resilience' based on the key attributes of enterprise resilience in the context of extended enterprises. Such attributes, namely agility, flexibility, adaptability and connectivity, are frequently defined as supporting attributes of enterprise resilience, but the issue is how they can be more effectively applied to extended enterprises. The role of information technology in assisting connectivity and collaboration is frequently recognised as contributing to resilience on all levels, and will likewise be employed on the level of extended enterprise systems. The proposed framework is based on the expanded application of two primary enablers of enterprise resilience: (i) the capability of an enterprise to connect systems, people, processes and information in a way that allows the enterprise to become more connected and responsive to the dynamics of its environment, stakeholders and competitors; (ii) the alignment of information technology with business goals. The former requires inter- and intra-level interoperability and integration within the extended enterprises, and the latter requires modelling of the underlying technology infrastructure and creation of a consolidated view of, and access to, all available resources in the extended enterprises that can be attained by well-defined enterprise architecture.

207 citations


Journal ArticleDOI
TL;DR: RAMOBoost adaptively ranks minority class instances at each learning iteration according to a sampling probability distribution that is based on the underlying data distribution, and can adaptively shift the decision boundary toward difficult-to-learn minority and majority class instances by using a hypothesis assessment procedure.
Abstract: In recent years, learning from imbalanced data has attracted growing attention from both academia and industry due to the explosive growth of applications that use and produce imbalanced data. However, because of the complex characteristics of imbalanced data, many real-world solutions struggle to provide robust efficiency in learning-based applications. In an effort to address this problem, this paper presents Ranked Minority Oversampling in Boosting (RAMOBoost), a ranked minority oversampling (RAMO) technique based on the idea of adaptive synthetic data generation in an ensemble learning system. Briefly, RAMOBoost adaptively ranks minority class instances at each learning iteration according to a sampling probability distribution that is based on the underlying data distribution, and can adaptively shift the decision boundary toward difficult-to-learn minority and majority class instances by using a hypothesis assessment procedure. Simulation analysis on 19 real-world datasets, assessed over various metrics including overall accuracy, precision, recall, F-measure, G-mean, and receiver operating characteristic analysis, is used to illustrate the effectiveness of this method.
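As a rough sketch of the ranked-oversampling idea only (a single sampling round; the actual RAMOBoost embeds this inside a boosting ensemble with hypothesis-based weight updates, and its exact weighting function may differ), minority instances can be assigned sampling probabilities that grow with the number of majority-class neighbors around them and then oversampled by SMOTE-style interpolation:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def ranked_minority_oversample(X_min, X_maj, n_synthetic, k1=5, k2=5, alpha=0.3, rng=None):
    """Simplified ranked minority oversampling (single round, no boosting).

    Each minority point gets a sampling weight that grows with the number of
    majority neighbors among its k1 nearest neighbors in the combined data,
    so hard-to-learn border points are oversampled more often. Synthetic
    points are SMOTE-style interpolations toward minority neighbors.
    """
    rng = rng or np.random.default_rng(0)
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.concatenate([np.zeros(len(X_min)), np.ones(len(X_maj))])

    nn_all = NearestNeighbors(n_neighbors=k1 + 1).fit(X_all)
    _, idx = nn_all.kneighbors(X_min)
    delta = is_maj[idx[:, 1:]].sum(axis=1)          # majority neighbors per minority point
    weights = 1.0 / (1.0 + np.exp(-alpha * delta))  # ranked sampling probabilities
    weights /= weights.sum()

    nn_min = NearestNeighbors(n_neighbors=k2 + 1).fit(X_min)
    _, min_idx = nn_min.kneighbors(X_min)

    synthetic = []
    for i in rng.choice(len(X_min), size=n_synthetic, p=weights):
        j = rng.choice(min_idx[i, 1:])              # a minority-class neighbor
        gap = rng.random()
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.asarray(synthetic)

In the full algorithm this generation step is repeated at every boosting iteration with weights updated from the current hypothesis, which is what shifts the decision boundary toward the difficult instances described above.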

Journal ArticleDOI
TL;DR: This paper examines current Reference Architectures and the driving forces behind development of them to come to a collective conclusion on what a Reference Architecture should truly be.
Abstract: The concept of Reference Architectures is novel in the business world. However, many architects active in the creation of complex systems frequently use the term Reference Architecture. Yet, these experienced architects do not collectively have a consistent notion of what constitutes a Reference Architecture, what is the value of maintaining the Reference Architecture, what is the best approach to visualizing a Reference Architecture, what is the most appropriate level of abstraction, and how should an architect make use of the Reference Architecture in their work? This paper examines current Reference Architectures and the driving forces behind their development to come to a collective conclusion on what a Reference Architecture should truly be. It will be shown that a Reference Architecture captures the accumulated architectural knowledge of thousands of man-years of work. This knowledge ranges from why (market segmentation, value chain, customer key drivers, application), what (systems, key performance parameters, system interfaces, functionality, variability), to how (design views and diagrams, essential design patterns, main concepts). The purpose of the Reference Architecture is to provide guidance for future developments. The Reference Architecture incorporates the vision and strategy for the future. The Reference Architecture is a reference for the hundreds of teams related to ongoing developments. By providing this reference, all these teams have a shared baseline of why, what and how. It is the authors' goal that this paper will facilitate further research in the concepts and ideas presented herein.

Journal ArticleDOI
TL;DR: In this paper, the authors compare the representation capabilities of four rule modeling languages: Simple Rule Markup Language (SRML), the Semantic Web Rules Language (SWRL), the Production Rule Representation (PRR), and the Semantics of Business Vocabulary and Business Rules (SBVR) specification.

Journal ArticleDOI
TL;DR: The authors studied the factors related to the development of trust between pairs of coworkers (dyads) in a new product development team and found reciprocal effects for propensity to trust and trust in dyads.
Abstract: Trust between coworkers is critical to the success of organizations and teams. This is especially true for those who are geographically dispersed and who must interact virtually. The authors studied the factors related to the development of trust between pairs of coworkers (dyads) in a new product development team. Some of the members were colocated, and others worked virtually. Using the actor-partner interdependence model, the authors found reciprocal effects for propensity to trust and trust in dyads. They found that propensity has greater influence on trust for virtual dyads and that trust has less influence on organizational citizenship when partners are virtual. Trustworthiness was shown to fully mediate the influence of trusting predisposition on trust.

Journal ArticleDOI
TL;DR: The layer-by-layer design principles of poly(methacrylic acid) ultrathin hydrogel coatings that release antimicrobial agents (AmAs) in response to pH variations provide new opportunities to study the fundamental mechanisms of AmA-coating-bacteria interactions and develop a new class of clinically relevant antibacterial coatings for medical devices.

Journal ArticleDOI
TL;DR: The performance of the TS-MIMO radar is examined in terms of the output signal-to-interference-plus-noise ratio (SINR) of an adaptive beamformer in an interference and training limited environment, where it is shown analytically how the output SINR is affected by several key design parameters, including the size/number of the subapertures and the number of training signals.
Abstract: We present a transmit subaperturing (TS) approach for multiple-input multiple-output (MIMO) radars with co-located antennas. The proposed scheme divides the transmit array elements into multiple groups, each group forms a directional beam and modulates a distinct waveform, and all beams are steerable and point to the same direction. The resulting system is referred to as a TS-MIMO radar. A TS-MIMO radar is a tunable system that offers a continuum of operating modes from the phased-array radar, which achieves the maximum directional gain but the least interference rejection ability, to the omnidirectional transmission based MIMO radar, which can handle the largest number of interference sources but offers no directional gain. Tuning of the TS-MIMO system can be easily made by changing the configuration of the transmit subapertures, which provides a direct tradeoff between the directional gain and interference rejection power of the system. The performance of the TS-MIMO radar is examined in terms of the output signal-to-interference-plus-noise ratio (SINR) of an adaptive beamformer in an interference and training limited environment, where we show analytically how the output SINR is affected by several key design parameters, including the size/number of the subapertures and the number of training signals. Our results are verified by computer simulation and comparisons are made among various operating modes of the proposed TS-MIMO system.
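A minimal sketch of the transmit-subaperturing idea (our notation and normalization; the parameters are illustrative, not the paper's design values): split a uniform linear transmit array into K contiguous subapertures, steer every subaperture to the same direction, and let each carry its own waveform, so K = 1 recovers the phased-array mode and K = M the omnidirectional MIMO mode:

import numpy as np

def subaperture_weights(M, K, theta, d=0.5):
    """Transmit weights for a TS-MIMO uniform linear array.

    M: total transmit elements; K: number of subapertures (must divide M);
    theta: common steering direction (radians); d: element spacing in
    wavelengths. Returns a (K, M) matrix whose row k holds the normalized
    conjugate steering vector of subaperture k (zeros elsewhere), so each
    subaperture forms a directional beam toward theta and modulates its
    own waveform. K = 1 is the phased-array mode, K = M the MIMO mode.
    """
    assert M % K == 0
    L = M // K                                            # elements per subaperture
    n = np.arange(M)
    a = np.exp(1j * 2 * np.pi * d * n * np.sin(theta))    # full-array steering vector
    W = np.zeros((K, M), dtype=complex)
    for k in range(K):
        sl = slice(k * L, (k + 1) * L)
        W[k, sl] = a[sl].conj() / np.sqrt(L)              # per-subaperture beamforming weights
    return W

W = subaperture_weights(M=8, K=2, theta=np.deg2rad(20))
# Transmitted signal: x(t) = W.T @ s(t), with s(t) the K distinct waveforms.

Changing K slides the system along the continuum described in the abstract, trading transmit directional gain (larger subapertures) against the number of separable waveforms available for interference rejection (more subapertures).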

Journal ArticleDOI
TL;DR: In this paper, an integrated cubic phase function (ICPF) is introduced for the estimation and detection of linear frequency-modulated (LFM) signals, which extends the standard CPF to handle cases involving low signal-to-noise ratio (SNR) and multi-component LFM signals.
Abstract: In this paper, an integrated cubic phase function (ICPF) is introduced for the estimation and detection of linear frequency-modulated (LFM) signals. The ICPF extends the standard cubic phase function (CPF) to handle cases involving low signal-to-noise ratio (SNR) and multi-component LFM signals. The asymptotic mean squared error (MSE) of an ICPF-based estimator as well as the output SNR of an ICPF-based detector are derived in closed form and verified by computer simulation. Comparison with several existing approaches is also included, which shows that the ICPF serves as a good candidate for LFM signal analysis.
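For orientation, the standard discrete cubic phase function and the time integration described in the abstract can be written as follows (common CPF notation; the paper's exact normalization may differ):

\[
\mathrm{CPF}(n,\Omega)=\sum_{m} s(n+m)\,s(n-m)\,e^{-j\Omega m^{2}},
\qquad
\mathrm{ICPF}(\Omega)=\sum_{n}\bigl|\mathrm{CPF}(n,\Omega)\bigr|^{2}.
\]

For an LFM signal \(s(n)=A\,e^{j(a_0+a_1 n+a_2 n^{2})}\), the product \(s(n+m)s(n-m)\) has phase \(2a_0+2a_1 n+2a_2 n^{2}+2a_2 m^{2}\), so \(\mathrm{CPF}(n,\Omega)\) peaks at \(\Omega=2a_2\) for every n; summing \(|\mathrm{CPF}|^{2}\) over n accumulates these peaks, which is what improves robustness at low SNR and for multi-component LFM signals.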

Posted Content
TL;DR: A significant shift in IT priorities first captured in 2009 continues: IT organizations are working aggressively and closely with their business partners to identify opportunities to reduce costs and improve productivity across the company through IT initiatives.
Abstract: While the recession has officially been declared as ending during the past summer, the prolonged economic conundrum continues to pose new challenges to organizations around the world. The past year has shown some increase in IT investments, yet IT executives continue to proceed cautiously and predict incremental improvements in 2011; there is no anticipated dramatic return to the growth levels that preceded the recession. However, a significant shift in IT priorities first captured in 2009 continues: IT organizations are working aggressively and closely with their business partners to identify opportunities to reduce costs and improve productivity across the company through IT initiatives. This phenomenon is very different from previous recessions, where IT budgets were typically the first to be cut. Since its inception in 1980, the Society for Information Management (SIM) survey has helped IT leaders around the globe understand important issues and trends. This article presents the major findings based on survey responses from 172 U.S. organizations in mid-2010. The top five management concerns were:
1. Business productivity and cost reduction;
2. Business agility and speed to market;
3. IT and business alignment;
4. IT reliability and efficiency;
5. Business process re-engineering.
This is the fifth in a series of MISQE-published reports based on a SIM membership survey facilitated by the lead author. As in previous reports, this article also presents findings on key application and technology developments, and various aspects of the IT organization. In addition, similarities and differences between the U.S. results and those from similar samples of European and Asian/Australian organizations provide a more global perspective.

Journal Article
TL;DR: This work addresses instance-based learning from a perceptual organization standpoint and presents methods for dimensionality estimation, manifold learning and function approximation, which employs a novel formulation of tensor voting, which allows an N-D implementation.
Abstract: We address instance-based learning from a perceptual organization standpoint and present methods for dimensionality estimation, manifold learning and function approximation. Under our approach, manifolds in high-dimensional spaces are inferred by estimating geometric relationships among the input instances. Unlike conventional manifold learning, we do not perform dimensionality reduction, but instead perform all operations in the original input space. For this purpose we employ a novel formulation of tensor voting, which allows an N-D implementation. Tensor voting is a perceptual organization framework that has mostly been applied to computer vision problems. Analyzing the estimated local structure at the inputs, we are able to obtain reliable dimensionality estimates at each instance, instead of a global estimate for the entire data set. Moreover, these local dimensionality and structure estimates enable us to measure geodesic distances and perform nonlinear interpolation for data sets with varying density, outliers, perturbation and intersections, that cannot be handled by state-of-the-art methods. Quantitative results on the estimation of local manifold structure using ground truth data are presented. In addition, we compare our approach with several leading methods for manifold learning at the task of measuring geodesic distances. Finally, we show competitive function approximation results on real data.
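Tensor voting itself is beyond a short sketch; as a deliberately simplified stand-in (local PCA, not the tensor-voting formulation used in the paper), the per-instance dimensionality estimation described above can be illustrated by examining the eigenvalue spectrum of each point's neighborhood covariance:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_dimensionality(X, k=15, energy=0.95):
    """Per-instance intrinsic dimensionality estimate via local PCA.

    (A much simpler stand-in for the tensor-voting analysis in the paper.)
    For each point, take its k nearest neighbors, compute the covariance of
    the centered neighborhood, and report how many leading eigenvalues are
    needed to capture the given fraction of the local variance.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    dims = np.empty(len(X), dtype=int)
    for i, neigh in enumerate(idx[:, 1:]):
        P = X[neigh] - X[neigh].mean(axis=0)
        eig = np.sort(np.linalg.eigvalsh(P.T @ P))[::-1]
        frac = np.cumsum(eig) / eig.sum()
        dims[i] = np.searchsorted(frac, energy) + 1
    return dims

# A noisy 1-D curve embedded in 3-D should yield estimates of ~1 everywhere.
t = np.linspace(0, 4 * np.pi, 400)
X = np.c_[np.cos(t), np.sin(t), 0.2 * t] + 0.01 * np.random.default_rng(2).standard_normal((400, 3))
print(np.bincount(local_dimensionality(X)))

Unlike this crude per-point PCA, the paper's tensor-voting formulation propagates structure information between neighbors, which is what makes its local dimensionality and orientation estimates robust to outliers, varying density, and intersections.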

Journal ArticleDOI
TL;DR: Possible future standardization topics for IEEE SCC41 are outlined, in the framework of other related standardization activities, and open research issues that present future challenges for the standardization community are discussed.
Abstract: Spectrum crowding, spectrum management, quality of service, and user support are the topics of vigorous research in the cognitive and dynamic spectrum access network communities. As research matures, standardization provides a bridge between research results, implementation, and widespread deployment of such networks. This article reports recent developments within the IEEE Standardization Coordinating Committee 41, "Dynamic Spectrum Access Networks." It outlines possible future standardization topics for IEEE SCC41, in the framework of other related standardization activities, and discusses open research issues that present future challenges for the standardization community.

Journal ArticleDOI
TL;DR: In this article, the quantum entanglement dynamics of two spins in the presence of classical Ornstein-Uhlenbeck noise were investigated, and exact solutions for evolution dynamics were obtained.

Journal ArticleDOI
TL;DR: Even though project management tools and techniques (PMTT) have been commonly used by project managers, research still has not adequately investigated whether their use contributes, as mentioned in this paper.
Abstract: Even though project management tools and techniques (PMTT) have been commonly used by project managers, research still has not adequately investigated whether their use contributes...

Journal ArticleDOI
TL;DR: It is shown that a significant improvement is achieved by the proposed cognitive relay scheme in terms of the overall outage probability, and that a minimized overall outage probability can be achieved through a tradeoff in determining the time durations for the spectrum hole detection and data transmission phases.
Abstract: In cognitive radio networks, a cognitive source node requires two essential phases to complete a cognitive transmission process: the phase of spectrum sensing with a certain time duration (also referred to as spectrum sensing overhead) to detect a spectrum hole and the phase of data transmission through the detected spectrum hole. In this paper, we focus on the outage probability analysis of cognitive transmissions by considering the two phases jointly to examine the impact of spectrum sensing overhead on system performance. A closed-form expression of an overall outage probability that accounts for both the probability of no spectrum hole detected and the probability of a channel outage is derived for cognitive transmissions over Rayleigh fading channels. We further conduct an asymptotic outage analysis in high signal-to-noise ratio regions and obtain an optimal spectrum sensing overhead solution to minimize the asymptotic outage probability. Besides, numerical results show that a minimized overall outage probability can be achieved through a tradeoff in determining the time durations for the spectrum hole detection and data transmission phases. In this paper, we also investigate the use of cognitive relay to improve the outage performance of cognitive transmissions. We show that a significant improvement is achieved by the proposed cognitive relay scheme in terms of the overall outage probability.
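The closed-form results are not given in the abstract; the decomposition it describes can be sketched as follows (our notation, hedged): with frame length T, sensing-phase duration τ (the spectrum sensing overhead), and spectrum-hole detection probability P_h(τ), the overall outage combines the no-hole event with a channel outage at a rate requirement inflated by the lost transmission time,

\[
P_{\mathrm{out}}(\tau)=\bigl[1-P_h(\tau)\bigr]+P_h(\tau)\,
\Pr\!\Bigl[\tfrac{T-\tau}{T}\log_2\bigl(1+\gamma\bigr)<R\Bigr],
\]

so the optimal overhead τ* balances a higher detection probability P_h(τ) against the shrinking data-transmission fraction (T − τ)/T, which is the tradeoff the paper minimizes and which the cognitive relay further improves.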

Journal ArticleDOI
TL;DR: This work has established a convolutionless stochastic Schrödinger equation called the time-local quantum state diffusion (QSD) equation without any approximations, in particular, without Markov approximation.
Abstract: The non-Markovian dynamics of a three-level quantum system coupled to a bosonic environment is a difficult problem due to the lack of an exact dynamic equation such as a master equation. We present for the first time an exact quantum trajectory approach to a dissipative three-level model. We have established a convolutionless stochastic Schrödinger equation called the time-local quantum state diffusion (QSD) equation without any approximations, in particular, without Markov approximation. Our exact time-local QSD equation opens a new avenue for exploring quantum dynamics for a higher dimensional quantum system coupled to a non-Markovian environment.
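The abstract does not reproduce the equation itself; for orientation, the general time-local (convolutionless) QSD equation for a system coupled through operator L to a bosonic bath with correlation function α(t, s) takes the form (generic QSD notation, not the paper's specific three-level operators):

\[
\frac{\partial}{\partial t}\,\psi_t(z^{*})=\Bigl[-iH + L\,z_t^{*} - L^{\dagger}\bar{O}(t,z^{*})\Bigr]\psi_t(z^{*}),
\qquad
\bar{O}(t,z^{*})=\int_{0}^{t}\alpha(t,s)\,O(t,s,z^{*})\,ds,
\]

where \(z_t^{*}\) is a colored complex Gaussian process with correlation function α(t, s) and the O-operator is defined through the functional derivative \(\delta\psi_t/\delta z_s^{*}=O(t,s,z^{*})\psi_t\); the Markov limit α(t, s) → γδ(t − s) recovers the standard QSD equation, while the abstract states that the paper obtains the exact time-local equation for the dissipative three-level model without that approximation.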

Journal ArticleDOI
TL;DR: IMM requirements are developed through review of aerospace and defense related literature, and these requirements are applied to currently existing integration maturity metrics, and the proposed Integration Readiness Level (IRL).
Abstract: In order to optimize the process of complex system integration, it is necessary to first improve the management of the process. This can be accomplished through the use of a generally understood metric. One such metric is Technology Readiness Level (TRL), which is used to determine technology maturity, but does not address integration maturity. Integration Maturity Metric (IMM) requirements are developed through review of aerospace and defense related literature. These requirements are applied to currently existing integration maturity metrics, and the proposed Integration Readiness Level (IRL). IRL is then refined to fully meet these requirements, and applied to three aerospace case studies, along with the other identified metrics, to compare and contrast the results obtained.

Posted Content
TL;DR: It is suggested that business simulation games are an effective way to engage students in Decision Support Systems (DSS) and their experience with a game designed to channel students into a stream of entrepreneurial decision-making is discussed.
Abstract: This study discusses business simulation games as teaching tools in Information Systems (IS). The discipline's traditional teaching methods, while appropriate for the dissemination of foundational knowledge, do not provide the optimal platform for students to implement IS concepts. We suggest that business simulation games are an effective way to engage students in Decision Support Systems (DSS). Specifically, we discuss our experience with a game designed to channel students into a stream of entrepreneurial decision-making. Our findings indicate that business simulation games represent a sufficiently novel approach to teaching and learning.

Proceedings ArticleDOI
06 Dec 2010
TL;DR: It is shown that once a single peer-to-peer (P2P) bot is detected in a network, it may be possible to efficiently identify other members of the same botnet in the same network even before they exhibit any overtly malicious behavior.
Abstract: In this work we show that once a single peer-to-peer (P2P) bot is detected in a network, it may be possible to efficiently identify other members of the same botnet in the same network even before they exhibit any overtly malicious behavior. Detection is based on an analysis of connections made by the hosts in the network. It turns out that if bots select their peers randomly and independently (i.e. unstructured topology), any given pair of P2P bots in a network communicate with at least one mutual peer outside the network with a surprisingly high probability. This, along with the low probability of any other host communicating with this mutual peer, allows us to link local nodes within a P2P botnet together. We propose a simple method to identify potential members of an unstructured P2P botnet in a network starting from a known peer. We formulate the problem as a graph problem and mathematically analyze a solution using an iterative algorithm. The proposed scheme is simple and requires only flow records captured at network borders. We analyze the efficacy of the proposed scheme using real botnet data, including data obtained from both observing and crawling the Nugache botnet.
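A minimal sketch of the mutual-contact idea from border flow records (a simplification of the paper's iterative graph algorithm; the host addresses and threshold below are illustrative): starting from one known bot, score every other internal host by the external peers it shares with the current bot set and iteratively add hosts whose shared-peer count crosses the threshold:

from collections import defaultdict

def find_candidate_bots(flows, seed_bot, threshold=2, max_iter=10):
    """flows: iterable of (internal_host, external_peer) pairs taken from
    border flow records. seed_bot: a known P2P bot inside the network.
    Iteratively flags internal hosts sharing at least `threshold` external
    peers with the current bot set (a simplified version of the paper's
    iterative mutual-contacts algorithm)."""
    peers = defaultdict(set)                 # internal host -> set of external peers
    for host, ext in flows:
        peers[host].add(ext)

    bots = {seed_bot}
    for _ in range(max_iter):
        bot_peers = set().union(*(peers[b] for b in bots))
        new = {h for h, p in peers.items()
               if h not in bots and len(p & bot_peers) >= threshold}
        if not new:
            break
        bots |= new
    return bots - {seed_bot}

# Example: "10.0.0.5" and "10.0.0.9" contact some of the same external peers
# as the known bot "10.0.0.2", so they are flagged; "10.0.0.7" is not.
flows = [("10.0.0.2", "a"), ("10.0.0.2", "b"), ("10.0.0.2", "c"),
         ("10.0.0.5", "a"), ("10.0.0.5", "b"), ("10.0.0.5", "x"),
         ("10.0.0.9", "b"), ("10.0.0.9", "c"),
         ("10.0.0.7", "y"), ("10.0.0.7", "z")]
print(find_candidate_bots(flows, "10.0.0.2"))

The scheme works because, as the paper argues, unstructured P2P bots picking peers independently at random share at least one external peer with surprisingly high probability, while benign hosts rarely contact those same peers.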

Journal ArticleDOI
TL;DR: Transmission electron microscopy (TEM) of polymers involves the problem definition and methodologies associated with the microscopy of both inorganic and biological materials but cannot be categorized within either of these fields alone as discussed by the authors.
Abstract: Transmission electron microscopy (TEM) of polymers involves the problem definition and methodologies associated with the microscopy of both inorganic and biological materials but cannot be categorized within either of these fields alone. On the one hand, like other synthetic materials, polymers offer the ability to control properties through synthesis and processing, and TEM is a powerful method with which to provide information within the synthesis–structure–property paradigm of materials science and engineering. The well-established techniques of bright/dark-field imaging, electron diffraction, high-resolution imaging, and analytical microscopies are thus all used to study polymers. On the other hand, the electron–specimen interactions are more like those in biological systems. Synthetic polymers and biological materials consist largely of light elements whose elastic interactions with energetic electrons are relatively weak. Generating image contrast can thus be a challenge in polymer TEM. The inelasti...

Journal ArticleDOI
TL;DR: The empirical findings show that the validity of the widely accepted project organization typology is in question and develop an alternative taxonomy of project management structures that encompasses five structural types, differentiated by the entities managing them.
Abstract: This paper addresses the question of how projects are organized and how these management structures impact project success. Despite its widely accepted managerial importance, empirical studies could not provide significant evidence of a relationship between implemented management structures and project success. A major problem in finding meaningful empirical evidence is the conceptualization of the structure measure, which is derived from a typologist's perspective. In this study, we follow the taxonomists' perspective and empirically develop an alternative taxonomy of project management structures. We empirically compare both approaches, by using two different samples, collected in the United States and Germany, including together over 600 projects. Our empirical findings show that the validity of the widely accepted project organization typology is in question. The use of cluster analyses reveals an alternative taxonomy that encompasses five structural types, differentiated by the entities managing them: project coordinator, supervised project coordinator, autonomous project manager, supervised project manager, and autonomous functional project manager. The results strongly support the widely accepted proposition of a relationship between project organization and project success. The emerging taxonomy of project organization configurations enriches the theoretical and conceptual discussions of organizing projects and unravels the multiple aspects involved in organizing the execution of projects.

Journal ArticleDOI
TL;DR: This article is the first report of net accumulative SERS from full-length Ag-nanoparticle-functionalized PCFs; the authors recommend forward scattering, recently described in a brief study, as a more suitable detection mode to unambiguously assess the SERS-active nature of PCF.
Abstract: The integration of microfluidics with photonics on a single platform using well-established planar device technology has led to the emergence of the exciting field of optofluidics. As both a light guide and a liquid/gas transmission cell, photonic crystal fiber (PCF, also termed microstructured or holey fiber) synergistically combines microfluidics and optics in a single fiber with unprecedented light path length not readily achievable using planar optofluidics. PCF, an inherent optofluidics platform, offers excellent prospects for a multitude of scientific and technological applications. The accessibility to the air channels of PCF has also opened up the possibility for functionalization of the channel surfaces (silica in nature) at the molecular and the nanometer scales, in particular to impart the functionality of surface-enhanced Raman scattering (SERS) in PCF for sensing and detection. SERS, an ever advancing research field since its discovery in the 1970s, has tremendous potential for label-free molecular identification at trace or even single-molecule levels due to up to 10 increase in the Raman scattering cross-section of a molecule in the presence of Ag or Au nanostructures. Seminal work has been reported on the development of 1D and 2D SERS substrates for a variety of sensing applications. The use of 3D geometry, i.e., substrates obtained by the deposition of noble-metal nanoparticles onto porous silicon or porous aluminum membrane, offered additional advantages of increased SERS intensity due to increased SERS probing area, as well as the membrane waveguiding properties. Specifically, several orders of magnitude higher SERS intensity, affording pico- or zeptogram-level detection of explosives, has been demonstrated with porous alumina membranes containing 60-mm-long nanochannels, as compared to a solid planar substrate. SERS-active PCF optofluidics, as a special fiber-optic SERS platform, offers easy system integration for in situ flow-through detection, and, more importantly, much longer light interaction length with analyte, thus promising to open a new vista in chemical/biological sensing, medical diagnosis, and process monitoring, especially in geometrically confined or sampling volume-limited systems. Various attempts have been made over the last several years to integrate SERS with the PCF platform by incorporating Ag or Au nanostructures, albeit inside a very limited segment (typically a few centimeters) of the fiber air channels. Examples include deposition of Ag particulates and thin films by chemical vapor deposition at high pressure or coating of Ag and Au nanoparticles using colloidal solutions driven into the microscopic air channels via capillary force, with backscattering as the typical data acquisition mode. Building uniform SERS functionality throughout the length of the PCF while preserving its light-guide characteristics has remained elusive, as measured Raman intensity is a combination of the accumulative gain from Raman scattering and the continuous loss from nanoparticle-induced light attenuation over the path length. As a result, we have recommended, and recently described in a brief study, forward scattering as a more suitable detection mode to unambiguously assess the SERS-active nature of PCF. To the best of our knowledge, this article is the first report of net accumulative SERS from full-length Ag-nanoparticle-functionalized PCFs.
The finding has been enabled by a fine control of the coverage density of Ag nanoparticles and studies of a competitive interplay between SERS gain and light attenuation in the Raman intensity with PCFs of varying length. Using two different types of PCF, i.e., solid-core PCF and hollow-core PCF, we show that Raman gain in PCF prevails at relatively low nanoparticle coverage density (below 0.5 particlemm), allowing full benefit of accumulation of Raman intensity along the fiber length for robust SERS sensing and enhanced measurement sensitivity. Light attenuation dominates at higher nanoparticle coverage density, however, diminishing the path-length benefit. Shown in Figure 1 are cross-sectional scanning electron microscopy (SEM) images of the solid-core PCF and hollow-core PCF used in this work. Also depicted in the figure is the light-guide process in the corresponding PCFs that contain immobilized Ag nanoparticles and are filled with aqueous solution throughout the cladding air channels for solid-core PCF and in the center air core only for hollow-core PCF. Note that light is guided via total internal reflection in both cases. The presence of the aqueous solution in the cladding air channels does not fundamentally change the contrast of the higher index silica core and the lower index liquid-silica cladding in the solid-core PCF. The selective