
Showing papers by "Indian Institute of Technology Bombay" published in 2012


Proceedings ArticleDOI
22 Aug 2012
TL;DR: Zee is presented -- a system that makes the calibration zero-effort, by enabling training data to be crowdsourced without any explicit effort on the part of users.
Abstract: Radio Frequency (RF) fingerprinting, based on WiFi or cellular signals, has been a popular approach to indoor localization. However, its adoption in the real world has been stymied by the need for site-specific calibration, i.e., the creation of a training data set comprising WiFi measurements at known locations in the space of interest. While efforts have been made to reduce this calibration effort using modeling, the need for measurements from known locations still remains a bottleneck. In this paper, we present Zee -- a system that makes the calibration zero-effort, by enabling training data to be crowdsourced without any explicit effort on the part of users. Zee leverages the inertial sensors (e.g., accelerometer, compass, gyroscope) present in mobile devices such as smartphones carried by users, to track them as they traverse an indoor environment, while simultaneously performing WiFi scans. Zee is designed to run in the background on a device without requiring any explicit user participation. The only site-specific input that Zee depends on is a map showing the pathways (e.g., hallways) and barriers (e.g., walls). A significant challenge that Zee surmounts is to track users without any a priori, user-specific knowledge such as the user's initial location, stride length, or phone placement. Zee employs a suite of novel techniques to infer location over time: (a) placement-independent step counting and orientation estimation, (b) augmented particle filtering to simultaneously estimate location and user-specific walk characteristics such as the stride length, (c) back propagation to go back and improve the accuracy of localization in the past, and (d) WiFi-based particle initialization to enable faster convergence. We present an evaluation of Zee in a large office building.
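
The augmented particle filter in (b) can be sketched as follows -- a minimal, illustrative Python version in which each particle carries a position and a candidate stride length, and particles that walk through a wall are discarded and resampled. The `walls` collision test and the stride-noise scale are assumptions for the sketch, not details from the paper.

```python
import random

def step(particles, heading, walls):
    """Propagate particles one detected step; each particle is (x, y, stride).

    `heading` is a unit vector from orientation estimation; `walls` is a
    hypothetical collision test walls(x0, y0, x1, y1) -> True if blocked.
    Particles that cross a barrier are killed and replaced by resampling,
    which is how the map alone gradually localizes the user.
    """
    moved = []
    for (x, y, stride) in particles:
        # jitter stride so the filter can converge on the user's true stride
        s = stride + random.gauss(0, 0.01)
        nx = x + s * heading[0]
        ny = y + s * heading[1]
        if not walls(x, y, nx, ny):
            moved.append((nx, ny, s))
    if not moved:                      # all particles blocked: keep old set
        return particles
    # resample back to the original population size
    return [random.choice(moved) for _ in particles]
```

Run over a sequence of detected steps, the surviving stride values concentrate around the walker's true stride length, which is the "augmented" part of the filter.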

1,114 citations


Journal ArticleDOI
01 Jan 2012
TL;DR: A neuro-fuzzy system is applied to predict the rock Young's modulus, overcoming the limitations of ANN and fuzzy logic used alone; the high performance of the predictive neuro-fuzzy system makes it suitable for prediction of complex rock parameters.
Abstract: The engineering properties of rocks have the most vital role in planning of rock excavation and construction for optimum utilization of earth resources with greater safety and least damage to surroundings. The design and construction of structures is influenced by the physico-mechanical properties of the rock mass. Young's modulus provides insight about the magnitude and characteristics of rock mass deformation due to changes in the stress field. The determination of Young's modulus in the laboratory is very time consuming and costly. Therefore, basic rock properties like point load, density and water absorption have been used to predict the Young's modulus. Point load, density and water absorption can be easily determined in the field as well as the laboratory and are pertinent properties to characterize a rock mass. The artificial neural network (ANN), fuzzy inference system (FIS) and neuro-fuzzy approaches are promising techniques which have proven to be very reliable in recent years. In the present study, a neuro-fuzzy system is applied to predict the rock Young's modulus to overcome the limitations of ANN and fuzzy logic. A total of 85 datasets were used for training the network and 10 datasets for testing and validation of the network rules. The network performance indices correlation coefficient, mean absolute percentage error (MAPE), root mean square error (RMSE), and variance account for (VAF) are found to be 0.6643, 7.583, 6.799, and 91.95, respectively, demonstrating the high performance of the predictive neuro-fuzzy system for prediction of complex rock parameters.
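
The performance indices quoted above have standard definitions; a small sketch, using the common formulation VAF = (1 - var(y - ŷ)/var(y)) × 100, which may differ in detail from the authors' implementation:

```python
import math

def mape(y, yhat):
    """Mean absolute percentage error (assumes no zero targets)."""
    return 100.0 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, yhat))

def rmse(y, yhat):
    """Root mean square error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

def vaf(y, yhat):
    """Variance account for, in percent."""
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    resid = [a - b for a, b in zip(y, yhat)]
    return (1.0 - var(resid) / var(y)) * 100.0
```

A perfect predictor gives MAPE = 0, RMSE = 0, and VAF = 100, matching the scale of the figures reported in the abstract.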

339 citations


Journal ArticleDOI
TL;DR: In this paper, the authors use extreme value theory to examine trends in Indian rainfall over the past half century in the context of long-term, low-frequency variability, and show that when generalized extreme value theory is applied to annual maximum rainfall over India, no statistically significant spatially uniform trends are observed, in agreement with previous studies using different approaches.
Abstract: Future changes in the Indian monsoon could affect millions of people, yet even the ways in which it might have changed over recent years remain uncertain. Statistical analysis indicates that during the second half of the twentieth century there were no spatially uniform changes in the frequency or intensity of heavy rainfall events over India, but there was an increase in the spatial variability of these characteristics. Recent studies disagree on how rainfall extremes over India have changed in space and time over the past half century [1,2,3,4], as well as on whether the changes observed are due to global warming [5,6] or regional urbanization [7]. Although a uniform and consistent decrease in moderate rainfall has been reported [1,3], a lack of agreement about trends in heavy rainfall may be due in part to differences in the characterization and spatial averaging of extremes. Here we use extreme value theory [8,9,10,11,12,13,14,15] to examine trends in Indian rainfall over the past half century in the context of long-term, low-frequency variability. We show that when generalized extreme value theory [8,16,17,18] is applied to annual maximum rainfall over India, no statistically significant spatially uniform trends are observed, in agreement with previous studies using different approaches [2,3,4]. Furthermore, our space-time regression analysis of the return levels points to increasing spatial variability of rainfall extremes over India. Our findings highlight the need for systematic examination of global versus regional drivers of trends in Indian rainfall extremes, and may help to inform flood hazard preparedness and water resource management in the region.
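
The return levels analyzed above follow in closed form from a fitted generalized extreme value (GEV) distribution; a sketch of that standard formula (the parameter values in the test are illustrative, not the paper's estimates):

```python
import math

def gev_return_level(mu, sigma, xi, T):
    """T-year return level of a GEV(mu, sigma, xi) fitted to annual maxima.

    Uses z_T = mu + (sigma/xi) * ((-ln(1 - 1/T))**(-xi) - 1) for xi != 0,
    falling back to the Gumbel limit mu - sigma * ln(-ln(1 - 1/T)) as xi -> 0.
    """
    y = -math.log(1.0 - 1.0 / T)       # -log of the non-exceedance probability
    if abs(xi) < 1e-9:                 # Gumbel limit
        return mu - sigma * math.log(y)
    return mu + sigma / xi * (y ** (-xi) - 1.0)
```

Comparing, say, 10-year and 100-year return levels fitted over sliding windows is one way a trend in extremes can be expressed, which is the spirit of the space-time regression on return levels described above.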

264 citations


Journal ArticleDOI
Betty Abelev, Jaroslav Adam, Dagmar Adamová, Andrew Marshall Adare, and 999 more authors (83 institutions)
TL;DR: The ALICE experiment has measured the inclusive J/psi production in Pb-Pb collisions at root s(NN) = 2.76 TeV down to zero transverse momentum in the rapidity range 2.5 < y < 4.
Abstract: The ALICE experiment has measured the inclusive J/psi production in Pb-Pb collisions at root s(NN) = 2.76 TeV down to zero transverse momentum in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/psi yield in Pb-Pb is observed with respect to the one measured in pp collisions scaled by the number of binary nucleon-nucleon collisions. The nuclear modification factor, integrated over the 0%-80% most central collisions, is 0.545 +/- 0.032 (stat) +/- 0.083 (syst) and does not exhibit a significant dependence on the collision centrality. These features appear significantly different from measurements at lower collision energies. Models including J/psi production from charm quarks in a deconfined partonic phase can describe our data.

238 citations


Journal ArticleDOI
TL;DR: This article outlines the components required to use virtual machine migration for dynamic resource management in the virtualized cloud environment, and presents categorization and details of migration heuristics aimed at reducing server sprawl, minimizing power consumption, balancing load across physical machines, and so on.
Abstract: Virtualization is a key concept in enabling the "computing-as-a-service" vision of cloud-based solutions. Virtual machine related features such as flexible resource provisioning, and isolation and migration of machine state have improved efficiency of resource usage and dynamic resource provisioning capabilities. Live virtual machine migration transfers the "state" of a virtual machine from one physical machine to another, and can mitigate overload conditions and enable uninterrupted maintenance activities. The focus of this article is to present the details of virtual machine migration techniques and their usage toward dynamic resource management in virtualized environments. We outline the components required to use virtual machine migration for dynamic resource management in the virtualized cloud environment. We present categorization and details of migration heuristics aimed at reducing server sprawl, minimizing power consumption, balancing load across physical machines, and so on. We conclude with a discussion of open research problems in the area.
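
As a concrete illustration of one class of heuristic surveyed (hotspot mitigation by migration), here is a toy greedy planner; the 0.8 load threshold and the "move the smallest VM first" rule are assumptions for the sketch, not a heuristic taken from the article.

```python
def plan_migrations(hosts, high=0.8):
    """Greedy hotspot-mitigation sketch.

    hosts: {host: {vm: cpu_demand}} with capacities normalized to 1.0.
    Repeatedly moves the smallest VM off any overloaded host to the
    least-loaded host that can accept it. Returns [(vm, src, dst)].
    """
    load = {h: sum(vms.values()) for h, vms in hosts.items()}
    plan = []
    changed = True
    while changed:
        changed = False
        for src in sorted(hosts, key=lambda h: -load[h]):
            if load[src] <= high:
                continue
            vm = min(hosts[src], key=hosts[src].get)      # cheapest to move
            demand = hosts[src][vm]
            dst = min((h for h in hosts if h != src and load[h] + demand <= high),
                      key=lambda h: load[h], default=None)
            if dst is None:                               # nowhere to put it
                continue
            del hosts[src][vm]
            hosts[dst][vm] = demand
            load[src] -= demand
            load[dst] += demand
            plan.append((vm, src, dst))
            changed = True
    return plan
```

Consolidation (anti-sprawl) heuristics invert the objective, draining lightly loaded hosts so they can be powered down; the same bookkeeping applies.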

235 citations


Proceedings ArticleDOI
13 Feb 2012
TL;DR: Wolverine is proposed -- a non-intrusive method for detecting traffic and road conditions that uses the accelerometer, GPS and magnetometer sensors present on smartphones, extending a prior study to improve the detection algorithm.
Abstract: Monitoring road and traffic conditions in a city is a problem widely studied. Several methods have been proposed towards addressing this problem. Several proposed techniques require dedicated hardware such as GPS devices and accelerometers in vehicles [7][15][8] or cameras on roadside and near traffic signals [13]. All such methods are expensive in terms of monetary cost and human effort required. We propose Wolverine -- a non-intrusive method that uses sensors present on smartphones. We extend a prior study [12] to improve the algorithm based on using accelerometer, GPS and magnetometer sensor readings for traffic and road conditions detection. We are specifically interested in identifying braking events - frequent braking indicates congested traffic conditions - and bumps on the roads to characterize the type of road. We evaluate the effectiveness of the proposed method based on experiments conducted on the roads in Mumbai, with promising results.
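
A minimal sketch of the kind of threshold rule this approach suggests for the two event types of interest; the thresholds below are illustrative only, since the paper's actual classifier and its calibration are not given in this abstract.

```python
def detect_events(accel, brake_thresh=-3.0, bump_thresh=4.0):
    """Flag braking and bump events in smartphone accelerometer data.

    accel: list of (longitudinal, vertical) readings in m/s^2, with gravity
    already removed and axes already reoriented to the vehicle frame.
    Thresholds are illustrative assumptions. Returns (brake_idxs, bump_idxs).
    """
    brakes = [i for i, (lon, _) in enumerate(accel) if lon < brake_thresh]
    bumps = [i for i, (_, ver) in enumerate(accel) if abs(ver) > bump_thresh]
    return brakes, bumps
```

Frequent braking events then indicate congestion, while clustered bump events indicate a rough road segment, mirroring the two signals the paper targets.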

229 citations


Journal ArticleDOI
K. Aamodt, Betty Abelev, A. Abrahantes Quintana, Dagmar Adamová, and 931 more authors (76 institutions)
TL;DR: In this paper, the shape of the pair correlation distributions is studied in a variety of collision centrality classes between 0 and 50% of the total hadronic cross section for particles in the pseudorapidity interval |eta| < 1.0 in Pb-Pb collisions at root s(NN) = 2.76 TeV, for transverse momenta above 0.25 GeV/c.

210 citations


Journal ArticleDOI
TL;DR: It is shown that it is difficult to select the most appropriate wastewater treatment alternative under the "no scenario" condition, and the decision-making methodology presented in this paper effectively identifies the most appropriate wastewater treatment alternative for each of the scenarios.

203 citations


Journal ArticleDOI
TL;DR: Based on extensive simulation results of the system developed using Java Agent DEvelopment framework, it has been found that multi-agent based demand response is successful in reducing the system peak in addition to cost benefit for the customers with high priority index.
Abstract: In this paper, an agent based intelligent energy management system is proposed to facilitate power trading among microgrids and allow customers to participate in demand response. The proposed intelligence uses demand response, diversity in the electricity consumption patterns of the customers, and the availability of power from distributed generators as the vital means of managing power in the system. A new priority index is proposed for customers participating in the market based on the frequency and size of load participating in demand response. In order to validate the proposed method, a case study with two interconnected microgrids is simulated. Based on extensive simulation results of the system developed using the Java Agent DEvelopment framework (JADE), it has been found that multi-agent based demand response is successful in reducing the system peak in addition to providing a cost benefit for the customers with a high priority index.
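
The abstract does not give the exact form of the proposed priority index; as a hypothetical placeholder only, a normalized blend of the two quantities it names (participation frequency and size of participating load) could look like:

```python
def priority_index(freq, size, total_freq, total_size, w=0.5):
    """Hypothetical stand-in for the paper's priority index (exact form not
    given in the abstract): a weighted blend of how often (freq) and how much
    (size, e.g. kW) a customer's load participates in demand response,
    normalized by system-wide totals so the index lies in [0, 1]."""
    return w * freq / total_freq + (1 - w) * size / total_size
```

Customers with a higher index would then be the ones for whom the simulation reports the largest cost benefit.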

202 citations


Journal ArticleDOI
TL;DR: In this article, a reliability-based analysis of the support system of an underground tunnel in soil is presented, with the response studied in terms of thrust, moment and shear forces in the lining.
Abstract: Underground openings and excavations are increasingly being used for civilian and strategic purposes all over the world. Recent earthquakes and resulting damage have brought into focus and raised the awareness for aseismic design and construction. In addition, underground tunnels, particularly, have distinct seismic behaviour due to their complete enclosure in soil or rock and their significant length. Therefore, the seismic response of tunnel support systems warrants closer attention. The geological settings in which they are placed are often difficult to describe due to limited site investigation data and vast spatial variability. Therefore, the parameters which govern the design are many and their variabilities cannot be ignored. These real conditions of variability can only be addressed through reliability-based analysis and design. The problem addressed here is one of reliability-based analysis of the support system of an underground tunnel in soil. Issues like the description of the interaction between the tunnel lining and the surrounding medium, the type of limit state that would be appropriate, the nonavailability of a closed form performance function and the advantages of the response surface method [RSM] are looked into. Both static and seismic environments with random variability in the material properties are studied here. Support seismic response is studied in terms of thrust, moment and shear forces in the lining. Interactive analysis using the finite element method [FEM], combined with RSM and the Hasofer-Lind reliability concept to assess the performance of the tunnel support, has proven useful under real field situations.
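
For orientation, in the special case of a linear limit state g = R - S with independent normal capacity R and demand S, the Hasofer-Lind reliability index reduces to a closed form; a sketch of that textbook case (the tunnel problem itself needs FEM plus RSM precisely because its performance function is not closed-form):

```python
import math

def beta_linear(mu_r, sig_r, mu_s, sig_s):
    """Hasofer-Lind index for g = R - S with independent normal
    capacity R and demand S; closed form in this special case."""
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def failure_prob(beta):
    """Pf = Phi(-beta), written via the complementary error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2.0))
```

With an implicit performance function, RSM fits a polynomial surrogate to FEM evaluations of g and then applies the same index to the surrogate.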

200 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examine the relationship between a customer's experience of product returns and subsequent shopping behavior, and demonstrate that the returns management process, rather than being regarded as an afterthought to the production and deployment of goods, can significantly and positively influence repurchase behavior.

Journal ArticleDOI
TL;DR: In this paper, the authors evaluate the impact of band-to-band tunneling on the characteristics of n-channel junctionless transistors (JLTs) and present guidelines to optimize the device for high on-tooff current ratio.
Abstract: We evaluate the impact of band-to-band tunneling (BTBT) on the characteristics of n-channel junctionless transistors (JLTs) A JLT that has a heavily doped channel, which is fully depleted in the off state, results in a significant band overlap between the channel and drain regions This overlap leads to a large BTBT of electrons from the channel to the drain in n-channel JLTs This BTBT leads to a nonnegligible increase in the off-state leakage current, which needs to be understood and alleviated In the case of n-channel JLTs, tunneling of electrons from the valence band of the channel to the conduction band of the drain leaves behind holes in the channel, which would raise the channel potential This triggers a parasitic bipolar junction transistor formed by the source, channel, and drain regions induced in a JLT in the off state Tunneling current is observed to be a strong function of the silicon body thickness and doping of a JLT We present guidelines to optimize the device for high on-to-off current ratio Finally, we compare the off-state leakage of bulk JLTs with that of silicon-on-insulator JLTs

Journal ArticleDOI
TL;DR: In this paper, an attempt has been made to assess, prognose, and observe the dynamism of soil erosion by the universal soil loss equation (USLE) method at Penang Island, Malaysia.
Abstract: In this paper, an attempt has been made to assess, prognose, and observe the dynamism of soil erosion by the universal soil loss equation (USLE) method at Penang Island, Malaysia. Multi-source (map-, space- and ground-based) datasets were used to obtain both static and dynamic factors of USLE, and an integrated analysis was carried out in the raster format of GIS. A landslide location map was generated on the basis of image element interpretation from aerial photos, satellite data and field observations and was used to validate soil erosion intensity in the study area. Further, a statistical frequency ratio analysis was carried out in the study area for correlation purposes. The results of the statistical correlation showed a satisfactory agreement between the prepared USLE-based soil erosion map and landslide events/locations, which are directly proportional to each other. Prognosis analysis on soil erosion helps user agencies/decision makers to design proper conservation planning programs to reduce soil erosion. Temporal statistics on soil erosion amid the dynamic and rapid development of Penang Island indicate the co-existence and balance of the ecosystem.
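
The USLE itself is a simple product of empirical factors, which is what makes it easy to evaluate cell-by-cell in a GIS raster; a sketch (the factor values in the test are illustrative, not the paper's):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: mean annual soil loss A = R*K*LS*C*P.

    R: rainfall erosivity; K: soil erodibility; LS: combined slope length
    and steepness factor; C: cover management; P: support practice.
    With metric-unit factors, A comes out in t/ha/yr.
    """
    return R * K * LS * C * P
```

In a raster analysis each argument is a layer, so the same expression is applied per cell; the static factors (K, LS) come from maps, and the dynamic ones (R, C, P) from the multi-temporal sources described above.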

Journal ArticleDOI
TL;DR: The first measurements of the invariant differential cross sections of inclusive pi(0) and eta meson production at mid-rapidity in proton-proton collisions root s = 0.9 TeV and root s= 7 TeV are reported in this paper.

Journal ArticleDOI
TL;DR: Biocompatibility of HA-CNT composites, which is extremely important for their intended orthopedic application, has been summarized with an overview of the present status.

Journal ArticleDOI
TL;DR: This methodology is marked by excellent yield, regioselectivity and, above all, adaptability to synthesize imidazopyridine-based drug molecules such as Alpidem and Zolpidem.

Journal ArticleDOI
TL;DR: Direct chemical techniques are applied to categorically demonstrate the preservation of eumelanin in two > 160 Ma Jurassic cephalopod ink sacs and to confirm its chemical similarity to the ink of the modern cephalopod, Sepia officinalis.
Abstract: Melanin is a ubiquitous biological pigment found in bacteria, fungi, plants, and animals. It has a diverse range of ecological and biochemical functions, including display, evasion, photoprotection, detoxification, and metal scavenging. To date, evidence of melanin in fossil organisms has relied entirely on indirect morphological and chemical analyses. Here, we apply direct chemical techniques to categorically demonstrate the preservation of eumelanin in two > 160 Ma Jurassic cephalopod ink sacs and to confirm its chemical similarity to the ink of the modern cephalopod, Sepia officinalis. Identification and characterization of degradation-resistant melanin may provide insights into its diverse roles in ancient organisms.

Journal ArticleDOI
TL;DR: In this article, the empirical relation between point load strength index (PLI) and uniaxial compressive strength (UCS) for Indian rocks has been investigated and verified.
Abstract: Uniaxial compressive strength (UCS) is one of the most significant geomechanical properties of rock, being of importance in civil engineering, mining, geotechnical, and infrastructure projects, etc. The UCS is a useful approximate parameter when considering a variety of issues encountered during blasting, excavation, and supporting in engineering works (Hoek 1977). UCS values are also employed in geomechanical classification of rock mass, viz. the rock mass rating (RMR) and Q system, which are used in designing and planning of underground works (Bieniawski 1976; Barton et al. 1974). There are standard methods for determination of UCS, proposed by various scientific agencies (ASTM 1984; ISRM 1979, 1985), but all of them are tedious, time consuming, and expensive. Moreover, obtaining core samples of the desired geometry is often not possible, particularly in soft or highly jointed rock masses. Therefore, indirect tests such as the determination of the point load strength index (PLI) are widely used and accepted for estimation of the UCS value. Point load tests are preferred, as the test is quite flexible in terms of the sample to be used, ease of testing, and applicability in the laboratory as well as in the field. A number of researchers have attempted to provide empirical relations between UCS and PLI (D’Andrea et al. 1964; Broch and Franklin 1972; Bieniawski 1975; Hassani et al. 1980; Gunsallus and Kulhawy 1984; Panek and Fannon 1992; Singh and Singh 1993; Kahraman 2001). These equations give quite similar results, although a few of them show wide variation. However, there is a need for more experimental work for better correlation, particularly for Indian rocks. The main objective of this study is to test and verify the empirical relation between PLI and UCS for some Indian rocks. All tests were performed on NX-size core samples of ten different rock types of igneous, sedimentary, and metamorphic origin from seven different lithostratigraphic units.
A total number of 318 core samples were tested, and the average of three test results for each rock was used for analysis to reduce variation in the dataset.
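
The empirical relations cited above are typically simple linear conversions; a sketch with a conversion factor in the range commonly reported in that literature (roughly 20-25), noting that the factor the paper itself fits for Indian rocks is not given in this excerpt:

```python
def ucs_from_pli(is50, k=22.0):
    """Estimate UCS (MPa) from the point load strength index Is50 (MPa)
    using the common linear form UCS = k * Is50. The default k = 22 is an
    illustrative mid-range value; published fits vary by rock type."""
    return k * is50
```

In practice k is calibrated per lithology, which is exactly the kind of site-specific correlation the study sets out to verify.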

Journal ArticleDOI
Betty Abelev, Jaroslav Adam, Dagmar Adamová, Andrew Marshall Adare, and 1012 more authors (86 institutions)
TL;DR: In this paper, the authors used the ALICE detector at the Large Hadron Collider (LHC) to measure the production cross sections of the prompt (B feed-down subtracted) charmed mesons D0, D+, and D*+ in the rapidity range |y| < 0.5 and for transverse momentum 1 < pt < 12 GeV/c.
Abstract: The p t-differential production cross sections of the prompt (B feed-down subtracted) charmed mesons D0, D+, and D*+ in the rapidity range |y| < 0.5, and for transverse momentum 1 < p t < 12 GeV/c, were measured in proton-proton collisions at $ \sqrt {s} = 2.76\;{\text{TeV}} $ with the ALICE detector at the Large Hadron Collider. The analysis exploited the hadronic decays D0 → K−π+, D+ → K−π+π+, D*+ → D0π+, and their charge conjugates, and was performed on a $ {\mathcal{L}_{{{\rm int} }}} = 1.1\;{\text{n}}{{\text{b}}^{{ - 1}}} $ event sample collected in 2011 with a minimum-bias trigger. The total charm production cross section at $ \sqrt {s} = 2.76\;{\text{TeV}} $ and at 7 TeV was evaluated by extrapolating to the full phase space the p t-differential production cross sections at $ \sqrt {s} = 2.76\;{\text{TeV}} $ and our previous measurements at $ \sqrt {s} = 7\;{\text{TeV}} $ . The results were compared to existing measurements and to perturbative-QCD calculations. The fraction of $ {\text{c}}\overline {\text{d}} $ D mesons produced in a vector state was also determined.

Journal ArticleDOI
TL;DR: A procedure for tuning the spatial and the temporal resolution of Lissajous trajectories is presented and experimental results obtained on a custom-built atomic force microscope (AFM) are shown.
Abstract: A novel scan trajectory for high-speed scanning probe microscopy is presented in which the probe follows a two-dimensional Lissajous pattern. The Lissajous pattern is generated by actuating the scanner with two single-tone harmonic waveforms of constant frequency and amplitude. Owing to the extremely narrow frequency spectrum, high imaging speeds can be achieved without exciting the unwanted resonant modes of the scanner and without increasing the sensitivity of the feedback loop to the measurement noise. The trajectory also enables rapid multiresolution imaging, providing a preview of the scanned area in a fraction of the overall scan time. We present a procedure for tuning the spatial and the temporal resolution of Lissajous trajectories and show experimental results obtained on a custom-built atomic force microscope (AFM). Real-time AFM imaging with a frame rate of 1 frame s⁻¹ is demonstrated.
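
The drive signals described above are just two constant-amplitude sinusoids, one per axis; a sketch (the frequencies and amplitudes below are illustrative, not the paper's tuning):

```python
import math

def lissajous(fx, fy, A, B, t):
    """Probe position at time t for a Lissajous scan: each axis is driven by a
    single-tone harmonic of constant frequency and amplitude, so the drive
    spectrum contains only fx and fy and avoids the scanner's resonances."""
    x = A * math.sin(2 * math.pi * fx * t)
    y = B * math.sin(2 * math.pi * fy * t)
    return x, y
```

The ratio fx/fy sets how densely the pattern fills the scan area, which is the knob behind the spatial/temporal resolution tuning and the coarse-preview property mentioned above.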

Journal ArticleDOI
TL;DR: A facile decarbonylation reaction of aldehydes has been developed by employing Pd(OAc)(2) without using any exogenous ligand for palladium or any CO scavenger.

Journal ArticleDOI
01 Jun 2012
TL;DR: The design of a structured search engine which returns a multi-column table in response to a query consisting of keywords describing each of its columns is presented and a novel query segmentation model for matching keywords to table columns is defined.
Abstract: We present the design of a structured search engine which returns a multi-column table in response to a query consisting of keywords describing each of its columns. We answer such queries by exploiting the millions of tables on the Web because these are much richer sources of structured knowledge than free-format text. However, a corpus of tables harvested from arbitrary HTML web pages presents huge challenges of diversity and redundancy not seen in centrally edited knowledge bases. We concentrate on one concrete task in this paper. Given a set of Web tables T1,..., Tn, and a query Q with q sets of keywords Q1,..., Qq, decide for each Ti if it is relevant to Q and if so, identify the mapping between the columns of Ti and query columns. We represent this task as a graphical model that jointly maps all tables by incorporating diverse sources of clues spanning matches in different parts of the table, corpus-wide co-occurrence statistics, and content overlap across table columns. We define a novel query segmentation model for matching keywords to table columns, and a robust mechanism of exploiting content overlap across table columns. We design efficient inference algorithms based on bipartite matching and constrained graph cuts to solve the joint labeling task. Experiments on a workload of 59 queries over a 25 million web table corpus show a significant boost in accuracy over baseline IR methods.
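
A toy stand-in for the column-mapping subtask can make the setup concrete (the paper's actual joint graphical model, with co-occurrence statistics and content-overlap clues, is far richer): score each query column's keywords against each table column's header tokens by Jaccard overlap, then assign greedily, one table column per query column.

```python
def match_columns(query_cols, table_cols):
    """Greedy keyword-to-column mapping sketch.

    query_cols: list of keyword lists, one per query column.
    table_cols: list of header-token lists, one per table column.
    Returns {query_col_index: table_col_index} for positive-score pairs.
    """
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    pairs = sorted(((jaccard(q, t), qi, ti)
                    for qi, q in enumerate(query_cols)
                    for ti, t in enumerate(table_cols)), reverse=True)
    used_q, used_t, mapping = set(), set(), {}
    for score, qi, ti in pairs:           # best-scoring pairs claimed first
        if score > 0 and qi not in used_q and ti not in used_t:
            mapping[qi] = ti
            used_q.add(qi)
            used_t.add(ti)
    return mapping
```

The one-to-one constraint is what the paper solves properly with bipartite matching; the greedy pass here is only an approximation of that step.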

Journal ArticleDOI
TL;DR: Various recent developments in the area of nonlinear state estimation from a Bayesian perspective are reviewed, including constrained state estimation, the handling of multi-rate and delayed measurements, and recent advances in model parameter estimation.

13 May 2012
TL;DR: In this article, the authors investigated the relationship between the Indian stock market index (BSE Sensex) and five macroeconomic variables, namely, industrial production index, wholesale price index, money supply, treasury bills rates and exchange rates over the period 1994:04-2011:06.
Abstract: The study investigates the relationships between the Indian stock market index (BSE Sensex) and five macroeconomic variables, namely, the industrial production index, wholesale price index, money supply, treasury bills rates and exchange rates over the period 1994:04–2011:06. Johansen’s co-integration and vector error correction model have been applied to explore the long-run equilibrium relationship between the stock market index and the macroeconomic variables. The analysis reveals that the macroeconomic variables and the stock market index are co-integrated and, hence, a long-run equilibrium relationship exists between them. It is observed that stock prices positively relate to the money supply and industrial production but negatively relate to inflation. The exchange rate and the short-term interest rate are found to be insignificant in determining stock prices. In the Granger causality sense, macroeconomic variables cause stock prices in the long run but not in the short run. Bidirectional causality exists between industrial production and stock prices, whereas unidirectional causality is found from money supply to stock prices, from stock prices to inflation, and from interest rates to stock prices.

Journal ArticleDOI
01 Sep 2012-Carbon
TL;DR: In this article, superparamagnetic Fe3O4 nanoparticles were anchored on reduced graphene oxide nanosheets by co-precipitation of iron salts in the presence of different amounts of graphene oxide (GO).

Journal ArticleDOI
Leszek Adamczyk, G. Agakishiev, Madan M. Aggarwal, Zubayer Ahammed, and 367 more authors (52 institutions)
TL;DR: In this paper, a systematic study is presented of the centrality, transverse momentum (p(T)), and pseudorapidity (eta) dependence of the inclusive charged hadron elliptic flow (v(2)) at midrapidity (vertical bar eta vertical bar < 1.0) in Au + Au collisions at root s(NN) = 7.7, 11.5, 19.6, 27, and 39 GeV.
Abstract: A systematic study is presented for centrality, transverse momentum (p(T)), and pseudorapidity (eta) dependence of the inclusive charged hadron elliptic flow (v(2)) at midrapidity (vertical bar eta vertical bar < 1.0) in Au + Au collisions at root s(NN) = 7.7, 11.5, 19.6, 27, and 39 GeV. The results obtained with different methods, including correlations with the event plane reconstructed in a region separated by a large pseudorapidity gap and four-particle cumulants (v(2){4}), are presented to investigate nonflow correlations and v(2) fluctuations. We observe that the difference between v(2){2} and v(2){4} is smaller at the lower collision energies. Values of v(2), scaled by the initial coordinate space eccentricity, v(2)/epsilon, as a function of p(T) are larger in more central collisions, suggesting stronger collective flow develops in more central collisions, similar to the results at higher collision energies. These results are compared to measurements at higher energies at the Relativistic Heavy Ion Collider (root s(NN) = 62.4 and 200 GeV) and at the Large Hadron Collider (Pb + Pb collisions at root s(NN) = 2.76 TeV). The v(2)(pT) values for fixed pT rise with increasing collision energy within the pT range studied (<2 GeV/c). A comparison to viscous hydrodynamic simulations is made to potentially help understand the energy dependence of v(2)(pT). We also compare the v(2) results to UrQMD and AMPT transport model calculations, and physics implications on the dominance of partonic versus hadronic phases in the system created at beam energy scan energies are discussed.

Journal ArticleDOI
TL;DR: The present work aimed to develop a novel chitosan–PVA-based hydrogel which could behave both as a nanoreactor and an immobilizing matrix for silver nanoparticles (AgNPs) with promising antibacterial applications.
Abstract: Hydrogels are water-insoluble crosslinked hydrophilic networks capable of retaining a large amount of water. The present work aimed to develop a novel chitosan–PVA-based hydrogel which could behave both as a nanoreactor and an immobilizing matrix for silver nanoparticles (AgNPs) with promising antibacterial applications. The hydrogel containing AgNPs was prepared by repeated freeze–thaw treatment using varying amounts of the crosslinker, followed by in situ reduction with sodium borohydride as a reducing agent. Characterization studies established that the hydrogel provides a controlled and uniform distribution of nanoparticles within the polymeric network without addition of any further stabilizer. The average particle size was found to be 13 nm with a size distribution from 8 to 21 nm as per HR-TEM studies. Swelling studies confirmed that a higher amount of crosslinker and silver incorporation inside the gel matrices significantly enhanced the porosity and chain entanglement of the polymeric species of the hydrogel, respectively. The AgNP-hydrogel exhibited good antibacterial activity and was found to cause a significant reduction in microbial growth (Escherichia coli) in 12 h, while such activity was not observed for the hydrogel without AgNPs.

Journal ArticleDOI
TL;DR: In this article, the authors present new laboratory and field measurements showing that 7-9% of kerosene consumed by widely used simple wick lamps is converted to carbonaceous particulate matter that is nearly pure BC.
Abstract: Kerosene-fueled wick lamps used in millions of developing-country households are a significant but overlooked source of black carbon (BC) emissions. We present new laboratory and field measurements showing that 7-9% of kerosene consumed by widely used simple wick lamps is converted to carbonaceous particulate matter that is nearly pure BC. These high emission factors increase previous BC emission estimates from kerosene by 20-fold, to 270 Gg/year (90% uncertainty bounds: 110, 590 Gg/year). Aerosol climate forcing on atmosphere and snow from this source is estimated at 22 mW/m² (8, 48 mW/m²), or 7% of BC forcing by all other energy-related sources. Kerosene lamps have affordable alternatives that pose few clear adoption barriers and would provide immediate benefit to user welfare. The net effect on climate is definitively positive forcing as coemitted organic carbon is low. No other major BC source has such readily available alternatives, definitive climate forcing effects, and cobenefits. Replacement of kerosene-fueled wick lamps deserves strong consideration for programs that target short-lived climate forcers.

Journal ArticleDOI
TL;DR: L-M and BFGS algorithm-based BPNN models for the grinding process are provided, which can predict the nonlinear behaviour of the multiple response grinding process with the same level of accuracy as the A-L based network.
Abstract:
Highlights:
- Levenberg-Marquardt (L-M) and Boyden, Fletcher, Goldfarb and Shanno (BFGS) update Quasi-Newton (Q-N)-based BPNN networks are as efficient as adaptive learning (A-L) algorithm-based BPNN networks.
- The L-M algorithm has the fastest network convergence rate, followed by BFGS update Q-N and the A-L algorithm.
- A-L-based BPNN learns faster than BFGS update Q-N, and L-M takes the maximum time for network training.
- The A-L algorithm is relatively easy to understand and implement, as compared to the L-M or BFGS update Q-N algorithm, for online process control.

Monitoring and control of multiple process quality characteristics (responses) in grinding plays a critical role in precision parts manufacturing industries. Precise and accurate mathematical modelling of multiple response process behaviour holds the key for a better quality product with minimum variability in the process. Artificial neural network (ANN)-based nonlinear grinding process models using the backpropagation weight adjustment algorithm (BPNN) are used extensively by researchers and practitioners. However, the suitability of, and a systematic approach to implementing, the Levenberg-Marquardt (L-M) and Boyden, Fletcher, Goldfarb and Shanno (BFGS) update Quasi-Newton (Q-N) algorithms for modelling and control of the grinding process is seldom explored. This paper provides L-M and BFGS algorithm-based BPNN models for the grinding process, and verifies their effectiveness by using a real-life industrial situation. Based on the real-life data, the performance of L-M and BFGS update Q-N are compared with an adaptive learning (A-L) and gradient descent algorithm-based BPNN model. The results clearly indicate that L-M and BFGS-based networks converge faster and can predict the nonlinear behaviour of the multiple response grinding process with the same level of accuracy as the A-L based network.
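
To make the L-M update concrete, here is a minimal Levenberg-Marquardt fit of a two-parameter model y = a*exp(b*x). This is not the paper's BPNN, just the damped normal-equations step, (J^T J + lam*I) delta = J^T r, that also underlies L-M network training; the model and data are illustrative.

```python
import math

def lm_fit(xs, ys, a=1.0, b=0.0, lam=1e-3, iters=100):
    """Minimal Levenberg-Marquardt for y = a*exp(b*x).

    Each iteration solves (J^T J + lam*I) delta = J^T r for the 2x2 system,
    then grows lam (toward gradient descent) if the step worsened the squared
    error, or shrinks it (toward Gauss-Newton) if the step helped.
    """
    def sse(a, b):
        return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

    for _ in range(iters):
        g11 = g12 = g22 = r1 = r2 = 0.0
        for x, y in zip(xs, ys):
            f = a * math.exp(b * x)
            ja, jb = math.exp(b * x), x * f     # df/da, df/db
            r = y - f
            g11 += ja * ja; g12 += ja * jb; g22 += jb * jb
            r1 += ja * r;   r2 += jb * r
        d = (g11 + lam) * (g22 + lam) - g12 * g12   # damped 2x2 solve
        da = ((g22 + lam) * r1 - g12 * r2) / d
        db = ((g11 + lam) * r2 - g12 * r1) / d
        if sse(a + da, b + db) < sse(a, b):
            a, b, lam = a + da, b + db, lam * 0.5
        else:
            lam *= 10.0
    return a, b
```

The same damping idea, applied to a network's weight Jacobian instead of a two-parameter model, is why L-M converges in few iterations but at a higher per-iteration cost, consistent with the highlights above.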

Journal ArticleDOI
TL;DR: It is proved that the receding horizon implementation of the resulting control policies renders the state of the overall system mean-square bounded under mild assumptions, thus reducing the on-line computation.