Author

Sandeep K. S. Gupta

Other affiliations: Ohio University, Colorado State University, Duke University
Bio: Sandeep K. S. Gupta is an academic researcher from Arizona State University. The author has contributed to research in the topics of Automatic test pattern generation and Wireless sensor network. The author has an h-index of 62 and has co-authored 452 publications receiving 14,100 citations. Previous affiliations of Sandeep K. S. Gupta include Ohio University and Colorado State University.


Papers
Proceedings ArticleDOI
16 Jul 2001
TL;DR: The aim is to motivate vigorous research in this area by illustrating the need for more application-specific and novel approaches toward developing wireless networking solutions for human-implanted smart sensors.
Abstract: Implanted biomedical devices have the potential to revolutionize medicine. Smart sensors, which are created by combining sensing materials with integrated circuitry, are being considered for several biomedical applications such as a glucose level monitor or a retina prosthesis. These devices require the capability to communicate with an external computer system (base station) via a wireless interface. The limited power and computational capabilities of smart sensor based biological implants present research challenges in several aspects of wireless networking due to the need for having a bio-compatible, fault-tolerant, energy-efficient, and scalable design. Further, embedding these sensors in humans adds additional requirements. For example, the wireless networking solutions should be ultra-safe and reliable, work trouble-free in different geographical locations (although implants are typically not expected to move, they shouldn't restrict the movements of their human host), and require minimal maintenance. This necessitates application-specific solutions which are vastly different from traditional solutions. In this paper, we describe the potential of biomedical smart sensors. We then explain the challenges for wireless networking of human-embedded smart sensor arrays and our preliminary approach for wireless networking of a retina prosthesis. Our aim is to motivate vigorous research in this area by illustrating the need for more application-specific and novel approaches toward developing wireless networking solutions for human-implanted smart sensors.

626 citations

Journal ArticleDOI
TL;DR: Results from small-scale data center simulations show that solving the formulation leads to an inlet temperature distribution that, compared to other approaches, is 2°C to 5°C lower and achieves about 20 to 30 percent cooling energy savings at common data center utilization rates.
Abstract: High-performance computing data centers have been rapidly growing, both in number and size. Thermal management of data centers can address dominant problems associated with cooling such as the recirculation of hot air from the equipment outlets to their inlets and the appearance of hot spots. In this paper, we show through formalization that minimizing the peak inlet temperature allows for the lowest cooling power needs. Using a low-complexity linear heat recirculation model, we define the problem of minimizing the peak inlet temperature within a data center through task assignment (MPIT-TA), consequently leading to minimal cooling requirements. We also provide two methods to solve the formulation: Xlnt-GA, which uses a genetic algorithm, and Xlnt-SQP, which uses sequential quadratic programming. Results from small-scale data center simulations show that solving the formulation leads to an inlet temperature distribution that, compared to other approaches, is 2°C to 5°C lower and achieves about 20 to 30 percent cooling energy savings at common data center utilization rates. Moreover, our algorithms consistently outperform the minimize-heat-recirculation algorithm, a recirculation-reducing task placement algorithm from the literature.
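
A minimal sketch of the MPIT-TA idea, in the spirit of Xlnt-SQP (here via SciPy's SLSQP solver): the 4-node sizing, recirculation matrix D, supply temperature, and power model below are illustrative placeholders, not the paper's measured data. The peak temperature is handled with an epigraph reformulation so the nonsmooth max becomes SQP-friendly.

import numpy as np
from scipy.optimize import minimize

N = 4
T_SUP = 15.0                            # CRAC supply temperature (assumed)
D = 0.01 * np.array([[1.0, 0.5, 0.3, 0.2],      # assumed heat recirculation
                     [0.5, 1.0, 0.5, 0.3],      # (cross-interference) matrix
                     [0.3, 0.5, 1.0, 0.5],
                     [0.2, 0.3, 0.5, 1.0]])
P_IDLE, P_BUSY = 100.0, 300.0           # per-node power in watts (assumed)
TOTAL_LOAD = 2.0                        # task demand in "fully busy nodes"

def inlet_temps(u):
    # Linear heat recirculation model: inlets warm up by D @ power(u)
    power = P_IDLE + (P_BUSY - P_IDLE) * u
    return T_SUP + D @ power

# Epigraph trick: minimize t subject to inlet_temps(u) <= t elementwise
cons = [
    {"type": "ineq", "fun": lambda x: x[-1] - inlet_temps(x[:N])},
    {"type": "eq",   "fun": lambda x: np.sum(x[:N]) - TOTAL_LOAD},
]
x0 = np.append(np.full(N, TOTAL_LOAD / N), 40.0)
res = minimize(lambda x: x[-1], x0,
               bounds=[(0.0, 1.0)] * N + [(None, None)],
               constraints=cons, method="SLSQP")
print("utilization per node:", np.round(res.x[:N], 3))
print("peak inlet temperature (deg C):", round(float(res.x[-1]), 2))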

486 citations

Journal ArticleDOI
TL;DR: Reconfigurable Context-Sensitive Middleware facilitates the development and runtime operations of context-sensitive pervasive computing software.
Abstract: Context-sensitive applications need data from sensors, devices, and user actions, and might need ad hoc communication support to dynamically discover new devices and engage in spontaneous information exchange. Reconfigurable Context-Sensitive Middleware facilitates the development and runtime operations of context-sensitive pervasive computing software.

416 citations

Journal ArticleDOI
TL;DR: Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights.
Abstract: Fairness is an important issue when accessing a shared wireless channel. With fair scheduling, it is possible to allocate bandwidth in proportion to weights of the packet flows sharing the channel. This paper presents a fully distributed algorithm for fair scheduling in a wireless LAN. The algorithm can be implemented without using a centralized coordinator to arbitrate medium access. The proposed protocol is derived from the Distributed Coordination Function in the IEEE 802.11 standard. Simulation results show that the proposed algorithm is able to schedule transmissions such that the bandwidth allocated to different flows is proportional to their weights. An attractive feature of the proposed approach is that it can be implemented with simple modifications to the IEEE 802.11 standard.
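
The core mechanism lends itself to a short sketch: each station draws a backoff interval roughly proportional to packet length divided by flow weight, so higher-weight flows wait less on average. The scaling constant, the randomization range, and the toy contention loop below are illustrative choices, not the protocol's exact parameters.

import random

SCALING = 0.02   # assumed scaling constant

def backoff(pkt_len_bits, weight):
    # Backoff shrinks as weight grows; the random factor de-synchronizes
    # stations that would otherwise pick identical intervals and collide.
    rho = random.uniform(0.9, 1.1)
    return SCALING * rho * pkt_len_bits / weight

# Event-driven toy contention: all counters run down together and the
# station reaching zero first transmits, as in 802.11's DCF countdown.
flows = {"A": 1.0, "B": 2.0}            # B should get about twice the bandwidth
counters = {f: backoff(8000, w) for f, w in flows.items()}
wins = {f: 0 for f in flows}
for _ in range(10_000):
    winner = min(counters, key=counters.get)
    elapsed = counters[winner]
    for f in counters:
        counters[f] -= elapsed          # everyone counts down in parallel
    wins[winner] += 1
    counters[winner] = backoff(8000, flows[winner])
print(wins)                             # expect roughly a 1:2 transmission ratio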

324 citations

Proceedings ArticleDOI
27 Oct 2003
TL;DR: This paper proposes an approach wherein biometrics derived from the body are used for securing the keying material, which obviates the need for expensive computation and avoids unnecessary communication, making this approach novel compared to existing approaches.
Abstract: Advances in microelectronics, material science and wireless technology have led to the development of sensors that can be used for accurate monitoring of inaccessible environments. Health monitoring, telemedicine, military and environmental monitoring are some of the applications where sensors can be used. The sensors implanted inside the human body to monitor parts of the body are called biosensors. These biosensors form a network and collectively monitor the health condition of their carrier or host. Health monitoring involves collection of data about vital body parameters from different parts of the body and making decisions based on it. This information is of a personal nature and is required to be secured. Insecurity may also lead to dangerous consequences. Due to the extreme constraints of energy, memory and computation, securing the communication among the biosensors is not a trivial problem. Key distribution is central to any security mechanism. In this paper we propose an approach wherein biometrics derived from the body are used for securing the keying material. This method obviates the need for expensive computation and avoids unnecessary communication, making our approach novel compared to existing approaches.
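
A deliberately simplified sketch of the general idea, not the paper's exact construction: a physiological value that both communicating biosensors can measure on the same body is hashed into a one-time pad that hides the keying material, so the key travels securely with no expensive computation and no extra message exchange.

import hashlib, os

def physiological_pad(samples):
    # Derive a pad from a body signal (e.g., inter-pulse intervals) that
    # every biosensor on the same body can observe; coarse quantization
    # lets slightly different measurements agree on the same bytes.
    quantized = bytes(int(round(s * 100)) % 256 for s in samples)
    return hashlib.sha256(quantized).digest()

def lock_key(key, pad):
    # XOR-hide the key; XOR is its own inverse, so the same function
    # unlocks it for any device that re-derives the pad.
    return bytes(k ^ p for k, p in zip(key, pad))

signal = [0.81, 0.79, 0.83, 0.80, 0.78, 0.82, 0.80, 0.79]
key = os.urandom(32)
locked = lock_key(key, physiological_pad(signal))          # sending biosensor
recovered = lock_key(locked, physiological_pad(signal))    # receiving biosensor
assert recovered == key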

319 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, handwriting recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
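
The fourth category invites a toy example. The filter below (the class name and word-count scoring are illustrative, not from the text) maintains its rules automatically from each accept/reject decision the user makes:

from collections import Counter

class AdaptiveMailFilter:
    # Learns per-user filtering rules from feedback instead of requiring
    # the user or a software engineer to program them by hand.
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()

    def learn(self, message, rejected):
        words = message.lower().split()
        (self.spam_words if rejected else self.ham_words).update(words)

    def looks_unwanted(self, message):
        words = message.lower().split()
        spam = sum(self.spam_words[w] for w in words)
        ham = sum(self.ham_words[w] for w in words)
        return spam > ham

f = AdaptiveMailFilter()
f.learn("win a free prize now", rejected=True)
f.learn("meeting agenda for monday", rejected=False)
print(f.looks_unwanted("free prize inside"))   # True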

13,246 citations

Journal ArticleDOI
24 Jan 2005
TL;DR: It is shown that such an approach can yield an implementation of the discrete Fourier transform that is competitive with hand-optimized libraries, and the software structure that makes the current FFTW3 version flexible and adaptive is described.
Abstract: FFTW is an implementation of the discrete Fourier transform (DFT) that adapts to the hardware in order to maximize performance. This paper shows that such an approach can yield an implementation that is competitive with hand-optimized libraries, and describes the software structure that makes our current FFTW3 version flexible and adaptive. We further discuss a new algorithm for real-data DFTs of prime size, a new way of implementing DFTs by means of machine-specific single-instruction, multiple-data (SIMD) instructions, and how a special-purpose compiler can derive optimized implementations of the discrete cosine and sine transforms automatically from a DFT algorithm.
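
For a flavor of the divide-and-conquer structure the planner composes, here is a toy radix-2 Cooley-Tukey DFT (power-of-two sizes only; FFTW itself composes many decompositions, generated codelets, and SIMD code, none of which is shown here):

import cmath

def fft(x):
    # Radix-2 decimation-in-time: split into even/odd halves, recurse,
    # then combine with twiddle factors. len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

print([round(abs(v), 6) for v in fft([1, 1, 1, 1, 0, 0, 0, 0])])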

5,172 citations

Proceedings ArticleDOI
03 Nov 2004
TL;DR: The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages and implicit dynamic topology updates; MAC-layer time-stamping and comprehensive error compensation, including clock skew estimation, yield precision markedly better than that of the existing RBS and TPSN algorithms.
Abstract: Wireless sensor network applications, similarly to other distributed systems, often require a scalable time synchronization service enabling data consistency and coordination. This paper describes the Flooding Time Synchronization Protocol (FTSP), especially tailored for applications requiring stringent precision on resource-limited wireless platforms. The proposed time synchronization protocol uses low communication bandwidth and is robust against node and link failures. The FTSP achieves its robustness by utilizing periodic flooding of synchronization messages and implicit dynamic topology updates. The unique high-precision performance is reached by utilizing MAC-layer time-stamping and comprehensive error compensation including clock skew estimation. The sources of delays and uncertainties in message transmission are analyzed in detail and techniques are presented to mitigate their effects. The FTSP was implemented on the Berkeley Mica2 platform and evaluated in a 60-node, multi-hop setup. The average per-hop synchronization error was in the one microsecond range, which is markedly better than that of the existing RBS and TPSN algorithms.
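
The skew-compensation step reduces to fitting a line through (local, global) timestamp pairs collected from time-stamped flood messages; FTSP maintains such a fit incrementally on the motes. A minimal sketch with made-up timestamps:

def fit_clock(pairs):
    # Least-squares fit of global = offset + skew * local
    n = len(pairs)
    mx = sum(l for l, _ in pairs) / n
    my = sum(g for _, g in pairs) / n
    sxx = sum((l - mx) ** 2 for l, _ in pairs)
    sxy = sum((l - mx) * (g - my) for l, g in pairs)
    skew = sxy / sxx
    return my - skew * mx, skew

pairs = [(1000, 2003), (2000, 3005), (3000, 4006), (4000, 5008)]
offset, skew = fit_clock(pairs)
print("estimated global time at local 4500:", offset + skew * 4500)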

2,267 citations

09 Mar 2012
TL;DR: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems; this entry introduces them using familiar econometric terminology.
Abstract: Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.
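
As a concrete instance of the flexible nonlinear form, a one-hidden-layer feedforward network in NumPy, with placeholder random weights (a real application would estimate them from data):

import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, w2, b2):
    # f(x) = w2 . tanh(W1 @ x + b1) + b2
    hidden = np.tanh(W1 @ x + b1)
    return w2 @ hidden + b2

W1 = rng.normal(size=(3, 2))   # 2 inputs -> 3 hidden units
b1 = rng.normal(size=3)
w2 = rng.normal(size=3)
b2 = 0.0
print(forward(np.array([0.5, -1.0]), W1, b1, w2, b2))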

2,069 citations

Proceedings ArticleDOI
12 May 1998
TL;DR: This paper proposes an adaptive FFT program that tunes the computation automatically for any particular hardware; tests show that FFTW's self-optimizing approach usually yields significantly better performance than all other publicly available software.
Abstract: FFT literature has been mostly concerned with minimizing the number of floating-point operations performed by an algorithm. Unfortunately, on present-day microprocessors this measure is far less important than it used to be, and interactions with the processor pipeline and the memory hierarchy have a larger impact on performance. Consequently, one must know the details of a computer architecture in order to design a fast algorithm. In this paper, we propose an adaptive FFT program that tunes the computation automatically for any particular hardware. We compared our program, called FFTW, with over 40 implementations of the FFT on 7 machines. Our tests show that FFTW's self-optimizing approach usually yields significantly better performance than all other publicly available software. FFTW also compares favorably with machine-specific, vendor-optimized libraries.
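
The self-optimizing idea is easy to caricature: implement the same transform more than one way, time each candidate on the target machine, and keep the winner. The two O(n^2) DFT variants below are trivial stand-ins for FFTW's actual plan space:

import cmath
import timeit

def dft_naive(x):
    # Textbook O(n^2) DFT, recomputing every twiddle factor
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                for j in range(n))
            for k in range(n)]

def dft_cached(x):
    # Same O(n^2) DFT with a precomputed twiddle table; whether the
    # cache wins depends on the machine, which is the planner's point
    n = len(x)
    tw = [cmath.exp(-2j * cmath.pi * m / n) for m in range(n)]
    return [sum(x[j] * tw[(j * k) % n] for j in range(n))
            for k in range(n)]

def plan(size, candidates, trials=3):
    # Time every candidate on this machine and keep the fastest --
    # a toy version of FFTW's empirical planning
    x = [complex(i % 7, (i * 3) % 5) for i in range(size)]
    return min(candidates,
               key=lambda f: min(timeit.repeat(lambda: f(x),
                                               number=3, repeat=trials)))

best = plan(128, [dft_naive, dft_cached])
print("planner chose:", best.__name__)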

1,824 citations