
Showing papers by "V. Kamakoti published in 2013"


Journal ArticleDOI
TL;DR: A novel algorithm for synthesis (placement and embedding) of microarrays, which consumes significantly less time than the best algorithm reported in the literature, while maintaining the quality (border length of masks) of the result.
Abstract: DNA microarrays are used extensively for biochemical analysis that includes genomics and drug discovery. This increased usage demands large microarrays, thus complicating their computer-aided design (CAD) and manufacturing methodologies. One such time-consuming design problem is to minimize the border length of masks used during the manufacture of microarrays. From the manufacturing point of view, the border length of masks is one of the crucial parameters determining the reliability of the microarray. This article presents a novel algorithm for synthesis (placement and embedding) of microarrays, which consumes significantly less time than the best algorithm reported in the literature, while maintaining the quality (border length of masks) of the result. The proposed technique uses only a part of each probe to decide on the placement and the remaining parts to decide on the embedding sequence. This is in contrast to earlier methods that considered the entire probe for both placement and embedding. The second novelty of the proposed technique is the preclassification (prior to placement and embedding) of probes based on their prefixes. This reduces the problem of deciding the next probe to be placed from one involving computation of Hamming distances between all probes (as used in earlier approaches) to one involving a search for nonempty cells on a constant-size grid array. The proposed algorithm is 43× faster than the best reported in the literature when synthesizing a microarray with 250,000 probes, and its computation time grows linearly for larger microarrays.
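
The prefix-based preclassification can be pictured as bucketing probes into a small grid indexed by their prefixes, so that the "next probe" search only scans nearby nonempty buckets instead of computing Hamming distances to every unplaced probe. The sketch below is a rough Python illustration of that idea under assumed parameters (prefix length, DNA alphabet); the function names and details are hypothetical and not taken from the paper.

```python
from collections import defaultdict
from itertools import product

BASES = "ACGT"
PREFIX_LEN = 4            # assumed prefix length; the paper's value is not given here

def prefix_key(probe):
    """Map a probe's prefix to a coordinate on a constant-size grid (one axis per prefix position)."""
    return tuple(BASES.index(b) for b in probe[:PREFIX_LEN])

def build_grid(probes):
    """Pre-classify probes into grid cells keyed by their prefix (done once, before placement)."""
    grid = defaultdict(list)
    for p in probes:
        grid[prefix_key(p)].append(p)
    return grid

def next_probe(grid, last_placed):
    """Choose the next probe by scanning grid cells in order of prefix distance from the
    last placed probe, rather than computing Hamming distances to all remaining probes."""
    centre = prefix_key(last_placed)
    for radius in range(PREFIX_LEN + 1):
        for cell in product(range(len(BASES)), repeat=PREFIX_LEN):
            if sum(a != b for a, b in zip(cell, centre)) == radius and grid.get(cell):
                return grid[cell].pop()
    return None

probes = ["ACGTACGT", "ACGTTTTT", "ACCTGGGG", "TTTTACGT"]
grid = build_grid(probes)
seed = grid[prefix_key(probes[0])].pop()       # seed the placement with any probe
following = next_probe(grid, seed)             # closest-prefix probe among the rest
```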

8 citations


Book ChapterDOI
01 Jan 2013
TL;DR: A novel design of a portable, low-cost, 3-lead wireless wearable ECG device that acquires the raw ECG signal using three electrodes placed on the chest of the subject and sends it to the microcontroller for further filtering of artifacts.
Abstract: In this paper, we present a novel design of a portable, low-cost, 3-lead wireless wearable ECG device. The device acquires the raw ECG signal using three electrodes placed on the chest of the subject. An analog circuit board conditions this raw signal, which is then sent to the microcontroller for further filtering of artifacts. A novel method is used for phase compensation. A Bluetooth module receives the filtered data from the microcontroller and transmits it to the user's phone, where the ECG is displayed and simultaneously stored in text and JPEG formats. The recorded JPEG image can be transmitted to the doctor's mobile phone via MMS, and the doctor can give instant feedback. The analog front-end (AFE) module is designed using low-cost, reliable components. TI's MSP430F47186 microcontroller runs the digital filters, written in C. The Bluetooth module is Bluegiga's WT12, and the system operates from a 3 V supply. The Android application was written in Java for acquiring, plotting, and storing the data on the phone's SD card in text and JPEG formats. A Samsung Galaxy Fit Android phone was used for the prototype. Throughout the system design, the cost incurred was kept to a minimum.
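
The abstract does not describe the phase-compensation method itself; one common approach for the linear-phase FIR filters typically used in such ECG front-ends is to shift the filtered output by the filter's group delay so it stays time-aligned with the raw signal. The Python sketch below illustrates only that generic idea; the filter taps, sampling rate, and test signal are assumptions for the example, not details of the device.

```python
import numpy as np

def moving_average_fir(num_taps):
    """A simple linear-phase FIR low-pass (moving average), used here only as a stand-in
    for the device's artifact filters, which are not specified in the abstract."""
    return np.ones(num_taps) / num_taps

def filter_with_group_delay_compensation(ecg, taps):
    """Filter the ECG and shift the output left by the filter's group delay
    ((N-1)/2 samples for a symmetric FIR) so the filtered samples stay aligned
    with the raw signal. This is one standard form of phase compensation; the
    paper's exact method is not described in the abstract."""
    delay = (len(taps) - 1) // 2
    filtered = np.convolve(ecg, taps, mode="full")
    return filtered[delay:delay + len(ecg)]

# Example with a 250 Hz ECG-like test signal (values chosen only for illustration).
fs = 250
t = np.arange(0, 2, 1 / fs)
raw = np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(t.size)
clean = filter_with_group_delay_compensation(raw, moving_average_fir(15))
```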

4 citations


Proceedings ArticleDOI
24 Oct 2013
TL;DR: This paper presents a framework wherein devices can operate in varying error-tolerant modes while significantly reducing the power dissipated, and presents a novel layered synthesis optimization coupled with temperature-aware supply and body-bias voltage scaling to operate the design at various “tunable” error-tolerance modes.
Abstract: With increasing computing power in mobile devices, conserving battery power (or extending battery life) has become crucial. This, together with the fact that most applications running on these mobile devices are increasingly error-tolerant, has created immense interest in stochastic (or inexact) computing. In this paper, we present a framework wherein devices can operate in varying error-tolerant modes while significantly reducing the power dissipated. Further, in very deep sub-micron technologies, temperature plays a crucial role in both performance and power. The proposed framework presents a novel layered synthesis optimization coupled with temperature-aware supply and body-bias voltage scaling to operate the design at various “tunable” error-tolerant modes. We implement the proposed technique on an H.264 decoder block in an industrial 28 nm low-leakage technology node and demonstrate reductions in total power ranging from 30% to 45% as the operating mode changes from exact computing to inaccurate/error-tolerant computing.
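
One way to picture the tunable modes is as a table of pre-characterized operating points: given a requested error-tolerance mode and the current die temperature, the runtime picks a supply voltage and body-bias voltage. The sketch below is purely illustrative; the mode names, temperature threshold, and voltage values are invented for the example and are not figures from the paper.

```python
# Illustrative only: (mode, temperature bin) -> (Vdd, body-bias) operating points.
# The voltages below are made up for the sketch; the paper's characterization is not given here.
OPERATING_POINTS = {
    ("exact",   "low"):  (0.90,  0.00),
    ("exact",   "high"): (0.95,  0.00),
    ("relaxed", "low"):  (0.80, -0.10),   # reverse body bias to cut leakage
    ("relaxed", "high"): (0.85, -0.05),
    ("loose",   "low"):  (0.70, -0.20),
    ("loose",   "high"): (0.75, -0.15),
}

def select_operating_point(mode, temperature_c):
    """Pick supply and body-bias voltages for the requested error-tolerance mode,
    taking the die temperature into account (threshold chosen arbitrarily here)."""
    temp_bin = "high" if temperature_c >= 60 else "low"
    return OPERATING_POINTS[(mode, temp_bin)]

vdd, vbb = select_operating_point("relaxed", 72)
```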

2 citations


Journal ArticleDOI
TL;DR: This paper models the Capture-Power minimization problem as an instance of the Bottleneck Traveling Salesman Path Problem (BTSPP) and presents a methodology for estimating a lower bound on the peak capture-power.
Abstract: IR-drop-induced timing failures during testing can be avoided by minimizing the peak capture power. This paper models the capture-power minimization problem as an instance of the Bottleneck Traveling Salesman Path Problem (BTSPP). The solution to the BTSPP implies an ordering of the input test vectors which, when followed during testing, minimizes the peak capture power. The paper also presents a methodology for estimating a lower bound on the peak capture power. Applying the proposed technique to ITC'99 benchmarks yielded optimal results (equal to the estimated lower bound) for all circuits. Interestingly, the technique also significantly reduced the average power consumed during testing when compared with commercial state-of-the-art tools.
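
To see what the BTSPP formulation buys, one can treat each pair of test vectors as an edge whose weight approximates the capture power triggered when the second vector is applied after the first (for example, the number of toggling bits), and then look for an ordering whose largest edge weight is as small as possible. The Python sketch below uses a simple greedy heuristic and a Hamming-distance power proxy purely for illustration; it is not the paper's BTSPP solution or its power model.

```python
def hamming(a, b):
    """Proxy for the capture power triggered when vector b is applied after a:
    the number of bit positions that toggle."""
    return sum(x != y for x, y in zip(a, b))

def bottleneck_order(vectors):
    """Greedy heuristic for a Bottleneck TSP Path over test vectors: repeatedly append
    the unused vector whose transition cost from the current one is smallest, then
    report the largest consecutive cost (the peak) of the resulting order."""
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    peak = max(hamming(a, b) for a, b in zip(order, order[1:]))
    return order, peak

tests = ["0000", "1111", "0011", "0101", "1110"]
order, peak = bottleneck_order(tests)   # peak toggles across the reordered sequence
```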

2 citations



Proceedings ArticleDOI
27 May 2013
TL;DR: PinPoint is presented, a technique that further divides the equivalence class into smaller sets based on the capture power consumed by the circuit under test in the presence of different faults in it, thus aiding in narrowing down on the fault.
Abstract: Conventional ATPG tools help detect only the equivalence class to which a fault belongs, not the fault itself. This paper presents PinPoint, a technique that further divides the equivalence class into smaller sets based on the capture power consumed by the circuit under test in the presence of different faults, thus helping narrow down the actual fault. Applying the technique to ITC benchmark circuits yielded significant improvement in diagnostic resolution.
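
Conceptually, PinPoint-style refinement amounts to attaching a capture-power signature (one power figure per test) to every fault in a logic-equivalence class and splitting the class wherever the signatures differ. The Python sketch below shows only that grouping step with a made-up power model; `simulate_capture_power` stands in for a power-aware fault simulator and is not a real tool's API.

```python
from collections import defaultdict

def refine_equivalence_class(faults, tests, simulate_capture_power):
    """Split one logic-equivalence class of faults into finer sets whose simulated
    capture-power signatures differ across the given tests."""
    groups = defaultdict(list)
    for fault in faults:
        # Quantise each per-test power figure so small simulation noise
        # does not separate otherwise identical signatures.
        signature = tuple(round(simulate_capture_power(fault, t), 2) for t in tests)
        groups[signature].append(fault)
    return list(groups.values())

# Toy usage with an invented power model: f1 and f2 stay together, f3 is separated.
faults = ["f1", "f2", "f3"]
tests = ["t1", "t2"]
power = {("f1", "t1"): 1.0, ("f1", "t2"): 2.0,
         ("f2", "t1"): 1.0, ("f2", "t2"): 2.0,
         ("f3", "t1"): 1.5, ("f3", "t2"): 2.0}
print(refine_equivalence_class(faults, tests, lambda f, t: power[(f, t)]))
# -> [['f1', 'f2'], ['f3']]
```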

1 citation