scispace - formally typeset

Showing papers by "Gaurav Sharma published in 2005"


Journal ArticleDOI
TL;DR: This article indicates several potential implementation errors that are not uncovered in tests performed using the original sample data published with the recently developed CIEDE2000 color-difference formula.
Abstract: This article and the associated data and programs provided with it are intended to assist color engineers and scientists in correctly implementing the recently developed CIEDE2000 color-difference formula. We indicate several potential implementation errors that are not uncovered in tests performed using the original sample data published with the standard. A supplemental set of data is provided for comprehensive testing of implementations. The test data, Microsoft Excel spreadsheets, and MATLAB scripts for evaluating the CIEDE2000 color difference are made available at the first author's website. Finally, we also point out small mathematical discontinuities in the formula. © 2004 Wiley Periodicals, Inc. Col Res Appl, 30, 21–30, 2005; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.20070
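One class of implementation error the article warns about involves the hue-angle computations, where using `atan` instead of `atan2`, or leaving negative angles unwrapped, can pass casual tests. A minimal sketch of that step (illustrative only, with a hypothetical function name; not the authors' reference Excel/MATLAB implementation):

```python
import math

def hue_angle_deg(a_prime, b_prime):
    # CIEDE2000 requires h' = atan2(b*, a') expressed in degrees on [0, 360).
    # Common pitfalls: using atan (loses the quadrant) or returning negative angles.
    if a_prime == 0 and b_prime == 0:
        return 0.0  # hue is undefined for neutral colors; the formula zeroes these terms
    h = math.degrees(math.atan2(b_prime, a_prime))
    return h + 360.0 if h < 0 else h
```

A naive `atan(b/a)` gives the same answer for `(1, 1)` and `(-1, -1)`, while the wrapped `atan2` form correctly separates the quadrants.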

1,451 citations


Journal ArticleDOI
TL;DR: In this paper, a generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve.
Abstract: We present a novel lossless (reversible) data-embedding technique, which enables the exact recovery of the original host signal upon extraction of the embedded information. A generalization of the well-known least significant bit (LSB) modification is proposed as the data-embedding method, which introduces additional operating points on the capacity-distortion curve. Lossless recovery of the original is achieved by compressing portions of the signal that are susceptible to embedding distortion and transmitting these compressed descriptions as a part of the embedded payload. A prediction-based conditional entropy coder which utilizes unaltered portions of the host signal as side-information improves the compression efficiency and, thus, the lossless data-embedding capacity.
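The generalized-LSB arithmetic can be sketched in a few lines, where `L` is the number of modified levels (`L = 2` reduces to ordinary LSB replacement, and larger `L` gives the additional capacity-distortion operating points). This is a simplified illustration of the embedding step only, not the paper's full scheme with its compressed-residual payload:

```python
def glsb_embed(sample, w, L=4):
    # Generalized-LSB embedding: replace the lowest log2(L) levels of the
    # host sample with the watermark symbol w in {0, ..., L-1}:
    # s' = L * floor(s / L) + w
    return L * (sample // L) + w

def glsb_extract(marked, L=4):
    # Returns the embedded symbol and the quantized host residual,
    # from which the original is restored using the compressed description.
    return marked % L, L * (marked // L)
```

For example, embedding the symbol 3 with `L = 4` into sample value 157 yields 159, and extraction recovers the symbol together with the quantized residual 156.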

1,058 citations


Proceedings ArticleDOI
25 May 2005
TL;DR: The results indicate that adding a few wires to a wireless sensor network can not only reduce the average energy expenditure per sensor node, but also the non-uniformity in the energy expenditure across the sensor nodes.
Abstract: In this paper, we investigate the use of limited infrastructure, in the form of wires, for improving the energy efficiency of a wireless sensor network. We call such a sensor network, a wireless sensor network with limited infrastructural support, a hybrid sensor network. The wires act as shortcuts that bring down the average hop count of the network, resulting in reduced energy dissipation per node. Our results indicate that adding a few wires to a wireless sensor network can reduce not only the average energy expenditure per sensor node, but also the non-uniformity in the energy expenditure across the sensor nodes.
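A toy model illustrates the hop-count argument: on a 1-D chain of sensor nodes reporting to a sink, a single wire tap both lowers the average hop count and flattens its spread across nodes. The model and function below are our own illustrative assumptions, not the paper's simulation setup:

```python
def avg_hops(n, wire_taps=()):
    # 1-D line of n sensor nodes; the sink is node 0 and each radio hop
    # covers one neighbor. A "wire" connects a tap node directly to the
    # sink, so a node routes to its nearest tap (or straight to the sink).
    total = 0
    for i in range(n):
        total += min([i] + [abs(i - t) for t in wire_taps])
    return total / n
```

For 10 nodes, tapping the middle node drops the average hop count from 4.5 to 1.6 and the maximum from 9 to 4, so the energy load is both lower and more uniform.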

122 citations


Journal ArticleDOI
TL;DR: The research described here investigates the hypothesis that nanoarchitecture contained in a nanowire array is capable of attenuating the adverse host response generated when medical devices are implanted in the body and suggests that the architecture in the static nanowire arrays and the shear created by oscillating the nanowires would attenuate the biofouling response in vivo.
Abstract: The research described here investigates the hypothesis that nanoarchitecture contained in a nanowire array is capable of attenuating the adverse host response generated when medical devices are implanted in the body. This adverse host response, or biofouling, generates an avascular fibrous mass transfer barrier between the device and the analyte of interest, disabling the implant if it is a sensor. Numerous studies have indicated that surface chemistry and architecture modulate the host response. These findings led us to hypothesize that nanostructured surfaces will significantly inhibit the formation of an avascular fibrous capsule. We are investigating whether arrays of oscillating magnetostrictive nanowires can prevent protein adsorption. Magnetostrictive nanowires were fabricated by electroplating a ferromagnetic metal alloy into the pores of a nanoporous alumina template. The ferromagnetic nanowires are made to oscillate by oscillating the magnetic field surrounding the wires. Radiolabeled bovine serum albumin, enzyme-linked immunosorbent assay (ELISA), and other protein assays were used to study protein adhesion on the nanowire arrays. These results show reduced protein adsorption per unit surface area on static nanowires: comparing the surfaces, only 14-30% of the protein that adsorbed on the flat surface adsorbed on the nanowires. Our contact angle measurements indicate that the attenuation of protein on the nanowire surface might be due to the increased hydrophilicity of the nanostructured surface compared to a flat surface of the same material. We oscillated the magnetostrictive wires by placing them in a 38 G, 10 Hz oscillating magnetic field. The oscillating nanowires show a further reduction in protein adhesion: only 7-67% of the protein on the static wires was measured on the oscillating nanowires. By varying the viscosity of the fluid in which the nanowires are oscillated, we determined that protein detachment is shear-stress modulated. We created a high-shearing fluid with dextran, which reduced protein adsorption on the oscillating nanowires by 70% over nanowires oscillating in a baseline-viscosity fluid. Our preliminary studies strongly suggest that the architecture in the static nanowire arrays and the shear created by oscillating the nanowire arrays would attenuate the biofouling response in vivo.

40 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: A pitch synchronous overlap and add (PSOLA) algorithm is used for pitch and duration modifications in the watermark embedding phase and experiments with multiple speech codecs show very good robustness with low data-rate speech coders.
Abstract: We propose a speech watermarking algorithm based on the modification of the pitch (fundamental frequency) and duration of the quasi-periodic speech segments. The natural variability of these speech features allows watermarking modifications to be imperceptible to the human observer. On the other hand, the significance of these features makes the system robust to common signal processing operations and to low data-rate, source-excitation-based speech coders. This class of coders is particularly obstructive for conventional audio watermarking algorithms when applied to speech signals. A pitch synchronous overlap and add (PSOLA) algorithm is used for pitch and duration modifications in the watermark embedding phase. Experiments with multiple speech codecs show very good robustness with low data-rate (5-8 kbps) speech coders.

34 citations


Journal ArticleDOI
TL;DR: A novel two-dimensional color correction architecture is proposed that enables much greater control over the device color gamut with a modest increase in implementation cost and results show significant improvement in calibration accuracy and stability when compared to traditional 1-D calibration.
Abstract: Color device calibration is traditionally performed using one-dimensional (1-D) per-channel tone-response corrections (TRCs). While 1-D TRCs are attractive in view of their low implementation complexity and efficient real-time processing of color images, their use severely restricts the degree of control that can be exercised along various device axes. A typical example is that per-separation (or per-channel) TRCs in a printer can be used either to ensure gray balance along the C=M=Y axis or to provide a linear response in delta-E units along each of the individual (C, M, and Y) axes, but not both. This paper proposes a novel two-dimensional color correction architecture that enables much greater control over the device color gamut with a modest increase in implementation cost. Results show significant improvement in calibration accuracy and stability when compared to traditional 1-D calibration. Superior cost-quality tradeoffs (over 1-D methods) are also achieved for emulation of one color device on another.
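A 1-D per-channel TRC is essentially a lookup table that inverts the measured device response. The sketch below builds such a LUT by nearest-match inversion; it is a simplified stand-in for the traditional 1-D calibration the paper starts from, and deliberately does not show the paper's 2-D architecture:

```python
def invert_response(measured, target, levels=256):
    # measured[i]: device response (e.g., luminance) at input level i, monotone.
    # Returns a TRC lookup table lut such that measured[lut[i]] ~= target[i].
    return [min(range(levels), key=lambda i: abs(measured[i] - t))
            for t in target]

# Example: linearize a hypothetical gamma-2.0 channel response.
measured = [i * i / 255 for i in range(256)]        # response in 0..255 units
linear_trc = invert_response(measured, list(range(256)))
```

Applying `linear_trc` before the device makes the end-to-end response approximately linear in the measured units, which is exactly the per-axis control (and its C=M=Y limitation) discussed above.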

28 citations


Proceedings ArticleDOI
18 Mar 2005
TL;DR: The principle of diminishing marginal distortions (DMD) is illustrated in the audio domain using a morphological distortion metric and is used to derive a steganalysis tool that detects the presence of hidden messages in uncompressed audio files.
Abstract: Steganographic methods attempt to insert data in multimedia signals in an undetectable fashion. However, these methods often disrupt the underlying signal characteristics, thereby allowing detection under careful steganalysis. Under repeated embedding, disruption of the signal characteristics is the highest for the first embedding and decreases subsequently. That is, the marginal distortions due to repeated embeddings decrease monotonically. We name this general principle as the principle of diminishing marginal distortions (DMD) and illustrate its validity in the audio domain using a morphological distortion metric. The principle of DMD is used to derive a steganalysis tool that detects the presence of hidden messages in uncompressed audio files. Detailed analysis and experimental results are provided for the detection of spread spectrum watermarking and stochastic modulation steganography.
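The diminishing-marginal-distortion effect can be reproduced with a toy experiment: additively embedding a keyed ±1 chip sequence into an idealized, perfectly smooth host and measuring disruption with a simple roughness metric. The metric, signal, and names below are our own illustrative stand-ins for the paper's morphological distortion metric and audio signals:

```python
import random

def roughness(x):
    # Mean absolute first difference: a crude "how disrupted is the signal"
    # statistic standing in for the paper's morphological distortion metric.
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1)) / (len(x) - 1)

def ss_embed(x, key, alpha=1.0):
    # Additive spread-spectrum-style embedding with a keyed +/-1 chip sequence.
    rng = random.Random(key)
    return [s + alpha * rng.choice((-1, 1)) for s in x]

host = [0.0] * 10000                    # idealized perfectly smooth host
m1 = ss_embed(host, key=1)              # first embedding
m2 = ss_embed(m1, key=2)                # repeated embedding
d1 = roughness(m1) - roughness(host)    # disruption from the first embedding
d2 = roughness(m2) - roughness(m1)      # marginal disruption from the second
```

Here `d1` is roughly twice `d2`: the first embedding disrupts the smooth host the most, and the marginal disruption of a repeated embedding is smaller, which is the monotonic decrease the DMD detector exploits.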

22 citations


Patent
02 May 2005
TL;DR: In this article, a method for deriving gamma for a display monitor that does not involve color matching tasks is presented, which includes displaying a test pattern to a user on the display monitor, including at least one of a pattern of alternating light and dark regions displayed to the user at different gamma correction levels.
Abstract: A method is presented for deriving gamma for a display monitor that does not involve color matching tasks. The method includes displaying a test pattern to a user on the display monitor. The test pattern includes at least one of a pattern of alternating light and dark regions displayed to the user at different gamma correction levels, or a grayscale character string displayed to the user at different digital gray levels against a background of two known luminance levels. Input is received from the user as to at least one of a gamma correction level that results in the pattern of alternating light and dark regions having light and dark regions of perceived equal size, or a digital gray level for the grayscale character string that results in maximum legibility of the text string against the two known background luminance levels. Gamma is derived for the display monitor based upon the user input.
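The classical relation behind such test-pattern methods: a dithered half-black/half-white region has average luminance 0.5, so the digital level d that visually matches it satisfies (d/255)^gamma = 0.5. A sketch of the resulting estimate (the standard check-pattern derivation with a hypothetical function name, not necessarily the patent's exact procedure):

```python
import math

def gamma_from_match(matched_level, max_level=255):
    # If the user-selected digital gray level d visually matches a 50%
    # black/white dither pattern, then (d / max_level) ** gamma = 0.5,
    # so gamma = log(0.5) / log(d / max_level).
    return math.log(0.5) / math.log(matched_level / max_level)
```

A match at level 186 implies gamma of about 2.2 (a typical CRT), while a match near level 128 implies an already linear display with gamma of about 1.0.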

19 citations


Proceedings ArticleDOI
17 Jan 2005
TL;DR: Examination of the variation in average color for two-color halftoned images as a function of color-to-color misregistration distance shows that dot-on-dot/dot-off-dot color shifts were very high, while rotated-dot screens exhibited very little color shift under the present idealized conditions.
Abstract: Color-to-color misregistration refers to misregistration between color separations in a printed or displayed image. Such misregistration in printed halftoned images can result in several image defects, a primary one being shifts in average color. The present paper examines the variation in average color for two-color halftoned images as a function of color-to-color misregistration distance. Dot-on-dot/dot-off-dot and rotated-dot screen configurations are examined via simulation and supported by print measurements. The colors and color shifts were calculated using a spectral Neugebauer model for the underlying simulations. As expected, dot-on-dot/dot-off-dot color shifts were very high, while rotated-dot screens exhibited very little color shift under the present idealized conditions. The simulations also demonstrate that optical dot gain significantly reduces the color shifts seen in practice.
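For two inks under the Demichel (random-overlap) assumption, the spectral Neugebauer model predicts the average spectrum as an area-weighted mix of the four Neugebauer primaries. A minimal sketch (our own simplified formulation, without the optical dot-gain correction the paper discusses); note that dot-on-dot/dot-off-dot screens violate exactly this overlap assumption, which is why their average color shifts with misregistration:

```python
def neugebauer_two_ink(a1, a2, R_paper, R_1, R_2, R_12):
    # Spectral Neugebauer model for two inks with fractional area coverages
    # a1, a2 and Demichel weights for paper, ink 1 only, ink 2 only, overprint.
    w_p  = (1 - a1) * (1 - a2)
    w_1  = a1 * (1 - a2)
    w_2  = (1 - a1) * a2
    w_12 = a1 * a2
    return [w_p * p + w_1 * r1 + w_2 * r2 + w_12 * r12
            for p, r1, r2, r12 in zip(R_paper, R_1, R_2, R_12)]

# Hypothetical two-band reflectances for paper, each ink, and the overprint.
R_paper, R_1, R_2, R_12 = [0.9, 0.9], [0.2, 0.8], [0.8, 0.2], [0.1, 0.1]
mix = neugebauer_two_ink(0.5, 0.5, R_paper, R_1, R_2, R_12)
```

With misregistration, the overlap area `w_12` deviates from `a1 * a2`, changing the predicted average spectrum even though the per-separation coverages are unchanged.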

14 citations


Patent
Gaurav Sharma1, Stuart A. Schweid1
21 Apr 2005
TL;DR: In this article, the authors provide a message, generated based on a message authentication code (MAC), embedded in a look-up table associated with an image, which may be used to authenticate the image.
Abstract: System and methods provide a message, generated based on a message authentication code (MAC), embedded in a look-up table associated with an image. The embedding of the message does not affect the image. The message may be used to authenticate the image.
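The core primitive here is a standard message authentication code computed over the unmodified image data and carried alongside it. A generic HMAC sketch (with the tag held in a plain variable rather than the patent's look-up-table embedding; key and data are hypothetical):

```python
import hmac
import hashlib

def make_mac(image_bytes, key):
    # MAC over the image data itself; since the tag travels in associated
    # metadata (per the patent, a look-up table), the pixels are untouched.
    return hmac.new(key, image_bytes, hashlib.sha256).digest()

def verify(image_bytes, key, tag):
    # Constant-time comparison to authenticate the image.
    return hmac.compare_digest(make_mac(image_bytes, key), tag)
```

Any alteration of the image data changes the MAC, so verification fails, while the image itself is never modified by the authentication mechanism.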

12 citations


Journal ArticleDOI
TL;DR: The work presents an overview of a color imaging system and its elements and then highlights techniques in the literature that attempt to account for system interactions for improved quality or performance.
Abstract: The work presents an overview of a color imaging system and its elements and then highlights techniques in the literature that attempt to account for system interactions for improved quality or performance. After that, presented in greater detail, are two specific examples of approaches that take into account interactions between elements that are normally treated independently. Finally, concluding remarks are presented.

Patent
Robert P. Loce1, Yeqing Zhang1, Gaurav Sharma1, Steven J. Harrington1, Peter A. Crean1 
20 Dec 2005
TL;DR: In this article, a system for enabling depth perception of image content in a rendered composite image, wherein illuminant/colorant depth discrimination encoding provides encoding of first and second source images in a composite image.
Abstract: A system for enabling depth perception of image content in a rendered composite image, wherein illuminant/colorant depth discrimination encoding provides encoding of first and second source images in a composite image, for the purposes of subsequent illuminant/colorant depth discrimination decoding. Composite image rendering allows for rendering the composite image in a physical form. Illuminant/colorant depth discrimination decoding allows recovery of the first and second source images, thus offering to an observer the perception of spatial disparity between at least one of the recovered source images and some or all of the remaining image content perceived in the rendered composite image.

Journal ArticleDOI
TL;DR: Recurrent pulmonary emboli resulted in chronic pulmonary hypertension and eventual death in a patient with chronic tetraplegia and was described in an unusual case of progressive pulmonary hypertension in a chronically paralyzed spinal cord injury patient.
Abstract: Case report. To describe an unusual case of progressive pulmonary hypertension due to recurrent pulmonary embolism in a chronically paralyzed spinal cord injury patient. Veterans Administration Hospital, West Roxbury, MA, USA. A 57-year-old man, tetraplegic, sensory incomplete and motor complete for 30 years due to a diving accident, complained of lightheadedness and shortness of breath intermittently for 7 years. Examination during the latest episode revealed anxiety, confusion, respirations 28 per min, blood pressure 80/60 mmHg, and arterial pH 7.41, PCO2 28 mmHg, PO2 95 mmHg on 2 l of oxygen. A chest film 2 weeks earlier had revealed a right-sided cutoff of pulmonary vasculature; the current film showed right-sided pleural effusion. Review of EKGs showed a trend of increasing right axis deviation with recovery and recurrences during the previous 9 years and a current incomplete right bundle branch block with clockwise rotation and inverted T waves in V1–4. Computerized tomography with contrast material revealed small pulmonary emboli, but only in retrospect. The patient died shortly after scanning. The pulmonary arteries were free of thromboemboli on gross examination but medium and small-sized arteries were constricted or obliterated with thrombotic material microscopically. The estimated ages of the thromboemboli ranged from days to years. The right ventricle was hypertrophied; the coronary arteries were patent. Recurrent pulmonary emboli resulted in chronic pulmonary hypertension and eventual death in a patient with chronic tetraplegia.

Proceedings ArticleDOI
18 Mar 2005
TL;DR: Experimental results show that the proposed method can significantly improve accuracy and robustness over a global 2D parametric registration and can also outperform the local registration algorithm based on the Lucas-Kanade optical flow technique.
Abstract: Image registration is a fundamental task in both image processing and computer vision. We present a novel method for local image registration based on adaptive filtering techniques. We utilize an adaptive filter to estimate and track correspondences among multiple images containing overlapping views of common scene regions. Image pixels are traversed in an order established by space-filling curves, to preserve the contiguity and hence track locally varying registration changes. The algorithm differs from pre-existing work on image registration in that it requires only local information and relatively low computational effort. These characteristics render the method suitable for deployment in imaging sensor networks, toward which the current work is directed. We evaluate the performance of the proposed algorithm using images captured with a digital camera in various real-world scenarios. Experimental results show that the proposed method can significantly improve accuracy and robustness over a global 2D parametric registration and can also outperform the local registration algorithm based on the Lucas-Kanade optical flow technique (Lucas, B. and Kanade, T., 1981).
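The space-filling-curve traversal is what preserves spatial contiguity for the adaptive filter. The standard iterative Hilbert-curve index-to-coordinate conversion (the well-known d2xy algorithm; the paper does not specify its exact implementation) yields a visit order in which successive pixels are always grid neighbors:

```python
def hilbert_order(n):
    # Visit order of an n x n grid (n a power of two) along a Hilbert curve.
    # Successive entries are always adjacent grid cells, so a filter adapted
    # at one pixel remains a good predictor at the next.
    def d2xy(n, d):
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:              # rotate/flip the quadrant as needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y
    return [d2xy(n, d) for d in range(n * n)]
```

Every cell is visited exactly once, and each step moves one grid unit, in contrast to raster order, which jumps across the whole image at the end of each row.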

Proceedings ArticleDOI
17 Jan 2005
TL;DR: An ink interaction model is built for the light and dark inks, then a composite primary is constructed that smoothly transitions from the light ink to dark ink, preventing the blended ink from over inking, while ensuring a smooth transition in lightness, chroma, and hue.
Abstract: The use of four process inks (CMYK) is common practice in the graphic arts, and provides the foundation for many output device technologies. In commercial applications, the number of inks is sometimes extended beyond the process inks, depending on the customer's requirements and cost constraints. In inkjet printing, extra inks have been used to extend the color gamut and/or improve the image quality in the highlight regions by using "light" inks. The addition of "light" inks is sometimes treated as an extension of the existing Cyan or Magenta inks, with the Cyan tone scale smoothly transitioning from the light to the dark ink as the required density increases, or is sometimes treated independently. If one treats the light ink as an extension of the dark ink, a simple blend can work well where the light and dark inks fall at the same hue angle, but will exhibit problems if the light and dark ink hues deviate significantly. The method documented in this paper addresses the problem where the hues of the light and dark inks are significantly different. An ink interaction model is built for the light and dark inks; then a composite primary is constructed that smoothly transitions from the light ink to the dark ink, preventing the blended ink from over-inking while ensuring a smooth transition in lightness, chroma, and hue. The method was developed and tested on an XES (Xerox Engineering Systems) ColorGraphx X2 printer on multiple substrates, and found to provide superior results to the alternative linear blending techniques.
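The baseline the paper improves upon is a simple light-to-dark blend along the tone scale with an over-inking guard. A linear-blend sketch of that baseline (our own simplified parameterization with a hypothetical crossover point; the paper replaces this with an ink-interaction model precisely because linear blending fails when the two ink hues differ):

```python
def light_dark_blend(t, ink_limit=1.0):
    # t in [0, 1]: requested amount of the composite (e.g., "cyan") primary.
    # Below the crossover only the light ink prints; above it, the dark ink
    # ramps in while the light ink ramps out.
    crossover = 0.5
    if t <= crossover:
        light = t / crossover          # light ink reaches full coverage at crossover
        dark = 0.0
    else:
        u = (t - crossover) / (1 - crossover)
        dark = u
        light = 1.0 - u                # fade the light ink out as the dark comes in
    total = light + dark
    if total > ink_limit:              # simple over-inking guard
        scale = ink_limit / total
        light, dark = light * scale, dark * scale
    return light, dark
```

Such a blend keeps total coverage under the ink limit and is smooth in density, but it only preserves hue when the light and dark inks share a hue angle, which is the failure mode the paper's composite primary corrects.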

Proceedings ArticleDOI
14 Nov 2005
TL;DR: A set theoretic framework for watermarking is introduced and its effectiveness is illustrated by designing a hierarchical semi-fragile watermark that is tolerant to compression and allows tamper localization.
Abstract: We introduce a set theoretic framework for watermarking and illustrate its effectiveness by designing a hierarchical semi-fragile watermark that is tolerant to compression and allows tamper localization. Using a quad-tree representation, a spatial resolution hierarchy is established on the image and a watermark is embedded corresponding to each node of the hierarchy. The watermarked image is determined so as to jointly satisfy the multiple constraints of watermark detectability, imperceptibility, and robustness to compression using the method of projections onto convex sets. The spatial hierarchy of watermarks provides a graceful trade-off between robustness and localization under JPEG compression: mild JPEG compression preserves watermarks at all levels of the hierarchy allowing fine localization of malicious changes while aggressive JPEG compression preserves watermarks at coarser levels of the hierarchy still assuring overall image integrity but giving up the capability for localization. Experimental results are presented to illustrate the effectiveness of the method.
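The method of projections onto convex sets alternately projects onto each constraint set; when the intersection is nonempty, the iterates converge to a point satisfying all constraints simultaneously. A minimal sketch with two stand-in sets, a box (an imperceptibility-style bound) and a half-space (a correlation-detectability-style bound); the paper's actual sets additionally encode robustness to compression:

```python
def project_box(x, lo, hi):
    # Projection onto the box {x : lo <= x_i <= hi}: clip each coordinate.
    return [min(max(v, lo), hi) for v in x]

def project_halfspace(x, w, tau):
    # Projection onto {x : <x, w> >= tau}: move along w just enough.
    dot = sum(a * b for a, b in zip(x, w))
    if dot >= tau:
        return x
    lam = (tau - dot) / sum(a * a for a in w)
    return [a + lam * b for a, b in zip(x, w)]

def pocs(x, w, tau, lo, hi, iters=50):
    # Alternate the projections; for a nonempty intersection the iterates
    # converge to a point satisfying every constraint at once.
    for _ in range(iters):
        x = project_halfspace(x, w, tau)
        x = project_box(x, lo, hi)
    return x
```

Starting from the all-zeros point with chip sequence `w = [1, 1, 1, 1]`, detection threshold 2, and per-sample bound ±1, the iterates settle on a point meeting both constraints.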

Proceedings ArticleDOI
TL;DR: The proposed adaptive filtering framework has been shown by experimental results to give superior performance compared to global 2-D parametric registration and Lucas-Kanade optical flow technique when the image motion consists of mostly translational motion.
Abstract: We present a novel local image registration method based on adaptive filtering techniques. The proposed method utilizes an adaptive filter to track smooth, locally varying changes in the motion field between the images. Image pixels are traversed following a scanning order established by Hilbert curves to preserve contiguity in the 2-D image plane. We have performed experiments using both simulated images and real images captured by a digital camera. Experimental results show that the proposed adaptive filtering framework gives superior performance compared to global 2-D parametric registration and the Lucas-Kanade optical flow technique when the image motion is mostly translational. The simulation experiments show that the proposed image registration technique can also handle small amounts of rotation, scale, and perspectivity in the motion field.

Journal ArticleDOI
TL;DR: Periodic changes in the depth of breathing were accompanied by periodic changes in amplitude of forehead cutaneous pulse, blood pressure, or apical cardiac impulse in all patients, and cyclical pulsus alternans occurred in two patients.
Abstract: Case reports. To describe Cheyne–Stokes respiration (CSR) and associated circulatory abnormalities in three patients with spinal cord lesions. Veterans Administration Hospital, USA. One paraplegic patient with coronary artery disease in congestive heart failure, one tetraplegic patient with alcoholic cardiomyopathy and postural hypotension, and one tetraplegic complete patient with cardiomegaly, severe aortic atherosclerosis, and postural hypotension. Breathing activity was measured with a nasal thermistor or abdominal stretch transducer. Cardiac activity was estimated with a photoelectric sensor for cutaneous blood flow placed on the forehead or a piezoelectric transducer for pressure positioned over an artery or the cardiac apex. Tracings were drawn on a strip chart recorder. The subjects were at rest in semireclining positions. Survey times were 17–21 min, and cycling periods were 41–72 s. Periodic changes in the depth of breathing were accompanied by periodic changes in amplitude of forehead cutaneous pulse, blood pressure, or apical cardiac impulse in all patients. Peak circulation occurred at or following peak respiration. In addition, cyclical pulsus alternans occurred in two patients. Three spinal cord injury patients sustained CSR and circulatory periodicity associated with cardiac disease and postural hypotension.

Proceedings ArticleDOI
17 Jan 2005
TL;DR: An algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment that achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission.
Abstract: We describe and implement an algorithmic framework for memory efficient, ‘on-the-fly’ halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
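The framework builds on error diffusion. For reference, classic single-resolution binary Floyd-Steinberg error diffusion (the conventional baseline that requires the contone image in memory, not the paper's multi-resolution, multi-level frame-buffer modification) looks like:

```python
def floyd_steinberg(img, w, h, levels=2):
    # img: row-major grayscale values in [0, 1]; returns the halftoned copy.
    out = list(img)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            old = out[i]
            new = round(old * (levels - 1)) / (levels - 1)  # quantize this pixel
            out[i] = new
            err = old - new
            # Diffuse the quantization error to not-yet-processed neighbors.
            for dx, dy, frac in ((1, 0, 7/16), (-1, 1, 3/16),
                                 (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    out[ny * w + nx] += err * frac
    return out
```

Halftoning a uniform mid-gray patch yields a binary pattern whose average stays near the input gray level, since the quantization error is carried forward rather than discarded.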




Proceedings ArticleDOI
TL;DR: In this article, the combination of physical insight and mathematical signal processing tools has been shown to offer unique advantages in solving problems in imaging systems, and the authors illustrate and support this idea using examples from their research on imaging.
Abstract: Imaging devices operate at the physical interfaces corresponding to image capture and reproduction. The combination of physical insight and mathematical signal processing tools, therefore, offers unique advantages in solving problems in imaging systems. In this paper, we illustrate and support this idea using examples from our research on imaging, where the combination of physical insight, mathematical tools, and engineering ingenuity leads to elegant and effective solutions.

01 Jan 2005
TL;DR: A hierarchical authentication watermark is proposed for securely verifying the source and integrity of digital images and for determining altered image regions when the integrity verification fails, and a novel lossless data embedding method enables zero-distortion reconstruction of unmarked images upon extraction of the embedded payload.
Abstract: This thesis addresses authentication watermarking and compression of digital images under zero-distortion and maximum-per-sample-distortion constraints. In this context, a hierarchical authentication watermark is proposed for securely verifying the source and integrity of digital images and for determining altered image regions when the integrity verification fails. The proposed approach eliminates the security problems associated with previous independent block-based authentication watermarks, while sustaining their superior tamper localization properties. Insertion of the authentication watermark induces a small, bounded, yet permanent loss of image fidelity, which is often undesirable. A novel lossless data embedding method remedies this problem and enables zero-distortion reconstruction of unmarked images upon extraction of the embedded payload. The reconstruction is achieved by compressing portions of the image data that are susceptible to watermarking distortion, such as the least significant image level, and transmitting the compressed descriptions as a part of the embedded payload. An algorithm that utilizes the remainder of the image data as side-information is proposed for efficient compression of the least significant image levels. This algorithm exploits both spatial and inter-level correlations within the image and achieves excellent compression (and, consequently, lossless data embedding) performance. The algorithm also serves as the basis for a new level-embedded image compression method, which generates a scalable bit-stream, truncation of which corresponds to bit-plane truncation in the pixel domain. Security and tamper localization capability of the hierarchical scheme are combined with the zero-distortion reconstruction property of the lossless data embedding method in a flexible framework to obtain a lossless authentication watermark. As opposed to earlier lossless authentication methods that required recovery of the original image prior to validation, the new framework allows validation of the watermarked images before the reconstruction step. For verified images, integrity of the reconstructed image is ensured by the uniqueness of the recovery procedure. As a result, the framework offers computational efficiency, improved tamper-localization accuracy, and implementation flexibility.

Proceedings ArticleDOI
08 Sep 2005
TL;DR: An informed watermark embedding method in the fractional Fourier domain is proposed, in which insertion of multiple bits without a block-based scheme provides improvement against synchronization attacks.
Abstract: We propose an informed watermark embedding method in the fractional Fourier domain. Detectability and imperceptibility constraints on the watermark sequence, as well as real-valuedness in the spatial domain, are imposed on the resulting image using a set theoretic framework. Insertion of multiple bits without using a block-based scheme is also a novel approach and provides improvement against synchronization attacks. The watermarked image is determined using the method of projections onto convex sets (POCS) to simultaneously satisfy the multiple constraints. A performance comparison between blind and informed embedding is illustrated, and experimental results are presented to show the effectiveness of the informed method.

Journal ArticleDOI
TL;DR: The components of FPLC-AIII (46 kDa; A represents antigenic protein) and IV (28 kDa) were most promising as the antibodies against these fractions inhibited sperm binding to zona pellucida even at a dilution of 1 : 1000 as tested by the sperm-zona binding assays.
Abstract: Goat sperm surface proteins obtained from purified plasma membrane (PPM) vesicles (purity of membrane checked by marker enzymes and transmission electron microscopy) were size-fractionated on a fast protein liquid chromatography (FPLC) gel filtration column. All seven surface proteins (129, 100, 46, 28, 27, 18 and 10 kDa) obtained were further fractionated and purified on high-efficiency gel filtration (GFC-HPLC) as well as ion exchange (DEAE-HPLC) columns. Antibodies were generated against the PPM and the protein fractions. The resolved and purified surface antigens were tested by Dot Blot Immunoassay and homologous in vitro sperm–zona binding assays. It was revealed that the binding of goat spermatozoa to homologous zona pellucida was inhibited by antisera raised against the five lower molecular weight surface antigens. Further, the components of FPLC-AIII (46 kDa; A represents antigenic protein) and IV (28 kDa) were most promising, as the antibodies against these fractions inhibited sperm binding to zona pellucida even at a dilution of 1:1000, as tested by the sperm–zona binding assays.


Journal Article
TL;DR: The subject of "criminal negligence by doctors" is always a complex matter for the medical fraternity and a great challenge for the judiciary, as discussed here in light of a sudden spurt of "negligence" cases (about 20,000 a year as estimated by the IMA), the decision of a two-Judge Bench of the SC in Dr. Suresh Gupta vs. Govt. of NCT of Delhi on 4th August 2004, and another decision by a three-Judge Bench of the Apex Court exactly one year later, on August 5, 2005, in an appeal filed by Dr. Jacob Mathew.
Abstract: The subject of "criminal negligence by doctors" is always a complex matter for the medical fraternity and a great challenge for the judiciary. In recent years, a sudden spurt of cases of "negligence" (about 20,000 a year as estimated by the IMA), the decision of a two-Judge Bench of the SC in Dr. Suresh Gupta vs. Govt. of NCT of Delhi on 4th August 2004, and another decision by a three-Judge Bench of the Apex Court exactly one year later, on August 5, 2005, in an appeal filed by Dr. Jacob Mathew of CMC, Ludhiana, Punjab, raise a fresh debate and give the medical fraternity an opportunity for introspection about implementation of medical ethics, updating of knowledge, and enhancement of skill, but not immunity against the filing of 'criminal negligence suits' against them.


Journal ArticleDOI
TL;DR: In this paper, the authors investigated whether vibrating magnetostrictive nanowires, formed in nanowire arrays, can prevent protein and cell adhesion, and found that the vibrating nanowires showed a further reduction in protein adhesion compared to static wires.
Abstract: The research described here investigates the hypothesis that nanoarchitecture contained in a nanowire array is capable of attenuating the adverse host response, biofouling, generated when medical devices, such as sensors, are implanted in the body. This adverse host response generates an avascular fibrous mass transfer barrier between the device and the analyte of interest, disabling the sensor. Numerous studies have indicated that surface chemistry and architecture modulate the host response. These findings led us to hypothesize that nanostructured surfaces will significantly inhibit the formation of an avascular fibrous capsule. We are investigating whether vibrating magnetostrictive nanowires, formed in nanowire arrays, can prevent protein and cell adhesion. Magnetostrictive nanowires are fabricated by electroplating a ferromagnetic metal alloy into the pores of a nanoporous alumina template. The ferromagnetic nanowires are made to vibrate by altering the magnetic field surrounding the wires. Enzyme-linked immunosorbent assay (ELISA) and other protein assays were used to study protein adhesion on the nanowire arrays. These results show reduced protein adhesion per unit surface area on static nanowires. The vibrating nanowires show a further reduction in protein adhesion compared to static wires. Studies were also performed to investigate the effects nanoarchitecture has on cell adhesion; these studies were performed with both static and vibrating nanowires. Preliminary protein adhesion studies have shown that nanowire arrays modulate protein adhesion in vitro.