
Showing papers by "Sos S. Agaian published in 2009"


Proceedings ArticleDOI
TL;DR: It is shown that the wavelet-based approach can usually detect the targets with fewer false-alarm regions than is possible with standard approaches, and the stability of the optimal wavelets and the variation in detection performance are investigated across perspective changes, image frame samples, and image scene content types.
Abstract: Detecting dim targets in infrared imagery remains a challenging task. Several techniques exist for detecting bright, high contrast targets such as CFAR detectors, edge detection, and spatial thresholding. However, these approaches often fail for detection of targets with low contrast relative to background clutter. In this paper we exploit the transient capture capability and directional filtering aspect of wavelets to develop a wavelet based image enhancement method. We develop an image representation, using wavelet filtered imagery, which facilitates dim target detection. We further process the wavelet-enhanced imagery using the Michelson visibility operator to perform nonlinear contrast enhancement prior to target detection. We discuss the design of optimal wavelets for use in the image representation. We investigate the effect of wavelet choice on target detection performance, and design wavelets to optimize measures of visual information on the enhanced imagery. We present numerical results demonstrating the effectiveness of the approach for detection of dim targets in real infrared imagery. We compare target detection performance to performance obtained using standard techniques such as edge detection. We also compare performance to target detection performed on imagery enhanced by optimizing visual information measures in the spatial domain. We investigate the stability of the optimal wavelets and detection performance variation, across perspective changes, image frame sample (for frames extracted from infrared video sequences), and image scene content types. We show that the wavelet-based approach can usually detect the targets with fewer false-alarm regions than possible with standard approaches.
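
For reference, the Michelson visibility used in the nonlinear contrast-enhancement step is the classical ratio (Lmax − Lmin)/(Lmax + Lmin). A minimal sliding-window sketch is shown below; the window size and the small division guard are illustrative choices, not values taken from the paper.

```python
# Sketch of a local Michelson visibility (contrast) map on a filtered image.
# Window size and eps are illustrative assumptions.
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def michelson_visibility(img, window=7, eps=1e-6):
    """Per-pixel Michelson contrast (Lmax - Lmin) / (Lmax + Lmin) over a local window."""
    img = img.astype(np.float64)
    lmax = maximum_filter(img, size=window)
    lmin = minimum_filter(img, size=window)
    return (lmax - lmin) / (lmax + lmin + eps)
```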

42 citations


Proceedings ArticleDOI
13 Nov 2009
TL;DR: This paper introduces a new lossless approach, called EdgeCrypt, to encrypt medical images using the information contained within an edge map, which can fully protect the selected objects/regions within medical images or the entire medical images.
Abstract: Image encryption is an effective approach for providing security and privacy protection for medical images. This paper introduces a new lossless approach, called EdgeCrypt, to encrypt medical images using the information contained within an edge map. The algorithm can fully protect the selected objects/regions within medical images or the entire medical images. It can also encrypt other types of images such as grayscale images or color images. The algorithm can be used for privacy protection in the real-time medical applications such as wireless medical networking and mobile medical services.
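
The abstract does not spell out how the edge map drives the cipher, so the sketch below only illustrates one common lossless pattern, XOR-ing the image with a keystream derived from the edge map; the function name and the key-derivation step are assumptions, not the EdgeCrypt algorithm itself.

```python
# Illustrative (assumed) construction: fold the edge map into a keystream and
# XOR it with the image. XOR is its own inverse, so the same call decrypts.
import numpy as np

def xor_encrypt_with_edge_map(image, edge_map, seed=0):
    rng = np.random.default_rng(seed + int(edge_map.sum()))   # edge map contributes to the key
    keystream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ keystream
```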

36 citations


Proceedings ArticleDOI
11 Oct 2009
TL;DR: This research demonstrates the feasibility of an application-specific integrated circuit (ASIC) that performs convolution on an acquired image in real time, consumes less power, and has an input-to-output delay of 20 ns using a 32 nm process library.
Abstract: This paper presents a direct method of reducing convolution processing time using hardware computing and implementations of discrete linear convolution of two finite-length sequences (N×N). This implementation method is realized by simplifying the convolution building blocks. The purpose of this research is to prove the feasibility of an application-specific integrated circuit (ASIC) that performs a convolution on an acquired image in real time. The proposed implementation uses a modified hierarchical design approach, which efficiently and accurately speeds up computation and significantly reduces power, hardware resources, and area. The efficiency of the proposed convolution circuit is tested by embedding it in a top-level FPGA. Simulation and comparison to different design approaches show that the circuit consumes only 5 mW, saves almost 35% of the area, and is four times faster than the implementation in [5]. In addition, the presented circuit consumes less power and has an input-to-output delay of 20 ns using a 32 nm process library. It also provides the necessary modularity, expandability, and regularity to form different convolutions for any number of bits.
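
As a purely functional software reference for the operation the ASIC implements, the sketch below computes the full discrete linear convolution of two finite 2-D sequences (output size N+M−1 in each dimension); it matches scipy.signal.convolve2d with mode='full' and says nothing about the hardware architecture.

```python
# Full 2-D discrete linear convolution of two finite-length sequences.
import numpy as np

def conv2d_full(x, h):
    rows = x.shape[0] + h.shape[0] - 1
    cols = x.shape[1] + h.shape[1] - 1
    y = np.zeros((rows, cols))
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            # scatter-accumulate: y[i+k, j+l] += x[i, j] * h[k, l]
            y[i:i + h.shape[0], j:j + h.shape[1]] += x[i, j] * h
    return y
```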

31 citations


Proceedings ArticleDOI
13 Nov 2009
TL;DR: A new, powerful nonlinear filter called the alpha-weighted quadratic filter is introduced for mammogram enhancement; it can also be used for automatic segmentation, and excellent enhancement results can be obtained with no a priori knowledge of the mammogram contents.
Abstract: Mammograms are widely used to detect breast cancer in women. The quality of the image may suffer from poor resolution or low contrast due to the limitations of the X-ray hardware systems. Image enhancement is a powerful tool to improve the visual quality of mammograms. This paper introduces a new, powerful nonlinear filter called the alpha-weighted quadratic filter for mammogram enhancement. The user has the flexibility to design the filter by selecting all of the parameters manually or using an existing quantitative measure to select the optimal enhancement parameters. Computer simulations show that excellent enhancement results can be obtained with no a priori knowledge of the mammogram contents. The filter can also be used for automatic segmentation.

26 citations


Proceedings ArticleDOI
04 Dec 2009
TL;DR: Analysis and experimental results show that the proposed algorithms can fully encrypt all types of images, which makes them suitable for securing multimedia applications, and show they have the potential to be used to secure communications in a variety of wired/wireless scenarios and real-time applications such as mobile phone services.
Abstract: This paper introduces a new concept for image encryption using a binary “key-image”. The key-image is either a bit plane or an edge map generated from another image, which has the same size as the original image to be encrypted. In addition, we introduce two new lossless image encryption algorithms using this key-image technique. The performance of these algorithms is discussed against common attacks such as the brute force attack, ciphertext attacks, and plaintext attacks. The analysis and experimental results show that the proposed algorithms can fully encrypt all types of images. This makes them suitable for securing multimedia applications and shows they have the potential to be used to secure communications in a variety of wired/wireless scenarios and real-time applications such as mobile phone services.
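
A minimal sketch of the key-image idea, assuming the binary key-image is simply XOR-ed into every bit plane of an 8-bit image; the two lossless algorithms in the paper may combine the key-image differently, so treat this as an illustration of why such a scheme is lossless and self-inverting.

```python
# Assumed illustration: replicate a binary key-image across all 8 bit planes
# and XOR it with the image. Applying the same function again decrypts.
import numpy as np

def key_image_encrypt(image, key_image):
    key = (key_image > 0).astype(np.uint8)          # bit plane or edge map as a binary key
    mask = np.zeros_like(image, dtype=np.uint8)
    for b in range(8):
        mask |= (key << b).astype(np.uint8)
    return image ^ mask
```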

23 citations


Proceedings ArticleDOI
01 Nov 2009
TL;DR: A new effective image encryption algorithm using the Discrete Parametric Cosine Transform (DPCT) that can fully or partially encrypt different types of digital images with efficiency while preserving the quality of the images.
Abstract: This paper introduces a new effective image encryption algorithm using the Discrete Parametric Cosine Transform (DPCT). The new algorithm transforms images into the frequency domain using the DPCT with a set of parameters, and then converts images back into the spatial domain using the inverse DPCT with a different set of parameters to obtain the encrypted images. Its security keys are the combination of the parameters of the DPCT and inverse DPCT. The simulation results show that the algorithm can fully or partially encrypt different types of digital images with efficiency while preserving the quality of the images. The algorithm can be used to protect different types of multimedia data. It can be also used for simultaneous data encryption and compression by embedding it in a data compression process such as JPEG.

23 citations


Proceedings ArticleDOI
13 Nov 2009
TL;DR: A noise-resilient edge detection algorithm for brain MRI images that compensates for the shortcomings of the Canny algorithm and can effectively detect more edges in MRI brain images.
Abstract: In this paper we introduce a noise-resilient edge detection algorithm for brain MRI images. An improved edge detector based on the Canny edge detection algorithm is also proposed. Computer simulations show that the proposed algorithm is resilient to impulsive noise, which compensates for the shortcomings of the Canny algorithm, and can effectively detect more edges in MRI brain images. The concept of image fusion is also utilized for effective edge detection.
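
As one hedged illustration of making Canny more resilient to impulsive noise, the sketch below simply adds a median prefilter before the Canny stage using OpenCV; the paper's specific modifications and its image-fusion step are not reproduced, and the thresholds are placeholder values.

```python
# Assumed pipeline: median prefilter (suppresses salt-and-pepper noise) + Canny.
import cv2

def noise_resilient_canny(mri_slice, low=50, high=150, ksize=3):
    denoised = cv2.medianBlur(mri_slice, ksize)
    return cv2.Canny(denoised, low, high)
```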

20 citations


Proceedings ArticleDOI
TL;DR: A new concept of image encryption based on edge information is presented; it can encrypt all 2D and 3D images and be easily implemented on mobile devices.
Abstract: This paper presents a new concept of image encryption which is based on edge information. The basic idea is to separate the image into the edges and the image without edges, and encrypt them using any existing or new encryption algorithm. The user has the flexibility to encrypt the edges or the image without edges, or both of them. In this manner, different security requirements can be achieved. The encrypted images are difficult for unauthorized users to decode, providing a high level of security. We also introduce a new lossless encryption algorithm using 3D Cat Map. This algorithm can fully encrypt 2D images in a straightforward one-step process. It simultaneously changes image pixel locations and pixel data. Experimental examples demonstrate the performance of the presented algorithm in image encryption. It can also withstand chosen-plaintext attack. The presented encryption approach can encrypt all 2D and 3D images and easily be implemented in mobile devices.
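
The cipher above uses a 3D cat map that permutes pixel positions and changes pixel values in one pass. As a simpler, well-known stand-in, the sketch below applies the classical 2D Arnold cat map to an N×N image; it only permutes positions, but shows the modular shuffling idea the 3D version generalizes.

```python
# Classical 2D Arnold cat map: (x, y) -> ((x + y) mod N, (x + 2y) mod N).
# Stand-in only; the paper's 3D cat map also alters pixel values.
import numpy as np

def arnold_cat_map(image, iterations=1):
    n = image.shape[0]                       # assumes a square N x N image
    out = image.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out
```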

17 citations


Proceedings ArticleDOI
04 Dec 2009
TL;DR: An improved Canny edge detection algorithm and an edge-preserving filtering procedure for Asphalt Concrete applications; the presented algorithm is shown to not only eliminate noise effectively but also preserve weak, unclear edges.
Abstract: In this paper we introduce an improved Canny edge detection algorithm and an edge-preserving filtering procedure for Asphalt Concrete (AC) applications. Datasets of AC images were randomly selected to test this algorithm. Computer simulations show that the improved algorithm can compensate for the shortcomings of the Canny algorithm, detect edges in AC images effectively, and require less processing time. In particular, it has been shown that the presented algorithm can not only eliminate noise effectively but also preserve weak, unclear edges.

14 citations


Proceedings ArticleDOI
TL;DR: A new bit-plane decomposition method using the truncated Fibonacci p-code is introduced, and simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
Abstract: This paper introduces a new recursive sequence called the truncated P-Fibonacci sequence, its corresponding binary code called the truncated Fibonacci p-code, and a new bit-plane decomposition method using the truncated Fibonacci p-code. In addition, a new lossless image encryption algorithm is presented that can encrypt a selected object using this new decomposition method for privacy protection. The user has the flexibility (1) to define the object to be protected as an object in an image or in a specific part of the image, a selected region of an image, or an entire image, (2) to utilize any new or existing method for edge detection or segmentation to extract the selected object from an image or a specific part/region of the image, and (3) to select any new or existing method for the shuffling process. The algorithm can be used in many different areas such as wireless networking, mobile phone services, and applications in homeland security and medical imaging. Simulation results and analysis verify that the algorithm shows good performance in object/image encryption and can withstand plaintext attacks.
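
For background, the sketch below generates an (untruncated) Fibonacci p-sequence and greedily decomposes a pixel value into Fibonacci bit planes; the boundary convention and the greedy coder are assumptions about the standard construction, and the truncation introduced in the paper is not reproduced. With p = 0 the sequence degenerates to powers of two, i.e. ordinary binary bit planes.

```python
# Assumed convention: F_p(0) = 0, F_p(1..p+1) = 1, F_p(n) = F_p(n-1) + F_p(n-p-1).
def fibonacci_p_sequence(p, length):
    seq = [0] + [1] * (p + 1)
    while len(seq) < length:
        seq.append(seq[-1] + seq[-(p + 1)])
    return seq[:length]

def fib_p_decompose(value, p, length=16):
    """Greedy (Zeckendorf-style) decomposition of a pixel value, most significant term first."""
    seq = fibonacci_p_sequence(p, length)
    bits = []
    for f in reversed(seq[1:]):              # skip the leading 0 term
        if f <= value:
            bits.append(1)
            value -= f
        else:
            bits.append(0)
    return bits
```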

13 citations


Patent
07 Apr 2009
TL;DR: In this paper, a method for repairing a defect in a digital image to provide a restored image comprises determining a plurality of pixel locations to form a neighborhood relating to the defect and whether or not the neighborhood has a well-defined, dark border along its edge.
Abstract: Methods and apparatus for restoration of a digital image. In one embodiment, a method for repairing a defect in a digital image to provide a restored image comprises determining a plurality of pixel locations to form a neighborhood relating to the defect and whether or not the neighborhood has a well-defined, dark border along its edge. Should the neighborhood not have a dark border, one embodiment of the method entails processing the neighborhood to bring the neighborhood approximately to uniform darkness, processing the neighborhood to match surroundings in the digital image, copying an edge of a neighborhood in the digital image into the processed neighborhood, processing pixels of the edge to repair the copied edge pixels, and outputting the restored image for display to a user. Should the neighborhood have a dark, well-defined border, one embodiment of the method entails processing the neighborhood so as to locally enhance the neighborhood and match its surroundings in the digital image; processing the neighborhood's edge such that the edge also matches its surroundings in both the defect and the digital image; processing the neighborhood to invert its pixel values and then perform the last two steps once again; processing the neighborhood to increase its contrast and then perform the last three steps once again; processing the neighborhood to bring the neighborhood to a more uniform darkness; processing the uniform-darkness neighborhood to match surroundings in the digital image; and outputting the restored image for display to a user.

Patent
26 Mar 2009
TL;DR: In this article, the authors perform visual sub-band decomposition of an image using human visual system characteristics to generate a plurality of subband decomposed images, independently processing the plurality of images with at least one application, and fusing the independently processed subband images to reconstruct an output image.
Abstract: Methods and apparatus for image processing include performing visual sub-band decomposition of an image using human visual system characteristics to generate a plurality of sub-band decomposed images, independently processing the plurality of sub-band decomposed images with at least one application, and fusing the independently processed sub-band decomposed images to reconstruct an output image.

Proceedings ArticleDOI
04 Dec 2009
TL;DR: A general model for reconstruction-based measures is established in order to alleviate the shortcomings of the reconstruction-based measures, followed by the formulation of a new non-reference measure for objective edge map evaluation.
Abstract: Edge detection has been used extensively as a preprocessing step for many computer vision tasks. Due to its importance in image processing and the highly subjective nature of human evaluation and visual comparison of edge detectors, it is desirable to formulate objective edge map evaluation measures. One would like to use such a measure to make comparisons of results using the same edge detector with different parameters as well as to make comparisons of results using different edge detectors. Reconstruction-based measures have the clear advantage that they effectively incorporate original image data. In this paper, a general model for reconstruction-based measures is established in order to alleviate the shortcomings of the reconstruction-based measures, followed by the formulation of a new non-reference measure for objective edge map evaluation. Experimental results illustrate the effectiveness of the new measure both as a means of selecting optimal edge detector parameters and as a means of determining the relative performance of edge detectors for a given image.

Proceedings ArticleDOI
04 Dec 2009
TL;DR: The alpha-trimmed method estimates steganographic messages within images in the spatial domain, provides flexibility for classifying various steganography methods in the JPEG compression domain, and results in better separability between clean and steganographic classes.
Abstract: In information security, steganalysis has been an important topic since evidence first indicated that steganography has been used for covert communication. Among all digital files, numerous devices generate JPEG images due to their compression capability and compatibility. A large number of JPEG steganography methods are also provided online for free use. This has spawned significant research in the area of JPEG steganalysis. This paper introduces an image estimation technique utilizing the alpha-trimmed mean for distinguishing clean and steganography images. The hidden information is considered additive noise to the image. The alpha-trimmed method estimates steganographic messages within images in the spatial domain and provides flexibility for classifying various steganography methods in the JPEG compression domain. For three JPEG steganography methods along with three embedding message files applied to an image data set, the proposed method results in better separability between clean and steganographic classes. The results are based on comparisons between the presented method and two existing methods, in which classification accuracies are increased by as much as 32%.
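
The alpha-trimmed mean itself is standard: sort the samples in a local window, discard the smallest and largest alpha fraction, and average what remains; the residual between the observed image and this estimate is then treated as the possible stego signal. A minimal per-window sketch, with the window handling and classification steps omitted:

```python
# Alpha-trimmed mean of one window; trims the alpha fraction from each tail.
import numpy as np

def alpha_trimmed_mean(window, alpha=0.2):
    flat = np.sort(window.ravel())
    trim = int(alpha * flat.size)
    return flat[trim:flat.size - trim].mean()
```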

Proceedings ArticleDOI
TL;DR: Experimental results show that the new measure for objective edge map evaluation outperforms Pratt's FOM visually as it takes into account more features in its evaluation.
Abstract: Edge detection is an important preprocessing task which has been used extensively in image processing. As many applications heavily rely on edge detection, effective and objective edge detection evaluation is crucial. Objective edge map evaluation measures are an important means of assessing the performance of edge detectors under various circumstances and in determining the most suitable edge detector or edge detector parameters. Quantifiable criteria for objective edge map evaluation are established relative to a ground truth, and the weaknesses and limitations in Pratt's Figure of Merit (FOM), the objective reference-based edge map evaluation standard, are discussed. Based on the established criteria, a new reference-based measure for objective edge map evaluation is presented. Experimental results using synthetic images and their ground truths show that the new measure for objective edge map evaluation outperforms Pratt's FOM visually as it takes into account more features in its evaluation.
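
For context, Pratt's FOM mentioned above is FOM = (1/max(N_I, N_D)) Σ_i 1/(1 + α·d_i²), where N_I and N_D are the numbers of ideal and detected edge pixels, d_i is the distance from each detected edge pixel to the nearest ground-truth edge pixel, and α is conventionally 1/9. A straightforward implementation on binary edge masks is sketched below; the mask conventions are assumptions.

```python
# Pratt's Figure of Merit for a detected edge map against a ground-truth edge map.
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(detected, ground_truth, alpha=1.0 / 9.0):
    gt = ground_truth.astype(bool)
    det = detected.astype(bool)
    d = distance_transform_edt(~gt)          # distance of every pixel to the nearest true edge
    return np.sum(1.0 / (1.0 + alpha * d[det] ** 2)) / max(gt.sum(), det.sum())
```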

Proceedings ArticleDOI
01 Nov 2009
TL;DR: A new image bit-plane decomposition method based on the Generalized P-Gray Code (GPGC) which is a parametric sequence suitable for any base, n, and based on this decomposition, two image encryption algorithms using GPGC are introduced.
Abstract: Image encryption is an effective method to protect multimedia information for different security purposes. In this paper, we introduce a new image bit-plane decomposition method based on the Generalized P-Gray Code (GPGC), which is a parametric sequence suitable for any base, n. Based on this decomposition, we introduce two image encryption algorithms using GPGC. The two algorithms allow for either full or partial encryption of images based on the choice of security keys: base n and distance parameter p. Experimental results show that the presented algorithms are lossless encryption methods, and that the original images can be completely reconstructed when the correct security keys are used. It is also shown that the presented algorithms can withstand plaintext attacks.
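
The GPGC itself is parameterized by a base n and a distance parameter p, and its exact definition is not given in the abstract; as a hedged reference point, the sketch below computes only the familiar base-n Gray code, g_i = (b_i − b_{i+1}) mod n, which is the case the generalized code is usually taken to reduce to when the distance parameter is 1.

```python
# Base-n (non-binary) Gray code of an integer, digit-wise g_i = (b_i - b_{i+1}) mod n.
def base_n_gray(value, n, digits):
    b = []
    for _ in range(digits):                  # base-n digits, least significant first
        b.append(value % n)
        value //= n
    return [(b[i] - (b[i + 1] if i + 1 < digits else 0)) % n for i in range(digits)]
```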

Proceedings ArticleDOI
11 Oct 2009
TL;DR: A set of statistical features are generated using linear mixed effects models in conjunction with wavelet decomposition for image steganography detection, improving the number of correct predictions that an instance is clean or steganographic by as much as 38%.
Abstract: Current technology allows steganography applications to conceal any digital file inside of another digital file. Due to the large number of steganography tools available over the Internet, a particular threat exists when criminals use steganography to conceal their activities within digital images in cyber space. In this paper, a set of statistical features is generated using linear mixed effects models in conjunction with wavelet decomposition for image steganography detection. It is important to generate features capable of distinguishing between sets of clean and steganography images for steganalysts in commercial industry, the Department of Defense, government, as well as law enforcement. In the experimental results, seven sets of images are used to measure the performance of the proposed method: a clean set and six steganographic sets created by applying two JPEG steganography methods with three different embedding file sizes. The number of correct predictions that an instance is clean or steganographic is improved by as much as 38% when using the proposed linear mixed effects models compared to linear fixed effects models.

Proceedings ArticleDOI
TL;DR: Experimental results show that the new generalized set of kernels can improve edge detection results by combining the usefulness of both lower and higher dimension kernels.
Abstract: Edge detection is an important image processing task which has been used extensively in object detection and recognition. Over the years, many edge detection algorithms have been established, with most algorithms largely based around linear convolution operations. In such methods, smaller kernel sizes have generally been used to extract fine edge detail, but suffer from low noise tolerance. The use of higher dimension kernels is known to have good implications for edge detection, as higher dimension kernels generate coarser scale edges. This suppresses noise and proves to be particularly important for detection and recognition systems. This paper presents a generalized set of kernels for edge and line detection which are orthogonal to each other to yield nxn kernels for any odd dimension n. Some of the kernels can also be generalized to form mxn rectangular kernels. In doing so, it unifies small and large kernel approaches in order to reap the benefits of both. It is also seen that the Frei and Chen orthogonal kernel set is a single instance of this new generalization. Experimental results show that the new generalized set of kernels can improve edge detection results by combining the usefulness of both lower and higher dimension kernels.
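
For concreteness, the Frei and Chen kernels cited above as a special case of the new generalization include the four 3×3 edge masks below; a common way to use them is to score each patch by the fraction of its energy that projects onto this edge subspace (the energy-fraction detector is the textbook usage, not necessarily the paper's).

```python
# Frei-Chen 3x3 edge-subspace kernels and a simple edge-energy score.
import numpy as np

s = np.sqrt(2.0)
FREI_CHEN_EDGE = [
    np.array([[1,  s,  1], [0, 0,  0], [-1, -s, -1]]) / (2 * s),
    np.array([[1,  0, -1], [s, 0, -s], [ 1,  0, -1]]) / (2 * s),
    np.array([[0, -1,  s], [1, 0, -1], [-s,  1,  0]]) / (2 * s),
    np.array([[s, -1,  0], [-1, 0, 1], [ 0,  1, -s]]) / (2 * s),
]

def edge_energy(patch):
    """Fraction of a 3x3 patch's energy lying in the Frei-Chen edge subspace."""
    total = float(np.sum(patch.astype(float) ** 2)) or 1.0
    return sum(float(np.sum(patch * k)) ** 2 for k in FREI_CHEN_EDGE) / total
```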

Book ChapterDOI
01 Jan 2009
TL;DR: This chapter first reviews a series of noise-like steganography methods, and results of using advanced clean-image estimation techniques for active-warden steganalysis are presented.
Abstract: Modern digital steganography has evolved a number of techniques to embed information nearly invisibly into digital media. Many of the techniques for information hiding result in a set of changes to the cover image that appear, for all intents and purposes, to be noise. This chapter presents information for the reader to understand how noise is intentionally and unintentionally used in information hiding. The chapter first reviews a series of noise-like steganography methods. From these techniques, the problems faced by the active warden can be posed in a systematic way. Results of using advanced clean-image estimation techniques for active-warden steganalysis are presented. The chapter concludes with a discussion of the future of steganography.

Journal ArticleDOI
TL;DR: A novel circuit technique to generate reduced voltage swing (RVS) signals for active power reduction on main buses and clocks is proposed; this is achieved without performance degradation, without an extra power supply requirement, and with minimal area overhead.
Abstract: We propose a novel circuit technique to generate reduced voltage swing (RVS) signals for active power reduction on main buses and clocks. This is achieved without performance degradation, without an extra power supply requirement, and with minimal area overhead. The technique stops the discharge path on the net that is swinging low at a certain voltage value. It reduces active power on the target net by as much as 33% compared to traditional full-swing signaling. The logic 0 voltage value is programmable through control bits. If desired, the reduced-swing mode can also be disabled. The approach assumes that the logic 0 voltage value is always less than the threshold voltage of the nMOS receivers, which eliminates the need for low-to-high voltage translation. The reduced noise margin and the increased leakage on the receiver transistors using this approach have been addressed through the selective usage of multi-threshold voltage (MTV) devices and the programmability of the low voltage value.
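
A back-of-the-envelope check, which is an assumption about the mechanism rather than the paper's own analysis: a net driven from Vdd but swinging only between a raised logic-0 level and Vdd dissipates roughly α·C·Vdd·ΔV·f, so raising the logic-0 level to about one third of Vdd is consistent with the reported ~33% active-power saving.

```python
# Rough dynamic-power model for a reduced-swing net driven from Vdd (assumed, not from the paper).
def dynamic_power(c_load, vdd, vswing, freq, activity=1.0):
    return activity * c_load * vdd * vswing * freq

vdd = 1.0
full = dynamic_power(1e-15, vdd, vdd, 1e9)
reduced = dynamic_power(1e-15, vdd, vdd - 0.33 * vdd, 1e9)
saving = 1 - reduced / full                  # == 0.33
```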

Proceedings ArticleDOI
11 May 2009
TL;DR: Experimental results via computer simulations show that the proposed algorithm can outperform current image fusion techniques both by qualitative and quantitative means.
Abstract: Image fusion algorithms attempt to combine multiple registered images into a single image in a way which retains the most pertinent information from each of the images to be fused. In this paper, a new edge-based image fusion algorithm using the Parameterized Logarithm Image Processing (PLIP) model is presented. Coarse approximation and edge information from the images to be fused are extracted using PLIP primitives and separately processed. The results are fused to yield the reconstructed fusion result. The logarithmic Michelson contrast measure by entropy (log AMEE) is used to quantitatively assess the quality of image fusion results. Experimental results via computer simulations show that the proposed algorithm can outperform current image fusion techniques by both qualitative and quantitative means.
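
The PLIP primitives the fusion algorithm builds on first map pixels to gray tones g = μ − I and then replace ordinary addition and subtraction with the nonlinear forms sketched below; the parameter values shown are illustrative, with γ and k set to the gray-tone range as in the basic LIP case rather than the parameterized choices used in the paper.

```python
# Basic PLIP/LIP-style primitives on gray tones (illustrative parameter values).
def plip_add(g1, g2, gamma=256.0):
    return g1 + g2 - (g1 * g2) / gamma

def plip_subtract(g1, g2, k=256.0):
    return k * (g1 - g2) / (k - g2)
```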

Proceedings ArticleDOI
TL;DR: This paper discusses two general approaches for data protection, steganography and cryptography, and demonstrates how to integrate such algorithms with a mobile-to-server link used by many applications.
Abstract: Modern mobile devices are some of the most technologically advanced devices that people use on a daily basis, and current trends indicate continuous growth in mobile phone applications. Nowadays phones are equipped with cameras that can capture still images and video, and with software that can read, convert, manipulate, communicate, and save multimedia in multiple formats. This tremendous progress has increased the volume of communicated sensitive information, which should be protected against unauthorized access. This paper discusses two general approaches for data protection, steganography and cryptography, and demonstrates how to integrate such algorithms with a mobile-to-server link being used by many applications.

Proceedings ArticleDOI
11 Oct 2009
TL;DR: The proposed RVS scheme achieves reduced active power consumption, minimal performance degradation, and minimal area overhead (no extra power supply network and a minimum number of extra transistors).
Abstract: We propose Reduced Voltage Swing (RVS) signaling (by elevating the logic 0 voltage) as opposed to Low Voltage Swing (LVS) signaling (which reduces the logic 1 voltage). We propose an inverter which generates RVS signals, and an extension with programmable logic for an adjustable logic 0 voltage. The proposed RVS scheme achieves reduced active power consumption, minimal performance degradation, and minimal area overhead (without an extra power supply network and with a minimum number of extra transistors). Application of multi-threshold voltage design further alleviates compromises on noise margin and leakage. Experimental results based on SPICE simulation show that RVS clocking achieves an average 37% reduction in active power consumption with 8% performance degradation.


Proceedings ArticleDOI
11 Oct 2009
TL;DR: A novel digital forensics tool is developed by combining wavelet invariants with spatial moments, and it proves to be very efficient in detecting similarities between a target image and a large image database even when the target image is noisy, scaled, or mirrored.
Abstract: A novel digital forensics tool is developed by combining wavelet invariants with spatial moments. A forensic printed circuit board image matching system is presented that is capable of probing a large database of digital images of circuit boards and comparing them for similarity, providing investigative leads for electronic-crime digital forensics investigations. The developed system has been implemented and proved to be very efficient in detecting similarities between a target image and a large image database, even when the target image is noisy, scaled, or mirrored.
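
The spatial-moment half of such a matcher can be illustrated with Hu's seven moment invariants, which are unchanged by translation and scaling and, up to a sign flip in the seventh, by mirroring; this is only a plausible stand-in for the paper's feature set, and the wavelet-invariant features and matching metric are not shown.

```python
# Hu moment invariants of a grayscale image as a compact shape signature.
import cv2

def hu_moment_signature(gray_image):
    return cv2.HuMoments(cv2.moments(gray_image)).ravel()
```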


Proceedings ArticleDOI
11 Oct 2009
TL;DR: A class of algorithms is developed to view and manipulate medical images on mobile devices, mainly PDA handhelds, to view single 2-D medical imaging scans, internal 3-D anatomical details of a simulated straight line-cut, and the reconstruction of the original scanned object, e.g. the original head image.
Abstract: The prompt delivery of biomedical images for emergency diagnosis purposes is an important issue in health care organizations. This paper is aimed at developing a class of algorithms to view and manipulate medical images on mobile devices, mainly PDA handhelds. We illustrate our method on human brain scans to view: single 2-D medical imaging scans, multi-frame/slice medical imaging scans, internal 3-D anatomical details of a simulated straight line-cut, and the reconstruction of the original scanned object, e.g. the original head image.

Proceedings ArticleDOI
11 Oct 2009
TL;DR: A hardware implementation of the parametric image-processing framework that will accurately process images and speed up computation for addition, subtraction, and multiplication, together with the design of arithmetic circuits including parallel counters, adders, and multipliers based on two high-performance threshold logic gate implementations that are developed.
Abstract: The Parameterized Digital Electronic Arithmetic (PDEA) model replaces linear operations with non-linear ones. In this paper we introduce a hardware implementation of the parametric image-processing framework that will accurately process images and speed up computation for addition, subtraction, and multiplication. In particular, the paper presents the design of arithmetic circuits, including parallel counters, adders, and multipliers, based on two high-performance threshold logic gate implementations that we have developed. We also explore new microprocessor architectures to take advantage of this arithmetic. The experiments executed have shown that the algorithm provides faster and better enhancements than those described in the literature. Its potential applications include computer graphics, digital signal processing, and other multimedia applications.
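
As a software model of the underlying building block, a threshold logic gate fires when the weighted sum of its inputs reaches a threshold; the sketch below is the textbook threshold-gate full adder (carry as a 3-input majority gate), illustrating the gate family rather than the two high-performance implementations developed in the paper.

```python
# Threshold-logic gate model and a textbook threshold-gate full adder.
def threshold_gate(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def full_adder(a, b, cin):
    carry = threshold_gate([a, b, cin], [1, 1, 1], 2)            # majority of (a, b, cin)
    s = threshold_gate([a, b, cin, carry], [1, 1, 1, -2], 1)     # a + b + cin - 2*carry >= 1
    return s, carry
```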