
Showing papers in "Journal of Electronic Testing in 2018"


Journal ArticleDOI
TL;DR: This work explores the defense and attack mechanisms for hardware that are based on machine learning and identifies suitable machine learning algorithms for each category of hardware security problems.
Abstract: Recently, machine learning algorithms have been utilized by system defenders and attackers to secure and attack hardware, respectively. In this work, we investigate the impact of machine learning on hardware security. We explore the defense and attack mechanisms for hardware that are based on machine learning. Moreover, we identify suitable machine learning algorithms for each category of hardware security problems. Finally, we highlight some important aspects related to the application of machine learning to hardware security problems and show how the practice of applying machine learning to hardware security problems has changed over the past decade.

61 citations


Journal ArticleDOI
TL;DR: A logic-testing-based HT detection method, TRIAGE (hardware TRojan detectIon using an Advised Genetic algorithm based logic tEsting), uses an advised genetic algorithm to create effective test vectors.
Abstract: Today, outsourced manufacturing of integrated circuit designs is prone to a range of malicious modifications of the circuitry called Hardware Trojans (HTs). HTs can alter the functionality of a circuit, leak secret information and initiate other malicious actions. HTs are activated only under very rare conditions known to the intruder. Therefore, one group of HT detection methods tries to activate the HT circuitry by crafting test vectors. In this paper, we propose a logic-testing-based HT detection method using an advised genetic algorithm that creates effective test vectors, called TRIAGE (hardware TRojan detectIon using an Advised Genetic algorithm based logic tEsting). The key contribution of this paper is a proper fitness function for the genetic algorithm that provides a better evaluation of the test vectors. The controllability, observability and transition-probability factors of rare nodes are considered in the fitness function. Simulation results indicate an 80% reduction in test-set generation time (on average) compared to previous work. The reduced generation time is accompanied by an increase in trigger coverage: the coverage of the TRIAGE method for very-hard-to-trigger Trojans increases by about 23% due to the high efficiency of the proposed fitness function.

36 citations
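
The paper's exact fitness function is not given in the abstract, but its ingredients are. A minimal Python sketch of the idea follows, with an invented stand-in simulator (`excited_nodes`), made-up testability data, and an assumed weighting — not TRIAGE's actual formula:

```python
import random

random.seed(1)
N_BITS = 16

# Hypothetical testability data for a few "rare" nodes: each maps a node
# name to (controllability, observability, transition_probability) in [0, 1].
RARE_NODES = {"n1": (0.10, 0.8, 0.02), "n2": (0.20, 0.6, 0.05),
              "n3": (0.05, 0.9, 0.01)}

def excited_nodes(vec):
    # Stand-in for logic simulation: pretend each rare node toggles when a
    # particular bit pattern appears in the vector (purely illustrative).
    hits = []
    if vec & 0b11 == 0b11: hits.append("n1")
    if (vec >> 4) & 0b111 == 0b101: hits.append("n2")
    if vec >> 12 == 0xF: hits.append("n3")
    return hits

def fitness(vec):
    # Reward vectors that excite nodes that are hard to control and rarely
    # toggle but are easy to observe; the exact weighting is assumed.
    score = 0.0
    for node in excited_nodes(vec):
        cc, ob, tp = RARE_NODES[node]
        score += ob * (1.0 - tp) * (1.0 - cc)
    return score

def evolve(pop_size=32, generations=50):
    pop = [random.getrandbits(N_BITS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            q = random.choice(parents)
            cut = random.randrange(N_BITS)          # one-point crossover
            child = (p & ((1 << cut) - 1)) | (q & ~((1 << cut) - 1))
            child ^= 1 << random.randrange(N_BITS)  # single-bit mutation
            children.append(child)
        pop = parents + children
    return pop[:4]

print([format(v, "016b") for v in evolve()])
```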


Journal ArticleDOI
TL;DR: Security of the analyzed chaos-based pseudo-random number generator (PRNG) for video encryption is much lower than expected, and the generator should be used with caution.
Abstract: Recently, a chaos-based pseudo-random number generator (PRNG) for video encryption was proposed. The security analysis presented in this paper reveals serious problems. The chaotic maps used in the analyzed PRNG do not enhance its security, because a considerable number of initial values lead to fixed points. Also, based on 6 known iterations, an attacker can reconstruct the secret key used in the working stage of the analyzed PRNG using an attack whose complexity is much smaller than the estimated key space. Therefore, the security of the analyzed PRNG is much lower than expected and it should be used with caution. Some potential improvements of the analyzed PRNG are proposed which could eliminate the perceived shortcomings of the original version.

27 citations


Journal ArticleDOI
TL;DR: Comparisons with previous adders reveal that the proposed 5 × 5 module behaves well in circuits, offering a high degree of fault tolerance and relatively small area, complexity and QCA cost, making it suitable for practical realization in large circuit designs.
Abstract: Since conventional CMOS technology has met its development bottleneck, an alternative technology, quantum-dot cellular automata (QCA), has attracted researchers' attention and been studied extensively. The manufacturing process of QCA, however, is immature for commercial production because of the high defect rate. Seeking designs with excellent performance therefore shows significant potential for practical realization. In this paper we propose a 5 × 5 module that not only implements a three-input majority gate but can also realize a five-input majority gate by adding two further inputs. A comprehensive analysis is made in terms of area, number of cells, energy dissipation and fault tolerance against single-cell omission defects. To verify the superiority of the proposed designs, preexisting related designs are tested and compared. Weighing the above four factors and technical feasibility, the proposed majority gates perform fairly well. Further, we take full adders and multi-bit adders as illustrations of the practical application of the proposed majority gates. Detailed comparisons with previous adders reveal that the proposed 5 × 5 module behaves well in circuits, especially in its high degree of fault tolerance and relatively small area, complexity and QCA cost, making it more suitable for practical realization in large circuit designs.

25 citations
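
The 5 × 5 module itself is a cell-level QCA layout, but the Boolean behavior it implements is the majority function. A short sketch of the three- and five-input majority gates together with a well-known majority-gate full adder construction (carry from one three-input majority, sum from one five-input majority fed the inverted carry twice), checked exhaustively:

```python
from itertools import product

def maj3(a, b, c):
    # Three-input majority gate: M(a, b, c) = ab + bc + ca
    return int(a + b + c >= 2)

def maj5(a, b, c, d, e):
    # Five-input majority gate: 1 when at least three inputs are 1
    return int(a + b + c + d + e >= 3)

def full_adder(a, b, cin):
    # Classic majority-gate full adder: the sum is one 5-input majority
    # with the inverted carry applied to two of the inputs.
    cout = maj3(a, b, cin)
    s = maj5(a, b, cin, 1 - cout, 1 - cout)
    return s, cout

for a, b, cin in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, cin)
    assert a + b + cin == s + 2 * cout   # exhaustive functional check
print("majority-gate full adder verified")
```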


Journal ArticleDOI
TL;DR: This overview focuses on how specific methods for test and reliability can be used to improve the characteristics of approximate computing in terms of power consumption, area, life expectancy and precision.
Abstract: This paper presents an overview of test and reliability approaches for approximate computing architectures. We focus on how specific methods for test and reliability can be used to improve the characteristics of approximate computing in terms of power consumption, area, life expectancy and precision. This paper does not address the specification and design of approximate hardware/software/algorithms, but provides in-depth knowledge of how reliability- and test-related techniques can be efficiently used to maximize the benefits of approximate computing.

20 citations


Journal ArticleDOI
TL;DR: Simulation results show that the aging-related Bit Error Rate of an arbiter-PUF with high switching activity can, after 20 months, be 11 times worse than the Bit Error Rate of the same PUF with no activity.
Abstract: Physically Unclonable Functions (PUFs) are mainly used for generating unique keys to identify electronic devices. These entities mainly benefit from the process variations occurring during device manufacturing. To be able to use PUFs to identify electronic devices or to utilize them in cryptographic applications, the reliability of PUFs needs to be assured under a wide variety of environmental conditions and aging mechanisms, including the switching activity of the PUFs' internal signals. In practice, it is important to evaluate aging effects as early as possible, preferably at design time. In this paper, we evaluate the impact of aging on two types of delay-PUFs (arbiter-PUFs and loop-PUFs) with different switching activities. This work takes advantage of both simulation tools and silicon tests on a 65nm ASIC implementation. To expedite the simulation process and avoid simulating PUFs with multiple delay elements directly, we propose an extrapolation method that evaluates the effect of BTI (Bias Temperature Instability) and HCI (Hot Carrier Injection) aging under different switching activities on PUFs with multiple delay elements from the aging effects on single-delay-element PUFs. The results show that switching activity (expressed in transitions per unit time) has a limited impact on the delay chains of the considered delay-PUFs, while it has a greater impact on the arbiter (RS latch) of the arbiter-PUF. Simulation results show that the aging-related Bit Error Rate of an arbiter-PUF with high switching activity can, after 20 months, be 11 times worse than that of the same PUF with no activity.

18 citations
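
As a rough illustration of the Bit Error Rate metric used here, the toy model below treats each response bit as the sign of a delay difference and adds an aging drift; all distributions and magnitudes are invented for the sketch, not taken from the paper's 65nm data:

```python
import numpy as np

rng = np.random.default_rng(0)
N_BITS, N_EVAL = 128, 1000

# Toy arbiter-PUF model: a response bit is the sign of a path-delay
# difference. Aging adds a slow drift; all magnitudes are illustrative.
fresh_delta = rng.normal(0.0, 1.0, N_BITS)    # process variation
aging_drift = rng.normal(0.0, 0.15, N_BITS)   # activity-dependent BTI/HCI shift

reference = fresh_delta > 0                   # response enrolled when fresh
errors = 0
for _ in range(N_EVAL):
    noisy = fresh_delta + aging_drift + rng.normal(0.0, 0.05, N_BITS)
    errors += int(np.count_nonzero((noisy > 0) != reference))
ber = errors / (N_BITS * N_EVAL)
print(f"aging-related bit error rate: {ber:.4f}")
```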


Journal ArticleDOI
TL;DR: A framework comprising a SAT encoder and solver generates and proves a simplified SAT-based formula of digital circuits for Automatic Test Pattern Generation (ATPG) purposes, including an efficient method to apply Boolean Constraint Propagation on-the-fly while generation runs on the GPU.
Abstract: This paper presents a novel framework comprising a Propositional Satisfiability (SAT) encoder and solver. The framework is responsible for generating and proving a simplified SAT-based formula of digital circuits for Automatic Test Pattern Generation (ATPG) purposes. The parallel algorithms introduced in this work target both combinational and sequential circuits and are optimized for the NVIDIA General-Purpose Graphics Processing Unit (GPGPU) paradigm. The SAT encoder provides an efficient method to apply Boolean Constraint Propagation (BCP) on-the-fly while generation is running on the GPU. The simplified formula is then proved for satisfiability using an improved parallel solver on the GPU. The proposed encoder executes 93 times faster than its sequential counterpart. The test generation algorithm using the GPU-accelerated framework delivers an average speedup of about 5.86× compared to the state-of-the-art Lingeling solver. Moreover, the SAT encoder reduces the run time for fault detection by 6.53% and 11.42% on average when applied to the proposed and the conventional CUD@SAT solvers, respectively, offering a promising basis for future research.

16 citations
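
Boolean Constraint Propagation itself is the standard unit-propagation rule. A minimal sequential Python version is sketched below; the paper's contribution — running this in parallel on the GPU while the formula is being generated — is not attempted here:

```python
def bcp(clauses, assignment):
    """Boolean constraint propagation over a CNF formula.

    clauses: list of clauses; a clause is a list of non-zero ints, where
    literal +v means variable v is true and -v means it is false.
    assignment: dict mapping variable -> bool for decided variables.
    Returns the extended assignment, or None on a conflict.
    """
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                v = abs(lit)
                if v in assignment:
                    if assignment[v] == (lit > 0):
                        satisfied = True
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return None                  # conflict: clause falsified
            if len(unassigned) == 1:         # unit clause: value is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment

# (x1 or x2) and (not x1 or x3) and (not x3): deciding x2=False forces
# x1 and then x3, falsifying the last clause -> conflict (prints None).
print(bcp([[1, 2], [-1, 3], [-3]], {2: False}))
```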


Journal ArticleDOI
TL;DR: The physical randomness of the flying capacitors in the multi-phase on-chip switched-capacitor (SC) voltage converter is exploited as a novel strong physical unclonable function (PUF) primitive for IoT authentication to maintain a high security level against both side-channel and machine-learning attacks.
Abstract: The physical randomness of the flying capacitors in the multi-phase on-chip switched-capacitor (SC) voltage converter is exploited as a novel strong physical unclonable function (PUF) primitive for IoT authentication. Moreover, for the devised strong PUF, an approximately constant input power is achieved against side-channel attacks, and a non-linear transformation block is utilized to scramble the highly linear relationship between input challenges and output responses against machine-learning attacks. The results show that the novel strong PUF primitive achieves a nearly 51.3% inter-Hamming distance (HD) and 98.5% reliability while maintaining a high security level against both side-channel and machine-learning attacks.

14 citations
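
The two figures of merit quoted (inter-Hamming distance and reliability) are easy to compute from response data. A sketch on synthetic responses, with an assumed 1.5% per-bit noise rate:

```python
import numpy as np

rng = np.random.default_rng(7)
N_CHIPS, N_BITS, N_REPEATS = 50, 128, 100

# Toy data: each chip's response is an independent random bit string.
responses = rng.integers(0, 2, (N_CHIPS, N_BITS))

# Uniqueness: mean inter-chip Hamming distance, ideally 50%.
inter_hd = [np.mean(responses[i] != responses[j])
            for i in range(N_CHIPS) for j in range(i + 1, N_CHIPS)]
print(f"mean inter-HD: {100 * np.mean(inter_hd):.1f}%")

# Reliability: re-evaluate one chip with a 1.5% per-bit noise flip rate.
flips = rng.random((N_REPEATS, N_BITS)) < 0.015
noisy = responses[0] ^ flips
print(f"reliability: {100 * np.mean(noisy == responses[0]):.1f}%")
```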


Journal ArticleDOI
TL;DR: Different harmonic cancellation strategies are presented and analyzed with the goal of simplifying the practical on-chip implementation of the scaling weights, and statistical behavioral simulations are provided to demonstrate the feasibility of the proposed approach.
Abstract: Harmonic cancellation strategies have recently been presented as a promising solution for the efficient on-chip implementation of accurate sinusoidal signal generators. Classical harmonic cancellation techniques consist of combining a set of time-shifted and scaled versions of a periodic signal in such a way that some of the harmonic components of the resulting signal are cancelled. This signal manipulation strategy can be easily implemented using digital resources to provide a set of phase-shifted digital square-wave signals and a summing network for scaling and combining the phase-shifted square-waves. A critical aspect in the practical implementation of the harmonic cancellation technique is the stringent accuracy required for the scaling weight ratios between the different phase-shifted signals. Small variations between these weights due to mismatch and process variations will reduce the effectiveness of the technique and increase the magnitude of undesired harmonic components. In this work, different harmonic cancellation strategies are presented and analyzed with the goal of simplifying the practical on-chip implementation of the scaling weights. Statistical behavioral simulations are provided to demonstrate the feasibility of the proposed approach.

12 citations
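
The classical scheme the abstract builds on can be demonstrated numerically: combining three square waves shifted by ±T/8 with weight ratio 1 : √2 : 1 scales the k-th harmonic by √2 + 2cos(kπ/4), which is zero for k = 3 and k = 5. A minimal demo (this particular weight set is the textbook instance, not necessarily one of the paper's simplified variants):

```python
import numpy as np

n = 4096
fs, f0 = 4096.0, 16.0                  # 256 samples per period (coherent)
t = np.arange(n) / fs
square = lambda ph: np.sign(np.sin(2 * np.pi * f0 * t + ph))

# Shifts of +/- T/8 are phase shifts of +/- pi/4; weights 1 : sqrt(2) : 1.
combined = square(-np.pi / 4) + np.sqrt(2) * square(0.0) + square(np.pi / 4)

spec = np.abs(np.fft.rfft(combined)) / n
ref = spec[int(f0)]                    # fundamental bin
for k in (1, 3, 5, 7):
    level = 20 * np.log10(spec[int(k * f0)] / ref + 1e-300)
    print(f"harmonic {k}: {level:7.1f} dBc")  # k = 3, 5 drop to numerical noise
```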


Journal ArticleDOI
TL;DR: This paper describes the conception, implementation, and evaluation of Column-Line-Code (CLC), a novel algorithm for the detection and correction of MCUs in memory devices that combines extended Hamming code and parity bits, and shows that CLC achieves high MCU correction efficacy with reduced area, power and delay costs.
Abstract: As microelectronics technology continuously advances to deep submicron scales, the occurrence of Multiple Cell Upsets (MCUs) induced by radiation in memory devices becomes more likely. The implementation of a robust Error Correction Code (ECC) is a suitable solution. However, the more complex an ECC, the greater its delay, area usage and energy consumption. An ECC with an appropriate balance between error coverage and computational cost is essential for applications where fault tolerance is heavily needed and energy resources are scarce. This paper describes the conception, implementation, and evaluation of Column-Line-Code (CLC), a novel algorithm for the detection and correction of MCUs in memory devices, which combines extended Hamming code and parity bits. In addition, this paper evaluates variations of the 2D CLC scheme and proposes an additional operation, called extended mode, to correct more MCU patterns. We compared implementation cost, reliability level, detection/correction rate and mean time to failure among the CLC versions and other correction codes, showing that CLC has high MCU correction efficacy with reduced area, power and delay costs.

12 citations
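
CLC itself protects each line with extended Hamming code plus column parities; the sketch below only illustrates the underlying 2D idea with plain row/column parity, which suffices to locate and flip a single upset cell:

```python
import numpy as np

def encode(block):
    # One parity bit per row and per column (CLC proper protects each line
    # with extended Hamming and adds column parities; this is the 2D idea).
    return block.sum(axis=1) % 2, block.sum(axis=0) % 2

def correct_single(block, row_p, col_p):
    rows = np.flatnonzero(block.sum(axis=1) % 2 != row_p)
    cols = np.flatnonzero(block.sum(axis=0) % 2 != col_p)
    if len(rows) == 1 and len(cols) == 1:    # unique intersection
        block[rows[0], cols[0]] ^= 1         # flip the upset cell
    return block

rng = np.random.default_rng(3)
data = rng.integers(0, 2, (8, 8))
row_p, col_p = encode(data)

stored = data.copy()
stored[2, 5] ^= 1                            # inject a single upset
print(np.array_equal(correct_single(stored, row_p, col_p), data))  # True
```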


Journal ArticleDOI
TL;DR: This article proposes the implementation and validation, on an FPGA platform, of a full-fledged HF tag architecture with an enhanced mutual authentication protocol; the security enhancements show a low overhead compared to previous hardware security solutions.
Abstract: Radio Frequency IDentification (RFID) is used in many applications such as access control, transport, ticketing and contactless payment. Full-fledged High Frequency (HF) tags are the most popular RFID tags for these applications, which require relatively costly security operations. However, these HF tags are threatened by many attacks such as eavesdropping, desynchronization and ElectroMagnetic (EM) Side-Channel Attacks (SCA). In this article, we propose the implementation and validation of a full-fledged HF tag architecture using an enhanced mutual authentication protocol, achieved on an FPGA platform. A security analysis of the original protocol against electromagnetic attacks (EMA) and desynchronization attacks is presented. Enhancements at the protocol level are then proposed to overcome these attacks. The implementation of these security enhancements shows a low overhead (+22 LUTs) compared to previous hardware security solutions (+598 LUTs).

Journal ArticleDOI
TL;DR: This work presents a complete RHBD flow methodology employing enclosed-layout transistors (ELTs) and guard rings, transparent to the designer, and fully compatible with commercial CAD tools and standard fabrication processes.
Abstract: Ionizing radiation degrades the electrical characteristics of MOS devices, reducing their reliability, performance, and lifetime; therefore, hardening techniques are required for the proper functioning of those devices when exposed to harsh environments. Nonetheless, in the context of the design-flow automation necessary to synthesize complex digital circuits, there is a lack of reliable foundry-provided Radiation Hardening by Design (RHBD) cell libraries. In this work, a complete RHBD flow methodology employing enclosed-layout transistors (ELTs) and guard rings is presented, transparent to the designer and fully compatible with commercial CAD tools and standard fabrication processes. The proposed flow includes the automated calculation of the effective aspect ratio of the ELTs for annular and rectangular topologies, a template proposal for digital cells, and series and parallel arrangements. Moreover, the calculation of the aspect ratio between pull-up and pull-down networks and the sizing of output buffers using the Logical Effort (LE) methodology, i.e., timing optimization accounting for typical commercial digital design constraints, is considered. Test structures enclosing single nMOS and pMOS devices, series and parallel arrangements, inverter cells, ring oscillators, and output buffers were fabricated in two different technology nodes (600 nm and 180 nm). The experimental results were compared to SPICE simulations performed using the models implemented here. The results indicate that the flow methodology is feasible to implement and fully compatible with the CAD tools employed for circuit design. In addition, two case studies were silicon-proven, presenting fully functional behavior under typical conditions.
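
The Logical Effort step of the flow follows the standard recipe; a minimal sketch (plain inverter chain, logical effort 1, stage effort near the classic 4 — the paper applies this to RHBD cells with ELT geometries):

```python
import math

def size_buffer_chain(c_in, c_load):
    # Path electrical effort H = C_load / C_in; inverters have logical
    # effort 1, so the optimal stage count is about log4(H) and every
    # stage then bears effort f = H**(1/N), close to the classic 4.
    H = c_load / c_in
    N = max(1, round(math.log(H) / math.log(4)))
    f = H ** (1.0 / N)
    sizes = [c_in * f ** i for i in range(N + 1)]  # input cap of each stage
    return N, f, sizes

N, f, sizes = size_buffer_chain(c_in=1.0, c_load=256.0)
print(f"{N} stages, stage effort {f:.2f}, caps {[round(s, 1) for s in sizes]}")
```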

Journal ArticleDOI
TL;DR: A greed-based strategy preserves the instruction sequences that detect freshly identified faults throughout the evolutionary process, identifying the hard-to-test faults of the processor and improving overall coverage.
Abstract: Software-based self-testing (SBST) is introduced for at-speed testing of processors, which is difficult with any of the external testing techniques. Evolutionary approaches are used for the automatic synthesis of SBST programs. However, a number of hard-to-detect faults remain unidentified by these autogenerated test programs. Also, these approaches have considered fault models which have low correlation with the gate-level fault models. This paper presents a greed-based strategy in which the instruction sequences that detect freshly identified faults are preserved throughout the evolutionary process, so as to identify the hard-to-test faults of the processor; the overall coverage is thereby also improved. A selection probability, estimated from the testability properties of the processor components, is assigned to every instruction to accelerate test synthesis. Performance and scalability are comprehensively evaluated on a configurable MIPS processor and a full-fledged 7-stage-pipeline SPARC V8 Leon3 soft processor using behavioral fault models. The efficacy of our approach is explained by demonstrating the correlation between behavioral faults and gate-level faults of the MIPS processor under the proposed scheme. Experimental results show improved coverages of 96.32% for the MIPS processor and 95.8% for the Leon3 processor, compared with conventional methods, which achieve about 90% coverage on average.
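
A toy sketch of the greedy preservation idea: any instruction sequence that detects a freshly identified fault is archived for the rest of the run. The instruction set, the `fault_sim` stand-in, and all parameters are invented for illustration:

```python
import random

rng = random.Random(4)
ISA = ("ADD", "SUB", "LW", "SW", "BEQ", "MUL")
N_FAULTS = 200

def fault_sim(seq):
    # Stand-in for behavioral fault simulation: deterministically maps an
    # instruction sequence to the set of faults it detects.
    h = random.Random("".join(seq))
    return frozenset(h.sample(range(N_FAULTS), h.randint(0, 6)))

def random_seq():
    return tuple(rng.choice(ISA) for _ in range(8))

def mutate(seq):
    i = rng.randrange(len(seq))
    return seq[:i] + (rng.choice(ISA),) + seq[i + 1:]

covered, archive = set(), []
population = [random_seq() for _ in range(20)]
for _ in range(200):
    for seq in population:
        new = fault_sim(seq) - covered
        if new:                  # greedy preservation: any sequence that
            archive.append(seq)  # detects fresh faults is kept for good
            covered |= new
    parents = archive[-10:] if archive else population
    population = [mutate(rng.choice(parents)) for _ in range(15)] \
               + [random_seq() for _ in range(5)]
print(f"coverage {100 * len(covered) / N_FAULTS:.1f}% "
      f"with {len(archive)} preserved sequences")
```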

Journal ArticleDOI
TL;DR: A heuristic method that effectively utilizes a threshold on unprotected input vectors to generate good-enough combinations of approximate modules for ATMR, accomplishing higher fault coverage and reduced area overhead compared with previously proposed approaches.
Abstract: Approximate triple modular redundancy (ATMR) is sought for logic masking of soft errors while effectuating lower area overhead than conventional TMR through the introduction of approximate modules. However, the use of approximate modules reduces fault coverage in ATMR. In this work, we target better design tradeoffs in ATMR by proposing a heuristic method that effectively utilizes a threshold on unprotected input vectors to generate good-enough combinations of approximate modules for ATMR, accomplishing higher fault coverage and reduced area overhead compared with previously proposed approaches. The key concept is to employ the logic optimization techniques of prime implicant (PI) expansion and reduction to successively obtain approximate modules such that the combination of three approximate modules functions appropriately as an ATMR. To this end, blocking is used to ensure that, at each input vector, only one approximate module obtained through PI expansion and reduction differs from the original circuit. For large circuits, clustering is utilized, and comparative analysis indicates that the proposed ATMR scheme attains higher fault coverage while preserving its characteristic reduced area overhead. With a small percentage of unprotected input vectors, we achieved a substantial decrease in transistor count and greater fault detection, i.e., improvements of up to 26.1% and 42.1%, respectively.
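
A small sketch of the ATMR property the blocking step enforces: three hypothetical approximate modules (obtained here by manually expanding or reducing prime implicants of a toy function) disagree with the golden circuit on disjoint input vectors, so the majority voter always returns the exact output:

```python
from itertools import product

# Golden function and three hypothetical approximate modules. The property
# the paper's blocking step enforces: at every input vector, at most one
# module may disagree with the golden output, so the voter masks it.
def golden(a, b, c, d):  return (a & b) | (c & d)
def approx1(a, b, c, d): return (a & b) | (c & d) | (a & d)            # expanded
def approx2(a, b, c, d): return (a & b) | (b & c & d)                  # reduced
def approx3(a, b, c, d): return (a & b) | (c & d) | (b & c & (1 - d))  # expanded

def vote(x, y, z):                    # TMR majority voter
    return (x & y) | (y & z) | (x & z)

ok = True
for v in product((0, 1), repeat=4):
    outs = (approx1(*v), approx2(*v), approx3(*v))
    ok &= sum(o != golden(*v) for o in outs) <= 1  # blocking property holds
    ok &= vote(*outs) == golden(*v)                # voter output is exact
print("ATMR matches golden on all inputs:", ok)
```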

Journal ArticleDOI
TL;DR: The ROBDD-based 2×1-mux-implemented circuit is shown to be fully testable under the multiple stuck-at fault model, and its test set can be derived from the Disjoint Sum of Products expression, which allows test pattern generation at design time and eliminates the need for ATPG after the synthesis stage.
Abstract: Automatic test pattern generation (ATPG) is the step after synthesis in the process of chip manufacturing. ATPG may not succeed in generating tests for all multiple stuck-at faults, since the number of fault combinations is large. Hence a need arises for highly testable designs with 100% fault efficiency under the multiple stuck-at fault (MSAF) model. In this paper we investigate the testability of ROBDD-based combinational circuit designs implemented with 2×1 muxes. We show that the ROBDD-based 2×1-mux-implemented circuit is fully testable under the multiple stuck-at fault model. Principles of pseudoexhaustive testing and multiple stuck-at fault testing of two-level AND-OR gates are applied to one sub-circuit (a 2×1 mux). We show that the composite test vector set derived for all 2×1 muxes is capable of detecting multiple stuck-at faults of the circuit as a whole. Algorithms to derive the test set for multiple stuck-at faults are demonstrated. The multiple stuck-at fault test set is larger than the single stuck-at fault test set. We show that the multiple stuck-at fault test set can be derived from the Disjoint Sum of Products expression, which allows test pattern generation at design time, eliminating the need for ATPG after the synthesis stage.
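
As a flavor of the mux-level analysis, the sketch below injects single stuck-at faults into a gate-level 2×1 mux and shows that a four-vector test set detects all of them; the paper's composite sets extend this style of reasoning to multiple stuck-at faults across the whole ROBDD-derived circuit:

```python
def mux(a, b, s, fault=None):
    # Gate-level 2x1 mux, f = s'a + sb, with optional stuck-at injection.
    # fault = (line, stuck_value) for lines a, b, s, ns, t0, t1, f.
    def st(name, val):
        return fault[1] if fault and fault[0] == name else val
    a, b, s = st("a", a), st("b", b), st("s", s)
    ns = st("ns", 1 - s)                          # inverter
    t0, t1 = st("t0", ns & a), st("t1", s & b)    # AND plane
    return st("f", t0 | t1)                       # OR gate

tests = [(0, 1, 0), (1, 0, 0), (1, 0, 1), (0, 1, 1)]      # (a, b, s)
faults = [(line, v) for line in ("a", "b", "s", "ns", "t0", "t1", "f")
          for v in (0, 1)]
detected = [flt for flt in faults
            if any(mux(*t) != mux(*t, fault=flt) for t in tests)]
print(f"{len(detected)}/{len(faults)} single stuck-at faults detected")
```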

Journal ArticleDOI
TL;DR: The detectability of bridge defects is investigated in FinFET-based logic cells that use Middle-Of-Line (MOL) interconnections and multi-fin and multi-finger design strategies; these defects are shown to be difficult to detect.
Abstract: Since the 22nm technology node, FinFET technology has been an attractive candidate for high-performance and power-efficient applications, owing to the better channel control obtained by wrapping a metal gate around a thin fin. In this paper, we investigate the detectability of bridge defects in FinFET-based logic cells that make use of Middle-Of-Line (MOL) interconnections and multi-fin and multi-finger design strategies. The use of MOL to build the logic cells affects the possible bridge defect locations and the likelihood of occurrence of the defects. Some defect locations unlikely to appear in planar CMOS now become more likely to occur due to the use of MOL, and it is shown that these defects are difficult to detect. The detectability of bridge defects has been analyzed for gates with different drive strengths and fan-in, and extended to different types of gates. A metric called Bridge Defect Criticality (BDC), which depends on the degree of detectability and the likelihood of occurrence of a bridge defect, is used to identify the most harmful bridge defects. More design and/or test effort may be dedicated to defects with a higher BDC value to improve product quality.

Journal ArticleDOI
TL;DR: Machine learning techniques are proposed to automate the diagnosis of design trace dumps and to help with bug localization during post-silicon validation.
Abstract: As the size of hardware (HW) designs increases significantly, a huge amount of data is generated during design simulation, emulation or prototyping. Debugging large HW designs becomes a tedious, time-consuming bottleneck within functional verification activities. This paper proposes the utilization of machine learning techniques to automate the diagnosis of design trace dumps and to help with bug localization during post-silicon validation. Our framework starts with a signal selection algorithm that identifies which signals to monitor during design execution; signal selection depends on signal types as well as their connectivity network. The design is then executed and the trace dump is saved for offline analysis. A big-data processing technique, namely MapReduce, is used to overcome the challenge of processing the huge trace dump resulting from the design running on an FPGA prototype. The K-means clustering method is applied to group trace segments that are very similar and to identify those with a rare occurrence during design execution. Additionally, we propose a bug localization framework in which X-means clustering groups the passing regression tests into clusters such that buggy tests can be detected when they fail to be assigned to any of the trained clusters. Our experimental results demonstrate the feasibility of the proposed approach in guiding the debugging effort on a group of industrial HW designs, and its ability to detect multiple injected design defects using mutation-based testing.
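
A compact sketch of the clustering step, using synthetic trace segments and scikit-learn's K-means; signal selection, the MapReduce pass, and the X-means bug-localization stage are elided, and the 2% rarity threshold is an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy trace dump: each row is one fixed-length segment of monitored signal
# values (the data here is synthetic).
normal = rng.integers(0, 2, (990, 32))
rare = np.hstack([np.ones((10, 16), dtype=int),
                  rng.integers(0, 2, (10, 16))])  # distinctive rare pattern
segments = np.vstack([normal, rare]).astype(float)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(segments)
sizes = np.bincount(km.labels_, minlength=8)

# Segments landing in sparsely populated clusters are reported as rare
# behavior worth a debug engineer's attention.
rare_clusters = np.flatnonzero(sizes < 0.02 * len(segments))
flagged = int(np.isin(km.labels_, rare_clusters).sum())
print("cluster sizes:", sizes, "- flagged segments:", flagged)
```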

Journal ArticleDOI
TL;DR: 1024 of the proposed FFs are implemented in an H-tree clock network driven by a resonant clock generator producing a 1–5 GHz sinusoidal clock; simulation results show a power reduction of 93% on the clock tree and total power savings of up to 74% compared to the same implementation using conventional square-wave clocking and FFs.
Abstract: An energy recovery or resonant clocking scheme is very attractive for saving clock power in nanoscale ASICs and systems-on-chip, which have increased functionality and die sizes. Technology scaling following Moore's law lowers node capacitance and supply voltage, making nanoscale integrated circuits more vulnerable to radiation-induced single-event upsets (SEUs), or soft errors. In this work, we propose soft-error-robust flip-flops (FFs) capable of working with a sinusoidal resonant clock to save overall chip power. The proposed conditional-pass Quatro (CPQ) FF and true single-phase clock energy recovery (TSPCER) FF are based on a unique soft-error-robust latch, which we refer to as a Quatro latch. The proposed C2-DICE FF is based on a dual interlocked cell (DICE) latch. In addition to the storage cell, each FF consists of a unique input stage and a two-transistor, two-input output buffer. In each FF with a sinusoidal clock, the transfer unit passes the data to the Quatro and DICE latches. The latches store the data values at two storage nodes and two redundant nodes, the latter enabling recovery from a particle-induced transient with or without multiple-node charge sharing. Post-layout simulations in 65nm CMOS technology show that the FFs exhibit up to 82% lower power-delay product compared to recently reported soft-error-robust FFs. We implemented 1024 of the proposed FFs in an H-tree clock network driven by a resonant clock generator producing a 1–5 GHz sinusoidal clock signal. Simulation results show a power reduction of 93% on the clock tree and a total power saving of up to 74% compared to the same implementation using a conventional square-wave clocking scheme and FFs.

Journal ArticleDOI
TL;DR: A workload-aware SVA method (WSVA) that encapsulates the workload change into the aging estimation using an LUT-based approach is presented and an NBTI and leakage co-optimization strategy based on an integer linear programming (ILP) approach is proposed to obtain the optimal input vector in standby mode.
Abstract: Supply voltage assignment (SVA) can alleviate the performance aging induced by the negative bias temperature instability (NBTI) effect. However, due to the random characteristic of an actual system workload, it is difficult to estimate the aging rate and control the supply voltage reasonably. To solve this problem, we present a workload-aware SVA method (WSVA) that encapsulates the workload change into the aging estimation using an LUT-based approach. Moreover, an NBTI and leakage co-optimization strategy based on an integer linear programming (ILP) approach is proposed to obtain the optimal input vector in standby mode. Simulation experiments on multiple benchmark circuits demonstrate that the LUT-based approach can track the dynamic change of the workload online and provide an accurate aging estimate for SVA with little computation cost. Compared with the SVA method that does not consider the workload, the proposed aging estimation approach and the optimal input vector selection strategy in the WSVA framework enable the CMOS circuit to conserve additional power dissipation while guaranteeing the performance requirements.

Journal ArticleDOI
TL;DR: A high-performance Modified Static Segment approximate Multiplier (MSSM) is proposed that increases accuracy by negating lower-order significant information of the input operands using a Significance Estimator Logic Circuit (SELC).
Abstract: Achieving high accuracy has become a key design objective in high-volume digital data computing devices. To enhance accuracy, a high-performance Modified Static Segment approximate Multiplier (MSSM) is proposed in this paper. It increases accuracy by negating lower-order significant information of the input operands using a Significance Estimator Logic Circuit (SELC). The performance of the proposed MSSM is compared with existing approximate multipliers, namely the Dynamic Segment approximate Multiplier (DSM) and the Static Segment approximate Multiplier (SSM), for all input combinations. These multipliers are implemented and simulated using Xilinx ISE 14.2. With the MSSM method, 99% average computational accuracy can be achieved for a 16-bit multiplication even with an 8 × 8-bit multiplier over all combinations of input operands, compared to 95% average computational accuracy over 61% of input operand pairs with the existing SSM method. The proposed 16-bit MSSM offers savings of 83.45% in LUTs and 38.78% in power, and exhibits 24.40% less delay and 0.6% less computational accuracy than the existing DSM.
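
A sketch of plain static segment multiplication for 16-bit operands, which MSSM builds on (the SELC-based compensation that distinguishes MSSM is not modeled):

```python
def ssm16(a, b, seg=8):
    # Static segment multiplication of two 16-bit operands: each operand
    # contributes one 8-bit segment, the upper one if any of its top bits
    # are set, otherwise the lower one. (MSSM additionally compensates the
    # truncated lower-order information via its SELC; not modeled here.)
    def pick(x):
        if x >> seg:                     # significant bits in upper half
            return x >> seg, seg
        return x & ((1 << seg) - 1), 0
    sa, sha = pick(a)
    sb, shb = pick(b)
    return (sa * sb) << (sha + shb)

a, b = 3000, 250
exact, approx = a * b, ssm16(a, b)
print(exact, approx, f"{100 * abs(exact - approx) / exact:.2f}% error")
```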

Journal ArticleDOI
TL;DR: A golden-free detection method is proposed that exploits the bit power consistency of a processor, with two decomposition methods for the power signal; experiments show that the differences between the two methods are very small.
Abstract: The processor is the core chip of a modern information system, and it is severely threatened by hardware Trojans. Side-channel analysis is the most promising method for hardware Trojan detection. However, most existing detection methods require golden chips as a reference, which significantly increases test cost and complexity. In this paper, we propose a golden-free detection method that exploits the bit power consistency of the processor. For data-activated processor hardware Trojans, the power model of the processor is modified. Two decomposition methods for the power signal are proposed: differential bit power consistency analysis and the solution of contradictory equations. With the proposed method, the power of each bit can be calculated. Bit-consistency-based detection algorithms are proposed, with deviation boundaries obtained by statistical analysis. Experimental measurements were performed on a field-programmable gate array chip with an open-source 8051 core and hardware Trojans. The results showed that the differences between the two methods were very small, and the data-activated processor hardware Trojans were detected successfully.
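
A toy sketch of the bit-power idea: assume total power is a base term plus a per-bit contribution, collect measurements over many data words, and solve the over-determined ("contradictory") system by least squares; a bit loaded by a hypothetical Trojan then stands out. The model, noise levels and threshold are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)
N_BITS, N_MEAS = 8, 200

# Assumed power model for this sketch: P(w) = base + sum_i bit_i(w) * p_i.
bit_power = rng.uniform(0.8, 1.2, N_BITS)
bit_power[3] += 0.5        # extra load on bit 3: a hypothetical Trojan tap
base = 5.0

words = rng.integers(0, 256, N_MEAS)
A = np.array([[(w >> i) & 1 for i in range(N_BITS)] + [1] for w in words],
             dtype=float)
power = A @ np.append(bit_power, base) + rng.normal(0, 0.02, N_MEAS)

# Least-squares solution recovers each bit's power from the measurements.
est, *_ = np.linalg.lstsq(A, power, rcond=None)
per_bit = est[:N_BITS]
print("estimated bit powers:", np.round(per_bit, 3))

# A bit whose power deviates strongly from the others is suspicious; the
# 0.3 threshold is chosen for this toy data, not taken from the paper.
suspect = np.flatnonzero(np.abs(per_bit - np.median(per_bit)) > 0.3)
print("suspicious bits:", suspect)
```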

Journal ArticleDOI
TL;DR: In this paper, the authors performed reliability tests and analysis of a selected 3D-printed actuator, namely an electromechanical scanner, targeted towards scanning incoming light onto the target, which is particularly useful for barcoding, display, and opto-medical tissue imaging applications.
Abstract: Recent advances in stereolithography-based manufacturing have led to a number of 3D-printed sensor and actuator devices as a cost-effective, low-fabrication-complexity alternative to micro-electro-mechanical counterparts. Yet the reliability of such 3D-printed dynamic structures has yet to be explored. Here we perform reliability tests and analysis of a selected 3D-printed actuator, namely an electromechanical scanner. The scanner is targeted at scanning incoming light onto a target, which is particularly useful for barcoding, display, and opto-medical tissue imaging applications. We monitor the deviations in the fundamental mechanical resonance, scan line, and quality factor of a number of scanners with different device thicknesses, for a total duration of 5 days (corresponding to 20–80 million cycles, depending on the device operating frequency). A total of 9 scanning devices with 10 mm × 10 mm die size were tested, with a highlight on device-to-device variability as well as the effect of device thickness itself. For a total optical scan angle of 5 degrees, an average standard deviation of less than about 10% (with respect to the mean) was observed for all tested parameters among scanners of the same type (an indicator of device-to-device variability), and likewise for all parameters over the duration of the entire test (an indicator of device reliability).

Journal ArticleDOI
TL;DR: The Multiple Missing Cell (MMC) defect, which is very natural at the nanoscale, causes a sizable difference in functionality compared to the Single Missing Cell consideration described in the literature, and hence must be considered during test generation.
Abstract: Considering the limitations of CMOS technology, Quantum-dot Cellular Automata (QCA) is emerging as one of the alternatives for Integrated Circuit (IC) technology. Much work is being carried out on the design, fabrication and testing of QCA circuits. In this paper, we work on defect analysis, fault model development and the derivation of various properties of the QCA Majority Voter (MV) to effectively generate test patterns for QCA circuits. It is shown that, unlike in CMOS technology, considering single missing cells is not enough for QCA technology. We show that the Multiple Missing Cell (MMC) defect, which is very natural at the nanoscale, causes a sizable difference in functionality compared to the Single Missing Cell consideration described in the literature, and hence must be considered during test generation. The proposed MMC analysis is supported by exhaustive simulation results as well as kink-energy-based mathematical analysis. Further, Verilog fault models are proposed which can be used for functional and timing verification and for the activation of faults caused by the MMC defect. The effect of MMC on the output is analyzed both for a stand-alone MV and for an MV that is part of a circuit. Finally, we propose test properties of the MV when used as an MV itself, as an AND gate, or as an OR gate. These properties may be helpful in the development of test generation algorithms.

Journal ArticleDOI
TL;DR: A methodical approach to forecast the consequences of the influence of pulsed electromagnetic fields on electronic devices based on three defined conditions of the disruption of device functioning is considered.
Abstract: In this paper, we have considered a methodical approach to forecast the consequences of the influence of pulsed electromagnetic fields on electronic devices based on three defined conditions of the disruption of device functioning. The approach is based on the use of key parameters of pulse disturbances in critical circuits of electronic devices. Depending on the problem being solved and features of the object being tested, the key parameters can be: the amplitude of a voltage pulse in a critical circuit of an object; the Joule integral; the energy; the frequency of repetition of influencing pulses; the probability of a bit error; the number of errors in a transferred data packet; the data rate, etc.

Journal ArticleDOI
TL;DR: A blind calibration algorithm based on the Fast Fourier Transform (FFT) is proposed for the gain, offset and timing mismatches in a two-channel TIADC system; it needs no extra circuits and no training signal and can dynamically track changes in the mismatches.
Abstract: Time-interleaved analog-to-digital conversion (TIADC) is an effective way to improve the sampling rate of an analog-to-digital converter (ADC) system. However, the unavoidable timing, gain and offset mismatches significantly degrade the performance of a TIADC. In this paper, a blind calibration algorithm based on the Fast Fourier Transform (FFT) is proposed for the gain, offset and timing mismatches in a two-channel TIADC system. Explicit amplitude relationships between the input signal and the spurs caused by the mismatches are derived in the frequency domain. With these relationships, the frequency component of the input signal with the maximal energy is used to estimate the gain and timing mismatches, while the amplitude spectrum of the spur caused by offset mismatch is used to estimate the offset mismatch. The proposed algorithm needs no extra circuits and no training signal and can dynamically track changes in the mismatches. Simulations show that the estimation errors are no more than 4%. Finally, a two-channel TIADC prototype is used to verify and demonstrate the proposed algorithm.
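
The spur relationships the calibration relies on are easy to reproduce for gain and offset mismatch (timing mismatch is omitted). A minimal two-channel demo; the amplitude factors follow the standard derivation, not necessarily the paper's exact estimator:

```python
import numpy as np

n = 4096
m = np.arange(n)
fin = 301 / n                          # input frequency, coherent bin 301
x = np.sin(2 * np.pi * fin * m)

# Two sub-ADCs with gain and offset mismatch sample even/odd instants.
g0, g1, o0, o1 = 1.00, 0.98, 0.010, -0.006
y = np.where(m % 2 == 0, g0 * x + o0, g1 * x + o1)

spec = np.abs(np.fft.rfft(y)) / n
# Offset mismatch creates a spur at fs/2 of amplitude |o0 - o1| / 2 ...
print("estimated |o0 - o1|:", 2 * spec[n // 2])        # ~0.016
# ... and gain mismatch an image at fs/2 - fin whose one-sided bin
# magnitude is |g0 - g1| / 4 for a unit-amplitude sine input.
print("estimated |g0 - g1|:", 4 * spec[n // 2 - 301])  # ~0.02
```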

Journal ArticleDOI
TL;DR: This work shows that alternate test strategies can be implemented on-chip using analog/RF Built-In Self-Test (BIST) circuitry, and that model refinement and dynamic adaptation can be achieved with an automatic on-chip learning structure.
Abstract: Analog/RF alternate test schemes have been extensively studied in the past decade with the goal of replacing time-consuming and expensive specification tests with low-cost alternate measurements. A common approach in analog/RF alternate test is to build non-linear regression models to map the specification tests to alternate measurements, or to learn a pass/fail separation boundary directly in the space of alternate measurements. Among the various challenges that have been discussed in alternate test, model stationarity is a major bottleneck that prevents test engineers from deploying alternate test in long-term applications. In this work, we show that alternate test strategies can be implemented on-chip using analog/RF Built-In Self-Test (BIST) circuitry. Moreover, model refinement and dynamic adaptation can be achieved based on an automatic on-chip learning structure. The effectiveness of the proposed approach is demonstrated using experimental results from an RF Low Noise Amplifier (LNA) and its BIST implementation.
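
A minimal sketch of the alternate-test regression idea with a toy device population: fit a polynomial from an alternate measurement to a specification, then refit as drifted devices arrive, standing in for the paper's on-chip learning:

```python
import numpy as np

rng = np.random.default_rng(5)

def measure(alt):
    # Toy device population: one alternate measurement (e.g. an on-chip
    # detector reading) nonlinearly related to a specification (e.g. gain).
    return 15.0 + 2.0 * alt - 0.3 * alt ** 2 + rng.normal(0.0, 0.1, alt.size)

alt = rng.normal(0.0, 1.0, 200)
spec = measure(alt)
coeff = np.polyfit(alt, spec, deg=3)       # nonlinear regression model

# Model adaptation: a later, drifted lot is folded into the training set
# so the mapping stays valid.
alt_new = rng.normal(0.3, 1.0, 50)
spec_new = measure(alt_new)
coeff = np.polyfit(np.append(alt, alt_new), np.append(spec, spec_new), deg=3)

pred = np.polyval(coeff, alt_new)
rms = np.sqrt(np.mean((pred - spec_new) ** 2))
print(f"RMS spec prediction error: {rms:.3f}")
```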

Journal ArticleDOI
TL;DR: From experimental results and theoretical analysis, it is concluded that the dielectric properties of indium phosphide manifest nonlinear behavior under an electric field intensity of $10^{5}$ V/m.
Abstract: In this letter, a microwave cavity for investigating the effect of external microwave fields on the dielectric behavior of semiconductor material is proposed. We use a dual-mode rectangular cavity in which the stimulus and test signals are supplied by two different swept-frequency microwave sources. By adjusting the power level of the stimulus signal, the intensity of the microwave field in the cavity is changed. Two band-stop filters are introduced to isolate the signals coming from the stimulus source. Measurement results show that the dielectric properties of indium phosphide manifest nonlinear behavior under an electric field intensity of $10^{5}$ V/m. From the experimental results and theoretical analysis, we conclude that the nonlinear behavior is caused by the material's inherent characteristics.

Journal ArticleDOI
TL;DR: An optimized encoding algorithm, usable in conjunction with any LFSR-reseeding scheme, effectively reduces test storage and power consumption, achieving higher test compression efficiency than existing methods while significantly reducing test power with acceptable area overhead for most benchmark circuits.
Abstract: Massive test data volume and excessive test power consumption have become two strict challenges for very-large-scale integrated circuit testing. In a BIST architecture, the unspecified bits are randomly filled by an LFSR reseeding-based test compression scheme, which produces enormous switching activity during circuit testing, thereby causing high test power consumption in the scan design. To solve this thorny problem, an LFSR reseeding-oriented low-power test-compression architecture is developed, and an optimized encoding algorithm is introduced that works in conjunction with any LFSR-reseeding scheme to effectively reduce test storage and power consumption; it includes test-cube-based block processing, division into hold-partition sets, and updating of the hold-partition sets. The main contribution is to decrease logic transitions in the scan chains and reduce the specified bits in the test cubes generated via LFSR reseeding. Experimental results demonstrate that the proposed scheme achieves higher test compression efficiency than existing methods while significantly reducing test power consumption with acceptable area overhead for most benchmark circuits.
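
For reference, the expansion half of LFSR reseeding is sketched below: a stored seed is expanded by the LFSR into a full pattern. Computing seeds from the specified bits of test cubes (linear-equation solving) and the paper's hold-partition, low-power filling are not shown:

```python
def lfsr_expand(seed, taps, width, n_bits):
    """Expand a seed into a bit stream with a Fibonacci LFSR.

    taps are the bit positions XORed into the feedback; width is the
    register length. In LFSR reseeding, a stored seed replaces a full test
    cube: the cube's specified bits constrain the seed, and the LFSR fills
    the unspecified bits.
    """
    state, out = seed, []
    for _ in range(n_bits):
        out.append(state & 1)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
    return out

# Feedback polynomial x^16 + x^14 + x^13 + x^11 + 1 (maximal length),
# seed 0xACE1; taps (0, 2, 3, 5) are the right-shift form of 16/14/13/11.
pattern = lfsr_expand(0xACE1, taps=(0, 2, 3, 5), width=16, n_bits=32)
print("".join(map(str, pattern)))
```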

Journal ArticleDOI
TL;DR: Efficient logical-to-physical address remapping techniques are proposed in this paper to reconstruct the constituent cells of codewords such that faulty cells can be evenly distributed into different codewords.
Abstract: Error correction code (ECC) and built-in self-repair (BISR) techniques have been widely used for improving the yield and reliability of embedded memories; their targets are transient faults and hard faults, respectively. Recently, ECC has also been considered a promising solution for correcting hard errors to further enhance the fabrication yield of memories. However, if the number of faulty bits within a codeword is greater than the protection capability of the adopted ECC scheme, the protection becomes void. To remedy this drawback, efficient logical-to-physical address remapping techniques are proposed in this paper. The goal is to reconstruct the constituent cells of codewords such that faulty cells are evenly distributed into different codewords. A heuristic algorithm suitable for built-in implementation is presented for address remapping analysis, and the corresponding built-in remapping analysis circuit is derived; it can easily be integrated into a conventional built-in self-repair (BISR) module. A simulator is developed to evaluate the hardware overhead and repair rate. According to the experimental results, the repair rate can be improved significantly with negligible hardware overhead.
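
A toy sketch of the remapping goal: choose a logical-to-physical permutation that spreads clustered faulty words across codewords so a SEC-style code sees at most one fault each. The stride search is a stand-in for the paper's built-in remapping analysis algorithm:

```python
import numpy as np

n_words, group = 64, 4                 # 4 physical words per ECC codeword

faulty = np.zeros(n_words, dtype=bool)
faulty[[5, 6, 7, 40]] = True           # clustered hard-faulty words

def worst_codeword(phys):
    # phys[i] = physical word holding logical word i; codeword j groups
    # logical words 4j .. 4j+3. Return the worst-case faults per codeword.
    return max(int(faulty[phys[i:i + group]].sum())
               for i in range(0, n_words, group))

logical = np.arange(n_words)
# Stride-based remapping: physical = logical * s mod 64 for odd s; search
# for the stride that best spreads the faulty cells across codewords.
best = min(range(1, n_words, 2),
           key=lambda s: worst_codeword(logical * s % n_words))
print("identity mapping:", worst_codeword(logical), "faults in worst codeword")
print(f"stride {best}:", worst_codeword(logical * best % n_words))
```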

Journal ArticleDOI
TL;DR: A time-varying dynamic model is proposed to solve the multimode fault diagnosis problem; fault diagnosis based on this model is realized by inference calculation given the test results, formulated as an optimization problem.
Abstract: Dynamic fault diagnosis must consider complex fault situations such as fault evolution, coupling, unreliable tests and so on. Previous dynamic fault diagnostic models and inference algorithms are mainly designed for steady-state systems and are not suitable for multimode systems. In this paper, a time-varying dynamic model to solve the multimode fault diagnosis problem is proposed, and its structure and formulation are presented. Fault diagnosis based on this model is realized by means of inference calculation given the test results, which is formulated as an optimization problem, and a new algorithm to solve this problem is proposed. Simulation experiments on different scenarios are carried out to validate the model and the algorithm. As an example, the case of a satellite electrical power system is studied in detail. Both the simulation results and the application results show that the proposed method can solve the dynamic fault diagnosis problem for multimode systems under complex circumstances such as uncertain tests and system delay.