Showing papers in "IET Information Security" in 2011


Journal Article
TL;DR: The author shows that the improved scheme provides stronger authentication than Li-Hwang's scheme and other related schemes by verifying the user's biometric and password and by using random nonces generated by the user and the server.
Abstract: The author first reviews the recently proposed Li-Hwang biometric-based remote user authentication scheme using smart cards and then shows that the scheme has several design flaws. An improvement of the scheme is proposed in order to withstand these flaws. The author also shows that the improved scheme provides stronger authentication than Li-Hwang's scheme and other related schemes by verifying the user's biometric and password and by using random nonces generated by the user and the server.

228 citations


Journal Article
TL;DR: The authors establish a link between the correlation coefficient and the conditional entropy in side-channel attacks and show that both measures are equally suitable to compare devices with respect to their susceptibility to DPA attacks.
Abstract: In this study, the authors examine the relationship between and the efficiency of different approaches to standard (univariate) differential power analysis (DPA) attacks. The authors first show that, when fed with the same assumptions about the target device (i.e. with the same leakage model), the most popular approaches such as using a distance-of-means test, correlation analysis and Bayes attacks are essentially equivalent in this setting. Differences observed in practice are not because of differences in the statistical tests but because of statistical artefacts. Then, the authors establish a link between the correlation coefficient and the conditional entropy in side-channel attacks. In a first-order attack scenario, this relationship allows linking currently used metrics to evaluate standard DPA attacks (such as the number of power traces needed to perform a key recovery) with an information theoretic metric (the mutual information). The authors' results show that in the practical scenario defined formally in this study, both measures are equally suitable to compare devices with respect to their susceptibility to DPA attacks. Together with observations regarding key and algorithm independence, the authors consequently extend theoretical strategies for the sound evaluation of leaking devices towards the practice of side-channel attacks.
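As a rough illustration of the kind of univariate attack being compared, the sketch below runs a correlation-based DPA (CPA) with a Hamming-weight leakage model against simulated traces. The substitution table, trace dimensions and noise level are assumptions made up for the example, not material from the paper.

```python
import numpy as np

# Hypothetical correlation-based DPA (CPA) with a Hamming-weight leakage model.
# The S-box stand-in, the traces and the noise level are all illustrative.
SBOX = np.random.default_rng(0).permutation(256)        # stand-in for a real S-box
HW = np.array([bin(x).count("1") for x in range(256)])  # Hamming weights of 0..255

def cpa_best_key(plaintexts, traces):
    """Return the key byte whose predicted leakage correlates best with the traces."""
    scores = []
    for k in range(256):
        model = HW[SBOX[plaintexts ^ k]]                 # predicted leakage per trace
        corr = [abs(np.corrcoef(model, traces[:, t])[0, 1]) for t in range(traces.shape[1])]
        scores.append(max(corr))                         # keep the strongest peak over time
    return int(np.argmax(scores))

# toy simulation: 500 traces of 20 samples, leakage of the S-box output at sample 7
rng = np.random.default_rng(1)
key = 0x3C
pts = rng.integers(0, 256, 500)
traces = rng.normal(0.0, 1.0, (500, 20))
traces[:, 7] += HW[SBOX[pts ^ key]]
print(hex(cpa_best_key(pts, traces)))                    # recovers 0x3c
```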

191 citations


Journal Article
TL;DR: The authors propose a new method that uses single-class learning to detect unknown malware families based on examining the frequencies of the appearance of opcode sequences to build a machine-learning classifier using only one set of labelled instances within a specific class of either malware or legitimate software.
Abstract: Malware is any type of malicious code that has the potential to harm a computer or network. The volume of malware is growing at a faster rate every year and poses a serious global security threat. Although signature-based detection is the most widespread method used in commercial antivirus programs, it consistently fails to detect new malware. Supervised machine-learning models have been used to address this issue. However, the use of supervised learning is limited because it needs a large amount of malicious code and benign software to be labelled first. In this study, the authors propose a new method that uses single-class learning to detect unknown malware families. This method is based on examining the frequencies of the appearance of opcode sequences to build a machine-learning classifier using only one set of labelled instances within a specific class of either malware or legitimate software. The authors performed an empirical study that shows that this method can reduce the effort of labelling software while maintaining high accuracy.
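A minimal sketch of the single-class idea described above, assuming opcode sequences have already been extracted from each program; the toy opcode vocabulary, the 2-gram frequency features and the use of scikit-learn's OneClassSVM are illustrative assumptions rather than the authors' exact pipeline.

```python
from collections import Counter
import numpy as np
from sklearn.svm import OneClassSVM

VOCAB = ["mov", "push", "pop", "call", "jmp", "xor", "add", "cmp"]   # toy opcode set
PAIRS = [(a, b) for a in VOCAB for b in VOCAB]

def features(opcodes):
    """Relative frequencies of opcode 2-grams for one program."""
    grams = Counter(zip(opcodes, opcodes[1:]))
    total = max(sum(grams.values()), 1)
    return np.array([grams[p] / total for p in PAIRS])

# train on labelled instances of a single class only (e.g. known legitimate software)
benign_corpus = [["push", "mov", "call", "pop", "mov", "add"] * 10,
                 ["mov", "cmp", "jmp", "mov", "push", "call"] * 10,
                 ["push", "call", "mov", "add", "pop", "cmp"] * 10]
X = np.array([features(p) for p in benign_corpus])
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X)

suspect = ["xor", "xor", "jmp", "xor", "call", "xor"] * 10           # unusual opcode profile
print(clf.predict([features(suspect)]))                              # -1 marks an outlier
```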

56 citations


Journal Article
TL;DR: Experimental results have proven the effectiveness of the steganalysis method for detecting the covert channel in compressed VoIP speech and for accurately estimating the embedded message length.
Abstract: A network covert channel is a passage along which information leaks across the network in violation of security policy in a completely undetectable manner. This study reports the authors' findings from analysing the principle of the G.723.1 codec: there are `unused` bits in G.723.1-encoded audio frames, which can be used to embed secret messages. A novel steganalysis method that employs second detection and regression analysis is suggested in this study. The proposed method can not only detect a hidden message embedded in compressed voice over Internet protocol (VoIP) speech, but also accurately estimate the embedded message length. The method is based on second statistics, that is, performing a second steganography (embedding information in a sampled speech at one embedding rate and then embedding further information at a different level of data embedding) in order to estimate the hidden message length. Experimental results have proven the effectiveness of the steganalysis method for detecting the covert channel in compressed VoIP speech.
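The 'second statistics' idea can be sketched as follows: re-embed random bits at several known rates, measure how a simple statistic responds, and regress back to the unknown original embedding rate. The statistic chosen here (the fraction of one-bits among 'unused' positions, assuming they are all zero in clean frames) and the linear model are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(bits, rate, rng):
    """Overwrite a random fraction `rate` of positions with uniformly random message bits."""
    out = bits.copy()
    idx = rng.random(bits.size) < rate
    out[idx] = rng.integers(0, 2, int(idx.sum()))
    return out

# toy cover: the 'unused' bits of clean frames are assumed to be all zero
cover = np.zeros(200_000, dtype=np.int64)
hidden_rate = 0.37                                    # unknown to the analyst
stego = embed(cover, hidden_rate, rng)

# second embedding at several known rates, then a linear regression
qs = np.linspace(0.0, 0.9, 10)
ones = [embed(stego, q, rng).mean() for q in qs]      # observed fraction of one-bits
slope, intercept = np.polyfit(qs, ones, 1)            # model: ones = q/2 + (1 - q) * p1
estimate = 2 * intercept                              # since p1 = hidden_rate / 2
print(round(estimate, 3))                             # close to 0.37
```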

46 citations


Journal Article
TL;DR: This paper gives an original construction of 2m-variable balanced RSBFs with maximum AI, and improves the construction to obtain more 2m-variable balanced RSBFs with maximum AI; these new RSBFs have higher non-linearity than all previously obtained RSBFs.
Abstract: Rotation symmetric Boolean functions (RSBFs) that are invariant under circular translation of indices have been used as components of different cryptosystems. In this paper, even-variable balanced RSBFs with maximum algebraic immunity (AI) are investigated. At first, we give an original construction of 2m-variable balanced RSBFs with maximum AI. Then we improve the construction to obtain more 2m-variable balanced RSBFs with maximum AI, and these new RSBFs have higher non-linearity than all previously obtained RSBFs. Further, we generalise our construction of 2m-variable RSBFs to a new construction that can generate any even-variable RSBFs.
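For readers unfamiliar with RSBFs: such a function takes the same value on every cyclic rotation of its input vector, so it is fully determined by one output bit per rotation orbit. The short sketch below simply enumerates those orbits for a chosen number of variables; it is background illustration, not the paper's construction.

```python
def rotations(x, n):
    """All cyclic rotations of an n-bit value x."""
    return {((x << i) | (x >> (n - i))) & ((1 << n) - 1) for i in range(n)}

def orbit_representatives(n):
    """One canonical representative (the minimum) per rotation orbit on n-bit inputs."""
    seen, reps = set(), []
    for x in range(1 << n):
        if x not in seen:
            orbit = rotations(x, n)
            seen |= orbit
            reps.append(min(orbit))
    return reps

# an n-variable RSBF is fixed by choosing one output bit per orbit representative
reps = orbit_representatives(6)
print(len(reps))   # 14 orbits for n = 6, hence 2**14 RSBFs in 6 variables
```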

41 citations


Journal Article
TL;DR: A robust watermarking scheme for multiple cover images and multiple owners is proposed that makes use of the visual cryptography technique, transform domain technique, chaos technique, noise reduction technique and error correcting code technique to enhance the robustness of the scheme.
Abstract: Watermarking is a technique to protect the copyright of digital media such as images, text, music and movies. In this study, a robust watermarking scheme for multiple cover images and multiple owners is proposed. The proposed scheme makes use of the visual cryptography (VC) technique, transform domain technique, chaos technique, noise reduction technique and error correcting code technique, where the VC technique provides the capability to protect the copyright of multiple cover images for multiple owners, and the remaining techniques are applied to enhance the robustness of the scheme.

40 citations


Journal Article
TL;DR: This study proposes a new CIVCS that can be based on any VCS, including those with a general access structure, and shows that it can avoid all the above drawbacks.
Abstract: Most cheating immune visual cryptography schemes (CIVCS) are based on a traditional visual cryptography scheme (VCS) and are designed to avoid cheating when the secret image of the original VCS is to be recovered. However, all the known CIVCS have some drawbacks. The most common drawbacks include the following: the scheme needs an online trusted authority, or it requires additional shares for the purpose of verification, or it sacrifices the properties of the original VCS by means of pixel expansion and contrast reduction, or it can only be based on VCSs with specific access structures. In this study, the authors propose a new CIVCS that can be based on any VCS, including those with a general access structure, and show that their CIVCS can avoid all the above drawbacks. Moreover, their CIVCS is indifferent to whether the underlying operation is OR or XOR.

40 citations


Journal Article
TL;DR: Experimental results show that with a clock speed of 40 MHz, an IrisCode is obtained in less than 523 ms from an image of 640x480 pixels, which is just 20% of the total time needed by a software solution running on the same microprocessor embedded in the architecture.
Abstract: This paper describes the implementation of an iris recognition algorithm based on hardware-software co-design. The system architecture consists of a general-purpose 32- bit microprocessor and several slave coprocessors that accelerate the most intensive calculations. The whole iris recognition algorithm has been implemented on a low-cost Spartan 3 FPGA, achieving significant reduction in execution time when compared to a conventional software-based application. Experimental results show that with a clock speed of 40 MHz, an IrisCode is obtained in less than 523 ms from an image of 640x480 pixels, which is just 20% of the total time needed by a software solution running on the same microprocessor embedded in the architecture.

38 citations


Journal Article
TL;DR: This study shows that the initialisation procedure of the two ciphers admits a sliding property, resulting in several sets of related-key pairs, and questions the validity of the security proofs of protocols that are based on the assumption that SNOW 3G and SNOW 2.0 behave like perfect random functions of the key-IV.
Abstract: SNOW 3G is a stream cipher chosen by the 3rd Generation Partnership Project (3GPP) as a crypto-primitive to substitute KASUMI in case its security is compromised. SNOW 2.0 is one of the stream ciphers chosen for the ISO/IEC standard IS 18033-4. In this study, the authors show that the initialisation procedure of the two ciphers admits a sliding property, resulting in several sets of related-key pairs. In the case of SNOW 3G, a set of 2^32 related-key pairs is presented, whereas in the case of SNOW 2.0, several such sets are found, of which the largest are of size 2^64 and 2^192 for the 128-bit and 256-bit variants of the cipher, respectively. In addition to allowing related-key recovery attacks against SNOW 2.0 with 256-bit keys, the presented properties reveal non-random behaviour that yields related-key distinguishers and also questions the validity of the security proofs of protocols that are based on the assumption that SNOW 3G and SNOW 2.0 behave like perfect random functions of the key-IV.

31 citations


Journal Article
TL;DR: The authors propose a new audio hash function based on the non-negative matrix factorisation (NMF) of mel-frequency cepstral coefficients (MFCCs) that achieves better performance, in terms of perceptual robustness and discrimination, than the available SVD-MFCC-based hash function.
Abstract: A robust audio hash function defines a feature vector that characterises the audio signal independently of content-preserving manipulations such as MP3 compression, amplitude boosting/cutting, low-pass filtering etc. In this study, the authors propose a new audio hash function based on the non-negative matrix factorisation (NMF) of mel-frequency cepstral coefficients (MFCCs). Their work is motivated by the fact that the orthogonality constraints in the singular value decomposition (SVD) make the low-rank singular vectors of audio signals with distinct local differences be the same. Thus, the available hash function based on the SVD of MFCCs cannot achieve satisfactory discrimination. In contrast, the non-negative constraints of NMF result in a basis that captures the local features of the audio, thereby significantly reducing misclassification. Experimental results over large audio databases demonstrate that the proposed scheme achieves better performance, in terms of perceptual robustness and discrimination, than the available SVD-MFCC-based hash function.
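A hedged sketch of the general recipe (NMF applied to an MFCC matrix, then binarised into a hash); the MFCC input is assumed to be precomputed, and the factorisation rank, binarisation rule and Hamming-distance comparison are illustrative choices rather than the paper's exact parameters.

```python
import numpy as np
from sklearn.decomposition import NMF

def audio_hash(mfcc, rank=2, seed=0):
    """Binary hash built from the NMF basis of a (coefficients x frames) MFCC matrix."""
    mfcc = np.abs(mfcc)                        # NMF requires non-negative input
    model = NMF(n_components=rank, init="random", random_state=seed, max_iter=500)
    w = model.fit_transform(mfcc)              # basis capturing local spectral structure
    return (w > w.mean(axis=0)).flatten()      # 1 where an entry exceeds its column mean

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))

# toy usage with a random stand-in for real MFCC features
rng = np.random.default_rng(0)
mfcc_a = rng.normal(size=(13, 200))
mfcc_b = mfcc_a + rng.normal(scale=0.01, size=mfcc_a.shape)   # mildly distorted copy
print(hamming(audio_hash(mfcc_a), audio_hash(mfcc_b)))        # small distance: similar audio
```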

28 citations


Journal Article
TL;DR: The authors' approach achieved a high detection rate and very low false-positive rate for the tested metamorphic virus classes, and the system is also able to learn new patterns of viruses for future recognition.
Abstract: Metamorphic virus recognition is the most challenging task for antivirus software, because such viruses are the hardest to detect as they change their appearance and structure on each new infection. In this study, the authors present an effective system for metamorphic virus recognition based on statistical machine learning techniques. The authors' approach achieved a high detection rate and very low false-positive rate for the tested metamorphic virus classes. The system is also able to learn new patterns of viruses for future recognition. The authors conclude with an analysis of their simulation results and future enhancements to the system for detecting other virus classes.

Journal Article
TL;DR: An active non-invasive attack to inject faults during the execution of the algorithm and setup time violation attacks by under-powering and overclocking an application-specific integrated circuit are presented.
Abstract: Fault attacks are real threats against hardware implementations of robust cryptographic algorithms such as the advanced encryption standard (AES). The authors present an active non-invasive attack to inject faults during the execution of the algorithm and describe setup time violation attacks by under-powering and overclocking an application-specific integrated circuit. Then a security evaluation against setup time violation attacks is presented for several AES architectures on two field programmable gate array (FPGA) brands, namely Altera Stratix and Xilinx Virtex5. The authors observe that the architecture of the substitution box greatly impacts the fault statistics, that these statistics furthermore differ depending on the FPGA vendor, and that it is more difficult to inject a single fault in the most recent technology. The use-cases also show how difficult it is to predict the most vulnerable resource in an FPGA. Finally, a low-cost countermeasure against this kind of attack is presented.

Journal Article
Xinchao Li, Ju Liu, Jiande Sun, Xiaohui Yang, Wei Liu
TL;DR: Experimental results indicate that the step projection-based approach can incorporate the perceptual model into the STDM framework more effectively, thereby providing a significant improvement in image fidelity.
Abstract: Quantisation index modulation (QIM) is an important class of watermarking methods that has been widely used in blind watermarking applications. It is well known that spread transform dither modulation (STDM), as an extension of QIM, has good robustness against random noise and re-quantisation. However, the quantisation step sizes used in STDM are random numbers that do not take features of the image into account. The authors present a step projection-based approach to incorporate a perceptual model into the STDM framework. Four implementations of the proposed algorithm are further presented according to different modified versions of the perceptual model. Experimental results indicate that the step projection-based approach can incorporate the perceptual model into the STDM framework more effectively, thereby providing a significant improvement in image fidelity. Compared with previously proposed modified schemes of STDM, the authors' best-performing implementation provides powerful resistance against common attacks, especially robustness against Gaussian noise, salt-and-pepper noise and JPEG compression.
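To make the STDM baseline concrete, here is a minimal dither-modulation sketch: the host vector is projected onto a spread direction and the projection is quantised to one of two interleaved lattices depending on the message bit. The step size, spread vector and noise level are simplified assumptions, not the authors' perceptually adapted scheme.

```python
import numpy as np

def stdm_embed(host, spread, bit, step):
    """Quantise the projection of `host` onto `spread` to embed one bit."""
    u = spread / np.linalg.norm(spread)
    proj = host @ u
    dither = (step / 2) * bit                      # bit 1 uses a half-step shifted lattice
    q = step * np.round((proj - dither) / step) + dither
    return host + (q - proj) * u                   # move only along the spread direction

def stdm_detect(received, spread, step):
    """Decide the bit by the nearer of the two lattices."""
    u = spread / np.linalg.norm(spread)
    proj = received @ u
    d0 = abs(proj - step * np.round(proj / step))
    d1 = abs(proj - (step * np.round((proj - step / 2) / step) + step / 2))
    return 0 if d0 <= d1 else 1

rng = np.random.default_rng(0)
host = rng.normal(size=64)                         # e.g. a block of transform coefficients
spread = rng.normal(size=64)
marked = stdm_embed(host, spread, bit=1, step=2.0)
noisy = marked + rng.normal(scale=0.2, size=64)    # mild random noise
print(stdm_detect(noisy, spread, step=2.0))        # 1
```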


Journal Article
TL;DR: An approach to the security assessment of the information systems of critical infrastructures is presented, based on the faithful reconstruction of the evaluated information system in a computer security laboratory followed by simulations of possible threats against the system.
Abstract: This study presents an approach to the security assessment of the information systems of critical infrastructures. The approach is based on the faithful reconstruction of the evaluated information system in a computer security laboratory, followed by simulations of possible threats against the system. The evidence collected during the experiments, stored and organised using a proprietary system, InSAW, may later be used for the creation of trust cases which provide valuable information for the end users of the infrastructure. Another new proposal is MAlSim, a mobile agent-based simulator of malicious software (viruses, worms, etc.). To the best of the authors' knowledge, such a simulator has not been proposed before. The present approach was applied to the verification of the security of industrial control systems and power plants. The study describes one of the experiments related to the security assessment of a power plant's information system: a simulation of a zero-day worm attack.

Journal Article
TL;DR: In this paper, the authors propose several criteria on P and its inverse P^-1 to characterise the existence of 3/4-round impossible differentials for Rijndael and ARIA.
Abstract: Impossible differential cryptanalysis is a very popular tool for analysing the security of modern block ciphers and the core of such an attack is the existence of impossible differentials. Currently, most methods for finding impossible differentials are based on the miss-in-the-middle technique and are very ad hoc. In this study, the authors concentrate on substitution-permutation network (SPN) ciphers whose diffusion layer is defined by a linear transformation P. Based on the theory of linear algebra, the authors propose several criteria on P and its inverse P^-1 to characterise the existence of 3/4-round impossible differentials. The authors further discuss the possibility of extending these methods to analyse 5/6-round impossible differentials. Using these criteria, impossible differentials for reduced-round Rijndael are found that are consistent with the ones found before. New 4-round impossible differentials are discovered for the block cipher ARIA. Many 4-round impossible differentials are detected for the first time for a kind of SPN cipher that employs a 32×32 binary matrix proposed at ICISC 2006 as its diffusion layer. It is concluded that the linear transformation should be carefully designed in order to protect the cipher against impossible differential cryptanalysis.

Journal Article
TL;DR: The authors demonstrate that nanoscale phenomena can be applied not only at the device level but also in higher-layer applications such as secure computation, and compare the approach with classical secure computation algorithms.
Abstract: Traditionally, secure computation protocols have been established using variants of public key cryptology whose security is based on hard mathematical problems. However, classical protocols will become insecure owing to the emergence of quantum algorithms like Shor's. In this paper, the authors demonstrate that nanoscale phenomena can be applied not only at the device level but also in higher-layer applications such as secure computation. The authors study the possibility of performing secure computation by building non-local machines based on quantum entanglement and non-locality, which are phenomena available only at the nanometre scale. Compared with classical secure computation algorithms, the security of this protocol is based on physical laws instead of any unproven mathematical conjecture.

Journal Article
TL;DR: Three popular low-cost irreducible polynomials - trinomial, pentanomial and all-one polynomial - are proposed and designed in this study, and the results indicate that the proposed non-XOR architecture can reduce space complexity by 22% compared with the traditional design.
Abstract: Finite field arithmetic has been widely used in many cryptosystems, particularly in the elliptic curve cryptosystem (ECC) and the advanced encryption standard (AES), as a method for speeding up their encryption/decryption processes. Low-cost designs for finite field arithmetic are attractive for various mobile applications. A large number of exclusive-OR (XOR) gates are used in arithmetic operations under traditional finite field arithmetic implementations, so the cost of traditional finite field arithmetic cannot be effectively lowered, because a typical XOR gate design consists of 12 transistors. To address this, a novel non-XOR approach consisting of eight transistors, for realising a low-cost polynomial basis (PB) multiplier over GF(2^m), was developed in this study. The proposed non-XOR architecture for the bit-parallel PB multiplier uses the multiplexer function instead of the traditional XOR function in its design. Based on the proposed non-XOR methodology, designs for three popular low-cost irreducible polynomials - trinomial, pentanomial and all-one polynomial - are proposed in this study. The results indicate that the proposed non-XOR architecture can reduce space complexity by 22% compared with the traditional design.
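The transistor-count argument presumably rests on the identity a XOR b = (b if a = 0, NOT b if a = 1), i.e. an XOR can be realised with a 2-to-1 multiplexer and an inverter. The bit-level check below is a functional illustration of that substitution under this assumption, not the paper's multiplier architecture.

```python
def mux(sel, in1, in0):
    """2-to-1 multiplexer on single bits: returns in1 when sel is 1, otherwise in0."""
    return (sel & in1) | ((1 - sel) & in0)

def xor_via_mux(a, b):
    """a XOR b realised as MUX(sel=a, in1=NOT b, in0=b)."""
    return mux(a, 1 - b, b)

# exhaustive check of the identity on single bits
for a in (0, 1):
    for b in (0, 1):
        assert xor_via_mux(a, b) == a ^ b
print("XOR realised with a multiplexer and an inverter")
```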

Journal Article
TL;DR: In this article, the authors take a close look at Kerberos' encryption, and confirm that most of the options in the current version provably provide privacy and authenticity, although some require slight modifications which they suggest.
Abstract: Kerberos is a widely deployed network authentication protocol currently being considered for standardisation. Many works have analysed its security, identifying flaws and often suggesting fixes, thus promoting the protocol's evolution. Several recent results present successful, formal methods-based verifications of a significant portion of the current version, v.5 and some even imply security in the computational setting. For these results to hold, encryption in Kerberos should satisfy strong cryptographic security notions. However, prior to the authors' work, none of the encryption schemes currently deployed as part of Kerberos, nor their proposed revisions, were known to provably satisfy such notions. The authors take a close look at Kerberos' encryption, and they confirm that most of the options in the current version provably provide privacy and authenticity, although some require slight modifications which they suggest. The authors' results complement the formal methods-based analysis of Kerberos that justifies its current design.

Journal Article
TL;DR: This study presents a new impossible differential attack on a reduced version of Camellia-256 without FL/FL^-1 functions and whitening, and introduces a new extension of the hash table technique and exploits it to attack 16 rounds of Camellia-256.
Abstract: Camellia, a 128-bit block cipher that has been accepted by ISO/IEC as an international standard, is increasingly being used in many cryptographic applications. In this study, the authors present a new impossible differential attack on a reduced version of Camellia-256 without FL/FL^-1 functions and whitening. First, the authors introduce a new extension of the hash table technique and then exploit it to attack 16 rounds of Camellia-256. When, in an impossible differential attack, the size of the target subkey space is large and the filtration, in the initial steps of the attack, is performed slowly, the extended hash table technique will be very useful. The proposed attack on Camellia-256 requires 2^124.1 known plaintexts and has a running time equivalent to about 2^249.3 encryptions. In terms of the number of attacked rounds, our result is the best published attack on Camellia-256.

Journal Article
TL;DR: This study describes the design and implementation of a system that identifies the writer using offline Arabic handwritten text, achieving a test identification accuracy of up to 96% for Arabic text.
Abstract: The identification of a person on the basis of scanned images of handwriting is a useful biometric technique with application in forensic document analysis. This study describes the design and implementation of a system that identifies the writer using offline Arabic handwritten text. The key point is using multiple features to capture different aspects of handwriting individuality and to operate at different levels of analysis, with the aim of improving identification performance. Fuzzy logic (FL) and a genetic algorithm (GA) have been used in a complementary fashion to fuse (combine) the extracted features as well as to deal with the ambiguity of human judgement of handwriting similarity. The GA is used to help construct and tune the fuzzy membership functions that are necessary to categorise, through FL, the strength of similarity between handwriting features, with the purpose of yielding high correct identification rates. The final results indicate that the proposed system achieves an excellent test identification accuracy of up to 96% for Arabic text.

Journal Article
TL;DR: This study presents an extension of OLSR, called COD-OLSR, which provides security for OLSR in the case of incorrect message generation attacks, which can occur in two forms (identity spoofing and link spoofing).
Abstract: The design of routing protocols for mobile ad hoc networks rarely contemplates hostile environments. Consequently, it is common to add security extensions afterwards. One of the most important routing protocols is the optimised link state routing (OLSR) protocol, which in its specification assumes the trust of all nodes in the network, making it vulnerable to different kinds of attacks. This study presents an extension of OLSR, called COD-OLSR, which provides security for OLSR in the case of incorrect message generation attacks, which can occur in two forms (identity spoofing and link spoofing). One of its main features is that it takes into account the current topology of the node sending the message. The behaviour of COD-OLSR against different attackers in a variety of situations is evaluated. The simulation results show that COD-OLSR adds a slight overhead to OLSR and barely affects performance. The results also show that COD-OLSR is an interesting alternative for providing integrity in OLSR compared with classical mechanisms that make use of cryptography, which are more complex and have a higher overhead.

Journal Article
TL;DR: The authors propose two new steganalytic approaches that explore the distortions introduced into the qDCT coefficient histogram and the dependencies existing in the intra-block and inter-block sense, respectively; these two alternative steganalysers can detect JPEG-CES effectively.
Abstract: Recently, a new high-performance JPEG steganography with a complementary embedding strategy (JPEG-CES) was presented. It can disable many specific steganalysers such as the Chi-square family and S family detectors, which have been used to attack J-Steg, JPHide, F5 and OutGuess successfully. In this work, a study on the security performance of JPEG-CES is reported. Our theoretical analysis demonstrates that in this algorithm, the number of different quantised discrete cosine transform (qDCT) coefficients and the symmetry of the qDCT coefficient histogram will both be disturbed when the secret message is embedded. Moreover, the intrinsic sign and magnitude dependencies existing in intra-block and inter-block qDCT coefficients will be disturbed too. Thus it may be detected by some modern universal steganalysers which can catch these disturbances. In this work, the authors propose two new steganalytic approaches. By exploring the distortions introduced into the qDCT coefficient histogram and the dependencies existing in the intra-block and inter-block sense, respectively, these two alternative steganalysers can detect JPEG-CES effectively. In addition, by merging the features of these two steganalysers, a more reliable classifier can be obtained.
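As a rough illustration of the histogram-based side of such detectors, the sketch below computes two simple features from an array of quantised DCT coefficients: the number of distinct non-zero coefficient values and an asymmetry measure of the histogram around zero. The features, the Laplacian stand-in data and any thresholding are illustrative assumptions, not the authors' classifiers.

```python
import numpy as np

def histogram_features(qdct):
    """Two toy features from quantised DCT coefficients (zero bins excluded)."""
    coeffs = qdct[qdct != 0]
    distinct = int(np.unique(coeffs).size)             # number of different qDCT values
    hist = {}
    for c in coeffs:
        hist[int(c)] = hist.get(int(c), 0) + 1
    # asymmetry of the histogram around zero: sum over k of |h(k) - h(-k)|, normalised
    ks = range(1, int(np.abs(coeffs).max()) + 1)
    asym = sum(abs(hist.get(k, 0) - hist.get(-k, 0)) for k in ks) / max(coeffs.size, 1)
    return distinct, asym

rng = np.random.default_rng(0)
cover = rng.laplace(scale=2.0, size=50_000).round().astype(int)   # stand-in for qDCT data
print(histogram_features(cover))
```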

Journal Article
TL;DR: The proposed SCAL GNB multiplier is the first normal basis multiplier to have both on-line error-detection and off-line testing capabilities, and it can detect both permanent and transient faults.
Abstract: This work develops a novel self-checking alternating logic (SCAL) bit-parallel Gaussian normal basis (GNB) multiplier of type t over GF(2^m). The proposed GNB multiplier has both concurrent error-detection and off-line testing capabilities. The concurrent error-detection capability provides a countermeasure against fault-based cryptanalysis. The off-line testing capability supports the design-for-test property. The proposed SCAL GNB multiplier can detect both permanent and transient faults. The proposed SCAL GNB multiplier is the first normal basis multiplier to have both on-line error-detection and off-line testing capabilities.
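The alternating-logic principle behind such designs can be illustrated independently of GNB arithmetic: a circuit realising a self-dual function is evaluated twice, once with true inputs and once with complemented inputs, and an error is flagged unless the two results are complementary. The toy demonstration below, on a small self-dual function with an injected stuck-at fault, is background illustration only and not the authors' multiplier.

```python
def f(x1, x2, x3, stuck_at_one=False):
    """Toy self-dual function (3-input XOR); optional stuck-at-1 fault on input x2."""
    if stuck_at_one:
        x2 = 1                       # injected permanent fault
    return x1 ^ x2 ^ x3

def alternating_check(inputs, faulty=False):
    """Evaluate with true inputs, then with complemented inputs; outputs must differ."""
    y_true = f(*inputs, stuck_at_one=faulty)
    y_comp = f(*[1 - x for x in inputs], stuck_at_one=faulty)
    return y_true != y_comp          # True means the check passes for a self-dual f

print(alternating_check((0, 1, 1)))                      # True: fault-free circuit passes
print(all(alternating_check((a, b, c), faulty=True)      # False: the stuck-at fault is caught
          for a in (0, 1) for b in (0, 1) for c in (0, 1)))
```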

Journal Article
TL;DR: In removing watermark energy from 100 randomly selected watermarked images in which watermarks were embedded using the ‘broken arrows (BA)’ algorithm proposed for the second breaking the authors' watermarking system (BOWS-2) contest, the mean PSNR of 100 predicted images is 24.1 dB and the proposed approach successfully removed watermarks from 90 of these images.
Abstract: Most watermark-removal methods treat watermarks as noise and apply denoising approaches to remove them. However, denoising methods remove not only the watermark energy, but also some of the energy of the original image. A trade-off therefore exists: if not enough of the watermark is removed, then it will still be detected by probabilistic methods, but if too much is removed, the image quality will be noticeably poor. To solve this problem, the relationships among the energies of the original image, the watermark and the watermarked image are initially determined using stochastic models. Then the energy of the watermark is estimated using just-noticeable-distortion (JND). Finally, the watermark energy is removed from the watermarked image using the energy distribution of its Eigen-images. The experimental results show that the proposed approach yields a mean peak signal-to-noise ratio (PSNR) of the predicted images that is 2.2 dB higher than that obtained using the adaptive Wiener filter, and a mean normalised correlation (NC) value of the extracted watermarks that is 0.27 lower than that obtained using the adaptive Wiener filter. In removing watermark energy from 100 randomly selected watermarked images in which watermarks were embedded using the 'broken arrows (BA)' algorithm proposed for the second Break Our Watermarking System (BOWS-2) contest, the mean PSNR of the 100 predicted images is 24.1 dB, and the proposed approach successfully removed watermarks from 90 of these images. This result exceeds the minimum requirement of a PSNR of 20 dB for the BOWS-2 contest. Clearly, the proposed approach is very effective at removing watermarks.
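The quality and detection figures quoted above follow the standard definitions of PSNR (in dB) and normalised correlation; a minimal sketch of those two metrics, with made-up arrays standing in for real images and watermarks, is given below.

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def normalised_correlation(w_embedded, w_extracted):
    """NC between an embedded watermark and an extracted one."""
    a = w_embedded.astype(float).ravel()
    b = w_extracted.astype(float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
attacked = np.clip(img + rng.normal(scale=5.0, size=img.shape), 0, 255)
wm = rng.choice([-1.0, 1.0], 256)
print(round(psnr(img, attacked), 1), round(normalised_correlation(wm, wm), 2))
```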

Journal Article
TL;DR: The authors propose an efficient image secret sharing scheme that can resist cheating attacks and allows an authorised participant to reveal a lossless secret image and to further restore the valued host image without distortion.
Abstract: The secret-sharing mechanism has been widely applied to the e-commerce, communications and multimedia fields. With sufficient shares, the involved participants can cooperate to reveal the secret data. Fraudulent participants, however, may provide a fake shadow in order to fool others. Consequently, cheating prevention has become a critical requirement for secret sharing systems. In this article, the authors propose an efficient image secret sharing scheme that can resist cheating attacks. Simulation results show that the novel scheme is effective for cheating detection and cheater identification. In particular, the new method allows an authorised participant to reveal a lossless secret image and to further restore the valued host image without distortion. The reversibility of the secret sharing system provides practicability and widespread potential for preserving medical, military and artistic images.

Journal Article
TL;DR: A construction is proposed for a 4×4 linear diffusion layer that can intermix four words of arbitrary size with branch number 5, and it is extended to an 8×8 diffusion layer using low-cost linear functions; the efficiency of the proposed diffusion layer is demonstrated in a nested SPN structure.
Abstract: One of the most important structures used in modern block ciphers is the substitution-permutation network (SPN) structure. Many block ciphers with this structure widely use maximum distance separable (MDS) matrices over finite fields as their diffusion layers; for example, the advanced encryption standard (AES) uses a 4×4 MDS matrix as the main part of its diffusion layer and the block cipher Khazad has an involutory 8×8 matrix. In this study, first a construction is proposed for a 4×4 linear diffusion layer that can intermix four words of arbitrary size with branch number 5. This idea is then extended to an 8×8 diffusion layer using low-cost linear functions. In this construction, certain binary linear combinations of the inputs are first fed into two or three different invertible linear functions and then combined using the XOR operation. In order to show the efficiency of the proposed diffusion layer, the authors exploit it in a nested SPN structure and compare its efficiency with some well-known diffusion layers such as the diffusion layer of Hierocrypt.
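The branch-number claim can be checked mechanically for matrix-based diffusion layers: a 4×4 matrix over a finite field has (differential) branch number 5 exactly when every square submatrix is non-singular, i.e. when it is MDS. The sketch below verifies this for the AES MixColumns matrix over GF(2^8) as a familiar reference point; it is background illustration, not the authors' construction, which builds its diffusion layer from low-cost linear functions rather than an MDS matrix.

```python
from itertools import combinations

def gf_mul(a, b):
    """Multiplication in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_det(m):
    """Determinant over GF(2^8) by cofactor expansion (addition is XOR, no sign terms)."""
    if len(m) == 1:
        return m[0][0]
    det = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        det ^= gf_mul(m[0][j], gf_det(minor))
    return det

def is_mds(m):
    """True iff every square submatrix is non-singular (branch number n + 1 for an n x n matrix)."""
    n = len(m)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                sub = [[m[r][c] for c in cols] for r in rows]
                if gf_det(sub) == 0:
                    return False
    return True

# AES MixColumns matrix circ(2, 3, 1, 1): a classic 4x4 MDS diffusion layer
MC = [[2, 3, 1, 1], [1, 2, 3, 1], [1, 1, 2, 3], [3, 1, 1, 2]]
print(is_mds(MC))   # True, i.e. branch number 5
```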