
Showing papers on "Image file formats" published in 2020


Proceedings ArticleDOI
14 Jun 2020
TL;DR: This work introduces the Smartphone Photography Attribute and Quality (SPAQ) database, consisting of 11,125 pictures taken by 66 smartphones, each carrying the richest annotations to date, and makes the first attempt to train blind image quality assessment (BIQA) models constructed from baseline and multi-task deep neural networks.
Abstract: As smartphones become people’s primary cameras to take photos, the quality of their cameras and the associated computational photography modules has become a de facto standard in evaluating and ranking smartphones in the consumer market. We conduct the most comprehensive study to date of perceptual quality assessment of smartphone photography. We introduce the Smartphone Photography Attribute and Quality (SPAQ) database, consisting of 11,125 pictures taken by 66 smartphones, where each image is accompanied by the richest annotations to date. Specifically, we collect a series of human opinions for each image, including image quality, image attributes (brightness, colorfulness, contrast, noisiness, and sharpness), and scene category labels (animal, cityscape, human, indoor scene, landscape, night scene, plant, still life, and others) in a well-controlled laboratory environment. The exchangeable image file format (EXIF) data for all images are also recorded to aid deeper analysis. We also make the first attempt to use the database to train blind image quality assessment (BIQA) models constructed from baseline and multi-task deep neural networks. The results provide useful insights into how EXIF data, image attributes, and high-level semantics interact with image quality, how next-generation BIQA models can be designed, and how better computational photography systems can be optimized on mobile devices. The database, along with the proposed BIQA models, is available at https://github.com/h4nwei/SPAQ.
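The EXIF block recorded for each image is a TIFF-structured byte stream. As a minimal illustration of what that structure looks like (a hand-rolled sketch over a synthetic header; real pipelines would use a library such as Pillow, and the ExposureTime tag used here is purely for demonstration), the following parses the tag IDs out of the first image file directory:

```python
import struct

def parse_exif_tiff(data: bytes):
    """Return the tag IDs found in the first IFD (image file directory)
    of a TIFF-structured EXIF block."""
    endian = {b"II": "<", b"MM": ">"}[data[0:2]]            # byte-order mark
    magic, ifd_offset = struct.unpack(endian + "HI", data[2:8])
    assert magic == 42                                      # TIFF magic number
    count = struct.unpack(endian + "H", data[ifd_offset:ifd_offset + 2])[0]
    tags = []
    for i in range(count):
        entry = data[ifd_offset + 2 + 12 * i: ifd_offset + 14 + 12 * i]
        tags.append(struct.unpack(endian + "H", entry[0:2])[0])
    return tags

# Tiny little-endian header with one IFD entry (tag 0x829A = ExposureTime).
header = b"II" + struct.pack("<HI", 42, 8)
ifd = (struct.pack("<H", 1)
       + struct.pack("<HHI4s", 0x829A, 5, 1, b"\x00" * 4)   # one 12-byte entry
       + struct.pack("<I", 0))                              # offset of next IFD
print(parse_exif_tiff(header + ifd))  # [33434]
```

Fields such as exposure time, ISO, and f-number read this way are exactly the kind of side information the paper correlates with perceived quality.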

146 citations


Journal ArticleDOI
TL;DR: This paper proposes a method to estimate the downscaling factors of pre-JPEG-compressed images, i.e., images downscaled after JPEG compression, adopting the difference-image extremum interval histogram combined with a spectral method to obtain the final estimate.
Abstract: Resampling detection is one of the most important topics in image forensics, and the most widely used method in resampling detection is spectral analysis. Since JPEG is the most widely used image format, resampling operations are commonly performed on JPEG images. JPEG block artifacts bring severe interference to spectrum-based methods and degrade detection performance. In addition, the spectral characteristics of downscaling scenarios are very weak, so the detection of downscaling still presents a considerable challenge for forensic applications. In this paper, we propose a method to estimate the downscaling factors of pre-JPEG-compressed images, i.e., images that were downscaled after JPEG compression. We first analyze the spectrum of scaled images and give an exact formulation of how the scaling factors influence the appearance of periodic artifacts; the expected positions of the characteristic resampling peaks are analytically derived. In the downscaling scenario, the shifted JPEG block artifacts produce periodic peaks, which cause misdetection of the characteristic peaks. We find that the interval between adjacent extrema of difference images obeys a geometric distribution, and that the distribution has periodic peaks for JPEG images. Hence, we adopt the difference-image extremum interval histogram and combine it with the spectral method to obtain the final estimate. The experimental results demonstrate that the proposed detection method outperforms some state-of-the-art methods.
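The extremum-interval statistic can be sketched in one dimension. The code below is an illustrative analogue, not the paper's 2-D formulation: it histograms the gaps between adjacent local extrema of a difference signal, with a synthetic period-8 sinusoid standing in for JPEG block artifacts (whose extrema recur every half period):

```python
import numpy as np

def extremum_interval_hist(signal, max_gap=16):
    """Histogram of gaps between adjacent local extrema of the
    difference signal (1-D analogue of the paper's estimator)."""
    d = np.diff(signal.astype(float))          # difference "image"
    s = np.sign(np.diff(d))
    extrema = np.where(s[:-1] != s[1:])[0]     # sign changes mark extrema
    gaps = np.diff(extrema)
    return np.bincount(gaps, minlength=max_gap + 1)[:max_gap + 1]

# Synthetic period-8 structure standing in for JPEG block artifacts:
# extrema recur every half period, so the histogram peaks at 4.
n = np.arange(128)
hist = extremum_interval_hist(np.sin(2 * np.pi * (n + 0.3) / 8))
print(int(hist.argmax()))  # 4
```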

47 citations


Journal ArticleDOI
TL;DR: The results show a slight influence of image format and compression level on flat or nearly flat surfaces; for a complex 3D model, by contrast, the choice of format becomes important, and processing times were found to play a key role, especially in point cloud generation.
Abstract: The aim of this study is to evaluate the degradation in accuracy and image quality for the TIFF format and different compression levels of the JPEG format, compared with the raw images acquired by a UAV platform. Experiments were carried out using a DJI Mavic 2 Pro with a Hasselblad L1D-20c camera on three test sites. Post-processing of the images was performed using software based on structure-from-motion and multi-view stereo approaches. The results show a slight influence of image format and compression level on flat or nearly flat surfaces; for a complex 3D model, by contrast, the choice of format became important. Across all tests, processing times were also found to play a key role, especially in point cloud generation. The qualitative and quantitative analysis carried out on the different orthophotos allowed us to highlight a modest impact of the TIFF format and a strong influence of increasing JPEG compression levels.

29 citations


Posted Content
TL;DR: This paper will demonstrate that this strategy produces stego-images that have minimal distortion, high embedding efficiency, reasonably good stEGo-image quality and robustness against 3 well-known targeted steganalysis tools.
Abstract: Digital steganography is becoming a common tool for protecting sensitive communications in various applications such as crime(terrorism) prevention whereby law enforcing personals need to remotely compare facial images captured at the scene of crime with faces databases of known criminals(suspects); exchanging military maps or surveillance video in hostile environment(situations); privacy preserving in the healthcare systems when storing or exchanging patient medical images(records); and prevent bank customers accounts(records) from being accessed illegally by unauthorized users. Existing digital steganography schemes for embedding secret images in cover image files tend not to exploit various redundancies in the secret image bit-stream to deal with the various conflicting requirements on embedding capacity, stego-image quality, and un-detectibility. This paper is concerned with the development of innovative image procedures and data hiding schemes that exploit, as well as increase, similarities between secret image bit-stream and the cover image LSB plane. This will be achieved in two novel steps involving manipulating both the secret and the cover images,prior to embedding, to achieve higher 0:1 ratio in both the secret image bit-stream and the cover image LSB plane. The above two steps strategy has been exploited to use a bit-plane(s) mapping technique, instead of bit-plane(s) replacement to make each cover pixel usable for secret embedding. This paper will demonstrate that this strategy produces stego-images that have minimal distortion, high embedding efficiency, reasonably good stego-image quality and robustness against 3 well-known targeted steganalysis tools.
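The "0:1 ratio in the cover image LSB plane" that the two preparatory steps try to raise is straightforward to compute. A small sketch with hypothetical pixel values (illustration only, not the paper's mapping procedure):

```python
def lsb_zero_one_ratio(pixels):
    """Ratio of 0s to 1s in the least-significant-bit plane of a cover
    image -- the quantity the paper tries to raise before embedding."""
    ones = sum(p & 1 for p in pixels)
    zeros = len(pixels) - ones
    return zeros / max(ones, 1)

# Manipulating the cover (here: clearing every LSB) raises the ratio.
cover = [12, 37, 200, 91, 64, 255, 18, 73]      # hypothetical pixel values
before = lsb_zero_one_ratio(cover)
after = lsb_zero_one_ratio([p & ~1 for p in cover])   # LSB plane forced to 0
print(before, after)  # 1.0 8.0
```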

26 citations


Journal ArticleDOI
TL;DR: The research presents a comparison of file sizes for the original file, wavelet coding, Huffman coding, and the proposed algorithm; the smallest file size after compression, 1.12, is achieved by the proposed method.

23 citations


Journal ArticleDOI
TL;DR: MalJPEG is presented, the first machine learning-based solution tailored specifically to the efficient detection of unknown malicious JPEG images; it statically extracts 10 simple yet discriminative features from the JPEG file structure and leverages them with a machine learning classifier to discriminate between benign and malicious JPEG images.
Abstract: In recent years, cyber-attacks against individuals, businesses, and organizations have increased. Cyber criminals are always looking for effective vectors to deliver malware to victims in order to launch an attack. Images are used on a daily basis by millions of people around the world, and most users consider images to be safe for use; however, some types of images can contain a malicious payload and perform harmful actions. JPEG is the most popular image format, primarily due to its lossy compression. It is used by almost everyone, from individuals to large organizations, and can be found on almost every device (on digital cameras and smartphones, websites, social media, etc.). Because of their harmless reputation, massive use, and high potential for misuse, JPEG images are used by cyber criminals as an attack vector. While machine learning methods have been shown to be effective at detecting known and unknown malware in various domains, to the best of our knowledge, machine learning methods have not previously been used for the detection of malicious JPEG images. In this paper, we present MalJPEG, the first machine learning-based solution tailored specifically to the efficient detection of unknown malicious JPEG images. MalJPEG statically extracts 10 simple yet discriminative features from the JPEG file structure and leverages them with a machine learning classifier in order to discriminate between benign and malicious JPEG images. We evaluated MalJPEG extensively on a real-world representative collection of 156,818 images, which contains 155,013 (98.85%) benign and 1,805 (1.15%) malicious images. The results show that MalJPEG, when used with the LightGBM classifier, demonstrates the highest detection capabilities, with an area under the receiver operating characteristic curve (AUC) of 0.997, a true positive rate (TPR) of 0.951, and a very low false positive rate (FPR) of 0.004.
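Features "extracted statically from the JPEG file structure" can be as simple as the file size and per-marker segment counts. The sketch below is a toy parser over a synthetic JPEG skeleton (MalJPEG's exact ten features are defined in the paper and not reproduced here); it walks the marker stream up to the start-of-scan marker:

```python
import struct

def jpeg_marker_stats(data: bytes):
    """Walk the JPEG marker structure and collect simple structural
    statistics of the kind MalJPEG-style features are built from."""
    stats = {"file_size": len(data), "markers": {}}
    i = 2                                   # skip SOI (FF D8)
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        stats["markers"][marker] = stats["markers"].get(marker, 0) + 1
        if marker == 0xDA:                  # SOS: entropy-coded data follows
            break
        i += 2 + length                     # length covers its own 2 bytes
    return stats

# Minimal synthetic JPEG skeleton: SOI, one APP0 (JFIF) segment, SOS.
app0 = b"\xFF\xE0" + struct.pack(">H", 16) + b"JFIF\x00" + b"\x00" * 9
jpg = b"\xFF\xD8" + app0 + b"\xFF\xDA" + struct.pack(">H", 2)
print(jpeg_marker_stats(jpg)["markers"])  # {224: 1, 218: 1}
```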

23 citations


Journal ArticleDOI
TL;DR: The Nutil software is an open access and stand-alone executable software that enables automated transformations, post-processing, and analyses of 2D section images using multi-core processing (OpenMP).
Abstract: With recent technological advances in microscopy and image acquisition of tissue sections, further developments of tools are required for viewing, transforming, and analyzing the ever-increasing amounts of high-resolution data produced. In the field of neuroscience, histological images of whole rodent brain sections are commonly used for investigating brain connections as well as cellular and molecular organization in the normal and diseased brain, but present a problem for the typical neuroscientist with no or limited programming experience in terms of the pre- and post-processing steps needed for analysis. To meet this need we have designed Nutil, an open access and stand-alone executable software that enables automated transformations, post-processing, and analyses of 2D section images using multi-core processing (OpenMP). The software is written in C++ for efficiency, and provides the user with a clean and easy graphical user interface for specifying the input and output parameters. Nutil currently contains four separate tools: (1) A transformation toolchain named "Transform" that allows for rotation, mirroring and scaling, resizing, and renaming of very large tiled tiff images. (2) "TiffCreator" enables the generation of tiled TIFF images from other image formats such as PNG and JPEG. (3) A "Resize" tool completes the preprocessing toolset and allows downscaling of PNG and JPEG images with output in PNG format. (4) The fourth tool is a post-processing method called "Quantifier" that enables the quantification of segmented objects in the context of regions defined by brain atlas maps generated with the QuickNII software based on a 3D reference atlas (mouse or rat). The output consists of a set of report files, point cloud coordinate files for visualization in reference atlas space, and reference atlas images superimposed with color-coded objects. 
The Nutil software is made available by the Human Brain Project (https://www.humanbrainproject.eu) at https://www.nitrc.org/projects/nutil/.

21 citations


Journal ArticleDOI
TL;DR: The paper proposes converting the 2D + X data volume into a single meta-image file format prior to applying machine learning frameworks, and provides a three-category video database covering non-violent, moderate, and extreme violence actions.

20 citations


Journal ArticleDOI
Sophy Ai1, Jangwoo Kwon1
15 Jan 2020-Sensors
TL;DR: A new convolutional network, Attention U-net (a U-net with an integrated attention gate), which works on common file types with primary support from deep learning to address the problem of surveillance camera security in smart city environments without requiring the raw image file from the camera, and which performs under the most extreme low-light conditions.
Abstract: Low-light image enhancement is one of the most challenging tasks in computer vision, and it is actively researched and used to solve various problems. Most of the time, image processing achieves significant performance under normal lighting conditions. However, under low-light conditions, an image turns out noisy and dark, which makes subsequent computer vision tasks difficult. To make buried details more visible, and to reduce blur and noise in a low-light captured image, a low-light image enhancement step is necessary. Much research has been devoted to many different techniques; however, most of these approaches require considerable effort or expensive equipment to perform low-light image enhancement. For example, the image has to be captured as a raw camera file in order to be processed, and such methods do not perform well under extreme low-light conditions. In this paper, we propose a new convolutional network, Attention U-net (the integration of an attention gate and a U-net network), which works on common file types (.PNG, .JPEG, .JPG, etc.) with primary support from deep learning, to address the problem of surveillance camera security in smart city environments without requiring the raw image file from the camera; it can perform even under the most extreme low-light conditions.

20 citations


Journal ArticleDOI
TL;DR: An obstetric image diagnostic platform based on cloud computing technology that processes data quickly and is convenient to use, greatly reducing the cost of medical equipment and improving efficiency.
Abstract: Deep learning methods in the fields of computer vision and big data are becoming more and more mature. Through the application of big data and deep learning technology, artificial-intelligence diagnosis of medical images can be realized, which provides a new opportunity for the automatic analysis of obstetric medical images and for assisting doctors in high-precision intelligent diagnosis of diseases. Current medical obstetric image diagnosis platforms mainly target low-resolution obstetric image files and do not consider the data-sharing problem of a distributed file system across different storage nodes, which greatly reduces the efficiency of obstetric image storage and diagnosis. On this basis, this article designs an obstetric image diagnostic platform based on cloud computing technology. First, a medical imaging platform was designed by combining cloud computing technology, caching technology, and a distributed file system. Second, contrast-enhanced ultrasound provides more accurate ultrasound images for assessing the structure, size, location, and developmental abnormalities of the placenta. Finally, the effectiveness of the proposed obstetric imaging diagnostic platform is verified by experiments. The results show that the platform processes data quickly and is convenient to use, which greatly reduces the cost of medical equipment and improves efficiency. The hospital only needs to collect the patient's obstetric images at the front end, transfer them to the cloud for processing, and finally diagnose the disease.

20 citations


Journal ArticleDOI
TL;DR: This work deals with both passive forgeries (splicing and copy-move) simultaneously, and the proposed model gives good detection accuracy and high generalization capability, independent of image format.
Abstract: Digital images are a prominent carrier of visual information in this age of digitization and are becoming ever more omnipresent in everyday life. Images can be easily manipulated owing to the accessibility of many internet tools and advanced software. Many techniques have previously been developed to authenticate images, but all of them involve high-dimensional feature vectors. Here, low-dimensional DCT- and DWT-based features are introduced to authenticate images. In this work, we deal with both passive forgeries (splicing and copy-move) simultaneously. Features are extracted through image statistics and pixel correlation in the DCT and DWT domains. An ensemble classifier is used for training and testing; it classifies whether a given image is forged or authentic, and further classifies a forgery as spliced or copy-move. For copy-move forgeries, the proposed work also performs region detection using a novel keypoint-based method. The proposed model gives good detection accuracy and high generalization capability, independent of image format. Experimental results demonstrate the performance of the proposed work under different post-processing operations such as scaling, rotation, and Gaussian noise, and comparative results against different existing methods show the effectiveness of the proposed model.


Journal ArticleDOI
TL;DR: The proposed algorithm is the first steganography algorithm that can work for multiple cover image formats; it utilizes concepts like capacity pre-estimation, adaptive partition schemes, and data spreading to embed secret data with enhanced security.
Abstract: This paper presents an image steganography algorithm that can work for cover images of multiple formats. Having a single algorithm for multiple image types provides several advantages: for example, we can apply uniform security policies across all image formats, and we can adaptively select the most suitable cover image based on data length, network bandwidth, allowable distortions, etc. We present our algorithm based on the abstract concept of image components, which can be adapted for JPEG, Bitmap, TIFF, and PNG cover images. To the best of our knowledge, the proposed algorithm is the first steganography algorithm that can work for multiple cover image formats. In addition, we utilize concepts like capacity pre-estimation, adaptive partition schemes, and data spreading to embed secret data with enhanced security. The proposed method is tested for robustness against steganalysis, with favorable results. Moreover, comparative results for the proposed algorithm are very promising for three different cover image formats.

Proceedings ArticleDOI
02 Jul 2020
TL;DR: This system uses an Android phone to capture an image of the document, with further steps performed by OCR; it offers 90% accuracy for handwritten documents and provides an easy way to edit or share the recognized data.
Abstract: Developing an Android application for character recognition that reads text from an image is a large area of research. Nowadays, there is a trend toward storing information from handwritten documents for future use. A simple way to store this information is to capture an image of the handwritten document and save it in an image format. The method for transforming handwritten data into electronic format is Optical Character Recognition (OCR). It involves several steps, including pre-processing, segmentation, feature extraction, and post-processing. Many researchers have used OCR for character recognition. This system uses an Android phone to capture an image of the document, and further steps are performed by OCR. The main challenge is recognizing characters across different styles of handwriting. Thus, a system is designed that recognizes handwritten data to obtain editable text. The output of the system depends on the data written by the writer. Our system offers 90% accuracy for handwritten documents and provides an easy way to edit or share the recognized data.

Journal ArticleDOI
TL;DR: The results presented in this article show that lossy image compression can impair the efficiency of edge detection by up to six percent.

Journal ArticleDOI
TL;DR: A robust and blind color image steganography algorithm, using fractal cover images, Singular Value Decomposition (SVD), Integer Wavelet Transform (IWT), and Discrete Wavelet transform (DWT) to hide the presence of secret information, is proposed in this paper.

Journal ArticleDOI
24 Mar 2020
TL;DR: Experimental results from observed listening tests show that there is no significant difference between the stego audio reconstructed from the novel technique and the original signal.
Abstract: We present a novel robust and secure steganography technique to hide images inside audio files, aiming to increase the carrier medium's capacity. The audio files are in the standard WAV format, and hiding is based on the LSB algorithm, while images are compressed by the GMPR technique, which is based on the discrete cosine transform and a high-frequency minimization encoding algorithm. The method involves compression-encryption of an image file by the GMPR technique, followed by hiding it in the audio data by appropriate bit substitution. For LSB audio steganography, a maximum of 6 LSBs can be used without a significant effect on the audio signal; in the proposed method, the encrypted image bits are hidden in variable and multiple LSB layers. Experimental results from observed listening tests show that there is no significant difference between the stego-audio reconstructed by the novel technique and the original signal. A performance evaluation has been carried out according to the quality measurement criteria of signal-to-noise ratio and peak signal-to-noise ratio.
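The variable-depth LSB substitution can be sketched directly as a round trip (illustrative code with made-up 16-bit sample values; the paper additionally compresses and encrypts the image with GMPR before embedding):

```python
def embed_bits(samples, bits, depth=1):
    """Hide a bit-string in the `depth` least-significant bits of
    audio samples (depth <= 6 per the paper's listening tests)."""
    out = list(samples)
    for i in range(0, len(bits), depth):
        chunk = bits[i:i + depth]
        mask = (1 << len(chunk)) - 1
        out[i // depth] = (out[i // depth] & ~mask) | int(chunk, 2)
    return out

def extract_bits(samples, nbits, depth=1):
    bits = ""
    for s in samples:
        bits += format(s & ((1 << depth) - 1), f"0{depth}b")
        if len(bits) >= nbits:
            break
    return bits[:nbits]

secret = "101100111010"                      # stand-in for the image bit-stream
stego = embed_bits([1000, 2000, 3000, 4000, 5000, 6000], secret, depth=2)
print(extract_bits(stego, len(secret), depth=2) == secret)  # True
```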

Journal ArticleDOI
TL;DR: MagellanMapper is a software suite designed for visual inspection and end‐to‐end automated processing of large‐volume, 3D brain imaging datasets in a memory‐efficient manner and leverages established open‐source computer vision libraries.
Abstract: MagellanMapper is a software suite designed for visual inspection and end-to-end automated processing of large-volume, 3D brain imaging datasets in a memory-efficient manner. The rapidly growing number of large-volume, high-resolution datasets necessitates visualization of raw data at both macro- and microscopic levels to assess the quality of data, as well as automated processing to quantify data in an unbiased manner for comparison across a large number of samples. To facilitate these analyses, MagellanMapper provides both a graphical user interface for manual inspection and a command-line interface for automated image processing. At the macroscopic level, the graphical interface allows researchers to view full volumetric images simultaneously in each dimension and to annotate anatomical label placements. At the microscopic level, researchers can inspect regions of interest at high resolution to build ground truth data of cellular locations such as nuclei positions. Using the command-line interface, researchers can automate cell detection across volumetric images, refine anatomical atlas labels to fit underlying histology, register these atlases to sample images, and perform statistical analyses by anatomical region. MagellanMapper leverages established open-source computer vision libraries and is itself open source and freely available for download and extension. © 2020 Wiley Periodicals LLC. 
Basic Protocol 1: MagellanMapper installation
Alternate Protocol: Alternative methods for MagellanMapper installation
Basic Protocol 2: Import image files into MagellanMapper
Basic Protocol 3: Region of interest visualization and annotation
Basic Protocol 4: Explore an atlas along all three dimensions and register to a sample brain
Basic Protocol 5: Automated 3D anatomical atlas construction
Basic Protocol 6: Whole-tissue cell detection and quantification by anatomical label
Support Protocol: Import a tiled microscopy image in proprietary format into MagellanMapper.

Journal ArticleDOI
TL;DR: The proposed method proved more efficient, outperforming existing methods in character-level percentage accuracy; it extracted English character-based texts from images with complex backgrounds with 69.7% word-level accuracy and 81.9% character-level accuracy.
Abstract: Extracting texts from images with complex backgrounds is a major challenge today. Many existing Optical Character Recognition (OCR) systems cannot handle this problem. As reported in the literature, some existing methods that can handle the problem still encounter major difficulties when extracting texts from images with sharply varying contours, touching words, and skewed words in scanned documents and images with such complex backgrounds. There is, therefore, a need for new methods that can easily and efficiently extract texts from these images with complex backgrounds, which is the primary motivation for this work. This study collected image data and investigated the processes involved in image processing and the techniques applied for data segmentation. It employed an adaptive thresholding algorithm on the selected images to properly segment text characters from the image's complex background. It then used Tesseract, a machine learning product, to extract the text from the image file. The images used were colour images sourced from the internet in different formats (jpg, png, webp) and different resolutions. A custom adaptive algorithm was applied to the images to unify their complex backgrounds. This algorithm leveraged the Gaussian thresholding algorithm; it differs from the conventional Gaussian algorithm in that it dynamically generates the block size used to threshold the image. This ensured that, unlike conventional image segmentation, images were processed area-wise (in pixels) as specified by the algorithm at each instance. The system was implemented using the Python 3.6 programming language. Experimentation involved fifty different images with complex backgrounds. The results showed that the system was able to extract English character-based texts from images with complex backgrounds with 69.7% word-level accuracy and 81.9% character-level accuracy. The proposed method in this study proved to be more efficient, as it outperformed the existing methods in terms of character-level percentage accuracy.
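The dynamically sized, mean-based thresholding step can be sketched as follows. This is an assumption-laden illustration: the study's exact block-size rule is not given here, so the block is simply derived from the image dimensions, and the implementation is a slow reference loop rather than the optimized routine (e.g. OpenCV's adaptiveThreshold) one would use in practice:

```python
import numpy as np

def adaptive_threshold(img, block=None, c=2):
    """Mean-based adaptive thresholding with an image-dependent block
    size (a stand-in for the study's dynamically generated blocksize)."""
    h, w = img.shape
    if block is None:
        block = max(3, (min(h, w) // 8) | 1)      # odd, derived from image size
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local = padded[y:y + block, x:x + block]
            out[y, x] = 255 if img[y, x] > local.mean() - c else 0
    return out

# Dark "text" (low values) on a bright gradient background:
img = np.tile(np.linspace(100, 200, 16), (16, 1)).astype(np.uint8)
img[8, 4:12] = 10                                  # one stroke of text
binary = adaptive_threshold(img)
print(int(binary[8, 8]))  # stroke pixels fall below the local mean -> 0
```

The binarized output would then be handed to Tesseract (e.g. via pytesseract) for the actual text extraction.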

Proceedings ArticleDOI
26 May 2020
TL;DR: A comparative performance evaluation of the newly proposed AV1 Image File Format (AVIF) versus other state-of-the-art image codecs for natural, synthetic, and gaming images, finding that AVIF delivers the best overall performance.
Abstract: This paper presents a comparative performance evaluation of the newly proposed AV1 Image File Format (AVIF) versus other state-of-the-art image codecs for natural, synthetic, and gaming images. The codecs are compared in terms of rate-quality curves and BD-rate savings under different quality metrics. AVIF achieves the best overall performance for both 4:2:0 and 4:4:4 chroma-subsampled encoded images.
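BD-rate, the comparison metric used here, averages the bitrate gap between two rate-quality curves over their overlapping quality range. A compact sketch with synthetic rate/PSNR points (illustration only; production evaluations typically follow the reference Bjontegaard implementation):

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta-rate: average bitrate difference (%) between two
    rate-quality curves over their overlapping quality range."""
    p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)    # cubic fit of log-rate
    p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)           # mean log-rate gap
    return (np.exp(avg_diff) - 1) * 100

# A codec that needs 20% fewer bits at every quality point:
r1 = np.array([100, 200, 400, 800]); q = np.array([30, 34, 38, 42])
print(round(bd_rate(r1, q, 0.8 * r1, q), 1))  # -20.0
```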

Journal ArticleDOI
TL;DR: In this paper, session key dependent image encryption (SKE) is proposed, where the session key is a function of an original secret key (shared once between sender and receiver at the outset) and the current secret image to be encrypted.
Abstract: Increased use of the internet demands substantial protection for secret image files from any adversary, specifically during transmission. In the field of cryptography there are two roles: the cryptographer and the cryptanalyst (attacker). The cryptographer develops techniques to ensure the safety and security of transmissions, while the cryptanalyst attempts to undo the former’s work by cracking them. The basic goal of our scheme is to design an image encryption model that is especially challenging to attack. In this research article, we introduce a session-key-dependent image encryption technique wherein the session key is a function of an original secret key (shared once between sender and receiver at the outset) and the current secret image to be encrypted. Additionally, the scheme does not require extracting and remembering session keys to construct subsequent session keys, although the keys change with every transmission. Moreover, our scheme employs double encryption, which further supports the claim that the proposed technique is more robust than conventional image encryption techniques known to date and is capable of resisting such cyber-attacks.
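The session-key idea can be sketched with a hash construction (an assumption on my part; the paper's actual key function is not given here): deriving the key from the shared master secret and the current image makes it change with every transmission without any extra key exchange:

```python
import hashlib

def session_key(master_key: bytes, image_bytes: bytes) -> bytes:
    """Derive a per-transmission key from the shared secret and the
    current secret image (illustrative stand-in for the paper's
    unspecified key function)."""
    image_digest = hashlib.sha256(image_bytes).digest()
    return hashlib.sha256(master_key + image_digest).digest()

master = b"shared-once-at-setup"                 # hypothetical shared secret
k1 = session_key(master, b"image-one-bytes")     # hypothetical image payloads
k2 = session_key(master, b"image-two-bytes")
print(k1 != k2, len(k1))  # True 32
```

Because the receiver holds the master key and recovers the image, both sides can recompute the next session key without storing any previous one, matching the "no extracting and remembering of session keys" property.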

Proceedings ArticleDOI
01 Jan 2020
TL;DR: A new algorithm is proposed to satisfy the aims of steganography; it is compared with the BLIND HIDE steganography algorithm on the basis of accuracy, precision, recall, and F1-score.
Abstract: Steganography is the technique of hiding data within an image to prevent it from being unintentionally accessed by anyone else. The process involves a plain text and an image file. Motivated by the need for steganography, we propose a new algorithm that satisfies its aims. In our algorithm, we take a cover image file and a message, then consider the cover image's pixels and embed each bit of the secret text in turn until the last bit has been embedded. After this step, the data is hidden within the image. We then send the image file to the client, who applies the reverse process to retrieve the original text from the image. We compare our algorithm with the BLIND HIDE steganography algorithm on the basis of accuracy, precision, recall, and F1-score. We also compare the output image quality of both algorithms using the structural similarity measure to reach a proper consensus.
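The bit-by-bit embedding loop described above can be sketched as a round trip into pixel LSBs (toy pixel values; a real implementation would also bound the message by the cover capacity and transmit its length):

```python
def embed_text(pixels, message):
    """Embed each bit of the message into the LSB of consecutive pixels."""
    bits = "".join(format(b, "08b") for b in message.encode())
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | int(bit)     # overwrite the pixel's LSB
    return out

def extract_text(pixels, nchars):
    """Reverse process run by the client: collect LSBs, regroup into bytes."""
    bits = "".join(str(p & 1) for p in pixels[:nchars * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

cover = list(range(50, 90))           # stand-in for cover-image pixel values
stego = embed_text(cover, "hi")
print(extract_text(stego, 2))  # hi
```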

Journal ArticleDOI
TL;DR: This paper proposes ASSAF, a novel deep neural network architecture composed of a convolutional denoising autoencoder and a Siamese neural network, specially designed to detect steganography in JPEG images, and evaluates its novel architecture using the BOSSBase dataset.

Journal ArticleDOI
TL;DR: The prototype system based on proposed architecture is fully compliant with the DICOM standard, which can be seamlessly integrated with other existing medical systems or mobile applications, and used in various scenarios such as diagnosis, research, and education.

Proceedings ArticleDOI
01 Dec 2020
TL;DR: In this paper, a combination of file reduction and scheduling techniques from a system design level is used to ensure that image transfer timing constraints are satisfied, and the optimization of image file size and algorithms for image transfer are vital to ensure quality of service levels and expected time requirements for services.
Abstract: Images are an important part of digital communication in this era. The optimization of image file size and algorithms for image transfer are vital to ensure quality of service levels and expected time requirements for services. This paper describes and evaluates a combination of file reduction and scheduling techniques from a system design level to ensure that image transfer timing constraints are satisfied.

Journal ArticleDOI
30 Sep 2020-Symmetry
TL;DR: The proposed ImageDetox method can be utilized to neutralize malicious code hidden in an image file even in the absence of any prior information regarding the signatures or characteristics of the code and prevent security threats resulting from the concealment of confidential information in image files with the aim of leaking such threats.
Abstract: Malicious codes may cause virus infections or threats of ransomware through symmetric encryption. Moreover, various bypassing techniques such as steganography, which refers to the hiding of malicious code in image files, have been devised. Unknown or new malware hidden in an image file in the form of malicious code is difficult to detect using most representative reputation- or signature-based antivirus methods. In this paper, we propose the ImageDetox method to neutralize malicious code hidden in an image file even in the absence of any prior information regarding the signatures or characteristics of the code. This method is composed of four modules: image file extraction, image file format analysis, image file conversion, and the convergence of image file management modules. To demonstrate the effectiveness of the proposed method, 30 image files with hidden malicious codes were used in an experiment. The malicious codes were selected from 48,220 recent malicious codes obtained from VirusTotal (a commercial application programming interface (API)). The experimental results showed that the detection rate of viruses was remarkably reduced. In addition, image files from which the hidden malicious code had previously been removed using a nonlinear transfer function maintained nearly the same quality as that of the original image; in particular, the difference could not be distinguished by the naked eye. The proposed method can also be utilized to prevent security threats that arise when confidential information is concealed in image files for the purpose of leaking it.
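The paper's four conversion modules are not reproduced here, but one common neutralization step, stripping any payload appended after a PNG's terminating IEND chunk (a classic hiding spot for droppers), can be sketched as follows. This is a simplified illustration under our own assumptions, not the ImageDetox implementation:

```python
def strip_after_iend(data: bytes) -> bytes:
    """Return the PNG stream truncated right after its IEND chunk.

    Anything appended past IEND is discarded; the visible image, which
    ends at the IEND chunk's CRC, is untouched.
    """
    idx = data.find(b"IEND")
    if idx == -1:
        raise ValueError("no IEND chunk: not a complete PNG stream")
    # Keep the 4-byte chunk type plus the 4-byte CRC that follows it.
    return data[: idx + 8]
```

Note that this handles only trailing payloads; the paper's format conversion and nonlinear transfer function additionally destroy data hidden inside pixel values, which simple truncation cannot reach.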

Proceedings ArticleDOI
23 Oct 2020
TL;DR: The purpose of this research was to build face recognition software using the eigenface algorithm; the results showed that faces could be recognized with an average accuracy rate of 85%.
Abstract: The eigenface algorithm uses a collection of eigenvectors for face recognition by computer. The face recognition system is part of image processing and recognizes faces based on images captured and stored as image files in JPEG format. Face recognition problems can be solved through the implementation of an algorithm; the algorithm used in this study is the eigenface algorithm. The input image is captured through a laptop camera at 320 pixels x 240 pixels and reduced to 100 pixels x 100 pixels, then saved as a master face file covering various facial expressions: facing forward without a smile, facing forward with a thin smile, facing forward with a big smile, head tilted to the left, and head tilted to the right. The purpose of this research is to build face recognition software using the eigenface algorithm. The results showed that faces could be recognized using the eigenface algorithm with an average accuracy rate of 85%.
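The eigenface pipeline this abstract outlines (mean subtraction, principal components via the small-sample Gram-matrix trick, nearest-neighbor matching in eigenspace) can be sketched roughly as follows; the function names and interface are our own, not the study's:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_samples, n_pixels) array of flattened grayscale images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Small-sample trick: eigen-decompose the n x n Gram matrix instead of
    # the much larger n_pixels x n_pixels covariance matrix.
    vals, vecs = np.linalg.eigh(centered @ centered.T)
    top = np.argsort(vals)[::-1][:k]                  # k largest eigenvalues
    eigenfaces = (centered.T @ vecs[:, top]).T        # shape (k, n_pixels)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=1, keepdims=True)
    return mean, eigenfaces

def project(face, mean, eigenfaces):
    """Coordinates of one flattened face in the k-dimensional eigenspace."""
    return eigenfaces @ (face - mean)

def recognize(probe, gallery, labels, mean, eigenfaces):
    """Label of the gallery face closest to the probe in eigenspace."""
    q = project(probe, mean, eigenfaces)
    dists = [np.linalg.norm(q - project(g, mean, eigenfaces)) for g in gallery]
    return labels[int(np.argmin(dists))]
```

In practice each 100 x 100 master-file image would be flattened to a 10,000-element vector before training; the per-expression master files form the gallery.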

Book ChapterDOI
17 Jun 2020
TL;DR: Gaussian Mixture Models (GMMs) are proposed to augment a numerical dataset with additional data very similar to the original; the results demonstrate that the Mean Absolute Error decreases, meaning the regression model became more accurate.
Abstract: One of the biggest challenges in training supervised models is the lack of labeled data, which leads to overfitting and underfitting problems. One solution to this problem is data augmentation. There have been many developments in the augmentation of image files, especially in medical image datasets, either by applying changes to the original file, such as random cropping, flipping, and rotating, in order to create new sample files, or by using deep learning models such as Generative Adversarial Networks and Convolutional Neural Networks to generate similar samples. In numerical datasets, however, there have not been comparable advances. In this paper, we propose using Gaussian Mixture Models (GMMs) to augment a numerical dataset with additional data very similar to the original. The results demonstrated that the Mean Absolute Error decreases, meaning that the regression model became more accurate.
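The augmentation idea can be sketched for a single one-dimensional numerical feature: fit a GMM with a small expectation-maximization loop, then sample synthetic values from the fitted mixture. This is a minimal stdlib-only illustration; the paper does not publish its implementation, so the initialization and parameter choices here are our own:

```python
import math
import random
import statistics

def fit_gmm_1d(data, k, iters=50):
    """Fit a k-component 1-D Gaussian mixture with a basic EM loop."""
    srt = sorted(data)
    # Initialise means at evenly spaced quantiles, shared variance, equal weights.
    means = [srt[(2 * j + 1) * len(srt) // (2 * k)] for j in range(k)]
    vars_ = [statistics.pvariance(data)] * k
    weights = [1.0 / k] * k
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in data:
            probs = [w * math.exp(-((x - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)
                     for w, m, v in zip(weights, means, vars_)]
            total = sum(probs)
            resp.append([p / total for p in probs])
        # M-step: re-estimate weights, means, and variances.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = max(sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, data)) / nj, 1e-6)
            weights[j] = nj / len(data)
    return weights, means, vars_

def sample_gmm(weights, means, vars_, n, seed=1):
    """Draw n synthetic samples from the fitted mixture (the augmented data)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        j = rng.choices(range(len(weights)), weights=weights)[0]
        out.append(rng.gauss(means[j], math.sqrt(vars_[j])))
    return out
```

For multi-column datasets one would fit a multivariate mixture instead (e.g. scikit-learn's `GaussianMixture`); the sampling step is the same in spirit.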

Journal ArticleDOI
01 Feb 2020
TL;DR: The purpose of this application is to hide data or files, encrypted beforehand, within a cover image in order to help preserve the confidentiality of information or data at heavy equipment companies.
Abstract: Electronic documents are information created or stored in a form that can be read and processed by a computer or other electronic device. These documents consist of text, graphics, or spreadsheets. With current technological developments, security is very important for companies, since interception of information by irresponsible parties is difficult to avoid. One method that can be used to secure digital documents combines steganography and cryptography, using the Discrete Cosine Transform (DCT) and the Advanced Encryption Standard (AES-192) algorithm in a Java desktop application. The purpose of this application is to hide data or files within a cover image. Before being embedded in the cover image file, the file is encrypted with a symmetric key using the AES-192 algorithm. The benefit of this application is that the confidentiality of information or data at this heavy equipment company can be maintained well and safely; the application is expected to help preserve the confidentiality of the company's information or data.
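The AES-192 step uses a standard symmetric cipher and is omitted here; the DCT-domain embedding idea, hiding one bit in the parity of a quantized mid-frequency coefficient, can be sketched on a 1-D block. This is a generic illustration with our own coefficient choice, not the paper's exact scheme; a real pipeline works on 2-D 8x8 blocks and must also round the reconstructed pixel values:

```python
import math

def dct(block):
    """Unnormalized DCT-II of a 1-D block."""
    N = len(block)
    return [sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(block)) for k in range(N)]

def idct(coeffs):
    """Exact inverse (scaled DCT-III) of dct() above."""
    N = len(coeffs)
    return [(coeffs[0] / 2 + sum(coeffs[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                                 for k in range(1, N))) * 2 / N for n in range(N)]

def embed_bit(block, bit, k=3):
    """Hide one bit in the parity of the rounded k-th DCT coefficient."""
    c = dct(block)
    q = round(c[k])
    if (q & 1) != bit:
        q += 1                       # flip the parity
    c[k] = float(q)
    return idct(c)

def extract_bit(block, k=3):
    """Read the hidden bit back from the coefficient's parity."""
    return round(dct(block)[k]) & 1
```

Because the modified coefficient moves by at most about one unit, the spatial-domain change per sample is small, which is why DCT-domain embedding is hard to see.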

Journal ArticleDOI
TL;DR: This article presents a method that converts data from text files such as txt, json, html, and py into images (image files) in PNG format; the results obtained for the test files confirm that the presented method reduces the need for disk space and can also hide data in an image file.
Abstract: In the era of ubiquitous digitization and the Internet of Things (IoT), information plays a vital role. All types of data are collected, and some of this data is stored as text files. An important aspect, regardless of the type of data, is file storage, especially the amount of disk space required: the less space is used on storing data sets, the lower the cost of this service. Another important aspect of storing data warehouses in the form of files is the cost of the data transmission needed for file transfer and processing. Moreover, the stored data should be at least minimally protected against access and reading by other entities. The aspects mentioned above are particularly important for large data sets like Big Data. Considering the above criteria, i.e., minimizing storage space and data transfer while ensuring minimum security, the main goal of the article was to show a new way of storing text files. This article presents a method that converts data from text files such as txt, json, html, and py into images (image files) in PNG format. Taking into account criteria such as the output file size, the results obtained for the test files confirm that the presented method reduces the need for disk space and can also hide data in an image file. The described method can be used for texts saved in extended ASCII and UTF-8 encoding.
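The conversion the article describes can be illustrated with a minimal stdlib-only encoder that packs text bytes into an 8-bit grayscale PNG (one pixel per byte, filter type 0 per scanline) and a matching reader; the layout choices below are our own, not necessarily the article's:

```python
import struct
import zlib

def _chunk(tag, payload):
    """One PNG chunk: length, tag, payload, CRC over tag+payload."""
    data = tag + payload
    return struct.pack(">I", len(payload)) + data + struct.pack(">I", zlib.crc32(data))

def text_to_png(text: bytes, width: int = 64) -> bytes:
    """Pack text bytes as 8-bit grayscale pixels, padding the last row with zeros."""
    height = -(-len(text) // width)                    # ceiling division
    padded = text + b"\x00" * (width * height - len(text))
    # Each scanline is prefixed with filter type 0 (no filtering).
    raw = b"".join(b"\x00" + padded[y * width:(y + 1) * width] for y in range(height))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 0, 0, 0, 0)  # 8-bit grayscale
    return (b"\x89PNG\r\n\x1a\n" + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(raw)) + _chunk(b"IEND", b""))

def png_to_text(png: bytes, length: int) -> bytes:
    """Reverse of text_to_png; assumes exactly the layout produced above."""
    width, height = struct.unpack(">II", png[16:24])   # IHDR width and height
    tag = png.find(b"IDAT")
    (idat_len,) = struct.unpack(">I", png[tag - 4:tag])
    raw = zlib.decompress(png[tag + 4:tag + 4 + idat_len])
    rows = (raw[y * (width + 1) + 1:(y + 1) * (width + 1)] for y in range(height))
    return b"".join(rows)[:length]
```

The disk-space saving the article reports comes from the DEFLATE compression inside the IDAT chunk, which works well on repetitive text such as json or html.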