Answers from top 9 papers

Insights:
In this paper, we propose a new scheme of data hiding which takes advantage of the masking property of the Human Auditory System (HAS) to hide a secret (speech) signal into a host (speech) signal.
We then propose a scheme to hide synchronization marks through the modulation of the embedded signal by a content-based pseudo-random signal.
The reversible data hiding method can not only hide secret messages but also completely restore the original ECG signal after bit extraction.
The technique can be used to hide information in a signal that is then added to a carrier for a variety of applications.
On the other hand, the dispersion effect in the stealth channel can be employed to hide the stealth signal in the time domain.
Such unnecessary contribution to the overall signal can hide important features as well as decrease the accuracy of the centroid determination, especially with minor features.
The proposed method allows the ECG signal to hide its corresponding patient confidential data and other physiological information, thus guaranteeing the integration between the ECG and the rest of the patient data.
Intention to hide was higher than intention to unfriend contacts, implying that unfriending is harsher.
(2) Communications should emphasise that the app cannot enable the user to identify which of their contacts has reported COVID-19 symptoms or tested positive.
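
The schemes above differ widely, but the basic mechanics of hiding a payload in a host signal can be illustrated generically. The following minimal Python sketch uses least-significant-bit (LSB) embedding in a quantized waveform; it is a toy illustration, not the HAS-masking, synchronization, or reversible ECG schemes cited above, and the signal and payload are made-up examples.

```python
# Generic illustration (not any cited scheme): LSB data hiding in a
# quantized host signal, with exact recovery of the hidden bits.
import numpy as np

def embed(host, bits):
    """Overwrite the least significant bit of the first len(bits) host
    samples with the payload bits (a tiny, usually imperceptible change)."""
    stego = host.copy()
    stego[:len(bits)] = (stego[:len(bits)] & ~1) | bits
    return stego

def extract(stego, n_bits):
    """Read the payload back out of the least significant bits."""
    return stego[:n_bits] & 1

# Toy host signal: a 16-bit sine wave; payload: 64 random bits.
host = (np.sin(np.linspace(0, 20, 1000)) * 30000).astype(np.int16)
payload = np.random.randint(0, 2, 64).astype(np.int16)

stego = embed(host, payload)
assert np.array_equal(extract(stego, 64), payload)  # payload recovered exactly
```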

See what other people are reading

What are the different definitions of architect sensitivity in the design process?
4 answers
Architect sensitivity in the design process can be defined in various ways. Firstly, it involves creating architectures that consider the sounds of the immediate environment to enhance comfort and sensory perception. Secondly, it pertains to the robustness of architecture in accommodating changes and updates during the design process and product life-cycle, measured as design sensitivity through performance reserve and slack in meeting timing requirements. Additionally, architect sensitivity encompasses the ability to analyze the impact of uncertainties in property estimates on process design, identifying critical properties and their relationships with design variables. Lastly, architect sensitivity is honed through international exposure, broadening personal experiences through travel to enhance affinity to local nuances over global standards in architectural practice.
In SHAP, how do you deal with feature redundancy?
5 answers
In SHAP (SHapley Additive exPlanations), feature redundancy is addressed through a novel formalism that decomposes feature contributions into synergistic, redundant, and independent components, known as the S-R-I decomposition of SHAP vectors. This approach allows for a comprehensive understanding of pairwise feature dependencies and interactions in supervised models. Additionally, a systematic redundancy approach in watermarking using Sudoku has been proposed, where redundant watermarking is strategically embedded to enhance survivability against attacks, such as compression and noise. Furthermore, a self-adaptive process has been suggested to manage redundant clauses in CNF formulae, enabling the discrimination of redundant information while maintaining solver performance. These methods collectively demonstrate effective strategies for handling feature redundancy in various computational contexts.
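
The S-R-I decomposition is a research formalism, but redundancy-aware SHAP analysis can be approximated with off-the-shelf tooling. The sketch below assumes the shap package and scikit-learn: shap.utils.hclust groups features that carry redundant information about the label, and the bar plot merges groups above the chosen cutoff. The dataset, model, and 0.5 cutoff are illustrative choices, not part of the cited work.

```python
import shap
from sklearn.ensemble import RandomForestRegressor

# Small demo dataset bundled with shap.
X, y = shap.datasets.diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.Explainer(model)
shap_values = explainer(X)

# Hierarchically cluster features by redundancy with respect to the label,
# then let the bar plot merge feature groups sharing >50% of their information.
clustering = shap.utils.hclust(X, y)
shap.plots.bar(shap_values, clustering=clustering, clustering_cutoff=0.5)
```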
How to improve modeling resilience with artificial noise in neural networks?
5 answers
To enhance modeling resilience in neural networks using artificial noise, a novel noise injection-based training scheme has been proposed. This method involves estimating gradients for both synaptic weights and noise levels during stochastic gradient descent training, optimizing noise levels alongside synaptic weights. By incorporating noise into the network, the model's robustness against adversarial attacks can be significantly improved, as demonstrated in experiments on the MNIST and Fashion-MNIST datasets. Additionally, a method has been introduced to reduce label error rates and improve dataset quality by addressing noise density issues through a statistical probability-based label flipping process, enhancing the overall performance of neural network models trained on corrected datasets. These approaches collectively contribute to fortifying neural network models against various forms of noise and improving their overall resilience.
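
As a concrete illustration of the noise-injection idea, the PyTorch sketch below adds Gaussian noise with a trainable standard deviation to a layer's output, so stochastic gradient descent updates the noise level together with the weights. The layer sizes and initial noise scale are assumptions for the example, not taken from the cited papers.

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer whose outputs are perturbed by Gaussian noise with a
    trainable standard deviation, learned jointly with the weights."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.log_sigma = nn.Parameter(torch.tensor(-2.0))  # log of noise std

    def forward(self, x):
        out = self.linear(x)
        if self.training:
            # Reparameterization: gradients reach both the weights and
            # log_sigma through this additive noise term.
            out = out + torch.exp(self.log_sigma) * torch.randn_like(out)
        return out

# Example MNIST-sized network; one optimizer updates weights and noise levels.
model = nn.Sequential(NoisyLinear(784, 256), nn.ReLU(), NoisyLinear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
```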
How to deal with low batch sizes in neural networks?
4 answers
When dealing with low batch sizes in neural networks, it is essential to consider the impact on training performance and model outcomes. Research suggests that contrary to the common belief favoring larger batch sizes, smaller batches can actually lead to improved results in certain scenarios. Smaller batch sizes have been shown to enhance regularization of gradients during training, resulting in higher scores in a shorter training time, particularly in low-resource settings. Techniques like applying a proximal regularizer during optimization can stabilize gradients and improve training performance even with small batch sizes, offering a solution to the challenges posed by training with limited batch sizes. Additionally, starting training with a small batch size and gradually increasing it can enhance adversarial robustness while keeping training times manageable.
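
One way to realize the last strategy, starting small and growing the batch size, is sketched below with PyTorch data loaders; the synthetic dataset and the epoch-to-batch-size schedule are illustrative assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Synthetic stand-in dataset: 10,000 samples with 20 features each.
dataset = TensorDataset(torch.randn(10_000, 20), torch.randint(0, 2, (10_000,)))

def loader_for_epoch(epoch, schedule=((0, 16), (10, 64), (20, 256))):
    """Return a DataLoader whose batch size follows a warmup schedule:
    epochs 0-9 use 16, epochs 10-19 use 64, and epoch 20 onward uses 256."""
    batch_size = max(bs for start, bs in schedule if epoch >= start)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

for epoch in range(30):
    for features, labels in loader_for_epoch(epoch):
        pass  # forward pass, loss computation, backward pass, optimizer step
```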
How can Telegram or Facebook be hacked?
5 answers
To hack Telegram or Facebook, various methods and tools can be utilized. For instance, in the context of steganography-based botnets, hackers embed command and control messages in multimedia files like images or videos to communicate covertly, making detection challenging. Additionally, analyzing the legality of hacking operations by state agencies like Dipolcar in Chile provides insights into potential techniques used for unauthorized access to communications on platforms like WhatsApp and Telegram. Moreover, studying conspiracy theory communities on Telegram highlights the decentralized nature of discussions, emphasizing the challenges in monitoring and controlling information flow on such platforms. By understanding these aspects, individuals or entities seeking to hack Telegram or Facebook may explore exploiting vulnerabilities, utilizing malware, or employing phishing techniques to gain unauthorized access to user data and communications.
What are advantages and disadvantages of support vector machines?
5 answers
Support Vector Machines (SVMs) offer several advantages, including robustness, sparseness, flexibility, and the ability to handle large, complex, and high-dimensional datasets without assuming prior knowledge of data distribution. SVMs are also known for their strong adaptability, good generalization ability, and complete theoretical foundation based on Statistical Learning Theory. Additionally, SVMs have been shown to outperform neural networks in nonlinear detection tasks, requiring fewer model parameters and less prior information. However, some limitations of SVMs include the need for careful selection of hyperparameters, potential sensitivity to noise, and computational complexity in training with large datasets. Despite these drawbacks, SVMs remain a popular choice in various fields, including communication networks, modern machining, protein prediction, and neuroimaging analysis.
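
The hyperparameter sensitivity noted above is easy to see in practice. The scikit-learn sketch below tunes the C and gamma of an RBF-kernel SVM with a grid search; the dataset and parameter grid are illustrative choices.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Feature scaling matters for SVMs; C and gamma must be tuned per dataset.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(pipe, grid, cv=5).fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```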
What are the current gaps in digital forensics for metaverse games?
5 answers
The current gaps in digital forensics within the metaverse gaming environment include the lack of defined copyright protection techniques for immersive content, the presence of various security threats and vulnerabilities that could impede the success of the metaverse, and the need to develop forensic frameworks specifically tailored for investigating cyberattacks in the metaverse world. Additionally, there is a necessity to determine how digital forensic techniques can be applied to incidents within virtual worlds like Second Life, especially in cases of offensive behavior between avatars. Addressing these gaps is crucial for ensuring the security, integrity, and copyright protection of immersive content and digital assets within the evolving metaverse gaming landscape.
What is categorical cross entropy?
5 answers
Categorical Cross Entropy (CCE) is a common loss function used in neural network classification tasks that penalizes all misclassifications equally. However, classes often exhibit inherent structures, making some misclassifications more acceptable than others. To address this, researchers have introduced alternative loss functions like SimLoss, which incorporates class similarities into the loss calculation. Another study proposed linearly scored categorical cross-entropy for robust single-label multi-class classification problems, showing promising results in handling noisy training data. Additionally, a modified error measure called trimmed categorical cross-entropy has been developed to improve the robustness of deep convolutional neural networks trained with noisy labels, showcasing enhanced generalization capabilities. These advancements aim to enhance the performance and robustness of neural network models in various classification tasks.
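
For reference, for one-hot labels y and predicted class probabilities p, the loss is L = -Σ_k y_k log p_k, averaged over samples. Here is a minimal NumPy sketch (the toy labels and predictions are made up for the example):

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """y_true: one-hot labels; y_pred: predicted class probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0)            # guard against log(0)
    return -np.sum(y_true * np.log(y_pred), axis=-1).mean()

y_true = np.array([[0, 1, 0], [1, 0, 0]])         # two samples, three classes
y_pred = np.array([[0.1, 0.8, 0.1], [0.6, 0.3, 0.1]])
print(categorical_cross_entropy(y_true, y_pred))  # (-ln 0.8 - ln 0.6) / 2 ≈ 0.367
```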
Is there a model called meta-cycle-GAN?
5 answers
There is no specific mention of a model called "meta-cycle-GAN" in the provided contexts. However, various papers discuss innovative approaches and modifications to existing models like Cycle-GAN for different applications. For instance, one paper introduces a novel architecture using a Mask CycleGAN for unsupervised image-to-image transformation, enhancing convergence rates. Another paper proposes a cooperative training paradigm to improve adversarial text generation, aiming to combat mode collapse in generative models. Additionally, a study focuses on meta-learning techniques to address challenges in few-shot learning, introducing a meta-regularization objective using cyclical annealing schedule and maximum mean discrepancy criterion. While these papers offer valuable insights into enhancing existing models, the specific term "meta-cycle-GAN" is not explicitly mentioned in the provided contexts.
How to describe the structural integrity of a building?
4 answers
The structural integrity of a building can be described through the application of advanced technologies and methodologies. One approach involves utilizing artificial neural networks, such as the ARTMAP-Fuzzy-Wavelet network, to analyze and characterize structural failure. Another method includes employing complex wavelets for fault identification, offering properties like shift invariance and directional selectivity. Additionally, non-destructive tests like Ultrasonic Pulse Velocity (UPV) and X Ray Fluorescence (XRF) can be used to evaluate concrete quality and detect deterioration, cracks, and voids in structures. By combining these techniques, professionals can identify flaws, conduct preventive maintenance, ensure structural integrity, and make informed decisions regarding the maintenance and safety of buildings.
What is primary key in database design?
5 answers
A primary key in database design is a crucial element that uniquely identifies each record or row in a table. It plays a significant role in structuring relationships between different columns within a table. Primary keys are essential for tasks like query optimization, data integration, and ensuring data integrity. While traditional primary keys can face challenges like potential erasure or updates, innovative approaches like Virtual Primary Keys (VPKs) have been proposed to enhance security and robustness in watermark synchronization. Interestingly, the technical properties of primary keys can have broader implications beyond mere data linking, influencing work practices, relationships among stakeholders, and even resembling de facto public policy in certain contexts.
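
As a concrete illustration of how a primary key enforces uniqueness, here is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE patients (
        patient_id INTEGER PRIMARY KEY,  -- uniquely identifies each row
        name       TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO patients (patient_id, name) VALUES (1, 'Alice')")
try:
    # Reusing a primary key value violates uniqueness and is rejected.
    conn.execute("INSERT INTO patients (patient_id, name) VALUES (1, 'Bob')")
except sqlite3.IntegrityError as err:
    print("rejected:", err)  # UNIQUE constraint failed: patients.patient_id
```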