scispace - formally typeset
Author

Seema Ghisingh

Bio: Seema Ghisingh is an academic researcher. The author has contributed to research in topics: Sound recording and reproduction & Violin. The author has an h-index of 1 and has co-authored 1 publication, which has received 11 citations.

Papers
Proceedings ArticleDOI
01 Dec 2016
TL;DR: MFCC proved to be the most significant feature for classifying Guitar, Violin and Drum, as it gives the most accurate results; among the remaining features, ZCR proved to be the best for classifying the drum.
Abstract: Identifying musical instruments from an acoustic signal using speech signal processing methods is a challenging problem. Further, whether this identification can be carried out from a single musical note, as humans are able to do, is an interesting research issue with several potential applications in the music industry. Earlier attempts have used the spectral and temporal features of music acoustic signals. Identifying a musical instrument from a monophonic audio recording basically involves three steps — pre-processing the music signal, extracting features from it, and then classifying those features. In this paper, we present an experiment-based comparative study of different features for classifying a few musical instruments. The acoustic features, namely the Mel-Frequency Cepstral Coefficients (MFCCs), Spectral Centroid (SC), Zero-Crossing Rate (ZCR) and signal energy, are derived from the music acoustic signal using different speech signal processing methods. A Support Vector Machine (SVM) classifier is used with each feature for the relative comparisons. Classification results are compared across different combinations of training on features from one type of music instrument and testing on the same or another type. Our results indicate that the most significant feature for classifying Guitar, Violin and Drum is MFCC, as it gives the most accurate results. The feature which gives the best accuracy for the drum instrument is ZCR; among the features used, after MFCC, ZCR proved to be the optimal feature for classifying the drum.
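Two of the four features the abstract names — zero-crossing rate and short-time signal energy — are simple enough to sketch directly. The frame length (256 samples), hop size and synthetic test tones below are illustrative choices, not taken from the paper; MFCC and spectral-centroid extraction would normally come from a signal-processing library and are omitted here.

```python
import math

def frames(x, frame_len, hop):
    """Split a signal into overlapping frames."""
    return [x[i:i + frame_len] for i in range(0, len(x) - frame_len + 1, hop)]

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    return sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0) / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return sum(s * s for s in frame) / len(frame)

fs = 8000
# Two synthetic "notes": a brighter signal (more high-frequency content,
# as in a drum hit) crosses zero more often, so its ZCR is higher.
low_note  = [math.sin(2 * math.pi * 200 * n / fs) for n in range(fs)]
high_note = [math.sin(2 * math.pi * 2000 * n / fs) for n in range(fs)]

zcr_low  = sum(zero_crossing_rate(f) for f in frames(low_note, 256, 128)) / len(frames(low_note, 256, 128))
zcr_high = sum(zero_crossing_rate(f) for f in frames(high_note, 256, 128)) / len(frames(high_note, 256, 128))
energy_low = sum(short_time_energy(f) for f in frames(low_note, 256, 128)) / len(frames(low_note, 256, 128))
```

For a pure sinusoid the ZCR approaches 2f/fs, which is why it separates bright percussive sounds from sustained low-pitched ones; the per-frame values would then be fed to the SVM classifier.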

11 citations


Cited by
Proceedings ArticleDOI
01 Jul 2017
TL;DR: An audio signal classification system based on Linear Predictive Coding and Random Forests is presented for multiclass classification with imbalanced datasets and achieves an overall correct classification rate of 99.25%.
Abstract: The goal of this work is to present an audio signal classification system based on Linear Predictive Coding and Random Forests. We consider the problem of multiclass classification with imbalanced datasets. The signals under classification belong to the class of sounds from wildlife intruder detection applications: birds, gunshots, chainsaws, human voice and tractors. The proposed system achieves an overall correct classification rate of 99.25%. There is no probability of false alarms in the case of birds or human voices. For the other three classes the probability is low, around 0.3%. The false omission rate is also low: around 0.2% for birds and tractors, a little bit higher for chainsaws (0.4%), lower for gunshots (0.14%) and zero for human voices.
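The feature-extraction half of this pipeline — Linear Predictive Coding — can be sketched in pure Python with the Levinson-Durbin recursion; the Random Forest classifier would come from a machine-learning library and is omitted. The AR(1) test signal and its coefficient 0.9 are illustrative, not from the paper.

```python
import random

def autocorr(x, max_lag):
    """Biased autocorrelation estimates r[0..max_lag]."""
    n = len(x)
    return [sum(x[i] * x[i + lag] for i in range(n - lag)) for lag in range(max_lag + 1)]

def lpc(x, order):
    """Levinson-Durbin recursion: coefficients a such that
    x[n] is approximated by sum_k a[k] * x[n - 1 - k]."""
    r = autocorr(x, order)
    a, err = [0.0] * order, r[0]
    for i in range(order):
        k = (r[i + 1] - sum(a[j] * r[i - j] for j in range(i))) / err
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1 - k * k)
    return a

# Sanity check: an AR(1) process x[n] = 0.9*x[n-1] + noise should yield
# a first-order LPC coefficient close to 0.9.
rng = random.Random(0)
x = [0.0]
for _ in range(5000):
    x.append(0.9 * x[-1] + rng.gauss(0, 1))
coeffs = lpc(x, 1)
```

In the classification setting, the LPC coefficients of each audio frame become the feature vector handed to the Random Forest.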

16 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: This study provides scientific evidence for the common perception that alpha binaural beats, and thus music, can help a person achieve a relaxed, i.e. meditative, state of mind.
Abstract: The human brain contains approximately 100 billion neurons. Each neuron communicates with tens of thousands of other neurons in order to carry messages in the brain. Significant electrical activity is produced in the brain over the synaptic junctions of such neurons, which send signals at very low frequencies (below 50 Hz), thereby forming the brainwave pattern. The brainwaves are categorized as delta, theta, alpha and beta, according to their frequency ranges. In this paper, the effect of binaural beats on the human mind is presented. Alpha binaural beats of 10 Hz are produced by creating an auditory illusion of 10 Hz in the brain, playing tones of 370 Hz and 380 Hz to the left and right ear respectively. Binaural beats are effective only when heard through earphones. In order to examine the effects of binaural beats on the human brain, 10 people were subjected to these beats for 3 minutes. Using Matlab, the attention and meditation levels are measured from the alpha brainwaves, and comparison graphs are plotted. A relative comparison is carried out of each person's state while listening to the binaural beats. This study provides scientific evidence for the common perception that alpha binaural beats, and thus music, can help a person achieve a relaxed, i.e. meditative, state of mind.
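The stimulus described here is easy to reconstruct: one pure tone per ear, with the 10 Hz difference between the carriers perceived as the beat. The sampling rate, amplitude and duration below are illustrative choices, not taken from the paper.

```python
import math

def tone(freq_hz, duration_s, fs=44100, amp=0.5):
    """Mono sine tone as a list of float samples in [-amp, amp]."""
    return [amp * math.sin(2 * math.pi * freq_hz * t / fs)
            for t in range(int(duration_s * fs))]

fs = 44100
left  = tone(370.0, 3.0, fs)  # carrier for the left ear
right = tone(380.0, 3.0, fs)  # carrier for the right ear

# The 380 - 370 = 10 Hz difference lies in the alpha band; the beat is
# perceived only when the channels reach the ears separately (earphones),
# hence "binaural".
beat_hz = 380.0 - 370.0
```

The two channels could then be written out as a stereo file with the standard-library `wave` module after converting the floats to 16-bit integer samples.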

13 citations

Proceedings ArticleDOI
01 Aug 2017
TL;DR: In this article, the effect of music on the states of the human mind is examined by observing changes in the alpha and beta brainwave patterns, which are compared for the attention and meditation states.
Abstract: Music is known to affect different states of the human mind, for example, calming one's mind and leading to a blissful state. In this study, the effect of music on the states of the human mind is examined by observing changes in the alpha and beta brainwave patterns. These changes are compared for the ‘attention’ and ‘meditation’ states of mind. An electroencephalograph was used to record the brainwaves. Three experiments were carried out in a controlled environment. First, the effect of binaural beats, i.e., the perceptual beat frequency created in the human mind by the difference between two audio tones played to the two ears, is examined. Second, the effects of 8 different music genres on the states of mind are examined. Lastly, the effects of the classical music of 5 different eminent musicians are examined. The parameters mean, standard deviation and normalized standard deviation are derived for the comparative analysis of the attention vs. meditation state. Results indicate a larger effect of classical music on the meditation state of mind than on attention. The study reveals that classical music indeed helps in achieving a relaxed or meditative state of the human mind.
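Separating alpha from beta activity in an EEG trace comes down to comparing spectral power in the two bands. Below is a naive-DFT sketch of that step; the 128 Hz sampling rate, the 2-second window and the pure 10 Hz test signal are assumptions for illustration, not the paper's recording setup.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Naive DFT power summed over the bins falling inside [f_lo, f_hi] Hz."""
    n = len(x)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / (n * n)
    return power

fs = 128  # assumed consumer-EEG sampling rate
# Synthetic "relaxed" trace: a pure 10 Hz oscillation, squarely in the alpha band.
x = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs * 2)]

alpha = band_power(x, fs, 8, 12)    # alpha band, 8-12 Hz
beta  = band_power(x, fs, 13, 30)   # beta band, 13-30 Hz
```

The per-window band powers would then feed the mean / standard-deviation / normalized-standard-deviation statistics the abstract compares between the attention and meditation states.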

7 citations

Proceedings ArticleDOI
01 Dec 2016
TL;DR: The F0 contour is studied to capture the melodic trends in a Sargam progression, and the results are compared with the output of the state-of-the-art package PRAAT; the findings indicate both the usefulness of the standard techniques and the constraints they face in singing-voice analysis.
Abstract: Pitch extraction from a multi-pitched music signal relies significantly on training data for tasks like enhanced music-voice separation. This paper aims at identifying characteristic temporal and spectral features, using speech processing techniques, that can help obtain crucial information leading to a better understanding of the music structure. Towards this goal, the F0 contour is studied to capture the melodic trends in a Sargam progression, and the results are compared with the output of the state-of-the-art package PRAAT. The effect of pre-emphasis in enhancing the tracking is also discussed. A method is proposed through which the transition trends in the note progression can be validated for correctness, and the results are encouraging in characterising the progression. Spectral analysis is carried out to gain insight into the harmonic behaviour in conjunction with the signal energy pattern. This is followed by LP analysis, which provides information about swara pronunciation. The observed results indicate both the usefulness of the standard techniques and the constraints they face in singing-voice analysis.
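A minimal per-frame F0 estimator can be built from the autocorrelation function — a toy stand-in, not PRAAT's algorithm and not the paper's method. The pitch search range (80-500 Hz), sampling rate and 220 Hz test tone are illustrative assumptions.

```python
import math

def estimate_f0(x, fs, f_min=80.0, f_max=500.0):
    """Pick the autocorrelation peak within the plausible pitch-lag range."""
    lag_min = int(fs / f_max)
    lag_max = int(fs / f_min)
    best_lag, best_val = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        val = sum(x[i] * x[i + lag] for i in range(len(x) - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return fs / best_lag

fs = 8000
# One analysis frame of a steady 220 Hz tone (roughly the pitch of an A3 swara).
frame = [math.sin(2 * math.pi * 220 * n / fs) for n in range(1024)]
f0 = estimate_f0(frame, fs)
```

Running this per frame over a recording yields the F0 contour; the transition trends between successive notes of the Sargam progression are then read off that contour.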

6 citations

Journal ArticleDOI
15 Jun 2021
TL;DR: An audio classification model utilizing a Convolutional Neural Network identifies the sound produced by a violin, classifies the technique used, and outperforms the benchmark model.
Abstract: Playing the violin requires the left and right hands to move in coordination to produce one distinctive sound. While some violin players train their hearing to recognize the playing techniques, this can be difficult for others. Although there are names and categories for each violin technique, the distinctions sometimes become ambiguous. This paper presents an audio classification model utilizing a Convolutional Neural Network (CNN) that identifies the sound produced by a violin and classifies the technique used. The dataset was gathered from real violin players who were tasked with recording themselves playing one specific technique. The recorded tracks were then carefully trimmed to remove noise. The pre-processed recordings served as input to a benchmark CNN model. To fully optimize the CNN model, we modified its architecture and tuned the hyper-parameters. A comparative analysis of the two models is discussed in the latter part of this paper. The analysis showed that our proposed model, with an average accuracy of 94.8%, outperformed the benchmark model, with an average accuracy of 87.6%. Using stratified five-fold cross-validation, we measured the accuracy, training time, and prediction time of the models. A paired t-test with a p-value of 0.01 shows a significant difference between the performance of the two models.
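The CNN itself requires a deep-learning library, but the stratified five-fold split the evaluation relies on can be sketched in plain Python: each class is distributed evenly across the folds so every fold preserves the dataset's class proportions. The technique names and class counts below are hypothetical, not the paper's dataset.

```python
import random
from collections import defaultdict

def stratified_folds(labels, k, seed=0):
    """Assign each sample index to one of k folds so that every fold
    keeps roughly the same class proportions as the full dataset."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for y, idxs in by_class.items():
        rng.shuffle(idxs)                  # randomize within each class
        for pos, idx in enumerate(idxs):
            folds[pos % k].append(idx)     # deal indices round-robin
    return folds

# Toy labels for three hypothetical violin techniques.
labels = ["pizzicato"] * 10 + ["legato"] * 10 + ["staccato"] * 5
folds = stratified_folds(labels, 5)
```

Each fold then serves once as the test set while the model trains on the other four, which is what makes the reported accuracies comparable across imbalanced classes.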

6 citations