Other affiliations: International Institute of Information Technology, Indian Institute of Technology Dhanbad, Indian Institute of Technology Kharagpur
Bio: Soumen Bag is an academic researcher from the Indian Institutes of Technology. The author has contributed to research in topics: Character (mathematics) & Feature extraction. The author has an h-index of 11 and has co-authored 66 publications receiving 539 citations. Previous affiliations of Soumen Bag include International Institute of Information Technology & Indian Institute of Technology Dhanbad.
TL;DR: A production inventory model with flexibility and reliability considerations is developed in an imprecise and uncertain mixed environment, introducing demand as a fuzzy random variable in an imperfect production process.
Abstract: The classical inventory control models assume that items are produced by a perfectly reliable production process with a fixed set-up cost. While the reliability of the production process cannot be increased without a price, its set-up cost can be reduced with investment in flexibility improvement. In this paper, a production inventory model with flexibility and reliability (of the production process) considerations is developed in an imprecise and uncertain mixed environment. The aim of this paper is to introduce demand as a fuzzy random variable in an imperfect production process. Here, the set-up cost and the reliability of the production process, along with the production period, are the decision variables. Due to the fuzzy-randomness of the demand, the expected average profit of the model is a fuzzy quantity, and its graded mean integration value (GMIV) is optimized using unconstrained signomial geometric programming to determine the optimal decision for the decision maker (DM). A numerical example has been considered to illustrate the model.
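The graded mean integration value (GMIV) used above has a simple closed form for common fuzzy numbers. As a minimal sketch (the paper's actual membership functions are not reproduced here), the GMIV of a triangular fuzzy number (a, b, c) reduces to (a + 4b + c)/6:

```python
def gmiv_triangular(a, b, c):
    """Graded mean integration value of a triangular fuzzy number (a, b, c).

    For a TFN with left endpoint a, mode b, and right endpoint c, the
    graded mean integration representation reduces to (a + 4b + c) / 6.
    """
    return (a + 4 * b + c) / 6.0

# Illustrative example: a fuzzy demand "around 100", roughly between 90 and 115.
print(gmiv_triangular(90, 100, 115))  # crisp representative value of the fuzzy demand
```

Optimizing this crisp representative in place of the fuzzy profit is what allows the model to be solved by ordinary (signomial geometric) programming.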
TL;DR: This paper proposes a novel shape decomposition-based segmentation technique that decomposes compound characters into prominent shape components, which reduces classification complexity by lowering the number of classes to recognize while also improving recognition accuracy.
Abstract: Proper recognition of complex-shaped handwritten compound characters is still a big challenge for Bangla OCR systems. In this paper, we propose a novel shape decomposition-based segmentation technique to decompose the compound characters into prominent shape components. This shape decomposition reduces the classification complexity by lowering the number of classes to recognize, and at the same time improves the recognition accuracy. The decomposition is done at the segmentation area where the two basic shapes are joined to form a compound character. We use a chain code histogram feature set with a multi-layer perceptron (MLP) classifier trained with backpropagation. In experiments, the proposed method is observed to provide good recognition accuracy compared with other existing methods.
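The chain code histogram feature mentioned above can be sketched as follows. This is a minimal illustration, assuming an 8-direction Freeman code over an ordered sequence of 8-connected contour pixels; the paper's actual preprocessing and MLP classifier are not reproduced:

```python
# 8-direction Freeman chain codes: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
# Coordinates are (row, col), so "north" means the row index decreases.
DIRECTIONS = {
    (0, 1): 0, (-1, 1): 1, (-1, 0): 2, (-1, -1): 3,
    (0, -1): 4, (1, -1): 5, (1, 0): 6, (1, 1): 7,
}

def chain_code_histogram(contour):
    """Normalized 8-bin histogram of Freeman chain codes along a contour.

    `contour` is an ordered list of (row, col) pixels with unit steps
    between consecutive points (8-connectivity).
    """
    hist = [0] * 8
    for (r0, c0), (r1, c1) in zip(contour, contour[1:]):
        hist[DIRECTIONS[(r1 - r0, c1 - c0)]] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# A short horizontal stroke moving east: every step has code 0.
print(chain_code_histogram([(0, 0), (0, 1), (0, 2), (0, 3)]))
```

The normalized 8-bin vector (optionally computed per sub-region of the character) would then serve as the input feature vector to an MLP classifier.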
TL;DR: A review of OCR work on Indian scripts, mainly Bangla and Devanagari, the two most popular scripts in India, presenting the various methodologies and their reported results.
Abstract: The past few decades have witnessed intensive research on optical character recognition (OCR) for Roman, Chinese, and Japanese scripts. A lot of work has also been reported on OCR efforts for various Indian scripts, like Devanagari, Bangla, Oriya, Tamil, Telugu, Malayalam, Kannada, Gurmukhi, Gujarati, etc. In this paper, we present a review of OCR work on Indian scripts, mainly on Bangla and Devanagari, the two most popular scripts in India. We have summarized most of the published papers on this topic and have also analysed the various methodologies and their reported results. Future directions of research in OCR for Indian scripts are also given.
TL;DR: A novel Convolutional Neural Network, viz. CNN-DMRI, is proposed for denoising Rician noise from MRI scans, using an encoder-decoder structure and residual learning trained end-to-end.
Abstract: Magnetic Resonance Images (MRI) are often contaminated by Rician noise at acquisition time. This type of noise typically deteriorates the performance of disease diagnosis by a human observer or an automated system. Thus, it is necessary to remove the Rician noise from MRI scans as a preprocessing step. In this letter, we propose a novel Convolutional Neural Network (CNN), viz. CNN-DMRI, for denoising of MRI scans. The network uses a set of convolutions to separate the image features from the noise. The network also employs an encoder-decoder structure for preserving the prominent features of the image while ignoring unnecessary ones. The training of the network is carried out in an end-to-end way by utilizing a residual learning scheme. The performance of the proposed CNN has been tested qualitatively and quantitatively on one simulated and four real MRI datasets. Extensive experimental findings suggest that the proposed network can denoise MRI images effectively without losing crucial image details.
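The residual learning scheme described above trains the network to predict the noise map rather than the clean image, so the denoised output is the noisy input minus the predicted residual. A minimal NumPy sketch, assuming the standard magnitude-MRI model of Rician noise (Gaussian noise added independently to the real and imaginary channels):

```python
import numpy as np

def add_rician_noise(image, sigma, rng=None):
    """Simulate Rician noise on a magnitude MR image.

    Gaussian noise with standard deviation `sigma` is added independently
    to the real and imaginary channels; the magnitude of the result is
    Rician-distributed.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    real = image + rng.normal(0.0, sigma, image.shape)
    imag = rng.normal(0.0, sigma, image.shape)
    return np.sqrt(real ** 2 + imag ** 2)

clean = np.ones((4, 4))            # toy stand-in for a clean scan
noisy = add_rician_noise(clean, sigma=0.1)

# Residual learning: the network is trained to predict the noise map
# residual = noisy - clean, and the denoised output is noisy - residual.
residual_target = noisy - clean
denoised = noisy - residual_target   # recovers `clean` when the prediction is exact
```

Predicting the (nearly zero-mean) residual instead of the full image is generally easier for the network and is what enables stable end-to-end training here.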
TL;DR: The novelty of the approach lies in the formulation of appropriate rules of character decomposition for segmenting the character skeleton into stroke segments and then grouping them for extraction of meaningful shape components.
Abstract: In this paper we propose a novel character recognition method for Bangla compound characters. Accurate recognition of compound characters is a difficult problem due to their complex shapes. Our strategy is to decompose a compound character into skeletal segments. The compound character is then recognized by extracting the convex shape primitives and using a template matching scheme. The novelty of our approach lies in the formulation of appropriate rules of character decomposition for segmenting the character skeleton into stroke segments and then grouping them for extraction of meaningful shape components. Our technique is applicable to both printed and handwritten characters. The proposed method performs well for complex-shaped compound characters, which were confusing to the existing methods.
Highlights:
- The proper recognition of compound characters is a difficult problem due to their complex shapes.
- In this paper, we propose a novel character recognition method for Bangla compound characters.
- Our strategy is to decompose the compound character into simpler shape components.
- Our technique is applicable to printed and handwritten characters.
- Experiments are done on printed and handwritten Bangla compound characters.
TL;DR: In this article, the authors present a review of lot-size models which focus on coordinated inventory replenishment decisions between buyer and vendor and their impact on the performance of the supply chain.
Abstract: This article reviews lot-size models which focus on coordinated inventory replenishment decisions between buyer and vendor and their impact on the performance of the supply chain. These so-called joint economic lot size (JELS) models determine order, production and shipment quantities from the perspective of the supply chain with the objective of minimizing total system costs. This paper first describes the problem studied, introduces the methodology of the review and presents a descriptive analysis of the selected papers. Subsequently, papers are categorized and analyzed with respect to their contribution to the coordination of different echelons in the supply chain. Finally, the review highlights gaps in the existing literature and suggests interesting areas for future research.
TL;DR: This paper surveys current topics in document image understanding from a technical point of view, covering the methods and approaches proposed for the recognition of various kinds of documents.
Abstract: Document image understanding aims to extract and classify individual data meaningfully from paper-based documents. Many methods and approaches have been proposed to date for the recognition of various kinds of documents, for the technical problems involved in extending OCR, and for the requirements of practical usage. Although the technical research issues of the early stage were regarded as complementary extensions of traditional OCR, which depends on character recognition techniques, the range of applications and related issues has since been widely investigated and continues to be established progressively. This paper surveys current topics in document image understanding from a technical point of view.
Keywords: document model, top-down, bottom-up, layout structure, logical structure, document types, layout recognition
TL;DR: A new CNN architecture for classifying three brain tumor types is presented; it is simpler than existing pre-trained networks and was tested on T1-weighted contrast-enhanced magnetic resonance images from two databases.
Abstract: The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. The improvement of technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for brain tumor classification of three tumor types. The developed network is simpler than already-existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested by using an augmented image database. The best result for the 10-fold cross-validation method was obtained for the record-wise cross-validation for the augmented data set, and, in that case, the accuracy was 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
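The gap between record-wise and subject-wise 10-fold cross-validation noted above is easy to illustrate: record-wise splits can place slices from the same patient in both the training and test sets, inflating accuracy. A minimal sketch with hypothetical subject IDs (the databases and fold counts here are illustrative, not the paper's):

```python
from collections import defaultdict

def subject_wise_folds(subject_ids, n_folds):
    """Split record indices into folds so that no subject spans two folds.

    Record-wise CV shuffles individual images, so slices from the same
    patient can appear in both train and test, inflating accuracy.
    Subject-wise CV assigns whole subjects to folds instead.
    """
    by_subject = defaultdict(list)
    for idx, sid in enumerate(subject_ids):
        by_subject[sid].append(idx)
    folds = [[] for _ in range(n_folds)]
    for k, sid in enumerate(sorted(by_subject)):
        folds[k % n_folds].extend(by_subject[sid])
    return folds

# Hypothetical layout: 6 records from 3 patients, split into 2 folds.
records = ["p1", "p1", "p2", "p2", "p3", "p3"]
print(subject_wise_folds(records, 2))
```

Testing on held-out subjects, as in the paper's subject-wise protocol, is the stricter measure of generalization to new patients.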
TL;DR: This paper considers the multi-objective reliability redundancy allocation problem of a series system, where the reliability of the system and the corresponding design cost are treated as two different objectives, and a fuzzy multi-objective optimization problem (FMOOP) is formulated from the original crisp optimization problem.
Abstract: This paper considers the multi-objective reliability redundancy allocation problem of a series system where the reliability of the system and the corresponding design cost are considered as two different objectives. Due to non-stochastic uncertain and conflicting factors, it is difficult to reduce the cost of the system and improve its reliability simultaneously. In such situations, decision making is difficult, and the presence of multiple objectives gives rise to a multi-objective optimization problem (MOOP), which leads to Pareto optimal solutions instead of a single optimal solution. However, in order to make the model more flexible and adaptable to the human decision process, the optimization model can be expressed as a fuzzy nonlinear programming problem with fuzzy numbers. Thus, in a fuzzy environment, a fuzzy multi-objective optimization problem (FMOOP) is formulated from the original crisp optimization problem. In order to solve the resultant problem, a crisp optimization problem is reformulated from the FMOOP by taking into account the preference of the decision maker regarding cost and reliability goals, and then particle swarm optimization is applied to solve the resulting fuzzified MOOP under a number of constraints. The approach has been demonstrated through the case study of a pharmaceutical plant situated in the northern part of India.
TL;DR: A novel deep learning technique for the recognition of handwritten Bangla isolated compound characters is presented, and a new benchmark of recognition accuracy on the CMATERdb 3.1.3.3 dataset is reported.
Abstract: In this work, a novel deep learning technique for the recognition of handwritten Bangla isolated compound characters is presented and a new benchmark of recognition accuracy on the CMATERdb 3.1.3.3 dataset is reported. Greedy layer-wise training of deep neural networks has helped to make significant strides in various pattern recognition problems. We employ layer-wise training of Deep Convolutional Neural Networks (DCNN) in a supervised fashion and augment the training process with the RMSProp algorithm to achieve faster convergence. We compare results with those obtained from standard shallow learning methods with predefined features, as well as standard DCNNs. Supervised layer-wise trained DCNNs are found to outperform standard shallow learning models such as Support Vector Machines as well as regular DCNNs of similar architecture, achieving an error rate of 9.67% and thereby setting a new benchmark on CMATERdb 3.1.3.3 with a recognition accuracy of 90.33%, an improvement of nearly 10%.
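The RMSProp algorithm mentioned above scales each parameter update by a running root-mean-square of past gradients, which is what speeds up convergence here. A minimal NumPy sketch of the update rule on a toy quadratic objective (the hyperparameters are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def rmsprop_step(w, grad, cache, lr=0.01, decay=0.9, eps=1e-8):
    """One RMSProp update: divide the step by a running RMS of past gradients.

    `cache` accumulates an exponential moving average of squared gradients,
    so parameters with consistently large gradients take smaller steps.
    """
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Toy objective: loss(w) = 0.5 * ||w||^2, whose gradient is simply w.
w = np.array([3.0, -2.0])
cache = np.zeros_like(w)
for _ in range(500):
    w, cache = rmsprop_step(w, w, cache)
print(w)  # driven toward the minimum at the origin
```

In the paper this adaptive step size replaces plain SGD during the supervised layer-wise training of the DCNN.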