
Showing papers on "Personal computer published in 2021"


Journal ArticleDOI
TL;DR: The proposed strategy, combined with the proposed prediction model, not only handles over 50% of the total yearly load requirement but also shows a significant decrease in electricity bills and carbon dioxide emissions compared to residential buildings without hybrid energy systems and hybrid energy systems without an energy management strategy.

50 citations


Journal ArticleDOI
TL;DR: In this article, a 1000-L MFC was built from transparent polyester, with electrodes of reticulated vitreous carbon; four power management devices were connected to an ensemble of 64 MFC units assembled as a 12-m-long MFC.

48 citations


Journal ArticleDOI
TL;DR: A systematic review of the literature on Artificial Intelligence applied to investments in the stock market based on a sample of 2326 papers from the Scopus website between 1995 and 2019 is presented in this article.
Abstract: The application of Artificial Intelligence (AI) to financial investment is a research area that has attracted extensive research attention since the 1990s, when there was an accelerated technological development and popularization of the personal computer. Since then, countless approaches have been proposed to deal with the problem of price prediction in the stock market. This paper presents a systematic review of the literature on Artificial Intelligence applied to investments in the stock market based on a sample of 2326 papers from the Scopus website between 1995 and 2019. These papers were divided into four categories: portfolio optimization, stock market prediction using AI, financial sentiment analysis, and combinations involving two or more approaches. For each category, research from the initial introductory work to state-of-the-art applications is described. In addition, an overview of the review leads to the conclusion that this research area is gaining continuous attention and the literature is becoming increasingly specific and thorough.

42 citations


Journal ArticleDOI
TL;DR: In this paper, a new model of Bayesian neural networks is proposed to detect the events of compact binary coalescence in the observational data of gravitational waves (GW) and identify the full length of the event duration including the inspiral stage.
Abstract: We propose a new model of Bayesian neural networks to not only detect the events of compact binary coalescence in the observational data of gravitational waves (GW) but also identify the full length of the event duration, including the inspiral stage. This is achieved by incorporating the Bayesian approach into the convolutional, long short-term memory, fully connected deep neural network classifier, which integrates the convolutional neural network (CNN) and the long short-term memory recurrent neural network (LSTM). Our model successfully detects all seven binary black hole events in the LIGO Livingston O2 data, with the periods of their GW waveforms correctly labeled. The ability of a Bayesian approach for uncertainty estimation enables a newly defined `awareness' state for recognizing the possible presence of signals of unknown types, which would otherwise be rejected in a non-Bayesian model. Such data chunks labeled with the awareness state can then be further investigated rather than overlooked. Performance tests with 40,960 training samples against 512 chunks of 8-second real noise mixed with mock signals of various optimal signal-to-noise ratios $0 \le \rho_{\mathrm{opt}} \le 18$ show that our model recognizes 90% of the events when $\rho_{\mathrm{opt}} > 7$ (100% when $\rho_{\mathrm{opt}} > 8.5$) and successfully labels more than 95% of the waveform periods when $\rho_{\mathrm{opt}} > 8$. The latency between the arrival of the peak signal and the generation of an alert with the associated waveform period labeled is only about 20 seconds for an unoptimized code on a moderate GPU-equipped personal computer. This makes our model suitable for nearly real-time detection and for forecasting coalescence events when assisted with deeper training on a larger dataset using state-of-the-art HPCs.

36 citations


Journal ArticleDOI
TL;DR: In this article, the performance of PCA (Principal Component Analysis), GMM (Gaussian Mixture Models), GLCM (Gray Level Co-Occurrence Matrix), and SVM (Support Vector Machines) is assessed in recognizing seven distinct facial expressions of two people in a database: angry, sad, happy, disgust, neutral, fear, and surprise.
Abstract: Face mining is defined as the discovery of image patterns in a given collection of images. It is an effort that draws broadly upon computer vision, image processing, data mining, machine learning, databases, and artificial intelligence (AI). Facial recognition analyzes and studies patterns in facial images. Facial feature extraction is the automatic recognition of human faces by identifying features such as the eyebrows, eyes, and lips. In this paper, we assess the performance of PCA (Principal Component Analysis), GMM (Gaussian Mixture Models), GLCM (Gray Level Co-Occurrence Matrix), and SVM (Support Vector Machines) in recognizing seven distinct facial expressions of two people in a database: angry, sad, happy, disgust, neutral, fear, and surprise. Our aim is to discuss which systems work best for facial recognition. The present investigation demonstrates the feasibility of facial expression recognition for practical applications such as surveillance and human-computer interaction.

34 citations


Journal ArticleDOI
TL;DR: In this paper, an embedded cryptosystem based on a pseudo-random number generator (PRNG) was proposed for real-time RGB images encryption on a machine-to-machine (M2M) scheme, using message queuing telemetry transport (MQTT) protocol over WiFi network and through Internet.
Abstract: Four chaotic maps are used herein as a case study to design an embedded cryptosystem based on a pseudo-random number generator (PRNG). The randomness of the sequences is enhanced by applying the mod 1023 function and verified by analyzing bifurcation diagrams and the maximum Lyapunov exponent, and by performing the NIST SP 800-22 and TestU01 statistical tests. The PRNG is applied in a simple algorithm for real-time RGB image encryption on a machine-to-machine (M2M) scheme, using the message queuing telemetry transport (MQTT) protocol over a WiFi network and through the Internet. The cryptanalysis confirms that the proposed image encryption scheme is robust against most existing attacks, as verified through statistical histogram, entropy, key-space, adjacent-pixel correlation, and differential analyses. The implementation of the proposed cryptosystem uses enhanced sequences from the 1D Logistic map, and it reaches a throughput of up to 47.44 Mbit/s on a personal computer with a 2.9 GHz clock, and 10.53 Mbit/s on a Raspberry Pi 4. As a result, our proposed embedded cryptosystem is suitable for increasing the security of real-time RGB image transmission over WiFi networks and the Internet.
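As a rough illustration of the chaos-based PRNG idea (not the authors' implementation; the seed, map parameter, scaling constant, and byte extraction below are assumptions for the sketch), a logistic map can be iterated and its output reduced with a mod 1023 step:

```python
def logistic_prng(seed=0.123456, r=3.99, n=8):
    """Yield n pseudo-random bytes from a logistic map (illustrative sketch).

    Loosely mirrors the paper's idea of enhancing a chaotic sequence with a
    mod 1023 operation; all constants here are assumptions, not the authors'.
    """
    x = seed
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)          # logistic map iteration
        v = int(x * 1e15) % 1023       # discard leading digits, apply mod 1023
        out.append(v & 0xFF)           # keep one byte per iteration
    return out

print(logistic_prng())
```

The same fixed seed always reproduces the same byte stream, which is why real deployments re-seed from a physical entropy source.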

31 citations


Journal ArticleDOI
TL;DR: An embedded lightweight SSVEP-BCI electric wheelchair with a hybrid hardware-driven visual stimulator is designed, which combines the advantages of liquid crystal display (LCD) and light-emitting diode (LED) to achieve lower energy consumption than the traditional LCD stimulator.

31 citations


Journal ArticleDOI
TL;DR: In this paper, the imperfect structure of litz wires is considered when calculating losses due to an excitation current (skin losses) and external magnetic fields (proximity losses).
Abstract: This article presents a fast numerical calculation method of realistic power losses for high-frequency litz wires. Explicitly, the imperfect structure of litz wires is considered when calculating losses due to an excitation current (skin losses) and external magnetic fields (proximity losses). Calculations of litz wires with more than 1000 strands were performed on a personal computer and have been validated by measurements up to 10 MHz. In the calculation, the impact of the bundle structure on skin and proximity losses is examined. The method allows to select a suitable litz wire for a specific application or to design a litz wire considering realistic twisting structures.
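For orientation, the skin and proximity losses discussed above are governed by the skin depth, a textbook relation not taken from the article:

```latex
\delta = \sqrt{\frac{2\rho}{\omega\mu}} = \frac{1}{\sqrt{\pi f \mu \sigma}}
```

In copper at 10 MHz this gives roughly 20 μm, which is why litz wires for this frequency range need very fine strands.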

29 citations


Journal ArticleDOI
TL;DR: In this paper, a minimizer-space de Bruijn graph-based representation of 661,405 bacterial genomes, comprising 16 million nodes and 45 million edges, was constructed and searched for anti-microbial resistance (AMR) genes in 12 min.
Abstract: Summary DNA sequencing data continue to progress toward longer reads with increasingly lower sequencing error rates. Here, we define an algorithmic approach, mdBG, that makes use of minimizer-space de Bruijn graphs to enable long-read genome assembly. mdBG achieves orders-of-magnitude improvement in both speed and memory usage over existing methods without compromising accuracy. A human genome is assembled in under 10 min using 8 cores and 10 GB RAM, and 60 Gbp of metagenome reads are assembled in 4 min using 1 GB RAM. In addition, we constructed a minimizer-space de Bruijn graph-based representation of 661,405 bacterial genomes, comprising 16 million nodes and 45 million edges, and successfully search it for anti-microbial resistance (AMR) genes in 12 min. We expect our advances to be essential to sequence analysis, given the rise of long-read sequencing in genomics, metagenomics, and pangenomics. Code for constructing mdBGs is freely available for download at https://github.com/ekimb/rust-mdbg/ .
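The minimizer idea underlying mdBG can be sketched as follows (a toy version using lexicographic selection; rust-mdbg's actual scheme, which works with hashed minimizers in minimizer space, differs):

```python
def minimizers(seq, k=3, w=4):
    """Return the (position, k-mer) minimizers of seq.

    Toy illustration: each window of w consecutive k-mers is represented
    by its smallest k-mer (lexicographic order here, for simplicity).
    """
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    picked = []
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        j = min(range(w), key=lambda i: window[i])   # smallest k-mer in window
        pos = start + j
        if not picked or picked[-1][0] != pos:       # skip duplicate picks
            picked.append((pos, kmers[pos]))
    return picked

print(minimizers("ACGTACGTGA"))
```

Because consecutive windows usually share the same minimizer, a long read collapses to far fewer tokens than it has k-mers, which is the source of the speed and memory gains reported above.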

28 citations


Journal ArticleDOI
TL;DR: In this article, an embedded algorithm for the detection of the QRS complex of an ECG signal is presented; it is based on the shape and appearance of the signal, extracts certain characteristics such as the shape of the QRS complex, its slope, its trend, and the duration between two successive QRS complexes, and then uses them to increase detection accuracy.
Abstract: Electrocardiogram (ECG) is one of the most useful medical examinations for the monitoring of cardiovascular diseases. The position and the duration of the QRS complex on the ECG signal are very important in the diagnosis of these diseases. Even though several R-peak (and hence QRS complex) detection algorithms are available, most are based on complex computations that require off-line processing on a personal computer (PC). However, with the advances in wearable devices and telemedicine, an algorithm that can run efficiently on a microcontroller (or embedded system) is needed. This article presents the development of an embedded algorithm for the detection of the QRS complex of an ECG signal. The algorithm is based on the shape and appearance of the signal. It extracts certain characteristics, such as the shape of the QRS complex, its slope, its trend, and the duration between two successive QRS complexes, and then uses them to increase detection accuracy. First, the R-peak is detected through the application of three levels of tests using adaptive thresholds. Second, from each R position, the positions of Q and S are detected using three other tests. To evaluate the performance of the algorithm, the MIT-BIH database was used, and the sensitivity, positive prediction, and F1 score were used as evaluation metrics. The algorithm obtained average F1 scores of 99.67%, 99.73%, and 99.83% for the MIT-BIH Arrhythmia, Pacemaker Rhythm, and Normal Sinus Rhythm Databases, respectively. Both normal and abnormal ECG signals were used in this performance test. The algorithm was then implemented on a microcontroller system, and its accuracy and run time were evaluated. The obtained F1 scores were the same as on the personal computer (PC), and an average run time of $16.23~\mu \text{s}$ per sample was obtained. The performance of the algorithm was also compared to other commonly used algorithms.
The proposed algorithm has great potential in wearable systems for long-term monitoring.
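The adaptive-threshold idea at the heart of such detectors can be sketched as follows (a deliberately simplified single-test version; the paper's three levels of tests and Q/S localization are not reproduced, and all constants are assumptions):

```python
def detect_r_peaks(sig, fs=360, refractory=0.2, frac=0.6):
    """Very simplified R-peak detector with an adaptive threshold.

    Illustrates only the general principle: accept a local maximum when it
    exceeds a fraction of a slowly decaying threshold, then re-adapt the
    threshold to the accepted peak's height.
    """
    peaks, thr = [], 0.0
    hold = int(refractory * fs)            # minimum samples between R peaks
    for i in range(1, len(sig) - 1):
        if sig[i] > sig[i - 1] and sig[i] >= sig[i + 1]:   # local maximum
            thr = max(thr * 0.98, 0.1)                     # slow threshold decay
            if sig[i] > thr * frac and (not peaks or i - peaks[-1] > hold):
                peaks.append(i)
                thr = sig[i]                               # adapt to peak height
    return peaks

# synthetic demo: three clean "R peaks" at known positions
sig = [0.0] * 1000
for c in (100, 460, 820):
    sig[c - 1], sig[c], sig[c + 1] = 0.5, 1.0, 0.5
print(detect_r_peaks(sig))
```

The refractory hold-off mirrors the physiological limit on how closely two QRS complexes can occur, which real detectors exploit the same way.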

27 citations


Journal ArticleDOI
TL;DR: A miniaturized quantum random number generator combining a photonic integrated chip with optimized randomness extraction achieves a record real-time output rate of 18.8 Gbps.
Abstract: Quantum random number generators (QRNGs) can produce true random numbers. Yet, the two most important QRNG parameters highly desired for practical applications, i.e., speed and size, have to be compromised during implementations. Here, we present the fastest and miniaturized QRNG with a record real-time output rate as high as 18.8 Gbps by combining a photonic integrated chip and the technology of optimized randomness extraction. We assemble the photonic integrated circuit designed for vacuum state QRNG implementation, InGaAs homodyne detector and high-bandwidth transimpedance amplifier into a single chip using hybrid packaging, which exhibits the excellent characteristics of integration and high-frequency response. With a sample rate of 2.5 GSa/s in a 10-bit analog-to-digital converter and subsequent paralleled postprocessing in a field programmable gate array, the QRNG outputs ultrafast random bitstreams via a fiber optic transceiver, whose real-time speed is validated in a personal computer.
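The "optimized randomness extraction" step can be illustrated with textbook Toeplitz hashing (a sketch of the general technique with toy sizes, not the paper's FPGA implementation):

```python
def toeplitz_extract(bits, seed, m):
    """Toeplitz-hashing randomness extractor (illustrative sketch).

    Computes out = T @ bits over GF(2), where the m x n Toeplitz matrix T
    has T[i][j] = seed[i - j + n - 1], so a seed of n + m - 1 bits defines
    the whole matrix. Compresses n raw bits into m nearly uniform bits.
    """
    n = len(bits)
    assert len(seed) == n + m - 1
    out = []
    for i in range(m):
        acc = 0
        for j in range(n):
            acc ^= seed[i - j + n - 1] & bits[j]   # GF(2) dot product
        out.append(acc)
    return out

print(toeplitz_extract([1, 0, 1], [1, 1, 0, 1], 2))
```

In hardware this matrix-vector product parallelizes well, which is what lets an FPGA sustain multi-Gbps real-time extraction.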

Journal ArticleDOI
TL;DR: This work leverages powerful mathematical tools stemming from optimal transport theory and transforms them into an efficient algorithm to reconstruct the fluctuations of the primordial density field, built on solving the Monge-Ampère-Kantorovich equation.
Abstract: We leverage powerful mathematical tools stemming from optimal transport theory and transform them into an efficient algorithm to reconstruct the fluctuations of the primordial density field, built on solving the Monge-Ampère-Kantorovich equation. Our algorithm computes the optimal transport between an initial uniform continuous density field, partitioned into Laguerre cells, and a final input set of discrete point masses, linking the early to the late Universe. While existing early universe reconstruction algorithms based on fully discrete combinatorial methods are limited to a few hundred thousand points, our algorithm scales up well beyond this limit, since it takes the form of a well-posed smooth convex optimization problem, solved using a Newton method. We run our algorithm on cosmological N-body simulations, from the AbacusCosmos suite, and reconstruct the initial positions of $O(10^7)$ particles within a few hours with an off-the-shelf personal computer. We show that our method allows a unique, fast and precise recovery of subtle features of the initial power spectrum, such as the baryonic acoustic oscillations.
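The Monge-Ampère-Kantorovich formulation solved above has the standard form (shown for orientation; the notation is generic, not copied from the paper): with a convex potential $\Phi$ mapping initial positions $\mathbf{q}$ to final positions $\mathbf{x} = \nabla\Phi(\mathbf{q})$, mass conservation gives

```latex
\rho_0(\mathbf{q}) \;=\; \rho\!\left(\nabla\Phi(\mathbf{q})\right)\,
\det\!\left(\nabla^2\Phi(\mathbf{q})\right)
```

Convexity of $\Phi$ is what makes the optimization problem well posed and amenable to Newton's method, as the abstract notes.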

Journal ArticleDOI
TL;DR: In this paper, a low-cost pressure sensor array consisting of conductive fabric and conductive wires was deployed as a bedsheet with 32 rows and 32 columns resulting in 1024 nodes.
Abstract: Sleeping is an indispensable activity of human beings. Sleeping postures have a significant effect on sleeping quality and health. A real-time, low-cost sleeping posture recognition system with high privacy and a good user experience is desired. In this article, we propose a sleeping posture recognition system based on a low-cost pressure sensor array which consists of conductive fabric and conductive wires. The sensor array is deployed as a bedsheet with 32 rows and 32 columns, resulting in 1024 nodes. An Arduino Nano performs data collection using a 10-bit Analog-to-Digital Converter (ADC). The sampling rate of the overall sensor array is 0.4 frames/s. Six health-related sleeping postures of five participants can be recognized by a shallow Convolutional Neural Network (CNN) deployed on a Personal Computer (PC). The system accuracy reached 84.80% using the standard training-test method and 91.24% using the transfer learning-based subject-specific method. The real-time processing speed reached 434 μs/frame.
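The row/column scan an Arduino-style MCU performs over such a grid can be pictured as follows (a sketch; `read_adc` is a hypothetical callback standing in for the multiplexing and ADC sampling, not the authors' firmware):

```python
def scan_frame(read_adc, rows=32, cols=32):
    """One full scan of a rows x cols pressure grid, node by node.

    read_adc(r, c) is a hypothetical stand-in for "select row r and
    column c through the multiplexers, then sample the 10-bit ADC".
    """
    return [[read_adc(r, c) for c in range(cols)] for r in range(rows)]

# fake ADC for demonstration: "pressure" equals the node index
frame = scan_frame(lambda r, c: r * 32 + c)
print(len(frame), len(frame[0]))
```

Scanning 1024 nodes sequentially is what bounds the array's frame rate to the 0.4 frames/s reported above.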

Journal ArticleDOI
TL;DR: In this article, the authors study the potential of ML-enabled workflows for several vital signs such as heart and respiratory rates, cough, blood pressure, and oxygen saturation for COVID-19 infected as well as quarantined individuals.
Abstract: The COVID-19 pandemic has overwhelmed the existing healthcare infrastructure in many parts of the world. Healthcare professionals are not only over-burdened but also at a high risk of nosocomial transmission from COVID-19 patients. Screening and monitoring the health of a large number of susceptible or infected individuals is a challenging task. Although professional medical attention and hospitalization are necessary for high-risk COVID-19 patients, home isolation is an effective strategy for low and medium risk patients as well as for those who are at risk of infection and have been quarantined. However, this necessitates effective techniques for remotely monitoring the patients’ symptoms. Recent advances in Machine Learning (ML) and Deep Learning (DL) have strengthened the power of imaging techniques and can be used to remotely perform several tasks that previously required the physical presence of a medical professional. In this work, we study the prospects of vital signs monitoring for COVID-19 infected as well as quarantined individuals by using DL and image/signal-processing techniques, many of which can be deployed using simple cameras and sensors available on a smartphone or a personal computer, without the need of specialized equipment. We demonstrate the potential of ML-enabled workflows for several vital signs such as heart and respiratory rates, cough, blood pressure, and oxygen saturation. We also discuss the challenges involved in implementing ML-enabled techniques.

Proceedings ArticleDOI
29 Mar 2021
TL;DR: In this article, a two-layer blockchain-driven FL framework, called ChainsFL, is proposed; it is composed of multiple Raft-based shard networks (layer-1) and a Direct Acyclic Graph (DAG)-based main chain (layer-2), where layer-1 limits the scale of each shard to a small range of information exchange, and layer-2 allows each shard to update and share the model in parallel and asynchronously.
Abstract: Despite the advantages of Federated Learning (FL), such as devolving model training to intelligent devices and preserving data privacy, FL still faces the risk of a single point of failure and attacks from malicious participants. Recently, blockchain has been considered a promising solution that can transform FL training into a decentralized manner and improve security during training. However, traditional consensus mechanisms and architectures for blockchain can hardly handle large-scale FL tasks due to huge resource consumption, limited throughput, and high communication complexity. To this end, this paper proposes a two-layer blockchain-driven FL framework, called ChainsFL, which is composed of multiple Raft-based shard networks (layer-1) and a Direct Acyclic Graph (DAG)-based main chain (layer-2), where layer-1 limits the scale of each shard to a small range of information exchange, and layer-2 allows each shard to update and share the model in parallel and asynchronously. Furthermore, the FL procedure is designed in a blockchain manner, and a refined DAG consensus mechanism to mitigate the effect of stale models is proposed. To provide a proof-of-concept implementation and evaluation, the shard blockchain based on Hyperledger Fabric is deployed on a self-made gateway as layer-1, and the self-developed DAG-based main chain is deployed on a personal computer as layer-2. The experimental results show that ChainsFL provides acceptable and sometimes better training efficiency and stronger robustness compared with typical existing FL systems.
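The aggregation the main chain ultimately needs can be pictured as plain federated averaging over shard updates (a simplification for illustration; ChainsFL's DAG consensus and staleness handling are not modeled here):

```python
def federated_average(shard_models):
    """FedAvg sketch: element-wise mean of model weight vectors.

    Each entry of shard_models is one shard's flattened weight vector;
    equal weighting is assumed, unlike weighted FedAvg by sample count.
    """
    n = len(shard_models)
    length = len(shard_models[0])
    return [sum(m[i] for m in shard_models) / n for i in range(length)]

# two shards report their local weight vectors
print(federated_average([[1.0, 2.0], [3.0, 4.0]]))
```

Decentralizing this step is precisely what the DAG main chain provides: no single aggregator has to be trusted or stay online.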

Journal ArticleDOI
TL;DR: In this article, the authors explored public university students' perceptions of online classes during the COVID-19 pandemic in Bangladesh through an online survey and found that most students faced difficulty participating in virtual classes and could not communicate with their friends properly during online classes.
Abstract: The severe COVID-19 pandemic impacted public health and safety and educational systems worldwide. For fear of the further spread of disease, most educational institutions, including those in Bangladesh, postponed face-to-face teaching. Therefore, this study explores public university students' perceptions of online classes during the COVID-19 pandemic in Bangladesh. Data were collected from students of Islamic University, Kushtia, Bangladesh, through an online survey. The study followed both a qualitative and a quantitative approach, with the survey technique used as the instrument of data collection. Results showed that most students faced difficulty participating in virtual classes and could not communicate with their friends properly during online classes. They faced challenges in online schooling; the majority of the students preferred conventional types of learning to virtual classes and did not understand the content of virtual classes easily. The study also found that most students did not feel comfortable in online classes. Still, considering the present pandemic situation, they decided to participate in online classes to continue schooling. Besides, the study discovered that female students showed more positive perceptions of online classes than male students, and urban students a more optimistic appreciation than rural students. Moreover, laptop or personal computer users showed more positive perceptions towards online education than mobile users, and Broadband/Wi-Fi users more positive perceptions than mobile network users. These findings would be an essential guideline for governments, policymakers, technology developers, and university authorities in making better policy choices in the future.

Journal ArticleDOI
TL;DR: In this paper, semantic segmentation by deep learning was integrated into an MR system to enable dynamic occlusion handling and landscape index estimation for both existing and designed landscape assessment, which can be operated on a mobile device with video communication over the internet.

Journal ArticleDOI
TL;DR: In this article, the thermal dissipation performance with de-ionized water and nanofluid as coolants is presented, considering the effects of coolant type and heat sink configuration on the thermal resistance.

Journal ArticleDOI
TL;DR: In this article, an integrated framework for fault detection and diagnosis in nuclear power plants (NPPs) using unsupervised machine learning techniques, where no prior knowledge is required, is presented. However, due to the massive amount of monitoring data collected and stored in modern NPPs, it is difficult to extract the necessary information about the actual plant state in a timely and accurate manner.

Journal ArticleDOI
TL;DR: The 3D SIFT-aided DVC method, which achieves an unprecedented balance between accuracy, adaptability, and efficiency, shows great potential in the quantitative analysis of internal deformation.

Journal ArticleDOI
TL;DR: In this paper, various dc-dc converter topologies and control methodologies for transportation electrification are reviewed, and the challenges and future development trends for creating the next-generation dc-dc power converters are also discussed.
Abstract: DC-dc converters are solid-state dc "transformers" that convert one type of dc electricity into another via power electronic circuits. They have been used in a broad range of applications, from IT devices (such as a power supply providing different dc voltage levels inside a personal computer or a data center server) to medical equipment (e.g., a solar charger for a cardiac pacemaker), and from consumer electronics (e.g., chargers for various smart mobile devices) to high-power industrial applications, which is the focus area of this article. As there has been exponential growth in transportation electrification since the beginning of the 21st century, dc-dc converters have become one of the core devices in electric and autonomous vehicles. Extensive research and development efforts have been made on the design and implementation of dc-dc converters, aiming at higher power density, higher efficiency, and lower cost. In this article, various dc-dc converter topologies and control methodologies for transportation electrification are reviewed. The challenges and future development trends for creating the next-generation dc-dc power converters are also discussed.
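As a reminder of the kind of conversion being reviewed, the textbook ideal buck (step-down) converter in continuous conduction obeys Vout = D·Vin, where D is the switching duty cycle (a standard relation, not a result of the article):

```python
def buck_output_voltage(v_in, duty):
    """Ideal buck converter output in continuous conduction: Vout = D * Vin.

    Real converters, as the article discusses, add conduction and switching
    losses, so the achieved ratio is always somewhat lower.
    """
    assert 0.0 <= duty <= 1.0
    return duty * v_in

# e.g. a 12 V rail stepped down at a duty cycle of 0.1
print(buck_output_voltage(12.0, 0.1))
```

The same relation inverted (Vout = Vin / (1 - D)) describes the boost topology, which is why duty-cycle control is the common thread across the topologies surveyed.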

Journal ArticleDOI
Fuat Usta1
TL;DR: Bernstein approximation method has been applied along with Riemann–Liouville fractional integral operator to solve both the second and the first kind of fractional Volterra integral equations to show the applicability and efficiency of the proposed technique.

Journal ArticleDOI
TL;DR: In this paper, the tradeoff between average video quality and delivery latency was investigated for remotely rendered real-time virtual reality (VR) applications in 5G mobile networks by leveraging the interpolation between the client's field-of-view frame size and overall latency.
Abstract: The availability of high bandwidth with low-latency communication in 5G mobile networks enables remotely rendered real-time virtual reality (VR) applications. Remote rendering of VR graphics in a cloud removes the need for a local personal computer for graphics rendering and augments the weak graphics processing unit capacity of stand-alone VR headsets. However, to prevent the added network latency of remote rendering from ruining the user experience, rendering a locally navigable viewport that is larger than the field of view of the HMD is necessary. The size of the viewport required depends on latency: longer latency requires rendering a larger viewport and streaming more content. In this article, we aim to utilize multi-access edge computing to assist the backend cloud in such remotely rendered interactive VR. Given the dependency between latency and the amount and quality of the content streamed, our objective is to jointly optimize the tradeoff between average video quality and delivery latency. Formulating the problem as mixed integer nonlinear programming, we leverage the interpolation between the client's field-of-view frame size and overall latency to convert the problem to an integer nonlinear programming model, and then design efficient online algorithms to solve it. The results of our simulations, supplemented by real-world user data, reveal that, enabling a desired balance between video quality and latency, our algorithm achieves improvements of on average about 22% and 12% in terms of video delivery latency and 8% in terms of video quality compared to order-of-arrival, threshold-based, and random-location strategies, respectively.
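One way to see why the required viewport grows with latency is a simple kinematic bound (our illustration with assumed numbers, not the paper's optimization model): during one round trip the head can rotate at most head speed times latency, so the rendered viewport must exceed the HMD's field of view by at least that angle on each side.

```python
def viewport_margin_deg(latency_ms, head_speed_deg_s=120.0):
    """Extra viewport angle (per side) needed to hide rendering latency.

    Kinematic bound only: margin = max head angular speed * latency.
    The 120 deg/s default is an assumed, not measured, head-turn speed.
    """
    return head_speed_deg_s * (latency_ms / 1000.0)

# 50 ms of motion-to-photon latency at 120 deg/s head rotation
print(viewport_margin_deg(50.0))
```

Halving latency via an edge server therefore roughly halves the extra content that must be streamed, which is the coupling the paper's joint optimization exploits.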

Journal ArticleDOI
TL;DR: Open-source mesh optimization of smaller dental prostheses in the current study produced minimal loss of geometric and volumetric details.
Abstract: Purpose: Mesh optimization reduces the texture quality of three-dimensional models in order to reduce storage file size and the computational load on a personal computer. The current study aims to explore mesh optimization using open-source (free) software in the context of prosthodontic application. Materials and methods: An auricular prosthesis, a complete denture, and anterior and posterior crowns were constructed using conventional methods and laser scanned to create computerized three-dimensional meshes. The meshes were optimized independently by four computer-aided design software packages (Meshmixer, Meshlab, Blender, and SculptGL) to 100%, 90%, 75%, 50%, and 25% of the original file size. Upon optimization, the following parameters were virtually evaluated and compared: mesh vertices, file size, mesh surface area (SA), mesh volume (V), interpoint discrepancies (geometric similarity based on virtual point overlapping), and spatial similarity (volumetric similarity based on shape overlapping). The influence of software and optimization on the surface area and volume of each prosthesis was evaluated independently using multiple linear regression. Results: There were clear observable differences in vertices, file size, surface area, and volume. The choice of software significantly influenced the overall virtual parameters of the auricular prosthesis [SA: F(4,15) = 12.93, R2 = 0.67, P 97%. Conclusion: Open-source mesh optimization of smaller dental prostheses in the current study produced minimal loss of geometric and volumetric details. SculptGL models were most influenced by the amount of optimization performed.
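The SA and V parameters compared across optimization levels can be computed directly from a triangle mesh; a minimal sketch (textbook formulas, not the internals of any of the four software packages):

```python
def mesh_surface_area_volume(vertices, faces):
    """Surface area and |signed volume| of a triangle mesh.

    Area comes from half the cross-product magnitude per face; volume from
    the divergence theorem, valid for closed, consistently oriented meshes.
    """
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    area = vol = 0.0
    for i, j, k in faces:
        p, q, r = vertices[i], vertices[j], vertices[k]
        c = cross(sub(q, p), sub(r, p))
        area += 0.5 * dot(c, c) ** 0.5          # triangle area
        vol += dot(p, cross(q, r)) / 6.0        # signed tetrahedron to origin
    return area, abs(vol)
```

Tracking how these two numbers drift as the vertex count drops is exactly the kind of comparison the study performs.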

Journal ArticleDOI
TL;DR: In this paper, the authors present a system architecture that, through the hardware and software synchronization of multiple ULA-OP 256 scanners, may implement advanced open platforms with an arbitrary number of channels.
Abstract: Ultrasound open platforms are programmable and flexible tools for the development and test of novel methods. In most cases, they embed the electronics for the independent control of (maximum) 256 probe elements. However, a higher number of channels is needed for the control of 2-D array probes. This paper presents a system architecture that, through the hardware and software synchronization of multiple ULA-OP 256 scanners, may implement advanced open platforms with an arbitrary number of channels. The proposed solution needs a single personal computer, maintains real-time features, and preserves portability. A prototype demonstrator, composed of two ULA-OP 256 scanners connected to 512 elements of a matrix array, was implemented and tested according to different channel configurations. Experiments performed under MATLAB control confirmed that by doubling the number of elements (from 256 to 512) the signal-to-noise and contrast ratios improve by 9 dB and 3 dB, respectively. Furthermore, as a full 512-channel scanner, the demonstrator can produce real-time B-mode images at 18 Hz, high enough for probe positioning during acquisitions. Also, the demonstrator permitted the implementation of a new high frame rate, bi-plane, triplex modality. All probe elements are excited to simultaneously produce two planar, perpendicular diverging waves. Each scanner independently processes the echoes received by the 256 connected elements to beamform 1300 frames per second. For each insonified plane, good quality morphological (B-mode), qualitative (color flow-), and quantitative (spectral-) Doppler images are finally shown in real-time by a dedicated interface.

Journal ArticleDOI
TL;DR: In this article, the authors assess the integrated impact of urbanisation and climate change on pluvial flooding in Kathmandu Metropolitan City, using the Personal Computer Storm Water Management Model.

Posted Content
TL;DR: In this article, the performance of four quadratic unconstrained binary optimization problem solvers, namely D-Wave Hybrid Solver Service (HSS), Toshiba Simulated Bifurcation Machine (SBM), Fujitsu Digital Annealer (DA), and simulated annealing on a personal computer, was benchmarked.
Abstract: Recently, inspired by quantum annealing, many solvers specialized for unconstrained binary quadratic programming problems have been developed. For further improvement and application of these solvers, it is important to clarify the differences in their performance on various types of problems. In this study, the performance of four quadratic unconstrained binary optimization (QUBO) problem solvers, namely the D-Wave Hybrid Solver Service (HSS), Toshiba Simulated Bifurcation Machine (SBM), Fujitsu Digital Annealer (DA), and simulated annealing on a personal computer, was benchmarked. The problems used for benchmarking were instances of real problems in MQLib, instances of the SAT-UNSAT phase transition point of random not-all-equal 3-SAT (NAE 3-SAT), and the Ising spin glass Sherrington-Kirkpatrick (SK) model. Concerning the MQLib instances, the HSS ranked first in performance; for NAE 3-SAT, the DA ranked first; and regarding the SK model, the SBM ranked first. These results may help understand the strengths and weaknesses of these solvers.
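The "simulated annealing on a personal computer" baseline can be sketched in a few lines (a toy version; the cooling schedule, move set, and constants are our assumptions, not the benchmark's configuration):

```python
import math
import random

def sa_qubo(Q, steps=2000, t0=2.0, t1=0.01, seed=1):
    """Simulated annealing for a QUBO: minimize x^T Q x over x in {0,1}^n."""
    rng = random.Random(seed)
    n = len(Q)

    def energy(x):
        return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    best, best_e = x[:], e
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)        # geometric cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                                # propose a single bit flip
        e2 = energy(x)
        if e2 <= e or rng.random() < math.exp(-(e2 - e) / t):
            e = e2
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[i] ^= 1                            # rejected: undo the flip
    return best, best_e
```

Production solvers replace the full-energy recomputation with O(1) incremental delta updates per flip; it is kept naive here for clarity.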

Journal ArticleDOI
TL;DR: The unified gas-kinetic wave-particle method is further developed for diatomic gas with the energy exchange between translational and rotational modes for flow study in all regimes and the computational cost and memory requirement could be reduced by several orders of magnitude for the high speed and high temperature flow simulation.

Journal ArticleDOI
TL;DR: In this paper, the authors numerically explored the possibility of ultrathin layering and high efficiency of graphene as a back surface field (BSF) based on a CdTe solar cell by Personal computer one-dimensional (PC1D) simulation.
Abstract: This paper numerically explores the possibility of ultrathin layering and high efficiency of graphene as a back surface field (BSF) on a CdTe solar cell via Personal Computer One-Dimensional (PC1D) simulation. CdTe solar cells were characterized and studied by varying the carrier lifetime, doping concentration, thickness, and bandgap of the graphene layer. According to the simulation results, the highest short-circuit current (Isc = 2.09 A), power conversion efficiency (η = 15%), and quantum efficiency (QE ~85%) were achieved at a carrier lifetime of 1 × 10^3 μs and a doping concentration of 1 × 10^17 cm^−3 for the CdTe solar cell with graphene as the BSF layer. The thickness of the graphene BSF layer (1 μm) was shown to be ultrathin, optimal, and obtainable for the fabrication of high-performance CdTe solar cells, confirming the suitability of graphene as a BSF material. This simulation confirmed that a CdTe solar cell with the proposed graphene BSF layer might be highly efficient with optimized parameters for fabrication.
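The reported power conversion efficiency follows the standard definition η = Voc·Isc·FF / Pin; a small helper (the numeric values below are illustrative assumptions, not the paper's simulated operating point):

```python
def cell_efficiency(v_oc, i_sc, ff, p_in):
    """Solar cell power conversion efficiency: eta = Voc * Isc * FF / Pin.

    v_oc in volts, i_sc in amperes, ff dimensionless fill factor,
    p_in incident optical power in watts. Standard textbook definition.
    """
    return v_oc * i_sc * ff / p_in

# hypothetical example: 0.8 V, 2.0 A, FF = 0.75, 8 W incident -> 15%
print(cell_efficiency(0.8, 2.0, 0.75, 8.0))
```

PC1D reports these terminal quantities directly, so the efficiency sweep over lifetime and doping reduces to evaluating this ratio at each simulated operating point.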

Journal ArticleDOI
TL;DR: A novel baseball pitch training software system, designed to interact with virtual objects in a virtual reality or augmented reality environment by combining a Unity game engine and a digital glove, has potential for application in medical rehabilitation and physical training.
Abstract: This paper proposes a novel baseball pitch training software system designed to interact with a virtual object in a virtual reality or augmented reality environment by combining a Unity game engine and a digital glove. An embedded microcontroller unit with a communication interface in the digital glove collects sensory data, including mechanical physical-limit feedback, electric-shock tactile feedback, finger-bending sensations, and three-dimensional spatial positioning, and then interacts with the Unity game engine and HTC Vive through a personal computer. The user thereby experiences the sensation of holding an object in virtual reality. Autodesk Maya software is used to design a baseball pitch training mainframe with modeling and animation. The Unity game engine can load baseball pitching scenarios and create a three-dimensional virtual reality stream that is sent to an HTC Vive headset. To seamlessly complete the data exchange between the digital glove and the Unity engine, we use a shared-memory mechanism implemented in a C# Windows program. The embedded C# script within the Unity game engine plays an important interactive role between virtual reality scenes and the digital glove. Our experimental results showed that players received physical feedback when touching virtual objects. The proposed design has potential for application in medical rehabilitation and physical training.