
Showing papers by "Kai Li published in 2023"


Posted ContentDOI
24 Jan 2023-bioRxiv
TL;DR: In this article, the authors used millimeter-scale volumetric electron microscopy to investigate the connectivity of inhibitory neurons across a dense neuronal population spanning all layers of mouse visual cortex with synaptic resolution.
Abstract: Mammalian cortex features a large diversity of neuronal cell types, each with characteristic anatomical, molecular and functional properties. Synaptic connectivity rules powerfully shape how each cell type participates in the cortical circuit, but comprehensively mapping connectivity at the resolution of distinct cell types remains difficult. Here, we used millimeter-scale volumetric electron microscopy to investigate the connectivity of inhibitory neurons across a dense neuronal population spanning all layers of mouse visual cortex with synaptic resolution. We classified all 1183 excitatory neurons within a 100 micron column into anatomical subclasses using quantitative morphological and synapse features based on full dendritic reconstructions, finding both familiar subclasses corresponding to axonal projections and novel intralaminar distinctions based on synaptic properties. To relate these subclasses to single-cell connectivity, we reconstructed all 164 inhibitory interneurons in the same column, producing a wiring diagram of inhibition with more than 70,000 synapses. We found widespread cell-type-specific inhibition, including interneurons selectively targeting certain excitatory subpopulations among spatially intermingled neurons in layer 2/3, layer 5, and layer 6. Globally, inhibitory connectivity was organized into “motif groups,” heterogeneous collections of cells that collectively target both perisomatic and dendritic compartments of the same combinations of excitatory subtypes. We also discovered a novel category of disinhibitory-specialist interneuron that preferentially targets basket cells. Collectively, our analysis revealed new organizing principles for cortical inhibition and will serve as a powerful foundation for linking modern multimodal neuronal atlases with the cortical wiring diagram.

11 citations


Posted ContentDOI
14 Mar 2023-bioRxiv
TL;DR: In this article, the authors studied the connectivity-function relationship in excitatory neurons of the mouse visual cortex across interlaminar and interarea projections, assessing connection selectivity at the coarse axon trajectory and fine synaptic formation levels.
Abstract: To understand how the brain computes, it is important to unravel the relationship between circuit connectivity and function. Previous research has shown that excitatory neurons in layer 2/3 of the primary visual cortex of mice with similar response properties are more likely to form connections. However, technical challenges of combining synaptic connectivity and functional measurements have limited these studies to few, highly local connections. Utilizing the millimeter scale and nanometer resolution of the MICrONS dataset, we studied the connectivity-function relationship in excitatory neurons of the mouse visual cortex across interlaminar and interarea projections, assessing connection selectivity at the coarse axon trajectory and fine synaptic formation levels. A digital twin model of this mouse, which accurately predicted responses to arbitrary video stimuli, enabled a comprehensive characterization of the function of neurons. We found that neurons with highly correlated responses to natural videos tended to be connected with each other, not only within the same cortical area but also across multiple layers and visual areas, including feedforward and feedback connections, whereas we did not find that orientation preference predicted connectivity. The digital twin model separated each neuron’s tuning into a feature component (what the neuron responds to) and a spatial component (where the neuron’s receptive field is located). We show that the feature, but not the spatial, component predicted which neurons were connected at the fine synaptic scale. Together, our results demonstrate that the “like-to-like” connectivity rule generalizes to multiple connection types, and that the rich MICrONS dataset is suitable for further refining a mechanistic understanding of circuit structure and function.

10 citations


Posted ContentDOI
15 Mar 2023-bioRxiv
TL;DR: NEURD as discussed by the authors is a software package that decomposes each meshed neuron into a compact and extensively-annotated graph representation, which can enable many downstream analyses of neural morphology and connectivity.
Abstract: We are now in the era of millimeter-scale electron microscopy (EM) volumes collected at nanometer resolution (Shapson-Coe et al., 2021; Consortium et al., 2021; Eichler et al., 2017; Zheng et al., 2018). Dense reconstruction of cellular compartments in these EM volumes has been enabled by recent advances in Machine Learning (ML) (Lee et al., 2017; Wu et al., 2021; Lu et al., 2021; Macrina et al., 2021). Automated segmentation methods can now yield exceptionally accurate reconstructions of cells, but despite this accuracy, laborious post-hoc proofreading is still required to generate large connectomes free of merge and split errors. The elaborate 3-D meshes of neurons produced by these segmentations contain detailed morphological information, from the diameter, shape, and branching patterns of axons and dendrites, down to the fine-scale structure of dendritic spines. However, extracting information about these features can require substantial effort to piece together existing tools into custom workflows. Building on existing open-source software for mesh manipulation, here we present "NEURD", a software package that decomposes each meshed neuron into a compact and extensively-annotated graph representation. With these feature-rich graphs, we implement workflows for state-of-the-art automated post-hoc proofreading of merge errors, cell classification, spine detection, axon-dendritic proximities, and other features that can enable many downstream analyses of neural morphology and connectivity. NEURD can make these new massive and complex datasets more accessible to neuroscience researchers focused on a variety of scientific questions.

5 citations


Posted ContentDOI
22 Mar 2023-bioRxiv
TL;DR: In this paper, a data-driven approach using graph-based machine learning methods was used to obtain a low-dimensional morphological "bar code" describing more than 30,000 excitatory neurons in mouse visual areas V1, AL and RL that were reconstructed from a millimeter-scale serial-section electron microscopy volume.
Abstract: Neurons in the neocortex exhibit astonishing morphological diversity which is critical for properly wiring neural circuits and giving neurons their functional properties. The extent to which the morphological diversity of excitatory neurons forms a continuum or is built from distinct clusters of cell types remains an open question. Here we took a data-driven approach using graph-based machine learning methods to obtain a low-dimensional morphological “bar code” describing more than 30,000 excitatory neurons in mouse visual areas V1, AL and RL that were reconstructed from a millimeter scale serial-section electron microscopy volume. We found a set of principles that captured the morphological diversity of the dendrites of excitatory neurons. First, their morphologies varied with respect to three major axes: soma depth, total apical and basal skeletal length. Second, neurons in layer 2/3 showed a strong trend of a decreasing width of their dendritic arbor and a smaller tuft with increasing cortical depth. Third, in layer 4, atufted neurons were primarily located in the primary visual cortex, while tufted neurons were more abundant in higher visual areas. Fourth, we discovered layer 4 neurons in V1 on the border to layer 5 which showed a tendency towards avoiding deeper layers with their dendrites. In summary, excitatory neurons exhibited a substantial degree of dendritic morphological variation, both within and across cortical layers, but this variation mostly formed a continuum, with only a few notable exceptions in deeper layers.

4 citations


Posted ContentDOI
29 Mar 2023-bioRxiv
TL;DR: NEURD as discussed by the authors is a software package that decomposes each meshed neuron into a compact and extensively-annotated graph representation; with these feature-rich graphs, the authors implement workflows for state-of-the-art automated post-hoc proofreading of merge errors, cell classification, spine detection, axon-dendritic proximities, and other features that can enable many downstream analyses of neural morphology and connectivity.
Abstract: We are now in the era of millimeter-scale electron microscopy (EM) volumes collected at nanometer resolution (Shapson-Coe et al., 2021; Consortium et al., 2021). Dense reconstruction of cellular compartments in these EM volumes has been enabled by recent advances in Machine Learning (ML) (Lee et al., 2017; Wu et al., 2021; Lu et al., 2021; Macrina et al., 2021). Automated segmentation methods can now yield exceptionally accurate reconstructions of cells, but despite this accuracy, laborious post-hoc proofreading is still required to generate large connectomes free of merge and split errors. The elaborate 3-D meshes of neurons produced by these segmentations contain detailed morphological information, from the diameter, shape, and branching patterns of axons and dendrites, down to the fine-scale structure of dendritic spines. However, extracting information about these features can require substantial effort to piece together existing tools into custom workflows. Building on existing open-source software for mesh manipulation, here we present “NEURD”, a software package that decomposes each meshed neuron into a compact and extensively-annotated graph representation. With these feature-rich graphs, we implement workflows for state-of-the-art automated post-hoc proofreading of merge errors, cell classification, spine detection, axon-dendritic proximities, and other features that can enable many downstream analyses of neural morphology and connectivity. NEURD can make these new massive and complex datasets more accessible to neuroscience researchers focused on a variety of scientific questions.

3 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analysis was performed to evaluate the effectiveness and safety of prophylactic central neck dissection (PCND) in patients with clinically node-negative (cN0) papillary thyroid carcinoma.
Abstract: Objective This meta-analysis was performed to evaluate the effectiveness and safety of prophylactic central neck dissection (PCND) in patients with clinically node-negative (cN0) papillary thyroid carcinoma. Materials and methods A meta-analysis of the literature was performed using the key words “papillary thyroid carcinomas” and “lymph node excisions” for searches of electronic databases. Complications such as transient hypocalcemia, permanent hypocalcemia, transient and permanent hypoparathyroidism, transient and permanent vocal cord paralysis, transient and permanent recurrent laryngeal nerve injury, and local recurrence were pooled by meta-analysis. Stata 17.0 was used to carry out the meta-analysis. Results Data were extracted from 15 studies. In the present review, the group of patients who had total thyroidectomy (TT) with PCND had a lower local recurrence rate than the group with TT alone (OR 0.22, 95% CI 0.10-0.45, P < 0.001), whereas the incidences of permanent hypocalcemia (OR 4.24, 95% CI 1.05-17.22, P = 0.043) and transient hypoparathyroidism (OR 2.14, 95% CI 1.34-3.42, P = 0.001) were higher. No significant differences were recorded in the incidence of other complications: transient hypocalcemia (OR 2.24, 95% CI 0.77-6.51, P = 0.138), permanent hypoparathyroidism (OR 1.70, 95% CI 0.89-3.27, P = 0.111), transient vocal cord paralysis (OR 1.48, 95% CI 0.78-2.83, P = 0.231), permanent vocal cord paralysis (OR 1.44, 95% CI 0.53-3.94, P = 0.477), transient recurrent laryngeal nerve injury (OR 1.47, 95% CI 0.93-2.32, P = 0.102) and permanent recurrent laryngeal nerve injury (OR 1.24, 95% CI 0.56-2.74, P = 0.587) between the two groups. Conclusion Compared with TT alone, TT with PCND was more effective in reducing local recurrence and did not increase the risk of injury to the recurrent laryngeal nerve or the vocal cords, although the risks of permanent hypocalcemia and transient hypoparathyroidism were higher.
Therefore, we believe that TT with PCND should be recommended for patients with cN0 PTC. Systematic review registration: https://www.crd.york.ac.uk/prospero/, identifier CRD42022355078.
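The pooled odds ratios reported above come from standard fixed-effect meta-analytic pooling on the log-odds scale. As a minimal sketch of how such pooling works, using invented 2x2 counts (not the data from the 15 included studies) and inverse-variance (Woolf) weighting:

```python
import math

# Hypothetical per-study counts: (events_PCND, total_PCND, events_TT, total_TT).
# These numbers are illustrative only, not extracted from the pooled studies.
studies = [
    (4, 120, 15, 118),
    (2, 85, 9, 90),
    (3, 150, 12, 145),
]

weights, log_ors = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c            # non-events in each arm
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d      # Woolf variance of the log odds ratio
    weights.append(1.0 / var)
    log_ors.append(log_or)

# Inverse-variance weighted mean on the log scale, then back-transform.
pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
or_pooled = math.exp(pooled)
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(f"Pooled OR = {or_pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```

A random-effects model (e.g. DerSimonian-Laird) would additionally widen the interval when between-study heterogeneity is present.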

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors presented a high-performance cloud removal architecture called Progressive Multi-scale Attention Autoencoder (PMAA), which simultaneously leverages global and local information.
Abstract: Satellite imagery analysis plays a vital role in remote sensing, but the information loss caused by cloud cover seriously hinders its application. This study presents a high-performance cloud removal architecture called Progressive Multi-scale Attention Autoencoder (PMAA), which simultaneously leverages global and local information. It mainly consists of a cloud detection backbone and a cloud removal module. The cloud detection backbone uses cloud masks to reinforce cloudy areas to prompt the cloud removal module. The cloud removal module mainly comprises a novel Multi-scale Attention Module (MAM) and a Local Interaction Module (LIM). PMAA establishes the long-range dependency of multi-scale features using MAM and modulates the reconstruction of the fine-grained details using LIM, allowing for the simultaneous representation of fine- and coarse-grained features at the same level. With the help of diverse and multi-scale feature representation, PMAA outperforms the previous state-of-the-art model CTGAN consistently on the Sen2_MTC_Old and Sen2_MTC_New datasets. Furthermore, PMAA has a considerable efficiency advantage, with only 0.5% and 14.6% of the parameters and computational complexity of CTGAN, respectively. These extensive results highlight the potential of PMAA as a lightweight cloud removal network suitable for deployment on edge devices. We will release the code and trained models to facilitate the study in this direction.

1 citation


Journal ArticleDOI
TL;DR: In this paper, a soft thresholding module is added to the residual structure of the backbone network to perform noise reduction on interference factors such as light, to enhance the robustness of the algorithm.
Abstract: With the development of deep convolutional neural networks, the performance of pedestrian detection has improved rapidly. However, there are still many problems in small target pedestrian detection, for example noise interference (such as from light), target occlusion, and low detection accuracy. In order to solve the above problems, this paper proposes an improved small target pedestrian detection algorithm named PF_YOLOv4, based on the YOLOv4 algorithm. The algorithm improves on YOLOv4 in three respects: first, a soft thresholding module is added to the residual structure of the backbone network to perform noise reduction on interference factors such as light, to enhance the robustness of the algorithm; second, depthwise separable convolution replaces the traditional convolution in the YOLOv4 residual structure, to reduce the number of network model parameters; finally, the Convolutional Block Attention Module (CBAM) is added after the output feature map of the backbone network to enhance the network's feature expression. Experimental results show that the PF_YOLOv4 algorithm outperforms most state-of-the-art algorithms in detecting small target pedestrians. The mean Average Precision (mAP) of the PF_YOLOv4 algorithm is 2.35% higher than that of the YOLOv4 algorithm and 9.67% higher than that of the YOLOv3 algorithm, while the detection speed is slightly higher than that of the YOLOv4 algorithm.
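The soft thresholding at the heart of the first improvement shrinks small, noise-dominated activations to zero while preserving the sign of large ones. A NumPy sketch of the operation, and of how such a module might sit on a residual branch (the block layout is an illustrative assumption, not the exact PF_YOLOv4 module, where the threshold is typically learned per channel):

```python
import numpy as np

def soft_threshold(x, tau):
    # Element-wise soft thresholding: sign(x) * max(|x| - tau, 0);
    # activations weaker than tau are treated as noise and zeroed.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def residual_shrinkage_block(x, tau):
    # Illustrative residual structure: denoise the branch, keep the shortcut.
    return x + soft_threshold(x, tau)

feat = np.array([-0.05, 0.3, -1.2, 0.02])
print(soft_threshold(feat, 0.1))  # small values are zeroed, large ones shrink by tau
```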

1 citation


Journal ArticleDOI
TL;DR: Li et al. as mentioned in this paper proposed a framework that learns trustworthy models from noisy labels for surface defect detection, which represents the suspicious regions with consistent and precise elements at the pixel level and redesigns the loss function.
Abstract: In surface defect detection, there are some suspicious regions that cannot be uniquely classified as abnormal or normal. The annotation of suspicious regions is easily affected by factors such as workers’ emotional fluctuations and judgment standards, resulting in noisy labels, which in turn lead to missed and false detections, and ultimately to inconsistent judgments of product quality. Unlike the usual noisy labels, the ones encountered in surface defect detection appear to be inconsistent rather than mislabeled. The noise occurs in almost every label and is difficult to correct or evaluate. In this paper, we propose a framework that learns trustworthy models from noisy labels for surface defect detection. First, to avoid the negative impact of noisy labels on the model, we represent the suspicious regions with consistent and precise elements at the pixel level and redesign the loss function. Second, without changing the network structure or adding any extra labels, a pluggable spatially correlated Bayesian module is proposed. Finally, a defect discrimination confidence is proposed to measure the uncertainty, with which anomalies can be identified as defects. Our results indicate not only the effectiveness of the proposed method in learning from noisy labels, but also its robustness and real-time performance.

1 citation


Journal ArticleDOI
TL;DR: In this paper , the β-tricalcium phosphate (β-TCP)/titanium dioxide (TiO2) porous ceramics scaffolds were fabricated by 3D gel-printing sintering for the first time.
Abstract: Human bone is composed of cortical bone and cancellous bone. The interior portion of natural bone is cancellous, with a porosity of 50%-90%, while the outer layer is made of dense cortical bone, whose porosity is not higher than 10%. Porous ceramics have become a research hotspot in bone tissue engineering by virtue of their similarity to the mineral constituents and physiological structure of human bone. However, it is challenging to utilize conventional manufacturing methods to fabricate porous structures with precise shapes and pore sizes. Three-dimensional (3D) printing of ceramics is currently the latest research trend because it has many advantages in the fabrication of porous scaffolds, which can meet the requirements of cancellous bone strength, arbitrarily complex shapes, and individualized design. In this study, β-tricalcium phosphate (β-TCP)/titanium dioxide (TiO2) porous ceramic scaffolds were fabricated by 3D gel-printing sintering for the first time. The chemical constituents, microstructure, and mechanical properties of the 3D-printed scaffolds were characterized. After sintering, a uniform porous structure with appropriate porosity and pore sizes was observed. Furthermore, biological mineralization activity and biocompatibility were evaluated by an in vitro cell assay. The results demonstrated that the incorporation of TiO2 (5 wt%) significantly improved the compressive strength of the scaffolds, with an increase of 283%. Additionally, the in vitro results showed that the β-TCP/TiO2 scaffold had no toxicity. Meanwhile, the adhesion and proliferation of MC3T3-E1 cells on the scaffolds were desirable, revealing that the β-TCP/TiO2 scaffolds can be used as a promising candidate for repair scaffolding in orthopedics and traumatology.

1 citation


Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a graph neural network recommendation method to alleviate the user cold-start problem caused by too few relevant items in personalized recommendation collaborative filtering, which transforms the bipartite graph of user-item interactions into the spectral domain, using a random walk method to discover potential correlation information between users and items.
Abstract: This paper proposes a novel graph neural network recommendation method to alleviate the user cold-start problem caused by too few relevant items in personalized recommendation collaborative filtering. A deep feedforward neural network is constructed to transform the bipartite graph of user–item interactions into the spectral domain, using a random walk method to discover potential correlation information between users and items. Then, a finite-order polynomial is used to optimize the convolution process and accelerate the convergence of the convolutional network, so that deep connections between users and items in the spectral domain can be discovered quickly. We conducted experiments on the classic dataset MovieLens-1M. The recall and precision were improved, and the results show that the method can improve the accuracy of recommendation results, exploit the association information between users and items more effectively, and significantly alleviate the user cold-start problem.
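The spectral-domain convolution with a finite-order polynomial filter described above can be sketched as follows. The toy interaction matrix and fixed polynomial coefficients are illustrative assumptions; the paper's model learns the coefficients and enriches the graph with random walks:

```python
import numpy as np

# Toy user-item interactions (3 users x 4 items); hypothetical data.
R = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)

# Bipartite adjacency over the combined user+item node set.
n_u, n_i = R.shape
A = np.zeros((n_u + n_i, n_u + n_i))
A[:n_u, n_u:] = R
A[n_u:, :n_u] = R.T

# Symmetrically normalized Laplacian L = I - D^(-1/2) A D^(-1/2).
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.where(d > 0, d, 1.0)))  # guard isolated nodes
L = np.eye(n_u + n_i) - D_inv_sqrt @ A @ D_inv_sqrt

# Finite-order polynomial filter sum_k theta_k * L^k; the coefficients are
# fixed here for illustration, whereas the model would learn them.
theta = [1.0, -0.5, 0.25]
H = sum(t * np.linalg.matrix_power(L, k) for k, t in enumerate(theta))

# The user-to-item block of the filtered operator acts as affinity scores.
scores = H[:n_u, n_u:]
```

The polynomial form avoids an explicit eigendecomposition of L, which is what makes the spectral convolution fast enough for large graphs.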

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new method, CM-Unet, based on the U-Net framework, to address the problems of holes, omissions, and fuzzy edge segmentation.
Abstract: Semantic segmentation is an active research area for high-resolution (HR) remote sensing image processing. Most existing algorithms segment distinct features well; however, for complex scenes, many algorithms have insufficient segmentation accuracy. In this study, we propose a new method, CM-Unet, based on the U-Net framework, to address the problems of holes, omissions, and fuzzy edge segmentation. First, we add a channel attention mechanism to the encoding network and a residual module to transmit information. Second, a multi-feature fusion mechanism is proposed in the decoding network, and an improved sub-pixel convolution method replaces the traditional upsampling operation. We conducted simulation experiments on the Potsdam, Vaihingen and GID datasets. The experimental results show that the proposed CM-Unet requires approximately 62 ms per image for segmentation, achieves an MIoU of 90.4%, and uses 20.95 MFLOPs of floating point operations (FLOPs). Compared with U-Net, CM-Unet increases the total number of parameters and floating point operations only slightly, yet achieves the best segmentation results among the compared models. CM-Unet can segment remote sensing images efficiently and accurately owing to its lower time consumption and space requirements; the precision of its segmentation results is better than that of other methods.
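Sub-pixel convolution upsamples by first producing C·r² feature channels with an ordinary convolution and then rearranging them into space (pixel shuffle). A NumPy sketch of the rearrangement step only (the learned convolution that produces the channels is omitted):

```python
import numpy as np

def pixel_shuffle(x, r):
    # Rearrange (C*r^2, H, W) feature maps into (C, H*r, W*r):
    # each group of r*r channels fills one r x r spatial patch.
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(4, dtype=float).reshape(4, 1, 1)  # 4 channels, 1x1 map
y = pixel_shuffle(x, 2)                         # -> one 2x2 map
```

The channel ordering here follows the common (C, r, r, H, W) convention used by mainstream pixel-shuffle operators.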

Journal ArticleDOI
TL;DR: In this paper, a modified elite ant system algorithm with local search (MEASL) is proposed to solve the problem of scheduling a group of jobs with arbitrary release times, non-identical sizes, and different processing times on non-identical parallel batch processing machines to minimize the total completion time.

Journal ArticleDOI
TL;DR: In this paper, the authors summarized the application progress of sodium alginate-based hydrogel scaffolds for bone tissue repair based on 3D printing technology and offered relevant opinions and comments to provide a theoretical basis for follow-up research.
Abstract: In recent years, hydrogels have been widely used in the biomedical field as materials with excellent bionic structures and biological properties. Among them, the excellent comprehensive properties of natural polymer hydrogels, represented by sodium alginate, have attracted great attention from researchers. At the same time, by physically blending sodium alginate with other materials, the poor cell adhesion and mechanical properties of sodium alginate hydrogels can be improved directly, without chemical modification of sodium alginate. The composite blending of multiple materials can also improve the functionality of sodium alginate hydrogels, and the prepared composite hydrogels have a wider field of application. In addition, based on the adjustable viscosity of sodium alginate-based hydrogels, they can be loaded with cells to prepare biological inks, and scaffolds can be printed by 3D printing technology for the repair of bone defects. This paper first summarizes the improvement in properties achieved by physically blending sodium alginate with other materials. Then, it summarizes recent progress in the application of sodium alginate-based hydrogel scaffolds for bone tissue repair based on 3D printing technology. Moreover, we provide relevant opinions and comments to establish a theoretical basis for follow-up research.

Journal ArticleDOI
TL;DR: The Graph-Based Memory Reconstruction (GBMR) framework as mentioned in this paper maintains an attribute graph on the agent's memory and retrieves its critical nodes to build and update potential paths among these nodes.
Abstract: Reinforcement learning (RL) algorithms typically require orders of magnitude more interactions than humans to learn effective policies. Research on memory in neuroscience suggests that humans' learning efficiency benefits from associating their experiences and reconstructing potential events. Inspired by this finding, we introduce a human brain-like memory structure for agents and build a general learning framework based on this structure to improve the RL sampling efficiency. Since this framework is similar to the memory reconstruction process in psychology, we name the newly proposed RL framework Graph-Based Memory Reconstruction (GBMR). In particular, GBMR first maintains an attribute graph on the agent's memory and then retrieves its critical nodes to build and update potential paths among these nodes. This novel pipeline drives the RL agent to learn faster with its memory-enhanced value functions and reduces interactions with the environment by reconstructing its valuable paths. Extensive experimental analyses and evaluations in the Grid Maze and some challenging Atari environments demonstrate GBMR's superiority over traditional RL methods. We will release the source code and trained models to facilitate further studies in this research direction.

Journal ArticleDOI
TL;DR: In this article, the authors describe the research status of lithium technology safety issues visually and dynamically, elucidate the pressing issues in this field and reveal future development trends using metrology literature analysis and knowledge graph methods.
Abstract: (1) Background: Lithium plays an extremely important role in the national economy. However, the chemical activity of lithium metal leads to many safety problems in the application of lithium technology, which is the bottleneck restricting the development of lithium technology. The purpose of this paper is to describe the research status of lithium technology safety issues visually and dynamically, elucidate the pressing issues in this field and reveal future development trends. (2) Methods: In this paper, metrology literature analysis and knowledge graph methods were adopted. With the help of visualization tools, namely CiteSpace and VOSviewer, literature data exported from the Web of Science were analyzed in a multi-angle and all-round way. (3) Results: The number of papers in the field of lithium technology safety showed an accelerating trend, and collaboration between authors and institutions is close. The scope of the research has gradually shifted from the early focus on the medical application of lithium and the resulting safety issues to the health and safety of lithium batteries. (4) Conclusions: Lithium technology safety is a hot topic in the current academic community. Future research will continue to focus on the safety problems and solutions of lithium technology, and pay more attention to sustainable development, especially research on the improvement and optimization of lithium-ion battery performance.

Journal ArticleDOI
TL;DR: This paper proposes Auto-PruMUX, a meta-level model that predicts high-performance parameters for pruning and multiplexing given a desired accuracy-loss budget, providing a practical method to leverage the combination effectively.
Abstract: As language models increase in size by the day, methods for efficient inference are critical to leveraging their capabilities for various applications. Prior work has investigated techniques like model pruning, knowledge distillation, and data multiplexing to increase model throughput without sacrificing accuracy. In this paper, we combine two such methods -- structured pruning and data multiplexing -- to compound the speedup gains obtained by either method. Our approach, PruMUX, obtains up to 7.5-29.5X throughput improvement over the BERT-base model with accuracy thresholds from 80% to 74%. We further study various combinations of parameters (such as sparsity and multiplexing factor) in the two techniques to provide a comprehensive analysis of the tradeoff between accuracy and throughput in the resulting models. We then propose Auto-PruMUX, a meta-level model that can predict the high-performance parameters for pruning and multiplexing given a desired accuracy loss budget, providing a practical method to leverage the combination effectively.
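Auto-PruMUX's role can be illustrated as selecting, among candidate (sparsity, multiplexing factor) configurations, the one with the highest throughput whose accuracy loss fits the budget. The table below is invented for illustration; the actual Auto-PruMUX is a learned predictive model that interpolates between measured configurations rather than a lookup:

```python
# Hypothetical (sparsity, multiplex factor N) -> (speedup, accuracy drop %) table;
# the numbers are illustrative, not the paper's measurements.
configs = {
    (0.60, 2): (4.0, 1.5),
    (0.60, 5): (9.5, 3.8),
    (0.80, 2): (7.0, 2.9),
    (0.80, 5): (15.0, 5.6),
    (0.90, 10): (29.5, 8.0),
}

def best_config(budget):
    """Pick the config maximizing speedup within the accuracy-loss budget."""
    feasible = {k: v for k, v in configs.items() if v[1] <= budget}
    return max(feasible, key=lambda k: feasible[k][0]) if feasible else None

print(best_config(4.0))  # -> (0.6, 5)
```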

Proceedings ArticleDOI
14 Jun 2023
TL;DR: In this article, a multi-objective optimization of multiple tail parameters was carried out to solve the problem of wind noise and structural resistance caused by vehicle external structural parameters.
Abstract: To solve the problem of wind noise and structural resistance caused by vehicle external structural parameters, this paper carried out a multi-objective optimization of multiple tail parameters. Based on four selected parameters of the Ahmed model (C-pillar angle, trunk angle, rear box length and boat-tail angle), a Box-Behnken response-surface experimental design was carried out within the selected ranges. The sampled data sets were numerically simulated in STAR-CCM+, and the surface sound pressure level and drag coefficient of the model were obtained. Optimal Latin hypercube sampling data sets were used to verify the accuracy. Finally, taking the Curle SPL and the drag coefficient as optimization objectives, the response surface model was established; the optimal front-window surface SPL was 65.11 dB with an error of 1.36%, and the optimal drag coefficient was 0.1210 with an error of 1.68%.
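The response-surface step above amounts to fitting a second-order polynomial in the design factors to the simulated responses by least squares. A sketch with synthetic data (the coded factors, true coefficients, and noise level are invented, not the Ahmed-body results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Coded design factors in [-1, 1], e.g. C-pillar angle and boat-tail angle.
x1 = rng.uniform(-1, 1, 20)
x2 = rng.uniform(-1, 1, 20)

# Synthetic response (e.g. a sound pressure level) from a known quadratic
# surface plus noise, so the fit can be checked against the truth.
z = 65 + 2*x1 - 3*x2 + 1.5*x1*x2 + 0.5*x1**2 + rng.normal(0, 0.05, 20)

# Second-order model: z ~ b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2.
X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, z, rcond=None)
```

Optimizing the fitted polynomial over the coded factor ranges (e.g. by grid search) then yields the reported optima.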

Journal ArticleDOI
TL;DR: In this paper, the authors used the finite element software APDL to build a three-dimensional finite element model of a long-span transmission tower, carry out modal finite element analysis, and extract the parameters of each mode: mode shape, natural frequency and period.
Abstract: The structure of a long-span transmission tower is a typical nonlinear structure characterized by great height, large line span, heavy overall weight and a flexible tower body. The current design code only analyzes traditional tower types, and analysis of the truss structure of transmission towers is limited. Aiming to improve the design defects of long-span transmission tower structures, this paper uses the finite element software APDL to build a three-dimensional finite element model of a long-span transmission tower, carry out modal finite element analysis, and extract the parameters of each mode: mode shape, natural frequency and period. The results show that the natural vibration periods of the main modes of this type of steel transmission tower are about 0.37–1.37 s. The structure of the long-span transmission tower has certain displacements in six degrees of freedom, with the X-direction displacement being the largest. Some large displacements and local torsion appear in the higher-order modes; combined with the results of the modal analysis, it is therefore suggested to consider structural improvement or external reinforcement of the weak parts of the long-span transmission tower.
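Modal analysis of the tower reduces to the generalized eigenvalue problem K·φ = ω²·M·φ, with natural periods T = 2π/ω. A two-degree-of-freedom sketch (the stiffness and mass values are illustrative, not the transmission-tower model):

```python
import numpy as np

# Illustrative 2-DOF spring-mass system standing in for the FE model.
K = np.array([[3.0e6, -1.0e6],
              [-1.0e6, 1.0e6]])  # stiffness matrix, N/m
M = np.diag([1000.0, 800.0])     # lumped mass matrix, kg

# Solve K phi = omega^2 M phi via the equivalent standard problem M^-1 K.
eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
omega = np.sqrt(np.sort(eigvals.real))  # natural circular frequencies, rad/s
periods = 2 * np.pi / omega             # natural periods, s (descending)
```

For a real tower model, K and M come from the assembled finite elements, and a symmetric generalized eigensolver (e.g. scipy.linalg.eigh(K, M)) is preferable to forming the inverse explicitly.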

Journal ArticleDOI
TL;DR: In this paper, an HPM-JTM hybrid model based on the translation process is proposed to avoid iteration and maintain a wide application range in the simulation of non-Gaussian processes.

Journal ArticleDOI
TL;DR: In this paper , a sequence labeling model developed using a stacked bidirectional long short-term memory network with a conditional random field layer (stacked-BiLSTM-CRF) is proposed to automatically label and intercept vibration signals.
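In a BiLSTM-CRF tagger, the stacked BiLSTM produces per-timestep emission scores and the CRF layer decodes the best label sequence with the Viterbi algorithm. A minimal NumPy sketch of that decoding step, using toy scores and a hypothetical two-label scheme for marking vibration-signal segments:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Best label sequence under a linear-chain CRF.

    emissions:   (T, L) per-timestep label scores (e.g. from a stacked BiLSTM)
    transitions: (L, L) score of moving from label i to label j
    """
    T, L = emissions.shape
    score = emissions[0].copy()
    backptr = np.zeros((T, L), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions + emissions[t]  # (L, L) path scores
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    best = [int(score.argmax())]
    for t in range(T - 1, 0, -1):       # follow back-pointers to recover the path
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]

# Toy 2-label example: 0 = "background", 1 = "vibration event"
emissions = np.array([[2.0, 0.0], [0.0, 2.0], [0.0, 2.0], [2.0, 0.0]])
transitions = np.zeros((2, 2))
print(viterbi_decode(emissions, transitions))  # [0, 1, 1, 0]
```

The transition matrix is what lets the CRF penalize implausible label jumps that a per-timestep softmax would happily emit.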

Journal ArticleDOI
TL;DR: In this article , the authors developed a no-remanufacturing competition model and two cooperative remanufacturing models to characterize the problems and explore who (manufacturer or third party) is the best partner when refurbished products appear.
Abstract: Refurbishing and remanufacturing differ in that the former allows for restored products of lower quality than new ones. Motivated by the competition between refurbished and remanufactured products, our article examines the strategic choices for remanufacturing between the original equipment manufacturer (OEM) and the upstream manufacturer/rival third party. We develop a no-remanufacturing competition model and two cooperative remanufacturing models to characterize the problems. By analyzing the equilibrium market structure of each model, we explore who (the manufacturer or the third party) is the best remanufacturing partner when refurbished products appear. The results show that the OEM is always willing to cooperate with the manufacturer or the third party, but such cooperation will not always be possible. In both cooperative models, the third party tends to prioritize improving the quality of refurbished products. Additionally, the manufacturer-remanufacturing model consistently demonstrates a lower environmental impact. Our study provides important managerial implications that can serve as strategic guidance for the remanufacturing industry.

Journal ArticleDOI
TL;DR: Zhang et al. as mentioned in this paper proposed a weakly supervised method called SA-MIL for pixel-level segmentation in histopathology images, which introduces a self-attention mechanism into the multiple-instance learning (MIL) framework to capture global correlations among all instances.
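For context, attention-based MIL pools a bag of instance embeddings (e.g. image patches) into one bag-level representation by learned per-instance weights; SA-MIL's self-attention builds on this family of methods. The sketch below shows the standard attention-pooling step with illustrative shapes and random parameters, not the paper's SA-MIL implementation:

```python
import numpy as np

def attention_mil_pool(H, w, V):
    """Attention-based MIL pooling over a bag of instance embeddings.

    H: (N, D) instance features; V: (K, D) and w: (K,) attention parameters.
    Returns the pooled bag embedding and per-instance attention weights.
    """
    scores = w @ np.tanh(V @ H.T)    # (N,) unnormalized attention scores
    a = np.exp(scores - scores.max())
    a /= a.sum()                     # softmax over the N instances
    return a @ H, a                  # weighted sum of instance features

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))          # 5 patches from one slide, 8-dim features
z, a = attention_mil_pool(H, rng.normal(size=4), rng.normal(size=(4, 8)))
print(z.shape, a.sum())
```

The attention weights `a` are exactly what weakly supervised segmentation methods repurpose as instance-level (pixel/patch) evidence maps.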

08 Jul 2023
TL;DR: In this article, underground parking garage static scenario simulation is regarded as a procedural content generation (PCG) problem, and the Sarsa algorithm is used to solve it.
Abstract: Autonomous driving technology has five levels, from L0 to L5. Currently, only the L2 level (partial automation) can be achieved, and there is a long way to go before reaching the final level of L5 (full automation). The key to crossing these levels lies in training the autonomous driving model. However, relying solely on real-world road data to train the model is far from enough and consumes a great deal of resources. Although there are already examples of training autonomous driving models in simulators that reproduce real-world scenarios, those scenarios must be constructed entirely by hand: 3D scenes converted directly from road-network formats lack a large amount of detail and cannot be used as training sets. This paper regards underground parking garage static scenario simulation as a procedural content generation (PCG) problem and uses the Sarsa algorithm to generate underground garage structures.
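The Sarsa update at the heart of the approach is the on-policy temporal-difference rule Q(s,a) ← Q(s,a) + α[r + γQ(s',a') − Q(s,a)]. A minimal tabular sketch on a toy corridor environment, which stands in for sequential layout decisions and is not the paper's actual state/action space:

```python
import random

def sarsa(env_step, n_states, n_actions, episodes=500,
          alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Sarsa with an epsilon-greedy behavior policy."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    def policy(s):
        if random.random() < eps:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])
    for _ in range(episodes):
        s, a = 0, policy(0)
        done = False
        while not done:
            s2, r, done = env_step(s, a)
            a2 = policy(s2)              # on-policy: bootstrap from the action taken
            Q[s][a] += alpha * (r + gamma * Q[s2][a2] - Q[s][a])
            s, a = s2, a2
    return Q

# Toy 1-D corridor: states 0..4, action 1 moves right toward a reward at state 4.
def env_step(s, a):
    s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == 4 else 0.0), s2 == 4

random.seed(0)
Q = sarsa(env_step, n_states=5, n_actions=2)
print(Q[0][1] > Q[0][0])  # moving toward the goal should score higher
```

In a PCG setting the reward would instead score generated garage layouts against design constraints, but the update rule is unchanged.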

Journal ArticleDOI
TL;DR: In this paper, a flexible uninterrupted neutral-section passing system based on a four-leg V-clamp multilevel converter (VMC) is proposed, which can better solve a series of problems such as overvoltage, overcurrent and speed loss caused by the interrupted neutral-section passing scheme used in electrified railways, while simultaneously compensating the negative-sequence and reactive currents in the traction power supply system.

Journal ArticleDOI
TL;DR: In this paper , the authors present the first study of privacy risks in retrieval-based LMs, particularly $k$NN-LMs, and explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy.
Abstract: Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts, by incorporating retrieved text from external datastores. While it is well known that parametric models are prone to leaking private data, it remains unclear how the addition of a retrieval datastore impacts model privacy. In this work, we present the first study of privacy risks in retrieval-based LMs, particularly $k$NN-LMs. Our goal is to explore the optimal design and training procedure in domains where privacy is of concern, aiming to strike a balance between utility and privacy. Crucially, we find that $k$NN-LMs are more susceptible to leaking private information from their private datastore than parametric models. We further explore mitigations of privacy risks. When privacy information is targeted and readily detected in the text, we find that a simple sanitization step would completely eliminate the risks, while decoupling query and key encoders achieves an even better utility-privacy trade-off. Otherwise, we consider strategies of mixing public and private data in both datastore and encoder training. While these methods offer modest improvements, they leave considerable room for future work. Together, our findings provide insights for practitioners to better understand and mitigate privacy risks in retrieval-based LMs. Our code is available at: https://github.com/Princeton-SysML/kNNLM_privacy .
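A $k$NN-LM interpolates the parametric model's next-token distribution with a nearest-neighbor distribution built from retrieved datastore entries, which is precisely why verbatim datastore contents can leak into generations. A small sketch of the standard interpolation, with illustrative numbers and a hypothetical three-token vocabulary:

```python
import numpy as np

def knn_lm_next_token(p_lm, knn_dists, knn_tokens, vocab_size, lam=0.25, temp=1.0):
    """Interpolate a parametric LM with a kNN distribution over retrieved neighbors.

    p(w) = lam * p_knn(w) + (1 - lam) * p_lm(w),
    where p_knn puts softmax(-distance/temp) mass on each neighbor's next token.
    """
    weights = np.exp(-np.asarray(knn_dists, dtype=float) / temp)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, tok in zip(weights, knn_tokens):
        p_knn[tok] += w          # neighbors sharing a token pool their mass
    return lam * p_knn + (1 - lam) * np.asarray(p_lm)

p_lm = np.array([0.7, 0.2, 0.1])                    # parametric distribution
p = knn_lm_next_token(p_lm, knn_dists=[0.1, 0.1, 2.0],
                      knn_tokens=[1, 1, 2], vocab_size=3)
print(p, p.sum())
```

Because close neighbors dominate `p_knn`, a query resembling a private datastore entry pulls the output distribution sharply toward that entry's exact continuation, which is the leakage channel the paper studies.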

Journal ArticleDOI
TL;DR: In this paper, four MiFPF genes were obtained from mango (Mangifera indica L.), and tissue expression analysis showed that the MiFPFs were expressed in all mango tissues.

Journal ArticleDOI
TL;DR: The Machine learning At the Tail (MAT) framework as mentioned in this paper integrates an ML module with a traditional cache system based on a heuristic algorithm to reduce cache miss ratios by making better eviction decisions than heuristics.
Abstract: Recent work shows the effectiveness of Machine Learning (ML) in reducing cache miss ratios by making better eviction decisions than heuristics. However, state-of-the-art ML caches require many predictions to make an eviction decision, making them impractical for high-throughput caching systems. This paper introduces Machine learning At the Tail (MAT), a framework for building efficient ML-based caching systems by integrating an ML module with a traditional cache system based on a heuristic algorithm. MAT treats the heuristic algorithm as a “filter” that supplies high-quality samples to train an ML model and likely candidate objects for eviction. We evaluate MAT on 8 production workloads spanning storage, in-memory caching, and CDNs. The simulation experiments show MAT reduces the number of costly ML predictions-per-eviction from 63 to 2, while achieving miss ratios comparable to the state-of-the-art ML cache system. We also compare a MAT prototype system with an LRU-based caching system in the same setting and show that they achieve similar request rates.
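The "filter" idea can be sketched as an LRU cache that only asks the (expensive) model to score the few objects at its tail, instead of scoring every cached object per eviction. The toy below uses `ord` as a stand-in value predictor and is an illustration of the concept, not MAT's actual implementation:

```python
from collections import OrderedDict

class MATLikeCache:
    """LRU as a filter: the model only scores the tail eviction candidates."""

    def __init__(self, capacity, score_fn, candidates=2):
        self.capacity, self.score_fn, self.k = capacity, score_fn, candidates
        self.data = OrderedDict()        # insertion order = recency order
        self.predictions = 0             # count of model invocations

    def get(self, key):
        if key in self.data:
            self.data.move_to_end(key)   # LRU bookkeeping on hit
            return self.data[key]
        return None

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            tail = list(self.data)[:self.k]        # heuristic candidates
            self.predictions += len(tail)          # model scores only these
            victim = min(tail, key=self.score_fn)  # evict least valuable
            del self.data[victim]
        self.data[key] = value

# ord() stands in for a learned object-value predictor (hypothetical).
cache = MATLikeCache(capacity=3, score_fn=ord, candidates=2)
for k in "abcdab":
    cache.put(k, k.upper())
print(len(cache.data), cache.predictions)
```

With `candidates=2`, each eviction costs two scorer calls regardless of cache size, mirroring the paper's reduction from one-prediction-per-object to a small constant per eviction.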