scispace - formally typeset
Author

Minhyuk Park

Bio: Minhyuk Park is an academic researcher from the University of Illinois at Urbana–Champaign. The author has contributed to research in the topics of Phylogenetic tree & Disjoint sets. The author has an h-index of 1, and has co-authored 2 publications receiving 2 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors evaluate the feasibility of using disjoint tree mergers to improve the scalability of maximum likelihood (ML) gene tree estimation to large numbers of input sequences and show that good DTM pipeline design can provide advantages over these ML codes on large datasets.
Abstract: The estimation of phylogenetic trees for individual genes or multi-locus datasets is a basic part of considerable biological research. In order to enable large trees to be computed, Disjoint Tree Mergers (DTMs) have been developed; these methods operate by dividing the input sequence dataset into disjoint sets, constructing trees on each subset, and then combining the subset trees (using auxiliary information) into a tree on the full dataset. DTMs have been used to advantage for multi-locus species tree estimation, enabling highly accurate species trees at reduced computational effort, compared to leading species tree estimation methods. Here, we evaluate the feasibility of using DTMs to improve the scalability of maximum likelihood (ML) gene tree estimation to large numbers of input sequences. Our study shows distinct differences between the three selected ML codes—RAxML-NG, IQ-TREE 2, and FastTree 2—and shows that good DTM pipeline design can provide advantages over these ML codes on large datasets.
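The DTM strategy described above (divide the sequences into disjoint subsets, build a tree on each subset, then merge) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: `build_subset_tree` and `merge_trees` are hypothetical stand-ins for a real ML code (e.g. RAxML-NG) and a real merger method, and the naive fixed-size partition stands in for the decomposition a real DTM pipeline would compute from a guide tree.

```python
def partition_disjoint(taxa, max_subset_size):
    """Split the taxon list into disjoint subsets of bounded size."""
    return [taxa[i:i + max_subset_size]
            for i in range(0, len(taxa), max_subset_size)]

def dtm_pipeline(taxa, max_subset_size, build_subset_tree, merge_trees):
    """Skeleton of a Disjoint Tree Merger pipeline:
    divide -> build subset trees -> merge into one tree on all taxa."""
    subsets = partition_disjoint(taxa, max_subset_size)
    subset_trees = [build_subset_tree(s) for s in subsets]
    return merge_trees(subset_trees)
```

The key property is that the subsets are pairwise disjoint and jointly cover the input, so each sequence appears in exactly one subset tree before merging.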

7 citations

Posted Content
TL;DR: In this article, a modular pipeline is developed to find publication communities in a citation network consisting of over 14 million publications relevant to the field of extracellular vesicles, using a quantitative and qualitative approach.
Abstract: Clustering and community detection in networks are of broad interest and have been the subject of extensive research that spans several fields. We are interested in the relatively narrow question of detecting communities of scientific publications that are linked by citations. These publication communities can be used to identify scientists with shared interests who form communities of researchers. Building on the well-known k-core algorithm, we have developed a modular pipeline to find publication communities. We compare our approach to communities discovered by the widely used Leiden algorithm for community finding. Using a quantitative and qualitative approach, we evaluate community finding results on a citation network consisting of over 14 million publications relevant to the field of extracellular vesicles.
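The k-core algorithm the pipeline builds on has a simple definition: the k-core of a graph is the maximal subgraph in which every node has degree at least k, obtained by repeatedly deleting nodes of degree below k. A minimal pure-Python sketch (not the authors' modular pipeline, which adds further stages) for an undirected citation graph given as an adjacency map:

```python
def k_core(adj, k):
    """Return the node set of the k-core of an undirected graph.

    adj maps each node to the set of its neighbors. Nodes with fewer
    than k neighbors are removed repeatedly until none remain.
    """
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:
                for v in adj.pop(u):
                    adj[v].discard(u)
                changed = True
    return set(adj)
```

On a 14-million-node network a production implementation would use a bucketed degree queue rather than repeated sweeps, but the fixed point computed is the same.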

Cited by
Journal ArticleDOI
TL;DR: MAGUS, as discussed by the authors, is a state-of-the-art method for aligning large numbers of sequences. This paper presents enhancements that allow MAGUS to align vastly larger datasets with greater speed, addressing the limitation that few existing methods can handle large datasets while maintaining alignment accuracy.
Abstract: Multiple sequence alignment tools struggle to keep pace with rapidly growing sequence data, as few methods can handle large datasets while maintaining alignment accuracy. We recently introduced MAGUS, a new state-of-the-art method for aligning large numbers of sequences. In this paper, we present a comprehensive set of enhancements that allow MAGUS to align vastly larger datasets with greater speed. We compare MAGUS to other leading alignment methods on datasets of up to one million sequences. Our results demonstrate the advantages of MAGUS over other alignment software in both accuracy and speed. MAGUS is freely available in open-source form at https://github.com/vlasmirnov/MAGUS.

10 citations

Journal ArticleDOI
TL;DR: This new method, WeIghTed Consensus Hmm alignment (WITCH), improves on UPP in three important ways: first, it uses a statistically principled technique to weight and rank the HMMs; second, it uses k>1 HMMs from the ensemble rather than a single HMM; and third, it combines the alignments for each of the selected HMMs using a consensus algorithm that takes the weights into account.
Abstract: Accurate multiple sequence alignment is challenging on many data sets, including those that are large, evolve under high rates of evolution, or have sequence length heterogeneity. While substantial progress has been made over the last decade in addressing the first two challenges, sequence length heterogeneity remains a significant issue for many data sets. Sequence length heterogeneity occurs for biological and technological reasons, including large insertions or deletions (indels) that occurred in the evolutionary history relating the sequences, or the inclusion of sequences that are not fully assembled. Ultra-large alignments using Phylogeny-Aware Profiles (UPP) (Nguyen et al. 2015) is one of the most accurate approaches for aligning data sets that exhibit sequence length heterogeneity: it constructs an alignment on the subset of sequences it considers "full-length," represents this "backbone alignment" using an ensemble of hidden Markov models (HMMs), and then adds each remaining sequence into the backbone alignment based on an HMM selected for that sequence from the ensemble. Our new method, WeIghTed Consensus Hmm alignment (WITCH), improves on UPP in three important ways: first, it uses a statistically principled technique to weight and rank the HMMs; second, it uses k>1 HMMs from the ensemble rather than a single HMM; and third, it combines the alignments for each of the selected HMMs using a consensus algorithm that takes the weights into account. We show that this approach provides improved alignment accuracy compared with UPP and other leading alignment methods, as well as improved accuracy for maximum likelihood trees based on these alignments.
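The first two WITCH improvements above (weight the HMMs in the ensemble, then use the top k>1 of them per query) can be sketched as follows. This is an illustrative stand-in only: WITCH derives its weights from the probability that each HMM generated the query sequence, whereas this sketch simply softmax-normalizes given per-HMM scores and keeps the k best.

```python
import math

def rank_hmms(scores, k):
    """Given per-HMM scores for one query sequence, normalize them into
    positive weights (softmax, shifted by the max for stability) and
    return the k highest-weighted HMMs with their weights."""
    m = max(scores.values())
    exp = {h: math.exp(s - m) for h, s in scores.items()}
    z = sum(exp.values())
    weights = {h: e / z for h, e in exp.items()}
    top = sorted(weights, key=weights.get, reverse=True)[:k]
    return {h: weights[h] for h in top}
```

The selected HMMs' per-query alignments would then be combined by a weighted consensus, with each HMM's vote scaled by its weight.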

8 citations

Posted ContentDOI
25 Oct 2022 - bioRxiv
Abstract: Phylogenetic placement is the problem of placing “query” sequences into an existing tree (called a “backbone tree”), and is useful in both microbiome analysis and to update large evolutionary trees. The most accurate phylogenetic placement method to date is the maximum likelihood-based method pplacer, which uses RAxML to estimate numeric parameters on the backbone tree and then adds the given query sequence to the edge that maximizes the probability that the resulting tree generates the query sequence. Unfortunately, pplacer fails to return valid outputs on many moderately large datasets, and so is limited to backbone trees with at most ∼10,000 leaves. In TCBB 2022, Wedell et al. introduced SCAMPP, a technique to enable pplacer to run on larger backbone trees. SCAMPP operates by finding a small “placement subtree” specific to each query sequence, within which the query sequence is placed using pplacer. That approach matched the scalability and accuracy of APPLES-2, the previous most scalable method. In this study, we explore a different aspect of pplacer’s strategy: the technique used to estimate numeric parameters on the backbone tree. We confirm anecdotal evidence that using FastTree instead of RAxML to estimate numeric parameters on the backbone tree enables pplacer to scale to much larger backbone trees, almost (but not quite) matching the scalability of APPLES-2 and pplacer-SCAMPP. We then evaluate the combination of these two techniques – SCAMPP and the use of FastTree. We show that this combined approach, pplacer-SCAMPP-FastTree, has the same scalability as APPLES-2, improves on the scalability of pplacer-FastTree, and achieves better accuracy than the comparably scalable methods. Availability: https://github.com/gillichu/PLUSplacer-taxtastic.
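The core of the SCAMPP idea above (extract a small placement subtree around the part of the backbone tree closest to the query, and run pplacer only there) can be sketched as a breadth-first walk that collects the nearest leaves. This is a simplified illustration under stated assumptions: the anchor leaf is taken as given (SCAMPP chooses it by sequence similarity to the query), distance is measured in edge count rather than branch length, and the tree is an unrooted adjacency map in which leaves are the degree-1 nodes.

```python
from collections import deque

def placement_leaves(adj, anchor_leaf, max_leaves):
    """Collect up to max_leaves leaves closest (by edge count) to
    anchor_leaf; these leaves induce the placement subtree."""
    seen = {anchor_leaf}
    queue = deque([anchor_leaf])
    leaves = []
    while queue and len(leaves) < max_leaves:
        node = queue.popleft()
        if len(adj[node]) == 1:  # degree-1 node = leaf of the unrooted tree
            leaves.append(node)
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return leaves
```

The query is then placed with pplacer inside the subtree induced by these leaves, keeping pplacer's input far below its ∼10,000-leaf limit regardless of the full backbone size.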

2 citations

Journal ArticleDOI
TL;DR: As discussed in this paper, a number of methods have been developed to enable highly accurate phylogeny estimation on very large datasets, including divide-and-conquer techniques for multiple sequence alignment and/or tree estimation, methods that estimate species trees from multi-locus datasets while addressing heterogeneity due to biological processes, and methods to add sequences into large gene trees or species trees.
Abstract: With the increased availability of sequence data and even of fully sequenced and assembled genomes, phylogeny estimation of very large trees (even of hundreds of thousands of sequences) is now a goal for some biologists. Yet, the construction of these phylogenies is a complex pipeline presenting analytical and computational challenges, especially when the number of sequences is very large. In the past few years, new methods have been developed that aim to enable highly accurate phylogeny estimations on these large datasets, including divide-and-conquer techniques for multiple sequence alignment and/or tree estimation, methods that can estimate species trees from multi-locus datasets while addressing heterogeneity due to biological processes (e.g. incomplete lineage sorting and gene duplication and loss), and methods to add sequences into large gene trees or species trees. Here we present some of these recent advances and discuss opportunities for future improvements. This article is part of a discussion meeting issue 'Genomic population structures of microbial pathogens'.

2 citations

Journal ArticleDOI
TL;DR: As discussed in this article, using FastTree instead of RAxML to estimate numeric parameters on the backbone tree enables pplacer to scale to much larger backbone trees, and combining this with SCAMPP yields pplacer-SCAMPP-FastTree, which matches the scalability of APPLES-2 with better accuracy.
Abstract: Phylogenetic placement is the problem of placing ‘query’ sequences into an existing tree (called a ‘backbone tree’). One of the most accurate phylogenetic placement methods to date is the maximum likelihood-based method pplacer, using RAxML to estimate numeric parameters on the backbone tree and then adding the given query sequence to the edge that maximizes the probability that the resulting tree generates the query sequence. Unfortunately, this way of running pplacer fails to return valid outputs on many moderately large backbone trees and so is limited to backbone trees with at most ∼10 000 leaves. SCAMPP is a technique to enable pplacer to run on larger backbone trees, which operates by finding a small ‘placement subtree’ specific to each query sequence, within which the query sequence is placed using pplacer. That approach matched the scalability and accuracy of APPLES-2, the previous most scalable method. Here, we explore a different aspect of pplacer’s strategy: the technique used to estimate numeric parameters on the backbone tree. We confirm anecdotal evidence that using FastTree instead of RAxML to estimate numeric parameters on the backbone tree enables pplacer to scale to much larger backbone trees, almost (but not quite) matching the scalability of APPLES-2 and pplacer-SCAMPP. We then evaluate the combination of these two techniques, SCAMPP and the use of FastTree. We show that this combined approach, pplacer-SCAMPP-FastTree, has the same scalability as APPLES-2, improves on the scalability of pplacer-FastTree and achieves better accuracy than the comparably scalable methods. Availability and implementation: https://github.com/gillichu/PLUSplacer-taxtastic. Supplementary information: Supplementary data are available at Bioinformatics Advances online.

1 citation