Showing papers on "Domain knowledge published in 2022"


Journal ArticleDOI
Yuansheng Liu
TL;DR: In this article, a review of knowledge graph-based works that implement drug repurposing and adverse drug reaction prediction for drug discovery is presented, and several representative embedding models are introduced to provide a comprehensive understanding of knowledge representation learning.
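
A minimal sketch of TransE, one representative knowledge-graph embedding model of the kind such reviews typically cover: entities and relations become vectors so that a true triple (head, relation, tail) satisfies head + relation ≈ tail. The drug-related triples, dimensions and simplified update rule below are illustrative assumptions, not taken from the reviewed paper.

```python
# Sketch of TransE scoring and a simplified margin-based update.
# Entities, relation and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
E = {e: rng.normal(size=dim) for e in ["aspirin", "inflammation", "ibuprofen"]}
R = {r: rng.normal(size=dim) for r in ["treats"]}

def distance(h, r, t):
    """Smaller distance = more plausible triple under TransE (h + r close to t)."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def train_step(pos, neg, lr=0.05, margin=1.0):
    """Margin loss on one (positive, corrupted) pair; simplified single-pair update."""
    loss = max(0.0, margin + distance(*pos) - distance(*neg))
    if loss > 0:
        h, r, t = pos
        residual = E[h] + R[r] - E[t]
        E[h] -= lr * residual      # move head and tail so that h + r gets closer to t
        E[t] += lr * residual
    return loss

pos = ("aspirin", "treats", "inflammation")
neg = ("aspirin", "treats", "ibuprofen")   # corrupted tail
for _ in range(50):
    train_step(pos, neg)
print(distance(*pos) < distance(*neg))     # the observed triple now scores better
```

In a drug-repurposing setting, unseen triples that score well under such a model become candidate new indications.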

61 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarize knowledge graph-based works that implement drug repurposing and adverse drug reaction prediction for drug discovery, and introduce several representative embedding models to provide a comprehensive understanding of knowledge representation learning.

61 citations


Journal ArticleDOI
TL;DR: An automatic construction framework for the process knowledge base in the field of machining based on knowledge graph (KG) is introduced and a hybrid algorithm based on an improved edit distance and attribute weighting is built to overcome the redundancy in the knowledge fusion stage.
Abstract: The process knowledge base is the key module in intelligent process design; it determines the degree of intelligence of the design system and affects the quality of product design. However, traditional process knowledge base construction is non-automated, time consuming and requires much manual work, which is not sufficient to meet the demands of modern manufacturing. Moreover, the knowledge base often adopts a single knowledge representation, which may leave the meaning of some knowledge ambiguous and affect the quality of the process knowledge base. To overcome these problems, an automatic construction framework for the process knowledge base in the field of machining, based on a knowledge graph (KG), is introduced. First, the knowledge is classified and annotated based on the function-behavior-states (FBS) design method. Second, a knowledge extraction framework based on BERT-BiLSTM-CRF is established to perform automatic knowledge extraction from process text. Third, a knowledge representation method based on fuzzy comprehensive evaluation is established, forming three types of knowledge representation with the KG as the main form and production rules and two-dimensional data linked lists as supplements. In addition, to overcome redundancy in the knowledge fusion stage, a hybrid algorithm based on an improved edit distance and attribute weighting is built. Finally, a prototype system is developed, and a quality analysis is carried out. Compared with the F values of BiLSTM-CRF and CNN-BiLSTM-CRF, that of the proposed extraction method in the machining domain is increased by 7.35% and 3.87%, respectively.
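
As a rough illustration of the knowledge fusion step, the sketch below combines a normalized edit-distance similarity on entity names with weighted agreement on shared attributes to decide whether two extracted process entities should be merged. The attribute names, weights and threshold are hypothetical, not the paper's actual algorithm.

```python
# Sketch of redundant-entity detection during knowledge fusion: normalized
# edit-distance similarity on names blended with weighted attribute agreement.
# Entity fields, weights and the merge threshold are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[m][n]

def string_similarity(a: str, b: str) -> float:
    """Edit distance normalized to a similarity in [0, 1]."""
    if not a and not b:
        return 1.0
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def entity_similarity(e1: dict, e2: dict, attr_weights: dict, name_weight: float = 0.5) -> float:
    """Blend name similarity with weighted agreement on shared attributes."""
    total_sim = name_weight * string_similarity(e1["name"], e2["name"])
    total_weight = name_weight
    for attr, w in attr_weights.items():
        if attr in e1 and attr in e2:
            total_sim += w * string_similarity(str(e1[attr]), str(e2[attr]))
            total_weight += w
    return total_sim / total_weight

# Hypothetical machining entities; a threshold decides whether to merge them.
milling_a = {"name": "end milling", "tool": "end mill", "material": "45 steel"}
milling_b = {"name": "end-milling", "tool": "end mill", "material": "steel 45"}
weights = {"tool": 0.3, "material": 0.2}
if entity_similarity(milling_a, milling_b, weights) > 0.75:
    print("merge candidates")
```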

32 citations


Journal ArticleDOI
TL;DR: In this paper, a knowledge-based system for predictive maintenance in Industry 4.0 (KSPMI) is developed based on a novel hybrid approach that leverages both statistical and symbolic AI technologies.
Abstract: In the context of Industry 4.0, smart factories use advanced sensing and data analytics technologies to understand and monitor manufacturing processes. To enhance production efficiency and reliability, statistical Artificial Intelligence (AI) technologies such as machine learning and data mining are used to detect and predict potential anomalies within manufacturing processes. However, due to the heterogeneous nature of industrial data, the knowledge extracted from it is sometimes presented in a complex structure. This gives rise to the semantic gap issue, i.e., a lack of interoperability among different manufacturing systems. Furthermore, as Cyber-Physical Systems (CPS) are becoming more knowledge-intensive, uniform knowledge representation of physical resources and real-time reasoning capabilities for analytic tasks are needed to automate the decision-making processes for these systems. These requirements highlight the potential of using symbolic AI for predictive maintenance. To automate and facilitate predictive analytics in Industry 4.0, in this paper we present a novel Knowledge-based System for Predictive Maintenance in Industry 4.0 (KSPMI). KSPMI is developed based on a novel hybrid approach that leverages both statistical and symbolic AI technologies. The hybrid approach uses statistical AI technologies such as machine learning and chronicle mining (a special type of sequential pattern mining) to extract machine degradation models from industrial data. Symbolic AI technologies, especially domain ontologies and logic rules, then use the extracted chronicle patterns to query and reason over system input data with rich domain and contextual knowledge. This hybrid approach uses Semantic Web Rule Language (SWRL) rules generated from chronicle patterns together with domain ontologies to perform ontology reasoning, which enables the automatic detection of machinery anomalies and the prediction of future events' occurrence. KSPMI is evaluated and tested on both real-world and synthetic data sets.
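
A minimal sketch of the chronicle-to-rule idea: a mined chronicle pattern (an ordered pair of events with a time-lag constraint) is rendered as a SWRL-style rule string that an ontology reasoner could evaluate. The event names, properties and rule vocabulary are illustrative assumptions, not the ontology used by KSPMI.

```python
# Sketch: rendering a mined chronicle pattern (two ordered events plus a
# time-lag constraint) as a SWRL-style rule string for an ontology reasoner.
# Event names, properties and the predicted event are illustrative.

chronicle = {
    "events": ["VibrationSpike", "TemperatureRise"],
    "max_gap_seconds": 120,            # mined constraint between the two events
    "predicted_event": "BearingFailure",
}

def chronicle_to_swrl(c: dict) -> str:
    e1, e2 = c["events"]
    return (
        f"Machine(?m) ^ hasEvent(?m, ?a) ^ {e1}(?a) ^ occursAt(?a, ?t1) ^ "
        f"hasEvent(?m, ?b) ^ {e2}(?b) ^ occursAt(?b, ?t2) ^ "
        f"swrlb:subtract(?gap, ?t2, ?t1) ^ swrlb:lessThan(?gap, {c['max_gap_seconds']}) "
        f"-> predictedEvent(?m, {c['predicted_event']})"
    )

print(chronicle_to_swrl(chronicle))
```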

26 citations


Journal ArticleDOI
TL;DR: In this article, an automatic construction framework for the process knowledge base in the field of machining based on knowledge graph (KG) is introduced, and a knowledge extraction framework based on BERT-BiLSTM-CRF is established to perform the automatic knowledge extraction of process text.
Abstract: • A framework for automatically constructing a knowledge base is developed. • The extraction effect of this framework is better than that of other frameworks. • An evaluation algorithm is proposed to judge the optimal expression of knowledge. • Semantic and attribute weighting factors among knowledge entities are considered. The process knowledge base is the key module in intelligent process design; it determines the degree of intelligence of the design system and affects the quality of product design. However, traditional process knowledge base construction is non-automated, time consuming and requires much manual work, which is not sufficient to meet the demands of modern manufacturing. Moreover, the knowledge base often adopts a single knowledge representation, which may leave the meaning of some knowledge ambiguous and affect the quality of the process knowledge base. To overcome these problems, an automatic construction framework for the process knowledge base in the field of machining, based on a knowledge graph (KG), is introduced. First, the knowledge is classified and annotated based on the function-behavior-states (FBS) design method. Second, a knowledge extraction framework based on BERT-BiLSTM-CRF is established to perform automatic knowledge extraction from process text. Third, a knowledge representation method based on fuzzy comprehensive evaluation is established, forming three types of knowledge representation with the KG as the main form and production rules and two-dimensional data linked lists as supplements. In addition, to overcome redundancy in the knowledge fusion stage, a hybrid algorithm based on an improved edit distance and attribute weighting is built. Finally, a prototype system is developed, and a quality analysis is carried out. Compared with the F values of BiLSTM-CRF and CNN-BiLSTM-CRF, that of the proposed extraction method in the machining domain is increased by 7.35% and 3.87%, respectively.

26 citations


Journal ArticleDOI
TL;DR: X-NeSyL as discussed by the authors proposes to fuse DL representations with expert domain knowledge during the learning process so that they serve as a sound basis for explainability, and demonstrates that this approach can improve explainability and performance at the same time.

26 citations


Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors improved the effectiveness of supervised machine learning-based SBR prediction with the help of software security domain knowledge by splitting the words in the summary and description fields of the SBRs and using customized relationships to label entities and build a rule-based entity recognition corpus.
Abstract: To eliminate security attack risks of software products, security bug report (SBR) prediction has been increasingly investigated. However, there is still much room for improving the performance of automatic SBR prediction. This work is inspired by two recent studies by Peters et al. and Wu et al., which focused on SBR prediction and were published in the top-tier journal TSE (IEEE Transactions on Software Engineering). The goal of this work is to improve the effectiveness of supervised machine learning-based SBR prediction with the help of software security domain knowledge. First, we split the words in the summary and description fields of the SBRs. Then, we use customized relationships to label entities and build a rule-based entity recognition corpus. After that, we establish relationships between entities and construct knowledge graphs. The information from CWE (Common Weakness Enumeration) is used to expand our corpus, and security-related words and phrases are integrated. Finally, we predict SBRs from the target project by calculating the cosine similarity between our integrated corpus and the target bug reports. Our experimental evaluation on 5 open-source SBR datasets shows that our domain knowledge-guided approach improves the effectiveness of SBR prediction by 52% in terms of F1-score on average.
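
A minimal sketch of the final prediction step described above: target bug reports are scored by their cosine similarity to a security-domain corpus and flagged when the best match exceeds a threshold. The corpus snippets, TF-IDF representation and threshold are illustrative, not the paper's actual data or settings.

```python
# Sketch: flag security bug reports by cosine similarity to a security corpus.
# Corpus text and threshold are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

security_corpus = [
    "buffer overflow allows remote attackers to execute arbitrary code",
    "sql injection via unsanitized user input in login form",
    "cross-site scripting xss in comment field",
]
bug_reports = [
    "crash when parsing crafted file, possible heap overflow and code execution",
    "button alignment broken on settings page",
]

vectorizer = TfidfVectorizer()
corpus_vecs = vectorizer.fit_transform(security_corpus)
report_vecs = vectorizer.transform(bug_reports)

# A report is predicted to be a security bug report (SBR) if its best match
# in the security corpus exceeds a similarity threshold.
scores = cosine_similarity(report_vecs, corpus_vecs).max(axis=1)
predictions = scores > 0.1
for report, score, is_sbr in zip(bug_reports, scores, predictions):
    print(f"{score:.2f}  SBR={is_sbr}  {report[:50]}")
```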

25 citations


DOI
01 Jan 2022
TL;DR: In this paper, the authors provide a detailed overview of state-of-the-art techniques for applying transfer learning in demand response, showing improvements that can exceed 30% in a variety of tasks.
Abstract: A number of decarbonization scenarios for the energy sector are built on simultaneous electrification of energy demand, and decarbonization of electricity generation through renewable energy sources. However, increased electricity demand due to heat and transport electrification and the variability associated with renewables have the potential to disrupt stable electric grid operation. To address these issues using demand response, researchers and practitioners have increasingly turned towards automated decision support tools which utilize machine learning and optimization algorithms. However, when applied naively, these algorithms suffer from high sample complexity, which means that it is often impractical to fit sufficiently complex models because of a lack of observed data. Recent advances have shown that techniques such as transfer learning can address this problem and improve their performance considerably - both in supervised and reinforcement learning contexts. Such formulations allow models to leverage existing domain knowledge and human expertise in addition to sparse observational data. More formally, transfer learning embodies all techniques where one aims to increase (learning) performance in a target domain or task, by using knowledge gained in a source domain or task. This paper provides a detailed overview of state-of-the-art techniques on applying transfer learning in demand response, showing improvements that can exceed 30% in a variety of tasks. We observe that most research to date has focused on transfer learning in the context of electricity demand prediction, although reinforcement learning based controllers have also seen increasing attention. However, a number of limitations remain in these studies, including a lack of benchmarks, systematic performance improvement tracking, and consensus on techniques that can help avoid negative transfer.
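
A minimal sketch of the parameter-transfer idea in a demand-prediction setting: a small forecaster is pre-trained on plentiful source-building load data, then only its output layer is re-fitted on a handful of target-building observations. The synthetic data, architecture and training settings are illustrative assumptions, not drawn from any specific surveyed work.

```python
# Sketch of transfer learning for load forecasting: pre-train on a data-rich
# source building, fine-tune the head on a data-poor target building.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_load_data(n, scale):
    """Toy hourly-load series: next-hour demand from the previous 24 hours."""
    x = torch.rand(n, 24) * scale
    y = x.mean(dim=1, keepdim=True) * 1.1      # a simple stand-in relationship
    return x, y

model = nn.Sequential(nn.Linear(24, 32), nn.ReLU(), nn.Linear(32, 1))

def fit(model, x, y, params, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# Source domain: lots of data from a well-instrumented building.
x_src, y_src = make_load_data(2000, scale=1.0)
fit(model, x_src, y_src, model.parameters())

# Target domain: only a few observations; freeze the feature layer and
# re-fit the output layer, reusing the knowledge learned on the source.
x_tgt, y_tgt = make_load_data(30, scale=1.3)
for p in model[0].parameters():
    p.requires_grad = False
fit(model, x_tgt, y_tgt, model[2].parameters(), epochs=100)
print("target MSE:", nn.functional.mse_loss(model(x_tgt), y_tgt).item())
```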

25 citations


Proceedings ArticleDOI
27 Jan 2022
TL;DR: This study proposes ontology-enhanced prompt-tuning (OntoPrompt) and develops an ontology transformation based on an external knowledge graph to address the missing-knowledge issue, completing structured knowledge and converting it to text.
Abstract: Few-shot Learning (FSL) aims to make predictions based on a limited number of samples. Structured data such as knowledge graphs and ontology libraries has been leveraged to benefit the few-shot setting in various tasks. However, the priors adopted by existing methods suffer from the challenges of missing knowledge, knowledge noise, and knowledge heterogeneity, which hinder performance in few-shot learning. In this study, we explore knowledge injection for FSL with pre-trained language models and propose ontology-enhanced prompt-tuning (OntoPrompt). Specifically, we develop an ontology transformation based on an external knowledge graph to address the missing-knowledge issue, which completes structured knowledge and converts it to text. We further introduce span-sensitive knowledge injection via a visible matrix to select informative knowledge and handle the knowledge noise issue. To bridge the gap between knowledge and text, we propose a collective training algorithm to optimize representations jointly. We evaluate the proposed OntoPrompt on three tasks, including relation extraction, event extraction, and knowledge graph completion, with eight datasets. Experimental results demonstrate that our approach obtains better few-shot performance than baselines.
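
A minimal sketch of the ontology-transformation idea: ontology entries for the entity types in a few-shot example are verbalized and prepended to the prompt that a pre-trained language model would complete. The template and ontology text are illustrative assumptions, not OntoPrompt's actual implementation.

```python
# Sketch: verbalize ontology knowledge and inject it into a relation-extraction
# prompt with a [MASK] slot for the relation label. Template text is illustrative.

ontology_text = {
    "Person": "a person is a human being, often an employee, founder or artist",
    "Organization": "an organization is a company, institution or agency",
}

def build_prompt(sentence, head, head_type, tail, tail_type):
    # Ontology transformation: turn structured type knowledge into auxiliary text.
    knowledge = f"{ontology_text[head_type]}; {ontology_text[tail_type]}"
    # Prompt template with a [MASK] slot for the relation label.
    return (f"Knowledge: {knowledge}\n"
            f"Sentence: {sentence}\n"
            f"The relation between {head} and {tail} is [MASK].")

print(build_prompt("Steve Jobs founded Apple in 1976.",
                   "Steve Jobs", "Person", "Apple", "Organization"))
```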

Journal ArticleDOI
TL;DR: In this paper , the authors present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks, by means of changes to: the input, the loss-function, and the architecture of deep networks.
Abstract: We present a survey of ways in which existing scientific knowledge is included when constructing models with neural networks. The inclusion of domain knowledge is of special interest not just for constructing scientific assistants, but also for many other areas that involve understanding data using human-machine collaboration. In many such instances, machine-based model construction may benefit significantly from being provided with human knowledge of the domain encoded in a sufficiently precise form. This paper examines the inclusion of domain knowledge by means of changes to: the input, the loss function, and the architecture of deep networks. The categorisation is for ease of exposition: in practice we expect a combination of such changes to be employed. In each category, we describe techniques that have been shown to yield significant changes in the performance of deep neural networks.
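
A minimal sketch of one of the three routes the survey describes, encoding domain knowledge in the loss function: a known constraint (here, that the predicted quantity can never be negative) is added as a penalty term. The constraint, model and weighting are illustrative, not taken from any specific work in the survey.

```python
# Sketch: a data-fit loss augmented with a domain-knowledge penalty term.
# The non-negativity constraint and lambda weight are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))

def knowledge_informed_loss(pred, target, lam=1.0):
    data_loss = nn.functional.mse_loss(pred, target)
    # Domain-knowledge penalty: punish physically impossible negative predictions.
    constraint_loss = torch.relu(-pred).mean()
    return data_loss + lam * constraint_loss

x = torch.rand(64, 4)
y = x.sum(dim=1, keepdim=True)          # toy non-negative target
loss = knowledge_informed_loss(model(x), y)
loss.backward()
print("combined loss:", loss.item())
```

The same pattern extends to richer constraints, e.g. penalizing violations of a known physical law rather than a simple sign constraint.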

Journal ArticleDOI
TL;DR: This paper proposes an approach called informed AI (IAI) that integrates human domain knowledge into AI to develop effective and reliable data labeling and model explainability processes for complex, ill-structured problems that lack transparency and have unclear goals.

Journal ArticleDOI
TL;DR: In this paper, an unsupervised feature learning-based method is proposed to automatically construct a health indicator; it mainly consists of three steps, the first of which builds a multiscale convolutional autoencoder network whose hyperparameters are optimized through a genetic algorithm.

Book ChapterDOI
15 Jan 2022
TL;DR: This article proposes a simple model, Kformer, which takes advantage of both the knowledge stored in pre-trained language models and external knowledge via knowledge injection in Transformer FFN layers, and which yields better performance than other knowledge injection techniques such as concatenation or attention-based injection.
Abstract: Recent years have witnessed a diverse set of knowledge injection models for pre-trained language models (PTMs); however, most previous studies neglect the PTMs' own ability, i.e., the large amount of implicit knowledge stored in their parameters. A recent study [2] has observed knowledge neurons in the Feed Forward Network (FFN), which are responsible for expressing factual knowledge. In this work, we propose a simple model, Kformer, which takes advantage of the knowledge stored in PTMs and of external knowledge via knowledge injection in Transformer FFN layers. Empirical results on two knowledge-intensive tasks, commonsense reasoning (i.e., SocialIQA) and medical question answering (i.e., MedQA-USMLE), demonstrate that Kformer can yield better performance than other knowledge injection techniques such as concatenation or attention-based injection. We think the proposed simple model and empirical findings may help the community develop more powerful knowledge injection methods (code available at https://github.com/zjunlp/Kformer).
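
A minimal sketch of injecting external knowledge into a Transformer FFN layer, treating the FFN as a key-value memory and fusing projected knowledge vectors into its output. Dimensions, projections and the fusion rule are illustrative assumptions; the authors' actual implementation is in the linked repository.

```python
# Sketch: an FFN block that also attends over projected external knowledge
# embeddings and adds the result to the standard FFN output.
import torch
import torch.nn as nn

class KnowledgeFFN(nn.Module):
    def __init__(self, d_model=256, d_ff=1024, d_know=128):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)      # standard FFN "keys"
        self.w_out = nn.Linear(d_ff, d_model)     # standard FFN "values"
        # Project external knowledge embeddings into key and value spaces.
        self.know_key = nn.Linear(d_know, d_model)
        self.know_val = nn.Linear(d_know, d_model)

    def forward(self, hidden, knowledge):
        # hidden: (batch, seq, d_model); knowledge: (batch, n_know, d_know)
        inner = torch.relu(self.w_in(hidden))               # (b, s, d_ff)
        ffn_out = self.w_out(inner)                         # (b, s, d_model)
        k = self.know_key(knowledge)                        # (b, n, d_model)
        v = self.know_val(knowledge)                        # (b, n, d_model)
        scores = torch.softmax(hidden @ k.transpose(1, 2), dim=-1)  # (b, s, n)
        return ffn_out + scores @ v                         # fuse knowledge

ffn = KnowledgeFFN()
out = ffn(torch.rand(2, 10, 256), torch.rand(2, 8, 128))
print(out.shape)  # torch.Size([2, 10, 256])
```

Treating the FFN as a key-value memory is what motivates injecting knowledge there rather than in the attention blocks.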

Journal ArticleDOI
TL;DR: SciMED as discussed by the authors combines a wrapper selection method based on a genetic algorithm with automatic machine learning and two levels of symbolic regression (SR) methods to discover meaningful symbolic expressions from data.
Abstract: Discovering a meaningful symbolic expression that explains experimental data is a fundamental challenge in many scientific fields. We present a novel, open-source computational framework called Scientist-Machine Equation Detector (SciMED), which integrates scientific discipline wisdom in a scientist-in-the-loop approach with state-of-the-art symbolic regression (SR) methods. SciMED combines a wrapper selection method that is based on a genetic algorithm with automatic machine learning and two levels of SR methods. We test SciMED on five configurations of a settling sphere, with and without aerodynamic non-linear drag force, and with excessive noise in the measurements. We show that SciMED is sufficiently robust to discover the correct, physically meaningful symbolic expressions from the data, and demonstrate how the integration of domain knowledge enhances its performance. Our results indicate better performance on these tasks than state-of-the-art SR software packages, even in cases where no knowledge is integrated. Moreover, we demonstrate how SciMED can alert the user about possibly missing features, unlike the majority of current SR systems.
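
A minimal sketch of a genetic-algorithm wrapper for feature selection of the kind SciMED builds on: candidate feature subsets are bit-masks, fitness is a cross-validated regression score, and the population evolves through selection, crossover and mutation. The toy data and GA hyperparameters are illustrative, not SciMED's configuration.

```python
# Sketch: GA wrapper feature selection with cross-validated R^2 as fitness.
# Data, population size and rates are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 200, 8
X = rng.normal(size=(n_samples, n_features))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.normal(size=n_samples)  # only 0 and 2 matter

def fitness(mask):
    if not mask.any():
        return -np.inf
    return cross_val_score(LinearRegression(), X[:, mask], y, cv=3, scoring="r2").mean()

pop = rng.integers(0, 2, size=(20, n_features)).astype(bool)
for _ in range(30):                                   # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_features)
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(n_features) < 0.1           # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))     # should include 0 and 2
```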

Journal ArticleDOI
01 Feb 2022
TL;DR: In this article, the authors explored how domain knowledge, identified by expert decision makers, can be used to achieve a more human-centred approach to AI and measured the effect of domain knowledge on trust in AI, reliance on AI, and task performance in an AI-assisted complex decision-making environment.
Abstract: Increasingly, artificial intelligence (AI) is being used to assist complex decision-making such as financial investing. However, there are concerns regarding the black-box nature of AI algorithms. The field of explainable AI (XAI) has emerged to address these concerns. XAI techniques can reveal how an AI decision is formed and can be used to understand and appropriately trust an AI system. However, XAI techniques still may not be human-centred and may not support human decision-making adequately. In this work, we explored how domain knowledge, identified by expert decision makers, can be used to achieve a more human-centred approach to AI. We measured the effect of domain knowledge on trust in AI, reliance on AI, and task performance in an AI-assisted complex decision-making environment. In a peer-to-peer lending simulator, non-expert participants made financial investments using an AI assistant. The presence or absence of domain knowledge was manipulated. The results showed that participants who had access to domain knowledge relied less on the AI assistant when the AI assistant was incorrect and indicated less trust in the AI assistant. However, overall investing performance was not affected. These results suggest that providing domain knowledge can influence how non-expert users use AI and could be a powerful tool to help these users develop appropriate levels of trust and reliance.

Journal ArticleDOI
TL;DR: In this paper, a generalizable deep knowledge tracing (DKT) approach called GameDKT was proposed to model the learners' knowledge state during gameplay, in an attempt to monitor and trace their proficiency level for the different skills required for educational games.
Abstract: Despite the multiple deep knowledge tracing (DKT) methods developed for intelligent tutoring systems and online learning environments, there exist only a few applications of such methods in educational computer games. One key challenge is that a player may deploy several interwoven and overlapping skills during gameplay, making the assessment task nontrivial. In this research, we present a generalizable DKT approach called GameDKT that integrates state-of-the-art machine learning with domain knowledge to model the learners' knowledge state during gameplay, in an attempt to monitor and trace their proficiency level for the different skills required by educational games. Our findings reveal that the GameDKT approach can successfully predict the performance of players in the coming game task using the cross-validated CNN model, with accuracy and AUC of roughly 85% and 0.913, respectively, thus outperforming the MLP baseline model by up to 14%. When the performance of players is forecast up to four game tasks in advance, results show that the CNN model can achieve more than 70% accuracy. Interestingly, this model seems to be better and faster at identifying local patterns, and it achieves higher performance than RNN and LSTM in both one-step and multi-step prediction of learners' performance in game tasks.
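
A minimal sketch of a CNN-based knowledge tracer: past game-task interactions are encoded as (skill, correctness) channels and a 1D CNN predicts whether the next task will be solved. The encoding and dimensions are illustrative assumptions, not GameDKT's actual architecture.

```python
# Sketch: 1D CNN over a sequence of skill/correctness interactions that outputs
# the probability of success on the next game task. Sizes are illustrative.
import torch
import torch.nn as nn

n_skills, seq_len = 10, 20

class CNNKnowledgeTracer(nn.Module):
    def __init__(self):
        super().__init__()
        # Input channels: one-hot skill id concatenated with a correctness flag.
        self.conv = nn.Sequential(
            nn.Conv1d(n_skills + 1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                    # x: (batch, n_skills + 1, seq_len)
        h = self.conv(x).squeeze(-1)         # (batch, 32)
        return torch.sigmoid(self.head(h))   # probability of success on next task

batch = torch.rand(4, n_skills + 1, seq_len)
model = CNNKnowledgeTracer()
print(model(batch).shape)                    # torch.Size([4, 1])
```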

Journal ArticleDOI
TL;DR: In this article, a new knowledge model-based framework that incorporates the semantic information and SMM rules in BIM for automatic code-compliant quantity take-off is presented.

Journal ArticleDOI
TL;DR: A Multi-source domain Knowledge Transfer method for object detection that leverages both image-level and instance-level attention to promote positive cross-domain transfer and suppress negative transfer, achieving state-of-the-art performance.

Journal ArticleDOI
TL;DR: The MT-KD network provides an adequate platform for neural TTS training, in which the student model learns to emulate the behaviors of the two teachers while minimizing the mismatch between training and run-time inference.
Abstract: Neural end-to-end text-to-speech (TTS) is superior to conventional statistical methods in many ways. However, the exposure bias problem, which arises from the mismatch between the training and inference processes in autoregressive models, remains an issue. It often leads to performance degradation in the face of out-of-domain test data. To address this problem, we study a novel decoding knowledge transfer strategy and propose a multi-teacher knowledge distillation (MT-KD) network for the Tacotron2 TTS model. The idea is to pre-train two Tacotron2 TTS teacher models in teacher forcing and scheduled sampling modes, and to transfer the pre-trained knowledge to a student model that performs free running decoding. We show that the MT-KD network provides an adequate platform for neural TTS training, in which the student model learns to emulate the behaviors of the two teachers while minimizing the mismatch between training and run-time inference. Experiments on both Chinese and English data show that the MT-KD system consistently outperforms competitive baselines in terms of naturalness, robustness and expressiveness for in-domain and out-of-domain test data. Furthermore, we show that knowledge distillation outperforms adversarial learning and data augmentation in addressing the exposure bias problem.
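
A minimal sketch of a multi-teacher distillation objective of the kind described above: a free-running student is trained to match ground-truth acoustic frames while also mimicking the frames produced by two pre-trained teachers. The use of plain MSE and the loss weights are illustrative assumptions, not the exact objective of the MT-KD paper.

```python
# Sketch: weighted sum of a ground-truth term and two teacher-matching terms.
# Weights and the MSE criterion are illustrative choices.
import torch
import torch.nn.functional as F

def mt_kd_loss(student_out, target, teacher_tf_out, teacher_ss_out,
               alpha=0.5, beta=0.25, gamma=0.25):
    """Combine a data-fit loss with distillation losses from two teachers."""
    gt_loss = F.mse_loss(student_out, target)
    kd_tf = F.mse_loss(student_out, teacher_tf_out)   # teacher-forcing teacher
    kd_ss = F.mse_loss(student_out, teacher_ss_out)   # scheduled-sampling teacher
    return alpha * gt_loss + beta * kd_tf + gamma * kd_ss

# Toy mel-spectrogram shaped tensors: (batch, frames, mel_bins).
student = torch.rand(2, 100, 80, requires_grad=True)
target, t_tf, t_ss = torch.rand(2, 100, 80), torch.rand(2, 100, 80), torch.rand(2, 100, 80)
loss = mt_kd_loss(student, target, t_tf, t_ss)
loss.backward()
print("MT-KD loss:", loss.item())
```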

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a feature selection method embedded with materials domain knowledge, named NCOR-FS, to select higher-quality features; it can be applied to any materials system, and the idea of embedding domain knowledge into data-driven algorithms is expected to facilitate the broad construction of machine learning models embedded with materials domain knowledge.

Journal ArticleDOI
TL;DR: In this paper, a knowledge-informed framework for improved automated rule checking (ARC) is proposed based on natural language processing, and semantic alignment and conflict resolution are introduced to enhance the rule interpretation process based on predefined domain knowledge and unsupervised learning techniques.

Journal ArticleDOI
TL;DR: The authors explored the relationship between prior knowledge and learning of new, previously unknown information and found that high-knowledge learners' curiosity may be related to attention-based mechanisms that increase the effectiveness of encoding during feedback.
Abstract: When learning new information, students' prior knowledge related to that information will often vary. Prior research has not systematically explored how prior knowledge relates to learning of new, previously unknown information. Accordingly, the goal of the present research was to explore this relationship. In three experiments, students first completed a prior knowledge test over two domains (football and cooking) and then learned new information from these domains by answering questions and receiving feedback. Students also made a judgment of learning for each. To ensure that the learning was new (i.e., previously unknown) for all students, the to-be-learned information was false. Last, students completed a final test over the same questions from the learning phase. Prior knowledge in each domain was positively related to new learning for items from that domain but not from the other domain. Thus, the relationship between prior knowledge and new learning was domain specific, which we refer to as the rich-get-richer effect. Prior knowledge was also positively related to the magnitude of judgments of learning. In Experiment 3, to explore a potential reason why prior knowledge is related to new learning, students rated their curiosity in learning each item prior to receiving feedback. Critically, students' curiosity judgments mediated the relationship between prior knowledge and new learning. These outcomes suggest that for high-knowledge learners, curiosity may be related to attention-based mechanisms that increase the effectiveness of encoding during feedback. (PsycInfo Database Record (c) 2022 APA, all rights reserved).

Proceedings ArticleDOI
25 Apr 2022
TL;DR: A novel knowledge graph distillation method is proposed, and a knowledge injector is designed for dynamic interaction between the text and knowledge encoders, bringing in-depth domain knowledge into passage re-ranking.
Abstract: Passage re-ranking aims to obtain a permutation over the candidate passage set returned by the retrieval stage. Re-rankers have been boosted by Pre-trained Language Models (PLMs) due to their overwhelming advantages in natural language understanding. However, existing PLM-based re-rankers may easily suffer from vocabulary mismatch and a lack of domain-specific knowledge. To alleviate these problems, explicit knowledge contained in a knowledge graph is carefully introduced in our work. Specifically, we employ an existing knowledge graph, which is incomplete and noisy, and apply it to the passage re-ranking task for the first time. To leverage reliable knowledge, we propose a novel knowledge graph distillation method and obtain a knowledge meta graph as the bridge between query and passage. To align both kinds of embeddings in the latent space, we employ a PLM as the text encoder and a graph neural network over the knowledge meta graph as the knowledge encoder. Besides, a novel knowledge injector is designed for dynamic interaction between the text and knowledge encoders. Experimental results demonstrate the effectiveness of our method, especially for queries requiring in-depth domain knowledge.
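
A heavily simplified sketch of fusing a text-matching score with a knowledge score for passage re-ranking: here the knowledge score rewards passages whose linked entities lie within a few hops of the query's entities in a small graph, standing in for the distilled knowledge meta graph and GNN encoder of the paper. All entities, edges, scores and weights are illustrative.

```python
# Sketch: re-rank passages by a text score plus a graph-proximity knowledge score.
# The toy graph and fusion weight are illustrative placeholders.
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([("aspirin", "anti-inflammatory"), ("anti-inflammatory", "ibuprofen"),
                   ("aspirin", "blood thinner")])

def knowledge_score(query_entities, passage_entities, max_hops=2):
    """Fraction of query entities within max_hops of some passage entity."""
    hits = 0
    for q in query_entities:
        for p in passage_entities:
            if (q in kg and p in kg and nx.has_path(kg, q, p)
                    and nx.shortest_path_length(kg, q, p) <= max_hops):
                hits += 1
                break
    return hits / max(len(query_entities), 1)

def rerank(text_scores, query_entities, passages_entities, lam=0.5):
    fused = [s + lam * knowledge_score(query_entities, ents)
             for s, ents in zip(text_scores, passages_entities)]
    return sorted(range(len(fused)), key=lambda i: fused[i], reverse=True)

# Two candidate passages with equal text scores; the knowledge graph breaks the tie.
order = rerank([0.5, 0.5], ["aspirin"], [["ibuprofen"], ["weather"]])
print("re-ranked order:", order)   # passage 0 first
```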

Journal ArticleDOI
TL;DR: In this article, a model called the ontology-based knowledge map is proposed to represent and store the results (knowledge) of data mining in crop farming, in order to build, maintain, and enrich the process of knowledge discovery.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a hierarchical guided transfer learning framework (HGTL) for fault recognition with few-shot samples, which fuses domain knowledge, label semantics and inter-class distance to calculate the affinity between categories, based on which a category hierarchy tree is constructed by hierarchical clustering.

Journal ArticleDOI
TL;DR: In this article, a multi-disease prediction model based on a knowledge graph was proposed to discover the relationships between diseases and symptoms and to predict potential diseases for patients.
Abstract: In recent years, the means of disease diagnosis and treatment have improved remarkably, along with the continuous development of science and technology. Researchers have spent tremendous time and effort building models that aim to support medical practitioners in decision-making. However, one of the greatest challenges remains how to identify the connections between different diseases. This study aims to discover the relationships between diseases and symptoms in order to predict potential diseases for patients. Treating it as a multi-label classification problem, the study proposes a new multi-disease prediction model that learns from NHANES, an extensive health-related dataset, and MEDLINE, a corpus with medical domain knowledge. A heterogeneous information graph is first constructed and then populated using medical domain knowledge discovered from MEDLINE. The knowledge graph is analysed to clarify the relevancy of nodes in positive or negative space, helping to access the correlations amongst multiple diseases and their symptoms. A multi-label disease prediction model is then developed that adopts the medical domain knowledge graph. Empirical experiments are conducted to evaluate the proposed model. The experimental results show that the performance of the proposed model surpasses that of state-of-the-art related works representing the mainstream of multi-label classification. This study contributes to the medical community with a novel model for multi-disease prediction and represents a new endeavour in multi-label classification using knowledge graphs.
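
A minimal sketch of the multi-label prediction step: binary symptom features are mapped to several disease labels at once with a one-vs-rest classifier. The symptom and disease names and the toy labels are illustrative; the paper's knowledge-graph-derived features from MEDLINE are omitted here.

```python
# Sketch: multi-label disease prediction from binary symptom features.
# Symptoms, diseases and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

symptoms = ["fever", "cough", "fatigue", "chest_pain"]
diseases = ["flu", "hypertension", "diabetes"]

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, len(symptoms)))            # patient-symptom matrix
Y = np.column_stack([                                        # toy disease labels
    (X[:, 0] & X[:, 1]),                                     # flu ~ fever + cough
    rng.integers(0, 2, size=300),
    rng.integers(0, 2, size=300),
])

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
patient = np.array([[1, 1, 0, 0]])                           # fever and cough
probs = clf.predict_proba(patient)[0]
for d, p in zip(diseases, probs):
    print(f"{d}: {p:.2f}")
```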