Patent
Knowledge graph vector representation generation method, device and equipment
TLDR
In this article, a knowledge graph vector representation generation method is proposed for the field of artificial intelligence. The method comprises the steps of: obtaining a knowledge graph, wherein the knowledge graph comprises a plurality of entity nodes; obtaining a context type and context data corresponding to the knowledge graph; and generating vector representations corresponding to the entity nodes through a context model according to the context data and the context type.
Abstract: The invention provides a knowledge graph vector representation generation method, device and equipment, relating to the technical field of artificial intelligence. The specific implementation scheme is that the method comprises the steps of: obtaining a knowledge graph, wherein the knowledge graph comprises a plurality of entity nodes; obtaining a context type and context data corresponding to the knowledge graph; and generating vector representations corresponding to the plurality of entity nodes through a context model according to the context data and the context type. A finer semantic representation of each entity in its context is thereby realized, further improving the accuracy of knowledge graph representation learning.
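The three claimed steps (obtain a graph of entity nodes, obtain context type and data, generate entity vectors through a context model) could be sketched as follows. This is a minimal illustration only: the entity names, token vectors, and the mean/sum-pooling "context model" are all made-up assumptions, not the patent's actual implementation.

```python
import numpy as np

DIM = 8
rng = np.random.default_rng(0)
# Stand-in embeddings for context tokens (hypothetical vocabulary).
token_vecs = {t: rng.normal(size=DIM) for t in
              ["film", "director", "actor", "award", "city"]}

def context_model(context_type, context_data):
    """Toy context model: pool the vectors of the context tokens.

    context_type selects the pooling strategy ('text' -> mean,
    otherwise sum); a real system would use a trained encoder.
    """
    vecs = np.stack([token_vecs[t] for t in context_data])
    return vecs.mean(axis=0) if context_type == "text" else vecs.sum(axis=0)

# Step 1: obtain a knowledge graph containing entity nodes
# (here just entity -> context tokens, a deliberate simplification).
graph = {"Inception": ["film", "director"], "Paris": ["city"]}

# Steps 2-3: obtain context type/data per entity and generate vectors.
entity_vectors = {e: context_model("text", ctx) for e, ctx in graph.items()}
print({e: v.shape for e, v in entity_vectors.items()})
```

Each entity thus receives a fixed-dimensional vector conditioned on its own context rather than a single global embedding.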
Citations
Patent
Vector representation generation method, apparatus and device for knowledge graph
TL;DR: In this paper, a vector representation generation method, apparatus and device for a knowledge graph are proposed, relating to the technical field of artificial intelligence, wherein the knowledge graph comprises a plurality of entity nodes.
References
Journal Article (DOI)
Learning Vector-space Representations of Items for Recommendations Using Word Embedding Models
TL;DR: The method of generating item recommendations by learning item feature vector embeddings is analogous to approaches like Word2Vec or GloVe, which are used to generate good vector representations of words in a natural language corpus.
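The analogy above (treat user interaction sessions as "sentences" and learn dense item vectors from them) can be sketched without a neural model by factorizing an item co-occurrence matrix with SVD. The sessions and items below are invented toy data, and the SVD step is a stand-in for Word2Vec/GloVe training, not the paper's method.

```python
import numpy as np

# Toy interaction "sentences": items that appeared in the same session.
sessions = [["bread", "butter", "jam"],
            ["bread", "butter"],
            ["beer", "chips"],
            ["beer", "chips", "salsa"]]

items = sorted({i for s in sessions for i in s})
idx = {it: k for k, it in enumerate(items)}

# Count co-occurrences of items within the same session.
C = np.zeros((len(items), len(items)))
for s in sessions:
    for a in s:
        for b in s:
            if a != b:
                C[idx[a], idx[b]] += 1.0

# Low-rank factorization yields dense item embeddings.
U, S, _ = np.linalg.svd(C)
emb = U[:, :2] * S[:2]          # 2-dimensional item vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Items bought together end up with more similar vectors.
print(cosine(emb[idx["bread"]], emb[idx["butter"]]),
      cosine(emb[idx["bread"]], emb[idx["beer"]]))
```

Nearest neighbors in this embedding space then serve directly as recommendation candidates, exactly as nearest words serve as synonyms in Word2Vec.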
Patent
Face recognition and image search system using sparse feature vectors, compact binary vectors, and sub-linear search
Mark J. Burge, Jordan Cheney, et al.
TL;DR: In this article, an input image of a face may be received and cropped, and the image may be processed through a deep neural network (DNN) to produce a k-dimensional feature vector.
Patent
Personalized recommendation method based on knowledge graph
Chang Liang, Kuang Haili, et al.
TL;DR: In this article, a personalized recommendation method based on a knowledge graph is proposed. Link relations among conceptual entities in the knowledge graph are used to measure the semantic association between every two candidate nodes, a network representation learning method is used to obtain the representation vectors of the nodes in the network structure, and items are precisely recommended to users by calculating node similarity.
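The final step above (recommend items by node similarity over learned representation vectors) can be sketched as follows. The hand-made vectors stand in for ones a network representation method such as DeepWalk would learn; all names and values are illustrative assumptions.

```python
import numpy as np

# Toy node representation vectors (stand-ins for learned embeddings).
node_vecs = {
    "user_a": np.array([1.0, 0.2, 0.0]),
    "item_x": np.array([0.9, 0.1, 0.1]),
    "item_y": np.array([0.0, 1.0, 0.8]),
    "item_z": np.array([0.8, 0.3, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user, items, k=2):
    """Rank candidate item nodes by cosine similarity to the user node."""
    scored = sorted(items,
                    key=lambda i: cosine(node_vecs[user], node_vecs[i]),
                    reverse=True)
    return scored[:k]

print(recommend("user_a", ["item_x", "item_y", "item_z"]))
```

In the patent's setting, the semantic associations measured over knowledge-graph links would shape these vectors before the similarity ranking is applied.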
Patent
Information pushing method and device based on mapping knowledge domain
TL;DR: In this article, an information pushing method and device based on a mapping knowledge domain are described. The method comprises the following steps: recognizing at least one entity in a target text; determining the type of each entity among the at least one entity; determining an intention point word in the target text, and determining the entity associated with the intention point word among the at least one entity as a target entity; determining knowledge information that matches the target entity and the type of the target entity; and pushing the knowledge information.
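The pipeline above (recognize entities, type them, find an intention word, match and push knowledge) can be sketched with simple lookups. The entity lexicon, intention words, and knowledge entries below are made-up stand-ins for the recognizer, classifier, and knowledge domain described in the patent.

```python
# Hypothetical lexicons standing in for trained components.
ENTITY_TYPES = {"Inception": "film", "Paris": "city"}
INTENTION_WORDS = {"director", "release year"}
KNOWLEDGE = {("Inception", "film", "director"):
             "Directed by Christopher Nolan."}

def push(text):
    # 1. Recognize entities in the target text.
    entities = [(e, t) for e, t in ENTITY_TYPES.items() if e in text]
    # 2. Determine an intention point word in the text.
    intent = next((w for w in INTENTION_WORDS if w in text), None)
    if intent is None:
        return None
    # 3. Treat each recognized entity as a candidate target entity
    #    (a deliberate simplification of the association step).
    for entity, etype in entities:
        # 4. Match knowledge by entity, entity type, and intention.
        info = KNOWLEDGE.get((entity, etype, intent))
        if info:
            return info  # 5. Push the matched knowledge information.
    return None

print(push("who is the director of Inception"))
```

When no intention word or matching knowledge entry exists, nothing is pushed, mirroring the conditional structure of the claimed steps.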
Patent
Visualization framework based on document representation learning
TL;DR: In this paper, a framework based on document representation learning is described, in which a free text document is converted into word vectors using learned word embeddings, and document representations are determined in a fixed-dimensional semantic representation space by passing the word vectors through a trained machine learning model, such that more related documents lie closer than less related documents in the representation space.
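The core idea above can be sketched by pooling word vectors into a fixed-dimensional document vector and comparing distances in that space. The random word vectors and mean pooling below are assumptions standing in for learned embeddings and the trained model of the framework.

```python
import numpy as np

rng = np.random.default_rng(1)
vocab = ["graph", "vector", "node", "cat", "dog", "pet"]
# Random stand-ins for learned word embeddings.
word_vecs = {w: rng.normal(size=64) for w in vocab}

def doc_vec(text):
    """Fixed-dimensional document representation via mean pooling."""
    vecs = [word_vecs[w] for w in text.split() if w in word_vecs]
    return np.mean(vecs, axis=0)

docs = {"d1": "graph vector node", "d2": "cat dog pet", "d3": "graph node"}
reps = {d: doc_vec(t) for d, t in docs.items()}

def dist(a, b):
    return float(np.linalg.norm(reps[a] - reps[b]))

# d3 shares words with d1, so it lies closer to d1 than to d2.
print(dist("d1", "d3") < dist("d2", "d3"))
```

Visualizing such a space (e.g. after projecting to 2-D) is what lets the framework place related documents near each other on screen.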