
Context in user modelling? 


Best insight from top research papers

Context plays a crucial role in user modelling, particularly in personalisation and human-computer interaction, and it is becoming increasingly significant as user-centric applications proliferate. Research in this area focuses on how intelligent systems represent and use contextual information. The First ACM Workshop on Context Representation in User Modelling (CRUM 2023) aimed to bring together researchers from various disciplines to discuss the role of context in adaptive applications. Advances in machine learning techniques and the availability of large-scale, labelled datasets have enabled context-aware services to recognise a user's current situation and optimise system personalisation features. In collaborative tasks on online platforms, a knowledge base of users' behaviour in a group (a collaborative profile) is vital for enhancing collaborative abilities; traditional approaches relying on questionnaires or external observers can be replaced by observing users' behaviour in a collaborative serious game. Context is also important in sleep modelling, where it helps to understand the causal relationships between daily activities and sleep quality: a data-driven personalised sleep model can provide specific feedback recommendations to improve sleep outcomes.

Answers from top 5 papers

The paper discusses the use of contextual variables, such as lifestyle factors, in user modelling for personalized health recommendation systems.
The paper proposes using user embeddings, learned from users' previous posts, to capture speaker information and context for sarcasm detection.
The paper discusses the importance of considering context in user modelling, specifically within a collaborative serious game.
The paper discusses the significance of context in user modelling and its role in adaptive applications.
The paper discusses modelling user context from raw sensor data, using dimensionality reduction techniques to optimize context classification; a minimal sketch of such a pipeline follows below.
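
The last insight describes a common pattern: compress high-dimensional raw sensor readings before classifying the user's context. A minimal sketch, assuming scikit-learn and purely synthetic placeholder data (the feature matrix, labels, and class count are illustrative, not from any of the cited papers):

```python
# Classify user context (e.g. walking / sitting / commuting) from raw
# sensor features, reducing dimensionality with PCA before the classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))    # 1000 sensor windows x 60 raw features
y = rng.integers(0, 3, size=1000)  # 3 context classes (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Project the 60 raw features onto 10 principal components, then classify.
model = make_pipeline(PCA(n_components=10),
                      RandomForestClassifier(random_state=0))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```
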

Related Questions

What are some techniques for increasing the context size of language models in natural language processing tasks? (10 answers)

To increase the context size of language models in natural language processing tasks, several techniques have been proposed in recent research. One approach is Position Interpolation (PI), which extends the context window of RoPE-based pretrained LLMs up to 32768 tokens with minimal fine-tuning, demonstrating strong empirical results on a range of tasks. Another technique is AutoCompressors, which compress long contexts into summary vectors used as soft prompts, allowing long contexts to improve perplexity and accuracy on tasks that require them. The LongMem framework enables LLMs to memorize long history by decoupling the architecture into a memory encoder and a memory retriever, so long-term past contexts can be cached, updated, and retrieved without memory staleness. Parallel Context Windows (PCW) carves long contexts into chunks or windows, restricts the attention mechanism within each window, and re-uses positional embeddings across windows, lifting the context-window restriction for off-the-shelf LLMs without further training and showing substantial improvements on tasks with diverse input and output spaces. Lastly, In-Context Retrieval-Augmented Language Modeling (RALM) prepends grounding documents to the input of an unchanged LM, using off-the-shelf general-purpose retrievers to deliver significant gains across model sizes and diverse corpora.
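
Of these, Position Interpolation is simple enough to show concretely: rather than extrapolating rotary position embeddings to unseen positions, positions in the longer target window are linearly rescaled into the range the model was trained on. A minimal sketch in PyTorch, where the context lengths, head dimension, and helper function are illustrative assumptions rather than a specific library's API:

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Rotary-embedding angles for each (position, frequency) pair."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    return torch.outer(positions.float(), inv_freq)  # shape: (seq_len, dim // 2)

trained_ctx, target_ctx, head_dim = 2048, 8192, 64
positions = torch.arange(target_ctx)

# Position Interpolation: squeeze positions 0..8191 into the trained
# range 0..2047 before computing the rotary angles.
scaled = positions * (trained_ctx / target_ctx)
angles = rope_angles(scaled, head_dim)
print(angles.shape)  # torch.Size([8192, 32])
```

After rescaling, a brief fine-tune adapts the model to the denser position spacing, which is why the paper reports extension to 32768 tokens with minimal fine-tuning.
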
How can a large language model be encouraged to focus on the context? (5 answers)

To encourage large language models to focus on the context, several techniques have been proposed. One approach is to give the attention layer access to an external memory of (key, value) pairs, which extends the model's effective context length and lets it incorporate new information contextually. Another is careful prompt design, such as opinion-based prompts and counterfactual demonstrations: opinion-based prompts reframe the context as a narrator's statement and ask about the narrator's opinion, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge-conflict situations. Finally, the LongMem framework enables language models to memorize long history and draw on long-term memory for language modeling, via a decoupled architecture with a memory encoder and a memory retriever and reader.
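
Opinion-based prompting needs no model changes; it is just a rewrite of the input. A minimal sketch, where the narrator name and template wording are hypothetical illustrations of the idea, not the cited paper's exact prompt:

```python
def opinion_based_prompt(context: str, question: str) -> str:
    # Reframe the context as a narrator's statement and ask for the
    # narrator's view, steering the model toward the given passage
    # rather than its parametric knowledge.
    return (
        f'Bob said, "{context}"\n'
        f"Q: {question}, in Bob's view?\n"
        "A:"
    )

# A counterfactual context makes the effect visible: a faithful model
# should answer "Rome" here, not the memorised "Paris".
print(opinion_based_prompt(
    "The Eiffel Tower is located in Rome.",
    "Where is the Eiffel Tower located",
))
```
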
How can context be understood from a user command? (4 answers)

Understanding context from user commands involves several steps. First, the user's interaction with a computer application's user interface is detected and analyzed to determine action details; from these details and additional context information, a user context is generated. Relevant operations are then selected based on the interaction and the context, and the application is instructed to perform them. Context information can substitute for gesture or natural-language input, reducing the need for clarifying questions and limiting information-processing problems. For voice commands, a computing device obtains language-processing results, including an intent and its arguments, identifies actions from these together with contextual information, suggests the action to the user, and can perform it automatically. Ambiguous commands can also be disambiguated with context data: the user's voice input is analyzed and unclear commands are resolved against the context before the predetermined action runs.
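
To make that last step concrete, here is a minimal sketch of disambiguating a command with context data; the schema, field names, and device names are hypothetical illustrations, not a specific assistant's API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UserContext:
    focused_device: Optional[str]  # device the user most recently interacted with
    room: str

def resolve_command(utterance: str, ctx: UserContext) -> str:
    # Replace the ambiguous pronoun "it" with the contextual referent,
    # avoiding a clarification round-trip with the user.
    if ctx.focused_device is None:
        return utterance
    tokens = [ctx.focused_device if t == "it" else t for t in utterance.split()]
    return " ".join(tokens)

ctx = UserContext(focused_device="the living-room lamp", room="living room")
print(resolve_command("turn it off", ctx))  # -> "turn the living-room lamp off"
```
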
What is a context diagram? (3 answers)

A context diagram is a visual representation that captures the context of a system or application. It shows the interactions and relationships between the system and its external entities, such as users, other systems, and the environment, and it clarifies the system's boundaries and scope as well as the inputs it receives and the outputs it produces. It is a useful tool for requirements capture and analysis, especially for context-aware applications that must adapt to changes in the user's context. The diagram can also be extended to include context-awareness requirements, allowing a clean separation of concerns between functional requirements and context-awareness requirements. Overall, a context diagram provides an intuitive, visual way to represent and understand the context of a system or application.
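
A context diagram reduces to three ingredients: the system, its external entities, and the labelled flows between them. A minimal sketch in data form, with a hypothetical smart-thermostat example chosen only to make the structure concrete:

```python
context_diagram = {
    "system": "Smart Thermostat",
    "external_entities": ["Resident", "Weather Service", "HVAC Unit"],
    "flows": [  # (source, destination, label)
        ("Resident", "Smart Thermostat", "target temperature"),
        ("Weather Service", "Smart Thermostat", "outdoor forecast"),
        ("Smart Thermostat", "HVAC Unit", "heat/cool command"),
    ],
}

for src, dst, label in context_diagram["flows"]:
    print(f"{src} --[{label}]--> {dst}")
```
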
What is context in business and organizational behavior? (5 answers)

Context in business and organizational behavior refers to the external factors that influence the behavior and decision-making of organizations, including organizational characteristics, societal and cultural factors, and the specific environment in which an organization operates. Understanding the role of context is crucial for comprehending how organizations operate and for developing effective organizational strategies. Research shows that context significantly affects organizational outcomes such as internationalization behavior, firm performance, and mergers and acquisitions, and it also shapes individual behavior within organizations, such as feedback-seeking behavior and work engagement. By considering context in research and practice, we gain valuable insights into organizational behavior and improve the fit between research and real-world applications.
How can contextualized vocabulary be used to improve language models? (5 answers)

Contextualized vocabulary can improve language models by incorporating external knowledge and by improving topic coherence and diversity. One approach is a negative-sampling mechanism during training: the generated document-topic vector is perturbed, and a triplet loss encourages the reconstructed document to be similar to the input document and dissimilar to the document reconstructed from the perturbed vector. Another approach, "vokenization", maps language tokens to related images, providing multimodal alignments that improve language models trained on large corpora. In addition, enriching contextualized language models with biomedical knowledge graphs has been shown to consistently outperform other extraction models in biomedical information-extraction tasks.
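
The negative-sampling mechanism can be sketched with a triplet loss in PyTorch; the encoder, decoder, and perturbation below are placeholders standing in for the paper's topic model, not its exact architecture:

```python
import torch
import torch.nn.functional as F

vocab_size, n_topics = 2000, 50
encoder = torch.nn.Linear(vocab_size, n_topics)  # placeholder inference network
decoder = torch.nn.Linear(n_topics, vocab_size)  # placeholder generator

x = torch.rand(8, vocab_size)                    # batch of bag-of-words documents
theta = torch.softmax(encoder(x), dim=-1)        # document-topic vectors

# Perturb the document-topic vector to create the negative sample.
theta_neg = torch.softmax(theta + 0.5 * torch.randn_like(theta), dim=-1)

# Triplet loss: the reconstruction from theta (positive) should be closer
# to the input document than the reconstruction from the perturbed vector.
loss = F.triplet_margin_loss(anchor=x,
                             positive=decoder(theta),
                             negative=decoder(theta_neg),
                             margin=1.0)
loss.backward()
```
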