
Answers from top 13 papers

Several applications are investigated under the matrix expression. These applications demonstrate the usefulness of the new matrix products.
We argue that this new matrix contains Jordan cells.
Proceedings ArticleDOI — Jitao Sang, Changsheng Xu, 25 Oct 2010 (77 citations)
In this paper, we incorporate the script into movie analysis and propose a novel character-based movie summarization approach, which is supported by modern film theory holding that what actually catches audiences' attention is the character.
Experimental results on the MovieLens and EachMovie data sets demonstrate that the proposed method compares favorably with state-of-the-art matrix-factorization-based collaborative filtering methods.
The desired M-matrix solution of the quadratic matrix equation (a special non-symmetric ARE by itself) turns out to be the unique stabilizing or almost stabilizing solution.
The new formalism enables one to calculate matrix convolutions and other algebraic operations in matrix form.
This paper proposes a new method to predict the sentiment of movies on Rotten Tomatoes by combining the sentiment score from SentiWordNet with the original expert score.
The main novelty of our approach is that we convert the movie scene segmentation into a movie-script alignment problem and propose a HMM alignment algorithm to map the script scene structure to the movie content.
In this paper, we propose a new algorithm for computing sparse matrix-matrix products by exploiting their nonzero structure through the process of graph coloring.
The results also shed new light on low-rank matrix completion.
These promising new agents join the matrix metalloproteinase inhibitors as important new drugs in the anti-cancer armamentarium.
The new model can learn much sparser matrix factorization.
With movie metadata collected for the NP movies we show that the prediction performance of the methods is comparable to that of CF, and can be used to predict user preferences on new movies.

See what other people are reading

What is the main topic?
4 answers
The main topic can be identified through various methods, such as dictionary-based software for topic analysis. This software, which operates in English and Italian, uses a manually created dictionary of about 5,500 topic words to detect the main topics in texts accurately. Additionally, historical documents and artworks have shed light on the prevalence of child abuse throughout different societies and time periods. Furthermore, in the medical field, understanding the manifestations of HIV in the ear, nose, and throat regions is crucial, with conditions like oral candidiasis and Kaposi's sarcoma being common in HIV-positive individuals. By analyzing texts, historical records, and medical literature, one can gain insights into prevalent topics across different domains.
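To make the dictionary-based approach above concrete, here is a minimal sketch of topic detection by keyword matching. The tiny topic dictionary, the example text, and the scoring rule are illustrative assumptions, not the actual 5,500-word dictionary or software described in the paper.

```python
from collections import Counter
import re

# Illustrative, hand-made topic dictionary (the real software uses ~5,500 curated words).
TOPIC_WORDS = {
    "health":  {"virus", "infection", "clinic", "symptom", "treatment"},
    "history": {"medieval", "archive", "artwork", "century", "document"},
    "finance": {"market", "inflation", "investment", "bank", "budget"},
}

def main_topic(text: str) -> str:
    """Return the topic whose dictionary words appear most often in the text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    scores = {
        topic: sum(counts[w] for w in words)
        for topic, words in TOPIC_WORDS.items()
    }
    return max(scores, key=scores.get)

print(main_topic("The clinic reported a new virus and updated its treatment protocol."))
# -> "health"
```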
What are the most common physical characteristics of suspects in criminal investigations?
10 answers
In criminal investigations, the identification and analysis of suspects' physical characteristics are crucial for narrowing down suspects and solving cases. Recent research has highlighted various physical characteristics that are commonly analyzed. Gestures, including hand movements and body language, are significant indicators of a suspect's behavior and can be automatically identified using deep learning techniques with high accuracy levels. Beyond gestures, the anatomical features on the dorsal surface of the hand, such as the occurrence rate of specific hand features, have been studied to assist in forensic investigations, showing significant variation between males and females. DNA analysis has become routine in forensic science, providing not only the genetic profile but also the potential to estimate the biological age of the individual from biological stains, adding another layer of physical characteristic useful in investigations. The ICM_Apriori algorithm has improved the efficiency of mining criminal data, including suspects' characteristics, by significantly reducing the time required for analysis. Interviews and the study of white-collar crime also reveal that socioeconomic status and occupational roles can be considered indirect physical or demographic characteristics relevant in certain criminal investigations. Dermatological conditions affecting fingerprints can complicate the identification process, indicating the importance of skin conditions as a physical characteristic in criminalistics. Facial attributes, defined and extracted using automated algorithms, serve as critical identifiers in the absence of traditional eyewitness sketches, highlighting the role of describable facial features in suspect identification. The personality and lifestyle choices, including substance abuse, can indirectly influence physical characteristics or the perception thereof in criminal profiles. Forensic dentistry emphasizes the uniqueness of dental structures as a vital physical characteristic in identifying suspects or victims in crimes. Lastly, the use of subjective feature-based mug shot retrieval systems demonstrates the importance of witnesses' perceptions of suspects' physical features, further underscoring the variety of characteristics used in criminal investigations.
What are the most common characteristics of suspects in criminal investigations?
10 answers
The most common characteristics of suspects in criminal investigations can be discerned through various analytical lenses, including behavioral evidence analysis, psychological profiling, interrogation outcomes, and the application of technology and algorithms in criminal investigations. Behavioral evidence analysis suggests that investigatively relevant characteristics of suspects include criminal skill, knowledge of the victim, the crime scene, and methods and materials, although profilers often err in deducing characteristics like age, sex, and intelligence. The application of technology, specifically deep learning techniques, has shown efficacy in identifying suspects based on unique behavioral cues such as body language and hand gestures, with a high accuracy level in distinguishing between positive and negative gestures. Psychological profiling, particularly through polygraph surveys, highlights the importance of emotional and behavioral traits such as reactive aggression, emotional lability, and a tendency towards emotional instability among suspects. This is complemented by the use of the Behavior Analysis Interview (BAI), which effectively differentiates between deceptive and truthful suspects by assessing verbal and nonverbal cues, with a notable accuracy in identifying deceptive behaviors. Criminal investigations also benefit from the analysis of suspects' characteristics through algorithms like the ICM_Apriori, which improves the efficiency of mining criminal rules and clues from crime data. Moreover, the study of criminal careers reveals patterns of generalist and specialist behaviors among suspects, influenced by factors such as age and gender, with women showing a propensity towards certain types of crimes like fraud. Interrogation outcomes further elucidate suspects' characteristics, indicating that previous contact with the police and personal traits significantly influence the truthfulness and content of suspects' confessions. Additionally, psychological assessments of detainees have identified mental health, memory and suggestibility, previous criminality, and literacy and IQ as significant factors. In summary, the most common characteristics of suspects in criminal investigations encompass a broad spectrum of behavioral, emotional, psychological, and demographic traits, as well as criminal skills and knowledge, all of which are crucial for effective profiling, interrogation, and investigation processes.
How do language models use long contexts?
5 answers
Language models exhibit varied performance with long contexts. Research shows that models perform best when relevant information appears at the beginning or end of the input, while performance degrades significantly when the relevant information sits in the middle of the context. To address this limitation, LongMem was proposed, enabling models to memorize long histories and draw on long-term memory during language modeling. AutoCompressors offer another solution by compressing long contexts into summary vectors, improving perplexity and reducing inference cost. These approaches highlight the challenges language models face with long contexts and propose ways to leverage extensive information effectively, ultimately improving model performance and adaptability to longer inputs.
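The position effect described above can be probed with a simple evaluation loop that moves the relevant passage through a stack of distractor documents and measures accuracy at each position. This is only an illustrative sketch: `ask_model` is a hypothetical stand-in for whatever LLM API is available, and the documents, question, and answer are toy inputs.

```python
import random

def ask_model(prompt: str) -> str:
    """Hypothetical model call; replace with an actual LLM API."""
    raise NotImplementedError

def build_prompt(relevant: str, distractors: list[str], position: int, question: str) -> str:
    # Place the relevant document at the requested position among the distractors.
    docs = distractors[:position] + [relevant] + distractors[position:]
    context = "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))
    return f"{context}\n\nQuestion: {question}\nAnswer:"

def accuracy_by_position(relevant, distractors, question, answer, trials=20):
    """Measure how often the model answers correctly as the relevant document moves."""
    results = {}
    for pos in range(len(distractors) + 1):
        hits = 0
        for _ in range(trials):
            random.shuffle(distractors)
            reply = ask_model(build_prompt(relevant, distractors, pos, question))
            hits += answer.lower() in reply.lower()
        results[pos] = hits / trials
    return results  # typically highest at the first and last positions
```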
What are the key components of Treisman and Gelade's feature-integration theory of attention?
4 answers
Treisman and Gelade's Feature Integration Theory (FIT) posits that during early visual processing, certain visual features are pre-attentively extracted in a parallel manner across the visual field, allowing objects with unique features to effortlessly capture attention. This theory was inspired by a culmination of ideas from visual psychophysics, cognitive psychology, and visual neurophysiology, integrating various key concepts and experimental findings from these fields. Despite being primarily focused on visual feature integration, Treisman's theory has been influential beyond vision, serving as a benchmark for assessing alternative approaches in both visual and non-visual domains. FIT's two-stage system, involving a preattentive stage followed by attentive processes, has significantly shaped discussions on visual search and attention for decades, prompting researchers to engage with and either adopt or modify its foundational ideas.
How does natural language processing work?
5 answers
Natural language processing (NLP) involves developing computational systems that analyze and understand human languages in order to automate tasks requiring human language skills. NLP draws on traditional methods such as bag-of-words models as well as newer deep learning techniques such as recurrent neural networks and transformers. Machine learning algorithms like neural networks and support vector machines play a crucial role in NLP tasks such as sentiment analysis and text classification by learning patterns from data. NLP encompasses natural language understanding (NLU) and natural language generation (NLG), covering tasks such as speech and written text processing, sentence planning, and text realization. As a subset of artificial intelligence, NLP bridges human communication with computer reasoning and is essential for extracting information from large document sets in industries such as oil and gas.
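As a concrete illustration of the traditional bag-of-words approach mentioned above, the sketch below trains a small sentiment classifier with scikit-learn. The toy training sentences and labels are made up for the example.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data (1 = positive, 0 = negative), purely for illustration.
texts = [
    "a wonderful, moving film with great acting",
    "the plot was dull and the pacing terrible",
    "I loved every minute of it",
    "a boring waste of two hours",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["great acting but a dull plot"]))
```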
How does textual revision aid in improving the quality of articles?
5 answers
Textual revision plays a crucial role in enhancing the quality of articles by addressing grammatical errors, improving readability, and refining the overall structure. Scholars facing challenges in English writing benefit from revisions that enhance textual cohesion and coherence, transforming immature drafts into publishable works. Evaluation strategies like guiding notes and comments facilitate interactive revisions, fostering dialogues between teachers and students to refine written productions effectively. In academic settings, the cognitive apprenticeship theory aids in teaching revision skills to students, offering methods like modeling, coaching, and fading to enhance their ability to revise articles effectively. Through iterative text refinement, authors can significantly elevate the quality of their work, ensuring clarity, coherence, and overall impact.
What is the current state of EI and AI in education and project management?
6 answers
The current state of Artificial Intelligence (AI) in education and project management reflects a transformative phase where AI's capabilities are being harnessed to enhance efficiency, decision-making, and learning outcomes. In education, AI is significantly improving administrative and management systems at institutional levels, modifying traditional teaching and learning techniques through seamless integration with E-learning technology and online Learning Management Systems (LMS). AI systems in education have evolved to include web-based chatbots and humanoid robots, aiming to increase teachers' efficiency in managing tasks and improving the standard of instructional activities. In project management, AI is emerging as a key technological driver, with applications ranging from digital assistants to intelligent tools that assist in various project management areas, including productivity, quality, and risk management. These AI applications are designed to complement the skills of project managers, allowing them to focus more on complex management tasks and collaborative work while AI handles routine tasks. However, the integration of AI in project management also faces challenges such as data integration across different project stages, the availability of sufficient historical data, and the operability of dynamic real-time tracking data. Despite these challenges, AI's role in project management is increasingly recognized for its potential to provide valuable insights through the interpretation of vast amounts of data, thereby enhancing decision-making, resource allocation, and efficient project delivery. However, the application of AI in project management is still in its early stages, with a need for further research and development to fully exploit AI's capabilities across all project management processes. Overall, both fields are witnessing the growing influence of AI, promising significant advancements in how educational and project management objectives are achieved. Yet, this also calls for a balanced approach where AI complements human expertise rather than replacing it, ensuring that the human element remains at the core of both educational and project management practices.
How to decrease radiological reporting time using AI?
5 answers
To decrease radiological reporting time using AI, several strategies can be implemented based on the findings from the research papers. Implementing smart worklist prioritization by AI can optimize workflow and reduce report turnaround times for critical findings in radiographs. Utilizing artificial intelligence for automatic generation of structured reports can significantly improve efficiency by converting dictated free text into structured reports, reducing the time needed for manual completion. Additionally, incorporating deep learning models like convolutional neural networks can help in triaging true-negative cases, reducing the caseload for radiologists and potentially improving overall efficiency without compromising detection rates. Lastly, utilizing memory-driven Transformers can enhance the generation of radiology reports, lightening the workload of radiologists and promoting clinical automation.
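A minimal sketch of the smart-worklist idea discussed above: studies are reordered by a model-assigned probability of a critical finding so that urgent cases are read first. The study records and scores here are placeholders for whatever triage model and RIS data are actually in use.

```python
from dataclasses import dataclass

@dataclass
class Study:
    accession: str
    critical_prob: float   # placeholder for an AI triage model's output
    minutes_waiting: int

worklist = [
    Study("RAD-101", critical_prob=0.05, minutes_waiting=40),
    Study("RAD-102", critical_prob=0.92, minutes_waiting=5),
    Study("RAD-103", critical_prob=0.30, minutes_waiting=25),
]

# Read suspected-critical studies first; break ties by how long a study has waited.
prioritized = sorted(worklist, key=lambda s: (-s.critical_prob, -s.minutes_waiting))
for study in prioritized:
    print(study.accession, study.critical_prob)
```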
What is an ASCII file?
5 answers
An ASCII file is a type of file format that utilizes ASCII (American Standard Code for Information Interchange) characters for encoding data. ASCII art, a form of ASCII file, represents images using character shapes and is commonly used on internet bulletin boards. ASCII-based encryption/decryption applications can be applied to various file types, such as images, data files, audio files, and more, by translating ASCII values of characters into binary mode. Additionally, an ASCII file format has been proposed for exchanging large systems of nonlinear ordinary differential matrix equations, supporting dense and sparse matrices with the inclusion of nonlinear functions and couplings. Furthermore, ASCII text-cover centric data hiding schemes are being researched to enhance information security through steganography, addressing issues like copyright infringement and e-theft.
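To illustrate the character-to-binary translation mentioned above, the snippet below maps each character of a string to its ASCII code point and 8-bit binary form, then reassembles the text; the example string is arbitrary.

```python
text = "ASCII"

for ch in text:
    code = ord(ch)                         # ASCII code point of the character
    print(ch, code, format(code, "08b"))   # e.g. A 65 01000001

# Reassemble the original text from the binary representation.
bits = "".join(format(ord(ch), "08b") for ch in text)
decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
assert decoded == text
```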
How does CDOM absorption compare with other methods for predicting water pollution, in terms of accuracy and efficiency?
10 answers
The comparison of CDOM absorption methods with other techniques for predicting water pollution reveals a nuanced landscape of accuracy and efficiency across various studies. CDOM, or chromophoric dissolved organic matter, plays a crucial role in the biogeochemical and carbon cycles of aquatic environments, with its absorption being a significant part of light absorptions in these systems. The use of machine learning algorithms, such as Gaussian process regression (GPR), has shown high stability and estimation accuracy for CDOM absorption coefficients, outperforming traditional empirical models in inland waters. This suggests a potential for high accuracy in CDOM-based monitoring approaches. In contrast, other predictive models for water quality, such as those targeting chemical oxygen demand (COD) using spectral technology and particle swarm algorithms, have also demonstrated good prediction effects. These methods, while distinct from CDOM absorption techniques, offer a stable, fast, and real-time measurement capability that is crucial for effective water pollution control and treatment. UV-visible imaging spectroscopy, an emerging technology, has been anticipated to improve the remote sensing of coastal waters by facilitating the detection of CDOM in optically complex waters. This method, especially when incorporating UV reflectance, significantly improves the retrieval of CDOM absorption coefficients, indicating its efficiency and accuracy in monitoring coastal water quality. Furthermore, the exploration of CDOM sources and dynamics through machine learning models like XGBoost has provided insights into the spectral slopes of CDOM, offering a more detailed understanding of its characteristics in marine environments. The identification of fluorescent components of CDOM in urban waters and their correlation with pollution levels further underscores the utility of CDOM analysis in real-time water quality monitoring. Comparatively, studies on the spectral characteristics of CDOM in highly polluted waters have highlighted the importance of understanding CDOM optical properties and their relationship with water quality. Seasonal characteristics of fluorescent CDOM components in polluted watershed tributaries have also been examined, demonstrating the influence of environmental factors on CDOM dynamics. In summary, while CDOM absorption methods offer promising accuracy and efficiency in monitoring water pollution, especially when enhanced by machine learning algorithms and UV-visible imaging spectroscopy, they are part of a broader toolkit that includes other predictive models for water quality assessment. Each method has its strengths and applications, suggesting that a comprehensive approach, possibly integrating multiple techniques, could provide the most accurate and efficient water pollution prediction.
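As a rough illustration of the Gaussian-process-regression approach to estimating CDOM absorption coefficients, the sketch below fits scikit-learn's GaussianProcessRegressor to synthetic reflectance band ratios. The synthetic data, the kernel choice, and the single band-ratio feature are assumptions for the example, not the retrieval model used in the cited studies.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic training data: a single band-ratio feature vs. a_CDOM(440) in 1/m.
band_ratio = rng.uniform(0.2, 2.0, size=(80, 1))
a_cdom_440 = 1.5 * np.exp(-1.2 * band_ratio[:, 0]) + rng.normal(0, 0.03, 80)

kernel = ConstantKernel(1.0) * RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(band_ratio, a_cdom_440)

# Predict with uncertainty for new band ratios.
new_ratios = np.array([[0.4], [1.0], [1.8]])
mean, std = gpr.predict(new_ratios, return_std=True)
for r, m, s in zip(new_ratios[:, 0], mean, std):
    print(f"band ratio {r:.1f}: a_CDOM(440) ≈ {m:.3f} ± {s:.3f} 1/m")
```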