
Showing papers presented at "Computational Science and Engineering in 2010"


Journal ArticleDOI
01 May 2010
TL;DR: In this article, the authors make the cloud platform behave virtually like a local homogeneous computer cluster, giving users access to high-performance clusters without requiring them to purchase or maintain sophisticated hardware.
Abstract: Large, virtualized pools of computational resources raise the possibility of a new, advantageous computing paradigm for scientific research. To help achieve this, new tools make the cloud platform behave virtually like a local homogeneous computer cluster, giving users access to high-performance clusters without requiring them to purchase or maintain sophisticated hardware.

118 citations


Proceedings ArticleDOI
11 Dec 2010
TL;DR: A system called MIFAS (Medical Image File Accessing System) is presented to solve the challenges of exchanging, storing, and sharing medical images across different hospitals, helping caregivers make the best possible patient-care decisions.
Abstract: Large-scale clusters based on cloud technologies have been widely used in many areas, including data centers and cloud computing environments. The purpose of this research was to address the challenges of exchanging, storing, and sharing medical images within EMR (Electronic Medical Record) systems. In recent years, many countries have invested significant resources in EMR projects. The benefits of EMR include patient-centered care, collaborative teams, evidence-based care, redesigned business processes, relevant data capture and analysis, and timely feedback and education. Examples include the ARRAHIT project in the United States (2011 - 2015), the Health Infoway project in Canada (2001 - 2015), and the NHIP project in Taiwan. Addressing the topic of EMR, we present a system called MIFAS (Medical Image File Accessing System) to solve the problems of exchanging, storing, and sharing medical images across different hospitals. Through this system we can enhance the efficiency of sharing information between patients and their caregivers. Furthermore, the system can support the best possible patient-care decisions.

77 citations


Proceedings ArticleDOI
Haichang Gao1, Dan Yao1, Honggang Liu1, Xiyang Liu1, Liming Wang1 
11 Dec 2010
TL;DR: A novel image-based CAPTCHA that involves solving a jigsaw puzzle is presented in this paper; it is a promising substitute for current text-based CAPTCHAs.
Abstract: Most commonly used CAPTCHAs are text-based CAPTCHAs, which rely on the distortion of texts in the background image. With the development of automated computer vision techniques designed to remove noise and segment distorted strings to make characters readable for OCR, traditional text-based CAPTCHAs are no longer considered safe for authentication. A novel image-based CAPTCHA that involves solving a jigsaw puzzle is presented in this paper. An image is divided into n×n (n = 3, 4, or 5, depending on the security level) pieces to construct the jigsaw puzzle CAPTCHA. Only two of the pieces are misplaced from their original positions. Users are required to find the two pieces and swap them. Considering previous works devoted to solving jigsaw puzzles using edge-matching techniques, the edges of all pieces are processed with a glitch treatment to prevent automatic solving by computers. Experiments and security analysis show that human users can complete the CAPTCHA verification quickly and accurately, but computers rarely can. It is a promising substitute for current text-based CAPTCHAs.
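The swap-two-pieces scheme the abstract describes can be reduced to a small sketch over piece indices (the image tiling, glitch treatment, and user interface are omitted; all function names here are hypothetical, not from the paper):

```python
import random

def make_puzzle(n, seed=None):
    """Build an n*n puzzle layout: piece indices 0..n*n-1 in original
    order, with exactly two pieces swapped (the misplaced pair)."""
    rng = random.Random(seed)
    layout = list(range(n * n))
    i, j = rng.sample(range(n * n), 2)       # two distinct positions
    layout[i], layout[j] = layout[j], layout[i]
    return layout, (i, j)

def verify(layout, swap):
    """A user response is correct if swapping the two chosen positions
    restores the identity arrangement."""
    i, j = swap
    candidate = layout[:]
    candidate[i], candidate[j] = candidate[j], candidate[i]
    return candidate == sorted(candidate)

layout, answer = make_puzzle(4, seed=1)
assert verify(layout, answer)   # only the truly misplaced pair solves it
```

Because only two pieces are out of place, exactly one swap solves the puzzle, which is what makes verification a simple equality check.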

62 citations


Journal ArticleDOI
01 Nov 2010
TL;DR: As power consumption of supercomputers grows out of control, energy efficiency will move from desirable to mandatory.
Abstract: As power consumption of supercomputers grows out of control, energy efficiency will move from desirable to mandatory.

58 citations


Book ChapterDOI
01 Jan 2010
TL;DR: Based on extensive global software engineering research, the authors developed a software process, Global Teaming, which includes specific practices and sub-practices to ensure that the requirements for successful global software engineering are stipulated, so that organisations can ensure successful implementation of global software engineering.
Abstract: Our research has shown that many companies are struggling with the successful implementation of global software engineering, due to temporal, cultural and geographical distance, which causes a range of factors to come into play. For example, cultural, project management and communication difficulties continually cause problems for software engineers and project managers. While the implementation of efficient software processes can be used to improve the quality of the software product, published software process models do not cater explicitly for the recent growth in global software engineering. Our thesis is that global software engineering factors should be included in software process models to ensure their continued usefulness in global organisations. Based on extensive global software engineering research, we have developed a software process, Global Teaming, which includes specific practices and sub-practices. The purpose is to ensure that requirements for successful global software engineering are stipulated so that organisations can ensure successful implementation of global software engineering.

58 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This chapter summarizes the main findings of this book, draws some conclusions on these findings and looks at the prospects for software engineers in dealing with the challenges of collaborative software development.
Abstract: Much work is presently ongoing in collaborative software engineering research. This work is beginning to make serious inroads into our ability to more effectively practice collaborative software engineering, with best practices, processes, tools, metrics, and other techniques becoming available for day-to-day use. However, we have not yet reached the point where the practice of collaborative software engineering is routine, without surprises, and generally as optimal as possible. This chapter summarizes the main findings of this book, draws some conclusions on these findings and looks at the prospects for software engineers in dealing with the challenges of collaborative software development. The chapter ends with prospects for collaborative software engineering.

55 citations


Proceedings ArticleDOI
11 Dec 2010
TL;DR: A novel cultured AFA with a crossover operator, namely CAFAC, is proposed to enhance optimization performance, and numerical simulation results demonstrate that the proposed CAFAC can indeed outperform the original AFA.
Abstract: The Artificial Fish-swarm Algorithm (AFA) is an intelligent population-based optimization algorithm inspired by the behaviors of fish swarms. Unfortunately, it sometimes fails to maintain an appropriate balance between exploration and exploitation, and it suffers from blind search. In this paper, a novel cultured AFA with a crossover operator, namely CAFAC, is proposed to enhance its optimization performance. The crossover operator is used to promote the diversification of the artificial fish and make them inherit their parents' characteristics. The Cultural Algorithm (CA) is also combined with the AFA so that blind search can be combated. A total of 10 high-dimensional, multi-peak functions are employed to investigate the optimization properties of our CAFAC. Numerical simulation results demonstrate that the proposed CAFAC can indeed outperform the original AFA.

32 citations


Journal ArticleDOI
01 Nov 2010
TL;DR: This paper presents a taxonomy of different approaches proposed to enable location privacy in LBS and elaborates on the strengths and weaknesses of each class of approaches.
Abstract: The ubiquity of smartphones and other location-aware hand-held devices has resulted in a dramatic increase in popularity of location-based services (LBS) tailored to users' locations. The comfort of LBS comes with a privacy cost. Various distressing privacy violations caused by sharing sensitive location information with potentially malicious services have highlighted the importance of location privacy research aiming to protect users' privacy while interacting with LBS. This paper presents a taxonomy of different approaches proposed to enable location privacy in LBS and elaborates on the strengths and weaknesses of each class of approaches.

30 citations


Book ChapterDOI
01 Jan 2010
TL;DR: This chapter presents a two-part solution to this problem: a collaborative architecting process based on architectural knowledge and an accompanying tool suite that demonstrates one way to support the process.
Abstract: In the field of software architecture, there has been a paradigm shift from describing the outcome of the architecting process to documenting architectural knowledge, such as design decisions and rationale. Moreover, in a global, distributed setting, software architecting is essentially a collaborative process in which sharing and reusing architectural knowledge is a crucial and indispensable part. Although the importance of architectural knowledge has been recognized for a considerable period of time, there is still no systematic process emphasizing the use of architectural knowledge in a collaborative context. In this chapter, we present a two-part solution to this problem: a collaborative architecting process based on architectural knowledge and an accompanying tool suite that demonstrates one way to support the process.

28 citations


Proceedings ArticleDOI
11 Dec 2010
TL;DR: The framework formalizes a series of theoretical steps from discovering a workflow-based social network to analyzing the discovered social network, and applies the degree centrality algorithm, one of the well-known social network analysis algorithms in the literature, to the discovered network.
Abstract: The purpose of this paper is to build a fundamental framework for discovering and analyzing a workflow-based social network formed through workflow-based organizational business operations. More precisely, the framework formalizes a series of theoretical steps from discovering a workflow-based social network to analyzing the discovered social network. For the discovery phase, we conceive an algorithm that is able to automatically discover the workflow-based social network from a workflow procedure; in the analysis phase, we apply the degree centrality algorithm, one of the well-known social network analysis algorithms in the literature, to the discovered social network. Consequently, the crucial implication of the framework is in quantifying the degree of work-intimacy among performers who are involved in enacting the corresponding workflow procedure. As a conceptual extension, the framework can also be applied to discovering and analyzing degree centrality or collaborative closeness and betweenness among architectural components and nodes of collaborative cloud workflow computing environments.
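The two phases can be illustrated with a minimal sketch: one simple discovery rule (connect two performers whenever their assigned activities are adjacent in the workflow; the paper's actual algorithm may differ) followed by standard normalized degree centrality. The workflow data here is hypothetical.

```python
from collections import defaultdict

# Hypothetical workflow: each activity maps to its assigned performer,
# and `flow` lists the directed activity transitions.
assignment = {"A": "alice", "B": "bob", "C": "carol", "D": "dave"}
flow = [("A", "B"), ("B", "C"), ("C", "D")]

def workflow_social_network(assignment, flow):
    """Discovery phase: connect two performers whenever their
    activities are adjacent in the workflow procedure."""
    adj = defaultdict(set)
    for a, b in flow:
        p, q = assignment[a], assignment[b]
        if p != q:
            adj[p].add(q)
            adj[q].add(p)
    return adj

def degree_centrality(adj):
    """Analysis phase: normalized degree centrality, degree / (n - 1)."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

net = workflow_social_network(assignment, flow)
print(degree_centrality(net))   # bob and carol sit on more handovers
```

Performers who mediate more activity handovers score higher, which is exactly the "work-intimacy" quantification the abstract points to.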

26 citations


Proceedings ArticleDOI
11 Dec 2010
TL;DR: A massively parallel version of the RMAP short-read mapping tool that is highly optimized for the NVIDIA family of GPUs is proposed and it is shown that despite using the conventionally “slower” but GPU-compatible binary search algorithm, GPU-RMAP outperforms the sequential RMAP implementation, which uses the “faster” hashing technique on a PC.
Abstract: Next-generation, high-throughput sequencers are now capable of producing hundreds of billions of short sequences (reads) in a single day. The task of accurately mapping the reads back to a reference genome is of particular importance because it is used in several other biological applications, e.g., genome re-sequencing, DNA methylation, and ChIP sequencing. On a personal computer (PC), the computationally intensive short-read mapping task currently requires several hours to execute while working on very large sets of reads and genomes. Accelerating this task requires parallel computing. Among the current parallel computing platforms, the graphics processing unit (GPU) provides massively parallel computational prowess that holds the promise of accelerating scientific applications at low cost. In this paper, we propose GPU-RMAP, a massively parallel version of the RMAP short-read mapping tool that is highly optimized for the NVIDIA family of GPUs. We then evaluate GPU-RMAP by mapping millions of synthetic and real reads of varying widths on the mosquito (Aedes aegypti) and human genomes. We also discuss the effects of various input parameters, such as read width, number of reads, and chromosome size, on the performance of GPU-RMAP. We then show that despite using the conventionally “slower” but GPU-compatible binary search algorithm, GPU-RMAP outperforms the sequential RMAP implementation, which uses the “faster” hashing technique on a PC. Our data-parallel GPU implementation results in impressive speedups of up to 14.5-times for the mapping kernel and up to 9.6-times for the overall program execution time over the sequential RMAP implementation on a traditional PC.
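The key design point in the abstract, replacing a hash-table lookup with a GPU-friendly binary search over a sorted index, can be sketched on the CPU in a few lines. This is a toy exact-match seed lookup, not GPU-RMAP's actual index layout; the function names and the verification of the rest of the read are assumptions or omitted.

```python
import bisect

def build_index(genome, k):
    """Sorted list of (k-mer, position) pairs -- the binary-searchable
    structure that stands in for RMAP's hash table in this sketch."""
    return sorted((genome[i:i + k], i) for i in range(len(genome) - k + 1))

def map_read(idx, read, k):
    """Look up the read's first k-mer by binary search and return all
    matching genome positions (full-read verification omitted)."""
    seed = read[:k]
    keys = [kmer for kmer, _ in idx]   # a real version searches a flat array
    lo = bisect.bisect_left(keys, seed)
    hits = []
    while lo < len(idx) and idx[lo][0] == seed:
        hits.append(idx[lo][1])
        lo += 1
    return hits

idx = build_index("ACGTACGTGACG", 4)
print(map_read(idx, "ACGT", 4))   # [0, 4]
```

Binary search costs O(log n) per lookup versus O(1) for hashing, but its regular, branch-light access pattern is what makes it map well onto thousands of GPU threads.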

Book ChapterDOI
01 Jan 2010
TL;DR: The purpose of this article is to provide a systematic account of how ontologies can be applied in CSD, and to describe the benefits of both existing applications such as “semantic wikis” and visionary scenarios such as a “Software Engineering Semantic Web”.
Abstract: Making distributed teams more efficient is one main goal of Collaborative Software Development (CSD) research. To this end, ontologies, which are models that capture a shared understanding of a specific domain, provide key benefits. Ontologies have formal, machine-interpretable semantics that make it possible to define semantic mappings for heterogeneous data and to infer implicit knowledge at run-time. Extending development infrastructures and software architectures with ontologies (of problem and solution domains) will address coordination and knowledge sharing challenges in activities such as documentation, requirements specification, component reuse, error handling, and test case management. The purpose of this article is to provide a systematic account of how ontologies can be applied in CSD, and to describe the benefits of both existing applications such as “semantic wikis” and visionary scenarios such as a “Software Engineering Semantic Web”.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: Experimental results show the practicability and superiority of the proposed method over its classical counterparts, providing high performance in terms of PSNR and data hiding capacity.
Abstract: Reversible data hiding is a technique in which not only can the secret data be extracted from a stego image, but the cover image can also be completely rebuilt after the extraction process. Therefore, it is the method of choice in cases of secret data hiding where full recovery of the cover image is essential. In this paper, we propose a reversible data hiding technique based on the Neighbor Mean Interpolation (NMI) method utilizing the R-weighted Coding Method (RCM). Experimental results show the practicability and superiority of the proposed method over its classical counterparts, providing high performance in terms of PSNR and data hiding capacity.
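The reversibility property can be seen in a toy one-dimensional version of Neighbor Mean Interpolation: interpolated pixels are recomputable from the originals, so any bits added to them can be recovered and removed exactly. This is a simplified 1-bit-per-pixel sketch, not the paper's RCM-based scheme.

```python
def nmi_upscale(row):
    """Neighbor Mean Interpolation on one row: insert the mean of every
    pair of adjacent original pixels between them."""
    out = []
    for a, b in zip(row, row[1:]):
        out.extend([a, (a + b) // 2])
    out.append(row[-1])
    return out

def embed(row, bits):
    """Hide one bit per interpolated pixel (toy variant)."""
    up = nmi_upscale(row)
    for n, bit in enumerate(bits):
        up[2 * n + 1] += bit
    return up

def extract(stego):
    """Recover the bits and rebuild the exact cover row."""
    cover = stego[::2]                 # originals sit at even positions
    ref = nmi_upscale(cover)           # recompute clean interpolation
    bits = [s - r for s, r in zip(stego[1::2], ref[1::2])]
    return bits, cover

stego = embed([100, 120, 90], [1, 0])
print(extract(stego))   # ([1, 0], [100, 120, 90])
```

Because the cover pixels are stored untouched, extraction is exact, which is precisely what "reversible" means here.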

Journal ArticleDOI
01 Nov 2010
TL;DR: Several software and hardware approaches can increase the interconnection network's power efficiency, either by using the network more efficiently or by throttling bandwidth to reduce the power consumption of unneeded resources.
Abstract: Although most large-scale systems are designed with the network as a central component, the interconnection network's energy consumption has received little attention. However, several software and hardware approaches can increase the interconnection network's power efficiency, either by using the network more efficiently or by throttling bandwidth to reduce the power consumption of unneeded resources.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: A novel query-dependent feature fusion method for medical image retrieval based on a one-class support vector machine that can learn different feature fusion models for different image queries based only on multiple image samples provided by the user, and can outperform existing feature fusion methods for image retrieval.
Abstract: Due to the huge growth of the World Wide Web, medical images are now available in large numbers in online repositories, and there is a need to retrieve images by automatically extracting visual information from them, which is commonly known as content-based image retrieval (CBIR). Since each feature extracted from images characterizes only a certain aspect of image content, multiple features are necessarily employed to improve retrieval performance. Meanwhile, experiments demonstrate that a particular feature is not equally important for different image queries. Most existing feature fusion methods for image retrieval use only query-independent feature fusion or rely on explicit user weighting. In this paper, we present a novel query-dependent feature fusion method for medical image retrieval based on a one-class support vector machine. Having considered that a particular feature is not equally important for different image queries, the proposed query-dependent feature fusion method can learn different feature fusion models for different image queries based only on multiple image samples provided by the user, and the learned feature fusion models can reflect the different importance of a particular feature for different image queries. Experimental results on the IRMA medical image collection demonstrate that the proposed method can improve retrieval performance effectively and can outperform existing feature fusion methods for image retrieval.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: This paper proposes a novel individualized error prediction (IEP) mechanism that considers a range of k-NN classifiers (for different k values) and uses secondary regression models that predict the error of each such classifier.
Abstract: Time-series classification is an active research topic in machine learning, as it finds applications in numerous domains. The k-NN classifier, based on the dynamic time warping (DTW) distance, has been shown to be competitive with many state-of-the-art time-series classification methods. Nevertheless, due to the complexity of time-series data sets, our investigation demonstrates that a single, global choice for k (>= 1) can become suboptimal, because each individual region of a data set may require a different k value. In this paper, we propose a novel individualized error prediction (IEP) mechanism that considers a range of k-NN classifiers (for different k values) and uses secondary regression models that predict the error of each such classifier. This permits performing k-NN time-series classification in a more fine-grained fashion that adapts to the varying characteristics among different regions, avoiding the restriction of a single value of k. Our experimental evaluation, using a large collection of real time-series data, indicates that the proposed method is robust and compares favorably against two examined baselines, resulting in a significant reduction in classification error.
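The building blocks the abstract relies on, DTW distance and a k-NN classifier whose k is a free parameter, can be sketched as follows. The IEP step (secondary regression models that pick k per query) is omitted; here k is simply passed in, and the data is hypothetical.

```python
def dtw(a, b):
    """Classic dynamic time warping distance between two sequences."""
    inf = float("inf")
    D = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[-1][-1]

def knn_classify(train, query, k):
    """k-NN under DTW; `train` is a list of (sequence, label) pairs.
    IEP, per the abstract, would choose k per query from predicted
    errors; in this sketch k is fixed by the caller."""
    dists = sorted((dtw(seq, query), label) for seq, label in train)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

train = [([1, 2, 3], "up"), ([1, 2, 4], "up"), ([3, 2, 1], "down")]
print(knn_classify(train, [1, 3, 5], 1))   # up
```

Varying k for the same query can flip the majority vote, which is why a single global k can be suboptimal across regions of the data set.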

Book ChapterDOI
01 Jan 2010
TL;DR: This work states that in Web 2.0 contexts, little to no attention has been given to practitioners carrying out complex, collaborative, and knowledge-intensive tasks in organizational contexts.
Abstract: Modern software architecting increasingly often takes place in geographically distributed contexts involving teams of professionals and customers with different backgrounds and roles. So far, attention and effort have been mainly dedicated to individuals sharing already formalized knowledge and less to social, informal collaboration. Furthermore, in Web 2.0 contexts, little to no attention has been given to practitioners carrying out complex, collaborative, and knowledge-intensive tasks in organizational contexts.

Journal ArticleDOI
01 Sep 2010
TL;DR: The article describes the Hierarchical Data Format (HDF) used for computational fluid dynamics (CFD) data storage in the CFD General Notation System (CGNS).
Abstract: Practical experience migrating to HDF for a numerical simulation library illustrates the storage system's benefits as well as how numerical simulation codes contribute to an open system. In this article we describe the Hierarchical Data Format (HDF) used for computational fluid dynamics (CFD) data storage in the CFD General Notation System (CGNS).

Proceedings ArticleDOI
11 Dec 2010
TL;DR: The system design of the Multi-Object Oriented Augmented Reality (MOOAR) system for location-based adaptive mobile learning environment and the scenario study and the detailed rationales behind the MOOAR system are presented.
Abstract: Augmented Reality allows the user to see virtual objects superimposed upon or composited with the real world. This paper presents the system design of the Multi-Object Oriented Augmented Reality (MOOAR) system for a location-based adaptive mobile learning environment, together with a scenario study. Moreover, the detailed rationales behind the MOOAR system are also discussed. The implementation of the MOOAR system is described with the designed scenario. Furthermore, the expected results of the scenario study are shown to demonstrate the advantages of using Augmented Reality in location-based adaptive mobile learning.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: This paper proposes a new parallel computational model called LogGPH, with a new parameter H incorporated into the LogGP model to describe the communication hierarchy, and shows that the new model is more accurate than the LogGP model.
Abstract: In large-scale cluster systems, interconnecting thousands of computing nodes increases the complexity of the network topology. Nevertheless, few existing computational models consider the impact of the hierarchical communication latencies and bandwidths caused by this network complexity. In this paper we propose a new parallel computational model called LogGPH, with a new parameter H incorporated into the LogGP model to describe the communication hierarchy. Predictions and analysis of point-to-point and collective MPI_Allgather communication with the new model on two 100-Terascale supercomputers, the Dawning 5000A and the DeepComp 7000, show that the new model is more accurate than the LogGP model. The mean absolute error of our model on point-to-point communications is 13%, whereas it is 30% without considering the communication hierarchy.
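The idea of a hierarchy parameter can be illustrated with a toy LogGP-style predictor whose parameters are selected by the level H at which two nodes share a network component. The numeric values and the exact way H enters the model are assumptions for illustration; the paper's actual LogGPH formulation may differ.

```python
# Hypothetical per-level parameters: latency L, per-message overhead o,
# and gap per byte G (seconds). Level 0 = same board, 1 = same switch,
# 2 = across switches.
LEVELS = {
    0: dict(L=0.5e-6, o=0.3e-6, G=0.4e-9),
    1: dict(L=1.5e-6, o=0.3e-6, G=0.9e-9),
    2: dict(L=4.0e-6, o=0.3e-6, G=1.8e-9),
}

def loggp_time(msg_bytes, level):
    """LogGP point-to-point estimate T = L + 2o + (m - 1) * G, with the
    parameter set chosen by the communication-hierarchy level H."""
    p = LEVELS[level]
    return p["L"] + 2 * p["o"] + (msg_bytes - 1) * p["G"]

for lvl in LEVELS:
    print(lvl, loggp_time(4096, lvl))
```

A flat LogGP model would use one parameter set for all pairs; letting the level pick the parameters is what captures the hierarchical latencies the abstract describes.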

Proceedings ArticleDOI
11 Dec 2010
TL;DR: This paper describes and analyzes three parallel versions of the dense LU factorization method that is used in linear system solving on a multicore using OpenMP interface and proposes an implementation of the pipelining technique in OpenMP.
Abstract: Recent developments in high performance computer architecture have a significant effect on all fields of scientific computing. Linear algebra, and especially the solution of linear systems of equations, lies at the heart of many applications in scientific computing. This paper describes and analyzes three parallel versions of the dense LU factorization method used in linear system solving on a multicore using the OpenMP interface. More specifically, we present two naive parallel algorithms based on row-block and row-cyclic data distribution, and we put special emphasis on presenting a third parallel algorithm based on the pipeline technique. Further, we propose an implementation of the pipelining technique in OpenMP. Experimental results on a multicore CPU show that the proposed OpenMP pipeline implementation achieves good overall performance compared to the two naive parallel methods. Finally, we propose a simple, fast, and reasonably accurate analytical model to predict the performance of the LU decomposition method with the pipelining technique.
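The factorization being parallelized can be shown in its sequential form. In the row-block, row-cyclic, and pipeline variants the abstract compares, it is the inner loop over rows i that is distributed across OpenMP threads; this Python sketch (no pivoting, dense lists) only fixes the computation being split.

```python
def lu_decompose(A):
    """In-place Doolittle LU factorization without pivoting: L's
    multipliers end up below the diagonal, U on and above it. The loop
    over i is the one the parallel versions distribute across threads."""
    n = len(A)
    for k in range(n):
        for i in range(k + 1, n):          # parallelizable update loop
            A[i][k] /= A[k][k]             # multiplier l_ik
            for j in range(k + 1, n):
                A[i][j] -= A[i][k] * A[k][j]
    return A

A = [[4.0, 3.0], [6.0, 3.0]]
lu_decompose(A)
print(A)   # [[4.0, 3.0], [1.5, -1.5]]
```

The pipeline variant exploits the fact that once row k of U is final, later steps for that column can start before the whole update loop finishes, overlapping stages instead of synchronizing after each k.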

Proceedings ArticleDOI
11 Dec 2010
TL;DR: A fault taxonomy for scientific workflows is introduced that may help in conducting a systematic analysis of faults, so that the potential faults that may arise at execution time can be corrected.
Abstract: Scientific workflows generally involve the distribution of tasks to distributed resources, which may exist in different administrative domains. The use of distributed resources in this way may lead to faults, and detecting them, identifying them and subsequently correcting them remains an important research challenge. We introduce a fault taxonomy for scientific workflows that may help in conducting a systematic analysis of faults, so that the potential faults that may arise at execution time can be corrected (recovered from). The presented taxonomy is motivated by previous work [4], but has a particular focus on workflow environments (compared to previous work which focused on Grid-based resource management) and demonstrated through its use in Weka4WS.

Journal ArticleDOI
01 Jul 2010
TL;DR: A fairly simple use case that's likely to be of interest to the authors' readers: setting up your own mini compute cluster to use for developing and testing high-performance computing applications.
Abstract: The fun all began in May of 1999, when VMware launched VMware Workstation, a product that lets you run multiple operating systems simultaneously on your desktop computer. In truth, the story begins much earlier with the VM OS concept, which was pioneered (like many things) by IBM in 1960 but eventually perfected by others. The idea behind virtualization is simple. You can run multiple OSs simultaneously and share the CPU, memory, and peripherals among them. In this article, we're not going to cover what virtualization is per se. This would easily require two articles, and the actual ideas behind virtualization are well explained elsewhere. And besides, we've already covered the use of virtualization in this column for use in maintaining experimental computing laboratories. Instead, we'll focus here on a fairly simple use case that's likely to be of interest to our readers: setting up your own mini compute cluster to use for developing and testing high-performance computing applications.

Journal ArticleDOI
01 Jan 2010
TL;DR: Using real-time, multiplatform remote sensing imagery data, researchers located these barrier lakes, acquired their distribution information, and assessed the threats to inform mitigation efforts and help prevent potentially disastrous consequences.
Abstract: The Wenchuan earthquake of 12 May 2008 produced numerous secondary mountain disasters, including the formation of dangerous barrier lakes resulting from blocked river channels. By exploiting real-time, multiplatform remote sensing imagery data, researchers located these barrier lakes, acquired their distribution information, and assessed the threats to inform mitigation efforts and help prevent potentially disastrous consequences.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: Preliminary evaluations show that the precision rate of the emotion detection engine is rather satisfactory for applications that distinguish the emotions of Happiness, Sadness, Anger, and Fear.
Abstract: This study proposes an emotion detection engine for real-time Internet chatting applications. We adopt a Web-scale text mining approach that automates the categorization of the affective state of daily events. We first accumulated a huge collection of real-life entities from the Web that could participate in events with a user in the chatting room. Based on the common actions between each entity and the type of user in a chatting room session, such as boy, girl, old man, and so on, each collected entity was automatically classified into different affective categories such as pleasant, provoking, grievous, and scary. During a chatting session, each sentence is first parsed using semantic role labeling techniques to retrieve the verb and object of the event embedded in the sentence. Based on a set of manually authored emotion generation rules, the system then assigns the emotion based on the verb and the affective category of the object. Preliminary evaluations show that the precision rate of the emotion detection engine is rather satisfactory for applications that distinguish the emotions of Happiness, Sadness, Anger, and Fear.
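The rule-lookup step, mapping a (verb, object-category) event to an emotion, reduces to a couple of dictionary lookups. The lexicon entries and rules below are invented placeholders, not the paper's data; the semantic role labeling that extracts the verb and object is assumed to have already run.

```python
# Hypothetical affective lexicon (entity -> category) and manually
# authored emotion generation rules (verb, category) -> emotion.
AFFECT = {"dog": "pleasant", "ghost": "scary", "exam": "provoking"}
RULES = {
    ("see", "scary"): "Fear",
    ("lose", "pleasant"): "Sadness",
    ("pass", "provoking"): "Happiness",
}

def detect_emotion(verb, obj, default="Neutral"):
    """Map an extracted event (verb, object) to an emotion via the
    object's affective category."""
    category = AFFECT.get(obj)
    return RULES.get((verb, category), default)

print(detect_emotion("see", "ghost"))   # Fear
```

Keeping the affect knowledge on the entity rather than the sentence is what lets the same rule cover every scary object the Web-mined lexicon contains.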

Journal ArticleDOI
01 Jul 2010
TL;DR: In this paper, the authors investigate a study in which magnetic resonance imaging (MRI) is combined with electroencephalography (EEG) data to examine working memory; they use VisTrails, but nearly any Python-enabled application can produce similar results.
Abstract: The Python programming language provides a development environment suitable for both computational and visualization tasks. One of Python's key advantages is that it lets developers use packages that extend the language to provide advanced capabilities, such as array and matrix manipulation, image processing, digital signal processing, and visualization. Several popular data exploration and visualization tools have been built in Python, including VisIt (www.llnl.gov/visit), ParaView (www.paraview.org), climate data analysis tools (CDAT; www2-pcmdi.llnl.gov/cdat), and VisTrails (www.vistrails.org). In our work, we use VisTrails; however, nearly any Python-enabled application can produce similar results. The neuroscience field often uses both multimodal data and computationally complex algorithms to analyze data collected from study participants. Here, we investigate a study in which magnetic resonance imaging (MRI) is combined with electroencephalography (EEG) data to examine working memory.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: A dynamically adaptive reconfigurable accelerator framework for the processor/RL architecture is proposed and an accelerator selection model is presented for selecting an accelerator at run time among the predefined input patterns.
Abstract: Attaching a reconfigurable loop accelerator to a processor improves the performance and efficiency of the system, and this can be further enhanced by unrolling the loop to change its parallelism. The more a loop is unrolled, the wider the reconfigurable area that is exposed. However, the utilization of a loop accelerator is highly dependent on the input, and in some situations over-unrolling the loop wastes area. With a focus on the balance between area and performance, this paper proposes a dynamically adaptive reconfigurable accelerator framework for the processor/RL architecture. In the framework, reconfiguration of the accelerator is driven by the input. An accelerator selection model is presented for selecting an accelerator at run time among the predefined input patterns. With the help of a detailed bzip2 case study, experimental results demonstrate the feasibility of the approach, showing that up to 69.21% of the reconfigurable area is saved at a cost of a 2.63% performance slowdown in the best case.

Journal ArticleDOI
01 Sep 2010
TL;DR: The goal of the Battlespace Environments Institute (BEI) is to integrate Earth and space modeling capabilities into a seamless, whole-Earth common modeling infrastructure that facilitates interservice development of multiple, mission-specific environmental simulations and supports battlefield decisions, improves interoperability, and reduces operating costs.
Abstract: The goal of the Battlespace Environments Institute (BEI) is to integrate Earth and space modeling capabilities into a seamless, whole-Earth common modeling infrastructure that facilitates interservice development of multiple, mission-specific environmental simulations and supports battlefield decisions, improves interoperability, and reduces operating costs.

Proceedings ArticleDOI
11 Dec 2010
TL;DR: Experimental results show that the proposed method, which utilizes social tags as well as the content of academic conferences to improve automatic conference classification, performs better than the method that only utilizes the content.
Abstract: Automatically classifying academic conferences into semantic topics promises improved academic search and browsing for users. Social tagging is an increasingly popular way of describing the topic of an academic conference. However, no attention has been devoted to academic conference classification that makes use of social tags. Motivated by this observation, this paper proposes a method which utilizes social tags as well as the content of academic conferences to improve automatic conference classification. The proposed method applies different automatic classification algorithms to improve classification quality by using social tags. Experimental results show that this method performs better than the method which only utilizes content to classify academic conferences, with a 1% increase in Precision and a 1.64% increase in F1 measure, demonstrating the effectiveness of the proposed method.

BookDOI
01 Jan 2010
TL;DR: A Model-Order Reduction approach to Parametric Electromagnetic Inversion and a Suite of Mathematical Models for Bone Ingrowth, Bone Fracture Healing and Intra-Osseous Wound Healing are presented.
Abstract: Contents:
A Model-Order Reduction Approach to Parametric Electromagnetic Inversion
Shifted-Laplacian Preconditioners for Heterogeneous Helmholtz Problems
On Numerical Issues in Time Accurate Laminar Reacting Gas Flow Solvers
Parallel Scientific Computing on Loosely Coupled Networks of Computers
Data Assimilation Algorithms for Numerical Models
Radial Basis Functions for Interface Interpolation and Mesh Deformation
Least-Squares Spectral Element Methods in Computational Fluid Dynamics
Finite-Volume Discretizations and Immersed Boundaries
Large Eddy Simulation of Turbulent Non-Premixed Jet Flames with a High Order Numerical Method
A Suite of Mathematical Models for Bone Ingrowth, Bone Fracture Healing and Intra-Osseous Wound Healing
Numerical Modeling of the Electromechanical Interaction in MEMS
Simulation of Progressive Failure in Composite Laminates
Numerical Modeling of Wave Propagation, Breaking and Run-Up on a Beach
Hybrid Navier-Stokes/DSMC Simulations of Gas Flows with Rarefied-Continuum Transitions
Multi-Scale PDE-Based Design of Hierarchically Structured Porous Catalysts
From Molecular Dynamics and Particle Simulations towards Constitutive Relations for Continuum Theory