
Showing papers presented at "Central and Eastern European Software Engineering Conference in Russia in 2017"


Proceedings ArticleDOI
20 Oct 2017
TL;DR: A method for solving the encapsulation problem is proposed, based on the creation of the VI-XML (Visual Intelligence XML) metalanguage, which includes a description of rules, concepts and notions oriented toward presenting components of visual models at different hierarchy levels.
Abstract: The work is devoted to the analysis of methods for the visual modelling of complex systems. A combination of graphical model (graphical notation) methods and formal text description (text notation) methods is used. The conceptual, structure-functional, logical and physical levels of modelling are distinguished; in general, the lower the level of visual modelling, the more specific the abstractions it reflects. The problem of encapsulation is also considered, which is caused by the fragmented nature of visual analysis and the isolation of available tools from one another across different stages of life-cycle modelling. A method for solving these problems is proposed based on the creation of the VI-XML (Visual Intelligence XML) metalanguage, which includes a description of rules, concepts and notions oriented toward presenting components of visual models at different hierarchy levels. Based on VI-XML, the universal language of synchronous visual models, a universal visual diagram editor, VI (Visual Intelligence), has been created.
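Since the paper only names the VI-XML metalanguage without showing its syntax, the fragment below is a purely hypothetical illustration of the idea: an XML document describing visual-model components at nested hierarchy levels, walked with the standard library. Element and attribute names (`model`, `component`, `rule`, `kind`) are illustrative, not taken from the paper.

```python
import xml.etree.ElementTree as ET

# Hypothetical VI-XML fragment: all element and attribute names are
# invented for illustration; the real metalanguage is defined in the paper.
VI_XML = """
<model level="structure-functional">
  <component id="c1" kind="block">
    <rule>inputs-before-outputs</rule>
    <component id="c1.1" kind="port"/>
  </component>
</model>
"""

def list_components(node, depth=0):
    """Walk the component hierarchy, yielding (depth, id, kind) tuples."""
    for child in node.findall("component"):
        yield depth, child.get("id"), child.get("kind")
        yield from list_components(child, depth + 1)

root = ET.fromstring(VI_XML)
components = list(list_components(root))
```

The point of such a representation is that every hierarchy level of the visual model stays machine-readable, so an editor like VI can reconstruct the diagram from the text notation.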

17 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: In this article, a graph parsing technique based on the generalized top-down parsing algorithm (GLL) is proposed, which allows one to build a finite structural representation of the query result with respect to the given grammar in polynomial time and space.
Abstract: The graph data model and graph databases are popular in such areas as bioinformatics, the semantic web, and social networks. One specific problem in the area is path querying with constraints formulated in terms of formal grammars. In this approach the query is written as a grammar, and path querying amounts to graph parsing with respect to that grammar. There are several solutions to it, but they are based mostly on the CYK or Earley algorithms, which impose some restrictions in comparison with other parsing techniques, and employing advanced parsing techniques for graph parsing is still an open problem. In this paper we propose a graph parsing technique which is based on the generalized top-down parsing algorithm (GLL) and allows one to build a finite structural representation of the query result with respect to the given grammar in polynomial time and space for an arbitrary context-free grammar and graph.
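To make the problem statement concrete: context-free path querying asks which vertex pairs are connected by a path whose edge-label string derives from a grammar nonterminal. The paper builds this on GLL; the sketch below instead uses the much simpler fixed-point algorithm for grammars in Chomsky normal form (closer to the CYK-style baselines the paper improves on), shown only to illustrate the query semantics.

```python
# Context-free path querying by fixed-point iteration: derive facts
# (A, u, v) meaning "some path from u to v is labelled by a string in L(A)".

def cfpq(edges, unit_rules, binary_rules):
    """edges: set of (u, label, v); unit_rules: {label: {A}} for A -> label;
    binary_rules: {(B, C): {A}} for A -> B C (Chomsky normal form)."""
    reach = {(a, u, v) for (u, lbl, v) in edges for a in unit_rules.get(lbl, ())}
    changed = True
    while changed:
        changed = False
        for (b, u, m) in list(reach):
            for (c, m2, v) in list(reach):
                if m == m2:
                    for a in binary_rules.get((b, c), ()):
                        if (a, u, v) not in reach:
                            reach.add((a, u, v))
                            changed = True
    return reach

# Classic same-generation query: S -> a S b | a b, in CNF as
# S -> A X | A B ; X -> S B ; A -> a ; B -> b, over a 4-vertex cycle.
edges = {(0, "a", 1), (1, "a", 2), (2, "b", 3), (3, "b", 0)}
unit = {"a": {"A"}, "b": {"B"}}
binary = {("A", "X"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"X"}}
result = cfpq(edges, unit, binary)
```

Each derived triple is a query answer; the finite set `result` is the structural representation of all matching paths, analogous (at a much coarser grain) to what the GLL-based technique builds.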

15 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: This work proposes an adaptable and flexible trust modeling framework, employing generally accepted ISO 25010 modeling concepts while enhancing previous work on quality-in-use modeling, which can be used for the evaluation and improvement of trust in different mobile apps.
Abstract: The sharing economy, brought to us via mobile and cloud technologies, has transformed our lives. While making our lives more convenient and cost effective, sharing also implies trust; trust not only between people, but trust based on interconnections between people and software. Trust is not an element that comes to mind when using software, but it is critical to our usage patterns and hence the software's success in the market. Thus, understanding trust as a crucial characteristic of mobile apps will lead to more successful applications. Developing this understanding needs to be done via modeling and evaluation combined with iterative improvement of the end user's trust via systematic frameworks and strategies. To do this, we propose an adaptable and flexible trust modeling framework, employing generally accepted ISO 25010 modeling concepts while enhancing our previous work on quality-in-use modeling. By representing specific system quality characteristics that may influence trust from the quality-in-use standpoint, our resulting trust modeling framework, validated through surveys of real users, can be used for the evaluation and improvement of trust in different mobile apps.

7 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: The article reviews TRIZ application experience, presents an algorithm for the analysis and development of project ideas in the area of IT and software, and describes an electronic template for formulating demand contradictions.
Abstract: The article presents the further development of a report presented at SECR-2009; it was written specially for SECR-2017. Unlike the previous report, the current article reviews TRIZ application experience and presents an algorithm for the analysis and development of project ideas in the area of IT. An electronic template for formulating demand contradictions is also described, along with a brief review of publications on applying TRIZ methods to improve the efficiency of all software development life cycle stages. Starting from 1985, the TRIZ community has worked on transferring TRIZ instruments from tangible technical systems to intangible ones. This process demanded a serious review of important TRIZ concepts such as product-instrument, substance-field, etc. The foundation stone of TRIZ, the concept and wording of a contradiction, also requires review. TRIZ application in the area of IT is one of the most promising directions of TRIZ development in intangible systems. Some methods that have already proven to be effective tools for the software development process are given in the article, together with the algorithm for the analysis and development of project ideas in IT and software. An electronic template has been worked out to simplify and improve the accuracy of the wording of demand contradictions and their solutions in software development, and applications of several TRIZ instruments to software development are illustrated. The article will be useful for software developers, analysts and decision-makers developing new solutions in the field of IT.

7 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: In this article, a workflow of connected activities is proposed to transform the study process into active learning where both sides (instructors and students) actively interact and cooperate; the authors argue that more collaborative activities should be introduced into the teaching process, giving students more opportunities for the public display of their work.
Abstract: The paper continues a discourse on the methods and organization of the software development teaching and learning process, considering computer science disciplines within the context of the liberal arts. We examine the methodology gap in software development education, and argue that more collaborative activities should be introduced into the teaching process, thus enabling students to have more opportunities for the public display of their work. Instead of a common, non-creative lecture/lab organization, we introduce a workflow of connected activities transforming the study process into active learning where both sides (instructors and students) actively interact and cooperate. We revisit the question of how teaching forms and practices existing in the humanities can be applied to computer science education. With the example of programming courses taught at the University of Aizu to international students, we argue that live activities (such as lectures in a lab, hands-on, testing and review sessions) are extremely important for involving students in a creative process. This approach aims at transforming one-way information transmission between teacher and student into two-way communication and, therefore, a better understanding of each other's perspectives.

6 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: An extended model of goals, operators, methods and selection rules (GOMS) for the quantitative evaluation of gesture interfaces is suggested, and the user experience of interacting with a gesture interface controlled by hand motions is analyzed.
Abstract: An extended model of goals, operators, methods and selection rules (GOMS) for the quantitative evaluation of gesture interfaces is suggested. The user experience of interacting with a gesture interface controlled by hand motions is analyzed. The components and coefficients of the extended model are proposed, experimentally tested and verified. A mathematical description of the extended GOMS model is presented.
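GOMS-family models quantify an interface by summing the times of primitive operators along a task. The sketch below shows that calculation in keystroke-level style; the gesture operator set and all timing coefficients are hypothetical placeholders, not the components and coefficients fitted in the paper.

```python
# Keystroke-level GOMS estimate: task time is the sum of primitive operator
# times. Operator names and coefficients below are illustrative only.

GESTURE_OPERATOR_TIME = {   # seconds per operator (hypothetical values)
    "M": 1.35,   # mental preparation
    "H": 0.40,   # move hand into the sensor zone
    "G": 0.70,   # perform a hand gesture
    "W": 0.50,   # wait for system feedback
}

def task_time(operators):
    """Predicted completion time for a sequence of primitive operators."""
    return sum(GESTURE_OPERATOR_TIME[op] for op in operators)

# "Select an item with a swipe": prepare, enter zone, gesture, get feedback.
t = task_time(["M", "H", "G", "W"])
```

Extending the model, as the paper does, amounts to adding gesture-specific operators to this table and verifying their coefficients experimentally.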

5 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: An exact algorithm for the traveling salesman problem, based on the simplified branch-and-bound algorithm developed by E. Balas and N. Christofides and parallelized with OpenMP on a multi-core processor, is described.
Abstract: We describe an exact algorithm for the traveling salesman problem based on the simplified branch-and-bound algorithm developed by E. Balas and N. Christofides, parallelized with OpenMP on a multi-core processor. It has shown better performance than the algorithms in preceding articles and works. Our article is intended for people who use parallel programming technologies, deal with mathematical optimization problems, or are interested in promising algorithms for bioinformatics and other NP-hard problems.
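The paper parallelizes a Balas–Christofides-style branch-and-bound with OpenMP; the sequential sketch below only illustrates the core bounding idea such algorithms share: prune a partial tour when its cost plus a cheap lower bound (here, the minimum outgoing edge of every city still to be left) cannot beat the best complete tour found so far. It is not the paper's algorithm.

```python
import math

def tsp(dist):
    """Exact TSP by branch-and-bound with a min-outgoing-edge lower bound."""
    n = len(dist)
    min_out = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]
    best = [math.inf]   # incumbent tour cost, shared across branches

    def branch(path, cost, remaining):
        if not remaining:
            best[0] = min(best[0], cost + dist[path[-1]][0])  # close tour
            return
        # Every completion must leave the last city and each remaining city
        # at least once, so this is a valid lower bound.
        bound = cost + min_out[path[-1]] + sum(min_out[c] for c in remaining)
        if bound >= best[0]:
            return   # prune: cannot improve on the incumbent
        for c in sorted(remaining, key=lambda c: dist[path[-1]][c]):
            branch(path + [c], cost + dist[path[-1]][c], remaining - {c})

    branch([0], 0, frozenset(range(1, n)))
    return best[0]

dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
best_tour = tsp(dist)
```

In a parallel version, the subtrees explored by the `for` loop are the natural units to distribute across OpenMP threads, with the incumbent `best` shared under synchronization.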

4 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: PictoMir was developed in SRISA RAS, allowing preschoolers aged 6+ to master a basic set of programming concepts: program, subroutine, repeater, feedback, command-orders and command-questions, branching, and counters.
Abstract: Introducing children to programming at ever younger ages is a worldwide trend. During long-term experiments, a freely distributed multiplatform educational and gaming system, PictoMir, was developed in SRISA RAS, allowing preschoolers aged 6+ to master a basic set of programming concepts: program, subroutine, repeater, feedback, command-orders and command-questions, branching, and counters. In the 2016--2017 academic year, 902 children in 15 municipal kindergartens of Surgut successfully completed the annual cycle of "Algorithmics for Preschoolers", creating programs on tablets to control virtual robots and real toy robots.

3 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: Existing approaches based on the biometric menagerie concept are described and their limitations are shown; a new classification using biometric system errors is proposed and its feasibility is demonstrated.
Abstract: Classifying biometric system users by their recognition quality is an important issue when developing and operating such systems. Existing approaches based on the biometric menagerie concept are described and their limitations are shown. A new classification using biometric system errors is proposed and its feasibility is demonstrated.
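To illustrate the kind of classification the abstract refers to, the sketch below assigns users to categories from their error counts. The thresholds are arbitrary and the category names loosely follow the menagerie literature (sheep/goats/lambs, etc.); this is not the paper's new classification, only the general shape of an error-based one.

```python
# Classify a biometric user from per-user error rates (illustrative only).

def classify(false_rejects, false_accepts, attempts, frr_thr=0.2, far_thr=0.2):
    frr = false_rejects / attempts   # how often the genuine user is rejected
    far = false_accepts / attempts   # how often impostors match this user
    if frr >= frr_thr and far >= far_thr:
        return "worms"   # poor on both error types
    if frr >= frr_thr:
        return "goats"   # hard to recognize
    if far >= far_thr:
        return "lambs"   # easy to imitate
    return "sheep"       # recognized reliably

labels = [classify(1, 0, 10), classify(4, 0, 10), classify(0, 5, 10)]
```

The appeal of an error-based scheme is that it needs only the counts a deployed system already logs, rather than raw match-score distributions.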

3 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: The authors found an original way of applying Random Forest regression to a regional surrogate model of the oilfield, which made it possible to increase the accuracy of planning and improve the economics of digital-oilfield processes.
Abstract: The oil and gas industry successfully existed long before the advent of digital technology. The main driver for new solutions and the use of new technologies in the industry is the growing complexity of oil extraction methods: more accurate production forecasts are needed even as the cost of obtaining the initial information for modeling rises. This state of things leads to the inevitable use of the most advanced modelling techniques, which primarily include machine learning. The difficulty of applying machine learning in the oil and gas industry is that, on the one hand, it must deliver results beyond the available methods, while on the other hand the quantity and quality of the original data cannot be increased. The proposed application of machine learning is modeling the additional production from tertiary methods of improving oil recovery. The authors found an original way of applying Random Forest regression to a regional surrogate model of the oilfield. The successful application of machine learning for modeling made it possible to increase the accuracy of planning and improve the economics of digital-oilfield processes.
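A surrogate model replaces an expensive simulator with a cheap statistical regressor fitted to its inputs and outputs. The paper fits a full Random Forest to real oilfield data; the standard-library-only toy below shows just the forest idea (bagged decision stumps whose predictions are averaged) on a synthetic, one-dimensional "injected volume → additional production" response invented for illustration.

```python
import random

def fit_stump(xs, ys):
    """Pick the split threshold minimizing squared error of two mean leaves."""
    best = None
    for thr in xs:
        left = [y for x, y in zip(xs, ys) if x <= thr]
        right = [y for x, y in zip(xs, ys) if x > thr]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, thr, ml, mr)
    _, thr, ml, mr = best
    return lambda x: ml if x <= thr else mr

def fit_forest(xs, ys, n_trees=20, seed=0):
    rng = random.Random(seed)
    n, stumps = len(xs), []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap sample
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: sum(s(x) for s in stumps) / len(stumps)  # average votes

# Synthetic response with a jump the forest should recover.
xs = [1, 2, 3, 4, 5, 6, 7, 8]
ys = [0.5, 0.6, 0.7, 2.0, 2.1, 2.2, 2.3, 2.4]
model = fit_forest(xs, ys)
```

In practice one would use a library implementation (e.g. scikit-learn's `RandomForestRegressor`) with the field's multi-dimensional features; the bagging-and-averaging mechanism is the same.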

3 citations


Proceedings ArticleDOI
20 Oct 2017
TL;DR: Converters from the Clang intermediate representation (the Clang AST) to the OPS intermediate representation and back are described, along with the promising projects for automatic code acceleration being developed on the basis of OPS.
Abstract: In this work, the prospects of using the Optimizing Parallelizing System (http://ops.rsu.ru/en/) together with the Clang compiler are considered. Converters from the Clang intermediate representation (the Clang AST) to the OPS intermediate representation and back are described. The advantages of the high-level internal representation of OPS over the register-based LLVM and GCC internal representations for code generation targeting non-standard architectures are described. The capability of generating code from the OPS internal representation for computational systems with distributed memory (for example, compute clusters), as well as for graphics and FPGA accelerators, is considered. Promising projects for automatic code acceleration being developed on the basis of OPS are described. The OPS source code has been gradually moving to open access since August 2017 (https://github.com/OpsGroup/open-ops).

Proceedings ArticleDOI
20 Oct 2017
TL;DR: A mathematical model of userspace-based process tree reconstruction via syscall sequences is constructed on the basis of a type-0 formal grammar and prototyped as a two-stage grammar analyser with three heuristics for grammar shortening; the results indicate that grammatical analysis can be applied competitively to metadata reconstruction in checkpoint-restore tools.
Abstract: A mathematical model of userspace-based process tree reconstruction via syscall sequences is constructed on the basis of a type-0 formal grammar and prototyped as a two-stage grammar analyser with three heuristics for grammar shortening. The prototype has been developed for comparison with profile-based techniques of syscall collection. The results indicate that grammatical analysis can be applied competitively to metadata reconstruction in checkpoint-restore tools.
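The underlying task can be made concrete without the grammar machinery: replay a recorded sequence of process-lifecycle syscalls to rebuild the parent-child tree that a checkpoint-restore tool must capture. The sketch below is that naive replay (with the kernel's re-parenting of orphans to init), not the paper's type-0 grammar model; the event-tuple format is invented for illustration.

```python
# Rebuild a process tree from a recorded fork/exit event trace.

def rebuild_tree(events):
    """events: iterable of ('fork', parent_pid, child_pid) or ('exit', pid).
    Returns {pid: parent_pid} for processes alive at the end of the trace."""
    parent_of = {}
    for ev in events:
        if ev[0] == "fork":
            _, parent, child = ev
            parent_of[child] = parent
        elif ev[0] == "exit":
            _, pid = ev
            # Orphaned children are re-parented to init (pid 1), as in Linux.
            for child in parent_of:
                if parent_of[child] == pid:
                    parent_of[child] = 1
            parent_of.pop(pid, None)
    return parent_of

trace = [("fork", 1, 100), ("fork", 100, 101), ("fork", 100, 102),
         ("exit", 100)]
tree = rebuild_tree(trace)
```

The grammar formulation generalizes this replay: valid syscall sequences are exactly the strings the grammar derives, which lets the analyser cope with incomplete or interleaved traces.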

Proceedings ArticleDOI
20 Oct 2017
TL;DR: This article considers a novel approach to network function development and presents a framework for rapid development of performant, scalable virtualized network functions.
Abstract: This article considers a novel approach to network function development. The transmission speed and the amount of data in networks are increasing exponentially, which makes middleboxes less efficient due to cost, deployment complexity, inflexibility, scalability and other issues. Network function virtualization technology, on the other hand, was proposed to solve this problem by moving hardware functionality into software deployed on commodity hardware. However, this approach brought several new problems: slow development of network functions, lower performance compared to middleboxes, and virtual machine scaling and deployment issues. Our approach presents a framework for the rapid development of performant, scalable virtualized network functions.

Proceedings ArticleDOI
20 Oct 2017
TL;DR: The author proposes to begin designing a software system not from building a domain model and not from identifying classes, but from dividing the designed application into system layers; in contrast to the standard three-layer architecture, the application is divided into five layers.
Abstract: The article will be useful for colleagues who are engaged in the development of client applications for Windows and other operating systems. The author proposes to begin designing a software system not from building a domain model and not from identifying classes, but from dividing the designed application into system layers. In contrast to the standard three-layer architecture, the application is divided into five layers. The purpose of these layers, and approaches to identifying classes in each of them, are described. The proposed methodology is used in the development of internal tools used in the game development process at Larian Studios.

Proceedings ArticleDOI
20 Oct 2017
TL;DR: A modification to VFS API is proposed that shows measurable improvement of latency and general throughput on synthetic metadata-intensive tests, even with standard NFS servers.
Abstract: Due to a VFS architecture limitation, the Linux NFSv4 and 4.1 client cannot join RPC requests into compounds even in cases where this is allowed by the protocol specification. This leads to high sensitivity to network latency and a loss of performance on metadata-intensive operations, especially on workloads where many small files are opened. A similar issue exists in other Unix-like kernels. We propose a modification to the VFS API that resolves this issue. We have a demo implementation of the modified VFS and NFS client that shows measurable improvement in latency and general throughput on synthetic metadata-intensive tests, even with standard NFS servers.

Proceedings ArticleDOI
20 Oct 2017
TL;DR: The paper describes a userspace checkpoint/restore facility for file locks in Linux; two types of file locks are considered in particular: file leases and OFD locks.
Abstract: Checkpoint/restore (a.k.a. checkpoint/restart) is a technique naturally described by its two parts. The first is the checkpoint, which creates a snapshot of an application. The second is the restore, which uses the snapshot to run a copy of the application in the state it was in at the time of the checkpoint; moreover, the new instance can be created on another machine or after a period of time. Checkpoint/restore has found many applications in real-world problems such as live migration, load balancing, crash recovery, debugging and many others. The paper describes a userspace checkpoint/restore facility for file locks in Linux; two types of file locks are considered in particular: file leases and OFD locks. The solution described can be used in userspace and does not require extra kernel modules or modification of the operating system. The implementation is based on the CRIU project and makes intensive use of its infrastructure.

Proceedings ArticleDOI
20 Oct 2017
TL;DR: This paper describes a new algorithm for SSD cache filling based on analysis of requests to the storage system, using machine learning methods.
Abstract: This paper describes a new algorithm for SSD cache filling based on analysis of requests to the storage system, using machine learning methods. The goal of this research is to extend the SSD lifetime in scenarios where the SSD is used as a caching device.
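The connection between cache filling and SSD lifetime is that every cache admission costs a flash write, so admitting only blocks predicted to be reused saves wear. The sketch below uses a trivial frequency threshold as a stand-in "predictor"; the paper learns this admission decision from the request stream with machine-learning methods, and the class and parameter names here are invented for illustration.

```python
from collections import Counter

class SsdCache:
    """Admission-controlled cache: write a block to flash only once its
    access history suggests it will be reused (illustrative heuristic)."""

    def __init__(self, admit_after=2):
        self.admit_after = admit_after
        self.accesses = Counter()   # per-block access counts (the "features")
        self.cached = set()
        self.flash_writes = 0       # the quantity we want to minimize

    def access(self, block):
        self.accesses[block] += 1
        if block in self.cached:
            return "hit"
        if self.accesses[block] >= self.admit_after:
            self.cached.add(block)  # predicted hot: admit to the SSD
            self.flash_writes += 1
        return "miss"

cache = SsdCache()
workload = ["a", "b", "a", "a", "c", "b", "b"]
results = [cache.access(b) for b in workload]
```

Note that the one-shot block "c" is never written to flash: that avoided write is exactly the lifetime saving a learned admission policy aims to maximize.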

Proceedings ArticleDOI
20 Oct 2017
TL;DR: When developing multi-user natural language processing tools with a single processing and management core, it is important to dispatch user requests based on forecasts of execution time and required resources, monitoring of available resources, and a queueing subsystem that considers the specifics of natural language processing.
Abstract: The high computing-resource demands of even typical computational linguistics tasks and the long execution times of complex algorithms significantly obstruct the development of software tools for mass use. When developing multi-user natural language processing tools with a single processing and management core, it is important to dispatch user requests based on forecasts of execution time and required resources, monitoring of available resources, and a queueing subsystem that considers the specifics of natural language processing.
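A minimal sketch of forecast-driven dispatching: order queued requests by their predicted execution time so that short tasks are not starved behind long ones (shortest-job-first). The linear forecast function and its per-task coefficients are placeholders; the paper derives forecasts from the specifics of each language-processing task.

```python
import heapq

def forecast_seconds(task, text_len):
    """Hypothetical linear execution-time forecast per task type."""
    per_char = {"tokenize": 0.001, "parse": 0.01, "summarize": 0.05}
    return per_char[task] * text_len

def dispatch(requests):
    """requests: list of (task, text_len). Returns indices in service order."""
    queue = []
    for i, (task, text_len) in enumerate(requests):
        # Priority = forecast cost; index i breaks ties deterministically.
        heapq.heappush(queue, (forecast_seconds(task, text_len), i))
    return [i for _, i in (heapq.heappop(queue) for _ in range(len(queue)))]

reqs = [("summarize", 1000), ("tokenize", 500), ("parse", 200)]
order = dispatch(reqs)
```

A production dispatcher would additionally check monitored resource availability before popping the next request, as the abstract describes.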

Proceedings ArticleDOI
Alexey Kanatov1, Eugene Zouev
20 Oct 2017
TL;DR: This paper gives an overview of the SLang programming language and its differentiating features, such as multiple inheritance with conflicts and multiple overriding, modules-classes-types as a unified concept together with standalone routines, the absence of NULL, constant objects, extended overloading and other concepts.
Abstract: This paper gives an overview of the SLang programming language and its differentiating features, such as multiple inheritance with conflicts and multiple overriding, modules-classes-types as a unified concept together with standalone routines, the absence of NULL, constant objects, extended overloading and other concepts.

Proceedings ArticleDOI
20 Oct 2017
TL;DR: This paper presents an effective method combining a selective deduplication index with log-structured writing, which allows using inline deduplication without significant loss of performance.
Abstract: Reducing the cost of storage in cloud infrastructure is an important task. One of the best-known technologies for saving space, and thereby reducing the cost of storage, is deduplication. This paper presents an effective method combining a selective deduplication index with log-structured writing, which allows using inline deduplication without significant loss of performance.
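The two combined techniques can be shown in a few lines: chunks are appended to a log-structured store (writes stay sequential), and a fingerprint index maps each chunk's hash to its log offset so a repeated chunk is stored only once. The paper's contribution is making that index selective; in this sketch the index is complete, and all class and method names are illustrative.

```python
import hashlib

class DedupLog:
    """Inline deduplication over an append-only, log-structured store."""

    def __init__(self):
        self.log = bytearray()   # append-only data log
        self.index = {}          # sha256 digest -> (offset, length)

    def write(self, chunk: bytes):
        key = hashlib.sha256(chunk).digest()
        if key in self.index:
            return self.index[key]          # duplicate: no new bytes written
        ref = (len(self.log), len(chunk))
        self.log.extend(chunk)              # sequential (log-structured) write
        self.index[key] = ref
        return ref

    def read(self, ref):
        offset, length = ref
        return bytes(self.log[offset:offset + length])

store = DedupLog()
r1 = store.write(b"block-A")
r2 = store.write(b"block-B")
r3 = store.write(b"block-A")   # deduplication hit: same ref as r1
```

Making the index selective (indexing only chunks likely to recur) trades a little missed deduplication for a much smaller in-memory index, which is where the performance win comes from.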

Proceedings ArticleDOI
20 Oct 2017
TL;DR: The aim is to design an architecture that integrates automatic metadata extraction and rule-based methods to better utilize OER; this architecture helps synchronize metadata from the resulting repository with diverse OER Web-based resources.
Abstract: This paper proposes an integrated approach to data warehousing of educational metadata in the area of open educational resources (OER). The aim is to design an architecture that integrates automatic metadata extraction and rule-based methods to better utilize OER. This architecture helps synchronize metadata from the resulting repository with diverse OER Web-based resources. The approach to data warehousing involves an extract-transform-load (ETL) process. As our experiments show, the proposed architecture improves performance in handling OER metadata in terms of data management and process flow. This rule-based architecture allows for the automatic extraction and classification of metadata into a number of pre-defined categories, which are made available through a web portal.
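The ETL pipeline with rule-based classification can be sketched end to end: extract raw records, transform each with keyword rules into a pre-defined category, and load the result into a repository. The category names, rules and record format below are invented for illustration; they are not the paper's taxonomy.

```python
# Hypothetical keyword rules mapping OER titles to categories.
RULES = {
    "programming": ["python", "java", "compiler"],
    "mathematics": ["algebra", "calculus", "geometry"],
}

def classify(title):
    """Rule-based transform step: first matching keyword wins."""
    words = title.lower().split()
    for category, keywords in RULES.items():
        if any(k in words for k in keywords):
            return category
    return "uncategorized"

def etl(raw_records):
    """Extract records, transform with rules, load into a repository dict."""
    repository = {}
    for rec in raw_records:                                       # extract
        category = classify(rec["title"])                         # transform
        repository.setdefault(category, []).append(rec["title"])  # load
    return repository

repo = etl([{"title": "Intro to Python"},
            {"title": "Linear Algebra notes"},
            {"title": "History of art"}])
```

In the proposed architecture, the loaded repository is then kept synchronized with the external OER sources and exposed through the web portal.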

Proceedings ArticleDOI
20 Oct 2017
TL;DR: An algorithm analysis technique is presented that determines the fragments whose parallel execution yields a significant reduction in computing time, enabling the effective execution of the parallelized algorithm on hybrid supercomputer resources.
Abstract: The work considers an approach to the adaptation of applications that use proven algorithms developed on computers without a high degree of parallelism, but that in a number of deployments require a sharp reduction in computing time. A natural way out is to transfer the algorithm to a highly parallel heterogeneous processing environment, i.e. a hybrid supercomputer. Unfortunately, the result does not always meet expectations. The challenge is the need to account for the architectural features of the supercomputer and to translate the generic algorithm accordingly while maintaining its semantic features, i.e. to develop parallel software for the generic algorithm that scales to the allocated supercomputer resources. Available approaches to software parallelization deliver superb results when algorithms exhibit obvious parallelism. Otherwise, their transformation to a parallel representation requires an analysis of data dependencies between parallel threads and of the costs of parallel supercomputer execution. Presented in this paper is an algorithm analysis technique that determines the fragments whose parallel execution yields a significant reduction in computing time. The result is an algorithm work schedule that ensures an effective solution on the supercomputer; the schedule is used to create dedicated control over the execution of the parallelized algorithm on hybrid supercomputer resources. The work shows results of applying the developed technique to genetic research.

Proceedings ArticleDOI
20 Oct 2017
TL;DR: The paper presents a module for gathering source code at scale and analyzing it in detail, and compares existing static code analyzers for the Python programming language.
Abstract: The authors describe the architecture and implementation of an automated source code analysis system which uses pluggable static code analyzers. The paper presents a module for gathering source code at scale and analyzing it in detail. The authors also compare existing static code analyzers for the Python programming language. A common format for storing the results of code analysis for subsequent processing is introduced, and methods for the statistical processing and visualization of raw analysis data are discussed.
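A "pluggable analyzer" architecture typically means each analyzer is a function from source text to a list of findings, registered with a driver that runs every plugin and collects results in a common format. The registry decorator and the two toy analyzers below are invented for illustration; the paper's system wraps existing Python analyzers in a similar way.

```python
import ast

ANALYZERS = {}

def analyzer(name):
    """Decorator registering a static-analyzer plugin under a name."""
    def register(fn):
        ANALYZERS[name] = fn
        return fn
    return register

@analyzer("long-functions")
def long_functions(source, max_lines=20):
    tree = ast.parse(source)
    return [f"{node.name} is too long"
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)
            and (node.end_lineno - node.lineno) >= max_lines]

@analyzer("todo-comments")
def todo_comments(source):
    return [f"TODO on line {i}"
            for i, line in enumerate(source.splitlines(), 1)
            if "TODO" in line]

def run_all(source):
    """Run every registered plugin; keys give the common result format."""
    return {name: fn(source) for name, fn in ANALYZERS.items()}

report = run_all("def f():\n    return 1  # TODO: check\n")
```

Keeping every plugin behind the same `source -> findings` interface is what makes the per-analyzer results directly comparable and easy to aggregate for statistics and visualization.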