
Showing papers by "Hellenic Military Academy" published in 2017


Book ChapterDOI
11 Sep 2017
TL;DR: Current data quality indicators for geographic information as part of the ISO 19157 (2013) standard and how these have been used to evaluate the data quality of VGI in the past are reviewed.
Abstract: Uncertainty over the data quality of Volunteered Geographic Information (VGI) is the largest barrier to the use of this data source by National Mapping Agencies (NMAs) and other government bodies. A considerable body of literature exists that has examined the quality of VGI as well as proposed methods for quality assessment. The purpose of this chapter is to review current data quality indicators for geographic information as part of the ISO 19157 (2013) standard and how these have been used to evaluate the data quality of VGI in the past. These indicators include positional, thematic and temporal accuracy, completeness, logical consistency and usability. Additional indicators that have been proposed for VGI are then presented and discussed. In the final section of the chapter, the idea of integrated indicators and workflows of quality assurance that combine many assessment methods into a filtering system is highlighted as one way forward to improve confidence in VGI.

68 citations


Book ChapterDOI
11 Sep 2017
TL;DR: This chapter proposes a generic and flexible protocol for VGI data collection, which can be applied to new as well as to existing projects regardless of the specific type of geospatial information collected.
Abstract: Volunteered Geographic Information (VGI) has become a rich and well established source of geospatial data. From the popular OpenStreetMap (OSM) to many citizen science projects and social network platforms, the amount of geographically referenced information that is constantly being generated by citizens is burgeoning. The main issue that continues to hamper the full exploitation of VGI lies in its quality, which is by its nature typically undocumented and can range from very high to very poor. A crucial step towards improving VGI quality, which impacts on VGI usability, is the development and adoption of protocols, guidelines and best practices to assist users when collecting VGI. This chapter proposes a generic and flexible protocol for VGI data collection, which can be applied to new as well as to existing projects regardless of the specific type of geospatial information collected. The protocol is meant to balance the contrasting needs of providing VGI contributors with precise and detailed instructions while maintaining and growing their enthusiasm and motivation. Two real-world applications of the protocol are presented, which guide the collection of VGI in, respectively, the generation and updating of thematic information in a topographic building database, and the uploading of geotagged photographs for the improvement of land use and land cover maps. Technology is highlighted as a key factor in determining the success of the protocol implementation.

10 citations


Book ChapterDOI
01 Jan 2017
TL;DR: Methods are proposed for securely performing the calculations required for fundamental modular arithmetic operations, namely multiplication and exponentiation, using mobile, embedded or remote computational resources, offering the possibility of green information processing system development.
Abstract: In this chapter, methods are proposed for securely performing the calculations required for fundamental modular arithmetic operations, namely multiplication and exponentiation, using mobile, embedded or remote computational resources, offering the possibility of green information processing system development. These methods are targeted at the distributed paradigms of cloud computing resources and Internet of Things applications. They provide security by avoiding disclosure of either the data or the user's secret key to the cloud resource. Simultaneously, the environmental effects of processing are minimized by simplifying the operations and by transferring demanding calculations to energy-efficient data centers; the proposed methods are thus also shown to serve the green IT engineering paradigm. An algorithm for the software implementation of modular multiplication is proposed, which uses pre-computations with a constant modulus to reduce the computational load imposed upon the processor. The developed modular multiplication algorithm executes faster on low-complexity hardware than existing algorithms and is oriented towards a variable modulus, especially for software implementation on microcontrollers and smart cards with small word sizes. The proposed technique for modular exponentiation performs simple operations on the user's computational resources and shifts the remaining complex operations to high-performance, energy-efficient cloud resources, separating the modular exponentiation procedure into two components. Security is preserved by keeping the purpose-specific secret key information exclusively in user resources. The details of the pre-calculation of the secret keys are described, and the procedure for transferring the most demanding part of the calculation to the cloud resources is given. It is shown that a potential attacker gains no information by intercepting the data held in the cloud. The overall process is illustrated by a simple numerical example. The use of the new algorithm in Information Society applications that demand security, such as e-Government, e-Banking and e-Commerce, is investigated. The algorithm is shown to be adequate both for the applications for which it was originally intended and for applications with much more demanding security requirements, such as military applications.
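The chapter's own algorithms are not reproduced in this abstract. As a minimal sketch of the general idea behind pre-computation with a constant modulus, the standard Barrett method replaces each per-operation division by a multiplication with a constant computed once; the Python below illustrates that idea and is not the authors' algorithm.

```python
def barrett_precompute(m):
    # One-time cost for a fixed modulus m: mu = floor(4**k / m),
    # where k is the bit length of m.
    k = m.bit_length()
    return k, (1 << (2 * k)) // m

def barrett_reduce(x, m, k, mu):
    # Reduce 0 <= x < m*m without a division: estimate the quotient
    # using the precomputed mu, then correct with at most two subtractions.
    q = ((x >> (k - 1)) * mu) >> (k + 1)
    r = x - q * m
    while r >= m:
        r -= m
    return r

def modexp(g, e, m):
    # Square-and-multiply exponentiation in which every intermediate
    # product is reduced with the precomputed constant, not a division.
    k, mu = barrett_precompute(m)
    g, result = g % m, 1
    while e:
        if e & 1:
            result = barrett_reduce(result * g, m, k, mu)
        g = barrett_reduce(g * g, m, k, mu)
        e >>= 1
    return result

assert modexp(7, 560, 561) == pow(7, 560, 561)
```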

6 citations


Journal ArticleDOI
TL;DR: When the same problem is considered without the assumption that the customers are ordered, numerical experiments indicate that the optimal routing strategy can be computed for N ≤ 8, and it is possible to prove that the optimal routing strategy has a specific threshold-type structure.

4 citations


Book ChapterDOI
01 Jan 2017
TL;DR: In this paper, the authors analyze the spatial planning framework and its contribution towards sustainable regional development through the case studies of the regions of the North Aegean and South Aegean, especially the islands of Lesvos, Rhodes, and Crete.
Abstract: Spatial planning focuses on the planning and management of space as a core axis towards sustainable and balanced development, closely related to economic factors such as productivity, the economic environment, investment and competitiveness. This paper attempts to analyze the spatial planning framework and its contribution towards sustainable regional development. More precisely, it analyzes the case studies of the regions of the North Aegean and South Aegean, especially the islands of Lesvos, Rhodes, and Crete.

4 citations


Journal ArticleDOI
TL;DR: This article studies subjective priorities for the data amounts in the processing of geopolitical data according to Mazis I. Th.'s theoretical paradigm of Systemic Geopolitical Analysis, and investigates geopolitical contrasts of subjective priorities by several geopolitical operators.
Abstract: This paper studies subjective priorities for the data amounts in the processing of geopolitical data according to Mazis I. Th.'s theoretical paradigm of Systemic Geopolitical Analysis. After defining geopolitical plans and geopolitical focus sets, geopolitical preferences and geopolitical management capacities are introduced. The geopolitical rational choice is studied, as well as the geopolitical preference-capacity distributions. Then, geopolitical contrasts of subjective priorities by several geopolitical operators are investigated, and it is shown that geopolitical contrasts have cores and equilibria, the study of which may provide useful information.

4 citations


Book ChapterDOI
01 Jan 2017
TL;DR: A block-based approach is proposed for watermarking image objects in a way that is invariant to RST distortions; the approach is based on shape information, since the watermark is embedded in image blocks whose location and orientation are defined by Eulerian tours appropriately arranged in layers around the object's robust skeleton.
Abstract: Plain rotation, scaling, and/or translation (RST) of an image can lead to loss of watermark synchronization and thus authentication failure with standard techniques. The block-based approaches in particular, albeit strong against frequency and cropping attacks, are sensitive to geometric distortions due to the need for repositioning the blocks' rectangular grid of reference. In this paper, we propose a block-based approach for watermarking image objects in a way that is invariant to RST distortions. With the term image object we refer to semantically contiguous parts of images that have a specific contour boundary. The proposed approach is based on shape information, since the watermark is embedded in image blocks whose location and orientation are defined by Eulerian tours appropriately arranged in layers around the object's robust skeleton. The object's robust skeleton is derived from its boundary using an extraction technique and is invariant not only to RST transformations but also to cropping, clipping, and other common deformation attacks that are difficult to defend against with current methods. Experiments using standard benchmark datasets demonstrate the advantages of the proposed scheme in comparison to alternative state-of-the-art methods.
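The layered Eulerian tours and the embedding itself are specific to the paper and are not reproduced here. The sketch below, assuming a binary object mask and an illustrative subsampling step, shows only the anchoring idea: block positions derived from the object's skeleton follow the object under geometric transformations, unlike a fixed rectangular grid.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_anchor_points(mask: np.ndarray, step: int = 8):
    # mask: binary object mask (H, W) of a semantically contiguous
    # image object. The skeleton is tied to the object's shape, so
    # anchors taken from it move with the object under rotation,
    # scaling and translation.
    skeleton = skeletonize(mask.astype(bool))
    ys, xs = np.nonzero(skeleton)
    # Subsample so that blocks centred on consecutive anchors do not
    # overlap (a crude stand-in for the paper's layered Eulerian tours).
    return list(zip(ys[::step], xs[::step]))
```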

2 citations


Book ChapterDOI
01 Jan 2017
TL;DR: In this chapter, the main pillars and determinant factors for efficiency in supply chain management are surveyed, together with their effects on the competitiveness and efficiency of an economy; the impact of these factors on the performance of an individual organization is not considered.
Abstract: Today's organizations struggle for efficiency and effectiveness. Strategies involving collaboration between actors and integration of activity chains rely on factors over which firms do not have direct ownership and control. This has implications for strategizing, setting goals and measuring performance. Efficiency and effectiveness are often used to describe performance. From a resource dependence perspective, efficiency is defined as an internal standard of performance and effectiveness as an external standard of fit to various demands. This chapter attempts, through a literature survey, to identify the main pillars and determinant factors for efficiency in supply chain management and to present their effects on the competitiveness and efficiency of an economy.

2 citations


Book ChapterDOI
01 Jan 2017
TL;DR: This chapter proposes a collection of techniques for correcting transmission burst errors in data transmitted over signal channels suffering from strong electromagnetic interference, such as those encountered in distributed and embedded systems; the techniques are shown to be more efficient than existing ones according to criteria that are relevant to current applications.
Abstract: The rapid advances of communication technologies that aim to increase information transmission speeds aggravate the problems of reliable data exchange. In particular, the expanding use of wireless telecommunications technologies is accompanied by a noticeable increase in the intensity of the electromagnetic field and consequently in the number of errors caused by external interference. The importance of classical criteria, such as the number of control bits, is reduced, and more attention is paid to other parameters, such as the computational and temporal complexity of the error correction procedures, as well as transmission energy requirements. The above factors dictate the need to further develop the means for ensuring the reliability of communication systems, including methods for correcting data transmission errors. This chapter proposes a collection of techniques for correcting transmission burst errors in data transmitted over signal channels suffering from strong electromagnetic interference, such as those encountered in distributed and embedded systems. Efficiency is achieved by separating the error detection from the correction process and using different codes for each case. The proposed error control techniques are based on simple mathematical operations and are suitable for implementation in FPGA devices. It hence becomes possible to replace energy-demanding retransmission operations, including the overheads they entail, with energy-efficient local error correction calculations. The techniques employed are shown to be more efficient than existing ones according to criteria that are relevant to current applications; they reduce the need for error recovery by retransmission and hence the environmental effect of data transmission in terms of energy consumption and electromagnetic emissions.
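The chapter's FPGA-oriented codes are not given in the abstract. As a toy sketch of the stated design principle, separating cheap detection from the rarer, more expensive correction step, the snippet below uses CRC-32 as a stand-in detection code; `correct_burst` is a hypothetical placeholder for the separate burst-correcting code.

```python
import zlib

def frame(payload: bytes) -> bytes:
    # Detection code: a CRC-32 appended to each frame. Checking it is
    # cheap, so the common error-free case costs almost nothing.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def deframe(received: bytes, correct_burst) -> bytes:
    payload, crc = received[:-4], int.from_bytes(received[-4:], "big")
    if zlib.crc32(payload) == crc:
        return payload              # fast path: no correction invoked
    return correct_burst(received)  # slow path: local burst correction
```

Keeping the two codes distinct is what lets the error-free fast path avoid both the cost of a full corrector and the energy of a retransmission.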

2 citations


Journal ArticleDOI
01 Jan 2017
TL;DR: It is shown, both theoretically and experimentally, that the proposed method reduces the execution time required for user identification by 2–3 orders of magnitude via a hardware implementation.
Abstract: This article proposes an approach that accelerates the realization of strict remote user identification using a non-reversible Galois field transformation. The proposed approach is based on using finite field arithmetic to replace the usual modular arithmetic. The application of this efficient method, developed using Galois fields, renders feasible an exponential reduction of the computation time required by classical zero-knowledge identification methods, such as FFSIS, Schnorr and Guillou-Quisquater. The new procedures for user registration and for identification when obtaining access to the system are illustrated. It is shown, both theoretically and experimentally, that the proposed method reduces the execution time required for user identification by 2–3 orders of magnitude via a hardware implementation.
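The identification protocol itself is not detailed in the abstract. The sketch below shows only the underlying primitive, multiplication in GF(2^m), whose carry-free shift-and-XOR structure is what makes finite-field arithmetic attractive for hardware; the AES field GF(2^8) is used purely as an illustrative parameter choice, not as the paper's parameters.

```python
def gf2m_mul(a: int, b: int, poly: int, m: int) -> int:
    # Multiply two elements of GF(2^m), represented as bit masks of
    # polynomials over GF(2), reducing by the irreducible `poly`.
    # Only shifts and XORs are required: no carries, no divisions.
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:          # degree reached m: reduce
            a ^= poly
    return r

# Example in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1:
assert gf2m_mul(0x02, 0x80, 0x11B, 8) == 0x1B
```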

2 citations


Journal ArticleDOI
TL;DR: In this paper, the authors study real hypersurfaces with vanishing, semi-parallel and pseudo-parallel $^{*}$-Ricci tensor in complex hyperbolic space and provide new results concerning the $\xi$-parallelism of the $^{*}$-Ricci tensor of real hypersurfaces in non-flat complex space forms.
Abstract: This paper focuses on the study of three dimensional real hypersurfaces in non-flat complex space forms whose $^{*}$-Ricci tensor satisfies conditions of parallelism. More precisely, results concerning real hypersurfaces with vanishing, semi-parallel and pseudo-parallel $^{*}$-Ricci tensor in complex hyperbolic space are provided. Furthermore, new results concerning $\xi$-parallelism of $^{*}$-Ricci tensor of real hypersurfaces in non-flat complex space forms are presented.
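For readers outside this literature, the conditions named above follow standard usage (stated here from the commonly used definitions, not quoted from the paper): with $R$ the curvature tensor acting as a derivation and $Q(g, S^{*})$ the Tachibana operator, the $^{*}$-Ricci tensor is commonly defined as

$$S^{*}(X,Y) = \tfrac{1}{2}\,\operatorname{trace}\bigl(Z \mapsto R(X,\varphi Y)\varphi Z\bigr),$$

and it is called semi-parallel when $R \cdot S^{*} = 0$, pseudo-parallel when $R \cdot S^{*} = L\, Q(g, S^{*})$ for some function $L$, and $\xi$-parallel when $\nabla_{\xi} S^{*} = 0$.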


Journal ArticleDOI
01 Jan 2017
TL;DR: The work highlights the most important principles of software reliability management (SRM) and uses them as a basis for developing a method of requirements correctness improvement that identifies a higher number of defects with restricted resources.
Abstract: The work highlights the most important principles of software reliability management (SRM). The SRM concept constitutes a basis for developing a method of requirements correctness improvement. The method assumes that complicated requirements contain more actual and potential design faults/defects. The method applies a new metric to evaluate requirements complexity and a double sorting technique that evaluates the priority and complexity of each particular requirement. The method improves requirements correctness by enabling the identification of a higher number of defects with restricted resources. Practical application of the proposed method in the course of requirements review yielded a tangible technical and economic effect.
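The complexity metric and the double sorting technique are described only at a high level. One plausible reading, sketched below with hypothetical records and an arbitrary numeric complexity score, is to order requirements by priority first and complexity second, so that a restricted review budget is spent where the method expects the most defects.

```python
# Hypothetical records: (id, priority, complexity); 1 = highest priority.
requirements = [
    ("REQ-07", 1, 34),
    ("REQ-03", 2, 51),
    ("REQ-12", 1, 72),
    ("REQ-09", 3, 12),
]

# Double sort: by priority ascending, then by complexity descending,
# so complicated high-priority requirements are reviewed first.
review_order = sorted(requirements, key=lambda r: (r[1], -r[2]))

budget = 2  # restricted resources: review only the first `budget` items
for req_id, priority, complexity in review_order[:budget]:
    print(f"review {req_id} (priority {priority}, complexity {complexity})")
```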

Journal ArticleDOI
01 Jan 2017
TL;DR: The proposed method is computationally simple, since it uses a small number of simple mathematical operations, in contrast to existing general-purpose techniques, and the transmission overheads it imposes do not differ significantly from those of existing error control codes.
Abstract: Loss of synchronization is a common source of errors in asynchronous data channels. Minute differences in the operating frequencies of the transmitter and the receiver result in data bits being lost or false bits being inserted. This paper presents an innovative technique, specially designed for detecting and correcting errors of this type. Data packets are preprocessed and the areas that are susceptible to such errors are determined. Suitable redundancy is introduced in the form of control symbols. On the receiver side, similar calculations take place and decisions are made on the occurrence and positions of the transmission errors due to loss of synchronization, which are then corrected. The proposed method is computationally simple, since it uses a small number of simple mathematical operations, in contrast to existing general-purpose techniques. The transmission overheads it imposes do not differ significantly from those of existing error control codes. Additionally, the number of errors that may be corrected is not subject to the same limits as in existing techniques.
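The paper's control-symbol construction is not given in the abstract. As a toy illustration of the underlying principle, known redundancy at fixed intervals lets the receiver localize where bits were inserted or lost, the sketch below places a sync marker every k payload bits. A real scheme would bit-stuff the payload so the marker cannot occur in the data; that step is omitted here, and the payload length is assumed to be a multiple of k.

```python
MARKER = "10111"  # hypothetical sync word placed every k payload bits

def add_markers(bits: str, k: int = 16) -> str:
    return MARKER.join(bits[i:i + k] for i in range(0, len(bits), k))

def locate_slips(received: str, k: int = 16):
    # A segment between markers longer than k reveals inserted bits;
    # a shorter one reveals lost bits. Each slip is localized to its
    # segment, so correction effort is confined to a small window.
    return [
        (index, len(segment) - k)
        for index, segment in enumerate(received.split(MARKER))
        if len(segment) != k
    ]
```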

Book ChapterDOI
01 Jan 2017
TL;DR: In this article, the authors investigate the possibility of destroying a passive point target and determine the best targeting points in an area within which stationary or mobile targets are distributed uniformly or normally.
Abstract: First, we investigate the possibility of destruction of a passive point target. Subsequently, we study the problem of determining the best targeting points in an area within which stationary or mobile targets are distributed uniformly or normally. Partial results are given in the cases in which the number of targeting points is less than seven or four, respectively. Thereafter, we study the case where there is no information on the enemy distribution. The targeting should then be organized in such a way that the surface defined by the kill radii of the missiles fully covers each point within a desired region of space-time. This problem is equivalent to packing ellipsoids of different sizes and shapes into an ellipsoidal container in \(\mathbb{R}^{4}\) so as to minimize a measure of overlap between the ellipsoids.
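The chapter's analytical treatment is not reproduced in the abstract. For the opening question, destruction of a passive point target, here is a minimal Monte Carlo sketch under the textbook assumption of a circular normal aiming error; the kill radius and dispersion are illustrative parameters.

```python
import math
import random

def kill_probability(kill_radius: float, sigma: float, trials: int = 100_000) -> float:
    # Impact point: independent N(0, sigma^2) errors in each axis around
    # the aim point; the target is destroyed when the miss distance is
    # at most the kill radius. The circular case has the closed form
    # P = 1 - exp(-R**2 / (2 * sigma**2))  (Rayleigh CDF).
    hits = sum(
        math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) <= kill_radius
        for _ in range(trials)
    )
    return hits / trials

# A kill radius equal to the dispersion gives P = 1 - e**-0.5, about 0.393.
print(kill_probability(1.0, 1.0))
```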