
Showing papers by "University of Cyprus" published in 1994


Journal ArticleDOI
TL;DR: In this article, the authors argue that teachers can take a greater role in the overall success of the school when they commit to being active participants in the decision-making process and that teachers need to feel they have more to offer to the school than just teaching within their classroom.
Abstract: Teachers can take a greater role in the overall success of the school when they commit to being active participants in the decision‐making process. Moreover, teachers need to feel they have more to offer to the school than just teaching within their classroom. Suggests that schools must understand that the entire system will benefit when teachers play an active role in controlling their work environment. Finally, everyone must be provided with as much recent research on participatory decision‐making processes as possible before implementation can be achieved successfully. At the same time, specific training should also be provided.

59 citations


Journal ArticleDOI
TL;DR: In this paper, the free energy of SU(N) gauge theories on the lattice was calculated to three loops and combined with Monte Carlo data for the average plaquette, giving a more precise estimate of the gluonic condensate.

49 citations


Journal ArticleDOI
TL;DR: In this article, a unified approach is presented for determining the market value of any generic investment lottery through the concept of a market utility function. The approach can be used to value both primary and derivative assets (whether traded or not), is applicable to both CAPM and non-CAPM economies, and does not rely on the ability to trade, replicate, or otherwise justify risk-neutral valuation in pricing contingent claims.

37 citations


Journal ArticleDOI
M. Acciarri, A. Adam, O. Adriani, M. Aguilar-Benitez, +499 more · Institutions (39)
TL;DR: In this article, the leptonic and hadronic cross sections were measured with the L3 detector at LEP and the results of these measurements were used to determine the electroweak parameters.
Abstract: We report on the measurement of the leptonic and hadronic cross sections and leptonic forward-backward asymmetries at the Z peak with the L3 detector at LEP. The total luminosity of 40.8 pb−1 collected in the years 1990, 1991 and 1992 corresponds to 1.09·10^6 hadronic and 0.98·10^5 leptonic Z decays observed. These data allow us to determine the electroweak parameters. From the cross sections we derive the properties of the Z boson, assuming lepton universality: $$M_Z = 91195 \pm 9\,\mathrm{MeV}, \quad \Gamma_Z = 2494 \pm 10\,\mathrm{MeV},$$ $$\Gamma_{had} = 1748 \pm 10\,\mathrm{MeV}, \quad \Gamma_\ell = 83.49 \pm 0.46\,\mathrm{MeV}.$$ We obtain an invisible width of Γ_inv = 496.5 ± 7.9 MeV which, in the Standard Model, corresponds to a number of light neutrino species of N_ν = 2.981 ± 0.050. Using also the three leptonic forward-backward asymmetries and the average tau polarization, we determine the effective vector and axial-vector coupling constants of the neutral weak current to charged leptons to be: $$\bar g^\ell_V = -0.0378^{+0.0045}_{-0.0042}, \quad \bar g^\ell_A = -0.4998 \pm 0.0014.$$ Within the framework of the Standard Model, and including our measurements of the $$Z \to b\bar b$$ forward-backward asymmetry and partial decay width, we derive an effective electroweak mixing angle of $$\sin^2\bar\theta_W = 0.2326 \pm 0.0012$$. We obtain an estimate for the strong coupling constant, α_S = 0.142 ± 0.013, and for the top-quark mass, m_t = 158^{+32}_{-40} ± 19 (Higgs) GeV, where the second error arises due to the uncertainty in the Higgs-boson mass.

34 citations


Journal ArticleDOI
M. Acciarri, A. Adam, O. Adriani, M. Aguilar-Benitez, +486 more · Institutions (38)
TL;DR: In this paper, the authors measured the polarization of τ leptons as a function of the production polar angle using the following 1-prong τ decay modes: τ− → e−ν̄_e ν_τ, τ− → π−(K−)ν_τ, τ− → ρ−ν_τ and τ− → a1−ν_τ, and obtained for the ratio of vector to axial-vector weak neutral couplings for electrons gVe/gAe = 0.0791 ± 0.0099(stat) ± 0.

25 citations


Proceedings ArticleDOI
14 Aug 1994
TL;DR: This work develops a combinatorial framework for balancing networks and proposes combinatorial conditions as "balancing analogs" of the well-known Zero-One principle holding for sorting networks.
Abstract: Balancing networks, originally introduced by Aspnes et al. (Proceedings of the 23rd Annual ACM Symposium on Theory of Computing, pp. 348-358, May 1991), represent a new class of distributed, low-contention data structures suitable for solving many fundamental multi-processor coordination problems that can be expressed as balancing problems. In this work, we present a mathematical study of the combinatorial structure of balancing networks, and a variety of its applications. Our study identifies important combinatorial transfer parameters of balancing networks. In turn, necessary and sufficient combinatorial conditions are established, expressed in terms of transfer parameters, which precisely characterize many important and well-studied classes of balancing networks such as counting networks and smoothing networks. We propose these combinatorial conditions to be "balancing analogs" of the well-known Zero-One principle holding for sorting networks. Within the combinatorial framework we develop, our first application is in deriving combinatorial conditions, involving the transfer parameters, which precisely delimit the boundary between counting networks and sorting networks.
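As an aside, the basic building block of these networks, the balancer, is easy to sketch sequentially. The toggle-based model below is a minimal illustrative sketch (not the paper's construction): a single balancer is itself a 2-wire counting network, and its outputs always satisfy the step property that the transfer-parameter conditions above generalize.

```python
class Balancer:
    """A balancer forwards incoming tokens alternately to its two output
    wires, so after any number of traversals the per-wire counts differ
    by at most one (the "step property" of a 2-wire counting network)."""

    def __init__(self):
        self.toggle = 0  # index of the next output wire to use

    def traverse(self):
        out = self.toggle
        self.toggle ^= 1  # flip for the next token
        return out


def run_tokens(n_tokens):
    """Send n_tokens through a single balancer; return per-wire counts."""
    b = Balancer()
    counts = [0, 0]
    for _ in range(n_tokens):
        counts[b.traverse()] += 1
    return counts


# Seven tokens split 4/3 across the two wires: the step property holds.
print(run_tokens(7))  # → [4, 3]
```

In a real balancing network many such balancers are wired together and traversed concurrently; the combinatorial conditions in the paper characterize which wirings preserve properties like the step property at the network's outputs.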

24 citations


Journal ArticleDOI
M. Acciarri, A. Adam, O. Adriani, M. Aguilar-Benitez, +491 more · Institutions (37)
TL;DR: Using the L3 detector, the branching ratio BR(b → τν̄X) has been measured using a sample of Z → bb̄ events tagged by high-momentum and high-transverse-momentum leptons in one hemisphere and by missing energy in the opposite hemisphere.

20 citations


Journal ArticleDOI
TL;DR: A model for layered superconductors in which the order parameter between the layers is nonzero due to proximity effects; in the limit of decoupled layers, the model gives all the terms of the usual Lawrence-Doniach free energy, as well as new terms arising from the existence of an order parameter between the layers.
Abstract: We present a model for layered superconductors, where the order parameter between the layers is nonzero, due to proximity effects. In the limit of decoupled layers, this model gives all the terms of the usual Lawrence-Doniach free energy, as well as new terms arising from the existence of an order parameter between the layers. The corresponding Josephson coupling is found to be field dependent. The upper critical field normal to the layers has positive curvature as a function of T, and decreases with increasing interlayer distance, contrary to the predictions of the Lawrence-Doniach model. In addition, the anisotropy deduced from the critical fields ${\mathit{H}}_{\mathit{c}2}$ is markedly different from the low-field anisotropy.

16 citations


Journal ArticleDOI
M. Acciarri, A. Adam, O. Adriani, M. Aguilar-Benitez, +488 more · Institutions (38)
TL;DR: In this article, the authors measured the time-integrated B⁰B̄⁰ mixing parameter and the forward-backward asymmetry in the process e⁺e⁻ → bb̄ using hadronic events containing muons or electrons.

14 citations


Journal ArticleDOI
TL;DR: In this paper, collisions between linearly and circularly polarised sech pulses, and also between elliptically polarised pulses having more complicated profiles, are studied numerically, with emphasis on the collision between any two such pulses.

14 citations


Journal ArticleDOI
TL;DR: Artificial neural networks (ANN) are introduced as a tool for building an intelligent diagnostic system; the system does not attempt to replace the physician as the decision maker but to enhance the physician's ability to reach a correct decision.
Abstract: Recent advances in computer technology offer the medical profession specialized tools for gathering medical data, processing power, and fast storage and retrieval capabilities. Artificial intelligence (AI), an emerging field of computer science, studies the issues of human problem solving and decision making. Furthermore, rule-based and knowledge-based systems, other fields of AI, have been adopted by many scientists in an effort to develop intelligent medical diagnostic systems. In this study artificial neural networks (ANN) are introduced as a tool for building an intelligent diagnostic system; the system does not attempt to replace the physician as the decision maker but to enhance the physician's ability to reach a correct decision. An integrated diagnostic system for assessing certain neuromuscular disorders is used in this study as an example for demonstrating the proposed methodology. The diagnostic system is composed of modules that independently provide numerical data to the system from the clinical examination of a patient and from various laboratory tests. The examination procedure has been standardized by developing protocols for each specialized area, in cooperation with experts in the area. At the conclusion of the clinical examination and laboratory tests, the data form a numerical vector that represents a medical examination snapshot of the subject. ANN models were developed using the unsupervised self-organizing feature maps algorithm. Data from 71 subjects were collected; the models were trained with the data from 41 subjects and tested with the data from the remaining 30 subjects. Two sets of models were developed: those trained with only the clinical examination data, and those trained with the combined clinical and laboratory test data. The diagnostic yield obtained for the unknown cases is in the region of 73 to 93% for the models trained with only the clinical data, and in the region of 73 to 100% for those trained with both the clinical and laboratory data. The pictorial representation of the diagnostic models through the self-organized two-dimensional feature maps provides the physician with a friendly human-computer interface and a comprehensive tool that can be used for further observations, for example in monitoring the disease progression of a subject.
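The unsupervised self-organizing feature map (Kohonen SOM) named above can be sketched in a few lines. The version below is a minimal generic sketch with synthetic data in place of the clinical feature vectors; the grid size, decay schedules and data are illustrative assumptions, not the paper's settings.

```python
import numpy as np


def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """Train a 2-D self-organizing feature map.

    Each grid unit holds a weight vector; at every step the best-matching
    unit (BMU) and its grid neighbours are pulled toward the presented
    sample, so nearby units come to represent similar inputs.
    """
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # grid coordinates of every unit, used by the neighbourhood function
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1).astype(float)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)               # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(data):
            # BMU: the unit whose weight vector is closest to the sample
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU on the grid
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights


def map_sample(weights, x):
    """Return the grid position of the best-matching unit for sample x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)


# Illustrative use on synthetic 4-dimensional "feature vectors":
rng = np.random.default_rng(1)
group_a = rng.normal(0.1, 0.02, (20, 4))   # one synthetic cluster
group_b = rng.normal(0.9, 0.02, (20, 4))   # another, well separated
som = train_som(np.vstack([group_a, group_b]), grid=(5, 5), epochs=50)
print(map_sample(som, group_a.mean(axis=0)),
      map_sample(som, group_b.mean(axis=0)))  # two distinct grid cells
```

The pictorial value described in the abstract comes from exactly this mapping: each examined subject lands on a grid cell, and cells occupied by subjects of the same diagnosis form visible regions on the map.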

Journal ArticleDOI
TL;DR: In this article, a lattice study of heavy baryons containing one or two heavy quarks, such as the Λ_b and Ξ_b, was performed in the quenched approximation; the mass of the Λ_b was found to be 5.728 ± 0.018 GeV. The mass splitting between the Λ_b and the B meson was found to increase by about 20% when the light-quark mass is varied from the chiral limit to the strange-quark mass.

Book ChapterDOI
01 Jan 1994
TL;DR: In this article, the acceptability of negation as failure in logic programs is used to define a general framework for default reasoning: negation as failure can be viewed as a form of hypothesis with which the underlying theory, given by the logic program, is extended, provided that the extension satisfies an appropriate criterion, called acceptability.
Abstract: We show how recent developments in the study of negation as failure (NAF) of Logic Programming can be used to define a general framework for Default Reasoning. Negation as failure can be viewed as a form of hypotheses with which we extend the underlying theory, given by the logic program, provided that this extension satisfies an appropriate criterion, called acceptability. The generalization of logic programming to more general representation frameworks is done by adapting this view of NAF and the basic notion of acceptability to the new frameworks. In general, an extension of a given theory by a set of hypotheses (e.g. defaults) is acceptable if it can render any other conflicting extension not acceptable, thus ensuring that it is a "self-justified" set of assumptions. We compare our framework with other existing non-monotonic frameworks such as Default Logic and Theorist. In particular, we show how our framework includes and extends Default Logic.

Journal ArticleDOI
TL;DR: In this article, a high statistics study of the leptonic decay constant of heavy pseudoscalar mesons using propagating heavy Wilson quarks within the quenched approximation, on lattices covering sizes from about 0.7 fm to 2 fm.
Abstract: We present a high-statistics study of the leptonic decay constant f_P of heavy pseudoscalar mesons using propagating heavy Wilson quarks within the quenched approximation, on lattices covering sizes from about 0.7 fm to 2 fm. Varying β between 5.74 and 6.26, we observe a sizeable dependence of f_P on the lattice spacing a when one uses the quark-field normalization that was suggested by Kronfeld and Mackenzie, compared with the weaker dependence observed for the standard relativistic norm. The two schemes come into agreement when one extrapolates to a → 0. The extrapolations needed to reach the continuum quantity f_B introduce large errors and lead to the value f_B = 0.18(5) GeV in the quenched approximation. This suggests that much more effort will be needed to obtain an accurate lattice prediction for f_B.

Journal ArticleDOI
A. Adam, O. Adriani, M. Aguilar-Benitez, Steven Ahlen, +490 more · Institutions (38)
TL;DR: In this article, the authors studied the four-fermion processes ee → eeee, eeμμ, eeττ, μμμμ, eeqq, and μμqq with the L3 detector at LEP.

Journal ArticleDOI
TL;DR: Practical approaches to the data-parallel solution of large-scale optimization problems with network (or embedded-network) structures are discussed, together with empirical results that highlight the performance of the algorithms on a data-parallel computer, the Connection Machine CM-2.
Abstract: Data-level parallelism is a type of parallelism whereby operations are performed on many data elements concurrently, by many processors. These operations are (more or less) identical and are executed in a synchronous, orderly fashion. This type of parallelism is used by massively parallel SIMD (i.e., Single Instruction, Multiple Data) architectures, like the Connection Machine CM-2, the AMT DAP and the MasPar, and by MIMD (i.e., Multiple Instruction, Multiple Data) architectures, like the Connection Machine CM-5. Data parallelism can also be described by a theoretical model of computation: the Vector Random-Access Machine (V-RAM).
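The lockstep, element-wise style described above can be imitated on a conventional machine with array programming: one logical instruction acts on every data element at once. The sketch below is purely illustrative (the arc data and update rule are invented, and NumPy stands in for a SIMD machine, not for the CM-2 itself):

```python
import numpy as np

# Data-parallel style: one logical operation acts on all elements at once.
# Here every arc of a hypothetical network gets the same update applied
# "simultaneously", the way a SIMD machine would process one arc per
# processor.
flows = np.array([4.0, 7.0, 1.0, 9.0, 3.0])   # current flow on each arc
costs = np.array([1.0, 0.5, 2.0, 4.0, 1.5])   # unit cost per arc
step = 0.25

# Scalar (one-element-at-a-time) version, for comparison:
serial = flows.copy()
for i in range(len(serial)):
    serial[i] -= step * costs[i]

# Data-parallel version: the same update expressed as a single vector
# instruction over every element concurrently.
parallel = flows - step * costs

assert np.allclose(serial, parallel)
print(parallel.tolist())  # → [3.75, 6.875, 0.5, 8.0, 2.625]
```

On a true SIMD architecture the vector line would be executed by many processors in lockstep, which is precisely the V-RAM model's unit-cost vector operation.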

Proceedings ArticleDOI
06 Sep 1994
TL;DR: An overview of the recent developments in the use of artificial neural networks in medical imaging is provided; the areas covered include ultrasound, magnetic resonance, nuclear medicine and radiological imaging (including computerized tomography).
Abstract: The objective of this paper is to provide an overview of the recent developments in the use of artificial neural networks in medical imaging. The areas of medical imaging that are covered include: ultrasound, magnetic resonance, nuclear medicine and radiological imaging (including computerized tomography).

Journal ArticleDOI
TL;DR: The diagnostic model used in the medical expert system Skeletal Dysplasias Diagnostician (SDD) is discussed, which aims to capture the diagnostic skills of domain experts and is relatively generic.

Proceedings ArticleDOI
03 Nov 1994
TL;DR: A biopsy analysis support system, called BASS, is introduced that uses an unsupervised object detection loop involving center-surround receptive fields, and a nonlinearity to mark the location of cell nuclei in an image.
Abstract: A biopsy analysis support system, called BASS, is introduced. The system uses an unsupervised object detection loop involving center-surround receptive fields and a nonlinearity to mark the locations of cell nuclei in an image. Preliminary results of applying BASS to breast cancer biopsies are presented.

Journal ArticleDOI
TL;DR: In this paper, the on-shell proton matrix element of the topological charge density was computed on the lattice in the quenched approximation, and from it the spin content of the proton was obtained.

Journal ArticleDOI
TL;DR: In this paper, the theoretical foundations of the creative problem-solving technique known as Synectics were presented and a five-session training programme was developed and tested in Cyprus by the author.
Abstract: Presents the theoretical foundations of the creative problem‐solving technique known as Synectics. Points out the usefulness of this technique for educational leaders and describes a five‐session training programme that was developed and tested in Cyprus by the author. The training programme, which is based on the Synectics literature, has shown promising results with school principals and prospective organizational leaders.

Journal ArticleDOI
TL;DR: In this article, a Kolmogorov inequality for the class of U-statistics based on kernels which are Bernoulli random variables is presented, which contains the sample average of i.i.d.

Journal ArticleDOI
TL;DR: In this article, the authors tried to establish empirically whether a different degree of persistence is found in real variables under alternative nominal exchange rate systems, and the measure of persistence employed is the modified rescaled range statistic proposed by Lo.

Journal ArticleDOI
TL;DR: In this paper, a collocation strategy is presented for the satisfaction of boundary conditions in the application of Chebyshev spectral methods to Poisson and biharmonic-type problems in cuboidal domains.
Abstract: A collocation strategy for the satisfaction of boundary conditions in the application of Chebyshev spectral methods to Poisson and biharmonic-type problems in cuboidal domains is presented. This strategy leads, in both cases, to nonsingular linear systems for the determination of the unknown coefficients.
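As a one-dimensional illustration of the collocation idea (a standard textbook sketch following Trefethen's construction, not the authors' cuboidal scheme), one can solve a Poisson-type problem u″ = f on [−1, 1] by differentiating the Chebyshev interpolant at the Gauss-Lobatto points and enforcing the boundary conditions at the endpoint rows:

```python
import numpy as np


def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points
    x_j = cos(pi*j/n), j = 0..n (Trefethen's classic construction)."""
    if n == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))  # fix the diagonal so rows sum to zero
    return D, x


# Solve u'' = exp(x) on [-1, 1] with u(-1) = u(1) = 0: collocate the ODE
# at the interior points; deleting the boundary rows and columns enforces
# the homogeneous Dirichlet conditions.
n = 16
D, x = cheb(n)
D2 = (D @ D)[1:-1, 1:-1]          # second-derivative matrix, interior only
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(x[1:-1]))

# Exact solution: u = exp(x) - x*sinh(1) - cosh(1), which vanishes at ±1.
exact = np.exp(x) - x * np.sinh(1.0) - np.cosh(1.0)
print(np.max(np.abs(u - exact)))  # spectrally small
```

The nonsingular linear systems mentioned in the abstract arise in the same way: collocating the PDE at interior points and the boundary conditions at boundary points yields a square, invertible system for the unknown coefficients, here in one dimension and in the paper over cuboidal domains.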

Journal ArticleDOI
TL;DR: In this article, a Chebyshev collocation strategy is introduced for the subdivision of cuboids into cuboidal subdomains (elements), which are conforming, meaning that the approximation to the solution is C⁰ continuous at all points across their interfaces.
Abstract: A Chebyshev collocation strategy is introduced for the subdivision of cuboids into cuboidal subdomains (elements). These elements are conforming, which means that the approximation to the solution is C⁰ continuous at all points across their interfaces.

Journal ArticleDOI
TL;DR: In this paper, a conforming domain decomposition Chebyshev spectral collocation method is developed for the solution of biharmonic-type problems in rectangular domains, which ensures that the solution is C¹ continuous pointwise across the subdomain interfaces and that the boundary conditions are satisfied identically.

Journal ArticleDOI
TL;DR: In this paper, a methodology for testing for exchange rate bubbles using two sets of variance inequalities is proposed; the null hypothesis is used only for the second set of inequalities and not for the first.

Posted Content
TL;DR: In this paper, the authors examined the effects of publicly financed infrastructure and R&D capital on the cost structure and productivity performance of twelve two-digit U.S. manufacturing industries and developed a general framework to measure the contributions of demand, relative input prices, technical change, and publicly financed capital to total factor productivity growth.
Abstract: In this paper we examine the effects of publicly financed infrastructure and R&D capital on the cost structure and productivity performance of twelve two-digit U.S. manufacturing industries. A general framework is developed to measure the contributions of demand, relative input prices, technical change, and publicly financed capital to total factor productivity growth. The magnitude of the contribution of these sources varies considerably across industries: in some, changes in demand dominate, while in others changes in technology or relative prices are the main contributors. Publicly financed infrastructure and R&D capital contribute to productivity growth. However, the magnitudes of their contributions vary considerably across industries and, on the whole, they are not the major contributors to TFP growth in these industries.

Journal ArticleDOI
TL;DR: In this article, the dependence of the photopyroelectric induced signal on the optical and thermal properties of a two-layer sample, for example a thin film deposited on a substrate, is described.
Abstract: This paper presents a one-dimensional theory which describes the dependence of the photopyroelectric induced signal on the optical and thermal properties of a two-layer sample, for example a thin film deposited on a substrate. Several computer simulations were performed in order to determine the validity of this model for a wide range of wavelengths and modulation frequencies.

Book ChapterDOI
06 Sep 1994
TL;DR: The overall time and processor complexities of both algorithms are O(log n) and O(max{δ²·n²/log n, n^β + D_G}), respectively, which implies that the proposed algorithms improve in performance upon the best-known algorithms for these problems.
Abstract: We present efficient parallel algorithms for recognizing chordal graphs and locating all maximal cliques of a chordal graph G=(V,E). Our techniques are based on partitioning the vertex set V using information contained in the distance matrix of the graph. We use these properties to formulate parallel algorithms which, given a graph G=(V,E) and its adjacency-level sets, decide whether or not G is a chordal graph and, if so, locate all maximal cliques of the graph in time O(k) by using δ²·n²/k processors on a CRCW-PRAM, where δ is the maximum degree of a vertex in G and 1 ≤ k ≤ n. The construction of the adjacency-level sets can be done by first computing the distance matrix of the graph, in time O(log n) with O(n^β + D_G) processors, where D_G is the output size of the partitions and β = 2.376, and then extracting all necessary set information. Hence, the overall time and processor complexities of both algorithms are O(log n) and O(max{δ²·n²/log n, n^β + D_G}), respectively. These results imply that, for δ ≤ √(n log n), the proposed algorithms improve in performance upon the best-known algorithms for these problems.
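The adjacency-level sets named above are simply the distance classes from a chosen vertex. A small sequential sketch makes the structure concrete (the paper obtains the same partition in parallel from the distance matrix; the example graph here is an invented illustration):

```python
from collections import deque


def adjacency_level_sets(adj, root):
    """Partition the vertices reachable from root into level sets
    L[i] = { v : dist(root, v) = i }, computed here by a plain BFS."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit gives the distance
                dist[v] = dist[u] + 1
                queue.append(v)
    levels = {}
    for v, d in dist.items():
        levels.setdefault(d, []).append(v)
    # return the levels in distance order, each sorted for readability
    return [sorted(levels[i]) for i in range(len(levels))]


# A chordal example: the 4-cycle a-b-c-d with the chord a-c.
adj = {
    "a": ["b", "c", "d"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["a", "c"],
}
print(adjacency_level_sets(adj, "a"))  # → [['a'], ['b', 'c', 'd']]
```

The parallel algorithms in the paper work level by level over exactly this partition, which is why the cost of building it (via the distance matrix) appears as a separate term in the processor bound.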