
Showing papers on "Orthogonality (programming) published in 2011"


Journal ArticleDOI
TL;DR: Every descriptive language is not only metaphoric and interpretative, but it is also developed (or adopted) ad hoc to fulfill a certain agenda.
Abstract: Every descriptive language is not only metaphoric and interpretative, but it is also developed (or adopted) ad hoc to fulfill a certain agenda.

56 citations


Journal ArticleDOI
TL;DR: A mixed integer programming (MIP) method suitable for constructing orthogonal designs, or improving existing orthogonal arrays, for experiments involving quantitative factors with limited numbers of levels of interest is presented.

30 citations


Posted Content
TL;DR: This work describes a hardware implementation of an array of 28 one-instruction Subleq processors on a low-cost FPGA board and provides implementation details of the compiler from a C-style language to Subleq.
Abstract: Subleq (Subtract and Branch on result Less than or Equal to zero) is both an instruction set and a programming language for a One Instruction Set Computer (OISC). We describe a hardware implementation of an array of 28 one-instruction Subleq processors on a low-cost FPGA board. Our test results demonstrate that the computational power of our Subleq OISC multi-processor is comparable to that of the CPU of a modern personal computer. Additionally, we provide implementation details of our compiler from a C-style language to Subleq.

19 citations
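The paper gives no code, but the Subleq semantics it builds on is small enough to sketch. The following minimal interpreter is an illustration only; the flat-memory layout, three-cell instruction encoding, and halt-on-negative-branch convention are common OISC conventions assumed here, not details taken from the paper's FPGA design.

```python
# Minimal Subleq interpreter sketch (assumptions: flat integer memory,
# each instruction occupies three consecutive cells a, b, c, and a
# negative branch target halts the machine).
def run_subleq(mem, pc=0):
    while 0 <= pc <= len(mem) - 3:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                      # mem[b] = mem[b] - mem[a]
        pc = c if mem[b] <= 0 else pc + 3     # branch if the result is <= 0
    return mem
```

A single instruction therefore both subtracts and conditionally branches, which is enough to synthesize every other operation a conventional instruction set provides.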


Journal ArticleDOI
TL;DR: In this article, the authors study orthogonality, domination, weight, regular and minimal types in the contexts of rosy and super-rosy theories, and show that domination can be expressed as a regular type.
Abstract: We study orthogonality, domination, weight, regular and minimal types in the contexts of rosy and super-rosy theories.

11 citations


Journal ArticleDOI
TL;DR: Combining principles with pragmatism, a new approach and accompanying algorithm are presented for a longstanding problem in applied statistics, the interpretation of principal components, effectively combining simplicity, retention of optimality and computational efficiency, while complementing existing methods.
Abstract: Combining principles with pragmatism, a new approach and accompanying algorithm are presented to a longstanding problem in applied statistics: the interpretation of principal components. Following Rousson and Gasser [53 (2004) 539–555], 'the ultimate goal is not to propose a method that leads automatically to a unique solution, but rather to develop tools for assisting the user in his or her choice of an interpretable solution'. Accordingly, our approach is essentially exploratory. Calling a vector 'simple' if it has small integer elements, it poses the open question: 'What sets of simply interpretable orthogonal axes—if any—are angle-close to the principal components of interest?', its answer being presented in summary form as an automated visual display of the solutions found, ordered in terms of overall measures of simplicity, accuracy and star quality, from which the user may choose. Here, 'star quality' refers to striking overall patterns in the sets of axes found, deserving to be especially drawn to the user's attention precisely because they have emerged from the data, rather than being imposed on it by (implicitly) adopting a model. Indeed, other things being equal, explicit models can be checked by seeing if their fits occur in our exploratory analysis, as we illustrate. By requiring orthogonality, the attractive visualization and dimension reduction features of principal component analysis are retained. Exact implementation of this principled approach is shown to provide an exhaustive set of solutions, but is combinatorially hard. Pragmatically, we provide an efficient, approximate algorithm. Throughout, worked examples show how this new tool adds to the applied statistician's armoury, effectively combining simplicity, retention of optimality and computational efficiency, while complementing existing methods. Examples are also given where simple structure in the population principal components is recovered using only information from the sample. Further developments are briefly indicated.

11 citations
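As a rough illustration of the 'angle-close simple axes' idea from the entry above, the brute-force sketch below enumerates small-integer vectors and ranks them by angle to one principal component. The integer bound and exhaustive enumeration are assumptions for illustration; the paper notes that the exact problem is combinatorially hard and supplies an efficient approximate algorithm instead.

```python
# Hedged sketch: rank "simple" axes (vectors with small integer entries)
# by their angle to a given principal component. Brute force, so only
# practical for low dimension and a small integer bound.
import itertools
import numpy as np

def simple_axes(component, max_int=2, top=5):
    component = np.asarray(component, dtype=float)
    d = component.size
    scored = []
    for ints in itertools.product(range(-max_int, max_int + 1), repeat=d):
        v = np.array(ints, dtype=float)
        if not v.any():
            continue                          # skip the zero vector
        cos = abs(v @ component) / (np.linalg.norm(v) * np.linalg.norm(component))
        scored.append((np.degrees(np.arccos(np.clip(cos, 0.0, 1.0))), ints))
    scored.sort(key=lambda pair: pair[0])     # smallest angle = closest axis
    return scored[:top]
```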


Proceedings ArticleDOI
13 Feb 2011
TL;DR: A formal framework that allows for the use of program demonstrations to resolve several types of ambiguities and omissions that are common in natural language instruction is introduced.
Abstract: We contribute to the difficult problem of programming via natural language instruction. We introduce a formal framework that allows for the use of program demonstrations to resolve several types of ambiguities and omissions that are common in such instructions. The framework effectively combines some of the benefits of programming by demonstration and programming by natural instruction. The key idea of our approach is to use non-deterministic programs to compactly represent the (possibly infinite) set of candidate programs for given instructions, and to filter from this set by means of simulating the execution of these programs following the steps of a given demonstration. Due to the rigorous semantics of our framework we can prove that this leads to a sound algorithm for identifying the intended program, making assumptions only about the types of ambiguities and omissions occurring in the instruction. We have implemented our approach and demonstrate its ability to resolve ambiguities and omissions by considering a list of classes of such issues and how our approach resolves them in a concrete example domain. Our empirical results show that our approach can effectively and efficiently identify programs that are consistent with both the natural instruction and the given demonstrations.

7 citations
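A toy sketch of the filtering step described above, under the assumption that each candidate program can simply be executed to produce an observable trace; the paper instead represents the (possibly infinite) candidate set compactly as a non-deterministic program and filters it by simulating the demonstration.

```python
# Toy sketch: keep only candidate programs whose simulated execution on
# the demonstration input reproduces the demonstrated trace. Candidates
# are assumed to be callables returning a trace (a list of outputs).
def filter_by_demonstration(candidates, demo_input, demo_trace):
    return [program for program in candidates
            if program(demo_input) == demo_trace]

# Hypothetical example: two readings of "add the items"; the
# demonstration [1, 2, 3] -> [6] rules out the concatenating one.
sum_items = lambda xs: [sum(xs)]
join_items = lambda xs: ["".join(map(str, xs))]
assert filter_by_demonstration([sum_items, join_items], [1, 2, 3], [6]) == [sum_items]
```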


Dissertation
01 Jan 2011
TL;DR: This thesis proposes an integration of OOP, EBP and AOP leading to a simple and regular programming model that reduces the number of language constructs while keeping expressiveness and offering additional programming options.
Abstract: Object-Oriented Programming (OOP) has become the de facto programming paradigm. Event-Based Programming (EBP) and Aspect-Oriented Programming (AOP) complement OOP, covering some of its deficiencies when building complex software. Today's applications combine the three paradigms. However, OOP, EBP and AOP have not yet been properly integrated. Their underlying concepts are in general provided as distinct language constructs, even though they are not completely orthogonal. This lack of integration and orthogonality complicates the development of software, as it reduces understandability and composability and increases the amount of glue code required. This thesis proposes an integration of OOP, EBP and AOP leading to a simple and regular programming model. This model integrates the notions of class and aspect, the notions of event and join point, and the notions of piece of advice, method and event handler. It reduces the number of language constructs while keeping expressiveness and offering additional programming options. We have designed and implemented two programming languages based on this model: EJava and ECaesarJ. EJava is an extension of Java implementing the model. We have validated the expressiveness of this language by implementing a well-known graphical editor, JHotDraw, reducing its glue code and improving its design. ECaesarJ is an extension of CaesarJ that combines our model with mixins and language support for state machines. This combination was shown to greatly facilitate the implementation of a smart home application, an industrial-strength case study that aims to coordinate different devices in a house and automate their behaviors.

6 citations


Proceedings ArticleDOI
27 Jun 2011
TL;DR: An approach to address the problems of data dependence in UML state machine diagrams by constructing a control flow graph, which explicitly describes all possible transitions, and a hierarchy graph, which depicts the hierarchical structure of the state machine diagram.
Abstract: Slicing is a well-known reduction technique in many areas such as debugging, maintenance, and testing, and thus there has been considerable research on the application of slicing techniques to models at the design level. UML state machine diagrams can properly describe the behavior of large software systems at the design level, and slicing them is helpful for their maintenance. But it is difficult to apply a slicing algorithm to automatically reduce the diagrams with respect to slicing criteria, because of the unique properties of these diagrams, such as hierarchy and orthogonality. These properties make constructing a data dependence graph highly complicated. Hierarchy between states leads to implicit paths between states, which may affect data dependence. Also, orthogonality (i.e., parallelism) can cause an intransitivity problem when tracing data dependence. In this paper, we discuss an approach to address such problems. We first construct a control flow graph, which explicitly describes all possible transitions, and a hierarchy graph, which depicts the hierarchical structure of the state machine diagram. Next we retrieve data dependence information and construct a dependence graph across different levels. We also show how data dependence information is retrieved, using an ATM example.

6 citations
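One simple way to make the implicit, hierarchy-induced transitions explicit is sketched below, assuming a plain (source, target) transition list and a parent-to-children map; this illustrates the idea of the control flow and hierarchy graphs rather than the authors' exact construction.

```python
# Sketch: build an explicit control-flow graph from a state machine's
# transitions, adding the implicit edges introduced by hierarchy: a
# transition leaving a composite state also leaves every nested substate.
from collections import defaultdict

def build_cfg(transitions, children):
    # transitions: list of (source_state, target_state) pairs
    # children: dict mapping a composite state to its direct substates
    def descendants(state):
        for sub in children.get(state, ()):
            yield sub
            yield from descendants(sub)

    cfg = defaultdict(set)
    for src, dst in transitions:
        cfg[src].add(dst)
        for sub in descendants(src):          # implicit exits from nested states
            cfg[sub].add(dst)
    return cfg
```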


Book ChapterDOI
03 Jul 2011
TL;DR: This paper uses ASP techniques to validate the context values against the feasible contexts compatible with a context specification structure called Context Dimension Tree, and convey to the user the context-dependent views associated with the (possibly multiple) current contexts, thus retaining, from the underlying dataset, only the relevant data for each such context.
Abstract: In a world of global networking, the variety and abundance of available data generates the need for effectively and efficiently gathering, synthesizing, and querying such data, while reducing information noise. A system where context awareness is integrated with – yet orthogonal to – data management allows the knowledge of the context in which the data are used to better focus on currently useful information (represented as a view), keeping noise at bay. This activity is called context-aware data tailoring. In this paper, after a brief review of the literature on context awareness, we describe a technique for context-aware data tailoring by means of Answer Set Programming (ASP). We use ASP techniques to i) validate the context values against the feasible contexts compatible with a context specification structure called Context Dimension Tree, and ii) convey to the user the context-dependent views associated with the (possibly multiple) current contexts, thus retaining, from the underlying dataset, only the relevant data for each such context. At the same time, ASP allows us to retain the orthogonality of context modeling while adopting the same framework as that of data representation.

5 citations
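A minimal, non-ASP sketch of the first step (validating context values), assuming the Context Dimension Tree is flattened into a mapping from dimensions to allowed values; the paper performs both this check and the subsequent selection of context-dependent views in Answer Set Programming.

```python
# Minimal sketch: validate a proposed context against a dictionary
# encoding of a Context Dimension Tree (dimension -> allowed values).
# The dimensions and values below are hypothetical, for illustration only.
def validate_context(context, cdt):
    return all(dim in cdt and value in cdt[dim]
               for dim, value in context.items())

cdt = {"role": {"manager", "agent"}, "topic": {"sales", "stock"}}
assert validate_context({"role": "agent", "topic": "sales"}, cdt)
```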


Journal ArticleDOI
TL;DR: In this article, a new approach and accompanying algorithm are presented for the interpretation of principal components, combining principles with pragmatism; the goal is to assist the user in his or her choice of an interpretable solution.
Abstract: Combining principles with pragmatism, a new approach and accompanying algorithm are presented to a longstanding problem in applied statistics: the interpretation of principal components. Following Rousson and Gasser [53 (2004) 539–555], 'the ultimate goal is not to propose a method that leads automatically to a unique solution, but rather to develop tools for assisting the user in his or her choice of an interpretable solution'. Accordingly, our approach is essentially exploratory. Calling a vector 'simple' if it has small integer elements, it poses the open question: 'What sets of simply interpretable orthogonal axes—if any—are angle-close to the principal components of interest?', its answer being presented in summary form as an automated visual display of the solutions found, ordered in terms of overall measures of simplicity, accuracy and star quality, from which the user may choose. Here, 'star quality' refers to striking overall patterns in the sets of axes found, deserving to be especially drawn to the user's attention precisely because they have emerged from the data, rather than being imposed on it by (implicitly) adopting a model. Indeed, other things being equal, explicit models can be checked by seeing if their fits occur in our exploratory analysis, as we illustrate. By requiring orthogonality, the attractive visualization and dimension reduction features of principal component analysis are retained.

3 citations


Proceedings ArticleDOI
29 Dec 2011
TL;DR: An incremental orthogonal projective non-negative matrix factorization algorithm (IOPNMF) that aims to learn a parts-based subspace revealing dynamic data streams; experimental results show that the algorithm learns parts-based representations successfully.
Abstract: In this paper, we propose an incremental orthogonal projective non-negative matrix factorization algorithm (IOPNMF), which aims to learn a parts-based subspace that reveals dynamic data streams. There are two main contributions. Firstly, our proposed algorithm can learn parts-based representations in an online fashion. Secondly, by using projection and orthogonality constraints, our IOPNMF algorithm is guaranteed to learn a linear parts-based subspace. To demonstrate the effectiveness of our method, we conduct two kinds of experiments: incremental learning of parts-based components on a facial database, and visual tracking on several challenging video clips. The experimental results show that our IOPNMF algorithm learns parts-based representations successfully.
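As a rough sketch only: one way to maintain a nonnegative, roughly orthonormal basis W online, so that W Wᵀx reconstructs each incoming sample x, is a projected gradient step followed by column renormalization. The learning rate, the plain gradient update and the renormalization below are assumptions for illustration; they are not the IOPNMF update rules derived in the paper.

```python
# Rough online sketch in the spirit of projective NMF with (approximate)
# orthogonality: one projected gradient step per sample, then
# nonnegativity clipping and column renormalization.
import numpy as np

def online_pnmf_step(W, x, lr=0.01):
    x = x.reshape(-1, 1)
    z = W.T @ x                                    # coefficients in the current subspace
    r = x - W @ z                                  # reconstruction residual
    grad = -2.0 * (r @ z.T + x @ (r.T @ W))        # gradient of ||x - W W^T x||^2 w.r.t. W
    W = np.maximum(W - lr * grad, 0.0)             # gradient step + nonnegativity
    W /= np.linalg.norm(W, axis=0, keepdims=True) + 1e-12   # keep columns unit length
    return W
```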

Book ChapterDOI
23 Nov 2011
TL;DR: A new orthogonality concept is presented that gives sufficient conditions for consistent estimation of the parameters of interest, and it is used to derive and compare estimators for unbalanced panels and incomplete data sets.
Abstract: Observations in a dataset are rarely missing at random. One can control for this non-random selection of the data by introducing fixed effects or other nuisance parameters. This chapter deals with consistent estimation in the presence of many nuisance parameters. It derives a new orthogonality concept that gives sufficient conditions for consistent estimation of the parameters of interest. It also shows how this orthogonality concept can be used to derive and compare estimators. The chapter then shows how to use the orthogonality concept to derive estimators for unbalanced panels and incomplete data sets (missing data).

Proceedings ArticleDOI
11 Dec 2011
TL;DR: This work describes an efficient greedy method for finding diverse dimensions from transactional databases, and proposes a mining framework that effectively represents a dimensionality-reducing transformation from the space of all items to the space of orthogonal dimensions.
Abstract: We introduce the problem of diverse dimension decomposition in transactional databases. A dimension is a set of mutually-exclusive item sets, and our problem is to find a decomposition of the item set space into dimensions, which are orthogonal to each other, and that provide high coverage of the input database. The mining framework we propose effectively represents a dimensionality-reducing transformation from the space of all items to the space of orthogonal dimensions. Our approach relies on information-theoretic concepts, and we are able to formulate the dimension-finding problem with a single objective function that simultaneously captures constraints on coverage, exclusivity and orthogonality. We describe an efficient greedy method for finding diverse dimensions from transactional databases. The experimental evaluation of the proposed approach using two real datasets, flickr and delicious, demonstrates the effectiveness of our solution. Although we are motivated by the applications in the collaborative tagging domain, we believe that the mining task we introduce in this paper is general enough to be useful in other application domains.
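A simplified skeleton of a greedy selection loop is sketched below, with transaction coverage and an item-overlap penalty standing in for the paper's single information-theoretic objective; the candidate pool, the penalty weight and the surrogate score are assumptions for illustration.

```python
# Simplified greedy skeleton: repeatedly pick the candidate dimension
# (a list of mutually exclusive itemsets, encoded as frozensets) that
# best trades off transaction coverage against item overlap with the
# dimensions already chosen. Transactions are sets of items.
def greedy_dimensions(candidates, transactions, k, penalty=0.5):
    def coverage(dim):
        return sum(any(itemset <= t for itemset in dim)
                   for t in transactions) / len(transactions)

    def overlap(d1, d2):
        i1, i2 = set().union(*d1), set().union(*d2)
        return len(i1 & i2) / max(1, len(i1 | i2))

    chosen, remaining = [], list(candidates)
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda d: coverage(d)
                   - penalty * sum(overlap(d, c) for c in chosen))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```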

Patent
24 Mar 2011
TL;DR: In this paper, the authors present an efficient and fast method for computing a streaming SVD over data sets such that errors, including reconstruction error and loss of orthogonality, are bounded.
Abstract: The present disclosure is directed to techniques for efficient streaming SVD computation. In an embodiment, streaming SVD can be applied to streamed data and/or to streamed processing of data. In another embodiment, the streamed data can include time series data, data in motion, and data at rest, where the data at rest can include data from a database or a file read in an ordered manner. More particularly, the disclosure is directed to an efficient and fast method for computing a streaming SVD over data sets such that errors, including reconstruction error and loss of orthogonality, are bounded. The method avoids SVD re-computation of already computed data sets and ensures updates to the SVD model by incorporating only the changes introduced by newly arriving data sets.
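The patent abstract gives no algorithmic details; as background, the sketch below shows a standard rank-one incremental SVD update (in the style of Brand's method) that appends one column without recomputing the decomposition from scratch. It only illustrates what avoiding re-computation can look like and is not the patented method.

```python
# Standard incremental SVD column update (Brand-style), shown as
# background only, not the patented procedure. X ≈ U @ diag(s) @ Vt,
# and c is a new data column to append.
import numpy as np

def svd_append_column(U, s, Vt, c):
    k, n = Vt.shape
    c = c.reshape(-1, 1)
    p = U.T @ c                                  # part of c inside the current subspace
    r = c - U @ p                                # residual orthogonal to it
    r_norm = float(np.linalg.norm(r))
    j = r / r_norm if r_norm > 1e-12 else np.zeros_like(r)

    # Small (k+1) x (k+1) core matrix; its SVD updates the factors cheaply.
    K = np.block([[np.diag(s),       p],
                  [np.zeros((1, k)), np.array([[r_norm]])]])
    Uk, sk, Vtk = np.linalg.svd(K)

    U_new = np.hstack([U, j]) @ Uk
    W = np.block([[Vt,               np.zeros((k, 1))],
                  [np.zeros((1, n)), np.ones((1, 1))]])
    Vt_new = Vtk @ W
    return U_new, sk, Vt_new                     # optionally truncate back to rank k
```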

Journal ArticleDOI
30 May 2011
TL;DR: In this paper, the usual notion of orthogonality was extended to Banach spaces, and a characterization of compact operators on Banach space that admit orthonormal Schauder bases was established.
Abstract: In this paper, we extend the usual notion of orthogonality to Banach spaces. Also, we establish a characterization of compact operators on Banach spaces that admit orthonormal Schauder bases. AMS Subject Classification (2000): 46B20, 47L05, 46A32, 46B28.
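The abstract does not say which extension of orthogonality is used; for reference, the most common one beyond inner-product spaces is Birkhoff–James orthogonality (the paper may well define a different notion): in a normed space $X$, an element $x$ is orthogonal to $y$ when

$$\|x\| \le \|x + \lambda y\| \quad \text{for every scalar } \lambda,$$

which reduces to the usual inner-product orthogonality when $X$ is a Hilbert space.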

Book ChapterDOI
01 Jan 2011
TL;DR: In this chapter, the authors try to justify the reason for this perception and to answer the question: is the same thing also true in ℝⁿ?
Abstract: We are used to thinking of orthogonal coordinate systems as the most interesting and most useful. This habit comes from studying the graphs of functions in the plane or in space. Is the same thing also true in ℝⁿ? In this chapter we will try to justify the reason for this perception and also to respond to the question above.

Proceedings ArticleDOI
16 Jul 2011
TL;DR: The research presents a new data pre-processing method, the Orthogonal Transformation Method (OTM), which improves the orthogonality of the data structure by adding variables, so that the accuracy of automatic class distribution for the imbalanced IC-product database is improved and thus the performance of IC design is upgraded.
Abstract: In the past, for imbalanced class distributions, representative class data were in most cases chosen by sampling, in order to improve the efficacy of the class distribution model in predicting the minority classes of the imbalanced data set. This research presents a new data pre-processing method, the Orthogonal Transformation Method (OTM), which, by integrating the concepts of Taguchi orthogonal arrays without changing the original data structure, improves the orthogonality of the data structure by adding variables. As a result, the accuracy of the automatic class distribution of the imbalanced IC-product database is improved, the range of information retrieval is accurately narrowed, and the efficiency and quality of retrieval are greatly improved, thus upgrading the performance of IC design. For the first year, the planned work and expected results are: the Orthogonal Transformation Method, its programming, and performance evaluation.
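For readers unfamiliar with Taguchi orthogonal arrays, the sketch below shows the smallest two-level array, L4(2³), and checks the balance property that makes it orthogonal; how OTM actually adds variables is not specified in the abstract, so that step is not reproduced here.

```python
# The L4(2^3) Taguchi orthogonal array and a check of its defining
# property: in every pair of columns, each combination of levels occurs,
# and occurs equally often.
from collections import Counter
from itertools import combinations

L4 = [(1, 1, 1),
      (1, 2, 2),
      (2, 1, 2),
      (2, 2, 1)]

def is_orthogonal(array):
    for i, j in combinations(range(len(array[0])), 2):
        pairs = Counter((row[i], row[j]) for row in array)
        levels_i = {row[i] for row in array}
        levels_j = {row[j] for row in array}
        full = len(pairs) == len(levels_i) * len(levels_j)   # every combination occurs
        balanced = len(set(pairs.values())) == 1             # ...equally often
        if not (full and balanced):
            return False
    return True

assert is_orthogonal(L4)
```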