Author

Francois Siewe

Bio: Francois Siewe is an academic researcher at De Montfort University. He has contributed to research on topics including context (language use) and ubiquitous computing. He has an h-index of 13 and has co-authored 87 publications receiving 615 citations. His previous affiliations include United Nations University and the University of Dschang.


Papers
Journal ArticleDOI
TL;DR: A new theory of equivalence of processes is proposed which allows the identification of systems that have the same context-aware behaviours and it is proved that CCA encodes the π-calculus which is known to be a universal model of computation.
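For background only (this grammar is standard and is not taken from the paper itself): the π-calculus that the TL;DR refers to is the name-passing process calculus whose untyped, monadic syntax is usually written as

P ::= 0 \;\mid\; \bar{x}\langle y\rangle.P \;\mid\; x(y).P \;\mid\; P \,|\, P \;\mid\; (\nu x)\,P \;\mid\; \,!P

that is, inaction, output of the name y on channel x, input on x binding y, parallel composition, restriction of a fresh name x, and replication. Proving that CCA encodes this calculus is what supports the claim of universal computational power.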

94 citations

Proceedings ArticleDOI
30 Oct 2003
TL;DR: A formal model for the specification of access control policies that can handle the enforcement of multiple policies through policy composition, and which can be used to specify other system properties such as functional and temporal requirements.
Abstract: Despite a considerable body of work on authorization models, enforcing multiple policies remains a challenge when aiming for the level of security required in many real-world systems. Moreover, current approaches address security settings independently, and their incorporation into the systems development lifecycle is not well understood. This paper presents a formal model for the specification of access control policies. The approach can handle the enforcement of multiple policies through policy composition. Temporal dependencies among authorizations can be formulated. Interval Temporal Logic (ITL) is our underlying formal framework, and policies are modeled as safety properties expressing how authorizations are granted over time. The approach is compositional and can be used to specify other system properties such as functional and temporal requirements. The use of a common formalism eases the integration of security requirements into system requirements so that they can be reasoned about uniformly throughout the development lifecycle. Furthermore, policy specifications are executable in Tempura, a simulation tool for ITL.
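As a rough illustration of composing access control policies as safety-style properties over a history of requests (a minimal Python sketch, not the paper's ITL formalisation or its Tempura encoding; all names such as Request, rbac_policy and temporal_policy are invented):

from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class Request:
    subject: str
    action: str
    resource: str

# A policy sees the whole request history (so temporal dependencies are
# expressible) and decides whether the latest request is authorized.
Policy = Callable[[List[Request]], bool]

def rbac_policy(trace: List[Request]) -> bool:
    # Only administrators may delete.
    admins = {"alice"}
    req = trace[-1]
    return req.action != "delete" or req.subject in admins

def temporal_policy(trace: List[Request]) -> bool:
    # Temporal dependency: a "write" is granted only if the same subject
    # has previously issued a "read" on the same resource.
    req = trace[-1]
    if req.action != "write":
        return True
    return any(r.subject == req.subject and r.resource == req.resource
               and r.action == "read" for r in trace[:-1])

def compose(*policies: Policy) -> Policy:
    # Policy composition as conjunction: every policy must grant the request.
    return lambda trace: all(p(trace) for p in policies)

combined = compose(rbac_policy, temporal_policy)

trace: List[Request] = []
for req in [Request("bob", "read", "file1"),
            Request("bob", "write", "file1"),
            Request("bob", "delete", "file1")]:
    trace.append(req)
    print(req, "->", "grant" if combined(trace) else "deny")

Composition by conjunction mirrors the compositional flavour of the abstract: a request is granted only if every constituent policy grants it, and each policy may consult the history to express temporal dependencies among authorizations.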

48 citations

Proceedings ArticleDOI
11 Oct 2009
TL;DR: This work proposes an automatic transformation system that builds Web Ontology Language (OWL) ontologies from a relational model written in Structured Query Language (SQL); the system also uses metadata to extract semantic aspects that cannot be inferred from the SQL alone.
Abstract: Today, databases provide the best technique for storing and retrieving data, but they lack a semantic perspective, which is needed to reach global goals such as the Semantic Web and data integration. Using ontologies solves this problem by enriching databases semantically. Since building an ontology from scratch is a very complicated task, we propose an automatic transformation system that builds Web Ontology Language (OWL) ontologies from a relational model written in Structured Query Language (SQL). Our system also uses metadata, which helps to extract semantic aspects that cannot be inferred from the SQL alone; the system analyzes database tuples to capture this metadata. Finally, the resulting ontology is validated manually by comparing it with a conceptual model of the database (an E/R diagram) in order to obtain the optimal ontology.
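As a rough sketch of the structural part of such a transformation (a minimal Python example assuming the common mapping rules table → owl:Class, column → owl:DatatypeProperty, foreign key → owl:ObjectProperty; the schema and names are invented, and the metadata and tuple-analysis steps of the authors' system are not modelled):

# Invented toy schema description: table -> columns (with XSD types) and
# foreign keys (column -> referenced table).
schema = {
    "Student": {"columns": {"id": "xsd:integer", "name": "xsd:string"},
                "foreign_keys": {}},
    "Enrolment": {"columns": {"id": "xsd:integer", "grade": "xsd:string"},
                  "foreign_keys": {"student_id": "Student"}},
}

def to_turtle(schema, base="http://example.org/onto#"):
    lines = [f"@prefix : <{base}> .",
             "@prefix owl: <http://www.w3.org/2002/07/owl#> .",
             "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .",
             "@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .", ""]
    for table, info in schema.items():
        lines.append(f":{table} a owl:Class .")           # table -> class
        for col, dtype in info["columns"].items():        # column -> datatype property
            lines.append(f":{table}_{col} a owl:DatatypeProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range {dtype} .")
        for fk, target in info["foreign_keys"].items():   # foreign key -> object property
            lines.append(f":{table}_{fk} a owl:ObjectProperty ; "
                         f"rdfs:domain :{table} ; rdfs:range :{target} .")
    return "\n".join(lines)

print(to_turtle(schema))

Running the script prints a small Turtle document; the metadata-driven refinements described in the abstract would be applied on top of such a skeleton.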

40 citations

Journal ArticleDOI
TL;DR: A novel machine-learning recommender algorithm is proposed that combines students' actual ratings with their learning styles to recommend personalised course learning objects (LOs), and that solves the cold-start and rating-sparsity problems using the FSLSM representations of the student learning styles and the learning object profiles.
Abstract: The e-learning recommender system in learning institutions is increasingly becoming the preferred mode of delivery, as it enables learning anytime, anywhere. However, delivering personalised course learning objects based on students' preferences is still a challenge. Current mainstream recommendation algorithms, such as Collaborative Filtering (CF) and Content-Based Filtering (CBF), deal with only two types of entities, namely users and items with their ratings. However, these methods do not pay attention to students' preferences, such as learning styles, which are especially important for the accuracy of course learning object prediction or recommendation. Moreover, several recommendation techniques suffer from cold-start and rating-sparsity problems. To address the challenge of improving the quality of recommender systems, this paper proposes a novel machine-learning recommender algorithm that combines students' actual ratings with their learning styles to recommend personalised course learning objects (LOs). Various recommendation techniques are considered in an experimental study investigating the best technique for predicting student ratings in e-learning recommender systems. We use the Felder-Silverman Learning Styles Model (FSLSM) to represent both the student learning styles and the learning object profiles. The predicted ratings are compared with the actual student ratings to determine the accuracy of the recommendation techniques, using the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) metrics. The approach was evaluated with 80 students on an online course created in the Moodle Learning Management System. The results of the experiment show that the best recommendation technique is our proposed hybrid recommendation algorithm, which combines the collaborative filtering and content-based filtering techniques to enhance the accuracy of the predictions, and which solves the cold-start and rating-sparsity problems using the FSLSM representations of the student learning styles and the learning object profiles.
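A minimal sketch of a hybrid prediction of this kind (invented toy data and weighting; it illustrates combining collaborative filtering with FSLSM-style content matching and computing MAE/RMSE, not the authors' actual algorithm):

import numpy as np

# rows = students, cols = learning objects; 0 means "not yet rated"
ratings = np.array([[5, 4, 0, 1],
                    [4, 0, 4, 1],
                    [1, 2, 5, 0]], dtype=float)

# FSLSM-style profiles over four dimensions (active-reflective,
# sensing-intuitive, visual-verbal, sequential-global); values invented.
students = np.array([[0.8, 0.2, 0.9, 0.1],
                     [0.7, 0.3, 0.8, 0.2],
                     [0.1, 0.9, 0.2, 0.8]])
objects = np.array([[0.9, 0.1, 0.8, 0.2],
                    [0.6, 0.4, 0.7, 0.3],
                    [0.2, 0.8, 0.3, 0.7],
                    [0.5, 0.5, 0.5, 0.5]])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def cf_predict(u, i):
    # User-based CF: other students' ratings weighted by rating similarity.
    sims, vals = [], []
    for v in range(ratings.shape[0]):
        if v != u and ratings[v, i] > 0:
            sims.append(cosine(ratings[u], ratings[v]))
            vals.append(ratings[v, i])
    return np.average(vals, weights=sims) if sims else ratings[ratings > 0].mean()

def cbf_predict(u, i):
    # Content-based: match the student's learning style against the LO
    # profile, rescaled onto the 1..5 rating scale.
    return 1 + 4 * cosine(students[u], objects[i])

def hybrid_predict(u, i, alpha=0.6):
    # Weighted combination of CF and CBF predictions.
    return alpha * cf_predict(u, i) + (1 - alpha) * cbf_predict(u, i)

# Evaluate against the known ratings with MAE and RMSE.
known = [(u, i) for u in range(3) for i in range(4) if ratings[u, i] > 0]
errors = np.array([hybrid_predict(u, i) - ratings[u, i] for u, i in known])
print("MAE :", np.mean(np.abs(errors)))
print("RMSE:", np.sqrt(np.mean(errors ** 2)))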

31 citations

Proceedings ArticleDOI
02 Jun 2008
TL;DR: This work investigates the enforcement of stateful policies in a concurrent environment using the UCON model and proposes a technique for enforcing policies concurrently based on the static analysis of dependencies between policies.
Abstract: Policy-based approaches to the management of systems distinguish between the specification of requirements, in the form of policies, and their enforcement on the system. In this work we focus on the latter aspect and investigate the enforcement of stateful policies in a concurrent environment. As a representative of stateful policies we use the UCON model and show how dependencies between policy rules affect their enforcement. We propose a technique for enforcing policies concurrently based on the static analysis of dependencies between policies. The potential of our technique for improving the efficacy of enforcement mechanisms is illustrated using a small, but representative example.
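A minimal sketch of the general idea (invented rule set; dependency here is simply a read/write conflict over shared attributes, which only approximates the paper's static analysis of UCON policies):

import threading
from itertools import combinations

# Each rule declares which mutable attributes it reads and writes.
RULES = {
    "quota":   {"reads": {"usage"},   "writes": {"usage"}},
    "payment": {"reads": {"balance"}, "writes": {"balance"}},
    "audit":   {"reads": {"usage"},   "writes": set()},
}

def conflicts(a, b):
    # Two rules conflict if one writes an attribute the other reads or writes.
    ra, wa = RULES[a]["reads"], RULES[a]["writes"]
    rb, wb = RULES[b]["reads"], RULES[b]["writes"]
    return bool(wa & (rb | wb) or wb & (ra | wa))

# Static analysis: group mutually dependent rules and give each group a lock.
groups = {name: {name} for name in RULES}
for a, b in combinations(RULES, 2):
    if conflicts(a, b):
        merged = groups[a] | groups[b]
        for r in merged:
            groups[r] = merged
locks = {frozenset(g): threading.Lock() for g in map(frozenset, groups.values())}

def enforce(rule, request):
    with locks[frozenset(groups[rule])]:   # serialise only dependent rules
        print(f"enforcing {rule} for {request}")

threads = [threading.Thread(target=enforce, args=(r, "req-1")) for r in RULES]
for t in threads:
    t.start()
for t in threads:
    t.join()

Independent rules (here, payment versus the quota/audit pair) acquire different locks and can therefore be enforced in parallel, while mutually dependent rules are serialised.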

26 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
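A minimal sketch of the mail-filtering example from the last category (toy data; scikit-learn is used here only as a convenient off-the-shelf learner, not as something the article prescribes):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# The user's past decisions serve as the training examples.
messages = ["cheap pills buy now", "meeting moved to 3pm",
            "win a free prize now", "project report attached"]
labels   = ["reject", "accept", "reject", "accept"]

# Learn a per-user filter from examples of accepted and rejected messages.
filter_model = make_pipeline(CountVectorizer(), MultinomialNB())
filter_model.fit(messages, labels)

print(filter_model.predict(["free pills prize"]))        # likely ['reject']
print(filter_model.predict(["report for the meeting"]))  # likely ['accept']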

13,246 citations


01 Apr 1997
TL;DR: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind, emphasising the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity.
Abstract: The objective of this paper is to give a comprehensive introduction to applied cryptography with an engineer or computer scientist in mind. The emphasis is on the knowledge needed to create practical systems that support integrity, confidentiality, or authenticity. Topics covered include an introduction to the concepts in cryptography, attacks against cryptographic systems, key use and handling, random bit generation, encryption modes, and message authentication codes. Recommendations on algorithms and further reading are given at the end of the paper. This paper should enable the reader to build, understand, and evaluate system descriptions and designs based on the cryptographic components described in the paper.
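For instance, a message authentication code, one of the topics listed, can be computed with Python's standard library (placeholder key and message):

import hmac, hashlib, secrets

key = secrets.token_bytes(32)          # shared secret key
message = b"transfer 100 to account 42"

# Sender computes the MAC tag over the message with the shared key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("MAC:", tag)

# The receiver recomputes the tag with the same key and compares in
# constant time to detect tampering.
received_tag = tag
print("authentic:", hmac.compare_digest(tag, received_tag))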

2,188 citations

Book ChapterDOI
04 Oct 2019
TL;DR: A computational complexity theory of the knowledge contained in a proof is developed, and zero-knowledge proofs are defined as those that convey no knowledge beyond the correctness of the proposition in question, with examples given for quadratic residuosity and nonresiduosity.
Abstract: Usually, a proof of a theorem contains more knowledge than the mere fact that the theorem is true. For instance, to prove that a graph is Hamiltonian it suffices to exhibit a Hamiltonian tour in it; however, this seems to contain more knowledge than the single bit Hamiltonian/non-Hamiltonian. In this paper a computational complexity theory of the “knowledge” contained in a proof is developed. Zero-knowledge proofs are defined as those proofs that convey no additional knowledge other than the correctness of the proposition in question. Examples of zero-knowledge proof systems are given for the languages of quadratic residuosity and quadratic nonresiduosity. These are the first examples of zero-knowledge proofs for languages not known to be efficiently recognizable.
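A toy simulation of the classic interactive proof for quadratic residuosity, the kind of protocol this abstract refers to (tiny modulus, honest prover, illustration only, not the paper's construction verbatim):

import random, math

n = 77                      # toy RSA-like modulus (7 * 11)
w = 15                      # prover's secret square root
x = (w * w) % n             # public claim: x is a quadratic residue mod n

def round_of_proof():
    # Prover commits to a random square.
    r = random.choice([v for v in range(1, n) if math.gcd(v, n) == 1])
    y = (r * r) % n
    # Verifier challenges with a random bit.
    b = random.randint(0, 1)
    # Prover answers with r or r*w depending on the challenge.
    z = (r * pow(w, b, n)) % n
    # Verifier checks z^2 = y * x^b (mod n); it learns nothing about w.
    return (z * z) % n == (y * pow(x, b, n)) % n

# Soundness error is 1/2 per round, so the verifier repeats many rounds.
print("accepted:", all(round_of_proof() for _ in range(20)))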

1,962 citations