
Showing papers by "Georgios Triantafyllidis published in 2016"


Proceedings ArticleDOI
06 Oct 2016
TL;DR: A novel framework is introduced that combines known image features with advanced linear image representations for weed recognition, resulting in a generic weed control approach that, to the authors' knowledge, is unique among weed recognition methods and systems.
Abstract: In this paper, we introduce a novel framework which applies known image features combined with advanced linear image representations for weed recognition. Our proposed weed recognition framework is based on state-of-the-art object/image categorization methods, exploiting enhanced performance using advanced encoding and machine learning algorithms. The resulting system can be applied to a variety of environments, plantation or weed types. This results in a novel and generic weed control approach that, to our knowledge, is unique among weed recognition methods and systems. For the experimental evaluation of our system, we introduce a challenging image dataset for weed recognition. We experimentally show that the proposed system achieves significant performance improvements in weed recognition in comparison with other known methods.
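As a rough illustration of the feature-encoding-plus-classifier pipeline the abstract describes, the sketch below uses colour histograms as the image features and a nearest-centroid classifier as the learning step; both choices are illustrative assumptions, not the paper's actual encoding or learner.

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Per-channel colour histogram: a simple stand-in for the
    paper's image features (hypothetical choice, not the authors')."""
    hist = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
            for c in range(image.shape[-1])]
    feat = np.concatenate(hist).astype(float)
    return feat / (feat.sum() + 1e-9)  # L1-normalised encoding

def train_centroids(features, labels):
    """Nearest-centroid classifier: one mean feature vector per class."""
    classes = sorted(set(labels))
    return {c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in classes}

def classify(feature, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(feature - centroids[c]))

# Toy data: "weed" patches are greenish, "crop" patches reddish.
rng = np.random.default_rng(0)
weed = [rng.integers(0, 256, (16, 16, 3)) * np.array([0.2, 1.0, 0.2]) for _ in range(5)]
crop = [rng.integers(0, 256, (16, 16, 3)) * np.array([1.0, 0.2, 0.2]) for _ in range(5)]
X = [colour_histogram(im) for im in weed + crop]
y = ["weed"] * 5 + ["crop"] * 5
model = train_centroids(X, y)
print(classify(colour_histogram(weed[0]), model))
```

In the actual framework, the histogram step would be replaced by the advanced encodings the paper evaluates, and the centroid rule by a trained machine learning model.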

23 citations


Proceedings ArticleDOI
01 Sep 2016
TL;DR: A system capable of monitoring human activity through head pose and facial expression changes is presented, utilising an affordable 3D sensing technology (the Microsoft Kinect sensor); a lightweight data exchange format (JavaScript Object Notation, JSON) is employed to manipulate the data extracted from the two estimation settings.
Abstract: Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need for generating comprehensible visual representations from different sets of data, we introduce a system capable of monitoring human activity through head pose and facial expression changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor). An approach built on discriminative random regression forests was selected in order to rapidly and accurately estimate head pose changes in unconstrained environments. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data exchange format (JavaScript Object Notation-JSON) is employed, in order to manipulate the data extracted from the two aforementioned settings. Such a mechanism can yield a platform for objective and effortless assessment of human activity within the context of serious gaming and human-computer interaction.
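The JSON exchange step the abstract mentions can be sketched as bundling the two estimators' outputs into a single record; the field names and schema below are assumptions for illustration, not the paper's actual format.

```python
import json

def make_activity_record(timestamp, head_pose, expression, confidence):
    """Bundle head-pose angles (degrees) and a recognised expression into
    one JSON record; this schema is a guess, not the paper's."""
    record = {
        "timestamp": timestamp,
        "head_pose": {"yaw": head_pose[0], "pitch": head_pose[1], "roll": head_pose[2]},
        "expression": {"label": expression, "confidence": confidence},
    }
    return json.dumps(record)

def merge_streams(pose_readings, expression_readings):
    """Pair pose and expression readings by index into JSON messages
    suitable for downstream visualisation or logging."""
    return [make_activity_record(i, p, e, c)
            for i, (p, (e, c)) in enumerate(zip(pose_readings, expression_readings))]

msgs = merge_streams([(12.0, -3.5, 0.8)], [("happiness", 0.91)])
print(msgs[0])
```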

2 citations


Proceedings ArticleDOI
28 Sep 2016
TL;DR: This paper introduces a novel representation for the classification of 3D images that is not based on a fixed pyramid but adapts to image content and uses image regions instead of rectangular pyramid scales.
Abstract: In this paper we introduce a novel representation for the classification of 3D images. Unlike most current approaches, our representation is not based on a fixed pyramid but adapts to image content and uses image regions instead of rectangular pyramid scales. Image characteristics, such as depth and color, are used for defining regions within images. Multiple region scales are formed in order to construct the proposed pyramid image representation. The proposed method achieves excellent results in comparison to conventional representations.
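A minimal sketch of a content-adaptive region pyramid in the spirit of the abstract, assuming that depth bands define the regions and that features are mean-pooled per region (both simplifications, not the paper's actual region definition):

```python
import numpy as np

def depth_regions(depth, n_levels):
    """Split the depth range into n_levels bands; each band is one region
    (a simplified stand-in for content-adaptive regions)."""
    lo, hi = depth.min(), depth.max()
    edges = np.linspace(lo, hi, n_levels + 1)
    return np.clip(np.digitize(depth, edges[1:-1]), 0, n_levels - 1)

def region_pyramid(features, depth, scales=(1, 2, 4)):
    """Pool a per-pixel feature map inside depth-defined regions at several
    region scales and concatenate, mimicking a region-based pyramid."""
    pooled = []
    for n in scales:
        labels = depth_regions(depth, n)
        for r in range(n):
            mask = labels == r
            pooled.append(features[mask].mean(axis=0) if mask.any()
                          else np.zeros(features.shape[-1]))
    return np.concatenate(pooled)

# Toy 4x4 image with 2-D per-pixel features; depth splits it into two halves.
feat = np.random.default_rng(1).random((4, 4, 2))
depth = np.array([[0.0] * 4] * 2 + [[1.0] * 4] * 2)
desc = region_pyramid(feat, depth)
print(desc.shape)  # 1 + 2 + 4 = 7 regions x 2 feature dims
```

A fixed spatial pyramid would pool over rectangular cells regardless of content; here the pooling regions follow the depth structure of the image, which is the key idea the abstract describes (the paper additionally uses colour when forming regions).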

2 citations


01 Aug 2016
TL;DR: In this article, a transdisciplinary model termed the "Architectural Experiment" is applied in a specific case by combining serial, parallel and iterative processes which include contextual analysis, architectural design, simulation, C++ programming, implementation of the dynamic smart-film diffuser, programming of voltage ranges on Arduino boards, rapid prototype construction and lighting technology.
Abstract: New lighting technologies may fulfill a need for holistic design methods by offering opportunities for both architects and engineers to apply methods and knowledge from media technology that combine daylight and interactive light, in order to complement and deepen an understanding of context. The framework combines daylight and interactive light and includes human needs analysis, spatial understanding, qualitative analysis, qualitative tests and visual assessments. A transdisciplinary model termed the "Architectural Experiment" is applied in a specific case by combining serial, parallel and iterative processes which include contextual analysis, architectural design, simulation, C++ programming, implementation of the dynamic smart-film diffuser, programming of voltage ranges on Arduino boards, rapid prototype construction and lighting technology.
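The voltage-range programming for the dynamic smart-film diffuser might, in spirit, look like the mapping below from a daylight sensor reading to a drive level; the thresholds, the linear mapping, and the Python setting (the actual firmware is C++ on Arduino boards) are all assumptions for illustration.

```python
def film_drive_level(lux, lux_min=100.0, lux_max=2000.0, pwm_max=255):
    """Map a daylight reading (lux) to a PWM drive level controlling a
    switchable smart-film diffuser. Thresholds and the linear mapping are
    illustrative guesses, not the project's measured voltage ranges."""
    if lux <= lux_min:
        return 0          # dim daylight: keep the film fully clear
    if lux >= lux_max:
        return pwm_max    # bright daylight: maximum diffusion
    frac = (lux - lux_min) / (lux_max - lux_min)
    return round(frac * pwm_max)

print(film_drive_level(50), film_drive_level(1050), film_drive_level(3000))
```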

1 citation


Book ChapterDOI
02 May 2016
TL;DR: It is investigated whether it is possible or not to develop an intelligent system that through a multimodal input detects the intended emotions of the played music and in real-time adjusts the lighting accordingly.
Abstract: Playing music is about conveying emotions and the lighting at a concert can help do that. However, new and unknown bands that play at smaller venues, and bands that don’t have the budget to hire a dedicated light technician, miss out on lighting that helps them convey the emotions of what they play. In this paper it is investigated whether or not it is possible to develop an intelligent system that, through a multimodal input, detects the intended emotions of the played music and adjusts the lighting accordingly in real time. A concept for such an intelligent lighting system is developed and described. Through existing research on music and emotion, as well as on musicians’ body movements related to the emotion they want to convey, a set of cues is defined. This includes amount, speed, fluency and regularity for the visual cues, and level, tempo, articulation and timbre for the auditory cues. Using a microphone and a Kinect camera to detect such cues, the system is able to detect the intended emotion of what is being played. Specific lighting designs are then developed to support the specific emotions, and the system is able to change between and alter the lighting designs based on the incoming cues. The results suggest that the intelligent emotion-based lighting system has an advantage over merely beat-synced lighting, and it is concluded that there is reason to explore this idea further.
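The cue-to-emotion and emotion-to-lighting steps described above could be sketched as a simple rule table; the thresholds, rules, and lighting designs below are illustrative guesses, not the system's actual mapping.

```python
def classify_emotion(level, tempo, articulation):
    """Rule-of-thumb mapping from auditory cues to one of the four target
    emotions (happiness, anger, sadness, surprise). Thresholds are
    illustrative assumptions, not the system's tuned values."""
    loud = level > 0.6              # normalised audio level in [0, 1]
    fast = tempo > 120              # beats per minute
    staccato = articulation < 0.5   # 0 = detached, 1 = legato
    if loud and fast:
        return "anger" if staccato else "happiness"
    if not loud and not fast:
        return "sadness"
    return "surprise"

def lighting_for(emotion):
    """Pick an illustrative lighting design per emotion (hypothetical)."""
    designs = {"happiness": ("warm yellow", "fast chase"),
               "anger": ("saturated red", "hard strobe"),
               "sadness": ("dim blue", "slow fade"),
               "surprise": ("white", "sudden flash")}
    return designs[emotion]

print(classify_emotion(0.8, 140, 0.9), lighting_for("sadness"))
```

The real system also weighs the visual cues (amount, speed, fluency, regularity of body movement) from the Kinect camera alongside these auditory ones.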

Book ChapterDOI
17 Jul 2016
TL;DR: The preliminary evaluation failed to provide solid evidence of the development of feelings of closeness in families with children where at least one parent has an irregular work schedule, and it indicated that both parents and children would prefer another shape and design.
Abstract: This paper presents MUVA (MUltimodal Visceral design Ambient device), a prototype for a storytelling light- and sound-based ambient device. The aim of this device is to encourage social interaction and expand the emotional closeness in families with children where at least one parent has an irregular work schedule. MUVA differs from other ambient devices because it is targeted at children, and it adopts a visceral design approach in order to be appealing to its users. It is a raindrop-shaped lamp which features audio playing, while its light colour is affected by the audio being played. MUVA can be used by parents to store pre-recorded audio of themselves telling stories, which their children can listen to when the parents are away. In order to investigate whether MUVA is appealing to its users and whether it creates feelings of closeness between parents and children when the former are absent, we conducted interviews and observations of children and an online survey study with parents. Our preliminary evaluation failed to provide solid evidence on the development of feelings of closeness. However, the majority of children participating in our test found the record function of the product enjoyable, while the majority of parents thought MUVA would be a fun communication method. Finally, our evaluation indicated that both parents and children would prefer another shape and design.

Book ChapterDOI
02 May 2016
TL;DR: This paper fuses effective and robust expression recognition algorithms with a neural network system that uses features extracted by the SIFT algorithm, in order to create an emotion index of cover song music video clips by recognizing and classifying the artist's facial expressions in the video.
Abstract: This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying the facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms employed for expression recognition with a neural network system that uses the features extracted by the SIFT algorithm. We also argue for this fusion of different expression recognition algorithms because of the way emotions are linked to facial expressions in music video clips.
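One simple way to turn per-frame expression labels into a clip-level emotion index is frequency aggregation; the rule below is an illustrative assumption, not the paper's exact scheme.

```python
from collections import Counter

def emotion_index(frame_labels):
    """Aggregate per-frame facial-expression labels into a clip-level
    emotion index: the fraction of frames per emotion, plus the dominant
    emotion. The aggregation rule here is an illustrative assumption."""
    counts = Counter(frame_labels)
    total = sum(counts.values())
    index = {label: n / total for label, n in counts.items()}
    dominant = max(index, key=index.get)
    return index, dominant

# Hypothetical per-frame classifier outputs for one video clip.
frames = ["happiness", "happiness", "surprise", "happiness", "sadness"]
idx, top = emotion_index(frames)
print(top, idx["happiness"])
```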