Author

Douglas Dyllon Jeronimo de Macedo

Bio: Douglas Dyllon Jeronimo de Macedo is an academic researcher at Universidade Federal de Santa Catarina. He has contributed to research on topics including computer science and DICOM, has an h-index of 8, and has co-authored 76 publications receiving 273 citations. His previous affiliations include the International Federation of Sport Climbing and Universidade Federal do Pampa.


Papers
Journal ArticleDOI
03 Apr 2017
TL;DR: A survey of value creation in Big Data environments, based on a qualitative approach of a bibliographical, exploratory, and descriptive nature, which aims to compare its preliminary results with the typical perception of value creation found in the literature.
Abstract: Knowing how value creation is understood and managed by Big Data-based companies can be a key strategy to boost business. The typical process of identifying value creation in organizations is quite complex, since it involves factors both internal and external to them. In Big Data scenarios, we should also consider the uncertainty about future processes of value creation, given the trends inferred by predictive analytics. Big Data environments operate at the scale of large volumes of parallel-processed data and aim to generate relevant information that would otherwise be impossible for traditional systems to produce, especially if we expect good transaction speed and the ability to cope with the extensive variety of data types inherent to such environments. To properly identify and work with the idea of value in their businesses, Big Data-based companies therefore face far more challenging hurdles. This paper proposes a survey of the creation of value in such environments. For this purpose, we undertook a theoretical study that relied on a qualitative method of a bibliographical, exploratory, and descriptive nature. Finally, since this work is still in progress and not yet a conclusive proposal, we aim to compare our preliminary results with the typical perception of value creation found in the literature.

33 citations

Journal ArticleDOI
TL;DR: The authors designed an agile, easy-to-use, high-quality telemedicine network that provides greater access to patient data and supports medical decisions, cutting unnecessary costs to the state and benefiting the population as a whole.
Abstract: Motivated by the need to reduce the cost of patient transport to health centers, the authors designed a prototype national telemedicine network in Brazil. As a result of this project, there is now an agile, easy-to-use, high-quality telemedicine network that provides greater access to patient data and helps in medical decisions, cutting unnecessary costs to the state and benefiting the population as a whole.

31 citations

Proceedings ArticleDOI
13 Jun 2016
TL;DR: An infrastructure model for big data in a smart city project is proposed, presenting the stages of data extraction, storage, processing, and visualization, as well as the types of tools needed for each phase.
Abstract: Projects focused on smart cities have spread in recent years. The massive amount of data generated by these initiatives creates considerable complexity in managing all this information. In this paper we propose an infrastructure model for big data in a smart city project. The goal of this model is to present the stages of data handling, namely extraction, storage, processing, and visualization, as well as the types of tools needed for each phase. To implement our proposed model, we used ParticipACT Brazil, a project based on smart cities. This project combines different databases to compose its big data and uses this data to seek solutions to urban problems. We observe that our model provides a structured vision of the software to be used in the ParticipACT Brazil big data server. In addition, we note that our model can be used in other big data servers.
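As a rough illustration of the four-stage model described above (not the authors' ParticipACT implementation; the stage functions and source names below are hypothetical), such a pipeline can be sketched in Python as a chain of stages, each consuming the previous stage's output:

from typing import Iterable

# Hypothetical sketch of the four-stage big data pipeline from the paper
# (extraction -> storage -> processing -> visualization). The stage
# functions and data shapes are illustrative, not ParticipACT's code.

def extract(sources: Iterable[str]) -> list[dict]:
    # Pull raw records from heterogeneous city data sources (stubbed).
    return [{"source": s, "payload": f"raw data from {s}"} for s in sources]

def store(records: list[dict]) -> list[dict]:
    # Persist raw records; here we only tag them as stored (stubbed).
    for record in records:
        record["stored"] = True
    return records

def process(records: list[dict]) -> dict:
    # Aggregate stored records into an analysis-ready summary (stubbed).
    return {"record_count": len(records),
            "sources": sorted({r["source"] for r in records})}

def visualize(summary: dict) -> None:
    # A real deployment would render dashboards; we print the summary.
    print(f"{summary['record_count']} records from {summary['sources']}")

def run_pipeline(sources: Iterable[str]) -> None:
    # Each stage feeds the next, mirroring the model's phase ordering.
    visualize(process(store(extract(sources))))

run_pipeline(["traffic_sensors", "social_media", "service_calls"])

In a production setting, each stage would map to the category of tools the model assigns to that phase.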

20 citations

Journal ArticleDOI
TL;DR: This paper presents an architecture targeting high performance in storing and retrieving DICOM medical images, adopting a distributed approach in a cluster configuration, and reports a performance improvement of around 16% in the storage process.
Abstract: Conventional storage and retrieval of information in telemedicine environments is usually based on ordinary database systems. Aspects such as scalability, information distribution, high-performance system techniques and operational costs are well-known challenges to be overcome in the search for novel proposals in the field of large-scale telemedicine systems. In this paper we present an architecture that targets high performance in storing and retrieving DICOM medical images, adopting a distributed approach in a cluster configuration. Our proposal has two main components: the first is a data model based on the image hierarchy, built on the hierarchical data format 5 (HDF5); the second is a distributed file system, the parallel virtual file system (PVFS), employed in this proposal as the distributed data storage layer. As a result, this paper presents a differentiated approach to the storage and retrieval of information in a telemedicine environment. Experimental results using the architecture indicate a performance gain of around 16% in the storage process compared to a conventional database system.
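To make the HDF5 side of the design concrete, here is a minimal, hypothetical sketch of storing a DICOM-style image in an HDF5 hierarchy with Python's h5py; the patient/study/series group layout and attribute names are assumptions rather than the paper's published schema, and PVFS, which would sit underneath as the distributed file system, is not shown:

import numpy as np
import h5py

# Stand-in for decoded DICOM pixel data (a real system would read it
# from a DICOM file, e.g. with pydicom).
pixel_data = np.zeros((512, 512), dtype=np.uint16)

with h5py.File("archive.h5", "w") as f:
    # Mirror the DICOM information hierarchy as nested HDF5 groups.
    series = f.create_group("patient_001/study_01/series_01")
    image = series.create_dataset("image_0001", data=pixel_data,
                                  compression="gzip")
    # Keep selected DICOM tags as HDF5 attributes for retrieval queries.
    image.attrs["Modality"] = "MR"
    image.attrs["Rows"], image.attrs["Columns"] = pixel_data.shape

with h5py.File("archive.h5", "r") as f:
    ds = f["patient_001/study_01/series_01/image_0001"]
    print(ds.shape, ds.attrs["Modality"])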

16 citations

Journal ArticleDOI
TL;DR: The findings indicate that building and deploying resilient and reliable critical services is an achievable goal through a set of system design artifacts based on well-established concepts in the fields of security and dependability.

16 citations


Cited by
Proceedings Article
01 Jan 2002
TL;DR: An algorithm for generating attack graphs using model checking as a subroutine is presented, together with a minimization analysis that lets analysts decide which minimal set of security measures would guarantee the safety of the system.
Abstract: An attack graph is a succinct representation of all paths through a system that end in a state where an intruder has successfully achieved his goal. Today, Red Teams determine the vulnerability of networked systems by drawing gigantic attack graphs by hand. Constructing attack graphs by hand is tedious, error-prone, and impractical for large systems. By viewing an attack as a violation of a safety property, we can use off-the-shelf model checking technology to produce attack graphs automatically: a successful path from the intruder's viewpoint is a counterexample produced by the model checker. In this paper we present an algorithm for generating attack graphs using model checking as a subroutine. Security analysts use attack graphs for detection, defense and forensics. We also present a minimization analysis technique that allows analysts to decide which minimal set of security measures would guarantee the safety of the system. We provide a formal characterization of this problem: we prove that it is polynomially equivalent to the minimum hitting set problem and we present a greedy algorithm with provable bounds. We also present a reliability analysis technique that allows analysts to perform a simple cost-benefit trade-off depending on the likelihoods of attacks. By interpreting attack graphs as Markov Decision Processes, we can use the value iteration algorithm to compute the probabilities of intruder success for each attack in the graph.
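Since the measure-selection problem is equivalent to minimum hitting set, the greedy heuristic the paper mentions can be sketched briefly; the encoding of attack paths as sets of blocking measures below is illustrative, not the paper's own representation:

# Greedy heuristic for minimum hitting set: repeatedly pick the security
# measure that blocks the most attack paths still unblocked. Each path is
# modeled as the set of measures that would stop it (hypothetical encoding).

def greedy_hitting_set(paths: list[set[str]]) -> set[str]:
    chosen: set[str] = set()
    remaining = [p for p in paths if p]
    while remaining:
        counts: dict[str, int] = {}
        for path in remaining:
            for measure in path:
                counts[measure] = counts.get(measure, 0) + 1
        best = max(counts, key=counts.get)  # measure hitting most paths
        chosen.add(best)
        remaining = [p for p in remaining if best not in p]
    return chosen

# Three attack paths, each listing the measures that would block it.
paths = [{"patch_A", "firewall"}, {"patch_B", "firewall"}, {"patch_A"}]
print(greedy_hitting_set(paths))  # a small set of measures hitting all paths

Like other greedy set-cover variants, this yields a logarithmic-factor approximation rather than an exact minimum.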

467 citations

Journal ArticleDOI
TL;DR: This paper discusses the similarities and differences among Big Data technologies used in different IoT domains, suggests how certain Big Data technologies used in one IoT domain can be reused in another IoT domain, and develops a conceptual framework to outline the critical Big Data technologies across all the reviewed IoT domains.

213 citations

Journal ArticleDOI
TL;DR: All four types of cloud models available in the market are defined, discussed and compared along with their benefits and pitfalls, giving a clear idea of which model to adopt for your organization.
Abstract: These days cloud computing is booming like no other technology. Every organization, whether small, mid-sized or big, wants to adopt this cutting-edge technology for its business. As cloud technology becomes immensely popular among these businesses, the question arises: which cloud model should you consider for your business? There are four types of cloud models available in the market: Public, Private, Hybrid and Community. This review paper answers the question of which model would be most beneficial for your business. All four models are defined, discussed and compared along with their benefits and pitfalls, giving you a clear idea of which model to adopt for your organization.

171 citations

01 Jan 2005
TL;DR: AGWL, as discussed by the authors, is an XML-based language which allows a programmer to define a graph of activities that refer mostly to computational tasks and are connected by control and data flow links.
Abstract: Currently, Grid application developers often configure available application components into a workflow of tasks that they can submit for execution on the Grid. In this paper, we present an Abstract Grid Workflow Language (AGWL) for describing Grid workflow applications at a high level of abstraction. AGWL has been designed so that the user can concentrate on specifying Grid applications without dealing with either the complexity of the Grid or any specific implementation technology (e.g. Web services). AGWL is an XML-based language which allows a programmer to define a graph of activities that refer mostly to computational tasks. Activities are connected by control and data flow links. A rich set of constructs (compound activities) is provided to simplify the specification of Grid workflow applications, including compound activities such as if, forEach and while loops, as well as advanced compound activities including parallel sections, parallel loops and collection iterators. Moreover, AGWL supports a generic high-level access mechanism to data repositories. AGWL is the main interface to the ASKALON Grid application development environment and has been applied to numerous real-world applications. We describe a material science workflow that has been successfully ported to a Grid infrastructure based on an AGWL specification. Only a dozen AGWL activities are needed to describe a workflow with several hundred activity instances.
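To illustrate the underlying model (a graph of activities joined by control and data flow links), here is a small hypothetical Python sketch; real AGWL workflows are written in XML, and the class and field names below are assumptions:

from dataclasses import dataclass, field

# Illustrative model of the structure AGWL describes: activities
# (computational tasks) connected by control and data flow links.
# The class and field names are hypothetical, not AGWL syntax.

@dataclass
class Activity:
    name: str
    control_successors: list["Activity"] = field(default_factory=list)
    data_inputs: dict[str, "Activity"] = field(default_factory=dict)

prepare = Activity("prepare_input")
simulate = Activity("run_simulation")
collect = Activity("collect_results")

# Control flow: prepare -> simulate -> collect.
prepare.control_successors.append(simulate)
simulate.control_successors.append(collect)

# Data flow: each activity names the producer of each input port.
simulate.data_inputs["input_file"] = prepare
collect.data_inputs["raw_results"] = simulate

for act in (prepare, simulate, collect):
    successors = [a.name for a in act.control_successors]
    producers = [a.name for a in act.data_inputs.values()]
    print(f"{act.name}: next={successors}, reads_from={producers}")

Compound constructs such as AGWL's parallel loops would correspond to nodes that expand into sub-graphs of such activities.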

132 citations

Journal ArticleDOI
TL;DR: A novel method called ConnCrack, combining a conditional Wasserstein generative adversarial network with connectivity maps, is proposed for road crack detection; it achieves state-of-the-art performance compared with other existing methods in terms of precision, recall and F1 score.
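The paper's architecture is not reproduced here, but the conditional Wasserstein critic objective that such methods build on can be sketched generically in PyTorch; the network, tensor shapes and channel layout below are placeholders, not ConnCrack's design:

import torch

# Generic sketch of a conditional Wasserstein GAN critic step: the critic
# scores (image, crack map) pairs, conditioning on the road image. This is
# a placeholder network, not ConnCrack's architecture.

critic = torch.nn.Sequential(
    torch.nn.Conv2d(4, 16, 3, padding=1),  # 3 image channels + 1 map channel
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 1),
)

images = torch.randn(8, 3, 64, 64)    # road images (the condition)
real_maps = torch.rand(8, 1, 64, 64)  # ground-truth crack maps
fake_maps = torch.rand(8, 1, 64, 64)  # stand-in for generator output

# Score real and generated maps, each concatenated with its image.
real_score = critic(torch.cat([images, real_maps], dim=1)).mean()
fake_score = critic(torch.cat([images, fake_maps], dim=1)).mean()

# Wasserstein critic loss: push real scores up, generated scores down.
critic_loss = fake_score - real_score
critic_loss.backward()
print(float(critic_loss))

A full training loop would alternate this critic step with generator updates and add a Lipschitz constraint (e.g. a gradient penalty).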

105 citations