
Showing papers on "Enterprise software" published in 2020


Proceedings ArticleDOI
26 Jul 2020
TL;DR: A custom management software stack is built to enable an efficient use of the system by a diverse community of users and provide guides and recipes for running deep learning workloads at scale utilizing all available GPUs.
Abstract: We describe the design, deployment and operation of a computer system built to efficiently run deep learning frameworks. The system consists of 16 IBM POWER9 servers with 4 NVIDIA V100 GPUs each, interconnected with Mellanox EDR InfiniBand fabric, and a DDN all-flash storage array. The system is tailored towards efficient execution of the IBM Watson Machine Learning enterprise software stack, which combines popular open-source deep learning frameworks. We build a custom management software stack to enable efficient use of the system by a diverse community of users and provide guides and recipes for running deep learning workloads at scale utilizing all available GPUs. We demonstrate scaling of PyTorch- and TensorFlow-based deep neural networks to produce state-of-the-art performance results.
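To make the scaling setup above concrete, here is a minimal sketch of multi-GPU data-parallel training with PyTorch DistributedDataParallel. The model, synthetic dataset, and hyperparameters are placeholders chosen for illustration; this is not the IBM Watson Machine Learning stack or the authors' management software.

```python
# Minimal PyTorch DistributedDataParallel sketch (placeholder model/data).
# Launch with: torchrun --nproc_per_node=4 train.py  (one process per GPU)
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    dist.init_process_group(backend="nccl")            # NCCL for GPU collectives
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])         # gradients averaged across all ranks

    data = TensorDataset(torch.randn(4096, 1024), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data)                  # each rank sees a distinct shard
    loader = DataLoader(data, batch_size=64, sampler=sampler)

    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)                        # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with torchrun (for example `torchrun --nproc_per_node=4 train.py` on a 4-GPU node), each rank trains on its own data shard while DDP averages gradients across all GPUs.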

52 citations


Journal ArticleDOI
TL;DR: In this article, the authors present guidelines for applied machine learning in the construction industry, from training to operationalizing models, drawn from their experience of working with construction practitioners to deliver a Construction Simulation Tool (CST).

34 citations


Journal ArticleDOI
TL;DR: In this paper, the authors examined the impact of information and communication technology (ICT) on firm-level productivity in Turkey using a novel longitudinal data set and found that the contribution of ICT labor is higher in the services than the manufacturing sector.
Abstract: This paper examines the impact of information and communication technology (ICT) on firm-level productivity in Turkey using a novel longitudinal data set. We combine two firm-level data sets compiled by the Turkish Statistical Institute and construct an unbalanced panel data set covering the period 2007–2014. Our data set matches firms in the ICT Usage in Enterprises Survey to the Annual Industry and Service Statistics Survey that includes data on production factors and firm characteristics. We estimate production functions augmented by ICT labor, software investments, and indicators for the usage of enterprise system applications including ERP, CRM, and SCM. Our results confirm that there is a positive relationship between firm-level productivity and ICT use. Empirical findings support the complementarity hypothesis between ICT labor and software usage variables. This result implies that the productivity contribution of ICT labor is larger in firms using specialized software. Accordingly, firms need to invest in ICT labor to reap the benefits from enterprise software investments. We also estimate our models using sub-samples based on size and sector. While the elasticity of ICT labor is higher in small and medium-size firms, larger firms have greater marginal product than smaller firms. Our results also suggest that the strength of the link between ICT and productivity may be different across sectors. In particular, we find that the productivity contribution of ICT labor is higher in the services than the manufacturing sector.
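A stylized version of the kind of ICT-augmented production function such a study estimates, assuming a log-linear (Cobb-Douglas) form; the notation and the exact set of regressors below are illustrative, not the paper's own specification.

```latex
\ln Y_{it} = \beta_0 + \beta_K \ln K_{it} + \beta_L \ln L_{it}
           + \beta_I \ln L^{ICT}_{it} + \beta_S\, SW_{it} + \beta_E\, ES_{it}
           + \gamma \left( \ln L^{ICT}_{it} \times SW_{it} \right) + \varepsilon_{it}
```

Here Y is output, K and L are capital and labor, L^ICT is ICT labor, and SW and ES indicate software investment and enterprise system usage (ERP, CRM, SCM); a positive interaction coefficient gamma is what the complementarity hypothesis between ICT labor and software usage would predict.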

23 citations


Journal ArticleDOI
TL;DR: An in-depth six-year study of the use of an enterprise resource planning system by a large information technology service provider company shows that users balance two patterns of routine performance through different technologies with varying degrees of malleability: one to generate fluidity and another to generate stability.
Abstract: Advanced workplace technologies are increasingly used alongside traditional enterprise software packages (such as enterprise resource planning) in the workplace. However, we have only limited understanding of how different kinds of technologies are used to dynamically shape work routines and fluidity in a digital workplace. We conducted an in-depth six-year study of the use of an enterprise resource planning (ERP) system by a large information technology service provider company. The company used the system to manage its global staffing processes. We explored how the users of this system sought to achieve the fluidity needed to do their work. Our findings show that users balance two patterns of routine performance through different technologies with varying degrees of malleability: one to generate fluidity and another to generate stability. We call this process ‘generative balancing’. Our research contributes to the literature on workplace technologies and ERP use by providing insights into how the use of technologies with different degrees of malleability helps to craft digital workspaces and enables users to deal with tensions between accomplishing local-level performance and realizing corporate-level strategic intents.

21 citations


Book ChapterDOI
01 Jan 2020
TL;DR: In this article, a case study was conducted, and interview data were gathered from 15 IT professionals in a Phoenix, Arizona company to explore the critical successful factors in the implementation of ERP systems.
Abstract: Enterprise resource planning (ERP) systems are considered, by many, to be extremely solid, while giving organizations the ability to quickly capture and manage data across diverse sectors. Because the successful employment of an ERP system depends upon skillful implementation, specific factors contributing to successful ERP implementation are essential. What are the critical factors in the implementation of ERP system? How do company administrators and IT professionals perceive the critical successful factors for the effective implementation of the ERP? How are critical successful factors defined? How do IT professionals perceive the influence of critical factors on the effective implementation of ERP in a Phoenix company? In this chapter, the critical successful factors in the implementation of ERP systems will be explored. A single case study was conducted, and the interview data were gathered from 15 IT professionals in a Phoenix, Arizona company. Problems, solutions, recommendations, and future research direction will be presented.

19 citations


Journal ArticleDOI
TL;DR: This manuscript aims to challenge the mainstream research directions of code analysis and to motivate a transition towards code analysis of enterprise systems, with its interesting problems and opportunities, and suggests one possible perspective on the problem area using aspect-oriented programming.
Abstract: Code analysis brings excellent benefits to software development, maintenance, and quality assurance. Various tools can uncover code defects or even software bugs in a matter of seconds. For many projects and developers, code analysis tools have become essential in their daily routines. However, how can code analysis help in an enterprise environment? Enterprise software solutions grow in scale and complexity. These solutions no longer involve only plain objects and basic language constructs but operate with various components and mechanisms that simplify the development of such systems. Enterprise software vendors have adopted various development and design standards; however, there is a gap between the constructs the enterprise frameworks use and what current code analysis tools recognize. This manuscript aims to challenge the mainstream research directions of code analysis and to motivate a transition towards code analysis of enterprise systems, with its interesting problems and opportunities. In particular, this manuscript addresses selected enterprise problems apparent in monolithic and distributed enterprise solutions. It also considers challenges related to the recent architectural push towards a microservice architecture. Along with open-source proof-of-concept prototypes for some of the challenges, this manuscript elaborates on code analysis directions and their categorization. Furthermore, it suggests one possible perspective on the problem area using aspect-oriented programming.
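As a toy illustration of the aspect-oriented perspective mentioned above (in Python rather than an enterprise framework): a cross-cutting concern such as auditing is woven around business methods by a decorator, so the behavior is real at runtime yet invisible to an analysis that only inspects the business logic. The service and method names are invented for the example.

```python
# Toy aspect-oriented sketch: an auditing "aspect" is woven around business
# methods with a decorator, so the cross-cutting behavior never appears in the
# business logic that a naive code analysis would inspect.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def audited(func):
    """Advice applied around the join point `func`."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("ENTER %s args=%s", func.__name__, args[1:])
        result = func(*args, **kwargs)
        logging.info("EXIT  %s -> %s", func.__name__, result)
        return result
    return wrapper

class InvoiceService:
    @audited                      # the aspect is attached declaratively
    def approve(self, invoice_id: int) -> str:
        # plain business logic; nothing here mentions auditing
        return f"invoice {invoice_id} approved"

if __name__ == "__main__":
    InvoiceService().approve(42)
```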

18 citations



Journal ArticleDOI
TL;DR: This paper provides unique insights into how the acquisition process for SaaS differs from the extant models used to explain enterprise software acquisitions; an understanding of how business users conduct information search will help software vendors target business users better.
Abstract: Organizations worldwide are adopting software as a service (SaaS) applications, where they pay a subscription fee to gain access rather than buying the software. The extant models on software acquisition processes, several of which are based on organizational buying behavior, do not sufficiently explain how SaaS application acquisition decisions are made. This study aims to investigate the acquisition process organizations follow for SaaS software, the changes to the roles of the Chief Information Officer (CIO) and the business user, and the impact of SaaS on the proliferation of unauthorized software systems. The authors used exploratory research following the grounded theory approach, based on 18 in-depth interviews with respondents who have worked with enterprise software delivered on-premise and as SaaS in different roles such as sales, consulting, CIO, information technology (IT) management and product development. The authors identified a need to classify SaaS software and developed a framework that uses software specificity and its strategic importance to the organization to classify SaaS applications. The aforementioned framework is used to explain how software evaluation processes have changed for different kinds of SaaS applications. The authors also found that the roles of the CIO and the business users have changed substantially in SaaS application evaluations, and found evidence that shadow IT will be restricted to some classes of SaaS applications. By focusing on the changes to the roles and responsibilities of the members of the buying center, this paper provides unique insights into how the acquisition process of SaaS differs from the extant models used to explain enterprise software acquisitions. An understanding of how business users conduct information search will help software vendors target business users better.

10 citations


Journal ArticleDOI
01 Apr 2020
TL;DR: This paper investigates how workers navigate these environments through a qualitative study of the work practices of employees in an app-enhanced organization, and sets out a call for further attention to the material and lively dimensions of software and the emergent challenges they pose for contemporary work practices.
Abstract: This paper draws attention to the growing adoption of web and mobile apps in the enterprise, typically supported by digital storage in the cloud. While these developments offer several advantages, they also pose challenges for workers who must make sense of increasingly complex software configurations – with apps accessible from multiple devices (typically supporting different features or capabilities) and used alongside legacy enterprise software. We investigate how workers navigate these environments through a qualitative study of the work practices of employees in an app-enhanced organization. Our findings focus on two sets of practices. The first involves appraising what software programs and software/device combos offer what features, what we refer to as software calculus. The second involves orienting towards data formats and database structures that underlie specific software programs and their interactions with specific devices, what we label data thinking. Building on prior work on the role of materiality in CSCW, our findings set out a call for further attention to the material and lively dimensions of software and the emergent challenges they pose for contemporary work practices.

7 citations


Book ChapterDOI
25 Nov 2020
TL;DR: In this article, the authors propose a holistic and data-driven framework for continuous and automated acquisition, analysis and aggregation of heterogeneous digital sources for the purposes of requirements elicitation and management.
Abstract: Increased digitalization and the pervasiveness of Big Data, along with vastly improved data processing capabilities, have led to the consideration of digital data as additional sources of system requirements, complementing conventional stakeholder-driven approaches. The volume, velocity and variety of these digital sources present numerous challenges which existing system development methods are unable to manage in a systematic and efficient manner. We propose a holistic and data-driven framework for continuous and automated acquisition, analysis and aggregation of heterogeneous digital sources for the purposes of requirements elicitation and management. The proposed framework includes a conceptualization in the form of a meta-model and a high-level process for its use; the framework is illustrated through a real case involving an enterprise software product.

7 citations


Posted Content
TL;DR: A methodology conceived to support non-technical people in addressing business process innovation and developing enterprise software applications, called EasInnova, is solidly rooted in Model-Driven Engineering and adopts a three-stage model of an innovation undertaking.
Abstract: Low Code platforms, according to Gartner Group, represent one of the more disruptive technologies in the development and maintenance of enterprise applications. The key factor is the central involvement of business people and domain experts, with substantial disintermediation with respect to technical people. In this paper we propose a methodology conceived to support non-technical people in addressing business process innovation and developing enterprise software applications. The proposed methodology, called EasInnova, is solidly rooted in Model-Driven Engineering and adopts a three-stage model of an innovation undertaking. The three stages are: AsIs, which models the existing business scenario; Transformation, which consists in the elaboration of the actual innovation; and ToBe, which concerns the modeling of the new business scenario. The core of EasInnova is a matrix whose columns are the three innovation stages and whose rows are the three Model-Driven Architecture layers: CIM, PIM, PSM. The cells indicate the steps to be followed in achieving the sought innovation. Finally, the produced models are transferred onto BonitaSoft, the Low Code platform selected in our work. The methodology is described by means of a simple example in the domain of home food delivery.
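A minimal data-structure sketch of the 3x3 EasInnova matrix described above, with MDA layers as rows and innovation stages as columns; the cell texts are hypothetical placeholders, not the methodology's actual step definitions.

```python
# Illustrative sketch of the EasInnova matrix: MDA layers (rows) x innovation
# stages (columns). Cell contents are hypothetical placeholders.
STAGES = ["AsIs", "Transformation", "ToBe"]
LAYERS = ["CIM", "PIM", "PSM"]

easinnova_matrix = {
    ("CIM", "AsIs"): "model the existing business scenario",
    ("CIM", "Transformation"): "elaborate the business-level innovation",
    ("CIM", "ToBe"): "model the new business scenario",
    ("PIM", "AsIs"): "derive platform-independent models of current processes",
    ("PIM", "Transformation"): "refactor models to reflect the innovation",
    ("PIM", "ToBe"): "specify platform-independent target models",
    ("PSM", "AsIs"): "document the current platform-specific artifacts",
    ("PSM", "Transformation"): "map the changes onto the target platform",
    ("PSM", "ToBe"): "generate artifacts for the Low Code platform (e.g. BonitaSoft)",
}

if __name__ == "__main__":
    for layer in LAYERS:
        for stage in STAGES:
            print(f"{layer:>3} / {stage:<14}: {easinnova_matrix[(layer, stage)]}")
```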

Proceedings ArticleDOI
30 Mar 2020
TL;DR: This work proposed an approach to identify semantic code clones in enterprise frameworks by using control flow graphs (CFGs) and applying various proprietary similarity functions to compare enterprise-targeted metadata for each pair of CFGs, which enables it to detect semantic code clones with high accuracy within a time complexity of O(n²).
Abstract: Enterprise systems are widely adopted across industries as methods of solving complex problems. As software complexity increases, the software's codebase becomes harder to manage and maintenance costs rise significantly. One such source of cost-raising complexity and code bloat is code clones. We proposed an approach to identify semantic code clones in enterprise frameworks by using control flow graphs (CFGs) and applying various proprietary similarity functions to compare enterprise-targeted metadata for each pair of CFGs. This approach enables us to detect semantic code clones with high accuracy within a time complexity of O(n²), where n is the number of CFGs in the enterprise application (usually in the hundreds). We demonstrated our solution in a blind study utilizing a production enterprise application.
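A simplified sketch of the pairwise O(n²) comparison described above: each CFG is reduced to a set of metadata labels and every pair is scored with a Jaccard-style similarity. The real approach relies on richer enterprise-targeted metadata and proprietary similarity functions; the names and threshold below are illustrative.

```python
# Simplified sketch of pairwise CFG comparison for clone detection. Each CFG is
# reduced to a set of metadata labels; the pairwise loop is O(n^2) in the
# number of CFGs.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def find_clone_candidates(cfgs: dict[str, set], threshold: float = 0.7):
    """Compare every pair of CFGs and keep pairs above the similarity threshold."""
    candidates = []
    for (name1, meta1), (name2, meta2) in combinations(cfgs.items(), 2):
        score = jaccard(meta1, meta2)
        if score >= threshold:
            candidates.append((name1, name2, score))
    return candidates

if __name__ == "__main__":
    cfgs = {
        "OrderService.place":  {"validate", "persist", "publishEvent", "audit"},
        "QuoteService.create": {"validate", "persist", "publishEvent"},
        "ReportJob.run":       {"query", "aggregate", "export"},
    }
    for pair in find_clone_candidates(cfgs):
        print(pair)
```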

Proceedings ArticleDOI
05 Jun 2020
TL;DR: A new model is proposed that uses both an algorithmic and a non-algorithmic approach to estimate the effort needed during the upgrade of enterprise software.
Abstract: Determining the effort required to make a transition is one of the key factors that help in the decision-making of an enterprise software upgrade. In the last decade, extensive research has been carried out on Software Development Effort Estimation. The results show various models and approaches taken towards arriving at the effort needed during software development. However, to the best of our knowledge, no effective model has been studied that can estimate the change effort of upgrading a large productive enterprise system. In this paper, we propose a new model that uses both an algorithmic and a non-algorithmic approach to estimate the effort needed during the upgrade of enterprise software. The proposed model has been evaluated through extensive experimental validation and was applied to the upgrade of three enterprise systems from different domains (namely, Enterprise Resource Planning, Customer Relationship Management and Human Resources). The results obtained show that the model was 91% accurate in providing effort estimates for the above-mentioned three systems.
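A minimal sketch of how an algorithmic baseline and a non-algorithmic (expert-judgment) adjustment might be combined into a single upgrade-effort estimate, assuming a COCOMO-like size model; the coefficients and adjustment factors are hypothetical, not the paper's calibrated model.

```python
# Illustrative hybrid effort estimate: a COCOMO-like algorithmic baseline scaled
# by expert-judgment factors. All coefficients and factors are hypothetical.
def algorithmic_effort(ksloc_changed: float, a: float = 2.94, b: float = 1.0997) -> float:
    """Baseline effort in person-months from the size of the changed code."""
    return a * (ksloc_changed ** b)

def expert_adjustment(factors: dict[str, float]) -> float:
    """Multiply expert-rated factors (1.0 = neutral) into a single multiplier."""
    multiplier = 1.0
    for value in factors.values():
        multiplier *= value
    return multiplier

def upgrade_effort(ksloc_changed: float, factors: dict[str, float]) -> float:
    return algorithmic_effort(ksloc_changed) * expert_adjustment(factors)

if __name__ == "__main__":
    factors = {"custom_code_volume": 1.2, "team_upgrade_experience": 0.9, "interface_count": 1.1}
    print(f"{upgrade_effort(35.0, factors):.1f} person-months")
```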

Journal ArticleDOI
TL;DR: A novel model that enables enterprises to systematically evaluate the fit between their specific organizational characteristics and the organizational characteristics complementary with successful deployment of international pre-configured enterprise software products is presented.
Abstract: Implementation, deployment and maintenance of pre-configured enterprise software products is one of the key challenges managers need to address in order to stay competitive in the never-ending search for better ways of conducting business. The literature identifies two general approaches that managers can use for successful implementation, deployment and maintenance of enterprise software products. The first approach is based on the internal re-deployment of the managerial practices that are already used to manage other fields in the enterprise. The second is the deployment of “world-wide” industry “best practices” that international vendors of enterprise software and their local representatives sell as part of their pre-configured software products. This paper presents a novel model that enables enterprises to systematically evaluate the fit between their specific organizational characteristics and the organizational characteristics complementary with successful deployment of international pre-configured enterprise software products. The proposed model is tested through a comparison of two groups of enterprises from the population of the 1000 biggest enterprises in Slovenia. The first group mostly invests in local, while the second group mostly invests in international enterprise software products. The paper finds that, on average, there are significant and relevant differences in 44% of the examined organizational characteristics between the groups of enterprises that mostly invest in international or local enterprise software products. The model serves as a comprehensive organizational risk checklist for enterprises that are about to invest in enterprise software products.

Book ChapterDOI
15 Jun 2020
TL;DR: This paper presents an event-based approach in a non-intrusive customization framework that can enable customization for multi-tenant SaaS and address the problem of too many API calls to the main software product.
Abstract: Popular enterprise software such as ERP and CRM is now being made available in the Cloud in the multi-tenant Software as a Service (SaaS) model. The added value comes from the ability of vendors to enable customer-specific business advantage for every different tenant who uses the same main enterprise software product. Software vendors need novel customization solutions for Cloud-based multi-tenant SaaS. In this paper, we present an event-based approach in a non-intrusive customization framework that can enable customization for multi-tenant SaaS and address the problem of too many API calls to the main software product. The experimental results on Microsoft’s eShopOnContainers show that our approach can empower an event bus with the ability to customize the flow of processing events and integrate with tenant-specific microservices for customization. We have shown how our approach ensures tenant isolation, which is crucial in practice for SaaS vendors. This direction can also reduce the number of API calls to the main software product, even when every tenant has different customization services.
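A simplified sketch of the event-based customization idea: an event bus lets a tenant-specific handler override the default processing of an event without touching the main product, which also keeps tenants isolated from one another. The bus, event names, and handlers are invented for illustration and are not eShopOnContainers code.

```python
# Simplified event-bus sketch for non-intrusive multi-tenant customization.
# Tenant-specific handlers are registered per tenant and override the default
# flow; names and handlers are illustrative only.
from typing import Callable, Optional

class EventBus:
    def __init__(self):
        self._default: dict[str, Callable] = {}
        self._tenant_overrides: dict[tuple[str, str], Callable] = {}

    def subscribe(self, event: str, handler: Callable, tenant: Optional[str] = None):
        if tenant is None:
            self._default[event] = handler
        else:
            self._tenant_overrides[(tenant, event)] = handler   # keyed per tenant -> isolation

    def publish(self, event: str, tenant: str, payload: dict):
        handler = self._tenant_overrides.get((tenant, event), self._default.get(event))
        if handler:
            handler(payload)

def standard_checkout(payload):
    print("standard checkout:", payload)

def tenant_a_checkout(payload):
    print("tenant A custom checkout (loyalty points applied):", payload)

if __name__ == "__main__":
    bus = EventBus()
    bus.subscribe("OrderCheckedOut", standard_checkout)
    bus.subscribe("OrderCheckedOut", tenant_a_checkout, tenant="tenant-a")
    bus.publish("OrderCheckedOut", tenant="tenant-a", payload={"order": 1})
    bus.publish("OrderCheckedOut", tenant="tenant-b", payload={"order": 2})
```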

Proceedings ArticleDOI
01 Jan 2020
TL;DR: In this article, two approaches for customizing multi-tenant SaaS using microservices are presented: intrusive and non-intrusive, and the key technical challenges and feasible solutions to implement this architecture are discussed.
Abstract: Customization is a widely adopted practice for enterprise software applications such as Enterprise Resource Planning (ERP) or Customer Relationship Management (CRM). Software vendors deploy their enterprise software product on the premises of a customer, where it is then often customized for the different specific needs of the customer. When enterprise applications move to the cloud as multi-tenant Software-as-a-Service (SaaS), the traditional way of on-premises customization faces new challenges because a customer no longer has exclusive control over the application. To empower businesses with specific requirements on top of the shared standard SaaS, vendors need a novel approach to support customization on multi-tenant SaaS. In this paper, we summarize our two approaches for customizing multi-tenant SaaS using microservices: intrusive and non-intrusive. The paper clarifies the key concepts related to the problem of multi-tenant customization and describes a design with a reference architecture and high-level principles. We also discuss the key technical challenges and feasible solutions to implement this architecture. Our microservice-based customization solution is promising for meeting general customization requirements and achieves a balance between isolation, assimilation and economy of scale.

Proceedings ArticleDOI
07 Jan 2020
TL;DR: Overall, the evolution of SAP’s strategy is understood as a change from a high level of control over the administration of its technological environment to a more flexible strategy that gives alternative options such as Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) mode.
Abstract: This paper seeks to provide a better understanding of software business providers' strategy when adapting to the emergence of cloud computing markets. Based on a longitudinal case-study and on a historical content analysis of SAP's discourse, it highlights four main periods of adaptation, since 2009. The analysis of these four periods emphasizes the existence of an initial superior technology (HANA) on the ERP market when referring to cloud-based solutions. Overall, the evolution of SAP's strategy is understood as a change from a high level of control over the administration of its technological environment to a more flexible strategy that gives alternative options such as Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) mode. Content analysis shows that SAP's discourse gives an increasing importance to agreements with third parties in order to mitigate the perception of the perceived lock-in effect.

Book ChapterDOI
14 Oct 2020
TL;DR: In this paper, the authors propose a logic model that makes a precise decision on identifying the root cause, as well as on identifying a preferred resolution to the root causes of a software application service malfunction.
Abstract: Many businesses today still depend heavily on software application services to process their daily transactions. Under such heavy dependency, it is frustrating whenever the software application service is unavailable. The errors that cause a software application service to malfunction can arise either within the software application layer or from factors outside the software application layer. In this complex situation, a lot of time is unavoidably consumed in identifying the root cause. The objective is not only to solve the problem; more importantly, it is to reduce the total time spent on root cause analysis and to decide on the preferred resolution for the root cause. It is therefore crucial to propose an approach for developing a logic model that makes a precise decision in identifying the root cause, as well as in identifying the preferred resolution to it. The proposed logic model will consist of an algorithm incorporating both the Analytic Hierarchy Process (AHP) and supervised learning.
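A minimal sketch of the AHP step in such a logic model: priority weights for candidate root causes are derived from the principal eigenvector of a Saaty-style pairwise-comparison matrix. The candidate causes and comparison values below are invented for illustration; the supervised-learning part is not shown.

```python
# Minimal AHP sketch: derive priority weights for candidate root causes from
# the principal eigenvector of a pairwise-comparison matrix (values illustrative).
import numpy as np

candidates = ["application layer defect", "database outage", "network/infrastructure"]

# Saaty-scale comparisons: entry (i, j) says how much more likely candidate i
# is than candidate j to be the root cause.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()          # normalized priority vector

for name, w in sorted(zip(candidates, weights), key=lambda t: -t[1]):
    print(f"{name:<32} {w:.2f}")
```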

Book ChapterDOI
08 Jun 2020
TL;DR: A concept for a PPC system for AM is presented, which takes into account the requirements for integration into the operational enterprise software system and promises a more efficient utilization of the plants and a more elastic use.
Abstract: Additive Manufacturing (AM) is increasingly used in the industrial sector as a result of continuous development. In the Production Planning and Control (PPC) system, AM enables an agile response in the area of detailed and process planning, especially for a large number of plants. For this purpose, a concept for a PPC system for AM is presented, which takes into account the requirements for integration into the operational enterprise software system. The technical applicability is demonstrated by individually implemented sections. The presented solution approach promises more efficient utilization of the plants and more elastic use.

Book ChapterDOI
01 Jan 2020
TL;DR: Research results suggest that the main reason for the company's collapse was its inability to achieve product/market fit, due to the lack of Clusterpoint integration with the major Cloud infrastructure vendors (Amazon WS, Microsoft Azure, etc.) and its focus on a closed-source business model.
Abstract: This chapter presents a postmortem analysis of the collapse of the Latvian enterprise software company Clusterpoint, which attempted to enter the global market with a Cloud-based database-as-a-service (DBaaS) offering to compete with MongoDB and other vendors in the NoSQL database category. The beginning of Clusterpoint dates back to 2006, when three co-founders established the Clusterpoint Ltd. company in Riga, Latvia. Clusterpoint developed its proprietary, closed-source Clusterpoint database software and sold it in the local Latvian market using a traditional enterprise licensing model. In 2015, Clusterpoint used more than 2 million EUR for launching its Cloud-based DBaaS offering and entering primarily the US market. In 2016, it was recognized by the market research agency Gartner as one of the Cool Vendors in the platform-as-a-service (PaaS) segment. However, by the end of 2017 the company was unable to attract another investment round to finance its operations and was forced to file for insolvency due to liquidity issues. Research results suggest that the main reason for the company's collapse was its inability to achieve product/market fit, due to the lack of Clusterpoint integration with the major Cloud infrastructure vendors (Amazon WS, Microsoft Azure, etc.) and its focus on a closed-source business model. The effect of these two factors was amplified by premature scaling. The authors use the research method of an empirical case study and a postmortem analysis approach, analyzing the outcome of a completed project in retrospect (doing “research-in-the-past”). One of the co-authors worked as an employee at Clusterpoint from February 2015 till March 2016.

Proceedings ArticleDOI
03 Dec 2020
TL;DR: In this article, a layered architecture to construct a Virtual DataStack (VDS) is proposed, which contains near real-time synthetic virtual datasets (specific to a particular domain) along with the other components that comprise the generation of the datasets. A Virtual DataStack for the CampusStack (academic campus management) domain has been generated, and the results prove that Query Inclusive Computational Logic significantly reduces both lines of code and the time taken.
Abstract: Datasets play a major role in evaluating Machine Learning (ML), Artificial Intelligence (AI), Business Intelligence (BI) and enterprise software. To prove the efficacy of any algorithm or software, and also its limitations with respect to the size of the input data, it has to be exercised on a large dataset. Creating such a large dataset is a challenging task, especially with near real-time data. In this paper, a layered architecture to construct a Virtual DataStack (VDS) is proposed. The VDS contains near real-time synthetic virtual datasets (specific to a particular domain) along with the other components that comprise the generation of the datasets. A Virtual DataStack for the CampusStack (academic campus management) domain has been generated, and the results prove that Query Inclusive Computational Logic significantly reduces both lines of code and the time taken. This paper focuses mainly on the efficient generation of near real-time data for enterprise domains, with specific attention to the education domain and CampusStack.
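A small sketch of near-real-time synthetic record generation for a campus-management style domain, as an illustration of what a virtual dataset layer might emit; the field names, value ranges, and streaming loop are assumptions and do not reproduce the VDS layers or Query Inclusive Computational Logic.

```python
# Illustrative synthetic record generator for a campus-management style domain.
# Field names and value ranges are hypothetical placeholders.
import random
import time
from datetime import datetime, timezone

DEPARTMENTS = ["CSE", "ECE", "MECH", "CIVIL"]

def synthetic_enrollment(record_id: int) -> dict:
    return {
        "id": record_id,
        "student": f"S{random.randint(10000, 99999)}",
        "department": random.choice(DEPARTMENTS),
        "course": f"C{random.randint(100, 599)}",
        "grade": round(random.uniform(4.0, 10.0), 1),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def stream(n: int, delay_s: float = 0.1):
    """Emit n timestamped records, approximating a near real-time feed."""
    for i in range(n):
        yield synthetic_enrollment(i)
        time.sleep(delay_s)

if __name__ == "__main__":
    for record in stream(5, delay_s=0.0):
        print(record)
```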

Posted Content
TL;DR: Observations, experience and lessons learnt from a technology-transfer project between academia and industry focused on engineering, development and testing of a software for optimization of pumping energy costs for oil pipelines could be useful for other researchers and practitioners in engineering of scientific and engineering software systems.
Abstract: Development of scientific and engineering software is usually different from, and can be more challenging than, the development of conventional enterprise software. The authors were involved in a technology-transfer project between academia and industry which focused on the engineering, development and testing of software for optimization of pumping energy costs for oil pipelines. Experts with different skillsets (mechanical, power and software engineers) were involved. Given the complex nature of the software (a sophisticated underlying optimization model) and having experts from different fields, there were challenges in various software engineering aspects of the software system (e.g., requirements and testing). We report our observations and experience in addressing those challenges during our technology-transfer project, and aim to add to the existing body of experience and evidence in the engineering of scientific and engineering software. We believe that our observations, experience and lessons learnt could be useful for other researchers and practitioners in the engineering of other scientific and engineering software systems.

Patent
12 Nov 2020
TL;DR: In this article, a system and method for providing support for extensibility and customization in an analytic applications environment is described, which enables the use of custom semantic extensions to extend a semantic model, and provide custom content at a presentation layer.
Abstract: In accordance with an embodiment, described herein is a system and method for providing support for extensibility and customization in an analytic applications environment. An extract, transform, load (ETL) or other data pipeline or process provided by the analytic applications environment, can operate in accordance with an analytic applications schema and/or a customer schema associated with a customer (tenant), to receive data from the customer's enterprise software application or data environment, for loading into a data warehouse instance. A semantic layer enables the use of custom semantic extensions to extend a semantic model, and provide custom content at a presentation layer. Extension wizards or development environments can guide users in using the custom semantic extensions to extend or customize the semantic model, through a definition of branches and steps, followed by promotion of the extended or customized semantic model to a production environment.

Patent
29 Sep 2020
TL;DR: In this paper, a system having a variety of mobile assets to which are attached one or more wireless sensors and associated with each mobile asset to automate enterprise software organization of assets among dynamically changing sites.
Abstract: A system having a variety of mobile assets to which are attached one or more wireless sensors and associated with each mobile asset to automate enterprise software organization of assets among dynamically changing sites. A plurality of receivers receive data from redundant wireless sensors, and at least one gateway aggregates wireless sensor data from the receivers, the wireless sensors, receivers and gateway forming a local network associated with a given site. At least one server hosting enterprise software receives aggregated data from the at least one gateway. The enterprise software identifies the location of an asset based on association of a local network with a site and association of sensors with an asset.

Patent
02 Apr 2020
TL;DR: In this article, the authors present a system, method, and computer program product embodiment for automating component management in enterprise applications by receiving metadata associated with the enterprise application implementation and storing an inventory including at least a portion of the metadata.
Abstract: Disclosed herein are system, method, and computer program product embodiments for automating component management in enterprise applications. An embodiment operates by receiving metadata associated with the enterprise application implementation and storing an inventory including at least a portion of the metadata. The system then determines one or more component dependencies of the enterprise application implementation based on the inventory and provides one or more recommendations for component installation or deletion based on the inventory and the one or more component dependencies. The system also generates one or more testcases based on the inventory and the one or more component dependencies.
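A hedged sketch of the inventory-driven recommendation idea: record component metadata, flag missing dependencies for installation, and flag unreferenced components for deletion. The inventory shape and component names are hypothetical, not the patented system's data model.

```python
# Illustrative inventory-driven dependency check: recommend missing dependencies
# for installation and unreferenced components for deletion.
def recommend(inventory: dict[str, list[str]], installed: set[str]):
    required = {dep for deps in inventory.values() for dep in deps}
    missing = required - installed                       # needed but not installed
    unused = installed - set(inventory) - required       # installed but never referenced
    return sorted(missing), sorted(unused)

if __name__ == "__main__":
    inventory = {                      # component -> its declared dependencies
        "billing-ui": ["billing-core", "auth"],
        "billing-core": ["db-adapter"],
    }
    installed = {"billing-ui", "billing-core", "auth", "legacy-report"}
    to_install, to_delete = recommend(inventory, installed)
    print("install:", to_install)      # ['db-adapter']
    print("delete :", to_delete)       # ['legacy-report']
```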

Book ChapterDOI
01 Jan 2020
TL;DR: The project aims to establish a software prototype that supports requirements engineering activities to incrementally improve enterprise cloud software in the post-delivery phase based on actual usage data.
Abstract: The shift from on-premise to cloud enterprise software has fundamentally changed the interactions between software vendors and users. Since enterprise software users are now working directly on an infrastructure that is provided or monitored by the software vendor, enterprise cloud software providers are technically able to measure nearly every interaction of each individual user with their cloud products. The novel insights into actual usage that can thereby be gained provide an opportunity for requirements engineering to improve and effectively extend enterprise cloud products while they are being used. Even though academic literature has been proposing ideas and conceptualizations of leveraging usage data in requirements engineering for nearly a decade, there are no functioning prototypes that implement such ideas. Drawing on an exploratory case study at one of the world’s leading cloud software vendors, we conceptualize an Action Design Research project that fills this gap. The project aims to establish a software prototype that supports requirements engineering activities to incrementally improve enterprise cloud software in the post-delivery phase based on actual usage data.

Book ChapterDOI
01 Jan 2020
TL;DR: This research paper summarizes the conclusions of the research towards the autonomic evaluation of interoperability capability between different enterprise applications and reveals the basic concepts on which it proved the assumption that enterprise applications can be evaluated in a more objective, calculable manner.
Abstract: An enterprise is a dynamic and self-managed system, and applications are an integral part of this complex system. The integration and interoperability of enterprise software are two essential aspects that are at the core of system efficiency. This research focuses on interoperability evaluation methods for the sole purpose of evaluating the interoperability capabilities of multiple enterprise applications in a model-driven software development environment. The peculiarity of the method is that it links the causality modeling of the real world (domain) with traditional MDA. The discovered domain causal knowledge, transferred to the CIM layer of MDA, forms the basis for designing application software that is integrated and interoperable. The causal (deep) knowledge of the subject domain is used to evaluate the capability of interoperability between software components. The management transaction concept reveals causal dependencies and the goal-driven information transformations of enterprise management activities (an in-depth knowledge). An assumption is that autonomic interoperability is achievable by gathering knowledge from different sources in an organization, particularly enterprise architecture, and that software architecture analysis through web services can help gather the required knowledge for automated solutions. In this interoperability capability evaluation research, 13 different enterprise applications were surveyed. Initially, the interoperability capability evaluation was performed using four known edit distance measures: Levenshtein, Jaro-Winkler, longest common subsequence, and Jaccard. These research results are a good indicator of software interoperability capability. By combining these results with a bag-of-words library gathered from “Schema.org” and including it as an addition to the evaluation system, we improve our method by moving closer to semantic similarity analysis. A prototype version of the enterprise application integration solution, intended for testing, is under development, but it already allows us to collect data and supports research in this domain. This research paper summarizes the conclusions of our research towards the autonomic evaluation of interoperability capability between different enterprise applications. It reveals the basic concepts on which we proved our assumption that enterprise applications can be evaluated in a more objective, calculable manner.
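A compact sketch of the kind of name-level similarity scoring applied in such an evaluation, implementing two of the four measures named above (Levenshtein distance and Jaccard similarity over tokens); Jaro-Winkler and longest common subsequence are omitted for brevity, and the combination rule is a naive illustration.

```python
# Sketch of name-level interoperability scoring using Levenshtein distance and
# token-level Jaccard similarity; the averaged combination is illustrative only.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 1.0

def field_similarity(f1: str, f2: str) -> float:
    lev = 1 - levenshtein(f1, f2) / max(len(f1), len(f2), 1)
    jac = jaccard(set(f1.lower().split("_")), set(f2.lower().split("_")))
    return (lev + jac) / 2

if __name__ == "__main__":
    print(field_similarity("customer_id", "customerId"))
    print(field_similarity("invoice_date", "billing_date"))
```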

Patent
05 Nov 2020
TL;DR: In this paper, the authors propose an analytic applications environment that enables data analytics within the context of an organization's enterprise software application or data environment, or a software-as-a-service or other type of cloud environment; and supports the development of computer-executable software analytic applications.
Abstract: In accordance with an embodiment, an analytic applications environment enables data analytics within the context of an organization's enterprise software application or data environment, or a software-as-a-service or other type of cloud environment; and supports the development of computer-executable software analytic applications. A data pipeline or process, such as, for example, an extract, transform, load process, can operate in accordance with an analytic applications schema adapted to address particular analytics use cases or best practices, to receive data from a customer's (tenant's) enterprise software application or data environment, for loading into a data warehouse instance. Each customer (tenant) can additionally be associated with a customer tenancy and a customer schema. The data pipeline or process populates their data warehouse instance and database tables with data as received from their enterprise software application or data environment, as defined by a combination of the analytic applications schema, and their customer schema.

Proceedings ArticleDOI
14 Dec 2020
TL;DR: In this paper, the authors take advantage of blockchains' tamper-proof and decentralization properties to develop a robust and secure electronic medical record management system, and choose HyperLedger Fabric as their underlying technical architecture.
Abstract: A Medical Record Management System is an important information management system in healthcare centers and hospitals. Information kept in such systems needs to be clean, correct and tamper-proof. In this paper, we take advantage of blockchains’ tamper-proof and decentralization properties to develop a robust and secure electronic medical record management system. In particular, we choose HyperLedger Fabric as our underlying technical architecture. HyperLedger Fabric yields higher throughput and lower latency compared with other blockchains, which makes it a perfect candidate for enterprise software development. Our system is a novel innovation that can serve as an ideal replacement for conventional Medical Record Management Systems.
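As a conceptual, Fabric-free illustration of the tamper-evidence property such a system relies on: each record commits to the hash of its predecessor, so altering any stored record breaks verification of the chain. This is only a sketch of the principle in Python; it is not HyperLedger Fabric chaincode or the authors' implementation.

```python
# Conceptual illustration of tamper-evidence via hash chaining. NOT HyperLedger
# Fabric chaincode; it only demonstrates why modifying a record is detectable.
import hashlib
import json

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append(chain: list[dict], payload: dict) -> None:
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"payload": payload, "prev_hash": prev})

def verify(chain: list[dict]) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != record_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    chain: list[dict] = []
    append(chain, {"patient": "P001", "entry": "blood test ordered"})
    append(chain, {"patient": "P001", "entry": "results recorded"})
    print(verify(chain))                      # True
    chain[0]["payload"]["entry"] = "altered"  # tampering with an earlier record
    print(verify(chain))                      # False
```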

Patent
29 Dec 2020
TL;DR: Systems and methods for distribution of enterprise software and compensation for its usage are disclosed: implementations store executable code of software applications, receive administrative input on which applications individual users are eligible for, facilitate execution of the eligible applications selected by those users, monitor billable execution, determine the corresponding compensation amounts, and present the determined amounts to a given administrative user.
Abstract: Systems and methods for distribution of enterprise software and compensation for usage of the enterprise software are disclosed. Exemplary implementations may: store information including executable code of software applications; receive user input from administrative users regarding eligibility of individual software applications for different users; facilitate execution of different eligible software applications as selected by the different users; monitor billable execution of the software applications; determine compensation amounts that correspond to the monitored billable execution; and present information to a given administrative user regarding the determined compensation amounts.