
Computational Intelligence Applications in Business Intelligence and Big Data Analytics

About: The article was published on 2017-06-26 and is currently open access. It has received 3 citations till now. The article focuses on the topics: Business intelligence & Computational intelligence.

Summary

Introduction

  • SOA has emerged to support scalability and service reuse.
  • The authors' approach is based on an SOA reference architecture and service component model for big data applications, known as SoftBD, together with a large-scale real-world case study demonstrating their approach to SOA for Big Data analytics.
  • The SOA deployment model is based on a service provider publishing its services through a registry, and a service requester being able to access the published services, compose new services, and request new services.
  • Big Data has emerged to address the challenges posed by the volume, velocity, and veracity of data received and analyzed in real time.
  • Therefore, the authors need an SOA model that delivers the required speed and accuracy of data.

Value(of data/information) ∝ √(Number of Business Users (BU) × Number of Business Areas (BA))

  • Equation 1: Measuring the value of Big Data.
  • These are essential characteristics of SOA; SOA-based services are business-driven and should be designed with the principles of loose coupling and autonomous, message-driven services.
  • Figure 1.3 shows the different approaches to Big Data, which have provided insight into developing an SOA reference architecture for big data analytics that supports various demands for analytics.
  • Harvesting reuse, another major objective of the SOA investment, has also called for a radical rethink of the design, says Lock.
  • After entering into a contract with BEPET, Business Customers can buy their energy in bulk over a period of time or, in exceptional cases, request an ad hoc energy top-up at the rates agreed in their business contracts.

Business Process Modelling

  • To understand the various business workflows, the business processes for BEPET were modelled using BonitaSoft (Bonitasoft 2015).
  • Before the BPMN models are generated, the actors involved in each business process are identified based on the role they perform within it, and the workflows these actors are involved in are shortlisted (Pant and Juric 2008).
  • These actors can be external, like the Direct Customer or the Business Customer, or internal, like the Call Centre Representative, Business Sales Representative, etc.
  • There are also automated 'system' actors, such as the Payment Gateway, Bank Interface, etc.
  • The workflows identified for external actors are listed in Table 1.
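The actor identification and workflow shortlisting described above can be sketched as a simple data structure. This is an illustrative sketch only: any actor or workflow name beyond those quoted in the text is a hypothetical placeholder, not the chapter's actual Table 1.

```python
# Illustrative actor/workflow model; names not quoted in the text are
# invented placeholders, not the chapter's real Table 1.
from dataclasses import dataclass, field
from enum import Enum

class ActorKind(Enum):
    EXTERNAL = "external"   # e.g. Direct Customer, Business Customer
    INTERNAL = "internal"   # e.g. Call Centre Representative
    SYSTEM = "system"       # e.g. Payment Gateway, Bank Interface

@dataclass
class Actor:
    name: str
    kind: ActorKind
    workflows: list = field(default_factory=list)

actors = [
    Actor("Direct Customer", ActorKind.EXTERNAL, ["Register", "Pay Bill"]),
    Actor("Business Customer", ActorKind.EXTERNAL,
          ["Buy Energy in Bulk", "Ad Hoc Top-Up"]),
    Actor("Call Centre Representative", ActorKind.INTERNAL, ["Handle Enquiry"]),
    Actor("Payment Gateway", ActorKind.SYSTEM, ["Process Payment"]),
]

# Shortlist the workflows involving external actors, as done for Table 1.
external_workflows = sorted(
    {w for a in actors if a.kind is ActorKind.EXTERNAL for w in a.workflows}
)
print(external_workflows)
```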

BPMN Process Models for BEPET

  • Once all the workflows had been identified, the business process models were generated using the Bonitasoft Community Edition BPM Studio software.
  • As seen from the business process model, the interactions between the various business processes have been captured well.
  • The workflow for Business Sales was used to generate the sample User Interface screens for BEPET (Create and Run Your First Process 2015).
  • The next few screenshots indicate how this process was carried out.
  • To generate these UI screens, the following workflow process was utilized; the BPMN model for adding a new contract is shown in Figure 1.7, and its UI is shown in Figure 1.8.

Cost Optimization

  • The cost of providing the 'Meter Reading service' to 'Regular Customers' was identified using this process simulation.
  • This also allowed BEPET to determine the extent of the discount it can offer its 'Online-Only Customers'.
  • This allowed BEPET to differentiate itself from other similar energy providers by offering additional cost savings.
  • The manpower cost estimates generated are shown in Figure 1.14.
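To illustrate the kind of estimate such a process simulation produces, the sketch below computes an annual manpower cost for the 'Meter Reading service' and the discount headroom for 'Online-Only Customers'. All figures are invented placeholders, not values from Figure 1.14.

```python
# Back-of-envelope manpower cost model for the 'Meter Reading service'.
# All numbers below are hypothetical, for illustration only.

def meter_reading_cost(num_customers, readings_per_year,
                       minutes_per_reading, hourly_rate):
    """Annual manpower cost of performing physical meter readings."""
    hours = num_customers * readings_per_year * minutes_per_reading / 60.0
    return hours * hourly_rate

regular_cost = meter_reading_cost(
    num_customers=10_000, readings_per_year=4,
    minutes_per_reading=15, hourly_rate=12.0)

# Online-only customers submit their own readings, so the avoided
# manpower cost bounds the discount BEPET could offer each of them.
max_discount_per_customer = regular_cost / 10_000
print(f"annual cost: {regular_cost:.2f}, "
      f"max discount/customer: {max_discount_per_customer:.2f}")
```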

Resource Profiling

  • Process simulation also enabled BEPET to identify an optimal resource profile for its Meter Reading operations.
  • Using various load profiles, varying by number of customers, it could determine how many Meter Readers it would need to hire and whether it would be advisable to hire locum Meter Readers or hire them as permanent staff.
  • BEPET could also perform a comparative assessment of providing the meter reading services only on weekdays versus throughout the week.
  • As the graph in Figure 1.15 shows, the wait time increases over the weekend due to the lack of human resources.

Conclusion

  • This chapter has contributed a unique, innovative, and generic SoftBD framework, a service component model, and a generic SOA architecture for large-scale big data applications.

References

  • Erl, T. (2005) Service-Oriented Architecture: Concepts, Technology, and Design, Prentice Hall.
  • Chang, V. and Ramachandran, M. (2015) Quality of Service for Financial Software as a Service, ESaaSA 2015-CLOSER 2015.
  • Zimmermann, A., et al. (2013) Towards Service-oriented Enterprise Architectures for Big Data Applications in the Cloud, 17th IEEE International Enterprise Distributed Object Computing Conference Workshops.
  • Apache Axis2 - Apache Axis2/Java - Next Generation Web Services. [ONLINE]
  • Bonitasoft (2015) Simulate Processes for Better Optimization, Bonitasoft Open Source Workflow & BPM software. [ONLINE]


Citation:
Ramachandran, M. (2017) Service-oriented architecture for big data and business intelligence analytics in the cloud. In: Computational Intelligence Applications in Business and Big Data Analytics. Auerbach Publications, pp. 237-258. ISBN 9781351720250. DOI: https://doi.org/10.1201/9781315180748
Link to Leeds Beckett Repository record: https://eprints.leedsbeckett.ac.uk/id/eprint/7428/
Document Version: Book Section (Accepted Version)
This is an Accepted Manuscript of a book chapter published by Routledge in Computational Intelligence Applications in Business and Big Data Analytics on 06 June 2017, available online: http://www.routledge.com/9781498761017

Service-Oriented Architecture for Big Data and Business
Intelligence Analytics in the Cloud
Muthu Ramachandran
School of Computing, Creative Technologies and Engineering
Faculty of Arts, Environment, and Technology
Leeds Beckett University, Leeds, UK
Email: m.ramachandran@leedsbeckett.ac.uk
Abstract
SOA has emerged to support scalability and service reuse. At the same time, Big Data analytics has had a significant impact on business services and business process management. However, there is a lack of a systematic engineering approach to big data analytics. This chapter provides a systematic approach to SOA design strategies and business processes for Big Data analytics. Our approach is based, first, on an SOA reference architecture and service component model for big data applications, known as SoftBD, and, second, on a large-scale real-world case study demonstrating our approach to SOA for Big Data analytics. The SOA big data architecture is scalable, generic, and customisable for a variety of data applications. The main contributions of this chapter are a unique, innovative, and generic SoftBD framework, a service component model, and a generic SOA architecture for large-scale big data applications. This chapter also contributes Big Data metrics that allow the data to be measured and evaluated during analysis.
1. Introduction
Distributed systems have traditionally been developed and deployed using a layered software architecture model. However, this model has not been able to provide a sustainable, cost-effective IT system. SOA has therefore emerged to address this issue, bringing key design principles such as loose coupling, service reusability, service composability, and service discoverability. The SOA deployment model is based on a service provider publishing its services through a registry, and a service requester being able to access the published services, compose new services, and request new services. The major challenge of this work is to integrate SOA for Big Data applications. Big Data has emerged to address the challenges posed by the volume, velocity, and veracity of data received and analyzed in real time. Therefore, we need an SOA model that delivers the required speed and accuracy of data; the model proposed in this chapter aims to achieve these two characteristics. In this way, the chapter aims to merge two major concerns, SOA and Big Data. Zimmermann et al. (2013) emphasize the need for an enterprise SOA architecture for Big Data applications and have proposed the Enterprise Services Architecture Reference Cube (ESARC) for such large-scale applications.
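The publish/discover cycle of the SOA deployment model described above can be sketched as a toy in-process registry. Real deployments would use WSDL/UDDI or a REST service catalogue; the class and service names here are illustrative only, not part of the SoftBD framework.

```python
# Toy sketch of the SOA deployment triangle: a provider publishes a
# service into a registry; a requester discovers and invokes it.

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint):
        """Provider side: advertise a service under a well-known name."""
        self._services[name] = endpoint

    def discover(self, name):
        """Requester side: look up a published service, or None."""
        return self._services.get(name)

registry = ServiceRegistry()
# The provider publishes a (hypothetical) meter-reading service.
registry.publish("meter-reading",
                 lambda customer_id: f"reading for {customer_id}")

# The requester discovers and calls it without knowing the provider.
service = registry.discover("meter-reading")
print(service("C-042") if service else "service not found")
```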
Big Data has become a key business improvement indicator for large businesses and a key indicator of success in cloud and IoT computing technologies. Big Data can be defined as the management of data received from different sources on the use and behaviour of a system in real time, at the scale of terabytes, petabytes, and beyond. The size of the data depends on the nature of the system: mobile phone usage, web usage, social media usage, real-time internet and sensor data received, and streaming media data received and sent. Formally, Big Data has been defined by the 5Vs model (volume, velocity, variety, value, and veracity). Value and veracity are two essential characteristics that specify the need for valuable and truthful data (Neves and Bernardino 2016).

Therefore, it is important for businesses and organizations to develop a long-term strategy for managing, monitoring, analyzing, and predicting data. We have identified a measure of big data value:

Value(of data/information) ∝ √(Number of Business Users (BU) × Number of Business Areas (BA))

Equation 1 Measuring the value of Big Data

As Equation 1 suggests, the value of big data is directly proportional to the square root of the number of business users (BU) multiplied by the number of business areas in which they work.
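A minimal sketch of Equation 1 follows; the proportionality constant k is an assumption, since the chapter defines only the proportional relationship.

```python
import math

# Equation 1: value grows with sqrt(business users x business areas).
# The constant k is an invented placeholder for the proportionality.

def big_data_value(business_users, business_areas, k=1.0):
    return k * math.sqrt(business_users * business_areas)

# Doubling both users and areas doubles the value (sqrt(4) = 2), so the
# metric rewards breadth of adoption sub-linearly in each dimension.
v1 = big_data_value(100, 4)    # sqrt(400)  = 20.0
v2 = big_data_value(200, 8)    # sqrt(1600) = 40.0
print(v1, v2)
```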
Integrating business intelligence and business process modelling for big data with service-oriented architecture is key to achieving business value; Curko, Bach, and Radonic (2007) likewise suggest the SOA concept as a key technology for integrating BI, BPM, transaction, big data, and other IT systems.

However, there is a lack of business intelligence analytics applied to large-scale big data received from multiple sources, and a lack of intelligence and enterprise architecture applied to large-scale big data emerging from multiple business and data sources. In addition, existing approaches in this area (Zimmermann et al. 2013) do not consider applying intelligence analytics for prediction using soft computing methods such as Bayesian theory, fuzzy logic, and neuro-fuzzy systems.
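The soft computing methods named above are not detailed at this point in the chapter; as a hedged illustration only, the sketch below applies a single Bayesian update to predict a service-load spike from current evidence and historical data. All probabilities are invented.

```python
# Minimal Bayesian update, one of the soft computing methods mentioned.
# Every probability below is a hypothetical placeholder.

def bayes_update(prior, likelihood, evidence):
    """P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

prior = 0.10        # P(load spike), from previously collected data
likelihood = 0.60   # P(sensor surge | load spike)
evidence = 0.20     # P(sensor surge) overall
posterior = bayes_update(prior, likelihood, evidence)

# The observed surge triples the predicted probability of a spike.
print(round(posterior, 2))
```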
This chapter is divided into two major sections: first, an SOA approach for Big Data, with an SOA reference architecture and service component model for big data applications; second, a large-scale real-world case study demonstrating our approach to SOA for Big Data analytics. Our approach is a scalable big data architecture model that is generic and customisable for a variety of data applications. The main contributions of this chapter are a unique, innovative, and generic SoftBD framework, a service component model, and a generic SOA architecture for large-scale big data applications. This chapter also contributes Big Data metrics that allow the data to be measured and evaluated during analysis.
2. SOA-Based Soft Computing Framework for Big Data
One of the key reasons for choosing a soft computing approach is to apply prediction to the large amounts of data generated by IoT, IoE, cloud systems, and other sources such as user-generated data. Existing approaches in this area have considered architecting the data, but not predictive analysis based on the currently collected data together with previously collected data for similar situations. Service-Oriented Architecture (SOA) has emerged to support scalability and service reuse. At the same time, Big Data analytics has had an impact on business services. This chapter provides a systematic approach to SOA design strategies and business processes for Big Data analytics.

Distributed systems have traditionally been developed and deployed using a layered software architecture model. However, this model has not been able to provide a sustainable, cost-effective IT system. SOA has therefore emerged to address this issue, bringing key design principles such as loose coupling, service reusability, service composability, and service discoverability, as shown in Figure 1. Thus, this chapter proposes a reference architecture based on SOA, which has the potential to solve classical problems of customisation, composability, interoperability, etc. The major focus of this chapter is to integrate SOA for Big Data applications. Big data has emerged to address the challenges posed by the volume, velocity, and veracity of data received and analyzed in real time. Therefore, we need an SOA model that delivers the required speed and accuracy of data; the model proposed in this chapter aims to achieve these two characteristics. Earlier studies emphasize the need for an enterprise SOA architecture for Big Data applications and have proposed the Enterprise Services Architecture Reference Cube (ESARC) for such large-scale applications. Consistent with those studies, this research aims to merge two major concerns (SOA and Big Data).
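The loose-coupling and composability principles above can be illustrated with a minimal service-composition sketch; the service names and the pipeline itself are hypothetical, not part of the SoftBD framework.

```python
# Minimal illustration of composability under loose coupling: each
# service exposes only a callable interface, and a composite service
# is built by chaining them. Service names are invented examples.

def ingest_service(raw):
    """Collect and normalise incoming records."""
    return [r.strip().lower() for r in raw]

def filter_service(records):
    """Drop empty records (a stand-in for veracity checks)."""
    return [r for r in records if r]

def compose(*services):
    """Compose loosely coupled services into a new pipeline service."""
    def pipeline(data):
        for svc in services:
            data = svc(data)
        return data
    return pipeline

# Requesters can compose new services without changing existing ones.
analytics_pipeline = compose(ingest_service, filter_service)
print(analytics_pipeline(["  Sensor-A ", "", "Sensor-B"]))
```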


Frequently Asked Questions

Q1. What have the authors contributed in "Service-oriented architecture for big data and business intelligence analytics in the cloud"?

This chapter provides a systematic approach to SOA design strategies and business processes for Big Data analytics. Their approach is based on an SOA reference architecture and service component model for big data applications, known as SoftBD, together with a large-scale real-world case study demonstrating their approach to SOA for Big Data analytics. The main contribution of this chapter is a unique, innovative, and generic SoftBD framework, a service component model, and a generic SOA architecture for large-scale big data applications.

Having an SOA would enable the IT department to support value-added processes, rather than supporting functionality in a more piecemeal way.

Since it very rarely requires updates, and due to the prohibitively high cost and potential business impact of migrating it to a new system, BEPET has taken a business decision not to migrate it to the new system for now.


Component-based service development has been a natural choice, as it supports service design principles as well as the big data design principles of security, privacy, large-scale real-time processing, and customisation.

The third layer comprises Data Analytics and other Big Data services such as guided analytics, integration, policies, event handling, business rules, business activity monitoring, etc.

The workflow for ‘Take Meter Reading’ was used to perform a process simulation activity in Bonita BPM Studio (Simulate Processes for Better Optimization 2015). 

"We have a team of seven supporting 60-odd applications, and when I tell them we are breaking these into services, they are rightly concerned about the risks," says Lock.

For instance, the Call Centre Manager would also have additional responsibilities such as managing the Call Centre team, reporting to management, appraisals, etc.


As shown in the figure, it is divided into four layers; the first layer consists of Business and Orchestration, where new business services for big data monitoring, analyzing, organizing, and prediction take place.



The vertically integrated data model links application data services to resources in a more application-specific way: the customer relationship management, enterprise resource planning, or dynamic data authentication application data is largely separated first at the as-a-Service level, and that separation is maintained down to the data infrastructure.