Category Archives: Architecture

Introducing WSO2 Enterprise Integrator 6.0

WSO2 started out as a middleware company. Since then, we’ve realized – and championed – the fact that our products don’t just provide technological infrastructure; they radically change how a company works.

All over the world, enterprises use our products to maximize revenue, create entirely new customer experiences and products, and interact with their employees in radically different ways. We call this digital transformation – the evolution of a company from one age to another – and our role has become more that of a technology partner than a simple software provider.

With this realization, we’ve announced WSO2 Enterprise Integrator (EI) 6.0. Enterprise Integrator brings together all of the products and technologies WSO2 has created for the enterprise integration domain – a single package of digital transformation tools, closely connected for ease of use.

When less is more

Those of you who are familiar with WSO2 products will know that we had more than 20 products across the entire middleware stack.

The rationale behind having such a wide array of products was to enable systems architects and developers to pick and choose the relevant bits that are required to build their solution architecture. These products were categorized into several broad areas such as integration, analytics, Internet of Things (IoT) and so on.

We realized that it was overwhelming for architects and developers to figure out which products to choose. We also realized that digital transformation requires these products to be used in certain common patterns that mirror five fields: Enterprise Integration, API Management, Internet of Things, Security and Smart Analytics.

In order to make things easier for everyone, we decided to match our offerings to how they’re used best. In integration, this means we’ve combined the functionality of the WSO2 Enterprise Service Bus, Message Broker, Data Services Server and others; now, rather than installing and setting up many products to implement an enterprise integration solution, you can simply download and run Enterprise Integrator 6.0 (EI 6.0).

What’s it got?

EI 6.0 provides service bus functionality for service integration. It also covers data integration, service and app hosting, messaging, business processes, analytics and tooling. It also contains connectors which enable you to connect to external services and systems.



The package contains the following runtimes:

  1. Service Bus

Includes functionality from ESB, WSO2 Data Services Server (DSS) and WSO2 App Server (AS)

  2. Business Processes

Includes functionality of WSO2 Business Process Server (BPS).

  3. Message Broker

Includes the functionality of WSO2 Message Broker (MB). However, this is not to be used for purely message brokering solutions; this runtime is there for guaranteed delivery integration scenarios and Enterprise Integration Patterns (EIPs).

  4. Analytics

The analytics runtime for EI 6.0, useful for tracking performance, tracing mediation flows and more.

In order to provide a unified user experience, we’ve made some changes to the directory structure. This is what it looks like now:

The main runtime is the integrator or service bus runtime and all directories relevant to that runtime are at the top level.

This is very similar to the directory structure we use for other WSO2 products; the main difference is the WSO2 directory, under which the other runtimes are available.

Under the other runtimes, you find the same directory structure as the older releases of those products, as shown below.

One might ask why we’ve included multiple runtimes instead of putting everything in a single runtime. The reason is separation of concerns. Short-running, stateless integrations are executed on the service bus runtime, while long-running and possibly stateful integrations are executed on the BPS runtime. We also have optional runtimes – message broker and analytics – which are required only for certain integration scenarios and when analytics are needed, respectively.

By leaving out unnecessary stuff, we can reduce the memory footprint and ensure that only what is required is loaded. In addition, when it comes to configuration files, only files related to a particular runtime will be available under the relevant runtime’s directory.

On the Management Console

There’s also been a change to the port the management console uses: the 9443 servlet transport port is no longer accessible. Integration services, web apps, data services and the management console are now all accessible only on the passthrough transport port, which defaults to 8243 (HTTPS).

Tooling

Eclipse-based tooling is available for the main integration and business process runtimes. For data integration, we recommend using the management console itself from the main integration runtime.


Why 6.0?

As the name implies, EI is an integration product. The most widely used product in the integration domain is the WSO2 Enterprise Service Bus (ESB), which in the industry is known to run billions of transactions per day. EI is in effect the evolution of WSO2 ESB 5.0, adding features coming from other products. Thus, it’s natural to dub this product 6.0 – the heart of it is still the same.

However, we’ve ensured that the user experience remains largely similar to that of the previous generation of products. The Carbon platform that underlies all of our products made it easy to achieve that goal.

Migration to EI 6.0

The migration cost from the older ESB, BPS, DSS and other related products to EI 6.0 is minimal. The same Synapse and Data Services languages, specifications and standards are followed in EI 6.0. Minimal changes would be required for deployment automation scripts such as Puppet scripts – the directory structures are still very similar, and the configuration files haven’t changed.

Up Next: Enterprise Integrator 7.0

EI 6.0 is based on several languages – Synapse for mediation, BPMN and BPEL for business processes, DSS language for data integration.

A user who wants to implement an integration scenario involving mediation, business processes, and data integration has to learn several languages with different tooling. While it’s effective, we believe we can do better.

At WSO2Con 2017, we unveiled Ballerina, an entirely new language for integration. EI 7.0 will be completely based on Ballerina – a single language and tooling experience. The integration developer can concentrate on the scenario and implement it using a single language and tool, with first-class support for visual tooling that uses a sequence diagram paradigm to define integration scenarios.

However, 7.0 will come with a high migration cost. Customers who are already using WSO2 products in the integration domain can transition to EI 6.0 – which we’ll be fully supporting – while planning their 7.0 migration effort for the long term; the team will be working on tooling that will allow the bulk of the code to be migrated to Ballerina.

WSO2 will continue to develop EI 6 and EI 7 in parallel. This means new features and fixes will be released as WUM updates, and newer releases of the EI 6.0 family will be available over the next few years, so that existing users are not forced to migrate to EI 7.0. This is analogous to how Tomcat maintained its 5.x, 6.x and 7.x lines in parallel.


EI 6.0 is available for download at wso2.com/integration and on github.com/wso2/product-ei/releases. Try it out and let us know what you think – it’s entirely open source, so you can take a look under the hood if that takes your fancy. To report issues and make suggestions, head over to https://github.com/wso2/product-ei/issues.

Need more information? Looking to deploy WSO2 in an enterprise production environment? Contact us and we’ll get in touch with you.

 

Implementing an Effective Deployment Process for WSO2 Middleware


Image reference: https://www.pexels.com/photo/aerospace-engineering-exploration-launch-34521/

At WSO2, we provide middleware solutions for Integration, API Management, Identity Management, IoT and Analytics. Running our products on a local machine is quite straightforward: one just needs to install Java, download the required WSO2 distribution, extract the zip file and run the executable.

This provides a middleware testbed for the user in no time. If the solution needs multiple WSO2 products, those can be run on the same machine by changing the port-offsets and configuring the integrations accordingly.

This works very well for trying out product features and implementing quick PoCs. However, once the preliminary implementation of the project is done, a proper deployment process is needed for moving the system to production. 

Any software project needs at least three environments for managing development, testing, and live deployments. More importantly, a software governance model is needed for delivering new features, improvements and bug fixes, and for managing the overall development process.

This becomes crucial when the project implements the system on top of a middleware solution. Both middleware and application changes will need to be delivered. There might be considerable amounts of prerequisites, artifacts and configurations. Without having a well-defined process, it would be difficult to manage such projects efficiently.

A High-Level Examination

The following points need to be considered when implementing an effective deployment process:

  • Infrastructure

WSO2 middleware can be deployed on physical machines, virtual machines and on containers. Up to now most deployments have been done on virtual machines.

Around 2015, WSO2 users started moving towards container-based deployments using Docker, Kubernetes and Mesos DC/OS. As containers do not need a dedicated operating system instance, this cuts down resource requirements for running an application – in contrast to a VM. In addition, the container ecosystem makes the deployment process much easier using lightweight container images and container image registries.

We provide Puppet Modules, Dockerfiles, Docker Compose, Kubernetes and Mesos (DC/OS) artifacts for automating such deployments.

  • Configuration Management

The configuration for any WSO2 product can be found inside the relevant repository/conf folder. This folder contains a collection of configuration files corresponding to the features that the product provides.

The simplest solution is to maintain these files in a version control system (VCS) such as Git. If the deployment has multiple environments and a collection of products, it might be better to consider using a configuration management system such as Ansible, Puppet, Chef or Salt Stack for reducing configuration value duplication.

We ship Puppet modules for all WSO2 products for this purpose.

  • Extension Management

All WSO2 products provide extension points for plugging in required features.

For example, in WSO2 Identity Server a custom user store manager can be implemented for connecting to external user stores. In the WSO2 integration products, handlers or class mediators can be implemented for executing custom mediation logic. Almost all of these extensions are written in Java and deployed as JAR files. These files simply need to be copied to the repository/components/lib folder or, if they are OSGi compliant, to the repository/components/dropins folder.

  • Deployable Artifact Management

Artifacts that can be deployed in the repository/deployment/server folder fall under this category. For example, in the ESB, proxy services, REST APIs, inbound endpoints, sequences and security policies can be deployed at runtime via the above folder.

We recommend that you create these artifacts in WSO2 Developer Studio (DevStudio) and package them into Carbon Archive (CAR) files for deploying them as collections. WSO2 DevStudio provides a collection of project templates for managing deployable files of all WSO2 products. These files can be effectively maintained using a VCS.


  • Applying Patches/Updates

Previously, patches were applied to a WSO2 product by copying the patch<number> folder found inside the patch zip file to the repository/components/patches/ folder.

We recently introduced a new way of applying patches to WSO2 products with WSO2 Update Manager (WUM). The main difference between updates and the previous patch model is that fixes and improvements cannot be applied selectively; WUM applies all the fixes issued up to a given point using a CLI, which is precisely the intention of this approach.

  • Lifecycle Management

In any software project it is important to have at least three environments – one for development, one for testing and one for production deployments. New features, bug fixes or improvements need to be implemented first in the development environment and then moved to the testing environment for verification. Once the functionality and performance are verified, the changes can be applied in production (as explained in the “Rolling Out Changes” section).

The performance verification step might need to have resources identical to the production environment for executing load tests. This is vital for deployments where performance is critical.

With our products, changes can be moved from one environment to the other as a delivery.  Deliveries can be numbered and managed via tags in Git.

The key advantage of using this approach is the ability to track, apply and roll back updates at any given time.

  • Rolling Out Changes

Changes to the existing solution can be rolled out in two main ways:

1. Incremental Deployment (also known as Canary Release).

The idea of this approach is to incrementally apply changes to the existing solution without having to completely switch the entire deployment to the new solution version. This gives the ability to verify the delivery in the production environment using a small portion of the users before propagating it to everyone.

2. Blue-Green Deployment

In the Blue-Green deployment method, the deployment is switched to the newer version of the solution all at once. It needs an identical set of resources for running the newer version of the solution in parallel with the existing deployment until the newer version is verified. In case of failure, the system can be switched back to the previous version via the router. Taking such an approach may need a far more thorough testing procedure compared to the first approach.

Deployment Process Approach 1

This illustrates the simplest form of executing a WSO2 deployment effectively.

In this model the configuration files, deployable artifacts and extension source code are maintained in a version control system. WSO2 product distributions are maintained separately in a file server. Patches/updates are directly applied to the product distributions and new distributions are created. The separation of distributions and artifacts allows product distributions to be updated without losing any project content.

As shown by the green box in the middle, a deployable product distribution is created, combining the latest product distributions, configuration files, deployable artifacts and extensions. Deployable distributions can be extracted on physical, virtual machines or containers and run. Depending on the selected deployment pattern, multiple deployable distributions will need to be created for a product.
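The assembly step in the green box can be sketched as a small script. This is only an illustration of the idea, not WSO2 tooling; the overlay layout and the repository/conf and repository/deployment/server/carbonapps target paths are assumptions based on a default product layout:

```python
import pathlib
import shutil
import tempfile
import zipfile

def build_deployable(dist_zip, conf_overlay, car_files, out_zip):
    """Overlay versioned config files and CAR artifacts onto a stock
    product zip, producing a deployable distribution."""
    work = pathlib.Path(tempfile.mkdtemp())
    with zipfile.ZipFile(dist_zip) as z:
        z.extractall(work)
    root = next(p for p in work.iterdir() if p.is_dir())  # e.g. wso2ei-6.0.0/

    # Copy configuration files over repository/conf (assumed layout).
    for src in pathlib.Path(conf_overlay).rglob("*"):
        if src.is_file():
            dest = root / "repository" / "conf" / src.relative_to(conf_overlay)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dest)

    # Drop CAR files into the hot-deployment folder (assumed path).
    car_dir = root / "repository" / "deployment" / "server" / "carbonapps"
    car_dir.mkdir(parents=True, exist_ok=True)
    for car in car_files:
        shutil.copy2(car, car_dir)

    # Re-zip the result as the deployable distribution.
    shutil.make_archive(str(pathlib.Path(out_zip).with_suffix("")), "zip", work)
    return out_zip
```

Extensions would be handled the same way, copied under repository/components/lib before re-zipping.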

In a containerized deployment, each deployable product distribution will have a container image. Depending on the containerized platform, a set of orchestration and load balancing artifacts might also be used.

Deployment Process Approach 2

In the second approach, a configuration management system is used to reduce duplication of configuration data and to automate the installation process.

Similar to the first approach, deployable artifacts, configuration data and extension source code are managed in a version control system. Configuration data needs to be stored in a format that is supported by the configuration management system.

For example, with Puppet, configuration data is stored either in manifest files or in Hiera YAML files. Deployable WSO2 product distributions are not created; rather, that process is executed by the configuration management system inside a physical machine, a virtual machine or a container at container build time.

In conclusion

Any of the deployment approaches we’ve discussed above can be followed on any infrastructure. If a configuration management system is used, it can install and configure the solution on virtual machines as well as on containers. The main difference with containers is that the configuration management agent is only triggered at container image build time; it may not run while the container is running.


At the end of the day, a proper deployment process is essential. For more information and support, please reach out to us. We’d be happy to help.

Perfecting the Coffee Shop Experience With Real-time Data Analysis

Picture a coffee shop.

The person who runs this shop (let’s call her Sam) operates an online coffee ordering service. Sam intends to differentiate her value offering by providing a more personalized customer experience.

Offering customers their favorite coffee as they walk into the store, rewarding loyal customers with a free drink on special occasions – these are some of the things on her mind.

Further, the value creation is not limited to her customers; it extends to business operations such as real-time monitoring and management of inventory. Sam wants:

  • A reward system where points will be calculated based on order value. Once a reward tier point value is reached, the customer will be notified in real-time about an entitlement for a free drink
  • Inventory levels updated in real-time on order placement, with an automated notification sent to suppliers in real-time as predicted re-ordering levels are reached

Overview of the solution


Understanding the customer is the first step in providing a personalized experience. To do this, one must collect intelligence. In today’s digital business, customers pass through many ‘touchpoints’, leaving a digital trail. For example, many would search ‘health benefits of coffee’, some would publish a review on their favourite coffee type – and so on.

Application Programming Interfaces (APIs) come into play here. In a business context, APIs are a way for businesses to expose their services externally, so that consumers, using an app or some technological interface, can subscribe to and access these services.

For example, Sam can have an “Order API”  that provides a way for consumers to order coffee from her shop using their mobile app.

What we now need is a simple way to create and publish said API, and a central place for consumers to find and subscribe to it. We also need proper security and an access control mechanism.

Data leaving through the API needs to be collected, stored and analyzed to identify patterns. For example, Sam would like to know what the most used combination of ‘coffee and flavors’ is, at which point of the day, by which type of users – which would be helpful for targeted promotion campaigns. For this, we need to understand the data that comes through. 

In base terms, the system requirements for developing such a solution are to: 

  • Design an API for end user access
  • Publish end user attributes (API data) for analytics
  • Process API data in real-time
  • Communicate outcomes

The solution requires integrating API management with real-time event processing, so that API end user attributes can be published to a streaming analytics engine for real-time processing. There are many products on the market that provide these capabilities separately; however, integrating separate offerings has its own challenges.

WSO2 offers a completely integrated 100% open source  enterprise platform that enables this kind of use case – on-premise, in the cloud, and on mobile devices.

We offer both an API management and streaming analytics product, architected around the same underlying platform, which enables seamless integration between these offerings.

WSO2 API Manager is a fully open source solution for managing all aspects of APIs including creating, publishing, and exposing APIs to users in a secure and scalable manner. It is a production-ready API management solution that has the capability of managing all stages of the API lifecycle in a massively scalable production environment.

WSO2 CEP is one of the fastest open source CEP solutions available today, finding event patterns in milliseconds. It utilizes a high-performance stream processing engine which facilitates real-time event detection, correlation and notification of alerts, combined with rich visualization tools to help build monitoring dashboards.

WSO2 MSF4J is a lightweight framework that offers a fast and easy programming model and an end-to-end microservices architecture to ensure agile delivery and flexible deployment of complex, service-oriented applications.

Building an API for end user access

Let’s examine how we can build this with what we’ve listed above.

WSO2 API Manager includes the following architectural components: the API Gateway, API Publisher, API Store (Developer Portal), Key Manager, Traffic Manager and API Analytics. The API Publisher provides the primary capability to create and publish an API. The developer portal provides a way for subscribers to access the API.

API data is published to the streaming analytics engine through a default message flow. The solution we have in mind requires changing this default flow to capture and publish custom user data.

This is implemented  as a custom ‘data publishing mediator’ (see mediation extensions).  

In a nutshell, message mediation is the in-flight processing of messages, which can be modified, transformed, routed and subjected to other logic. Mediators are the components that implement this logic; linked together, they create a sequence or flow for the messages. With API Manager tooling support, a custom flow is designed using a class mediator to decode, capture and publish end user attributes.

The custom sequence extracts the decoded end user attributes passed via JWT headers. The class mediator acts as a data agent that publishes API data to WSO2 CEP. The parameters passed to the class mediator include the connection details to CEP and the published event stream.
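For illustration, the decoding step the sequence performs can be approximated outside Synapse. This is a hedged sketch: the claim key below is an assumption (the actual key depends on the Key Manager’s JWT configuration), and the real flow uses the script and class mediators shown in the appendix:

```python
import base64
import json

# Hypothetical claim URI; the actual key depends on the JWT configuration.
END_USER_CLAIM = "http://wso2.org/claims/enduser"

def end_user_from_jwt(jwt_assertion: str) -> str:
    """Decode the payload of a JWT (header.payload.signature) and pull out
    the end user claim, mirroring what the custom sequence does."""
    payload_b64 = jwt_assertion.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims[END_USER_CLAIM]
```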

Real-time processing of API Data

To capture API data for real-time processing, a matching stream definition is created on the CEP side and an event receiver is mapped to the stream. WSO2 provides a comprehensive set of Siddhi extensions; predictive analytics capabilities are added via the WSO2 ML extension.

Coffee reordering

The mechanics of reordering coffee based on a real-time analysis goes thus:

An event table represents the inventory details (drink name, ordered quantity, available quantity). The API data stream is joined with the event table, and the available quantity in stock is reduced by the order quantity as events are received. When the reorder quantity level is reached, an email notification is published.
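Stripped of Siddhi, the per-order inventory logic amounts to the following sketch. The dictionary layout and function name are illustrative; the reorder check compares the ROF-based predicted quantity against available stock, as in the Siddhi query:

```python
def place_order(inventory, drink, order_qty):
    """Decrement available stock on an order and report whether the
    reorder level (ROF * order quantity) has been reached."""
    item = inventory[drink]
    if item["qtyAvl"] < order_qty:
        raise ValueError("insufficient stock for %s" % drink)
    item["qtyAvl"] -= order_qty
    reorder_point = order_qty * item["ROF"]   # predicted reorder quantity
    return item["qtyAvl"] <= reorder_point    # True => notify the supplier
```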

Real-time rewards

Similar to the approach above, the API data is joined with an event table that represents the end user and the reward points generated per order. Reward points are equated to the order size and added with each new order placed. A reward limit threshold is defined; when the limit is reached for a new order, a notification is sent to the end user, offering a free drink.
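The reward logic in plain terms: points accumulate per subscriber, and crossing a threshold earns the free drink. A minimal sketch, where the threshold value and names are assumptions:

```python
REWARD_LIMIT = 100.0   # assumed reward-tier threshold

def add_reward_points(points_table, subscriber, order_qty):
    """Add points proportional to order size; return True when the
    subscriber crosses the reward limit and earns a free drink."""
    before = points_table.get(subscriber, 0.0)
    points_table[subscriber] = before + order_qty
    # Notify only on the order that crosses the limit, not on every order after.
    return before < REWARD_LIMIT <= points_table[subscriber]
```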

Communicating outcomes

To communicate the outcome of real-time event processing, WSO2 CEP provides the capability to generate alerts via SMS, email, user interfaces and so on through event publishers. An email notification can be generated to alert management when re-order levels are reached, and an SMS can be sent to the client to notify them of the free drink offer.

Meanwhile, the backend service for order processing is developed as a Java microservice using WSO2 MSF4J, which processes the order and responds with the order id and cost.

Why Open Source?

As a small business, Sam’s resources are limited. Her best strategy for implementing the solution is open source, which offers lower startup costs and effort compared to the high licensing fees and complications involved with commercial vendors.

Being open source also allows Sam to download, learn and evaluate the product without a high investment, thus minimizing her business risks. Depending on the results of her evaluation, she can go forward or simply throw it away.

Growing in a competitive business environment requires companies to differentiate. For small-scale businesses, implementing such solutions is more of a challenge due to resource limitations. The seamlessly integrated, open source WSO2 platform gives businesses a low-risk, cost-effective way to build and deliver real-time business value to their clients.

The code for this use case

Listed below are what you need to recreate this discussion as a demo:

 

Pre-Requisites

Download the following products and set the port offsets so that the servers can run on the same machine. WSO2 APIM runs on the default offset (0) while the WSO2 CEP offset is 4.

Products
WSO2 API Manager 2.0.0
WSO2 Complex Event Processor 4.2.0
WSO2 MSF4J (WSO2 MicroServices Framework for Java)
WSO2 App Cloud

For simplification purposes the inventory details are stored as tables of a MySQL database.

Execute the MySQL database script db_script.mysql to create the ‘Inventory’ database and the ‘rewards’ and ‘orders’ tables.

WSO2 MSF4J

  1. Execute the ‘kopi-service’ Java microservice:
    1. From <WSO2_MSF4J_HOME>/kopi-service/target, run: java -jar kopi-service-0.1.jar

Alternatively, the Java microservice can be deployed in the WSO2 App Cloud.

WSO2 CEP Setup

  1. Setup email configuration for the output event publisher
  2. Copy the JDBC driver JAR file for your database to <CEP_HOME>/repository/components/lib.
  3. Start up the server
  4. Configure a data source named “CEP-DS”: select MySQL as the RDBMS and point it to the ‘Inventory’ database created earlier
  5. The created data source is referenced by the ‘Event Tables’ defined in the Siddhi queries
  6. Deploy the “Streaming-CApp” CApp. A correct deployment should visualize the event flow as depicted

WSO2 API Manager Setup

  1. Configure WSO2 API Manager to pass end user attributes as a JWT token.
  2. Copy the custom data publisher implementation (org.wso2.api.publish-1.0-SNAPSHOT.jar) to $API_MGR_HOME/repository/components/lib
  3. Start up the server.
  4. Log in to the API Publisher.
  5. Create and publish an API with the following details:
    1. Context -<context>
    2. Version – <version>
    3. API Definition
      1. GET – /order/{orderId}
      2. POST – /order
    4. Set the HTTP endpoint: http://<server-ip:port>/WSO2KopiOutletPlatform/services/w_s_o2_kopi_outlet_service
    5. Change the default API call request flow by enabling message mediation and uploading the file datapublisher.xml as the ‘In Custom Sequence’.
  6. Log in to the API Store and subscribe to the created API
  7. Invoke the API with an order exceeding the available quantity: { "Order": { "drinkName": "Doppio", "additions": "cream", "orderQuantity": 2000 } }

 

Predicting re-order levels

The re-order quantity is initially calculated from a ‘re-order factor’ (ROF) and the order quantity (ROF * order quantity). Siddhi provides a machine learning extension for predictive analytics, so the reorder quantity can instead be predicted using a machine learning model.

The re-order data points calculated previously (with the formula) can be used as data sets to generate a machine learning model with WSO2 Machine Learner. A predicted re-order quantity is calculated using the “Linear Regression” algorithm, with the re-order factor (ROF) and coffee type as the features.

The Siddhi query for predicting the reorder quantity is commented out under ‘Predict reorder quantity using Machine Learning extensions’. It can be executed by replacing the query under ‘Calculating reorder quantity’.
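As a sanity check on this idea, fitting a simple linear model to data points generated by the ROF formula recovers the formula itself. Below is a pure-Python least-squares sketch using only the ROF feature (the sample values are made up; WSO2 ML is not involved here):

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b with a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training points generated by the original rule: reorder = ROF * order quantity
order_qty = 50.0
rofs = [1.0, 1.5, 2.0, 2.5]
reorders = [rof * order_qty for rof in rofs]

a, b = fit_linear(rofs, reorders)

def predict_reorder(rof):
    return a * rof + b
```

On data that follows the formula exactly, the fitted slope equals the order quantity and the intercept is zero; real historical data would deviate, which is where the model adds value.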

Appendix: code

Custom Sequence

<?xml version="1.0" encoding="UTF-8"?>
<sequence name="publish-endUser" trace="disable" xmlns="http://ws.apache.org/ns/synapse">
  <log level="full"/>
  <property expression="get-property('$axis2:HTTP_METHOD')" name="VERB"
    scope="default" type="STRING" xmlns:ns="http://org.apache.synapse/xsd"/>
  <property expression="get-property('transport','X-JWT-Assertion')"
    name="authheader" scope="default" type="STRING" xmlns:ns="http://org.apache.synapse/xsd"/>
  <log level="custom">
    <property expression="base64Decode(get-property('authheader'))"
      name="LOG_AUTHHEADER" xmlns:ns="http://org.apache.synapse/xsd"/>
  </log>
  <property expression="base64Decode(get-property('authheader'))"
    name="decode_auth" scope="default" type="STRING" xmlns:ns="http://org.apache.synapse/xsd"/>
  <script description="" language="js"><![CDATA[
    var jsonStr = mc.getProperty('decode_auth');
    var val = jsonStr.split("}");
    var decoded = val[1].split("enduser\":");
    var temp_str = decoded[1].split('"');
    mc.setProperty("end_user", temp_str[1]);
  ]]></script>
  <property expression="get-property('end_user')" name="endUser"
    scope="default" type="STRING"/>
  <log level="custom">
    <property expression="get-property('endUser')" name="Log_Enduser"/>
  </log>
  <class name="org.wso2.api.publish.PublishMediate">
    <property name="dasPort" value="7619"/>
    <property name="dasUsername" value="admin"/>
    <property name="dasPassword" value="admin"/>
    <property name="dasHost" value="localhost"/>
    <property name="streamName" value="Data_Stream:1.0.0"/>
  </class>
</sequence>

Siddhi Query

/* Enter a unique ExecutionPlan */
@Plan:name('Predict')

/* Enter a unique description for ExecutionPlan */
-- @Plan:description('ExecutionPlan')

/* define streams/tables and write queries here ... */

@Import('API_Stream:1.0.0')
define stream APIStream (drinkName string, additions string, orderQuantity double, endUser string);

@Export('allOrder_Stream:1.0.0')
define stream allOrderstream (drinkName string, qtyAvl double, qtyPredict double);

@Export('predictStream:1.0.0')
define stream predictStream (drinkName string, qtyPredict double);

@Export('Order_Stream:1.0.0')
define stream orderStream (drinkName string, orderQty double, qtyAvl double, qtyOrder double, ROF double);

@Export('reOrder_Stream:1.0.0')
define stream reOrderStream (drinkName string, qtyAvl double, qtyPredict double);

@Export('outOrder_Stream:1.0.0')
define stream outOrderStream (drinkName string, qtyOrder double, qtyReorder double, ROF double);

@Export('ULPointStream:1.0.0')
define stream ULPointStream (subScriber string, points double);

@Export('totPointStream:1.0.0')
define stream totPointStream (subScriber string, totPoints double);

@Export('FreeOrderStream:1.0.0')
define stream FreeOrderStream (subScriber string, points double);

@from(eventtable='rdbms', datasource.name='CEP-DS', table.name='orders')
define table drinkEventTable (drinkName string, qtyAvl double, qtyOrder double, ROF double);

@from(eventtable='rdbms', datasource.name='CEP-DS', table.name='rewards')
define table pointEventTable (subscriber string, points double);

from APIStream#window.length(0) as t join drinkEventTable as d
on t.drinkName == d.drinkName
select t.drinkName as drinkName, t.orderQuantity as orderQty, d.qtyAvl as qtyAvl, d.qtyOrder as qtyOrder, d.ROF as ROF
insert into orderStream;

/* ------ Drink Reordering ------ */

/* ------ Calculating reorder quantity ------ */
from orderStream#window.length(0) as p join drinkEventTable as o
on o.drinkName == p.drinkName
select o.drinkName, o.qtyAvl, (p.orderQty * p.ROF) as qtyPredict
insert into allOrderstream;

/* ------ Predict reorder quantity using Machine Learning extensions ------ */
/*
from orderStream
select drinkName, ROF
insert into ROF_Incoming;

from ROF_Incoming#ml:predict('registry://_system/governance/ml/Reorder.Model','double',drinkName,ROF)
select drinkName, qtyReorder as qtyPredict
insert into predictStream;

from predictStream#window.length(0) as p join drinkEventTable as o
on o.drinkName == p.drinkName
select o.drinkName, o.qtyAvl, p.qtyPredict
insert into allOrderstream;
*/
/* ------------------------------ */

partition with (drinkName of allOrderstream)
begin @info(name = 'query3')
from allOrderstream[qtyPredict >= qtyAvl]
select drinkName, qtyAvl, qtyPredict
insert into #tempStream2;

from e2=#tempStream2
select e2.drinkName, e2.qtyAvl, e2.qtyPredict
insert into reOrderStream
end;

from orderStream[(qtyAvl - orderQty) >= 0]#window.length(0) as t join drinkEventTable as d
on t.drinkName == d.drinkName
select t.drinkName as drinkName, (d.qtyAvl - t.orderQty) as qtyAvl
update drinkEventTable
on drinkName == drinkEventTable.drinkName;

/* ------------------------------ */

/* ------ Offer free drink ------ */
from APIStream
select endUser as subScriber, orderQuantity as points
insert into ULPointStream;

from ULPointStream as u join pointEventTable as p
on u.subScriber == p.subscriber
select u.subScriber as subscriber, (u.points + p.points) as points
update pointEventTable
on subscriber == pointEventTable.subscriber;

from ULPointStream[not(pointEventTable.subscriber == subScriber in pointEventTable)]
select subScriber as subscriber, points
insert into pointEventTable;

from ULPointStream as u join pointEventTable as p
on u.subScriber == p.subscriber
select u.subScriber as subScriber, p.points as totPoints
insert into totPointStream;

partition with (subScriber of totPointStream)
begin @info(name = 'query4')
from totPointStream[totPoints >= 100]
select *
insert into #tempStream;

from e1=#tempStream
select subScriber, totPoints as points
insert into FreeOrderStream
end;

/* ------------------------------ */

What does it take to build a platform?

At WSO2, we pride ourselves on having built a very strong runtime platform called Carbon. All of our products are based on Carbon, without exception.

For Carbon V5, we rebooted the architecture to make it leaner, as it had grown a bit of a fat belly over the last 8 years, but the principles remain the same: a modular, composable, extensible architecture. We continue to leverage OSGi, which has served us well, but we are removing dependencies on technical components such as Axis2 that also served us well but have unfortunately grown old and are no longer a good fit for the new IT world.

Developer Studio, which is based on Eclipse, already has a modular, extensible architecture. In V4, we are just changing the packaging so that each product team can release its tooling individually. Very soon, you will see dedicated tooling for each product (such as Dev Studio for ESB) appear on every product page. Of course, as with Carbon, you will be able to combine the ESB features with, for example, the DSS and CEP ones if you need a single IDE across all the WSO2 products.

For analytics, Data Analytics Server (DAS) is our combined offering for all types of analytics: batch (analyzing data at rest), streaming (analyzing data in real time), and predictive (learning from existing data to predict behavior). Again, Data Analytics Server works well as a platform, since the applications installed on top of it (aka toolboxes) are packaged and deployed individually. So Data Analytics for API Manager will be nothing more than the DAS product with pre-installed toolboxes for log analysis, API activity, and technical monitoring. As with Carbon and Dev Studio, analytics for multiple products can be combined on a single DAS server.

In fact, in order to provide a consistent experience across a large number of products, you have no choice but to think about the underlying components first. What I explained above extends to our stores (API Store, Apps Store, Processes Store, etc.). To do that right, we first created an enterprise store, which is really a framework for building your own store. To build dashboards for analytics, we needed a dashboard product on which our teams could build their own visualizations and gadgets. That was User Engagement Server, now renamed Dashboard Server.

This philosophy is represented in the diagram below: at the bottom, you find the foundation servers and frameworks. On top of those, product teams build extensions without having to worry about the core functionality of the framework. Of course, customers can also create extensions, or modify the default ones, typically adding their own analytics and visualizations.

At WSO2 we are committed to this approach, as it has allowed us to quickly evolve and innovate over the past 10 years. Customers benefit from this in many ways, primarily through consistency in installation, behavior, and operational management.

Transform Your Enterprise IT: Integrate and Automate

Most enterprises deal with a variety of common IT problems for which they seek quick fixes. One example is the need to maintain five different usernames and passwords to log in to five different systems. Another typical example is the closing of a sales deal: the sales department concludes the deal and ensures the goods are delivered, and this is updated on the sales records. However, when the finance department reconciles invoices against sales at the end of the quarter, there might be mismatches because the invoicing process was missed.


To address these issues, most enterprises will use a combination of basic IT and collaboration software to manage day-to-day requirements. And over time, these requirements will change, prompting a slight shift in the enterprise’s IT landscape too. This may result in a situation where different teams within the organization will find the most efficient ways to carry out tasks and meet their IT requirements with the use of packaged software, possibly by building their own, or even subscribing to more SaaS-type offerings.

While this might temporarily fix specific problems, it will pose long-term challenges, as such measures are often not pre-planned and do not follow a particular IT roadmap. The actual negative effects of individual teams working in silos are only felt when the company starts to grow and the use of various systems increases as well. Eventually, the use of several systems that don’t talk to each other will cause operational issues and even hurt motivation among employees.

The recurrent problems with these multiple systems working in silos include extensive manual effort, errors, blame, rework, frustration, complaints, and the need to manage multiple passwords. These in turn result in inefficiencies.

To address these challenges, the enterprise needs an easy-to-implement, cost-effective solution. There’s no guarantee, though, that a plug-and-play system exists, or one that could be customized to meet the enterprise’s exact requirements. The enterprise would seek a unique, bespoke solution that would mean either changing the way they work with existing software or rethinking the software itself.

The most viable option would be to integrate the systems (which, of course, have proven to be efficient to meet a specific requirement) used by different functions and then explore some sort of automation that will provide relief to employees.

WSO2’s highly acclaimed open source middleware platform has the capabilities to enable seamless integration of IT applications, streamlining the day-to-day business activities of a given enterprise. This in turn will boost efficiency and integration across business functions and teams, and improve overall productivity as well.

For instance, WSO2 Identity Server (WSO2 IS) can define an identity for a user in a particular organization, enabling him/her to log in to multiple systems, on-cloud or on-premise, with a single username and password.

The enterprise too will benefit as WSO2 IS offers provisioning capabilities that allow your IT to register and auto-provision new employees across multiple systems as well as easily de-provision them when they leave the organization.

WSO2 Enterprise Service Bus can meet all your integration challenges with its capability to connect various systems that speak different languages. It also comes with a defined set of connectors to further support integration of systems, be it on the cloud or on-premise.

Once all of your systems have been integrated, you can leverage WSO2 Data Analytics Server (WSO2 DAS) to pull reports from different functions within your organization and automatically collate data into the valuable information required to make business decisions. WSO2 DAS has built-in dashboard capabilities that will automatically create and publish dashboards in real time.

Moreover, all WSO2’s products are 100% open source, which gives enterprises the freedom of choice and empowers the business with limitless possibilities to expand.

Learn more about WSO2’s comprehensive and open platform for your connected enterprise.

For more details on how to establish friendly enterprise IT and get more love from your team, watch this talk by WSO2’s VP Operations, Shevan Goonetilleke.

Enterprise Mobility Management: Moving Beyond Traditional Mobile Device Management

Today, managing mobility is not just confined to embracing the bring your own device (BYOD) or corporately owned, personally enabled (COPE) concepts in your enterprise, or which device platform or operating system you use. The focus has shifted to more advanced strategies that enable enterprises to become connected and reach a new level of agility through digital transformation.

While the modern enterprise mobility management landscape has transformed significantly, it has also brought about more complexities.

Employees now work from locations all over the world, access data from various data centers, and share this data not only through corporate networks but also through cloud services and APIs. Because of this globalization and the advent of cooler and more convenient mobile devices, enterprises started adopting mechanisms that consider all these factors in their infrastructure in order to make their employees, and the company as a whole, more productive.


This made device management about more than managing, securing, and storing device data. It’s now about making mobility management part of the entire enterprise ecosystem, which means you need to think about broader aspects like governance, analytics, and identity provisioning. Such a system needs to:

  • Be extensible enough to support all devices and operating system types.
  • Have a plug-in model that allows you to integrate with other tools (such as analytics and governance tools) existing in your environment.
  • Be able to moderate, approve and provision applications through a corporate app store.
  • Produce analytics dashboards, audit trails and reports to supplement business strategies.
  • Have comprehensive policy management and enforcement functionality with capabilities such as compliance monitoring, containerization, data encryption and password enforcement.

So how exactly do you go about building such a comprehensive enterprise mobility management system? By using the right tool for the right job. You need to implement a tool that not only meets the above requirements, but is also scalable enough to accommodate your enterprise’s growth. It should also be user-friendly and customizable in order to win over your employees.

Where can you find such a solution? Right here. WSO2 Enterprise Mobility Manager (WSO2 EMM) offers all of this and more. Key advantages of adopting WSO2 EMM:

  • Gives you the ability to compose, enforce and manage granular level security policies for individual and groups of devices.
  • Enables strategic decision making by making information gathered across all mobile business activities available through powerful dashboards with analytics and reporting.
  • Strengthens security through data encryption and password enforcement among other things.
  • Embraces device ownership schemes like BYOD, enabling employees to be more efficient and make decisions faster, while saving enterprises the procurement and data plan costs associated with each user.

WSO2 EMM is a 100% open source comprehensive enterprise-grade platform with all the capabilities you need for enterprise mobility management including device configuration management, policy enforcement, app management, device data security, and compliance monitoring.

To learn more about WSO2 EMM and its capabilities, watch WSO2 Technical Lead Prabath Abeysekara’s talk on Enterprise Mobility Management: Moving Beyond Traditional MDM at WSO2Con Asia 2016.

Modern Solution Development: The Battle Between ‘Retaining’ and ‘Changing’ Technology

In today’s fast-paced technology world, change is constant and rapid. New concepts continually emerge, gain traction, disappear, and reemerge. While it’s important to embrace this evolution, core concepts that work in older technology should not be tossed out either.  

During his closing keynote at WSO2Con USA 2015, Dr. Donald Ferguson, former vice president and CTO of Dell, identified concepts independent of their specific technology realization in order to highlight requirements that current technologies don’t meet.


He noted that although concepts such as loose coupling, service delivery, and asynchronous messaging have been used across different technologies, such as the Common Object Request Broker Architecture (CORBA), Web services, and service-oriented architecture (SOA), each of these is just an improvement based on the same ideas. “The key thing when going forward is to make sure that we don’t lose some of the things that we managed to bring forward because they were good,” he adds.

He explains that these similarities, improvements, and limitations are apparent when comparing SOA to microservices, for instance; features such as programming style, code type, messaging type, and the use of databases are similar in both, whereas there are important distinctions in evolution, systematic change, and scaling. “It’s more about how you do it – the internal architecture, than the externals. With one exception – smart endpoints and dumb pipes,” says Ferguson. This concept encourages the microservices community to use a lightweight message bus (a hub) that acts solely as a message router and leaves the smart part (receiving a request, applying the appropriate logic, and producing a response) to the service itself.

But as Ferguson states, “You don’t want just a hub, you want it to be active.” If you open any book on enterprise application design patterns, it will first show you what not to do: a monolithic point-to-point architecture. To avoid this, you need to connect everything through a hub that is able to reformat, route, and combine messages, as well as understand the different protocols and data types that will travel across it. This is where middleware, and specifically the enterprise service bus (ESB), becomes important.

Ferguson notes that dumb fast messaging seems more appealing than using a powerful ESB but it just repeats the fallacies of quick point-to-point connections. Using an active hub and taking advantage of middleware to do it is much more advantageous because it adds value and improves robustness, reusability and scalability.
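To make the idea of an active hub concrete, here is a minimal sketch of a content-based router in Java. This is a toy illustration, not a WSO2 API; the class, route predicates, and endpoint names are all invented for the example. The hub inspects each message, reformats it, and routes it to the matching endpoint, so the endpoints never reference each other directly:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

// Toy "active hub": a router that reformats, routes, and fans out messages,
// keeping the connected endpoints decoupled from one another.
public class ActiveHub {
    record Route(Predicate<String> matches, Function<String, String> transform, String endpoint) {}

    private final List<Route> routes = new ArrayList<>();
    public final Map<String, List<String>> delivered = new HashMap<>();

    public void addRoute(Predicate<String> matches, Function<String, String> transform, String endpoint) {
        routes.add(new Route(matches, transform, endpoint));
    }

    public void send(String message) {
        for (Route r : routes) {
            if (r.matches().test(message)) {
                // reformat first, then route: the mediation logic lives in the hub
                delivered.computeIfAbsent(r.endpoint(), e -> new ArrayList<>())
                         .add(r.transform().apply(message));
            }
        }
    }

    public static void main(String[] args) {
        ActiveHub hub = new ActiveHub();
        hub.addRoute(m -> m.startsWith("ORDER:"), m -> m.substring(6).toUpperCase(), "fulfilment");
        hub.addRoute(m -> m.startsWith("INVOICE:"), m -> m.substring(8), "finance");
        hub.send("ORDER:42 widgets");
        System.out.println(hub.delivered); // {fulfilment=[42 WIDGETS]}
    }
}
```

A real ESB layers protocol bridging, error handling, and persistence on top of this routing pattern, but the core responsibility, putting the mediation logic in the hub rather than in every pairwise connection, is the same.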

He further adds that any organization can realize tremendous value from microservices and other new technology; however, this sometimes carries the risk of losing benefits, like interface dependency and optimized composition, that emerged in the past. “This needs to be done through application design patterns and middleware that empowers them…that’s part of the value WSO2 is,” he concludes.

WSO2’s complete middleware stack includes the WSO2 integration, API management, security, and analytics platforms. By leveraging these components and more, you can easily develop modern solutions regardless of which technology you use.

To learn more, watch Don Ferguson’s presentation at WSO2Con US 2015.

 

How you can Increase Agility and Expandability with Event Driven Architecture (EDA)

From ordering your favorite kind of pizza or a taxi to manufacturing and financial processes, everything is event driven today. People expect to do everything immediately, get instant feedback on the status of their request, and interact in real-time with anybody involved in the process.

John Mathon, the former vice president of enterprise evangelism at WSO2, wrote a white paper which explores how you can keep pace with these demands by implementing event driven architecture (EDA) in your enterprise.

EDA is essentially a messaging approach that notifies interested parties of events as they occur so that those parties can act on them. The publish/subscribe model was implemented in the earliest real-time event-driven systems. Anonymity, discoverability, and guaranteed delivery were a few of the characteristics that made it popular.

But this simple model proved insufficient for the demanding and varied needs of subscribers, notes Mathon. This led to the rise of the enterprise service bus (ESB), which standardized enterprise integration patterns; the business process server (BPS), which allowed messages to trigger business processes that dealt with events; and the business activity monitor, now named Data Analytics Server (DAS), to monitor the health of enterprises through statistics.

These tools became standard components in an EDA and are useful even today, which is why IoT is reusing pub/sub all over again.
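The pub/sub model that Mathon describes can be sketched in a few lines. The broker below is a toy in-memory illustration (not the API of WSO2 Message Broker or any other product); its value is in showing the anonymity property: publishers and subscribers never reference each other, only the topic.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy in-memory publish/subscribe broker. Subscribers register interest
// in a topic; publishers emit events without knowing who is listening.
public class PubSubBroker {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    public void publish(String topic, String event) {
        // One publish fans out to every subscriber of the topic.
        for (Consumer<String> handler : subscribers.getOrDefault(topic, List.of())) {
            handler.accept(event);
        }
    }

    public static void main(String[] args) {
        PubSubBroker broker = new PubSubBroker();
        broker.subscribe("orders", e -> System.out.println("billing saw " + e));
        broker.subscribe("orders", e -> System.out.println("shipping saw " + e));
        broker.publish("orders", "order#42"); // both subscribers are notified
    }
}
```

Because subscribers are looked up by topic at publish time, new interested parties can be added without touching the publisher, which is exactly what makes the pattern resilient to change.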


The easiest, fastest, and most efficient way of implementing EDA in your enterprise is to incorporate existing event-driven technologies. You may think writing dedicated software would be more cost-efficient and cater more to your specific needs, but in the long run the cost of maintenance would be over a dozen times the initial cost of development.

Existing tools are designed to increase the performance and reliability of your system. They’re also easy for non-programmers to use thanks to features such as drag-and-drop components, and they can handle large loads while remaining robust, secure, and resilient to failure.

You can choose a specific tool for a specific problem; for example, long-running processes use BPS while short-running ones use message broker (MB). Also, when combined, the tools can provide additional power by working together to achieve one goal.

The problem with combining tools is that each can be a large monolithic entity that requires significant communication bandwidth and can increase the load on servers. WSO2 solves this problem because all the tools you require are built as lightweight components on the same base framework, making it possible to combine them in the same Java runtime.

When implementing an EDA you need to keep in mind the message flow rates and the characteristics of the message flows. Make sure not to create extremely large messages or do a lot of computation during processing. You also need to consider whether you will be designing for microservices, since your architecture design depends on this. API management is another key factor to keep in mind. And lastly, you need to know which tool to use for which job.

WSO2 offers a full suite of open source components for EDA to implement highly scalable and reliable enterprise-grade solutions. This spans a complete middleware stack, including the WSO2 integration, analytics, security, and API management platforms.

For more details download John’s whitepaper here.

Event-Driven Architecture and the Internet of Things

It’s common knowledge now that the Internet of Things is projected to be a multi-trillion-dollar market, with billions of devices expected to be sold within a few years. It’s happening already. What’s driving IoT is a combination of low-cost hardware and low-power communications, enabling virtually everything to become connected cheaply. Even Facebook talked about it at their recent F8 conference (photo by Maurizio Pesce).


And why wouldn’t they? A vast array of devices that make our lives easier and smarter is flooding the market, ranging from fuel-efficient thermostats to security systems, drones, and robots. The industrial market for connected control and monitoring has long existed and will expand in automated factories, logistics automation, and building automation. However, efficiencies are also being found in new areas. For instance, connected tools for the construction site enable construction companies to better manage construction processes. We are also seeing increased intelligence from what can be referred to as the network effect: the excess value created by the combination of devices all being on a network.

What’s remarkable is that all IoT protocols share one common characteristic: they are all designed around publish/subscribe. The benefit of publish/subscribe event-driven computing is simplicity and efficiency.

Devices or endpoints can be dynamic, added or lost with little impact on the system. New devices can be discovered, and rules applied to add them to the network and establish their functionality. All IoT standards support some form of discovery mechanism so that new devices can be added as seamlessly as possible. Over the air, a message can be delivered once to many listeners simultaneously without any extra effort by the publisher.
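As a concrete illustration of how loosely publishers and subscribers are coupled, MQTT (a common IoT protocol) lets subscribers use wildcard topic filters, so messages from newly added devices reach existing subscribers with no reconfiguration. The following is a simplified sketch of MQTT-style filter matching, where '+' matches a single topic level and '#' matches all remaining levels; it ignores some edge cases in the real specification:

```java
// Simplified sketch of MQTT-style topic filter matching: this is how a
// broker decides which subscribers receive a published message.
public class TopicMatch {
    public static boolean matches(String filter, String topic) {
        String[] f = filter.split("/");
        String[] t = topic.split("/");
        int i = 0;
        for (; i < f.length; i++) {
            if (f[i].equals("#")) return true;       // '#' matches all remaining levels
            if (i >= t.length) return false;         // filter is longer than the topic
            if (!f[i].equals("+") && !f[i].equals(t[i])) return false;
        }
        return i == t.length;                        // both consumed exactly
    }

    public static void main(String[] args) {
        System.out.println(matches("home/+/temperature", "home/kitchen/temperature")); // true
        System.out.println(matches("home/#", "home/kitchen/humidity"));                // true
        System.out.println(matches("home/+/temperature", "home/kitchen/humidity"));    // false
    }
}
```

A sensor installed tomorrow can publish to `home/garage/temperature` and every subscriber to `home/+/temperature` will hear it, without either side knowing about the other.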

Addressing The Challenges

Does all of this efficiency and flexibility sound too good to be true? You guessed right. The greatest challenge is security and privacy. While most protocols support encryption of messages, there are serious security and privacy issues with today’s protocols. There are many IoT protocols, and this diversity indicates that a lot of devices will not be secure; it is likely that different protocols will have different vulnerabilities. Authentication of devices is not generally performed, so various attacks based on impersonation are possible.

Most devices and protocols don’t automate software updating, and sometimes complicated action is needed to update the software on devices. This can lead to vulnerabilities persisting for long periods. Eventually, however, these issues will be worked out: devices will automatically download authenticated updates, packets will be encrypted to prevent eavesdropping, and it will be harder to hack IoT device security, albeit this could take years. Enterprise versions of devices will undoubtedly flourish, thereby supporting better security, as this will be a requirement for enterprise adoption.

Publish/subscribe generates a lot of excitement due to the agility it gives people to leverage information easily, enabling faster innovation and more network effect. Point-to-point technologies lead to brittle architectures in which adding or changing functionality is burdensome.

WSO2 has staked out a significant amount of mindshare and software to support IoT technologies. WSO2 helps companies with its lean, open-source, componentized event-driven messaging and mediation technology, which can go into devices and sensors for communication between devices and services on hubs, in the cloud, or elsewhere; big data components for streaming, storing, and analyzing data from devices; process automation and device management for IoT; and application management software for IoT applications and devices. WSO2 can help large and small firms deploying or building IoT devices bring products to market sooner and make their devices or applications smarter, easier, and cheaper to manage.

To learn more about event-driven architecture refer to our white paper – Event-Driven Architecture: The Path to Increased Agility and High Expandability

Want to know more about using analytics to architect solutions? Read  IoT Analytics: Using Big Data to Architect IoT Solutions

 

Enabling Microservice Architecture with Middleware

Microservices are rapidly gaining popularity among today’s enterprise architects looking to ensure continuous, agile delivery and flexible deployments. However, many mistake microservice architecture (MSA) for a completely new architectural pattern. What most don’t realize is that it’s an evolution of service-oriented architecture (SOA): an iterative architectural approach and development methodology for complex, service-oriented applications.


Asanka Abeysinghe, the vice president of solutions architecture at WSO2, recently wrote a white paper, which explores how you can efficiently implement MSA in a service-oriented system.

Here are some insights from the white paper.

When implementing MSA you need to create sets of services for each business unit in order to build applications that benefit their specific users. When doing so you need to consider the scope of the service rather than the actual size. You need to solve rapidly changing business requirements by decentralizing governance and your infrastructure should be automated in such a way that allows you to quickly spin up new instances based on runtime features. These are just a few of the many features of MSA, some of which are shared by SOA.

MSA combines the best practices of SOA and links them with modern application delivery and tooling (Docker and Kubernetes) and technology to carry out automation (Puppet and Chef).

In MSA you need to give importance to how you scope out a service rather than its size. The inner architecture of an MSA addresses the implementation architecture of the microservices themselves. But to enable flexible and scalable development and deployment of microservices, you first need to focus on the outer architecture, which addresses the platform capabilities around them.

Enterprise middleware plays a key role in both the inner and outer architecture of MSA. Your middleware needs to offer high-performance functionality and support various service standards. It has to be lean, use minimal resources in your infrastructure, and be DevOps-friendly. It should allow your system to be highly scalable and available by having an iterative architecture and being pluggable. It should also include a comprehensive data analytics solution to ensure design for failure.

This may seem like a multitude of functionality and requirements that are just impossible to meet. But with WSO2’s complete middleware stack, which includes the WSO2 Microservices Framework for Java and WSO2 integration, API management, security and analytics platforms, you can easily build an efficient MSA for your enterprise.

MSA is no doubt the way forward. But you need to incorporate its useful features into your existing architecture without losing applications and key SOA principles that are already there. By using the correct middleware capabilities, enterprises can fully leverage the advantages provided by an MSA to enable ease of implementation and speed of time to market.

For more details download Asanka’s whitepaper here.