Free or low-cost sources of unstructured information, such as Internet news and online discussion sites, provide detailed local and near real-time data on disease outbreaks, even in countries that lack traditional public health surveillance. To improve public health surveillance and, ultimately, interventions, we examined 3 primary systems that process event-based outbreak information: Global Public Health Intelligence Network, HealthMap, and EpiSPIDER. Despite similarities among them, these systems are highly complementary because they monitor different data types, rely on varying levels of automation and human analysis, and distribute distinct information. Future development should focus on linking these systems more closely to public health practitioners in the field and establishing collaborative networks for alert verification and dissemination. Such development would further establish event-based monitoring as an invaluable public health resource that provides critical context and an alternative to traditional indicator-based outbreak reporting.
I got an update on the Oracle Business Rules product recently. Oracle is an interesting company - they have the components of decision management but do not yet have them under a single umbrella. For instance, they have in-database data mining (blogged about here), the Real Time Decisions (RTD) engine, event processing rules and so on. Anyway, this update was on business rules.
One year ago I penned Event Processing in Twitter Space, and today parts of the net are buzzing about Twitter.
In a nutshell, Twitter is a one-to-many communications service that uses short messages (140 characters or fewer). Following on the heels of the blogging phenomenon, Twitter has been used primarily for microblogging and group communications.
Twitter, and Twitter-like technologies, have great promise in many areas. For example, you could subscribe to the @tsunamiwarning channel on your dream island vacation and get instant updates on potential disasters. A team working in network management could subscribe to the @myserverstatus channel and receive updates on the health of their company's IT services. Passengers could subscribe to the @ourgatestatus channel and follow up-to-date information on their flight.
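The channel pattern above is easy to picture in code. Here is a minimal Python sketch of a one-to-many dispatcher; the channel names come from the examples above, but the ChannelHub class and its methods are purely illustrative, not any real Twitter API.

```python
# Minimal sketch of the one-to-many "channel" pattern described above.
# Channel names are from the examples; this is an in-memory
# publish/subscribe dispatcher, not the real Twitter API.
from collections import defaultdict

class ChannelHub:
    def __init__(self):
        self._subscribers = defaultdict(list)  # channel -> list of callbacks

    def subscribe(self, channel, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel, message):
        if len(message) > 140:                 # Twitter-style length limit
            raise ValueError("message exceeds 140 characters")
        for callback in self._subscribers[channel]:
            callback(message)

hub = ChannelHub()
hub.subscribe("@tsunamiwarning", lambda m: print("ALERT:", m))
hub.publish("@tsunamiwarning", "Magnitude 7.1 quake detected; watch issued.")
```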
Twitter was created to answer the simple question, “What are you doing now?”
Bruce makes an interesting comment on business rules too: that “routing logic in process gateways” is not “business rules”. That doesn’t really make sense: certainly some gateways will be process-housekeeping decisions of little interest to the business user, but others will surely embed business-critical decisions. On the other hand, it has long been acknowledged that a best practice for BPM is to delegate such business decisions to a managed decision service - hence the explicit new business rule (aka decision) task in BPMN 2.0. And, in the CEP world, the equivalent is for tools like TIBCO BusinessEvents to invoke a decision managed by its Decision Manager tool.
Actually, the conceptual model of an EPN (event processing network) can be thought of as a kind of data flow (although I prefer the term event flow, as what is flowing is really events). The processing unit is the EPA (Event Processing Agent). There are indeed two types of input to an EPA, which can be called "set-at-a-time" and "event-at-a-time". Typically, SQL-based languages are geared more toward "set-at-a-time", while other language styles (like ECA rules) work "event-at-a-time". From a conceptual point of view, an EPA gets events over channels; one input channel may be of a "stream" type, while on another the events flow one by one. Since some functions are naturally set-oriented, others are naturally event-at-a-time oriented, and an application may not fall nicely into one category, it makes sense to build a kind of hybrid system and use the EPN as the conceptual model on top of both...
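To make the distinction concrete, here is a minimal Python sketch, under assumptions of my own (the event shape, thresholds, and window are invented), of an event-at-a-time agent and a set-at-a-time agent that could coexist in one EPN:

```python
# Sketch contrasting the two EPA input styles described above.
# Event fields, thresholds, and the window size are assumptions.

# Event-at-a-time: an ECA-style agent reacts to each event as it arrives.
def event_at_a_time_agent(event, emit):
    if event["type"] == "temperature" and event["value"] > 100:
        emit({"type": "overheat_alert", "source": event["source"]})

# Set-at-a-time: a stream/SQL-style agent operates on a whole window of events.
def set_at_a_time_agent(window, emit):
    readings = [e["value"] for e in window if e["type"] == "temperature"]
    if readings and sum(readings) / len(readings) > 90:
        emit({"type": "sustained_heat_alert",
              "avg": sum(readings) / len(readings)})

# A hybrid EPN routes one channel event-by-event and buffers another
# channel into windows before handing it to the set-oriented agent.
alerts = []
event_at_a_time_agent({"type": "temperature", "value": 105, "source": "s1"},
                      alerts.append)
set_at_a_time_agent([{"type": "temperature", "value": v} for v in (92, 95, 91)],
                    alerts.append)
print(alerts)
```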
The main characteristic to be aware of in these tools is that BE is primarily rule-based (using an embedded rule engine), whereas BW and iProcess are orchestration / flow engines. In BE we can use a state diagram to indicate a sequence of states which may define what processes / rules apply, but this is really just another way of specifying a particular type of rule (i.e., state-transition rules; a minimal sketch follows the lists below).
The main advantages to specifying behavior as declarative rules are:
Handling complex, event-driven behavior and choreography
Iterative development, rule-by-rule
The main advantages of flow diagrams and BPMN-type models are:
Ease of understanding (especially for simpler process routes)
Process paths are pre-determined and can therefore be guaranteed.
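As promised above, here is a minimal Python sketch of a state diagram expressed as declarative state-transition rules, in the spirit of the BE discussion; the order-handling states and events are invented for illustration:

```python
# Sketch: a state diagram expressed as declarative state-transition rules.
# The order-handling domain, states, and events are invented.

# Each rule is (current_state, event) -> next_state; behavior grows
# iteratively, rule by rule, rather than by redrawing a flow diagram.
TRANSITION_RULES = {
    ("received", "validated"):   "approved",
    ("received", "rejected"):    "cancelled",
    ("approved", "shipped"):     "in_transit",
    ("in_transit", "delivered"): "closed",
}

def apply_event(state, event):
    next_state = TRANSITION_RULES.get((state, event))
    if next_state is None:
        raise ValueError(f"no rule for event {event!r} in state {state!r}")
    return next_state

state = "received"
for event in ("validated", "shipped", "delivered"):
    state = apply_event(state, event)
print(state)  # closed
```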
In combination these tools provide many of the IT capabilities required in an organization. For example, a business automation task uses BW to consolidate information from multiple existing sources, with human business processes, such as exception handling, managed by iProcess. BE is used to consolidate (complex) events from systems to provide business information, to feed into or drive both BW and iProcess, and to monitor end-to-end system and case performance.
On Event Processing Agents implies a “new” event processing reference architecture with terms like:
(1) simple event processing agents for filtering and routing,
(2) mediated event processing agents for event enrichment, transformation, validation,
(3) complex event processing agents for pattern detection, and
(4) intelligent event processing agents for prediction and decisions.
Frankly, while I generally agree with the concepts, I think the terms in On Event Processing Agents tend to add to the confusion, because they follow, almost exactly, the same reference architecture (and terms) as MSDF, illustrated again below to aid the reader.
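For readers who prefer code to diagrams, here is a hedged Python sketch of the agent categories chained into a tiny pipeline; the event fields, thresholds, and detected pattern are my own assumptions, not part of either reference architecture:

```python
# Sketch of the four agent categories above, chained into a pipeline.
# Event fields, thresholds, and the pattern are assumptions.

def simple_agent(events):
    """(1) Filtering and routing: pass through only events of interest."""
    return [e for e in events if e.get("type") == "login_failure"]

def mediated_agent(events):
    """(2) Enrichment / transformation / validation: add a derived field."""
    return [{**e, "severity": "high" if e["count"] >= 3 else "low"}
            for e in events if "count" in e]

def complex_agent(events):
    """(3) Pattern detection: derive a complex event from simple ones."""
    total = sum(e["count"] for e in events)
    return [{"type": "brute_force_suspected", "total": total}] if total >= 5 else []

# (4) An "intelligent" agent would sit after complex_agent, scoring the
# derived event with a predictive model before a decision is taken.

raw = [{"type": "login_failure", "count": 3},
       {"type": "heartbeat"},
       {"type": "login_failure", "count": 4}]
print(complex_agent(mediated_agent(simple_agent(raw))))
# [{'type': 'brute_force_suspected', 'total': 7}]
```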
The success of Service-Oriented Architecture (SOA) has created the foundation for information and service sharing across application and organizational boundaries. Through the use of SOA, organizations are demanding solutions that provide vast scalability, increased reusability of business services, and greater efficiency of computing resources. More importantly, organizations need agile architectures that can adapt to rapidly changing business requirements without the long development cycles that are typically associated with these efforts. Event-Driven Architecture (EDA) has emerged to provide more sophisticated capabilities that address these dynamic environments. EDA enables business agility by empowering software engineers with complex processing techniques to develop substantial functionality in days or weeks rather than months or years. As a result, EDA is positioned to enhance the business value of SOA.
The purpose of this white paper is to describe the approach employed to overcome the significant technical challenges involved in designing a dynamic grid computing architecture for a US government program. The program required optimization of the overall business process while maximizing scalability to support dramatic increases in throughput. To realize this goal, an architecture was developed to support the dynamic placement and removal of business services across the enterprise.
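As a rough illustration only (the white paper does not publish its design), here is a minimal Python sketch of the dynamic placement-and-removal idea: a registry that adds or retires service instances in response to load events. The service name and thresholds are invented.

```python
# Sketch, under stated assumptions, of dynamic service placement:
# a registry starts or retires instances of a business service in
# response to load events. Names and thresholds are illustrative.

class ServiceGrid:
    def __init__(self, scale_up_at=0.8, scale_down_at=0.2):
        self.instances = {}              # service name -> instance count
        self.scale_up_at = scale_up_at
        self.scale_down_at = scale_down_at

    def on_load_event(self, service, utilization):
        count = self.instances.get(service, 1)
        if utilization > self.scale_up_at:
            self.instances[service] = count + 1    # place a new instance
        elif utilization < self.scale_down_at and count > 1:
            self.instances[service] = count - 1    # remove an instance

grid = ServiceGrid()
for load in (0.9, 0.95, 0.1):
    grid.on_load_event("order-validation", load)
print(grid.instances)  # {'order-validation': 2}
```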
Multiple channels and types of events…
… executing in multiple Inference Agents (Event Processing Agents on an Event Processing Network)…
… where Events drive Production Rules with associated (shared) data…
… and event patterns (complex events) are derived from the simple events and also drive Production Rules via inferencing…
… to lead to “real-time” decisions.
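A minimal Python sketch of that chain, with invented event types and rule bodies: events assert facts into shared working memory, production rules fire on them, and a derived complex event re-enters the inference cycle to drive a decision.

```python
# Sketch: events drive production rules over shared working memory; a
# derived (complex) event then drives a further rule via inferencing.
# Event types and rule bodies are assumptions for illustration.

working_memory = []

def rule_detect_pattern(memory):
    """Derive a complex event when two 'sensor_spike' events co-occur."""
    spikes = [f for f in memory if f["type"] == "sensor_spike"]
    already = any(f["type"] == "coordinated_spike" for f in memory)
    if len(spikes) >= 2 and not already:
        return {"type": "coordinated_spike"}

def rule_decide(memory):
    """The derived event drives a 'real-time' decision."""
    if any(f["type"] == "coordinated_spike" for f in memory):
        if not any(f["type"] == "decision" for f in memory):
            return {"type": "decision", "action": "throttle_inputs"}

def infer(memory, rules):
    changed = True
    while changed:                       # simple forward-chaining loop
        changed = False
        for rule in rules:
            fact = rule(memory)
            if fact:
                memory.append(fact)
                changed = True

for event in ({"type": "sensor_spike"}, {"type": "sensor_spike"}):
    working_memory.append(event)
infer(working_memory, [rule_detect_pattern, rule_decide])
print([f["type"] for f in working_memory])
# ['sensor_spike', 'sensor_spike', 'coordinated_spike', 'decision']
```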
It is from this operational asymmetry that the need for complexity in event processing arises. In other words, as distributed networks grow in complexity, it becomes difficult to determine causal dependence when trying to diagnose a distributed networked system. Most who work in a large distributed network ecosystem (cyberspace) understand this. The CEP notion of “the event cloud” was an attempt to express this complexity and uncertainty (in cyberspace).
JT has posted his view on rules and decisions and how they relate. Given that James talks more about services than events, I thought it would be worth reviewing his post from both a Complex Event Processing and a TIBCO BusinessEvents event processing platform perspective.
“Decision Services:
Support business processes by making the business decisions that allow a process to continue.
Support event processing systems by adding business decisions to event correlation decisions (they are often called Decision Agents in this context).
Allow crucial and high-maintenance parts of legacy enterprise applications to be externalized for reuse and agility.
Can be plugged into a variety of systems using Enterprise Service Bus approaches.”
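As a sketch of what such a decision service might look like (the credit-limit rule and data shape are my assumptions, not James's): a single callable that a business process, an event correlation agent, or an ESB endpoint could invoke.

```python
# Sketch of a decision service in the sense quoted above: one
# externalized business decision, reusable from a process engine, an
# event agent, or an ESB. The credit-limit rule is illustrative.

def credit_decision_service(customer):
    """Returns approve / refer / decline for the given customer facts."""
    if customer["outstanding"] > customer["credit_limit"]:
        return "decline"
    if customer["outstanding"] > 0.8 * customer["credit_limit"]:
        return "refer"     # route to a human task in the process
    return "approve"

# Called from a business process to let it continue...
print(credit_decision_service({"outstanding": 500, "credit_limit": 1000}))
# ...or from an event correlation agent acting as a "Decision Agent".
```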
As I was reading the “Smart Roads. Smart Bridges. Smart Grids.” feature in the WSJ, I couldn't help but notice all the event processing scenarios and patterns. As you would expect, all the scenarios start with sensors and instrumentation and involve event (instrumentation) transmission, detection and processing. The order, frequency and end actions of transmission, detection and processing vary by scenario. What follows are excerpts from the article in the domains of smart transportation and grids, along with my associated event processing observations.
Some definitions key on the question of the probability of encountering a given condition of a system once characteristics of the system are specified. Warren Weaver has posited that the complexity of a particular system is the degree of difficulty in predicting the properties of the system if the properties of the system’s parts are given. In Weaver’s view, complexity comes in two forms: disorganized complexity, and organized complexity. [2] Weaver’s paper has influenced contemporary thinking about complexity. [3]
Folks who have been in event processing fields like network management (NMS) or security management for many years have very high expectations for processing complex events. Most of the network and security management platforms on the market have basic rule-based processing available “out of the box”, and most of these platforms have had the capability to process events in near real time for decades. Adding a new “rules-based event processing platform” to the network and security management software mix does little to add capability and certainly does not solve any nagging complex detection problems.
- leave anything related to transport and communication to other layers
- use this revised CEP to express and execute event-relevant logic, the purpose of which is to translate the ambient events into relevant business events
- have these business events trigger business processes (however lightweight you want to make them)
- have these business processes invoke decision services implemented through decision management to decide what they should be doing at every step
- have the business processes invoke action services to execute the actions decided by the decision services
- all the while generating business events or ambient events
- etc.
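A minimal Python sketch of that layering, with all names and the logistics example invented for illustration:

```python
# Sketch of the layered pipeline above: ambient events are translated
# into business events, which trigger a lightweight process that calls a
# decision service and then an action service. All names are invented.

def cep_translate(ambient_event):
    """CEP layer: ambient event -> business event (or None)."""
    if ambient_event.get("rfid_reads", 0) == 0:
        return {"type": "shipment_stalled", "dock": ambient_event["dock"]}

def decision_service(business_event):
    """Decision management layer: what should the process do?"""
    return "reroute" if business_event["type"] == "shipment_stalled" else "ignore"

def action_service(action, business_event):
    """Action layer: execute what the decision service chose."""
    print(f"{action} shipment at dock {business_event['dock']}")

def business_process(business_event):
    """Lightweight process: decide at each step, then act."""
    action = decision_service(business_event)
    if action != "ignore":
        action_service(action, business_event)

event = cep_translate({"dock": 7, "rfid_reads": 0})
if event:
    business_process(event)   # prints: reroute shipment at dock 7
```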
L. Stojanovic, S. Sen, J. Ma, and D. Riemer. In Proceedings of the 6th ACM International Conference on Distributed Event-Based Systems, pages 385–386. New York, NY, USA: ACM, 2012.
Y. Xu, N. Stojanovic, L. Stojanovic, and T. Schuchert. In Proceedings of the 6th ACM International Conference on Distributed Event-Based Systems, pages 379–380. New York, NY, USA: ACM, 2012.
J. Llinas, C. Bowman, G. Rogova, A. Steinberg, and F. White. In P. Svensson and J. Schubert (Eds.), Proceedings of the Seventh International Conference on Information Fusion (FUSION 2004), pages 1218–1230. 2004.
W. White, M. Riedewald, J. Gehrke, and A. Demers. In PODS '07: Proceedings of the Twenty-Sixth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, pages 263–272. New York, NY, USA: ACM, 2007.