

Get Decoupled with Inference Rule Engines


The concept of a loosely coupled system is not a new one; it is one of the central tenets of SOA. Each service focuses solely on its service contract responsibilities and leaves the underlying domain expertise to other services. The premise is fairly simple: using a service registry, a service that requires functionality outside its scope looks up the appropriate service to fulfill the request. It does not have to know where that service is located or how it is implemented, as long as the service contract is met and the results come back in a consistent fashion. Loosely coupled services promote reusability, stability, and faster time to deployment.
More than these concepts, however, loose coupling promotes knowledge domains and service autonomy. In a perfectly run IT world where everything ran at the speed of light and resources were not an issue, this would pose no problems. The problem I have with loosely coupled services is the increased granularity of each service: the latency and resources needed to support the propagation of services increase exponentially with the granularity level we architect into each service.
How would these systems work for very large data systems like complex event processors, where gigabyte streams of data need to be analyzed in real time? In such cases loose coupling does not seem ideal for performance and needs to be re-evaluated.
For example, consider a pricing service that prices cellular phone service based on geographic location, demographics, and municipal tax codes. In a loosely coupled system, we would provide a service to get the price based on geography, a discount based on demographics (say, seniors get 10% off), and finally a tax rate based on a regional or municipal code. The marketing department would presumably handle the first two services, while the legal or accounting side would furnish the tax rates.
So we have something like a Customer interface with three attributes: State, City, and Age.
To calculate the price, we need a getBasePrice() service, a getDiscount() service, and finally a getTaxRate() service.
We need to make three separate service calls, passing in the Customer object, to get the facts we need to calculate the correct price. This encapsulates loose coupling perfectly: the calling service does not need to know about marketing campaigns for a geographic region, specials for seniors, or municipal tax codes; it can rely on the Customer service interface contract to get the data it needs and make the necessary calculations to return the final cost to the customer. However, the performance overhead of this loosely coupled system is fairly high. If the service is very busy, the performance implications propagate down to the systems below, and the cost of calling all these systems might not meet the service's SLA requirements.
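As a rough sketch, the loosely coupled version might look like the following, with local stubs standing in for the remote marketing and accounting services. The class and method bodies here are illustrative assumptions, not real service implementations:

```java
// Sketch of the loosely coupled pricing flow: three separate service
// calls per price calculation. The stub methods stand in for remote
// services that would normally sit behind a service registry.
public class PricingClient {
    record Customer(String state, String city, int age) {}

    static double getBasePrice(Customer c) {   // marketing service stub
        return "WA".equals(c.state()) && "Seattle".equals(c.city()) ? 100.00 : 0.0;
    }

    static double getDiscount(Customer c) {    // marketing service stub
        return c.age() >= 65 ? 0.10 : 0.0;     // seniors get 10% off
    }

    static double getTaxRate(Customer c) {     // accounting service stub
        return "WA".equals(c.state()) && "Seattle".equals(c.city()) ? 0.10 : 0.0;
    }

    // The caller knows nothing about campaigns or tax codes, only the
    // three contracts -- but it pays for three round trips per price.
    static double finalPrice(Customer c) {
        double net = getBasePrice(c) * (1 - getDiscount(c));
        return net * (1 + getTaxRate(c));
    }

    public static void main(String[] args) {
        System.out.println(finalPrice(new Customer("WA", "Seattle", 70)));
    }
}
```

Each of the three getters could be swapped for a real remote call without the caller changing, which is exactly the contract-only knowledge loose coupling promises, and exactly where the per-call overhead accumulates.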
So how can an inference rule engine help mitigate these performance and scalability issues while maintaining the loose coupling principles?
An inference engine uses forward chaining and backward chaining to arrive at a logical conclusion. Forward chaining is fairly obvious: it moves from logical statements to a final conclusion, while backward chaining moves from an assumed conclusion back to its logical statements to confirm or reject the assumption. The other principle inference engines embrace is knowledge decoupling and domain expertise.
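The two chaining directions can be illustrated with a toy rule set of simple premise-to-conclusion pairs (the fact names here are hypothetical, and this is a deliberately minimal sketch, not how a production engine works):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy illustration of forward vs. backward chaining over simple
// if-premise-then-conclusion rules.
public class Chaining {
    record Rule(String premise, String conclusion) {}

    static final List<Rule> RULES = List.of(
        new Rule("senior", "discounted"),
        new Rule("discounted", "priceReduced"));

    // Forward chaining: start from the known facts and apply rules
    // until no new conclusion can be derived (a fixpoint).
    static Set<String> forward(Set<String> facts) {
        Set<String> known = new HashSet<>(facts);
        boolean changed = true;
        while (changed) {
            changed = false;
            for (Rule r : RULES)
                if (known.contains(r.premise()) && known.add(r.conclusion()))
                    changed = true;
        }
        return known;
    }

    // Backward chaining: start from the goal and recurse into the
    // premises of any rule that could conclude it.
    static boolean backward(String goal, Set<String> facts) {
        if (facts.contains(goal)) return true;
        for (Rule r : RULES)
            if (r.conclusion().equals(goal) && backward(r.premise(), facts))
                return true;
        return false;
    }

    public static void main(String[] args) {
        Set<String> facts = Set.of("senior");
        System.out.println(forward(facts).contains("priceReduced")); // true
        System.out.println(backward("priceReduced", facts));         // true
    }
}
```

Both directions reach the same conclusion; forward chaining derives everything the facts imply, while backward chaining only explores the rules relevant to the goal.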
To revisit our pricing example, each pricing rule can be written by a separate entity with expertise in its own domain, and taking all these rules together we can infer the correct price. I am using the popular Drools rule engine from JBoss to illustrate this. Since all rules can run within the same execution process space, it is also very fast, without the overhead of maintaining a separate service container or the latency of multiple network hops.
Our pricing knowledge base would then contain four rules, each written by its respective authority and then executed together:
Pricing Rule
Rule A: When Price.Base > 0 or Price.Discount > 0 or Price.Tax > 0 Then Set Price.Net = Price.Base – (Price.Base * Price.Discount), Set Price.Final = Price.Net + (Price.Net * Price.Tax)
Marketing (Geographical and Demographic) Rules
Rule B: When Customer.State = “WA” and Customer.City = “Seattle” Then Set Price.Base = 100.00
Rule C: When Customer.Age >= 65 Then Set Price.Discount = 0.10
Tax Rules (Accounting)
Rule D: When Customer.State = “WA” and Customer.City = “Seattle” Then Set Price.Tax = 0.10
The final result is the pricing rule. The question many will ask is: how do we guarantee the rules are executed in an order that yields the correct result? The last three rules can be executed in any order, but the first rule depends on all three of them to compute the correct price. This is where the forward and backward chaining algorithms of an inference engine like Drools come into play. The final inference result is always guaranteed, because executing any rule that modifies the facts re-activates the rules whose conditions depend on those facts, causing rules to fire in forward or backward chaining order. Consider this extreme sequence:
Step 1 – Rule A is evaluated but not fired, since the initial base price is zero.
Step 2 – Rule B is evaluated and fired; this also activates Rule A (marks it to be fired) because our Price attributes have changed.
Step 3 – Rule A is evaluated again and fired, but the price is not yet correct since no discount or tax has been set.
Step 4 – Rule C is evaluated and fired; this again activates Rule A because our Price attributes have changed.
Step 5 – Rule A is evaluated again and fired, but the price is still not correct since no tax has been set.
Step 6 – Rule D is evaluated and fired; this again activates Rule A because our Price attributes have changed.
Step 7 – Rule A is evaluated again and fired, and our final price is finally inferred: $99.00 ((100 – (100 × 0.10)) × 1.10).
You can run this through any combination of steps, and the final step we arrive at will always be the equivalent of Step 7. The inference engine finishes execution when there are no rules left to fire; the whole inference concept is that one rule can cause other rules to fire, in a forward or backward chain, when a conditional attribute of a rule changes. In actual execution there are optional hints and flows we can use for optimization, but the point is that we can add to the rule knowledge base in our inference engine in a decoupled manner without worrying about how the final calculation is arrived at.
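The fixpoint behavior of the step sequence above can be sketched with a naive forward-chaining loop. This is an illustrative toy, not how Drools is implemented (Drools uses the Rete pattern-matching algorithm to avoid re-checking every rule on every change), and the class and field names are assumptions for this sketch:

```java
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.BiPredicate;

// Toy forward-chaining engine for the four pricing rules: keep firing
// eligible rules until no action changes the Price facts (a fixpoint).
public class PricingEngine {
    static class Customer { String state; String city; int age; }
    static class Price { double base, discount, tax, net, fin; }

    record Rule(String name, BiPredicate<Customer, Price> cond,
                BiConsumer<Customer, Price> act) {}

    static final List<Rule> RULES = List.of(
        // Rule A: derive net and final price once any component is known
        new Rule("A", (c, p) -> p.base > 0 || p.discount > 0 || p.tax > 0,
                      (c, p) -> { p.net = p.base - p.base * p.discount;
                                  p.fin = p.net + p.net * p.tax; }),
        // Rule B: marketing sets the base price for Seattle, WA
        new Rule("B", (c, p) -> "WA".equals(c.state) && "Seattle".equals(c.city),
                      (c, p) -> p.base = 100.00),
        // Rule C: seniors get a 10% discount
        new Rule("C", (c, p) -> c.age >= 65, (c, p) -> p.discount = 0.10),
        // Rule D: accounting sets the municipal tax rate
        new Rule("D", (c, p) -> "WA".equals(c.state) && "Seattle".equals(c.city),
                      (c, p) -> p.tax = 0.10));

    static double infer(Customer c) {
        Price p = new Price();
        boolean changed = true;
        while (changed) {                 // loop until a pass changes nothing
            changed = false;
            for (Rule r : RULES) {
                if (!r.cond().test(c, p)) continue;
                double[] before = { p.base, p.discount, p.tax, p.net, p.fin };
                r.act().accept(c, p);
                double[] after = { p.base, p.discount, p.tax, p.net, p.fin };
                if (!java.util.Arrays.equals(before, after)) changed = true;
            }
        }
        return p.fin;
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        c.state = "WA"; c.city = "Seattle"; c.age = 70;
        System.out.println(infer(c)); // ~99.0, as in Step 7 above
    }
}
```

Note that Rule A fires several times with intermediate values before the other rules have all contributed their facts; the loop only terminates once a full pass leaves the Price facts unchanged, which is the fixpoint the step sequence walks through.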
Each entity writes rules based on its domain of expertise, and the knowledge base taken as a whole is used to arrive at the result we want. If the tax rate changes in a municipality, or a new discount takes effect, those rules can be updated in isolation by their respective domain experts without propagating that knowledge to the other entities (loose coupling). The rules simply execute and arrive at the correct result.
Rule engines are extremely efficient, using pattern-matching and search algorithms, and since most rule knowledge bases execute within the same process space they are extremely fast as well. In this case we get all the benefits of a loosely coupled system with the performance and efficiency of a tightly coupled one.
