When it comes to digital style guides, a giant PDF alone will no longer cut it. They need to be interactive, with explanations and assets. It’s also important to make them a tool for use during the creative and development process, not just a deliverable at the end. Defining the guide from the beginning ensures consistency in the initial development and provides a path and assistance for continued development.
The following are some of the best examples of interactive style guides on the web today:
The number one example of a digital style guide and asset library.
The concept of a loosely coupled system is not a new one; it is one of the central tenets of SOA. Each service focuses solely on its service contract responsibilities and leaves underlying domain expertise to other services. The premise is fairly simple: using a service registry, a service that requires functionality outside its scope looks up the appropriate service to fulfill the request. It does not need to know where that service is located or how it is implemented, as long as the service meets its contract requirements and returns results in a consistent fashion. Loosely coupled services promote reusability, stability, and faster time to deployment.
More than these concepts, though, loose coupling promotes knowledge domains and service autonomy. In a perfectly run IT world where everything ran at the speed of light and resources were not an issue, this would pose no problems. The problem I have with loosely coupled services is the increased granularity of each service: the latency and resources needed to support the propagation of services increase exponentially with the granularity levels we architect for each service.
How would these systems work for very large data systems like complex event processors, where gigabyte streams of data need to be analyzed in real time? In such cases, loose coupling seems far from ideal for performance and needs to be re-evaluated.
For example, consider a pricing service that prices cellular phone service based on geographical location, demographics, and municipal tax codes. In a loosely coupled system, we would provide a service to get the price based on geography, a discount based on demographics (let’s say seniors get 10% off), and finally a tax rate based on a regional or municipal code. I would assume the marketing department would handle the first two services, while the legal or accounting side would furnish the tax rates.
So we have something like a Customer interface with three attributes: State, City, and Age.
And to calculate the price, we need a getBasePrice() service, a getDiscount() service, and finally a getTaxRate() service.
We need to make three separate service calls, passing in the Customer object, to get the facts we need to calculate the correct price. This encapsulates loose coupling perfectly: the calling service does not need to know about any marketing campaigns for a geographical region, any specials for seniors, or any municipal tax codes. It can rely on the Customer service interface contract to get the data it needs and make the necessary calculations to return the final cost to the customer. However, the performance overhead of this loosely coupled system is fairly high. If the service is very busy, the performance implications propagate down to the systems below, and the cost of calling all these systems might not meet the SLA requirements of this service.
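The three-call composition above can be sketched in Python. The service bodies here are hypothetical stand-ins (the function names mirror the getBasePrice/getDiscount/getTaxRate services in the text); in a real SOA each call would be a network round trip to a separately owned service.

```python
from dataclasses import dataclass

@dataclass
class Customer:
    state: str
    city: str
    age: int

def get_base_price(customer):
    # Marketing: base price by geography (illustrative values)
    return 100.00 if (customer.state, customer.city) == ("WA", "Seattle") else 0.0

def get_discount(customer):
    # Marketing: seniors get 10% off
    return 0.10 if customer.age >= 65 else 0.0

def get_tax_rate(customer):
    # Accounting: municipal tax rate lookup
    return 0.10 if (customer.state, customer.city) == ("WA", "Seattle") else 0.0

def calculate_price(customer):
    base = get_base_price(customer)    # service call 1
    discount = get_discount(customer)  # service call 2
    tax = get_tax_rate(customer)       # service call 3
    net = base - base * discount
    return net + net * tax

print(calculate_price(Customer("WA", "Seattle", 70)))  # 99.0
```

Each call is cheap in-process, but as remote hops the latency adds up, which is exactly the overhead the next section addresses.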
So how can an inference rule engine help mitigate these performance and scalability issues while maintaining the loose coupling principles?
An inference engine uses forward chaining and backward chaining to arrive at a logical conclusion. Forward chaining is fairly intuitive – it moves from logical statements to a final conclusion – while backward chaining moves from an assumed conclusion back to its logical statements to confirm or reject its assumptions. The other principle inference engines embrace is the concept of knowledge decoupling and domain expertise.
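The two chaining directions can be illustrated with a toy rule base in Python (hypothetical rules and helper names, not Drools syntax). Each rule says: if all premises hold, conclude the consequent.

```python
# Rule base: ({premises}, conclusion)
rules = [
    ({"raining"}, "ground_wet"),
    ({"ground_wet"}, "slippery"),
]

def forward_chain(facts, rules):
    """Start from known facts and keep applying rules until nothing new fires."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Start from an assumed conclusion and work back to supporting facts."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts, rules) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain({"raining"}, rules))               # derives ground_wet and slippery
print(backward_chain("slippery", {"raining"}, rules))  # True
```

Forward chaining sweeps from facts to conclusions; backward chaining recursively asks whether the goal's premises can be established.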
To revisit our pricing example, each pricing rule can be written by a separate entity with expertise in its own domain, and taking all these rules together we can infer, or arrive at, the correct price. I am using the popular Drools (http://www.jboss.org/drools/) rule engine from JBoss to illustrate this. Since all rules run within the same execution process space, it is also very fast, without the overhead of maintaining a separate service container or the latency of multiple network hops.
Our pricing logic would then contain four rules, which can be written by their respective authorities and executed together:
Rule A: When Price.Base > 0 or Price.Discount > 0 or Price.Tax > 0 Then Set Price.Net = Price.Base - (Price.Base * Price.Discount) , Set Price.Final = Price.Net + (Price.Net * Price.Tax)
Marketing (Geographical and Demographic) Rules
Rule B: When Customer.State = “WA” and Customer.City = “Seattle” Then Set Price.Base = 100.00
Rule C: When Customer.Age >= 65 Then Set Price.Discount = 0.10
Tax Rules (Accounting)
Rule D: When Customer.State = “WA” and Customer.City = “Seattle” Then Set Price.Tax = 0.10
The final result is the pricing rule. The question many will ask is: how do we guarantee the rules are executed in an order that produces the correct result? The last three rules can execute in any order, but the first rule depends on the last three to compute the correct price. This is where the backward and forward chaining algorithms of an inference rule engine like Drools come into play. The final inference result is always guaranteed because any rule execution that modifies working memory re-activates the rules whose conditions depend on the changed facts, causing rules to fire in backward or forward chaining order. Consider this extreme sequence:
Step 1 – Rule A is evaluated but not fired, since the initial base price is zero.
Step 2 – Rule B is evaluated and fired. This also causes Rule A to be activated (marked to be fired), because our Price attribute has changed.
Step 3 – Rule A is evaluated again and fired; the final price is now $100, but the discount and tax have not yet been applied.
Step 4 – Rule C is evaluated and fired. This again causes Rule A to be activated (marked to be fired), because our Price attribute has changed.
Step 5 – Rule A is evaluated again and fired; the final price is now $90, but the tax has not yet been applied.
Step 6 – Rule D is evaluated and fired. This again causes Rule A to be activated (marked to be fired), because our Price attribute has changed.
Step 7 – Rule A is evaluated again and fired, and our final price is finally inferred: $99 ((100 – (100 × 0.1)) × 1.1).
You can run this through any combination of steps, and the final step we arrive at will always be Step 7 in our example. The inference engine finishes execution when there are no more rules left to fire. The whole inference concept is that one rule can cause other rules to fire in a forward or backward chain when the conditional attribute of a rule changes. Of course, in actual execution there are optional hints and flows we can use for optimization, but the point is that we can add to the rule knowledge base in our inference engine in a decoupled manner without worrying about how the final calculation is arrived at.
Each entity can write rules based on its domain of expertise, and the entire knowledge base taken as a whole is used to arrive at the result we want. If the tax rate changes in a municipality or a new discount takes effect in our example, these rules can be written in isolation by their respective domain experts without having to propagate that knowledge down to the other entities (loose coupling). The rules simply execute and arrive at the correct result.
Rule engines are extremely efficient, using pattern-matching algorithms such as Rete, and since most rule knowledge bases execute within the same process space, they are extremely fast as well. In this case we get all the benefits of a loosely coupled system with the performance and efficiency of a tightly coupled one.
Earlier this year, Google launched the +Project Glass contest, offering a unique opportunity to experience Google Glass in person. I entered the contest as well, and my submissions can be found here. Approximately eight thousand winners were selected, and I was not one of them.
Recently, I received an email from “Glass Support” with an invitation to become a Glass Explorer! See the email below:
I have not yet decided whether I will join the program and purchase the Glass Developer Kit, which carries a price tag of $1,500. I am still trying to figure out the business value of Google Glass; from an enterprise IT perspective, I’m not sure how we can use Glass currently. In addition, I don’t think any of our largest partners – IBM, Microsoft, Oracle, etc. – are developing for Glass.
Since I am a technology enthusiast, I would like to get Glass for personal use anyway; it would make for nice blog posts sharing my experiences, especially how wearable tech is transforming the user experience.
I would like to hear your opinion – should I invest in Google Glass? Why or why not?
When I was a kid, I knew Santa was real because his reindeer left footprints, he ate all of our cookies, and the wrapping paper was different than mom’s. But as technology advances, so does the quality of evidence that Santa really does exist.
Here are 5 ways to show your kids that Santa is real:
Magic Santa is an online application that allows you to customize a message from Santa based on your child’s name, age, Christmas wish and personality. It also incorporates family pictures into the message and is completely free. It’s very well done and any child would love to receive one of these messages from the big guy. View an example here.
ReindeerCam allows you to watch the reindeer in their natural environment as they prepare for the big day. As their caretaker, Santa pops in every now and then to pay them a visit. Follow the link to download the app for iOS, Android and Windows Phone.
A lot has been written about potential cost savings for SaaS as utility computing. SaaS has a subscription cost model versus a large upfront capital investment. However, there is another interesting aspect of moving to a SaaS model – the reduction of risk. And, the reduction of risk should also influence your investment decision.
Let’s take a CRM or ERP implementation as an example. An on-premise solution requires a large software purchase followed by a lengthy implementation. What’s really amazing is that for many years, decades really, we have accepted all the risk of buying software that might not work. I cannot think of another item we purchase on these terms.
Consider the following typical software license terms:
It’s no wonder we end up with so much shelfware. There is really no remedy for buying packaged software that is buggy or not fit for purpose. Then there are, of course, project implementation risks, including large cost overruns. Now contrast that with a subscription model, where if you don’t like the software you simply stop paying for it. SaaS agreements often commit to availability and protection of data as well.
There are also many risk-reducing options for SaaS rollouts, including free trials and limiting the subscriber count and term until you are happy with the software – e.g., a pilot project.
Not all software solutions are available as SaaS (yet), but take a look at the SaaS software agreements for negotiation ideas.
Or, All I Really Need to Know About Software Development I Learned in Kindergarten. A visual blog post on the software development lifecycle made up of an actual kanban wall in an actual project war room.
1. Analysis and Design
Don’t forget the Golden Rule!
All together now!
Special thanks to the amazing Pam Rostal who built this classroom, er, I mean war room!
Apple has always been known as a consumer-oriented organization. Its last business- or enterprise-focused product, the Apple Xserve rack server, was discontinued at the beginning of 2011. However, a funny thing happened on the way to consumer electronics dominance: Apple became relevant to business. The evidence is never clearer than in the dominance of the iPad as a lightweight business tool. The iPad, and tablets in general, fit executive and manager work styles. How often have you been in meetings where participants bring out iPads (or a Samsung Galaxy Tab 10.1) to fire up a spreadsheet or a business intelligence dashboard to emphasize a point?
Just as smartphone “phablets” have become popular with consumers who want more web data squeezed onto the screen (does anyone use a smartphone to make calls anymore?), vendors are looking to target business users with larger tablet sizes. Both Samsung and Apple are rumored to be releasing 12-inch tablets, with Samsung debuting the Galaxy Note 12 (with its S Pen stylus) in early 2014 and Apple a 12-inch iPad Air. With notebook sales dropping from 13.8 million in 2012 to a reported 13 million this year (2013), tablets have become the mobile go-to device for business executives and managers, and slightly larger tablets with a soft keyboard cover will cause even more notebook users to make the switch. Along with the rise of tablets in business, spending on mobile application development was projected to grow by 50% in 2013, to nearly 2% of total IT expenditure. This spending is strictly software development, i.e. developing new mobile applications and making existing enterprise applications mobile-friendly, and does not take into account the purchase of mobile hardware (tablets and smartphones).
I have been following the rollout of the federal government’s HealthCare.gov website and the subsequent healthcare exchanges. I have been reading many articles outlining the challenges the team has faced with such a massive implementation in a limited timeframe. There are many lessons to be learned from the HealthCare.gov story, but I would like to share three takeaways that struck me as important for EVERY software deployment, no matter how big or small.
It would appear from statements made by both HealthCare.gov contractors and the Secretary of Health that there were a number of issues that should have either held back the deployment of the website or prompted a reduction in scope, and possibly the addition of team members to the project.
This reminds me of a simple project management quote:
“We can make it good, fast, or cheap. Pick two.”
Expert project managers know that very few real-life projects stay on track throughout the entire project cycle. A good project manager also understands how to make all three project constraints adjust to each other in order to maintain project quality. Some of the methods to keep projects within constraints are purely political: preventing stakeholders from changing the scope and maintaining boundaries around financial and human resources. Other solutions require classic project management techniques: keeping team members focused and adjusting milestones when necessary.
Melody Smith Jones, project manager at Perficient, is visiting her hometown of Cincinnati today and wrote a blog post on Procter and Gamble and the change in the digital marketing landscape.
It is these two things my city is known for, P&G and digital marketing, that bring me to my next point: digital marketing is dead. In fact, Marc Pritchard, Global Branding Officer for Procter & Gamble, was recently quoted as saying:
The era of digital marketing is over. It’s almost dead.
To read Melody’s full blog post, click here.